journal-title: stringclasses (191 values)
pmid: stringlengths (8 to 8)
pmc: stringlengths (10 to 11)
doi: stringlengths (12 to 31)
article-title: stringlengths (11 to 423)
abstract: stringlengths (18 to 3.69k)
related-work: stringlengths (12 to 84k)
references: sequencelengths (0 to 206)
reference_info: listlengths (0 to 192)
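The schema above can be explored programmatically; below is a minimal sketch assuming the corpus is published as a Hugging Face dataset, where the repository path "user/related-work-corpus" is a placeholder rather than the real identifier.

```python
from datasets import load_dataset

# "user/related-work-corpus" is a hypothetical placeholder path
ds = load_dataset("user/related-work-corpus", split="train")
record = ds[0]

print(record["journal-title"], record["pmid"], record["doi"])
print(record["article-title"])
print(len(record["references"]), "cited PMIDs and",
      len(record["reference_info"]), "reference entries")
```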
Frontiers in Neurorobotics
30344486
PMC6182048
10.3389/fnbot.2018.00064
Faster R-CNN for Robust Pedestrian Detection Using Semantic Segmentation Network
Convolutional neural networks (CNNs) have enabled significant improvements in pedestrian detection owing to the strong representation ability of CNN features. However, it is generally difficult to reduce false positives on hard negative samples such as tree leaves, traffic lights, and poles. Some of these hard negatives can be removed by making use of high-level semantic vision cues. In this paper, we propose a region-based CNN method that exploits semantic cues for better pedestrian detection. Our method extends the Faster R-CNN detection framework by adding a network branch for semantic image segmentation. The semantic network computes complementary higher-level semantic features that are integrated with the convolutional features. We use multi-resolution feature maps extracted from different network layers to ensure good detection accuracy for pedestrians at different scales. A boosted forest is trained on the integrated features in a cascaded manner for hard negative mining. Experiments on the Caltech pedestrian dataset show that the semantic network improves detection accuracy. With the deep VGG16 model, our pedestrian detection method achieves robust detection performance on the Caltech dataset.
2. Related work

2.1. Hand-engineered feature based pedestrian detectors

Histogram of Oriented Gradients (HOG) (Dalal and Triggs, 2005) based detectors using a multi-scale sliding window mechanism have long been the dominant approach for pedestrian detection. While no single hand-crafted feature has been shown to outperform HOG, combinations of HOG with other feature descriptors capturing different visual cues have resulted in higher accuracy, i.e., a lower false-positive rate at a high true-positive rate. For example, in Wang et al. (2009), a texture descriptor based on local binary patterns (LBP) (Ojala et al., 2002) was combined with HOG to overcome the problem of partial occlusions. HOG descriptors are used together with LUV color features in the form of integral channel features (ICF) in Dollár et al. (2009a). The ICF detector is computationally faster than HOG because it uses integral images over feature channels. Aggregated channel features (ACF) (Dollár et al., 2014) approximate multi-scale gradients using nearby scales, yielding a very fast feature pyramid for real-time multi-scale detection. Checkerboards (Zhang et al., 2015) is a generalization of ICF that filters the HOG+LUV feature channels before feeding them into a boosted decision forest.

2.2. Region-CNN based pedestrian detection methods

Apart from the dense detection framework using a sliding-window scheme, such as the HOG detector (Dalal and Triggs, 2005) and its variants (Wang et al., 2009; Felzenszwalb et al., 2010; Yan et al., 2014; Pedersoli et al., 2015), there is another family of detection methods that uses an "attention" mechanism and is referred to as region-based detection (Girshick et al., 2014; Uijlings et al., 2013; Girshick, 2015; Jian et al., 2015, 2017). These methods propose a number of high-potential pedestrian candidate regions, far fewer than the windows examined by sliding-window methods. Classification is then focused on the proposal regions, which is more cost-efficient. Region-based convolutional neural networks (R-CNN) (Girshick et al., 2014) is a representative region-based detection method using deep neural network (DNN) features. The initial version of the R-CNN detector uses the selective search approach (Uijlings et al., 2013) for region proposal. Despite being accurate, R-CNN is too slow for real-time applications even with high-end hardware. Faster R-CNN (Ren et al., 2015) improves on R-CNN by replacing selective search (Uijlings et al., 2013) with a built-in network that directly generates proposals. This sub-network, referred to as the region proposal network (RPN), is integrated with Fast R-CNN (Girshick, 2015) to pool candidate object bounding boxes with features extracted using region of interest (RoI) pooling. Although Faster R-CNN has been particularly successful for generic object detection, its results for pedestrian detection are not satisfying on the pedestrian benchmark (Dollár et al., 2009b). The anchors used in Ren et al. (2015) for generic object detection have multiple aspect ratios, which may not be suitable for pedestrian detection. Anchors of inappropriate aspect ratios induce false detections and harm detection accuracy. In Zhang et al. (2016a), the anchors are tailored to a single aspect ratio over a wider range of scales to suit pedestrian detection, and this approach achieves promising results on the Caltech dataset.
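To make the anchor discussion concrete, the following sketch generates single-aspect-ratio anchors over a feature map in the spirit of Zhang et al. (2016a); the aspect ratio, anchor heights, and stride are illustrative assumptions rather than the exact values tuned in the cited work.

```python
import numpy as np

def pedestrian_anchors(feat_h, feat_w, stride=16,
                       heights=(40, 64, 100, 160, 256), aspect=0.41):
    """Generate anchors with a single width/height aspect ratio per feature-map cell.

    The stride, anchor heights, and aspect ratio are illustrative choices,
    not the values tuned on Caltech in the cited papers.
    """
    # map each feature-map cell center back to image coordinates
    cx = (np.arange(feat_w) + 0.5) * stride
    cy = (np.arange(feat_h) + 0.5) * stride
    cx, cy = np.meshgrid(cx, cy)                       # both (feat_h, feat_w)

    anchors = []
    for h in heights:
        w = aspect * h                                 # single aspect ratio
        boxes = np.stack([cx - w / 2, cy - h / 2,      # x1, y1
                          cx + w / 2, cy + h / 2],     # x2, y2
                         axis=-1)
        anchors.append(boxes.reshape(-1, 4))
    return np.concatenate(anchors, axis=0)             # one box per (cell, height)

print(pedestrian_anchors(38, 63).shape)                # (11970, 4) for a 608x1008 image
```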
2.3. Semantic image segmentation

Semantic image segmentation, also referred to as semantic image labeling, aims to assign every pixel of an image an object class label, which challengingly combines image segmentation and object recognition in a single process. Before DNNs became successful at semantic image segmentation, the dominant approaches were Random Forest (RF) based classifiers (Shotton et al., 2008; Yao et al., 2012; Liu and Chan, 2015). Earlier DNN-based semantic segmentation approaches (Ciresan et al., 2012) performed classification on image patches: each pixel was individually classified into a category using a fixed-size image patch surrounding that pixel. Patches were used because deep classification networks usually contain fully connected layers, which require fixed-size inputs. In 2015, Fully Convolutional Networks (Long et al., 2015) popularized CNN architectures for dense prediction without any fully connected layers. This allowed segmentation to be performed on a whole image of arbitrary size and also sped up the segmentation process compared to patch-based approaches.

For semantic segmentation, pooling layers help classification networks because they enlarge the receptive fields, but they also decrease the spatial resolution. The "encoder-decoder" architecture was proposed for semantic segmentation (Ronneberger et al., 2015; Badrinarayanan et al., 2015; Noh et al., 2015) to recover the spatial dimension: the encoder gradually reduces the spatial dimension with pooling layers, while the decoder recovers it. SegNet (Badrinarayanan et al., 2015) is such an encoder-decoder deep architecture for pixel-wise semantic labeling. The network consists of a convolutional network (referred to as the encoder network) and an up-scaling network (referred to as the decoder network), followed by a classification layer. The decoder upsamples its input using the max-pooling indices recorded by the encoder, so the feature maps obtained from the upsampling process are sparse. For dense image labeling, SegNet converts these sparse feature maps into dense ones by convolving them with trainable filters (Badrinarayanan et al., 2015). As reported, SegNet provides competitive performance with less memory compared to other state-of-the-art deep semantic segmentation methods (Eigen and Fergus, 2015; Long et al., 2015; Noh et al., 2015).
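The following PyTorch sketch illustrates the SegNet-style decoder step described above, in which max-pooling indices recorded by the encoder drive the upsampling and a trainable convolution densifies the resulting sparse maps; the layer sizes are illustrative and do not reproduce SegNet's actual architecture.

```python
import torch
import torch.nn as nn

class TinySegNetBlock(nn.Module):
    """One encoder/decoder stage illustrating SegNet-style unpooling.

    The decoder reuses the max-pooling indices recorded by the encoder to place
    activations back at their original locations (non-linear upsampling); the
    resulting sparse maps are densified by a trainable convolution before the
    pixel-wise classification layer. Channel counts are illustrative only.
    """
    def __init__(self, in_ch=3, mid_ch=16, num_classes=2):
        super().__init__()
        self.enc_conv = nn.Conv2d(in_ch, mid_ch, 3, padding=1)
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec_conv = nn.Conv2d(mid_ch, mid_ch, 3, padding=1)   # densify sparse maps
        self.classifier = nn.Conv2d(mid_ch, num_classes, 1)       # per-pixel class scores

    def forward(self, x):
        x = torch.relu(self.enc_conv(x))
        x, indices = self.pool(x)              # remember where the maxima were
        x = self.unpool(x, indices)            # sparse, upsampled feature maps
        x = torch.relu(self.dec_conv(x))       # dense feature maps
        return self.classifier(x)

logits = TinySegNetBlock()(torch.randn(1, 3, 64, 64))
print(logits.shape)                            # torch.Size([1, 2, 64, 64])
```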
[ "28060704", "28463186", "26353336", "20634557", "26054066", "25291809", "22813957" ]
[ { "pmid": "28060704", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.", "abstract": "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet." }, { "pmid": "28463186", "title": "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.", "abstract": "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. 
We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \"DeepLab\" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online." }, { "pmid": "26353336", "title": "Fast Feature Pyramids for Object Detection.", "abstract": "Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures)." }, { "pmid": "20634557", "title": "Object detection with discriminatively trained part-based models.", "abstract": "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function." }, { "pmid": "26054066", "title": "Fast image interpolation via random forests.", "abstract": "This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy, as well as requires low computation. 
The underlying idea of this proposed work is to apply random forests to classify the natural image patch space into numerous subspaces and learn a linear regression model for each subspace to map the low-resolution image patch to high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results as NARM while only takes its 0.3% computational time." }, { "pmid": "25291809", "title": "Visual-Patch-Attention-Aware Saliency Detection.", "abstract": "The human visual system (HVS) can reliably perceive salient objects in an image, but, it remains a challenge to computationally model the process of detecting salient objects without prior knowledge of the image contents. This paper proposes a visual-attention-aware model to mimic the HVS for salient-object detection. The informative and directional patches can be seen as visual stimuli, and used as neuronal cues for humans to interpret and detect salient objects. In order to simulate this process, two typical patches are extracted individually and in parallel from the intensity channel and the discriminant color channel, respectively, as the primitives. In our algorithm, an improved wavelet-based salient-patch detector is used to extract the visually informative patches. In addition, as humans are sensitive to orientation features, and as directional patches are reliable cues, we also propose a method for extracting directional patches. These two different types of patches are then combined to form the most important patches, which are called preferential patches and are considered as the visual stimuli applied to the HVS for salient-object detection. Compared with the state-of-the-art methods for salient-object detection, experimental results using publicly available datasets show that our produced algorithm is reliable and effective." }, { "pmid": "22813957", "title": "Layered object models for image segmentation.", "abstract": "We formulate a layered model for object detection and image segmentation. We describe a generative probabilistic model that composites the output of a bank of object detectors in order to define shape masks and explain the appearance, depth ordering, and labels of all pixels in an image. Notably, our system estimates both class labels and object instance labels. Building on previous benchmark criteria for object detection and image segmentation, we define a novel score that evaluates both class and instance segmentation. We evaluate our system on the PASCAL 2009 and 2010 segmentation challenge data sets and show good test results with state-of-the-art performance in several categories, including segmenting humans." } ]
Chemical Science
30393525
PMC6182568
10.1039/c8sc02239a
Chimera: enabling hierarchy based multi-objective optimization for self-driving laboratories
Chimera enables multi-target optimization for experimentation or expensive computations, where evaluations are the limiting factor.
Background and related work

Multi-objective (Pareto) optimization is concerned with the simultaneous optimization of a set of objective functions, {f_k}_{k=0}^{n−1}, where each of the objective functions, f_k, is defined on the same compact parameter space.25 Objectives of interest in the context of chemistry could be, for example, the yield of a reaction and its execution time. Although the desired goal of an optimization procedure is to find a point in parameter space for which each of the objectives f_k(x*) assumes its desired optimal value (e.g. minimum/maximum), objectives in multi-objective optimization problems oftentimes conflict with each other. Indeed, improving on one objective could imply an unavoidable degradation in other objectives as, for instance, shorter execution times could cause a drop in yield. As a consequence, a single global solution cannot be defined for the generic multi-objective optimization problem. This challenge is illustrated in Fig. 1A, where a set of three objective functions with global minima at different locations is presented.

Fig. 1: Example for the construction of Chimera from three one-dimensional objective functions. Panel (A) illustrates the three objective functions, f_0, f_1 and f_2, in order of the hierarchy. For constructing Chimera, each objective is considered only in the parameter region where higher-level objectives satisfy the tolerances (dashed lines). Solid lines indicate the upper objective bound in the region of interest used as a reference for the tolerance on the considered objective. The objective functions considered in different parameter regions for this example are illustrated in A.IV. Panel (B) shows the construction of Chimera for the considered objective. The discrete variant of Chimera (black, panel B.II) is constructed using eqn (2); smooth variants (green, panel B.III) are obtained by substituting eqn (2) with eqn (6) for different smoothing parameter values, where lighter traces correspond to larger parameter values. Panel (C) shows pseudo code for the conceptual implementation of Chimera. Panel (D) gives the analytic expression for the discrete Chimera variant constructed from three objective functions.

Defining and identifying solutions to multi-objective optimization problems

A commonly used criterion for determining solutions to multi-objective optimization problems is Pareto optimality.26 A point is called Pareto optimal if and only if there exists no other point such that all objectives are improved simultaneously. Therefore, deviating from a Pareto optimal point always implies a degradation in at least one of the objectives. Relating to the previous example, this corresponds to a scenario in which the execution time cannot be improved any further without a degradation of the reaction yield. As Pareto optimal points cannot be collectively improved in two or more objectives, solving a multi-objective optimization problem translates to finding Pareto optimal points. Note that for a given multi-objective optimization problem, multiple Pareto optimal points can coexist.27
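A minimal sketch of the Pareto optimality criterion described above, assuming all objectives are to be minimized; the function names and the toy yield/execution-time values are ours, for illustration only.

```python
import numpy as np

def is_dominated(y, others):
    """y is dominated if another point is no worse in every objective and
    strictly better in at least one (all objectives minimized)."""
    return bool(np.any(np.all(others <= y, axis=1) & np.any(others < y, axis=1)))

def pareto_front(Y):
    """Indices of the Pareto optimal rows of an (n_points, n_objectives) array."""
    return [i for i in range(len(Y)) if not is_dominated(Y[i], np.delete(Y, i, axis=0))]

# hypothetical trade-off points: (negative yield, execution time in minutes)
Y = np.array([[-0.90, 60.0],   # high yield, slow
              [-0.70, 20.0],   # moderate yield, fast
              [-0.60, 40.0]])  # dominated by the second point
print(pareto_front(Y))         # [0, 1]
```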
Typically, approaches to solving multi-objective optimization problems aim to assist a decision maker in identifying the favored solution from the set of Pareto optimal solutions (the Pareto front). The favored solution is determined from preference information regarding the objectives provided by the decision maker. Methods for multi-objective optimization can be divided into two major classes. A posteriori methods aim to discover the entire Pareto front, such that preferences regarding the objectives can be expressed knowing which objective values are achievable. This relates to knowing by how much the execution time needs to be increased to achieve a desired increase in the reaction yield. A priori methods instead require preference information prior to starting the optimization procedure. As such, a priori methods can be targeted more specifically towards the desired goal and thus reduce the necessary number of objective evaluations if reasonable preference information is provided. A posteriori methods are commonly realized as mathematical programming approaches, such as Normal Boundary Intersection,28,29 Normal Constraint,30,31 or Successive Pareto Optimization,32 which repeat algorithms for finding Pareto optimal solutions. Another strategy relies on evolutionary algorithms such as the Non-dominated Sorting Genetic Algorithm-II,33 or the Sub-population Algorithm based on Novelty,34 where a single run of the algorithm produces a set of Pareto optimal solutions. Recently, a posteriori methods have also been developed following Bayesian approaches for optimization.35–39 However, determining the preferred Pareto point from the entire Pareto front requires a substantial number of objective function evaluations compared to scenarios in which only a subset of the Pareto front is of interest. Such scenarios can be found in the context of experimental design, where preferences regarding objectives like yield and execution time are available prior to the optimization procedure. As such, a priori methods appear to be better suited for multi-objective optimization in the context of designing experiments, as they keep the number of objective evaluations to a minimum.

A common a priori approach for expressing preferences in multi-objective optimization is to formulate a single cumulative function from a combination of the set of objectives which accounts for the expressed preferences (see Fig. 1B). For example, instead of considering the yield and the execution time of a reaction independently, a single objective can be constructed from a combination of simultaneous observations of the yield and the execution time. Such cumulative functions are referred to as achievement scalarizing functions (ASFs). The premise of the constructed ASF is that its optimal solution coincides with the preferred Pareto optimal solution of the multi-objective optimization problem. Typically, ASFs are constructed with a set of parameters which account for the expressed preferences regarding the individual objectives. ASFs can be constructed via, for example, weighted sums or weighted products of the objectives. In such approaches, the ASF is computed by summing up each objective function f_k multiplied by a pre-defined weight w_k accounting for the user preferences, as sketched below. Multiple formulations of weighted sums and products exist,40 and methods have been developed to learn these weights adaptively.41 Weighted approaches are usually simple to implement, but the challenge lies in finding suitable weight vectors that yield Pareto optimal solutions. In addition, Pareto optimal solutions might not be found for non-convex objective spaces.
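A minimal sketch of a weighted-sum achievement scalarizing function as described above; the weights and the observed values are hypothetical.

```python
import numpy as np

def weighted_sum_asf(objectives, weights):
    """Collapse several minimization objectives into a single scalar merit.

    `objectives` holds the measured values, e.g. (-yield, execution_time);
    `weights` encodes the decision maker's preferences. Choosing weights that
    recover the preferred Pareto point is the hard part, and points on
    non-convex fronts may be unreachable for any weight vector.
    """
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), np.asarray(objectives, dtype=float)))

# hypothetical observation: 70% yield (negated for minimization), 20 min run time
print(weighted_sum_asf([-0.70, 20.0], weights=[0.8, 0.2]))   # 3.44
```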
A second a priori approach considers only one of the objectives for optimization while constraining the other objectives based on user preferences.42–44 These approaches, referred to as ε-constraint methods, have been shown to find Pareto optimal points even on non-convex objective spaces.27,45 However, the constraint vector needs to be chosen carefully, which typically requires detailed prior knowledge about the objectives.

A third class of a priori approaches, known as lexicographic methods, follows yet another strategy.46 Lexicographic methods require preference information expressed in terms of an importance hierarchy on the objectives (see Fig. 1A.I–III). In our example, when optimizing for the yield of a reaction and its execution time, the focus could be either on the reaction yield or on the execution time. In the scenario where the reaction yield matters the most, it is placed higher in the hierarchy than the execution time. To start the optimization procedure with a lexicographic method, the objectives are sorted in descending order of importance. Each objective is then subsequently optimized without degrading higher-level objectives.47 Variants of the lexicographic approach allow for minimal violations of the imposed constraints.48,49

Single-objective optimization methods

Most a priori methods reformulate multi-objective optimization problems into single-objective optimization problems. The latter are well studied and a plethora of algorithms have been developed for single-objective optimization.50–53 Some of these algorithms aim to optimize an objective function locally while others aim to locate the global optimum. In some cases, optimization algorithms are based not only on the objective function, but also on its gradients and possibly higher derivatives. Finding optimal conditions for an experimental setup imposes particular requirements on optimization algorithms, as the surface of the experimental objectives is unknown. Additionally, running an experiment can be costly in terms of execution time, money, or other budgeted resources. Therefore, an appropriate optimization algorithm must be gradient-free and global to keep the number of required objective evaluations to a minimum. In addition, such an algorithm must support optimization on possibly non-convex surfaces. In the following paragraphs we describe four techniques which will be considered herein to study the performance of Chimera.

Systematic grid searches and (fractional) factorial design strategies are popular methods for experimental design.54–56 These strategies rely on the construction of a grid of parameter points within the parameter (sub-)space, from which points are sampled for evaluation. Grid searches are embarrassingly parallel, as the parameter grid can be constructed prior to running any experiments. However, a constructed grid cannot take into account the most recent experimental results when proposing new parameter points. Moreover, parameter samples proposed from grid searches are correlated, and thus might miss important features of the objective surface or even the Pareto optimal point.
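A minimal sketch of the grid construction underlying such factorial designs; the parameter names, ranges, and grid resolutions are hypothetical.

```python
import itertools
import numpy as np

# The full grid is fixed before any experiment runs, so every point can be
# evaluated in parallel, but later proposals cannot react to earlier results.
temperatures = np.linspace(25, 100, 4)        # degrees Celsius
residence_times = np.linspace(1, 10, 5)       # minutes
catalyst_loadings = np.linspace(0.5, 2.0, 4)  # mol %

grid = list(itertools.product(temperatures, residence_times, catalyst_loadings))
print(len(grid))                              # 80 candidate experiments, fixed up front
```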
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) samples parameter points from a multivariate normal distribution defined on the parameter space.57,58 After evaluation of all proposed parameter points, the distribution parameters are updated via a maximum-likelihood approach. As a consequence, the mean of the distribution follows a natural gradient descent while the covariance matrix is updated via iterated principal component analysis retaining all principal components. While CMA-ES is successful on highly multi-modal functions, its efficiency drops on well-behaved convex functions.

Recently, Bayesian optimization methods have gained increased attention. Spearmint implements Bayesian optimization based on Gaussian processes.59,60 Gaussian processes associate every point in the parameter space with a normal distribution to construct an approximation of the unknown objective function. Parameter points can be proposed from this approximation via an acquisition function, implicitly balancing the explorative and exploitative behavior of the optimization procedure. While Gaussian process based optimization provides high flexibility, it suffers from the adverse cubic scaling of the approach with the number of observations.

Recently, we introduced Phoenics for the rapid optimization of unknown black-box functions.61 Phoenics combines concepts from Bayesian optimization with ideas from Bayesian kernel density estimation. Phoenics was shown to be an effective, flexible optimization algorithm on a wide range of objective functions and allows for efficient parallelization by proposing parameter points based on different sampling strategies. These strategies are enabled by the introduction of an intuitive bias towards exploitation or exploration.
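The sampling-and-update loop of CMA-ES described above can be sketched with the open-source cma package (assumed to be installed); the quadratic toy objective stands in for an expensive experiment and is not one of the benchmarks considered in the paper.

```python
import cma   # open-source pycma package

def merit(x):
    """Toy stand-in for an expensive, scalarized experimental objective."""
    return sum(xi ** 2 for xi in x)

es = cma.CMAEvolutionStrategy([0.5, 0.5, 0.5], 0.3, {"verbose": -9})
for _ in range(30):                                      # 30 generations
    candidates = es.ask()                                # sample from the current Gaussian
    es.tell(candidates, [merit(c) for c in candidates])  # update mean and covariance
print(es.result.xbest)                                   # best parameters found so far
```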
[ "29348235", "29296675", "29560211", "27775725", "17960268", "24437665", "11382355", "12804094", "21941248", "23223125", "23945881", "28358065", "18327941", "18339736", "21809112", "19548714", "23879880", "26588548", "16861264" ]
[ { "pmid": "29348235", "title": "Digitization of multistep organic synthesis in reactionware for on-demand pharmaceuticals.", "abstract": "Chemical manufacturing is often done at large facilities that require a sizable capital investment and then produce key compounds for a finite period. We present an approach to the manufacturing of fine chemicals and pharmaceuticals in a self-contained plastic reactionware device. The device was designed and constructed by using a chemical to computer-automated design (ChemCAD) approach that enables the translation of traditional bench-scale synthesis into a platform-independent digital code. This in turn guides production of a three-dimensional printed device that encloses the entire synthetic route internally via simple operations. We demonstrate the approach for the γ-aminobutyric acid receptor agonist, (±)-baclofen, establishing a concept that paves the way for the local manufacture of drugs outside of specialist facilities." }, { "pmid": "29296675", "title": "Optimizing Chemical Reactions with Deep Reinforcement Learning.", "abstract": "Deep reinforcement learning was employed to optimize chemical reactions. Our model iteratively records the results of a chemical reaction and chooses new experimental conditions to improve the reaction outcome. This model outperformed a state-of-the-art blackbox optimization algorithm by using 71% fewer steps on both simulations and real reactions. Furthermore, we introduced an efficient exploration strategy by drawing the reaction conditions from certain probability distributions, which resulted in an improvement on regret from 0.062 to 0.039 compared with a deterministic policy. Combining the efficient exploration policy with accelerated microdroplet reactions, optimal reaction conditions were determined in 30 min for the four reactions considered, and a better understanding of the factors that control microdroplet reactions was reached. Moreover, our model showed a better performance after training on reactions with similar or even dissimilar underlying mechanisms, which demonstrates its learning ability." }, { "pmid": "29560211", "title": "A self optimizing synthetic organic reactor system using real-time in-line NMR spectroscopy.", "abstract": "A configurable platform for synthetic chemistry incorporating an in-line benchtop NMR that is capable of monitoring and controlling organic reactions in real-time is presented. The platform is controlled via a modular LabView software control system for the hardware, NMR, data analysis and feedback optimization. Using this platform we report the real-time advanced structural characterization of reaction mixtures, including 19F, 13C, DEPT, 2D NMR spectroscopy (COSY, HSQC and 19F-COSY) for the first time. Finally, the potential of this technique is demonstrated through the optimization of a catalytic organic reaction in real-time, showing its applicability to self-optimizing systems using criteria such as stereoselectivity, multi-nuclear measurements or 2D correlations." }, { "pmid": "27775725", "title": "Evolutionary multi-objective optimization of colour pixels based on dielectric nanoantennas.", "abstract": "The rational design of photonic nanostructures consists of anticipating their optical response from systematic variations of simple models. This strategy, however, has limited success when multiple objectives are simultaneously targeted, because it requires demanding computational schemes. 
To this end, evolutionary algorithms can drive the morphology of a nano-object towards an optimum through several cycles of selection, mutation and cross-over, mimicking the process of natural selection. Here, we present a numerical technique that can allow the design of photonic nanostructures with optical properties optimized along several arbitrary objectives. In particular, we combine evolutionary multi-objective algorithms with frequency-domain electrodynamical simulations to optimize the design of colour pixels based on silicon nanostructures that resonate at two user-defined, polarization-dependent wavelengths. The scattering spectra of optimized pixels fabricated by electron-beam lithography show excellent agreement with the targeted objectives. The method is self-adaptive to arbitrary constraints and therefore particularly apt for the design of complex structures within predefined technological limits." }, { "pmid": "17960268", "title": "Intelligent routes to the controlled synthesis of nanoparticles.", "abstract": "We describe an autonomous 'black-box' system for the controlled synthesis of fluorescent nanoparticles. The system uses a microfluidic reactor to carry out the synthesis and an in-line spectrometer to monitor the emission spectra of the emergent particles. The acquired data is fed into a control algorithm which reduces each spectrum to a scalar 'dissatisfaction coefficient' and then intelligently updates the reaction conditions in an effort to minimise this coefficient and so drive the system towards a desired goal. In the tests reported here, CdSe nanoparticles were prepared by separately injecting solutions of CdO and Se into the two inlets of a heated y-shaped microfluidic reactor. A noise-tolerant global search algorithm was then used to efficiently identify-without any human intervention-the injection rates and temperature that yielded the optimum intensity for a chosen emission wavelength." }, { "pmid": "24437665", "title": "General subpopulation framework and taming the conflict inside populations.", "abstract": "Structured evolutionary algorithms have been investigated for some time. However, they have been under explored especially in the field of multi-objective optimization. Despite good results, the use of complex dynamics and structures keep the understanding and adoption rate of structured evolutionary algorithms low. Here, we propose a general subpopulation framework that has the capability of integrating optimization algorithms without restrictions as well as aiding the design of structured algorithms. The proposed framework is capable of generalizing most of the structured evolutionary algorithms, such as cellular algorithms, island models, spatial predator-prey, and restricted mating based algorithms. Moreover, we propose two algorithms based on the general subpopulation framework, demonstrating that with the simple addition of a number of single-objective differential evolution algorithms for each objective, the results improve greatly, even when the combined algorithms behave poorly when evaluated alone at the tests. Most importantly, the comparison between the subpopulation algorithms and their related panmictic algorithms suggests that the competition between different strategies inside one population can have deleterious consequences for an algorithm and reveals a strong benefit of using the subpopulation framework." 
}, { "pmid": "11382355", "title": "Completely derandomized self-adaptation in evolution strategies.", "abstract": "This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principle shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equivalent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigorously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions usually a speed up factor of several orders of magnitude is observed. On moderately mis-scaled functions a speed up factor of three to ten can be expected." }, { "pmid": "12804094", "title": "Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES).", "abstract": "This paper presents a novel evolutionary optimization strategy based on the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). This new approach is intended to reduce the number of generations required for convergence to the optimum. Reducing the number of generations, i.e., the time complexity of the algorithm, is important if a large population size is desired: (1) to reduce the effect of noise; (2) to improve global search properties; and (3) to implement the algorithm on (highly) parallel machines. Our method results in a highly parallel algorithm which scales favorably with large numbers of processors. This is accomplished by efficiently incorporating the available information from a large population, thus significantly reducing the number of generations needed to adapt the covariance matrix. The original version of the CMA-ES was designed to reliably adapt the covariance matrix in small populations but it cannot exploit large populations efficiently. Our modifications scale up the efficiency to population sizes of up to 10n, where n is the problem dimension. This method has been applied to a large number of test problems, demonstrating that in many cases the CMA-ES can be advanced from quadratic to linear time complexity." }, { "pmid": "21941248", "title": "Lessons from nature about solar light harvesting.", "abstract": "Solar fuel production often starts with the energy from light being absorbed by an assembly of molecules; this electronic excitation is subsequently transferred to a suitable acceptor. For example, in photosynthesis, antenna complexes capture sunlight and direct the energy to reaction centres that then carry out the associated chemistry. 
In this Review, we describe the principles learned from studies of various natural antenna complexes and suggest how to elucidate strategies for designing light-harvesting systems. We envisage that such systems will be used for solar fuel production, to direct and regulate excitation energy flow using molecular organizations that facilitate feedback and control, or to transfer excitons over long distances. Also described are the notable properties of light-harvesting chromophores, spatial-energetic landscapes, the roles of excitonic states and quantum coherence, as well as how antennas are regulated and photoprotected." }, { "pmid": "23223125", "title": "Hot charge-transfer excitons set the time limit for charge separation at donor/acceptor interfaces in organic photovoltaics.", "abstract": "Photocurrent generation in organic photovoltaics (OPVs) relies on the dissociation of excitons into free electrons and holes at donor/acceptor heterointerfaces. The low dielectric constant of organic semiconductors leads to strong Coulomb interactions between electron-hole pairs that should in principle oppose the generation of free charges. The exact mechanism by which electrons and holes overcome this Coulomb trapping is still unsolved, but increasing evidence points to the critical role of hot charge-transfer (CT) excitons in assisting this process. Here we provide a real-time view of hot CT exciton formation and relaxation using femtosecond nonlinear optical spectroscopies and non-adiabatic mixed quantum mechanics/molecular mechanics simulations in the phthalocyanine-fullerene model OPV system. For initial excitation on phthalocyanine, hot CT excitons are formed in 10(-13) s, followed by relaxation to lower energies and shorter electron-hole distances on a 10(-12) s timescale. This hot CT exciton cooling process and collapse of charge separation sets the fundamental time limit for competitive charge separation channels that lead to efficient photocurrent generation." }, { "pmid": "23945881", "title": "Visualizing charge separation in bulk heterojunction organic solar cells.", "abstract": "Solar cells based on conjugated polymer and fullerene blends have been developed as a low-cost alternative to silicon. For efficient solar cells, electron-hole pairs must separate into free mobile charges that can be extracted in high yield. We still lack good understanding of how, why and when carriers separate against the Coulomb attraction. Here we visualize the charge separation process in bulk heterojunction solar cells by directly measuring charge carrier drift in a polymer:fullerene blend with ultrafast time resolution. We show that initially only closely separated (<1 nm) charge pairs are created and they separate by several nanometres during the first several picoseconds. Charge pairs overcome Coulomb attraction and form free carriers on a subnanosecond time scale. Numerical simulations complementing the experimental data show that fast three-dimensional charge diffusion within an energetically disordered medium, increasing the entropy of the system, is sufficient to drive the charge separation process." }, { "pmid": "28358065", "title": "Using coherence to enhance function in chemical and biophysical systems.", "abstract": "Coherence phenomena arise from interference, or the addition, of wave-like amplitudes with fixed phase differences. 
Although coherence has been shown to yield transformative ways for improving function, advances have been confined to pristine matter and coherence was considered fragile. However, recent evidence of coherence in chemical and biological systems suggests that the phenomena are robust and can survive in the face of disorder and noise. Here we survey the state of recent discoveries, present viewpoints that suggest that coherence can be used in complex chemical systems, and discuss the role of coherence as a design element in realizing function." }, { "pmid": "18327941", "title": "Light harvesting in photosystem II core complexes is limited by the transfer to the trap: can the core complex turn into a photoprotective mode?", "abstract": "A structure-based modeling and analysis of the primary photophysical reactions in photosystem II (PS-II) core complexes is presented. The modeling is based on a description of stationary and time-resolved optical spectra of the CP43, CP47, and D1-D2-cytb559 subunits and whole core complexes. It shows that the decay of excited states in PS-II core complexes with functional (open) reaction centers (RCs) is limited by the excitation energy transfer from the CP43 and CP47 core antennae to the RC occurring with a time constant of 40-50 ps at room temperature. The chlorophylls responsible for the low energy absorbance bands in the CP43 and CP47 subunits are assigned, and their signatures in hole burning, fluorescence line narrowing, and triplet-minus-singlet spectra are explained. The different locations of these trap states in the CP43 and CP47 antennae with respect to the reaction center lead to a dramatic change of the transfer dynamics at low temperatures. The calculations predict that, compared to room temperature, the fluorescence decay at 77 K should reveal a faster transfer from CP43 and a much slower and highly dispersive transfer from CP47 to the RC. A factor of 3 increase in the fastest decay time constant of fluorescence that was reported to occur when the RC is closed (the plastoquinone QA is reduced) is understood in the present model by assuming that the intrinsic rate constant for primary electron transfer decreases from 100 fs-1 for open RCs to 6 ps-1 for closed RCs, leading to a reduction of the primary electron acceptor PheoD1, in 300 fs and 18 ps, respectively. The model suggests that the reduced QA switches the photosystem into a photoprotective mode in which a large part of the excitation energy of the RC returns to the CP43 and CP47 core antennae, where the physiologically dangerous triplet energy of the chlorophylls can be quenched by the carotenoids. Experiments are suggested to test this hypothesis. The ultrafast primary electron transfer inferred for open RCs provides further support for the accessory chlorophyll ChlD1 to be the primary electron donor in photosystem II." }, { "pmid": "18339736", "title": "Spectroscopic properties of reaction center pigments in photosystem II core complexes: revision of the multimer model.", "abstract": "Absorbance difference spectra associated with the light-induced formation of functional states in photosystem II core complexes from Thermosynechococcus elongatus and Synechocystis sp. PCC 6803 (e.g., P(+)Pheo(-),P(+)Q(A)(-),(3)P) are described quantitatively in the framework of exciton theory. 
In addition, effects are analyzed of site-directed mutations of D1-His(198), the axial ligand of the special-pair chlorophyll P(D1), and D1-Thr(179), an amino-acid residue nearest to the accessory chlorophyll Chl(D1), on the spectral properties of the reaction center pigments. Using pigment transition energies (site energies) determined previously from independent experiments on D1-D2-cytb559 complexes, good agreement between calculated and experimental spectra is obtained. The only difference in site energies of the reaction center pigments in D1-D2-cytb559 and photosystem II core complexes concerns Chl(D1). Compared to isolated reaction centers, the site energy of Chl(D1) is red-shifted by 4 nm and less inhomogeneously distributed in core complexes. The site energies cause primary electron transfer at cryogenic temperatures to be initiated by an excited state that is strongly localized on Chl(D1) rather than from a delocalized state as assumed in the previously described multimer model. This result is consistent with earlier experimental data on special-pair mutants and with our previous calculations on D1-D2-cytb559 complexes. The calculations show that at 5 K the lowest excited state of the reaction center is lower by approximately 10 nm than the low-energy exciton state of the two special-pair chlorophylls P(D1) and P(D2) which form an excitonic dimer. The experimental temperature dependence of the wild-type difference spectra can only be understood in this model if temperature-dependent site energies are assumed for Chl(D1) and P(D1), reducing the above energy gap from 10 to 6 nm upon increasing the temperature from 5 to 300 K. At physiological temperature, there are considerable contributions from all pigments to the equilibrated excited state P*. The contribution of Chl(D1) is twice that of P(D1) at ambient temperature, making it likely that the primary charge separation will be initiated by Chl(D1) under these conditions. The calculations of absorbance difference spectra provide independent evidence that after primary electron transfer the hole stabilizes at P(D1), and that the physiologically dangerous charge recombination triplets, which may form under light stress, equilibrate between Chl(D1) and P(D1)." }, { "pmid": "21809112", "title": "Structure-based simulation of linear optical spectra of the CP43 core antenna of photosystem II.", "abstract": "The linear optical spectra (absorbance, linear dichroism, circular dichroism, fluorescence) of the CP43 (PsbC) antenna of the photosystem II core complex (PSIIcc) pertaining to the S(0) → S(1) (Q(Y)) transitions of the chlorophyll (Chl) a pigments are simulated by applying a combined quantum chemical/electrostatic method to obtain excitonic couplings and local transition energies (site energies) on the basis of the 2.9 Å resolution crystal structure (Guskov et al., Nat Struct Mol Biol 16:334-342, 2009). The electrostatic calculations identify three Chls with low site energies (Chls 35, 37, and 45 in the nomenclature of Loll et al. (Nature 438:1040-1044, 2005). A refined simulation of experimental spectra of isolated CP43 suggests a modified set of site energies within 143 cm(-1) of the directly calculated values (root mean square deviation: 80 cm(-1)). In the refined set, energy sinks are at Chls 37, 43, and 45 in agreement with earlier fitting results (Raszewski and Renger, J Am Chem Soc 130:4431-4446, 2008). 
The present structure-based simulations reveal that a large part of the redshift of Chl 37 is due to a digalactosyldiacylglycerol lipid. This finding suggests a new role for lipids in PSIIcc, namely the tuning of optical spectra and the creation of an excitation energy funnel towards the reaction center. The analysis of electrostatic pigment-protein interactions is used to identify amino acid residues that are of potential interest for an experimental approach to an assignment of site energies and energy sinks by site-directed mutagenesis." }, { "pmid": "19548714", "title": "On the adequacy of the Redfield equation and related approaches to the study of quantum dynamics in electronic energy transfer.", "abstract": "The observation of long-lived electronic coherence in photosynthetic excitation energy transfer (EET) by Engel et al. [Nature (London) 446, 782 (2007)] raises questions about the role of the protein environment in protecting this coherence and the significance of the quantum coherence in light harvesting efficiency. In this paper we explore the applicability of the Redfield equation in its full form, in the secular approximation and with neglect of the imaginary part of the relaxation terms for the study of these phenomena. We find that none of the methods can give a reliable picture of the role of the environment in photosynthetic EET. In particular the popular secular approximation (or the corresponding Lindblad equation) produces anomalous behavior in the incoherent transfer region leading to overestimation of the contribution of environment-assisted transfer. The full Redfield expression on the other hand produces environment-independent dynamics in the large reorganization energy region. A companion paper presents an improved approach, which corrects these deficiencies [A. Ishizaki and G. R. Fleming, J. Chem. Phys. 130, 234111 (2009)]." }, { "pmid": "23879880", "title": "Disentangling electronic and vibronic coherences in two-dimensional echo spectra.", "abstract": "The prevalence of long-lasting oscillatory signals in two-dimensional (2D) echo spectroscopy of light-harvesting complexes has led to a search for possible mechanisms. We investigate how two causes of oscillatory signals are intertwined: (i) electronic coherences supporting delocalized wavelike motion and (ii) narrow bands in the vibronic spectral density. To disentangle the vibronic and electronic contributions, we introduce a time-windowed Fourier transform of the signal amplitude. We find that 2D spectra can be dominated by excitations of pathways which are absent in excitonic energy transport. This leads to an underestimation of the lifetime of electronic coherences by 2D spectra." }, { "pmid": "26588548", "title": "Scalable High-Performance Algorithm for the Simulation of Exciton Dynamics. Application to the Light-Harvesting Complex II in the Presence of Resonant Vibrational Modes.", "abstract": "The accurate simulation of excitonic energy transfer in molecular complexes with coupled electronic and vibrational degrees of freedom is essential for comparing excitonic system parameters obtained from ab initio methods with measured time-resolved spectra. Several exact methods for computing the exciton dynamics within a density-matrix formalism are known but are restricted to small systems with less than 10 sites due to their computational complexity. 
To study the excitonic energy transfer in larger systems, we adapt and extend the exact hierarchical equation of motion (HEOM) method to various high-performance many-core platforms using the Open Compute Language (OpenCL). For the light-harvesting complex II (LHC II) found in spinach, the HEOM results deviate from predictions of approximate theories and clarify the time scale of the transfer process. We investigate the impact of resonantly coupled vibrations on the relaxation and show that the transfer does not rely on a fine-tuning of specific modes." }, { "pmid": "16861264", "title": "How proteins trigger excitation energy transfer in the FMO complex of green sulfur bacteria.", "abstract": "A simple electrostatic method for the calculation of optical transition energies of pigments in protein environments is presented and applied to the Fenna-Matthews-Olson (FMO) complex of Prosthecochloris aestuarii and Chlorobium tepidum. The method, for the first time, allows us to reach agreement between experimental optical spectra and calculations based on transition energies of pigments that are calculated in large part independently, rather than fitted to the spectra. In this way it becomes possible to understand the molecular mechanism allowing the protein to trigger excitation energy transfer reactions. The relative shift in excitation energies of the seven bacteriochlorophyll-a pigments of the FMO complex of P. aestuarii and C. tepidum are obtained from calculations of electrochromic shifts due to charged amino acids, assuming a standard protonation pattern of the protein, and by taking into account the three different ligand types of the pigments. The calculations provide an explanation of some of the earlier results for the transition energies obtained from fits of optical spectra. In addition, those earlier fits are verified here by using a more advanced theory of optical spectra, a genetic algorithm, and excitonic couplings obtained from electrostatic calculations that take into account the influence of the dielectric protein environment. The two independent calculations of site energies strongly favor one of the two possible orientations of the FMO trimer relative to the photosynthetic membrane, which were identified by electron microscopic studies and linear dichroism experiments. Efficient transfer of excitation energy to the reaction center requires bacteriochlorophylls 3 and 4 to be the linker pigments. The temporal and spatial transfer of excitation energy through the FMO complex is calculated to proceed along two branches, with transfer times that differ by an order of magnitude." } ]
Cognitive Computation
30363787
PMC6182572
10.1007/s12559-018-9559-8
Distributed Drone Base Station Positioning for Emergency Cellular Networks Using Reinforcement Learning
Due to the unpredictability of natural disasters, whenever a catastrophe happens, it is vital that not only emergency rescue teams are prepared, but also that there is a functional communication network infrastructure. Hence, in order to prevent additional losses of human lives, it is crucial that network operators are able to deploy an emergency infrastructure as fast as possible. In this sense, the deployment of an intelligent, mobile, and adaptable network, through the usage of drones—unmanned aerial vehicles—is being considered as one possible alternative for emergency situations. In this paper, an intelligent solution based on reinforcement learning is proposed in order to find the best position of multiple drone small cells (DSCs) in an emergency scenario. The proposed solution’s main goal is to maximize the amount of users covered by the system, while drones are limited by both backhaul and radio access network constraints. Results show that the proposed Q-learning solution largely outperforms all other approaches with respect to all metrics considered. Hence, intelligent DSCs are considered a good alternative in order to enable the rapid and efficient deployment of an emergency communication network.
Related Work
Aerial platforms, such as drones, are expected to have an important role in the next generation of mobile networks. Because of their flexibility, adaptability, and mobility capabilities, these platforms can be deployed in a wide range of situations, such as providing extra coverage and capacity whenever a big event takes place, supplying the necessary communication infrastructure in case of an emergency, or bringing service to rural and isolated areas, to name a few. For these reasons, the deployment of drones in mobile communication networks has seen increased attention recently [1, 3, 5–15]. In addition, the deployment of machine learning solutions in cellular networks, more specifically self-organizing cellular networks, has also seen an increase in recent years, and research groups all over the world are developing intelligent solutions in order to tackle the various challenges of cellular networks [4].
Erdelj et al., in [1], present a survey of the advances in drone technology focused on wireless sensor networks and disaster management. The survey divides a disaster into three main stages and presents drone applications and challenges for each one of them. In [8], the authors show key aspects of the design and implementation of future aerial communication networks; however, instead of focusing on small drones, the authors focus on Helikite platforms.
Other works, such as [9–11], attempt to find the best position of DSCs analytically. In [9], for example, the authors attempt to find the best position for low altitude platforms (LAPs) in order to maximize their coverage range. The authors develop an analytical solution to determine the best altitude of a LAP and conclude that the optimum altitude is strongly dependent on the environment. Mozaffari et al., in [10], derive the optimal altitude of DSCs which gives the maximum coverage, while minimizing the transmit power. The system is investigated in two different scenarios, one considering interference between drones and another being interference-free. Results showed that, when interference is considered, there is an optimal separation distance between drones in order to maximize the network coverage. In [11], Alzenad et al. present an optimal placement algorithm for DSCs that maximizes the coverage while minimizing the transmit power of the drones. In addition, the authors also decouple the problem in two, considering the placement of the drones as separate problems in the horizontal and vertical dimensions. Results show that their system is able to save a significant amount of power, while also increasing the number of covered users.
Kalantari et al., in [3], propose to find the best position of DSCs, but instead of determining it analytically, they utilize particle swarm optimization (PSO). Their results show that the algorithm is capable of adapting to different scenarios and that the drones were able to find by themselves the best positions in order to maximize the number of users being covered. Ahmadi et al., in [5], propose a novel mobile network architecture, considering drones as a core part of the network. Their work formulates the optimum placement of drones, while also presenting some challenges and future research directions. Also, regarding the positioning of drones, Merwaday et al. 
show in [12] that, in an emergency scenario, finding the optimal position for temporary DSCs via exploiting the mobility of the drones yields improvements in network throughput and spectral efficiency.
Another work by Kalantari et al., in [13], investigates the usage of flying base stations considering different types of backhaul links. The authors introduce two different approaches, namely a network-centric approach and a user-centric approach, and determine the best 3D position of DSCs. Their results show that the network-centric approach is able to maximize the number of covered users and that the user-centric solution maximizes user throughput. Another paper which considers backhaul limitations is the work in [11], by Alzenad et al., wherein the authors study the feasibility of a novel backhaul framework considering aerial platforms and free-space optics point-to-point links. Their results demonstrate that this type of backhaul is capable of delivering higher data rates than others, but it is also very sensitive to the environment, including clouds and fog. In [7], the authors consider the utilization of drones as a complementary approach to future terrestrial mobile networks. The authors present some design opportunities and challenges and also develop a case study on the positioning of DSCs.
Mozaffari et al., in [6], present the deployment of a drone network on top of an already existing device-to-device network. The authors evaluate the system in two different scenarios, considering static and mobile drones. The authors derive the outage and coverage probabilities for each case and show that the mobile strategy performs better than the static one in terms of coverage and energy efficiency. Azari et al., in [14], propose a framework for the analysis and optimization of air-to-ground systems considering altitude and cooperation diversity. The authors consider drones as relays and develop analytical solutions for the drones' height in order to maximize their reliability and coverage range. Lastly, Shah et al., in [15], propose a new solution to the problem of user cell association considering flying BSs with backhaul constraints. The authors present a distributed solution based on a greedy search algorithm and show that the proposed approach has better results than other baseline approaches and is less computationally complex.
Regarding the application of intelligent techniques, a particular family of algorithms that has gained a lot of attention recently is the one based on RL. Because of their capability of online learning regardless of the environment they operate in, RL algorithms can be applied in many different domains. One example is the work in [16], in which the authors use Q-learning together with deep learning to develop an algorithm that can play several Atari 2600 games, like Pong and Breakout. By taking only the raw pixels of the screen as inputs, the authors were able to show that their algorithm was capable of learning by itself how to play each game and was even able to outperform previous approaches and beat human experts in some games.
Another example is the work in [17], in which the authors propose a brain-inspired cognitive architecture for autonomous learning of knowledge representation. This architecture presents key concepts in terms of acquiring knowledge based on behavioral needs and reusing patterns to explain new situations. 
Results show that their implementation is able to solve simple problems, but the authors state that this approach might be better in terms of scalability to more complex tasks. In [18], the authors describe an approach to control a robot based on the actor-critic algorithm. The proposed method is tested in a landmark approach, involving movable cameras, which successfully controls two types of robots performing a navigational task. Results show that the proposed solution is capable of performing autonomous navigation and highlighted possibilities for more independent robot control in the future. Moreover, Zhao et al., in [19], propose a general computational model inspired by the human brain and RL concepts. The proposed algorithm is verified in a drone application, in which drones had to fly through specific paths, such as through windows and doors, in order to avoid certain obstacles.
In the context of wireless networks, several intelligent solutions are being proposed. The work in [20], for example, proposes a novel cognitively inspired clustering algorithm in order to manage the energy consumption of a wireless sensor network. However, shifting the focus toward the applications of RL algorithms in cellular networks, the works by Jaber et al., in [21–23], are a good example. In these works, the authors propose a Q-learning solution in order to tackle the problem of user cell association considering backhaul constraints. By adjusting the offsets of small cells in order to allocate users with different requirements to the best fitting cell, based not only on RAN requirements, but also on backhaul parameters, the proposed solution is able to mitigate user dissatisfaction at the cost of a slight reduction in total perceived throughput.
Although some works cover the deployment of drones in emergency situations [1, 8], others cover the deployment of drones with backhaul limitations [11, 13, 15], and others consider the positioning of aerial platforms [3, 5–7, 9–11, 14], only [3] proposes an intelligent solution in order to determine the best position of DSCs. Also, as can be seen from the reviewed literature, most studies address the drone positioning problem analytically, through the development of closed-form equations. These methods, although important, require several assumptions, such as knowledge of how many users are in the network and of their positions. In addition, most of these works also do not take into account user mobility and perform the drone placement optimization for a specific, static scenario. Hence, these types of solutions might not be suitable for real situations, in which the environment is constantly changing, users can move at different speeds, and even network parameters, such as cell load and backhaul conditions, can change as well.
In addition, as previously mentioned, the only work that proposes an intelligent solution to the problem of drone positioning optimization is the work of Kalantari et al., in [3]. However, the proposed work utilizes a PSO algorithm, which can be viewed as a branch of genetic algorithms or heuristic methods (in contrast to genetic algorithms, PSO does not perform selection between generations) [24, 25]. 
Although able to solve the proposed problems in a simulated environment, solutions such as GA, heuristics, and PSO, due to their inherent nature of having to search for the best possible solution among a family of available ones, are not suitable for applications that require continuous interaction between the system and its environment. This occurs because any change in the original set of solutions would require the whole computation to be performed again. For instance, PSO is not able to perform an online optimization of the problem.
As the authors show in [3], the approach is tested in two fixed scenarios, without considering user mobility. Because PSO performs an offline computation, this solution is also not capable of adapting itself to real-time changes in the network. For example, if mobility were taken into account, the proposed PSO algorithm would have to run again, every time a user moved, in order to determine the best new solution for this new network configuration, resulting in an impractical system. Additionally, due to the vast search space that the PSO solution has to evaluate, a centralized unit would be required in order to perform all the required computations and determine the best configuration. Again, in real systems this is not practical, as this would result in an increase in communication signaling between the centralized unit and the drones, as well as the need for synchronization. Lastly, due to the heuristic nature of PSO, this approach would be neither scalable nor computationally efficient, due to the vast search space it must evaluate in order to find the best possible configuration. In a real environment, for example, in which network conditions and user positions change frequently, PSO would not be able to cope with these changes, becoming an impractical solution in real scenarios.
Based on the issues mentioned above, it is clear that the development of a novel solution that is capable of adapting itself online and that is also able to analyze the environment and determine the best possible actions to be taken is needed. Given that, RL algorithms are a suitable approach since, regardless of the environment they operate in, they can explore the available options and determine the best actions to take.
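To make the contrast with offline methods such as PSO concrete, the following is a minimal tabular Q-learning sketch for a single drone small cell. It is an illustration only, not the paper's actual formulation: the grid-cell state, the unit-move action set, and the coverage-minus-backhaul-penalty reward are all assumptions introduced here for the example.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for one drone small cell (DSC).
# Assumed design (not taken from the paper): state = drone's grid cell,
# actions = unit moves on a 2D grid, reward = number of covered users
# minus a penalty when the backhaul capacity is exceeded.

ACTIONS = ["north", "south", "east", "west", "stay"]

class QLearningDSC:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> estimated return
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_action(self, state):
        # epsilon-greedy exploration over the discrete action set
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # standard Q-learning temporal-difference update
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

Because each drone can run such an agent on its own local observations and keep updating after every move, this kind of learner reacts online to user mobility and changing backhaul conditions, which is precisely the property the offline search methods discussed above lack.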
[]
[]
Frontiers in Neuroinformatics
30349471
PMC6186990
10.3389/fninf.2018.00050
Automatically Selecting a Suitable Integration Scheme for Systems of Differential Equations in Neuron Models
On the level of the spiking activity, the integrate-and-fire neuron is one of the most commonly used descriptions of neural activity. A multitude of variants has been proposed to cope with the huge diversity of behaviors observed in biological nerve cells. The main appeal of this class of model is that it can be defined in terms of a hybrid model, where a set of mathematical equations describes the sub-threshold dynamics of the membrane potential and the generation of action potentials is often only added algorithmically without the shape of spikes being part of the equations. In contrast to more detailed biophysical models, this simple description of neuron models allows the routine simulation of large biological neuronal networks on standard hardware widely available in most laboratories these days. The time evolution of the relevant state variables is usually defined by a small set of ordinary differential equations (ODEs). A small number of evolution schemes for the corresponding systems of ODEs are commonly used for many neuron models, and form the basis of the neuron model implementations built into commonly used simulators like Brian, NEST and NEURON. However, an often neglected problem is that the implemented evolution schemes are only rarely selected through a structured process based on numerical criteria. This practice cannot guarantee accurate and stable solutions for the equations and the actual quality of the solution depends largely on the parametrization of the model. In this article, we give an overview of typical equations and state descriptions for the dynamics of the relevant variables in integrate-and-fire models. We then describe a formal mathematical process to automate the design or selection of a suitable evolution scheme for this large class of models. Finally, we present the reference implementation of our symbolic analysis toolbox for ODEs that can guide modelers during the implementation of custom neuron models.
5. Related work
In this section we compare our proposed framework for choosing evolution schemes for systems of ODEs in neural models with the corresponding approaches implemented in the simulators Brian (Goodman and Brette, 2009; Stimberg et al., 2014) and NEURON (Hines and Carnevale, 2000; Carnevale and Hines, 2006). These two simulators were chosen as they are in widespread use in the community. We will further consider the application of software for symbolic computation (for exact mathematical calculations) or scientific computing (for numerical calculations) to our setting in language modeling for neural simulators.
5.1. Brian
Similar to our framework, the implementation of the Brian simulator also makes a distinction between systems of ODEs that can be solved analytically and systems that can only be solved efficiently in a numeric manner. In addition to simple integrate-and-fire neurons, Brian also supports multi-compartmental neurons and neurons described by stochastic ODEs. As these types of models cannot be currently analyzed by our ODE analysis toolbox, we will not take them into account here. Instead, we focus on single-compartmental deterministic neuron models, as we can only draw a meaningful comparison for this group of neuron models.
In Brian, neuron dynamics can be described by a system consisting of ODEs and time-dependent functions. They are either classified as linear, meaning they can be solved analytically, or as non-linear, meaning they cannot be solved analytically and must be solved numerically using the forward Euler method (if not stated otherwise by the author of the model). In theory, linear constant coefficient ODEs can be solved analytically by Brian. However, if the dynamics of a neuron are described using a non-constant function of time rather than an ODE defining this function, they are always solved numerically. This could be improved by using our proposed framework, which allows an analytical solver to be generated even for a system consisting of time-dependent functions that satisfy a linear homogeneous ODE and feed into a linear constant coefficient ODE. Our framework thus allows an analytical evolution for a larger class of neuron dynamics. In particular, our framework seems to be more robust with respect to the use of several different postsynaptic shapes, as they are treated separately, in contrast to Brian's approach, where the system is analyzed by SymPy as a whole.
All systems of ODEs in Brian that are not evolved by an analytical evolution scheme are by default evolved using the simple Euler method. To circumvent this, it is possible to choose a numerical evolution scheme from a list of other methods. This approach works well for users who are aware of the numerical consequences of their choice of solver but can be problematic for scientists who lack the ability to weigh up the advantages and disadvantages of different numerical evolution schemes for their particular system of ODEs. Moreover, as demonstrated in Figure 3, the choice of an appropriate evolution scheme might depend on the exact parameters for the ODEs and thus not be obvious even for an advanced user.
5.2. NMODL
NMODL is the model specification language of the NEURON simulator. NEURON was created for describing large multi-compartmental neuron models and thus also supports a wider range of models than our proposed framework currently does. 
We will again only contrast those types of models for which a comparison is meaningful.
For linear systems of ODEs, NMODL chooses an evolution method that propagates the system by evolving each variable under the assumption that all other variables are constant during one time step. In many cases this approach approximates the true solution well, but it is still less accurate than an actual analytical solution. For all other systems of ODEs, i.e., all non-linear ODEs, an implicit method is chosen, regardless of the exact properties of the equations, to guarantee an evolution of stiff ODEs without causing numeric instabilities. This is a robust solution but may lead to excessively large simulation run times in cases where the choice of an explicit evolution scheme for non-stiff ODE systems would be sufficient.
5.3. Software for symbolic computation and scientific computing
There are a number of high quality and widely used applications available for symbolic computation, most notably Wolfram Mathematica (Benker, 2016), Modelica (Tiller, 2001), and Maple (Westermann, 2010). All three provide frameworks for solving ordinary differential equations both symbolically and numerically. Here, we will briefly describe their capabilities and limitations for both symbolic and numeric integration of systems of ODEs.
5.3.1. Symbolic integrators
At first glance, the integration schemes provided by the programming languages (or in the case of Modelica, modeling language) seem appropriate for the task addressed in our study. As discussed in section 1, the ordinary differential equations used to define neuron models and to describe their dynamical behavior are typically linear (though not homogeneous and not linear with a constant coefficient) and can in several cases be solved analytically by any of the programs above. However, for the specific requirements related to neural simulations, there are several reasons why they are not entirely well suited.
Firstly, neurons receive input that generally changes in every integration step due to the arrival of incoming spikes, thus changing the differential equations to be solved. Although each of these differential equations can be integrated easily using, e.g., Wolfram Mathematica, none of these frameworks provide a general, exact solution for each integration step that takes a run-time generated varying input into account. The next two points are related to the size of neural systems commonly investigated. Spiking neuronal network models often contain of the order of 10^3–10^5 neurons, and sometimes substantially more (Kunkel et al., 2014). Calling external software for symbolic computation of ordinary differential equations during run time for each neuron is therefore often too costly. Moreover, for large models, the simulation software is likely to be deployed on a large cluster or supercomputer. The aforementioned applications are typically not installed on such architectures, whereas Python is a standard installation, providing the package SymPy, which is sufficient for symbolic computation in this context.
5.3.2. Numerical integrators
There are a number of approaches to automatically select numeric integrators depending on whether the problem is stiff or non-stiff (Petzold, 1983; Shampine, 1983, 1991). These approaches are typically designed to switch integration schemes during runtime when the problem changes its properties. All of them rely in one way or another on the behavior of the Jacobian matrix evaluated at the point of integration. 
Typically, the methods try to approximate the dominant eigenvalue of the Jacobian with a low cost compared to that of the stepping algorithm. However, for a spiking neural network simulation, the determination of the stiffness of the system, and thus the solver, should occur before the simulation starts, so as to minimize runtime costs.
Thus the question remains whether it would be possible to carry out these kinds of tests during generation of the neuron model. Applying the test to a large number of randomly selected values of the state variables, or carrying out a number of test runs using representative spike trains, would make it possible to work around the fact that the solution up to a given point is not yet known. However, as these tests rely on determining the stiffness through the properties of the Jacobian, they would still not be completely precise. As we have the advantage of effectively no computational constraints during generation of the neuron model, there is thus no advantage in using such a low-cost strategy. In our approach we compute the solution using both explicit and implicit schemes and compare their behaviors a posteriori, thus obtaining an accurate assessment of the appropriate solver for a given set of parameters.
In addition, as for symbolic integration, the packages that provide such stiffness testing capability for numeric integration do not provide a framework for handling a run-time determined variable input due to incoming spikes. Thus we conclude that the specific problem addressed by our toolbox lies outside the scope of general-purpose symbolic and numeric integration packages.
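The a posteriori comparison of explicit and implicit schemes can be illustrated with a small experiment of the kind described above. The sketch below is not the toolbox's implementation; the two-variable subthreshold system and all parameter values are placeholders chosen only to show how one might compare an explicit and an implicit SciPy solver on the same parametrization before a simulation starts.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch (not the ode-toolbox itself): run the same subthreshold
# system with an explicit (RK45) and an implicit (Radau) scheme and compare
# the effort each needs, as a crude empirical indicator of stiffness.

tau_m, tau_syn, C_m = 10.0, 0.5, 250.0   # ms, ms, pF -- assumed example values

def rhs(t, y):
    v, i_syn = y
    dv = -v / tau_m + i_syn / C_m        # leaky membrane driven by synaptic current
    di = -i_syn / tau_syn                # exponentially decaying synaptic current
    return [dv, di]

y0 = [0.0, 1000.0]                       # large initial current to stress the solver
t_span = (0.0, 50.0)

for method in ("RK45", "Radau"):         # explicit vs. implicit scheme
    sol = solve_ivp(rhs, t_span, y0, method=method, rtol=1e-8, atol=1e-8)
    print(method, "accepted steps:", sol.t.size, "rhs evaluations:", sol.nfev)
```

If the implicit scheme needs markedly fewer steps and function evaluations at the same tolerances, the system behaves stiffly for this parametrization and the implicit scheme is the safer recommendation; otherwise the cheaper explicit scheme suffices.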
[ "16014787", "1691879", "19431309", "20011141", "21031031", "10905805", "18244602", "21415913", "25346682", "7260316", "17134317", "30123121", "23203991", "10592015", "24550820", "26325661" ]
[ { "pmid": "16014787", "title": "Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.", "abstract": "We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model predicts correctly the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro." }, { "pmid": "1691879", "title": "Intrinsic firing patterns of diverse neocortical neurons.", "abstract": "Neurons of the neocortex differ dramatically in the patterns of action potentials they generate in response to current steps. Regular-spiking cells adapt strongly during maintained stimuli, whereas fast-spiking cells can sustain very high firing frequencies with little or no adaptation. Intrinsically bursting cells generate clusters of spikes (bursts), either singly or repetitively. These physiological distinctions have morphological correlates. RS and IB cells can be either pyramidal neurons or spiny stellate cells, and thus constitute the excitatory cells of the cortex. FS cells are smooth or sparsely spiny non-pyramidal cells, and are likely to be GABAergic inhibitory interneurons. The different firing properties of neurons in neocortex contribute significantly to its network behavior." }, { "pmid": "19431309", "title": "Impulses and Physiological States in Theoretical Models of Nerve Membrane.", "abstract": "Van der Pol's equation for a relaxation oscillator is generalized by the addition of terms to produce a pair of non-linear differential equations with either a stable singular point or a limit cycle. The resulting \"BVP model\" has two variables of state, representing excitability and refractoriness, and qualitatively resembles Bonhoeffer's theoretical model for the iron wire model of nerve. This BVP model serves as a simple representative of a class of excitable-oscillatory systems including the Hodgkin-Huxley (HH) model of the squid giant axon. The BVP phase plane can be divided into regions corresponding to the physiological states of nerve fiber (resting, active, refractory, enhanced, depressed, etc.) to form a \"physiological state diagram,\" with the help of which many physiological phenomena can be summarized. A properly chosen projection from the 4-dimensional HH phase space onto a plane produces a similar diagram which shows the underlying relationship between the two models. Impulse trains occur in the BVP and HH models for a range of constant applied currents which make the singular point representing the resting state unstable." }, { "pmid": "20011141", "title": "The brian simulator.", "abstract": "\"Brian\" is a simulator for spiking neural networks (http://www.briansimulator.org). 
The focus is on making the writing of simulation code as quick and easy as possible for the user, and on flexibility: new and non-standard models are no more difficult to define than standard ones. This allows scientists to spend more time on the details of their models, and less on their implementation. Neuron models are defined by writing differential equations in standard mathematical notation, facilitating scientific communication. Brian is written in the Python programming language, and uses vector-based computation to allow for efficient simulations. It is particularly useful for neuroscientific modelling at the systems level, and for teaching computational neuroscience." }, { "pmid": "21031031", "title": "A general and efficient method for incorporating precise spike times in globally time-driven simulations.", "abstract": "Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision." }, { "pmid": "10905805", "title": "Expanding NEURON's repertoire of mechanisms with NMODL.", "abstract": "Neuronal function involves the interaction of electrical and chemical signals that are distributed in time and space. The mechanisms that generate these signals and regulate their interactions are marked by a rich diversity of properties that precludes a \"one size fits all\" approach to modeling. This article presents a summary of how the model description language NMODL enables the neuronal simulation environment NEURON to accommodate these differences." }, { "pmid": "18244602", "title": "Simple model of spiking neurons.", "abstract": "A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons. The model combines the biologically plausibility of Hodgkin-Huxley-type dynamics and the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) using a desktop PC." 
}, { "pmid": "21415913", "title": "Limits to the development of feed-forward structures in large recurrent neuronal networks.", "abstract": "Spike-timing dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. Such demonstrations as there are typically rely on constraining the problem artificially. Techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above. In this paper, we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into the reasons why such development does not take place in unconstrained systems and enables us to identify biologically motivated candidate adaptations to the balanced random network model that might enable it." }, { "pmid": "25346682", "title": "Spiking network simulation code for petascale computers.", "abstract": "Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today." }, { "pmid": "7260316", "title": "Voltage oscillations in the barnacle giant muscle fiber.", "abstract": "Barnacle muscle fibers subjected to constant current stimulation produce a variety of types of oscillatory behavior when the internal medium contains the Ca++ chelator EGTA. Oscillations are abolished if Ca++ is removed from the external medium, or if the K+ conductance is blocked. 
Available voltage-clamp data indicate that the cell's active conductance systems are exceptionally simple. Given the complexity of barnacle fiber voltage behavior, this seems paradoxical. This paper presents an analysis of the possible modes of behavior available to a system of two noninactivating conductance mechanisms, and indicates a good correspondence to the types of behavior exhibited by barnacle fiber. The differential equations of a simple equivalent circuit for the fiber are dealt with by means of some of the mathematical techniques of nonlinear mechanics. General features of the system are (a) a propensity to produce damped or sustained oscillations over a rather broad parameter range, and (b) considerable latitude in the shape of the oscillatory potentials. It is concluded that for cells subject to changeable parameters (either from cell to cell or with time during cellular activity), a system dominated by two noninactivating conductances can exhibit varied oscillatory and bistable behavior." }, { "pmid": "17134317", "title": "Exact subthreshold integration with continuous spike times in discrete-time neural network simulations.", "abstract": "Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques." }, { "pmid": "30123121", "title": "Reproducing Polychronization: A Guide to Maximizing the Reproducibility of Spiking Network Models.", "abstract": "Any modeler who has attempted to reproduce a spiking neural network model from its description in a paper has discovered what a painful endeavor this is. Even when all parameters appear to have been specified, which is rare, typically the initial attempt to reproduce the network does not yield results that are recognizably akin to those in the original publication. Causes include inaccurately reported or hidden parameters (e.g., wrong unit or the existence of an initialization distribution), differences in implementation of model dynamics, and ambiguities in the text description of the network experiment. 
The very fact that adequate reproduction often cannot be achieved until a series of such causes have been tracked down and resolved is in itself disconcerting, as it reveals unreported model dependencies on specific implementation choices that either were not clear to the original authors, or that they chose not to disclose. In either case, such dependencies diminish the credibility of the model's claims about the behavior of the target system. To demonstrate these issues, we provide a worked example of reproducing a seminal study for which, unusually, source code was provided at time of publication. Despite this seemingly optimal starting position, reproducing the results was time consuming and frustrating. Further examination of the correctly reproduced model reveals that it is highly sensitive to implementation choices such as the realization of background noise, the integration timestep, and the thresholding parameter of the analysis algorithm. From this process, we derive a guideline of best practices that would substantially reduce the investment in reproducing neural network studies, whilst simultaneously increasing their scientific quality. We propose that this guideline can be used by authors and reviewers to assess and improve the reproducibility of future network models." }, { "pmid": "23203991", "title": "The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model.", "abstract": "In the past decade, the cell-type specific connectivity and activity of local cortical networks have been characterized experimentally to some detail. In parallel, modeling has been established as a tool to relate network structure to activity dynamics. While available comprehensive connectivity maps ( Thomson, West, et al. 2002; Binzegger et al. 2004) have been used in various computational studies, prominent features of the simulated activity such as the spontaneous firing rates do not match the experimental findings. Here, we analyze the properties of these maps to compile an integrated connectivity map, which additionally incorporates insights on the specific selection of target types. Based on this integrated map, we build a full-scale spiking network model of the local cortical microcircuit. The simulated spontaneous activity is asynchronous irregular and cell-type specific firing rates are in agreement with in vivo recordings in awake animals, including the low rate of layer 2/3 excitatory cells. The interplay of excitation and inhibition captures the flow of activity through cortical layers after transient thalamic stimulation. In conclusion, the integration of a large body of the available connectivity data enables us to expose the dynamical consequences of the cortical microcircuitry." }, { "pmid": "10592015", "title": "Exact digital simulation of time-invariant linear systems with applications to neuronal modeling.", "abstract": "An efficient new method for the exact digital simulation of time-invariant linear systems is presented. Such systems are frequently encountered as models for neuronal systems, or as submodules of such systems. The matrix exponential is used to construct a matrix iteration, which propagates the dynamic state of the system step by step on a regular time grid. A large and general class of dynamic inputs to the system, including trains of delta-pulses, can be incorporated into the exact simulation scheme. 
An extension of the proposed scheme presents an attractive alternative for the approximate simulation of networks of integrate-and-fire neurons with linear sub-threshold integration and non-linear spike generation. The performance of the proposed method is analyzed in comparison with a number of multi-purpose solvers. In simulations of integrate-and-fire neurons, Exact Integration systematically generates the smallest error with respect to both sub-threshold dynamics and spike timing. For the simulation of systems where precise spike timing is important, this results in a practical advantage in particular at moderate integration step sizes." }, { "pmid": "24550820", "title": "Equation-oriented specification of neural models for simulations.", "abstract": "Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator." }, { "pmid": "26325661", "title": "Scalability of Asynchronous Networks Is Limited by One-to-One Mapping between Effective Connectivity and Correlations.", "abstract": "Network models are routinely downscaled compared to nature in terms of numbers of nodes or edges because of a lack of computational resources, often without explicit mention of the limitations this entails. While reliable methods have long existed to adjust parameters such that the first-order statistics of network dynamics are conserved, here we show that limitations already arise if also second-order statistics are to be maintained. The temporal structure of pairwise averaged correlations in the activity of recurrent networks is determined by the effective population-level connectivity. We first show that in general the converse is also true and explicitly mention degenerate cases when this one-to-one relationship does not hold. The one-to-one correspondence between effective connectivity and the temporal structure of pairwise averaged correlations implies that network scalings should preserve the effective connectivity if pairwise averaged correlations are to be held constant. Changes in effective connectivity can even push a network from a linearly stable to an unstable, oscillatory regime and vice versa. 
On this basis, we derive conditions for the preservation of both mean population-averaged activities and pairwise averaged correlations under a change in numbers of neurons or synapses in the asynchronous regime typical of cortical networks. We find that mean activities and correlation structure can be maintained by an appropriate scaling of the synaptic weights, but only over a range of numbers of synapses that is limited by the variance of external inputs to the network. Our results therefore show that the reducibility of asynchronous networks is fundamentally limited." } ]
Micromachines
30424047
PMC6187464
10.3390/mi9030113
Adaptive Absolute Ego-Motion Estimation Using Wearable Visual-Inertial Sensors for Indoor Positioning
This paper proposes an adaptive absolute ego-motion estimation method using wearable visual-inertial sensors for indoor positioning. We introduce a wearable visual-inertial device to estimate not only the camera ego-motion, but also the 3D motion of the moving object in dynamic environments. Firstly, a novel dynamic scene segmentation method is proposed using two visual geometry constraints with the help of inertial sensors. Moreover, this paper introduces a concept of “virtual camera” to consider the motion area related to each moving object as if a static object were viewed by a “virtual camera”. We therefore derive the 3D moving object's motion from the motions of the real and virtual cameras, because the virtual camera's motion is actually the combined motion of both the real camera and the moving object. In addition, a multi-rate linear Kalman filter (MR-LKF) from our previous work was selected to solve both the problem of scale ambiguity in monocular camera tracking and the different sampling frequencies of visual and inertial sensors. The performance of the proposed method is evaluated by simulation studies and practical experiments performed in both static and dynamic environments. The results show the method's robustness and effectiveness compared with results from a Pioneer robot used as the ground truth.
2. Related Work
In recent years, with the development of technology in computer vision, more and more researchers have been attracted to developing monocular visual-based localization algorithms based on the theory of structure from motion (SFM) [3,4,5,6]. However, there are two main problems with monocular visual-based localization algorithms. One is the triangulation problem: triangulation requires at least two views and commonly assumes that the 3D scene is static. If there are other objects moving in the 3D scene, which is referred to as a dynamic 3D scene, the rule of triangulation will fail unless some constraints are further applied [7]. The other is the visual scale problem: the scale is usually lost when projecting a 3D scene onto a 2D imaging plane. The most common approach for recovering it is stereo vision [8,9]. Although these systems work well in many environments, stereo vision is fundamentally limited by its need for two specific cameras. In addition, the structure of the 3D environment and the motion of the camera can be recovered from a monocular camera using structure from motion (SFM) techniques [10,11,12,13,14], but only up to an arbitrary scale. One method used in structure from motion to infer the scale of the 3D structure is to place an artificial reference with a known scale into the scene. However, the need to place a marker before the 3D reconstruction limits its applications.
In the past 10 years, the integration of visual and inertial sensors has shown significantly better performance than single-sensor systems, especially in positioning and tracking systems [8,15,16,17], due to their complementary properties [18]. Inertial sensors provide good signals with high-rate motions in the short term but suffer from accumulated drift due to the double integration during the estimation of position. In contrast, visual sensors offer accurate ego-motion estimation with low-rate motion in the long term, but are sensitive to blurred features during unpredicted and fast motions [19]. Therefore, recently, these complementary properties have been utilized by more and more researchers as the basic principle for integrating visual and inertial sensors together. Moreover, inertial sensors are not only small in size, lightweight and low in cost, but can also easily adopt wireless communication technologies, so it is much easier for people to wear them. This is why we call them “wearable” inertial sensors.
In general, the Kalman filter (KF) is a common and popular algorithm for sensor fusion and data fusion, which is an efficient recursive filter and widely used in many applications. In recent years, more and more researchers have been attracted to developing novel Kalman-filter-based algorithms to deal with structural systems. In structural systems, the states including displacements and velocities are difficult or sometimes impossible to measure, so a variety of novel Kalman filters have been developed from Kalman's original formulation by accounting for non-stationary unknown external inputs and theoretical investigation of observability, stability and associated advancements [20,21,22,23]. To our knowledge, nonlinear Kalman filter techniques, such as the extended KF, unscented KF, etc., are applied in almost all inertial-visual fusion algorithms [8,17,24,25,26], because a large state vector and a complex nonlinear model are required when both the orientation and the position are optimized in the same process. 
However, an unacceptable computational burden would be imposed because of the many recursive formulas. Moreover, the linear approximations of the EKF may result in non-optimal estimates. Although [27] proposed a modified linear Kalman filter to perform the fusion of inertial and visual data, the accurate orientation estimates were based on the assumption that gyroscope measurements can be trusted for up to several minutes. In [28], the authors proposed a novel fusion algorithm by separating the orientation fusion and the position fusion process, but the orientation estimation could only be robust for static or slow movements without magnetic distortions when using the method proposed in [29]. In contrast, in this paper, the orientation is first estimated from inertial measurements alone by our previously proposed orientation filter [2]. Our orientation filter can not only obtain a robust orientation in real time under both extra acceleration and magnetic distortions, but also eliminate the bias and noise in angular velocity and acceleration. In addition, the sampling rates for visual and inertial sensors are inherently different. As a result, an efficient inertial-visual fusion algorithm, called multi-rate AGOF/Linear Kalman filter (MR-LKF), is proposed to separate the orientation and position estimation, which results in a small state vector and a linear model. A summary of the related work on inertial-visual integration is presented in Table 1.
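The benefit of removing orientation from the fused state can be seen in a minimal linear Kalman filter sketch. This is an illustration of the general idea only, not the MR-LKF of this paper: the constant-velocity process model, the noise matrices, and the sampling rates below are all assumed values, and orientation is simply taken as given by an external inertial orientation filter so that the remaining state stays small and linear.

```python
import numpy as np

# Minimal linear KF sketch: position/velocity only, orientation assumed to come
# from a separate inertial orientation filter. Prediction runs at the IMU rate,
# correction at the (lower) camera rate, illustrating the multi-rate idea.
# All matrices and rates are assumptions for this example.

dt = 0.01                                               # IMU period (s), assumed 100 Hz
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])           # constant-velocity model
B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])  # acceleration input matrix
H = np.hstack([np.eye(3), np.zeros((3, 3))])            # camera observes position only
Q = 1e-3 * np.eye(6)                                    # process noise (assumed)
R = 1e-2 * np.eye(3)                                    # visual measurement noise (assumed)

x = np.zeros(6)                                         # state: [position; velocity]
P = np.eye(6)                                           # state covariance

def predict(accel_world):
    """Run at the IMU rate with gravity-compensated, world-frame acceleration."""
    global x, P
    x = F @ x + B @ accel_world
    P = F @ P @ F.T + Q

def correct(visual_position):
    """Run whenever a visual position estimate arrives (lower camera rate)."""
    global x, P
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (visual_position - H @ x)
    P = (np.eye(6) - K @ H) @ P
```

Because the state is only six-dimensional and the model linear, each update is cheap compared with an EKF or UKF that jointly estimates orientation, which is the motivation for the separation described above.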
[ "22801527" ]
[ { "pmid": "22801527", "title": "An adaptive-gain complementary filter for real-time human motion tracking with MARG sensors in free-living environments.", "abstract": "High-resolution, real-time data obtained by human motion tracking systems can be used for gait analysis, which helps better understanding the cause of many diseases for more effective treatments, such as rehabilitation for outpatients or recovery from lost motor functions after a stroke. In order to achieve real-time ambulatory human motion tracking with low-cost MARG (magnetic, angular rate, and gravity) sensors, a computationally efficient and robust algorithm for orientation estimation is critical. This paper presents an analytically derived method for an adaptive-gain complementary filter based on the convergence rate from the Gauss-Newton optimization algorithm (GNA) and the divergence rate from the gyroscope, which is referred as adaptive-gain orientation filter (AGOF) in this paper. The AGOF has the advantages of one iteration calculation to reduce the computing load and accurate estimation of gyroscope measurement error. Moreover, for handling magnetic distortions especially in indoor environments and movements with excessive acceleration, adaptive measurement vectors and a reference vector for earth's magnetic field selection schemes are introduced to help the GNA find more accurate direction of gyroscope error. The features of this approach include the accurate estimation of the gyroscope bias to correct the instantaneous gyroscope measurements and robust estimation in conditions of fast motions and magnetic distortions. Experimental results are presented to verify the performance of the proposed method, which shows better accuracy of orientation estimation than several well-known methods." } ]
Micromachines
30424375
PMC6187565
10.3390/mi9090442
MEMS Inertial Sensors Based Gait Analysis for Rehabilitation Assessment via Multi-Sensor Fusion
Gait and posture are regular activities which are fully controlled by the sensorimotor cortex. In this study, fluctuations of joint angle and asymmetry of foot elevation in human walking stride records are analyzed to assess gait in healthy adults and patients affected with gait disorders. This paper aims to build a low-cost, intelligent and lightweight wearable gait analysis platform based on emerging body sensor networks, which can be used for rehabilitation assessment of patients with gait impairments. A calibration method for the accelerometer and magnetometer was proposed to deal with the ubiquitous orthogonal error and magnetic disturbance. A proportional integral controller based complementary filter and error correction of gait parameters have been designed within a multi-sensor data fusion algorithm. The purpose of the current work is to investigate the effectiveness of the obtained gait data in differentiating healthy subjects and patients with gait impairments. Preliminary clinical gait experiment results showed that the proposed system can be effective in auxiliary diagnosis and rehabilitation plan formulation compared to existing methods, which indicates that the proposed method has great potential as an auxiliary tool for medical rehabilitation assessment.
2. Related Works
Quantitative gait analysis systems mainly include camera systems [4,5,6], electromyography measuring systems [7] and force platforms [8,9]. The camera system consists of multiple high resolution cameras located in an indoor space; the orientation and position information of the target subject can be calculated using attached reflective markers. The electromyography measuring system detects human lower limb muscle signals by surface electromyography during the walking process; the force platform reflects the change of plantar pressure during walking. However, the applications of the above gait analysis systems are limited in clinical practice, and the main reasons lie in three aspects. Firstly, the systems are expensive, which might be a barrier to routine use. Secondly, the systems are complex to use and require special operation, and it usually takes hours to complete the whole gait measurement process. Finally, specific space is normally needed to perform gait analysis using the above systems. In particular, the camera system may need more than one hundred square meters [10,11,12,13]. Table 2 lists a brief comparison of mainstream gait analysis methods.
Considerable research has been conducted into the progression of gait dysfunction through the various stages of stroke. Specifically, stroke subjects experience decreased stride length, cadence and walking speed, significant variability in stride length and gait cycle, and walking imbalances [10,14,15,16]. Chang et al. [17] employed a specialized wearable system and found that stroke subjects demonstrated decreased gait velocity and stride length and a prolonged double support phase. They further identified a high correlation between these gait parameters and age of onset. Further investigation into gait impairments in subjects with neurological diseases has also indicated a link between the degree of gait abnormality and the disease progression. Previous research has highlighted the advantages of quantitative gait analysis in gait diagnosis; however, laboratory-based systems such as optical tracking and plantar pressure measurement are typically expensive and are not available in ordinary clinical settings [18,19,20]. Therefore, interest in the development of alternative gait analysis tools has increased rapidly.
With the maturity of microelectromechanical systems and the development of information fusion technologies, the application of inertial motion analysis technology is becoming more and more extensive [8,21,22,23,24,25]. Due to the noticeable advantages of small size and low cost, wearable sensors can be mounted directly on the body segment with no need for a specific test environment [24,26,27,28]. Such systems may also serve as a good supplement to the gold standards, including optical systems and plantar pressure monitoring systems. In previous studies, we have adopted a wearable inertial sensor in walking distance calculation and walking pattern classification [29,30,31]. Ambulatory measurement of the participant's trunk inclination using an inertial measurement unit (IMU) was carried out by Farris et al. [3]. Bao et al. [32] developed a smart shoe for gait analysis using force sensitive resistors and IMU sensors. Luinge et al. [33] proposed the estimation of arm orientation by wearable inertial sensors. Dejnabadi et al. [34] introduced an approach to accurate measurement of joint angles based on IMUs. 
However, due to their inability to provide a heading reference, inertial-only systems generally fail to measure the differential orientation that is a prerequisite for computing the 3D knee flexion angle recommended by the International Society of Biomechanics [35]. Roetenberg et al. [36] developed an ambulatory position and orientation tracking method fusing magnetic and inertial sensing. Since magnetometers measure the strength and direction of the local magnetic field, the geographic north direction can be found; in this way, the initial heading orientation can be obtained with the supplement of a magnetometer. Moreover, the system remains self-contained, which means it does not rely on any external infrastructure [37]. In addition, there are already wireless IMU BSN commercial products such as Trigno Avanti (Delsys Inc., Natick, MA, USA), MVN Suit (Xsens Inc., Enschede, The Netherlands), Perception Neuron (Noitom Inc., Beijing, China) and iSen (STT Systems Inc., San Sebastian, Spain). The current limitations of the state of the art reported in the literature are sensor alignment and integration error; cost-effectiveness and stability are two further concerns. Moreover, little research on the follow-up monitoring of patients' lower limbs has been carried out. Therefore, the contributions of this paper include the sensor alignment method and the availability of follow-up monitoring of patients' key gait parameters.
The rest of this paper is organized as follows: Section 3 describes the structure of the proposed gait analysis system and the methodology used to estimate the gait parameters during walking; experimental results are given in Section 4; and the potential applications of gait analysis are discussed in Section 5, which concludes the paper.
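As an illustration of the magnetometer's role as a heading reference discussed above, the following sketch computes a tilt-compensated heading from one accelerometer sample and one magnetometer sample. It is a generic, coordinate-free construction assuming a quasi-static sensor and an undisturbed magnetic field; the frame conventions, sign of the result and variable names are illustrative and do not correspond to any of the cited systems.

```python
import numpy as np

def tilt_compensated_heading(accel, mag):
    """Heading of the sensor x-axis relative to magnetic north, in radians.

    accel: 3-axis accelerometer sample (assumed to measure only gravity,
           i.e. the device is quasi-static); mag: 3-axis magnetometer sample.
    """
    g = np.asarray(accel, dtype=float)
    g /= np.linalg.norm(g)                      # unit gravity direction
    m = np.asarray(mag, dtype=float)

    # Project the magnetic field and the sensor x-axis onto the horizontal
    # plane (the plane orthogonal to gravity).
    m_h = m - np.dot(m, g) * g
    x_h = np.array([1.0, 0.0, 0.0]) - g[0] * g
    m_h /= np.linalg.norm(m_h)
    x_h /= np.linalg.norm(x_h)                  # degenerate if x is vertical

    # Signed angle between the horizontal x-axis and horizontal north.
    cos_a = np.clip(np.dot(x_h, m_h), -1.0, 1.0)
    sin_a = np.dot(np.cross(x_h, m_h), g)
    return np.arctan2(sin_a, cos_a)
```

In practice the heading obtained this way is usually blended with gyroscope integration (for example through the PI complementary filter sketched earlier) rather than used directly, since raw magnetometer readings are noisy and sensitive to local disturbances.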
[ "28057016", "23797285", "25468687", "25789489", "28208591", "16455089", "16119244", "19665712", "20667801", "27973406" ]
[ { "pmid": "28057016", "title": "The effectiveness of robotic-assisted gait training for paediatric gait disorders: systematic review.", "abstract": "BACKGROUND\nRobotic-assisted gait training (RAGT) affords an opportunity to increase walking practice with mechanical assistance from robotic devices, rather than therapists, where the child may not be able to generate a sufficient or correct motion with enough repetitions to promote improvement. However the devices are expensive and clinicians and families need to understand if the approach is worthwhile for their children, and how it may be best delivered.\n\n\nMETHODS\nThe objective of this review was to identify and appraise the existing evidence for the effectiveness of RAGT for paediatric gait disorders, including modes of delivery and potential benefit. Six databases were searched from 1980 to October 2016, using relevant search terms. Any clinical trial that evaluated a clinical aspect of RAGT for children/adolescents with altered gait was selected for inclusion. Data were extracted following the PRISMA approach. Seventeen trials were identified, assessed for level of evidence and risk of bias, and appropriate data extracted for reporting.\n\n\nRESULTS\nThree randomized controlled trials were identified, with the remainder of lower level design. Most individual trials reported some positive benefits for RAGT with children with cerebral palsy (CP), on activity parameters such as standing ability, walking speed and distance. However a meta-analysis of the two eligible RCTs did not confirm this finding (p = 0.72). Training schedules were highly variable in duration and frequency and adverse events were either not reported or were minimal. There was a paucity of evidence for diagnoses other than CP.\n\n\nCONCLUSION\nThere is weak and inconsistent evidence regarding the use of RAGT for children with gait disorders. If clinicians (and their clients) choose to use RAGT, they should monitor individual progress closely with appropriate outcome measures including monitoring of adverse events. Further research is required using higher level trial design, increased numbers, in specific populations and with relevant outcome measures to both confirm effectiveness and clarify training schedules." }, { "pmid": "23797285", "title": "A preliminary assessment of legged mobility provided by a lower limb exoskeleton for persons with paraplegia.", "abstract": "This paper presents an assessment of a lower limb exoskeleton for providing legged mobility to people with paraplegia. In particular, the paper presents a single-subject case study comparing legged locomotion using the exoskeleton to locomotion using knee-ankle-foot orthoses (KAFOs) on a subject with a T10 motor and sensory complete injury. The assessment utilizes three assessment instruments to characterize legged mobility, which are the timed up-and-go test, the Ten-Meter Walk Test (10 MWT), and the Six-Minute Walk Test (6 MWT), which collectively assess the subject's ability to stand, walk, turn, and sit. The exertion associated with each assessment instrument was assessed using the Physiological Cost Index. Results indicate that the subject was able to perform the respective assessment instruments 25%, 70%, and 80% faster with the exoskeleton relative to the KAFOs for the timed up-and-go test, the 10 MWT, and the 6 MWT, respectively. Measurements of exertion indicate that the exoskeleton requires 1.6, 5.2, and 3.2 times less exertion than the KAFOs for each respective assessment instrument. 
The results indicate that the enhancement in speed and reduction in exertion are more significant during walking than during gait transitions." }, { "pmid": "25468687", "title": "Adaptive method for real-time gait phase detection based on ground contact forces.", "abstract": "A novel method is presented to detect real-time gait phases based on ground contact forces (GCFs) measured by force sensitive resistors (FSRs). The traditional threshold method (TM) sets a threshold to divide the GCFs into on-ground and off-ground statuses. However, TM is neither an adaptive nor real-time method. The threshold setting is based on body weight or the maximum and minimum GCFs in the gait cycles, resulting in different thresholds needed for different walking conditions. Additionally, the maximum and minimum GCFs are only obtainable after data processing. Therefore, this paper proposes a proportion method (PM) that calculates the sums and proportions of GCFs wherein the GCFs are obtained from FSRs. A gait analysis is then implemented by the proposed gait phase detection algorithm (GPDA). Finally, the PM reliability is determined by comparing the detection results between PM and TM. Experimental results demonstrate that the proposed PM is highly reliable in all walking conditions. In addition, PM could be utilized to analyze gait phases in real time. Finally, PM exhibits strong adaptability to different walking conditions." }, { "pmid": "25789489", "title": "Stride segmentation during free walk movements using multi-dimensional subsequence dynamic time warping on inertial sensor data.", "abstract": "Changes in gait patterns provide important information about individuals' health. To perform sensor based gait analysis, it is crucial to develop methodologies to automatically segment single strides from continuous movement sequences. In this study we developed an algorithm based on time-invariant template matching to isolate strides from inertial sensor signals. Shoe-mounted gyroscopes and accelerometers were used to record gait data from 40 elderly controls, 15 patients with Parkinson's disease and 15 geriatric patients. Each stride was manually labeled from a straight 40 m walk test and from a video monitored free walk sequence. A multi-dimensional subsequence Dynamic Time Warping (msDTW) approach was used to search for patterns matching a pre-defined stride template constructed from 25 elderly controls. F-measure of 98% (recall 98%, precision 98%) for 40 m walk tests and of 97% (recall 97%, precision 97%) for free walk tests were obtained for the three groups. Compared to conventional peak detection methods up to 15% F-measure improvement was shown. The msDTW proved to be robust for segmenting strides from both standardized gait tests and free walks. This approach may serve as a platform for individualized stride segmentation during activities of daily living." }, { "pmid": "28208591", "title": "Fusion of Inertial/Magnetic Sensor Measurements and Map Information for Pedestrian Tracking.", "abstract": "The wearable inertial/magnetic sensor based human motion analysis plays an important role in many biomedical applications, such as physical therapy, gait analysis and rehabilitation. One of the main challenges for the lower body bio-motion analysis is how to reliably provide position estimations of human subject during walking. 
In this paper, we propose a particle filter based human position estimation method using a foot-mounted inertial and magnetic sensor module, which not only uses the traditional zero velocity update (ZUPT), but also applies map information to further correct the acceleration double integration drift and thus improve estimation accuracy. In the proposed method, a simple stance phase detector is designed to identify the stance phase of a gait cycle based on gyroscope measurements. For the non-stance phase during a gait cycle, an acceleration control variable derived from ZUPT information is introduced in the process model, while vector map information is taken as binary pseudo-measurements to further enhance position estimation accuracy and reduce uncertainty of walking trajectories. A particle filter is then designed to fuse ZUPT information and binary pseudo-measurements together. The proposed human position estimation method has been evaluated with closed-loop walking experiments in indoor and outdoor environments. Results of comparison study have illustrated the effectiveness of the proposed method for application scenarios with useful map information." }, { "pmid": "16455089", "title": "Ambulatory measurement of arm orientation.", "abstract": "In order to evaluate the impact of neuromuscular disorders affecting the upper extremities, the functional use of the arm need to be evaluated during daily activities. A system suitable for measuring arm kinematics should be ambulatory and not interfere with activities of daily living. A measurement system based on miniature accelerometers and gyroscopes is adequate because the sensors are small and do not suffer from line of sight problems. A disadvantage of such sensors is the cumulative drift around the vertical and the problems with aligning the sensor with the segment. A method that uses constraints in the elbow to measure the orientation of the lower arm with respect to the upper arm is described. This requires a calibration method to determine the exact orientation of each of the sensors with respect to the segment. Some preliminary measurements were analyzed and they indicated a strong reduction in orientation error around the vertical. It seemed that the accuracy of the method is limited by the accuracy of the sensor to segment calibration." }, { "pmid": "16119244", "title": "A new approach to accurate measurement of uniaxial joint angles based on a combination of accelerometers and gyroscopes.", "abstract": "A new method of measuring joint angle using a combination of accelerometers and gyroscopes is presented. The method proposes a minimal sensor configuration with one sensor module mounted on each segment. The model is based on estimating the acceleration of the joint center of rotation by placing a pair of virtual sensors on the adjacent segments at the center of rotation. In the proposed technique, joint angles are found without the need for integration, so absolute angles can be obtained which are free from any source of drift. The model considers anatomical aspects and is personalized for each subject prior to each measurement. The method was validated by measuring knee flexion-extension angles of eight subjects, walking at three different speeds, and comparing the results with a reference motion measurement system. The results are very close to those of the reference system presenting very small errors (rms = 1.3, mean = 0.2, SD = 1.1 deg) and excellent correlation coefficients (0.997). 
The algorithm is able to provide joint angles in real-time, and ready for use in gait analysis. Technically, the system is portable, easily mountable, and can be used for long term monitoring without hindrance to natural activities." }, { "pmid": "19665712", "title": "Functional calibration procedure for 3D knee joint angle description using inertial sensors.", "abstract": "Measurement of three-dimensional (3D) knee joint angle outside a laboratory is of benefit in clinical examination and therapeutic treatment comparison. Although several motion capture devices exist, there is a need for an ambulatory system that could be used in routine practice. Up-to-date, inertial measurement units (IMUs) have proven to be suitable for unconstrained measurement of knee joint differential orientation. Nevertheless, this differential orientation should be converted into three reliable and clinically interpretable angles. Thus, the aim of this study was to propose a new calibration procedure adapted for the joint coordinate system (JCS), which required only IMUs data. The repeatability of the calibration procedure, as well as the errors in the measurement of 3D knee angle during gait in comparison to a reference system were assessed on eight healthy subjects. The new procedure relying on active and passive movements reported a high repeatability of the mean values (offset<1 degrees) and angular patterns (SD<0.3 degrees and CMC>0.9). In comparison to the reference system, this functional procedure showed high precision (SD<2 degrees and CC>0.75) and moderate accuracy (between 4.0 degrees and 8.1 degrees) for the three knee angle. The combination of the inertial-based system with the functional calibration procedure proposed here resulted in a promising tool for the measurement of 3D knee joint angle. Moreover, this method could be adapted to measure other complex joint, such as ankle or elbow." }, { "pmid": "20667801", "title": "Zero-velocity detection --- an algorithm evaluation.", "abstract": "In this study, we investigate the problem of detecting time epochs when zero-velocity updates can be applied in a foot-mounted inertial navigation (motion tracking) system. We examine three commonly used detectors: the acceleration moving variance detector, the acceleration magnitude detector, and the angular rate energy detector. We demonstrate that all detectors can be derived within the same general likelihood ratio test framework given the different prior knowledge about the sensor signals. Further, by combining all prior knowledge, we derive a new likelihood ratio test detector. Subsequently, we develop a methodology to evaluate the performance of the detectors. Employing the developed methodology, we evaluate the performance of the detectors using leveled ground, slow (approx. 3 km/h) and normal (approx. 5 km/h) gait data. The test results are presented in terms of detection versus false-alarm probability. Our preliminary results shows that the new detector performs marginally better than the angular rate energy detector that outperforms both the acceleration moving variance detector and the acceleration magnitude detector." }, { "pmid": "27973406", "title": "An IMU-to-Body Alignment Method Applied to Human Gait Analysis.", "abstract": "This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. 
The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis." } ]
Frontiers in Neurorobotics
30356820
PMC6189580
10.3389/fnbot.2018.00063
Intrinsic Rewards for Maintenance, Approach, Avoidance, and Achievement Goal Types
In reinforcement learning, reward is used to guide the learning process. The reward is often designed to be task-dependent, and it may require significant domain knowledge to design a good reward function. This paper proposes general reward functions for maintenance, approach, avoidance, and achievement goal types. These reward functions exploit the inherent property of each type of goal and are thus task-independent. We also propose metrics to measure an agent's performance in learning each type of goal. We evaluate the intrinsic reward functions in a framework that can autonomously generate goals and learn solutions to those goals using a standard reinforcement learning algorithm. We show empirically how the proposed reward functions lead to learning in a mobile robot application. Finally, using the proposed reward functions as building blocks, we demonstrate how compound reward functions, that is, reward functions that generate sequences of tasks, can be created to allow the mobile robot to learn more complex behaviors.
Background and related work
In RL, an agent perceives the state of its environment with its sensors and takes actions to change that state. The environment may comprise variables such as the robot's position, velocity, sensor values, etc. These parameters collectively form the state of the agent. With every action that the agent executes in the environment, it moves to a new state. The state of the agent at time t can be expressed as:

S_t = [s_t^1, s_t^2, s_t^3, \ldots, s_t^n]

where each attribute s_t^i is typically a numerical value describing some internal or external variable of the robot, and n is the number of attributes of the state. The agent takes an action A_t to change the state of the environment, chosen from the finite set of m actions:

A = \{A^1, A^2, A^3, \ldots, A^m\}

This state change is denoted by the event E_t, formally written as:

E_t = [e_t^1, e_t^2, e_t^3, \ldots, e_t^n]

where an event attribute e_t^i = s_t^i - s_{t-1}^i. That is,

E_t = S_t - S_{t-1} = [s_t^1 - s_{t-1}^1, s_t^2 - s_{t-1}^2, \ldots, s_t^n - s_{t-1}^n]

Thus, an event, which is a vector of difference variables, models the transition between states. An action can cause a number of different transitions, and an event is used to represent those transitions. Since this representation does not make any task-specific assumption about the values of the event attributes, it can be used to represent the transition in a task-independent manner (Merrick, 2007).

Finally, the experience of the agent includes the states S_t it has encountered, the events E_t that have occurred and the actions A_t that it has performed. Thus, the experience X is a trajectory denoted as follows, and it provides the data from which goals can be constructed.

X = \{S_0, A_0, S_1, E_1, A_1, S_2, E_2, A_2, S_3, E_3, \ldots\}

Design of reward functions
In RL, the reward is used to direct the learning process. A simple example of a reward function is a pre-defined value assignment for known states or transitions. For example:

(1) r(S_t) = \begin{cases} 1 & \text{if a particular state } S_t \text{ is reached} \\ 0 & \text{otherwise} \end{cases}

A more specific, task-dependent example can be seen in the canonical cart-pole domain, in which a pole is attached to a cart that moves along a frictionless track. The aim of the agent is to keep the pole balanced on the cart by moving the cart to the right or left. The reward, in this case, depends on attributes specific to the task:

(2) r(S_t) = -c_2 (G^1 - s_t^1)^2 - c_3 (G^2 - s_t^2)^2

where s_t^1 is the position of the cart, s_t^2 is the angle of the pole with respect to the cart, G (with attributes G^1, the desired position, and G^2, the desired angle) is the goal state, and c_2 and c_3 are constants.

For an even more complex task like ball paddling, where a table-tennis ball is attached to a paddle by an elastic string with the goal of bouncing the ball above the paddle, it is quite difficult to design a reward function. Should the agent be rewarded for bouncing the ball a maximum number of times? Should the agent be rewarded for keeping the ball above the paddle? As detailed in Amodei et al. (2016), the agent might find ways to "hack the reward," resulting in unpredictable or unexpected behavior.

For some complex domains, it is only feasible to design "sparse reward signals," which assign non-zero reward in only a small proportion of circumstances. This makes learning difficult, as the agent gets very little information about which actions resulted in the correct solution. Proposed alternatives for such environments include "hallucinating" positive rewards (Andrychowicz et al., 2017) or bootstrapping with self-supervised learning to build a good world model.
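To make the two reward designs in Equations (1) and (2) concrete, the following minimal Python sketch implements them for the cart-pole example, together with the event representation defined above. The constants c2 and c3, the default goal values and all names are illustrative assumptions, not values taken from the paper.

```python
def sparse_reward(state, goal_state):
    """Equation (1): reward of 1 only when a particular goal state is reached."""
    return 1.0 if state == goal_state else 0.0

def cartpole_reward(state, goal=(0.0, 0.0), c2=1.0, c3=10.0):
    """Equation (2): shaped reward penalizing squared distance from the goal.

    state: (cart_position, pole_angle); goal: (desired_position, desired_angle).
    c2 and c3 are assumed weighting constants.
    """
    cart_pos, pole_angle = state
    g1, g2 = goal
    return -c2 * (g1 - cart_pos) ** 2 - c3 * (g2 - pole_angle) ** 2

def event(state_t, state_t_minus_1):
    """Event vector E_t = S_t - S_{t-1}: attribute-wise state differences."""
    return [a - b for a, b in zip(state_t, state_t_minus_1)]
```

The event helper mirrors the task-independent transition representation defined above; the two reward functions illustrate how much task knowledge even simple hand-designed rewards can embed.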
Imitation learning and inverse RL have also shown that reward functions can be implicitly defined by human demonstrations; however, because they depend on such demonstrations, they do not allow fully autonomous development of the agent.
"Reward engineering" is another area that has attracted the attention of the RL community; it is concerned with the principles of constructing reward signals that enable efficient learning (Dewey, 2014). Dewey (2014) concluded that as artificial intelligence becomes more general and autonomous, the design of reward mechanisms that result in desired behaviors is becoming more complex. Early artificial intelligence research tended to ignore reward design altogether and focused on the problem of efficient learning of an arbitrary given goal. However, it is now acknowledged that reward design can enable or limit autonomy, and there is a need for reward functions that can motivate more open-ended learning beyond a single, fixed task. The following sections review work that focuses on this area.
Intrinsic motivation
Reward modeled as intrinsic motivation is an example of an engineered reward leading to open-ended learning (Baldassarre and Mirolli, 2013). It may be computed online as a function of experienced states, actions or events and is independent of a priori knowledge of task-specific factors that will be present in the environment. The signal may serve to drive the acquisition of knowledge or a skill that is not immediately useful but could be useful later on (Singh et al., 2005). This signal may be generated by an agent because a task is inherently "interesting," leading to further exploration of its environment, manipulation/play or learning of the skill.
Intrinsic motivation can be used to model reward that can lead to the emergence of task-oriented performance, without making strong assumptions about which specific tasks will be learned prior to the interaction with the environment. The motivation signal may be used in addition to a task-specific reward signal, aggregated based on a predefined formula, to achieve more adaptive and multitask learning. It can also be used in the absence of a task-specific reward signal to reduce the handcrafting and tuning of the task-specific reward, thus moving a step closer to creating a truly task-independent learner (Merrick and Maher, 2009). Oudeyer and Kaplan (2007) proposed two categories for computational models of motivation: knowledge-based and competence-based. In knowledge-based motivation, the motivation signal is based on an internal prediction error between what the agent predicts will happen and what actually happens when the agent executes a particular action. In competence-based motivation, the motivation signal is generated based on the appropriateness of the level of learning challenge. This competence motivation depends on the task or goal to be accomplished: an activity at the right level of learnability, given the agent's current level of mastery of that skill, generates the maximum motivation signal. Barto et al. (2013) further differentiated between surprise (prediction error) and novelty based motivation; a novelty motivation signal is computed when an event that has not been experienced before is encountered (Neto and Nehmzow, 2004; Nehmzow et al., 2013).
Intrinsically motivated reinforcement learning
Frameworks that combine intrinsic motivation with RL are capable of autonomous learning, and they are commonly termed intrinsically motivated reinforcement learning frameworks. Singh et al. (2005) and Oudeyer et al.
(2007) state that intrinsic motivation is essential to create machines capable of lifelong learning in a task-independent manner, as it favors the development of competence and reduces reliance on externally directed goals to drive learning. When intrinsic motivation is combined with RL, it creates a mechanism whereby the system designer is no longer required to program a task-specific reward (Singh et al., 2005). An intrinsically motivated reinforcement learning agent can autonomously select a task to learn and interact with the environment to learn that task. This results in the development of an autonomous entity capable of addressing a wide variety of activities, as compared to an agent capable of addressing only the specific activity for which a task-specific reward is provided.
As in RL, in an intrinsically motivated reinforcement learning framework the agent senses states, takes actions and receives an external reward from the environment; as an additional element, however, the agent internally generates a motivation signal that forms the basis for its actions. This internal signal is independent of task-specific factors in the environment. Incorporating intrinsic motivation into RL enables agents to select which skills they will learn and to shift their attention to learn different skills as required (Merrick, 2012). Broadly speaking, intrinsically motivated reinforcement learning introduces a meta-learning layer in which a motivation function provides the learning algorithm with a motivation signal to focus the learning (Singh et al., 2005).
Role of goals to direct the learning
Where early work focused on generating reward directly from environmental stimuli, more recent works have acknowledged the advantages of using the intermediate concept of a goal to motivate complexity and diversity of behavior (Merrick et al., 2016; Santucci et al., 2016). Santucci et al. (2012) showed that using intrinsic motivation (generated by prediction error) directly for skill acquisition can be problematic; a possible solution is to instead generate goals using the intrinsic motivation, which in turn can be used to direct the learning. Further, Mirolli and Baldassarre (2013) argued that a cumulative acquisition of skills requires a hierarchical structure, in which multiple "expert" sub-structures focus on acquiring different skills and a "selector" sub-structure decides which expert to select. The expert sub-structure can be implemented using knowledge-based intrinsic motivation that decides what to learn (by forming goals), and the selector sub-structure can be implemented using competence-based intrinsic motivation that decides which skill to focus on. Goal-directed learning has also been shown to be a promising direction for learning motor skills: Rolf et al. (2010) show how their system auto-generates goals from inconsistencies during exploration to learn inverse kinematics, and that the approach can scale to a high-dimensional problem.
Recently, using goals to direct the learning has also attracted the attention of the deep learning community. Andrychowicz et al. (2017) proposed using auto-generated interim goals to make learning possible even when rewards are sparse. These interim goals are used to train the deep learning network using experience replay. It is shown that the RL agent is able to learn to achieve the end goal even if it has never been observed during the training of the network. Similarly, in a framework proposed by Held et al.
(2017), interim tasks/goals are auto-generated at an appropriate level of difficulty. This curriculum of tasks then directs the learning, enabling the agent to learn a wide set of skills without any prior knowledge of its environment.
Regardless of whether the goals are intrinsic, extrinsic or of social origin, and whether they are created to direct the learning or generated by an autonomous learning framework, the approach of using goal-based reward functions detailed in the next section can be applied to them.
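As a simple illustration of goal-based reward, the sketch below rewards an agent for reducing its distance to a goal state ("approach"), penalizes it for approaching a state to be avoided ("avoidance"), and gives a binary reward for reaching a goal ("achievement"). These are generic textbook forms intended only to illustrate the idea of task-independent, goal-typed rewards; they are not the specific reward functions proposed in this paper, and the tolerance value is an arbitrary assumption.

```python
import numpy as np

def approach_reward(state, prev_state, goal):
    """Reward the reduction in distance to a goal state (illustrative form)."""
    d_prev = np.linalg.norm(np.asarray(prev_state, float) - np.asarray(goal, float))
    d_now = np.linalg.norm(np.asarray(state, float) - np.asarray(goal, float))
    return d_prev - d_now          # positive when the agent moves toward the goal

def avoidance_reward(state, prev_state, avoid_state):
    """Reward moving away from a state that should be avoided (illustrative form)."""
    return -approach_reward(state, prev_state, avoid_state)

def achievement_reward(state, goal, tol=1e-3):
    """Binary reward for having reached (achieved) the goal state."""
    return 1.0 if np.linalg.norm(np.asarray(state, float) - np.asarray(goal, float)) < tol else 0.0
```

Because each function depends only on states and a goal, not on any task-specific quantity, such rewards can in principle be attached to goals generated autonomously by the agent itself.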
[ "24376428", "18958277", "11229402" ]
[ { "pmid": "24376428", "title": "Novelty or surprise?", "abstract": "Novelty and surprise play significant roles in animal behavior and in attempts to understand the neural mechanisms underlying it. They also play important roles in technology, where detecting observations that are novel or surprising is central to many applications, such as medical diagnosis, text processing, surveillance, and security. Theories of motivation, particularly of intrinsic motivation, place novelty and surprise among the primary factors that arouse interest, motivate exploratory or avoidance behavior, and drive learning. In many of these studies, novelty and surprise are not distinguished from one another: the words are used more-or-less interchangeably. However, while undeniably closely related, novelty and surprise are very different. The purpose of this article is first to highlight the differences between novelty and surprise and to discuss how they are related by presenting an extensive review of mathematical and computational proposals related to them, and then to explore the implications of this for understanding behavioral and neuroscience data. We argue that opportunities for improved understanding of behavior and its neural basis are likely being missed by failing to distinguish between novelty and surprise." }, { "pmid": "18958277", "title": "What is Intrinsic Motivation? A Typology of Computational Approaches.", "abstract": "Intrinsic motivation, centrally involved in spontaneous exploration and curiosity, is a crucial concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics." } ]
Frontiers in Neurorobotics
30356836
PMC6189603
10.3389/fnbot.2018.00065
Evolving Robust Policy Coverage Sets in Multi-Objective Markov Decision Processes Through Intrinsically Motivated Self-Play
Many real-world decision-making problems involve multiple conflicting objectives that cannot be optimized simultaneously without a compromise. Such problems are known as multi-objective Markov decision processes, and they constitute a significant challenge for conventional single-objective reinforcement learning methods, especially when an optimal compromise cannot be determined beforehand. Multi-objective reinforcement learning methods address this challenge by finding an optimal coverage set of non-dominated policies that can satisfy any user's preference in solving the problem. However, this comes at the cost of computational complexity, time consumption, and a lack of adaptability to non-stationary environment dynamics. To address these limitations, there is a need for adaptive methods that can solve the problem in an online and robust manner. In this paper, we propose a novel developmental method that utilizes adversarial self-play between an intrinsically motivated preference exploration component and a policy coverage set optimization component; the latter robustly evolves a convex coverage set of policies that solves the problem using the preferences proposed by the former. We show experimentally the effectiveness of the proposed method in comparison to state-of-the-art multi-objective reinforcement learning methods in stationary and non-stationary environments.
3. Related work
In this section, we explore the related work on multi-objective reinforcement learning (MORL) and intrinsically motivated reinforcement learning (IMRL) to highlight the contribution of our paper.
3.1. Multi-objective reinforcement learning (MORL)
MORL methods address the MOMDP problem through two main approaches: single policy approaches and multiple policy approaches (Roijers et al., 2013). If the user's preference is known before solving the problem, then a single policy can be found by scalarizing the multiple reward signals and optimizing the scalarized reward return using conventional single-objective reinforcement learning methods. However, this assumption is rarely satisfied. Alternatively, the multiple policy approach aims at exploring and ranking the non-dominated policies in order to find the policy coverage set that can satisfy any user's preference for solving the problem. In the following subsections, we review relevant literature for each of these two approaches.
3.1.1. Single policy approaches
Lizotte et al. (2010) proposed a value iteration algorithm for ranking actions in finite state spaces using a linear scalarization function. Moffaert et al. (2013) proposed an updated version of the Q-learning algorithm (Watkins and Dayan, 1992) using the Chebyshev scalarization function to solve an MOMDP grid-world problem. Castelletti et al. (2013) utilized non-linear scalarization methods with a random weight space exploration technique to optimize the operation of water resource management systems. Perny and Weng (2010) addressed the MOMDP problem using a linear programming technique adopting the Chebyshev scalarization function. Ogryczak et al. (2011) extended the previously mentioned linear programming method by replacing the non-linear scalarization with an ordered weighted regret technique for ranking actions. Their technique estimates the regret value for each objective with respect to a reference point; actions are then ranked using the combined regret value over all objectives.
As an alternative to the scalarization approach, constrained methods for the MOMDP problem have been introduced by Feinberg and Shwartz (1995) and Altman (1999). These methods optimize a single objective while treating the other objectives as constraints on the optimization problem.
3.1.2. Multiple policy approaches
A preference elicitation approach has been proposed by Akrour et al. (2011) to incorporate an expert's preference during the policy learning process in an algorithm called preference-based policy learning (PPL). The proposed algorithm needs a parameterized formalism of the policy in order to sample different trajectories from the parameter space; the expert then provides a qualitative preference over the recently demonstrated trajectories, which is used to optimize the policy's parameters in a way that maximizes the expert's expected feedback. Similarly, Fürnkranz et al. (2012) proposed a framework for ranking policy trajectories based on qualitative feedback provided by the user. However, this methodology requires reaching the Pareto front of optimal policies first, and then ranking trajectory samples from those policies according to the user's feedback.
An evolutionary computation method was introduced by Busa-Fekete et al. (2014) in order to generate the set of non-dominated policies shaping the Pareto front.
Then, at each state, they roll out actions from this Pareto-optimal set and rank them given the user's feedback in order to identify the optimal action to follow.
Roijers et al. (2014) proposed the Optimistic Linear Support (OLS) algorithm, which aims at evolving an approximate policy coverage set by examining different possible weight vectors over the defined objectives. For example, if there are two objectives in the problem, it starts by examining the two corner preferences (i.e., [0.1, 0.9] and [0.9, 0.1]) and evolves two optimal policies for those preferences through a single-objective reinforcement learner (i.e., Q-learning). The algorithm then evaluates the performance of the two evolved policies in terms of the average reward achieved against a threshold value (epsilon); a policy that exceeds this value is added to the coverage set. Afterwards, the algorithm finds a mid-point preference between each explored preference pair and repeats the performance evaluation against the defined threshold until no further performance improvements are achieved.
Gábor et al. (1998) introduced the Threshold Lexicographic Ordering (TLO) algorithm, which starts with a sample of uniformly distributed preferences and, for each of them, evolves a policy by selecting at each state one of the optimal actions (each dedicated to a specific single objective given its weight) that exceeds a threshold value, or taking the action with the maximum value if all actions are below the threshold. Similarly, the decision to add a policy to the coverage set is made given a specific performance threshold value.
The two latter algorithms have been used in much of the MORL literature (Geibel, 2006; Roijers et al., 2015; Mossalam et al., 2016) to find a coverage set of policies that solves the MOMDP problem. It has to be noted that both of these algorithms follow an iterative preference exploration approach that requires simulation in the environment, assuming stationary dynamics, in order to evolve the policy coverage set. In contrast, our proposed method aims at evolving such a coverage set in a developmental and adaptive manner under both stationary and non-stationary environment dynamics.
3.2. Intrinsically motivated reinforcement learning (IMRL)
Inspired by the learning paradigms of humans and animals, computational models of intrinsically motivated learning aim at learning guided by internally generated reward signals. Ryan and Deci (2000) defined intrinsic motivation as performing activities for their inherent satisfaction rather than for separable consequences. They further explained that this is similar to humans performing actions for fun or challenge rather than being directed to perform them by external pressure or rewards. Intrinsically motivated reinforcement learning (IMRL) aims at extending the conventional reinforcement learning paradigm by allowing the learner agent to generate an intrinsic reward signal that can either supplement the extrinsic reward signal or completely replace it (Barto, 2013). This intrinsic reward signal can assist the learning agent when dealing with a sparse extrinsic reward signal, enhance the exploration strategy, or completely guide the agent toward achieving the task.
There are multiple drives for intrinsic motivation in the literature, such as curiosity, novelty, happiness, emotions, or surprise (Singh et al., 2009).
Despite the differences between their fitness functions, these drives share the same assumption: the learning agent only needs its internal and external state representations in order to calculate the intrinsic reward signal. Therefore, the agent can generate such a reward independently of external (task-specific) reward signals. Schmidhuber (2010) describes the learning assumption of IMRL as "maximizing the fun or internal joy for the discovery or creation of novel patterns." According to this perspective, a pattern is a sequence of observed data that is compressible. Compression here means that an encoding program can find a compact representation of the data sequence that is sufficient to regenerate the original sequence or predict any occurrence within it given the preceding occurrences (Ming and Vitányi, 1997). Novelty of the pattern means that the learning agent initially did not expect it but was able to learn it. The progress in discovering or creating patterns can be projected into an intrinsic reward for a conventional RL algorithm, which acts to optimize it and consequently encourages the agent to discover or create more novel patterns.
IMRL methods can be categorized based on either a reward-source perspective or an objective perspective. For the reward-source categorization, Merrick and Maher (2009) indicated that IMRL methods fall into two broad categories: methods that use both extrinsic and intrinsic reward signals, and methods that use only intrinsic reward signals. Alternatively, Oudeyer and Kaplan (2009) proposed a categorization from an objective perspective. They divided the IMRL literature into three main groups based on the objective of the intrinsic motivation learning process: knowledge-based models, competence-based models, and morphological models. We adopt a knowledge-based intrinsic motivation model according to the objective categorization, which falls into the first category of the reward-source perspective as it uses both extrinsic and intrinsic reward signals. Accordingly, we only explore knowledge-based intrinsic motivation literature in this paper.
One of the early approaches to knowledge-based IMRL was proposed by Schmidhuber (1991b) and included two recurrent neural networks (RNNs): a model network and a control network. The model network aimed at learning to model the environmental dynamics in terms of predicting the state transitions conditioned on the action taken, while the control network optimizes the action selection policy to explore state-space regions in which the model network has high marginal uncertainty (prediction error). The control network is guided by an intrinsic reward represented by the model network's prediction error. This method falls into the second category of the reward-source perspective, as it works mainly with intrinsic reward signals.
Pathak et al. (2017) proposed an intrinsically motivated exploration technique following a predictive perspective. They indicated two main objectives for the proposed technique: first, to learn representative features that distill the state-space features controllable by the agent from those that are outside the agent's control; and second, using these learned representative features, to optimize a predictive model of the state transition probability distribution.
To achieve the first objective, an inverse dynamics model was used to learn the action taken from the encodings (features) of the states before and after taking the action, using the experience replay buffer. The authors stated that this inverse dynamics inference technique discourages learning encodings (features) that cannot affect or be affected by the agent's actions. For the second objective, a forward dynamics model was proposed to predict the next state encoding based on the current state encoding and the action taken. The intrinsic reward was formulated as the prediction error of the forward dynamics model and combined with the extrinsic reward by summation. The learning agent uses this combined version of the extrinsic and intrinsic rewards to optimize the current policy.
Qureshi et al. (2018) targeted robotics domains for the application of intrinsic motivation. The authors proposed an intrinsically motivated learning algorithm for a humanoid robot to interact with a human, given three basic events that represent the current state of the interaction: eye contact, smile, and handshake. Their algorithm is based on an event-predictive objective in which a predictive neural network called Pnet learns to predict the coming event conditioned on the current one and the action taken, while another controller network called Qnet optimizes the action selection policy guided only by the intrinsic reward represented by the prediction error of the Pnet. The authors showed that their proposed algorithm outperformed a conventional reinforcement learning algorithm using only an extrinsic sparse reward signal in a real interaction experiment with humans that lasted 14 days.
One drawback of formulating the intrinsic reward based on prediction error is that it encourages the action sampler (e.g., the control network) to favor state-space regions that involve noisy observations or require sensing capabilities beyond those currently available to the agent; this might limit the learning progress of the whole system in such situations.
In order to overcome this drawback, we need to change the formulation of the intrinsic reward to depend on the model's improvement (e.g., prediction accuracy) rather than its prediction error. Consequently, the learning agent will become bored with state-space regions that are either completely predictable (high prediction accuracy) or completely unpredictable (due to noise or a lack of sufficient sensors), since in both scenarios the gradient of the improvement will be small.
A first attempt to tackle this issue was proposed by Schmidhuber (1991a), where the intrinsic reward was formulated based on prediction reliability rather than error. A probabilistic inference model was optimized to learn the state transition probability distribution conditioned on the action taken; four different metrics were then proposed to estimate the prediction reliability locally and globally based on past interactions with the environment. A Q-learning algorithm was adopted to optimize the action selection policy guided by the reliability value as an intrinsic reward signal. The proposed methodology was evaluated in a non-deterministic environment with noisy state regions and compared with a random-search exploration technique; results showed that the intrinsically motivated agent decreased the prediction error 10 times faster.
Oudeyer et al. (2007) proposed a developmental learning system for robotics called intelligent adaptive curiosity (IAC).
The IAC system aims at maximizing the learning progress of the agent by focusing the learning process on situations that are neither fully predictable nor fully unpredictable, as the derivative of the progress will be small at both extremes. The novelty of this method lies in the division of the state space into regions that share common dynamics; for each region, IAC evolves an expert predictive model (e.g., a neural network) to learn the state transition dynamics. The division of the state space into regions is done in a developmental manner: at the beginning there is only one region, and when the number of examples exceeds a specific threshold value (C1), the region is split into two based on a second metric (C2) that aims at minimizing the variance between samples within a region (similar to density-based clustering techniques; Kriegel et al., 2011). The intrinsic reward is calculated using the first derivative of the prediction error between times t and t + 1. The authors showed experimentally the effectiveness of the proposed system in comparison to conventional exploration strategies.
Our proposed intrinsically motivated preference exploration component follows the same intrinsic reward formulation approach as the last two methods, basing the reward on the predictive model's improvement rather than its prediction error. However, we extend the existing work to multi-objective scenarios.
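The learning-progress formulation discussed above can be summarized in a short sketch: the intrinsic reward is the decrease in the average prediction error over a sliding window, so both fully predictable and fully unpredictable situations yield little reward. This is a generic illustration of the idea, with the window size and error bookkeeping chosen arbitrarily; it is not the exact IAC implementation nor our proposed component.

```python
from collections import deque

class LearningProgressReward:
    """Intrinsic reward based on the improvement of prediction accuracy."""

    def __init__(self, window=25):
        self.window = window
        self.errors = deque(maxlen=2 * window)  # recent prediction errors

    def update(self, prediction_error):
        """Store the latest prediction error and return the intrinsic reward."""
        self.errors.append(prediction_error)
        if len(self.errors) < 2 * self.window:
            return 0.0  # not enough history yet
        older = list(self.errors)[:self.window]
        recent = list(self.errors)[-self.window:]
        # Positive when the average error decreased, i.e. the model improved.
        return sum(older) / self.window - sum(recent) / self.window
```

In a region-based scheme such as IAC, one such tracker would be maintained per state-space region, and the action selection policy would be rewarded for visiting the region whose tracker currently reports the largest progress.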
[ "26017442", "25719670", "18958277", "29631753", "10620381", "26819042", "29052630" ]
[ { "pmid": "26017442", "title": "Deep learning.", "abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech." }, { "pmid": "25719670", "title": "Human-level control through deep reinforcement learning.", "abstract": "The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks." }, { "pmid": "18958277", "title": "What is Intrinsic Motivation? A Typology of Computational Approaches.", "abstract": "Intrinsic motivation, centrally involved in spontaneous exploration and curiosity, is a crucial concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. 
First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics." }, { "pmid": "29631753", "title": "Intrinsically motivated reinforcement learning for human-robot interaction in the real-world.", "abstract": "For a natural social human-robot interaction, it is essential for a robot to learn the human-like social skills. However, learning such skills is notoriously hard due to the limited availability of direct instructions from people to teach a robot. In this paper, we propose an intrinsically motivated reinforcement learning framework in which an agent gets the intrinsic motivation-based rewards through the action-conditional predictive model. By using the proposed method, the robot learned the social skills from the human-robot interaction experiences gathered in the real uncontrolled environments. The results indicate that the robot not only acquired human-like social skills but also took more human-like decisions, on a test dataset, than a robot which received direct rewards for the task achievement." }, { "pmid": "10620381", "title": "Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions.", "abstract": "Intrinsic and extrinsic types of motivation have been widely studied, and the distinction between them has shed important light on both developmental and educational practices. In this review we revisit the classic definitions of intrinsic and extrinsic motivation in light of contemporary research and theory. Intrinsic motivation remains an important construct, reflecting the natural human propensity to learn and assimilate. However, extrinsic motivation is argued to vary considerably in its relative autonomy and thus can either reflect external control or true self-regulation. The relations of both classes of motives to basic human needs for autonomy, competence and relatedness are discussed. Copyright 2000 Academic Press." }, { "pmid": "26819042", "title": "Mastering the game of Go with deep neural networks and tree search.", "abstract": "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. 
Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away." }, { "pmid": "29052630", "title": "Mastering the game of Go without human knowledge.", "abstract": "A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo." } ]
Micromachines
30404267
PMC6190053
10.3390/mi7050091
Low-Cost BD/MEMS Tightly-Coupled Pedestrian Navigation Algorithm
Pedestrian Dead Reckoning (PDR), which combines an Inertial Measurement Unit (IMU) with a magnetometer, is an independent navigation approach based on multiple sensors. Since inertial sensor errors accumulate through the navigation equations, the navigation precision deteriorates with time, which makes the approach unsuitable for long-duration navigation. Although the BeiDou (BD) navigation system can provide high positioning precision in most scenarios, the satellite signal is easily degraded by buildings or thick foliage. To solve this problem, a tightly-coupled BD/MEMS (Micro-Electro-Mechanical Systems) integration algorithm is proposed in this paper, and a prototype was built to implement the integrated system. Extensive experiments show that the BD/MEMS system performs well in different environments, such as an open-sky environment and a playground surrounded by trees and thick foliage. The proposed algorithm provides continuous and reliable positioning for pedestrians outdoors and thereby has wide practical applicability.
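To make the error-accumulation issue mentioned above concrete, the snippet below is a minimal sketch of step-and-heading pedestrian dead reckoning; the step length, the number of steps and the heading bias are illustrative assumptions rather than values taken from the paper.

```python
import math

# A hypothetical step-and-heading dead-reckoning loop (not the paper's algorithm):
# every detected step advances the position by one step length along the current
# heading, so any constant heading or step-length bias accumulates step by step.

def pdr_track(step_length, headings_rad, start=(0.0, 0.0)):
    """Integrate step events into a 2D track (east, north)."""
    x, y = start
    track = [(x, y)]
    for h in headings_rad:
        x += step_length * math.sin(h)  # east component of the step
        y += step_length * math.cos(h)  # north component of the step
        track.append((x, y))
    return track

# 100 steps of 0.7 m walked due north, once with an ideal heading and once with
# a constant 2-degree magnetometer bias (both values are illustrative).
ideal = pdr_track(0.7, [0.0] * 100)
biased = pdr_track(0.7, [math.radians(2.0)] * 100)
print(f"end-point drift after 100 steps: {math.dist(ideal[-1], biased[-1]):.2f} m")
```

Even a small constant heading bias yields an end-point drift that grows with every step, which motivates correcting the dead-reckoned solution with BD measurements.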
2. Related Work
The Transit navigation satellite system was one of the earliest representative satellite navigation systems. From 1960 to 1996, more than 30 Transit navigation satellites were launched. However, the time required for each position fix was high, and the positioning accuracy was poor. After that, GPS and GLONASS were put into use in the 1990s. Nowadays, many advanced technologies are used in the latest satellite navigation systems, such as Binary Offset Carrier (BOC) modulation, which is used to improve the precision of orientation estimation and to enhance the anti-jamming and weak-signal detection capability [10,11,12,13,14].

MEMS-based inertial sensors have been widely used owing to their low cost [15]. To improve MEMS-INS performance, Vitanov [16] proposed a Gaussian-process-enhanced unscented Kalman filter architecture to perform fault detection and isolation on the gyros and accelerometers of a strap-down MEMS-based INS. Aggarwal [17] established a thermal model for a low-cost MEMS-based INS, which can be used for integrated vehicle navigation. Akeila [18] proposed an error-resetting approach for moving objects using three accelerometers and three magnetometers.

A large number of researchers have focused on the integration of GNSS and MEMS-based INS. This scheme maintains meter-level localization accuracy even when GNSS performance is poor because of weak signals or fewer than four visible satellites. Angrisano [19] integrated GPS/GLONASS with a low-cost MEMS-INS for pedestrian and vehicular navigation. Zhuang [20] proposed an integrated navigation system on a smartphone platform consisting of a three-axis accelerometer, a three-axis gyroscope, a three-axis magnetometer and a GPS receiver; in that system, a loosely-coupled scheme based on the extended Kalman filter is implemented. Jia [21] developed an integrated navigation system based on a low-cost MEMS-IMU and a GPS receiver. This system achieves continuous navigation, but the information from the GPS and the IMU is not fully exploited. To address this problem, the tightly-coupled approach has been considered more recently. Lachapelle [22] developed a tightly-coupled navigation system based on GPS and IMU to improve pedestrian positioning accuracy when the satellite signal is severely degraded. In [23], an improved INS/GPS integration routine using the unscented Kalman filter and the adaptive unscented Kalman filter was presented; it concluded that the UKF provides only a slight improvement in navigation performance over the EKF. In [24], a closely-coupled GPS/INS integration is described in which inertial measurements are combined with the available GPS ranges even when fewer than four satellites are in view. The map-matching approach, which can be used to enhance the performance of an integrated GPS/MEMS system, is addressed in [25]. Godha [26,27] compared the performance of a Personal Navigation System (PNS), GPS and an integrated PDR/GPS system. O'Keefe [28] designed a navigation system integrating Ultra-Wideband (UWB) ranging with a satellite navigation system and applied it to intelligent transportation.

Different from the previous literature, to take advantage of the high positioning accuracy of BD and the low cost of MEMS-based INS, we design a new tightly-coupled BD/MEMS system for pedestrian navigation.
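As a rough illustration of the tightly-coupled idea discussed above, the following sketch shows an extended Kalman filter measurement update driven by individual pseudoranges, which remains usable even when fewer than four satellites are visible; the state layout, noise values and satellite coordinates are assumptions made for illustration and do not reproduce the algorithm proposed in this paper.

```python
import numpy as np

# A hedged sketch of a tightly-coupled measurement update: each visible satellite
# contributes one scalar pseudorange, so the filter can still correct the
# INS-predicted state with fewer than four satellites in view. The 4-element
# state, noise levels and satellite coordinates are illustrative assumptions.

def pseudorange_update(x, P, sat_pos, rho_meas, sigma=5.0):
    """EKF update with one pseudorange; x = [px, py, pz, clock_bias] (meters)."""
    p, b = x[:3], x[3]
    los = p - sat_pos
    rng = np.linalg.norm(los)
    rho_pred = rng + b                                  # predicted pseudorange
    H = np.hstack([los / rng, [1.0]]).reshape(1, 4)     # Jacobian d(rho)/d(state)
    S = H @ P @ H.T + np.array([[sigma ** 2]])
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K * (rho_meas - rho_pred)).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Correct a rough INS-predicted position using only two satellites.
x = np.array([0.0, 0.0, 0.0, 0.0])
P = np.diag([100.0, 100.0, 100.0, 1e4])
true_pos = np.array([5.0, -3.0, 2.0])
for sat in (np.array([15e6, 10e6, 20e6]), np.array([-12e6, 18e6, 21e6])):
    rho = np.linalg.norm(sat - true_pos)                # simulated range, zero clock bias
    x, P = pseudorange_update(x, P, sat, rho)
print(x)                                                # state pulled toward the true position
```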
[ "23979480" ]
[ { "pmid": "23979480", "title": "GPS/MEMS INS data fusion and map matching in urban areas.", "abstract": "This paper presents an evaluation of the map-matching scheme of an integrated GPS/INS system in urban areas. Data fusion using a Kalman filter and map matching are effective approaches to improve the performance of navigation system applications based on GPS/MEMS IMUs. The study considers the curve-to-curve matching algorithm after Kalman filtering to correct mismatch and eliminate redundancy. By applying data fusion and map matching, the study easily accomplished mapping of a GPS/INS trajectory onto the road network. The results demonstrate the effectiveness of the algorithms in controlling the INS drift error and indicate the potential of low-cost MEMS IMUs in navigation applications." } ]
Micromachines
null
PMC6190292
10.3390/mi8040100
Red Blood Cell Responses during a Long-Standing Load in a Microfluidic Constriction
Red blood cell responses during a long-standing load were experimentally investigated. With a high-speed camera and a high-speed actuator, we were able to hold cells inside a microfluidic constriction, where each cell was compressed by the geometric constraints. During the load inside the constriction, the color of the cells gradually darkened, while the cell lengths became progressively shorter. According to the analysis of a 5 min load, the average increase in cell darkness was 60.9 in 8-bit color resolution, and the average shrinkage of the cell length was 15% of the initial length. The same tendency was consistently observed from cell to cell. A correlation between the changes in color and length was established based on the experimental results. The changes are believed to be partially due to the viscoelastic properties of the cells, whose configurations change with time to adapt to the confined space inside the constriction.
2. Related Works
RBC deformability has been found to be correlated with certain diseases [5,15,16,17,18]. For example, RBCs with reduced deformability are found in obese patients and in those with diabetic foot disease [15,16]. Malaria is a well-known blood-related disease that reduces RBC deformability [17,18]. Unusual RBC morphology has been found in compound heterozygotes for hemoglobin S and hemoglobin C [19]. There are different approaches for investigating RBC deformability [20,21]. For example, Tözeren et al. used micropipette aspiration to develop a constitutive equation for the RBC membrane [22]. Brandao et al. employed optical tweezers to measure RBC elasticity [23]. Dulinska et al. applied an atomic force microscope (AFM) to measure the stiffness of erythrocytes [24]. Among the different evaluation methods, microfluidic constrictions have become popular in the last decade because of their precise fabrication and accurate control [25,26,27]. For example, Zheng et al. passed RBCs through a constriction for high-throughput biophysical measurements [28]. Sakuma et al. evaluated the cell fatigue state by inducing continuous and repetitive deformation [29]. There are also works investigating RBC responses after long-standing loads. For example, Fischer discovered the shape memory of RBCs based on their recovery from deformation by shear stress applied for up to 4 h [12]. Markle et al. investigated the viscoelastic properties of the RBC membrane by characterizing cell shape after constant-pressure aspiration lasting up to 90 min [13]. With a microfluidic platform, it becomes possible to stably monitor cells' behaviors and responses during a fixed-displacement load.

In our previous works, we succeeded in applying long-standing loads to RBCs by high-speed cell manipulation and discussed how the RBCs recover after the load [30,31]. Preliminary results on the RBC color and length changes during the load were observed independently and presented at recent conferences [32,33]. In this work, we first present a comprehensive analysis of the cell changes during the long-standing load. Furthermore, a correlation between the changes in RBC color and length is established from the experimental results.
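The following is a hedged sketch of the kind of per-frame analysis implied by the abstract above: a mean 8-bit darkness value and a cell length are extracted from a segmented cell region, and the two time series are then correlated. The synthetic frames, the segmentation threshold and the frame rate are purely illustrative assumptions, not the authors' image-processing pipeline.

```python
import numpy as np

# Illustrative per-frame measurement of cell darkness (255 - mean gray level)
# and cell length (extent of the segmented region along the channel axis).

def measure_frame(gray_frame, cell_mask):
    """Return (mean darkness, cell length in pixels) for one frame."""
    darkness = 255.0 - gray_frame[cell_mask].mean()     # darker cell -> larger value
    cols = np.where(cell_mask.any(axis=0))[0]           # extent along the channel axis
    return darkness, cols.max() - cols.min() + 1

# Synthetic stand-in for a 5 min sequence of frames (one per second): the cell
# region gradually darkens and shortens, mimicking the reported tendency.
darkness_ts, length_ts = [], []
for t in range(300):
    frame = np.full((60, 120), 200, dtype=np.uint8)     # bright background
    L = int(40 - 6 * t / 300)                           # cell gradually shortens
    frame[20:40, 40:40 + L] = 120 - int(60 * t / 300)   # cell gradually darkens
    mask = frame < 180                                  # simple threshold segmentation
    d, l = measure_frame(frame.astype(float), mask)
    darkness_ts.append(d)
    length_ts.append(l)

r = np.corrcoef(darkness_ts, length_ts)[0, 1]           # correlation between the two changes
print(f"correlation between darkness and length: {r:.2f}")
```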
[ "11807013", "25332724", "1571406", "23230450", "23440063", "22547795", "25643151", "26865054", "24223621", "15111443", "6838984", "7115970", "17546846", "26018868", "10827427", "27354532", "1912587", "23681312", "7104447", "12656742", "16443279", "24658243", "22581052", "24463842", "28233788", "25713696", "1276386" ]
[ { "pmid": "11807013", "title": "Contribution of parasite proteins to altered mechanical properties of malaria-infected red blood cells.", "abstract": "Red blood cells (RBCs) parasitized by Plasmodium falciparum are rigid and poorly deformable and show abnormal circulatory behavior. During parasite development, knob-associated histidine-rich protein (KAHRP) and P falciparum erythrocyte membrane protein 3 (PfEMP3) are exported from the parasite and interact with the RBC membrane skeleton. Using micropipette aspiration, the membrane shear elastic modulus of RBCs infected with transgenic parasites (with kahrp or pfemp3 genes deleted) was measured to determine the contribution of these proteins to the increased rigidity of parasitized RBCs (PRBCs). In the absence of either protein, the level of membrane rigidification was significantly less than that caused by the normal parental parasite clone. KAHRP had a significantly greater effect on rigidification than PfEMP3, contributing approximately 51% of the overall increase that occurs in PRBCs compared to 15% for PfEMP3. This study provides the first quantitative information on the contribution of specific parasite proteins to altered mechanical properties of PRBCs." }, { "pmid": "25332724", "title": "Biomechanical properties of red blood cells in health and disease towards microfluidics.", "abstract": "Red blood cells (RBCs) possess a unique capacity for undergoing cellular deformation to navigate across various human microcirculation vessels, enabling them to pass through capillaries that are smaller than their diameter and to carry out their role as gas carriers between blood and tissues. Since there is growing evidence that red blood cell deformability is impaired in some pathological conditions, measurement of RBC deformability has been the focus of numerous studies over the past decades. Nevertheless, reports on healthy and pathological RBCs are currently limited and, in many cases, are not expressed in terms of well-defined cell membrane parameters such as elasticity and viscosity. Hence, it is often difficult to integrate these results into the basic understanding of RBC behaviour, as well as into clinical applications. The aim of this review is to summarize currently available reports on RBC deformability and to highlight its association with various human diseases such as hereditary disorders (e.g., spherocytosis, elliptocytosis, ovalocytosis, and stomatocytosis), metabolic disorders (e.g., diabetes, hypercholesterolemia, obesity), adenosine triphosphate-induced membrane changes, oxidative stress, and paroxysmal nocturnal hemoglobinuria. Microfluidic techniques have been identified as the key to develop state-of-the-art dynamic experimental models for elucidating the significance of RBC membrane alterations in pathological conditions and the role that such alterations play in the microvasculature flow dynamics." }, { "pmid": "1571406", "title": "The clinical importance of erythrocyte deformability, a hemorrheological parameter.", "abstract": "Hemorheology, the science of the flow behavior of blood, has become increasingly important in clinical situations. The rheology of blood is dependent on its viscosity, which in turn is influenced by plasma viscosity, hematocrit, erythrocyte aggregation, and erythrocyte deformability. In recent years it has become apparent that the shape and elasticity of erythrocytes may be important in explaining the etiology of certain pathological situations. 
Thus, clinicians have become increasingly interested in hemorheology in general and erythrocyte deformability in particular. In the course of time, many clinical studies have been performed, but no concise review has thus far been published. This article encompasses a review of the clinically based literature on this subject." }, { "pmid": "23230450", "title": "Continuum- and particle-based modeling of shapes and dynamics of red blood cells in health and disease.", "abstract": "We review recent advances in multiscale modeling of the mechanics of healthy and diseased red blood cells (RBCs), and blood flow in the microcirculation. We cover the traditional continuum-based methods but also particle-based methods used to model both the RBCs and the blood plasma. We highlight examples of successful simulations of blood flow including malaria and sickle cell anemia." }, { "pmid": "23440063", "title": "Measuring cell mechanics by optical alignment compression cytometry.", "abstract": "To address the need for a high throughput, non-destructive technique for measuring individual cell mechanical properties, we have developed optical alignment compression (OAC) cytometry. OAC combines hydrodynamic drag in an extensional flow microfluidic device with optical forces created with an inexpensive diode laser to induce measurable deformations between compressed cells. In this, a low-intensity linear optical trap aligns incoming cells with the flow stagnation point allowing hydrodynamic drag to induce deformation during cell-cell interaction. With this novel approach, we measure cell mechanical properties with a throughput that improves significantly on current non-destructive individual cell testing methods." }, { "pmid": "22547795", "title": "Hydrodynamic stretching of single cells for large population mechanical phenotyping.", "abstract": "Cell state is often assayed through measurement of biochemical and biophysical markers. Although biochemical markers have been widely used, intrinsic biophysical markers, such as the ability to mechanically deform under a load, are advantageous in that they do not require costly labeling or sample preparation. However, current techniques that assay cell mechanical properties have had limited adoption in clinical and cell biology research applications. Here, we demonstrate an automated microfluidic technology capable of probing single-cell deformability at approximately 2,000 cells/s. The method uses inertial focusing to uniformly deliver cells to a stretching extensional flow where cells are deformed at high strain rates, imaged with a high-speed camera, and computationally analyzed to extract quantitative parameters. This approach allows us to analyze cells at throughputs orders of magnitude faster than previously reported biophysical flow cytometers and single-cell mechanics tools, while creating easily observable larger strains and limiting user time commitment and bias through automation. Using this approach we rapidly assay the deformability of native populations of leukocytes and malignant cells in pleural effusions and accurately predict disease state in patients with cancer and immune activation with a sensitivity of 91% and a specificity of 86%. As a tool for biological research, we show the deformability we measure is an early biomarker for pluripotent stem cell differentiation and is likely linked to nuclear structural changes. 
Microfluidic deformability cytometry brings the statistical accuracy of traditional flow cytometric techniques to label-free biophysical biomarkers, enabling applications in clinical diagnostics, stem cell characterization, and single-cell biophysics." }, { "pmid": "25643151", "title": "Real-time deformability cytometry: on-the-fly cell mechanical phenotyping.", "abstract": "We introduce real-time deformability cytometry (RT-DC) for continuous cell mechanical characterization of large populations (>100,000 cells) with analysis rates greater than 100 cells/s. RT-DC is sensitive to cytoskeletal alterations and can distinguish cell-cycle phases, track stem cell differentiation into distinct lineages and identify cell populations in whole blood by their mechanical fingerprints. This technique adds a new marker-free dimension to flow cytometry with diverse applications in biology, biotechnology and medicine." }, { "pmid": "26865054", "title": "Deformation and internal stress in a red blood cell as it is driven through a slit by an incoming flow.", "abstract": "To understand the deformation and internal stress of a red blood cell when it is pushed through a slit by an incoming flow, we conduct a numerical investigation by combining a fluid-cell interaction model based on boundary-integral equations with a multiscale structural model of the cell membrane that takes into account the detailed molecular architecture of this biological system. Our results confirm the existence of cell 'infolding', during which part of the membrane is inwardly bent to form a concave region. The time histories and distributions of area deformation, shear deformation, and contact pressure during and after the translocation are examined. Most interestingly, it is found that in the recovery phase after the translocation significant dissociation pressure may develop between the cytoskeleton and the lipid bilayer. The magnitude of this pressure is closely related to the locations of the dimple elements during the transit. Large dissociation pressure in certain cases suggests the possibility of mechanically induced structural remodeling and structural damage such as vesiculation. With quantitative knowledge about the stability of intra-protein, inter-protein and protein-to-lipid linkages under dynamic loads, it will be possible to achieve numerical prediction of these processes." }, { "pmid": "24223621", "title": "Transient dynamics of an elastic capsule in a microfluidic constriction.", "abstract": "In this paper we investigate computationally the transient dynamics of an elastic capsule flowing in a square microchannel with a rectangular constriction, and compare it with that of a droplet. The confinement and expansion dynamics of the fluid flow results in a rich deformation behavior for the capsule, from an elongated shape at the constriction entrance, to a flattened parachute shape at its exit. Larger capsules are shown to take more time to pass the constriction and cause higher additional pressure difference, owing to higher flow blocking. Our work highlights the effects of two different mechanisms for non-tank-treading transient capsule dynamics. The capsule deformation results from the combined effects of the surrounding and inner fluids' normal stresses on the soft particle's interface, and thus when the capsule viscosity increases, its transient deformation decreases, as for droplets. 
However, the capsule deformation is not able to create a strong enough inner circulation (owing to restrictions imposed by the material membrane), and thus the viscosity ratio does not affect much the capsule velocity and the additional pressure difference. In addition, the weak inner circulation results in a positive additional pressure difference ΔP+ even for low-viscosity capsules, in direct contrast to low-viscosity droplets which create a negative ΔP+. Our findings suggest that the high cytoplasmatic viscosity, owing to the protein hemoglobin required for oxygen transport, does not affect adversely the motion of non-tank-trading erythrocytes in vascular capillaries." }, { "pmid": "15111443", "title": "Shape memory of human red blood cells.", "abstract": "The human red cell can be deformed by external forces but returns to the biconcave resting shape after removal of the forces. If after such shape excursions the rim is always formed by the same part of the membrane, the cell is said to have a memory of its biconcave shape. If the rim can form anywhere on the membrane, the cell would have no shape memory. The shape memory was probed by an experiment called go-and-stop. Locations on the membrane were marked by spontaneously adhering latex spheres. Shape excursions were induced by shear flow. In virtually all red cells, a shape memory was found. After stop of flow and during the return of the latex spheres to the original location, the red cell shape was biconcave. The return occurred by a tank-tread motion of the membrane. The memory could not be eliminated by deforming the red cells in shear flow up to 4 h at room temperature as well as at 37 degrees C. It is suggested that 1). the characteristic time of stress relaxation is >80 min and 2). red cells in vivo also have a shape memory." }, { "pmid": "6838984", "title": "Force relaxation and permanent deformation of erythrocyte membrane.", "abstract": "Force relaxation and permanent deformation processes in erythrocyte membrane were investigated with two techniques: micropipette aspiration of a portion of a flaccid cell, and extension of a whole cell between two micropipettes. In both experiments, at surface extension ratios less than 3:1, the extent of residual membrane deformation is negligible when the time of extension is less than several minutes. However, extensions maintained longer result in significant force relaxation and permanent deformation. The magnitude of the permanent deformation is proportional to the total time period of extension and the level of the applied force. Based on these observations, a nonlinear constitutive relation for surface deformation is postulated that serially couples a hyperelastic membrane component to a linear viscous process. In contrast with the viscous dissipation of energy as heat that occurs in rapid extension of a viscoelastic solid, or in plastic flow of a material above yield, the viscous process in this case represents dissipation produced by permanent molecular reorganization through relaxation of structural membrane components. Data from these experiments determine a characteristic time constant for force relaxation, tau, which is the ratio of a surface viscosity, eta to the elastic shear modulus, mu. Because it was found that the concentration of albumin in the cell suspension strongly mediates the rate of force relaxation, values for tau of 10.1, 40.0, 62.8, and 120.7 min are measured at albumin concentrations of 0.0, 0.01, 0.1, and 1.% by weight in grams, respectively. 
The surface viscosity, eta, is calculated from the product of tau and mu. For albumin concentrations of 0.0, 0.01, 0.1, and 1% by weight in grams, eta is equal to 3.6, 14.8, 25.6, and 51.9 dyn s/cm, respectively." }, { "pmid": "17546846", "title": "Red blood cell aggregation and deformability among patients qualified for bariatric surgery.", "abstract": "BACKGROUND\nThe study presents red blood cell (RBC) aggregability and deformability among obese patients qualified for bariatric surgery and its correlation with plasma lipid concentration.\n\n\nMETHODS\nWe studied 40 morbidly obese patients who were qualified for bariatric surgery: mean age was 43.5 +/- 11.3 years, and mean body mass index (BMI) was 48.9 +/- 7.7 kg/m2. The RBC deformability and aggregation parameters: aggregation index (AI), syllectogram amplitude (AMP) and aggregation half-time (t1/2) were measured by Laser-assisted Optical Rotational Cell Analyser - LORCA.\n\n\nRESULTS\nElongation index of RBC was significantly lower in obese patients than in the control group (P<0.001) in 3.16-60.03 Pa shear stresses. Correlations between elongation index and triglyceride levels ranged between 0.42 to 0.44 (P<0.05). AI was significantly higher in the obese patients (P<0.001), t1/2 and the AMP were decreased (P<0.001) compared to the control group. The RBC aggregation index correlated positively with total cholesterol level (r = 0.61, P<0.05), non-HDL cholesterol level (r = 0.54, P<0.05) and BMI (r = 0.48, P<0.05). Negative correlation presented t1/2 with total cholesterol (r = -0.64, P<0.05), non-HDL cholesterol (r = - 0.51, P<0.05) and BMI (r= -0.59, P<0.05).\n\n\nCONCLUSION\nObesity is associated with RBC rheological disturbances expressed by a decrease in RBC deformability, increased total aggregation extent and the alteration of kinetics of RBC aggregation. These results may suggest the necessity of introducing treatment forms to correct erythrocyte rheological properties, which may improve the blood-flow condition in the microcirculation and prevent postoperative complications after bariatric surgery." }, { "pmid": "26018868", "title": "Diabetic foot disease is associated with reduced erythrocyte deformability.", "abstract": "The pathogenesis of diabetic foot disease is multifactorial and encompasses microvascular and macrovascular pathologies. Abnormal blood rheology may also play a part in its development. Using a cell flow analyser (CFA), we examined the association between erythrocyte deformability and diabetic foot disease. Erythrocytes from diabetic patients with no known microvascular complications (n = 11) and patients suffering from a diabetic foot ulcer (n = 11) were isolated and their average elongation ratio (ER) as well as the ER distribution curve were measured. Average ER was decreased in the diabetic foot patients compared with the patients with diabetes and no complications (1·64 ± 0·07 versus 1·71 ± 0·1; P = 0·036). A significant rise in the percentage of minimally deformable red blood cells RBCs in diabetic foot patients compared with the patients with no complications was observed (37·89% ± 8·12% versus 30·61% ± 10·17%; P = 0·039) accompanied by a significant decrease in the percentage of highly deformable RBCs (12·47% ± 4·43% versus 17·49% ± 8·17% P = 0·046). Reduced erythrocyte deformability may slow capillary flow in the microvasculature and prolong wound healing in diabetic foot patients. 
Conversely, it may be the low-grade inflammatory state imposed by diabetic foot disease that reduces erythrocyte deformability. Further study of the rheological changes associated with diabetic foot disease may enhance our understanding of its pathogenesis and aid in the study of novel therapeutic approaches." }, { "pmid": "10827427", "title": "Abnormal blood flow and red blood cell deformability in severe malaria.", "abstract": "Obstruction of the microcirculation plays a central role in the pathophysiology of severe malaria. Here, Arjen Dondorp and colleagues describe the various contributors to impaired microcirculatory flow in falciparum malaria: sequestration, rosetting and recent findings regarding impaired red blood cell deformability. The correlation with clinical findings and possible therapeutic consequences are discussed." }, { "pmid": "27354532", "title": "Biomechanics of red blood cells in human spleen and consequences for physiology and disease.", "abstract": "Red blood cells (RBCs) can be cleared from circulation when alterations in their size, shape, and deformability are detected. This function is modulated by the spleen-specific structure of the interendothelial slit (IES). Here, we present a unique physiological framework for development of prognostic markers in RBC diseases by quantifying biophysical limits for RBCs to pass through the IES, using computational simulations based on dissipative particle dynamics. The results show that the spleen selects RBCs for continued circulation based on their geometry, consistent with prior in vivo observations. A companion analysis provides critical bounds relating surface area and volume for healthy RBCs beyond which the RBCs fail the \"physical fitness test\" to pass through the IES, supporting independent experiments. Our results suggest that the spleen plays an important role in determining distributions of size and shape of healthy RBCs. Because mechanical retention of infected RBC impacts malaria pathogenesis, we studied key biophysical parameters for RBCs infected with Plasmodium falciparum as they cross the IES. In agreement with experimental results, surface area loss of an infected RBC is found to be a more important determinant of splenic retention than its membrane stiffness. The simulations provide insights into the effects of pressure gradient across the IES on RBC retention. By providing quantitative biophysical limits for RBCs to pass through the IES, the narrowest circulatory bottleneck in the spleen, our results offer a broad approach for developing quantitative markers for diseases such as hereditary spherocytosis, thalassemia, and malaria." }, { "pmid": "1912587", "title": "The unique red cell heterogeneity of SC disease: crystal formation, dense reticulocytes, and unusual morphology.", "abstract": "Knowledge concerning SS (homozygous for the beta s gene) red blood cell (RBC) heterogeneity has been useful for understanding the pathophysiology of sickle cell anemia. No equivalent information exists for RBCs of the compound heterozygote for the beta s and beta c genes (SC) RBCs. These RBCs are known to be denser than most cells in normal blood and even most cells in SS blood (Fabry et al, J Clin Invest 70:1284, 1981). 
We have analyzed the characteristics of SC RBC heterogeneity and find that: (1) SC cells exhibit unusual morphologic features, particularly the tendency for membrane \"folding\" (multifolded, unifolded, and triangular shapes are all common); (2) SC RBCs containing crystals and some containing round hemoglobin (Hb) aggregates (billiard-ball cells) are detectable in circulating SC blood; (3) in contrast to normal reticulocytes, which are found mainly in a low-density RBC fraction, SC reticulocytes are found in the densest SC RBC fraction; and (4) both deoxygenation and replacement of extracellular Cl- by NO3- (both inhibitors of K:Cl cotransport) led to moderate depopulation of the dense fraction and a dramatic shift of the reticulocytes to lower density fractions. We conclude that the RBC heterogeneity of SC disease is very different from that of SS disease. The major contributions of properties introduced by HbC are \"folded\" RBCs, intracellular crystal formation in circulating SC cells, and apparently a very active K:Cl cotransporter that leads to unusually dense reticulocytes." }, { "pmid": "23681312", "title": "Recent advances in microfluidic techniques for single-cell biophysical characterization.", "abstract": "Biophysical (mechanical and electrical) properties of living cells have been proven to play important roles in the regulation of various biological activities at the molecular and cellular level, and can serve as promising label-free markers of cells' physiological states. In the past two decades, a number of research tools have been developed for understanding the association between the biophysical property changes of biological cells and human diseases; however, technical challenges of realizing high-throughput, robust and easy-to-perform measurements on single-cell biophysical properties have yet to be solved. In this paper, we review emerging tools enabled by microfluidic technologies for single-cell biophysical characterization. Different techniques are compared. The technical details, advantages, and limitations of various microfluidic devices are discussed." }, { "pmid": "7104447", "title": "Viscoelastic behavior of erythrocyte membrane.", "abstract": "A nonlinear viscoelastic relation is developed to describe the viscoelastic properties of erythrocyte membrane. This constitutive equation is used in the analysis of the time-dependent aspiration of an erythrocyte membrane into a micropipette. Equations governing this motion are reduced to a nonlinear integral equation of the Volterra type. A numerical procedure based on a finite difference scheme is used to solve the integral equation and to match the experimental data. The data, aspiration length vs. time, is used to determine the relaxation function at each time step. The inverse problem of obtaining the time dependence of the aspiration length from a given relaxation function is also solved. Analytical results obtained are applied to the experimental data of Chien et al. 1978. Biophys. J. 24:463-487. A relaxation function similar to that of a four-parameter solid with a shear-thinning viscous term is proposed." }, { "pmid": "12656742", "title": "Optical tweezers for measuring red blood cell elasticity: application to the study of drug response in sickle cell disease.", "abstract": "The deformability of erythrocytes is a critical determinant of blood flow in microcirculation. 
By capturing red blood cells (RBC) with optical tweezers and dragging them through a viscous fluid we were able to measure their overall elasticity. We measured, and compared, the RBC deformability of 15 homozygous patients (HbSS) including five patients taking hydroxyurea (HU) for at least 6 months (HbSS/HU), 10 subjects with sickle cell trait (HbAS) and 35 normal controls. Our results showed that the RBC deformability was significantly lower in haemoglobin S (HbS) subjects (HbSS and HbAS), except for HbSS/HU cells, whose deformability was similar to the normal controls. Our data showed that the laser optical tweezers technique is able to detect differences in HbS RBC from subjects taking HU, and to differentiate RBC from normal controls and HbAS, indicating that this is a very sensitive method and can be applied for detection of drug-response in sickle cell disease." }, { "pmid": "16443279", "title": "Stiffness of normal and pathological erythrocytes studied by means of atomic force microscopy.", "abstract": "During recent years, atomic force microscopy has become a powerful technique for studying the mechanical properties (such as stiffness, viscoelasticity, hardness and adhesion) of various biological materials. The unique combination of high-resolution imaging and operation in physiological environment made it useful in investigations of cell properties. In this work, the microscope was applied to measure the stiffness of human red blood cells (erythrocytes). Erythrocytes were attached to the poly-L-lysine-coated glass surface by fixation using 0.5% glutaraldehyde for 1 min. Different erythrocyte samples were studied: erythrocytes from patients with hemolytic anemias such as hereditary spherocytosis and glucose-6-phosphate-dehydrogenase deficiency patients with thalassemia, and patients with anisocytosis of various causes. The determined Young's modulus was compared with that obtained from measurements of erythrocytes from healthy subjects. The results showed that the Young's modulus of pathological erythrocytes was higher than in normal cells. Observed differences indicate possible changes in the organization of cell cytoskeleton associated with various diseases." }, { "pmid": "24658243", "title": "A new dimensionless index for evaluating cell stiffness-based deformability in microchannel.", "abstract": "This paper proposes a new index for evaluating the stiffness-based deformability of a cell using a microchannel. In conventional approaches, the transit time of a cell through a microchannel is often utilized for the evaluation of cell deformability. However, such time includes both the information of cell stiffness and viscosity. In this paper, we eliminate the effect from cell viscosity, and focus on the cell stiffness only. We find that the velocity of a cell varies when it enters a channel, and eventually reaches to equilibrium where the velocity becomes constant. The constant velocity is defined as the equilibrium velocity of the cell, and it is utilized to define the observability of stiffness-based deformability. The necessary and sufficient numbers of sensing points for evaluating stiffness-based deformability are discussed. Through the dimensional analysis on the microchannel system, three dimensionless parameters determining stiffness-based deformability are derived, and a new index is introduced based on these parameters. The experimental study is conducted on the red blood cells from a healthy subject and a diabetes patient. 
With the proposed index, we showed that the experimental data can be nicely arranged." }, { "pmid": "22581052", "title": "High-throughput biophysical measurement of human red blood cells.", "abstract": "This paper reports a microfluidic system for biophysical characterization of red blood cells (RBCs) at a speed of 100-150 cells s(-1). Electrical impedance measurement is made when single RBCs flow through a constriction channel that is marginally smaller than RBCs' diameters. The multiple parameters quantified as mechanical and electrical signatures of each RBC include transit time, impedance amplitude ratio, and impedance phase increase. Histograms, compiled from 84,073 adult RBCs (from 5 adult blood samples) and 82,253 neonatal RBCs (from 5 newborn blood samples), reveal different biophysical properties across samples and between the adult and neonatal RBC populations. In comparison with previously reported microfluidic devices for single RBC biophysical measurement, this system has a higher throughput, higher signal to noise ratio, and the capability of performing multi-parameter measurements." }, { "pmid": "24463842", "title": "Red blood cell fatigue evaluation based on the close-encountering point between extensibility and recoverability.", "abstract": "Red blood cells (RBC) circulate the human body several hundred thousand times in their life span. Therefore, their deformability is really important, especially when they pass through a local capillary whose diameter can be as narrow as 3 μm. While there have been a number of works discussing the deformability in a simulated capillary such as a microchannel, as far as we examined in the literature, no work focusing on the change of shape after reciprocated mechanical stress has been reported so far. One of the reasons is that there have been no appropriate experimental systems to achieve such a test. This paper presents a new concept of RBC fatigue evaluation. The fatigue state is defined by the time of reciprocated mechanical stress when the extensibility and the recoverability characteristics meet each other. Our challenge is how to construct a system capable of achieving stable and accurate control of RBCs in a microchannel. For this purpose, we newly introduced two fundamental components. One is a robotic pump capable of manipulating a cell in the accuracy of ±0.24 μm in an equilibrium state with a maximum response time of 15 ms. The other is an online high speed camera capable of chasing the position of RBCs with a sampling rate of 1 kHz. By utilizing these components, we could achieve continuous observation of the length of a RBC over a 1000 times reciprocated mechanical stress. Through these experiments, we found that the repeat number that results in the fatigue state has a close correlation with extensibility." }, { "pmid": "28233788", "title": "Mechanical diagnosis of human erythrocytes by ultra-high speed manipulation unraveled critical time window for global cytoskeletal remodeling.", "abstract": "Large deformability of erythrocytes in microvasculature is a prerequisite to realize smooth circulation. We develop a novel tool for the three-step \"Catch-Load-Launch\" manipulation of a human erythrocyte based on an ultra-high speed position control by a microfluidic \"robotic pump\". Quantification of the erythrocyte shape recovery as a function of loading time uncovered the critical time window for the transition between fast and slow recoveries. 
The comparison with erythrocytes under depletion of adenosine triphosphate revealed that the cytoskeletal remodeling over a whole cell occurs in 3 orders of magnitude longer timescale than the local dissociation-reassociation of a single spectrin node. Finally, we modeled septic conditions by incubating erythrocytes with endotoxin, and found that the exposure to endotoxin results in a significant delay in the characteristic transition time for cytoskeletal remodeling. The high speed manipulation of erythrocytes with a robotic pump technique allows for high throughput mechanical diagnosis of blood-related diseases." }, { "pmid": "25713696", "title": "On-chip actuation transmitter for enhancing the dynamic response of cell manipulation using a macro-scale pump.", "abstract": "An on-chip actuation transmitter for achieving fast and accurate cell manipulation is proposed. Instead of manipulating cell position by a directly connected macro-scale pump, polydimethylsiloxane deformation is used as a medium to transmit the actuation generated from the pump to control the cell position. This actuation transmitter has three main advantages. First, the dynamic response of cell manipulation is faster than the conventional method with direct flow control based on both the theoretical modeling and experimental results. The cell can be manipulated in a simple harmonic motion up to 130 Hz by the proposed actuation transmitter as opposed to 90 Hz by direct flow control. Second, there is no need to fill the syringe pump with the sample solution because the actuation transmitter physically separates the fluids between the pump and the cell flow, and consequently, only a very small quantity of the sample is required (<1 μl). In addition, such fluid separation makes it easy to keep the experiment platform sterilized because there is no direct fluid exchange between the sample and fluid inside the pump. Third, the fabrication process is simple because of the single-layer design, making it convenient to implement the actuation transmitter in different microfluidic applications. The proposed actuation transmitter is implemented in a lab-on-a-chip system for red blood cell (RBC) evaluation, where the extensibility of red blood cells is evaluated by manipulating the cells through a constriction channel at a constant velocity. The application shows a successful example of implementing the proposed transmitter." }, { "pmid": "1276386", "title": "Elastic area compressibility modulus of red cell membrane.", "abstract": "Micropipette measurements of isotropic tension vs. area expansion in pre-swollen single human red cells gave a value of 288 +/- 50 SD dyn/cm for the elastic, area compressibility modulus of the total membrane at 25 degrees C. This elastic constant, characterizing the resistance to area expansion or compression, is about 4 X 10(4) times greater than the elastic modulus for shear rigidity; therefore, in situations where deformation of the membrane does not require large isotropic tensions (e.g., in passage through normal capillaries), the membrane can be treated by a simple constitutive relation for a two-dimensionally, incompressible material (i.e. fixed area). The tension was found to be linear and reversible for the range of area changes observed (within the experimental system resolution of 10%). The maximum fractional area expansion required to produce lysis was uniformly distributed between 2 and 4% with 3% average and 0.7% SD. 
By heating the cells to 50 degrees C, it appears that the structural matrix (responsible for the shear rigidity and most of the strength in isotropic tension) is disrupted and primarily the lipid bilayer resists lysis. Therefore, the relative contributions of the structural matrix and lipid bilayer to the elastic, area compressibility could be estimated. The maximum isotropic tension at 25 degrees C is 10-12 dyn/cm and at 50 degrees C is between 3 and 4 dyn/cm. From this data, the respective compressibilities are estimated at 193 dyn/cm and 95 dyn/cm for structural network and bilayer. The latter value correlates well with data on in vitro, monolayer surface pressure versus area curves at oil-water interfaces." } ]
Micromachines
30404309
PMC6190313
10.3390/mi7080138
Fluid-Mediated Stochastic Self-Assembly at Centimetric and Sub-Millimetric Scales: Design, Modeling, and Control
Stochastic self-assembly provides promising means for building micro-/nano-structures with a variety of properties and functionalities. Numerous studies have been conducted on the control and modeling of the process in engineered self-assembling systems constituted of modules with varied capabilities ranging from completely reactive nano-/micro-particles to intelligent miniaturized robots. Depending on the capabilities of the constituting modules, different approaches have been utilized for controlling and modeling these systems. In the quest for a unifying control and modeling framework and within the broader perspective of investigating how stochastic control strategies can be adapted from the centimeter-scale down to the (sub-)millimeter-scale, as well as from mechatronic to MEMS-based technology, this work presents the outcomes of our research on self-assembly during the past few years. As the first step, we leverage an experimental platform to study self-assembly of water-floating passive modules at the centimeter scale. A dedicated computational framework is developed for real-time tracking, modeling and control of the formation of specific structures. Using a similar approach, we then demonstrate controlled self-assembly of microparticles into clusters of a preset dimension in a microfluidic chamber, where the control loop is closed again through real-time tracking customized for the much faster system dynamics. Finally, with the aim of distributing the intelligence and realizing programmable self-assembly, we present a novel experimental system for fluid-mediated programmable stochastic self-assembly of active modules at the centimeter scale. The system is built around the water-floating 3-cm-sized Lily robots, specifically designed to be operative in large swarms, and allows for exploring the whole range of fully-centralized to fully-distributed control strategies. The outcomes of our research efforts extend the state-of-the-art methodologies for designing, modeling and controlling massively-distributed, stochastic self-assembling systems at different length scales, constituted of modules from centimetric down to sub-millimetric size. As a result, our work provides a solid milestone in structure formation through controlled self-assembly.
2. Related Work
In this section, we provide an overview of the related work in the literature. We divide our review into two classes: the SA of miniaturized robots and the SA of M/NEMS.

2.1. Self-Assembly of Miniaturized Robots
The self-assembly of miniaturized robots has been studied in numerous works, in which a wide range of hardware implementations, including fully-autonomous robots and controllable environments, along with corresponding control approaches, have been developed. These works differ mainly in the capabilities of the robots, i.e., the SA building blocks, the type of environment and its level of controllability, and the approach employed to guide the SA process towards the target. The cubic modules presented in [11] are capable of forming structures in three dimensions by deploying magnetic latching and a lattice-based locomotion approach on a test table. Self-assembly of a swarm of autonomous floating robots in two dimensions has been studied in [12]. In [7], programmable self-assembly has been demonstrated to be a powerful means for the formation of structured patterns in two dimensions in a large swarm of miniaturized robots, the Kilobots. While the motion of the Kilobots is inherently noisy, the programmed primitive collective behavior implements a deterministic and quasi-serial approach to shape formation. Taking advantage of the stochastic ambient dynamics for module transportation can, however, allow for a simpler internal design of the modules, as well as increased parallelization of the SA process. 3D stochastic SA of passive modules on an active substrate is investigated in [13]. Tribolon modules stochastically assemble into floating structures [14]; they are actuated using vibrating motors, and the environment is not capable of providing any control to guide the assembling process. The robots have a pantograph for both energy supply and control, including a latching mechanism based on the Peltier effect. The intelligent programmable parts in [15] are capable of local communication via infra-red and have controllable permanent-magnet-based latches; the modules stochastically self-assemble on an air table, based on their internal behavior. The system of Pebble robots in [8] starts the process of shape formation with an ordered lattice; the stochastic forces in the environment are then used to detach unwanted blocks. Pebbles are only powered once they connect to the structure formed around the seed node; once connected and powered, they are capable of local communication among themselves. Abundant research has been dedicated to theoretical and experimental aspects of self-replication [16]. Self-replication of robotic units, for which self-assembled structure formation is crucial, has been demonstrated using five 2D coding strings as templates [17] and also using self-swiveling microcontroller-based gripping blocks [18]. Furthermore, a wide variety of novel platforms has been developed in the context of modular robotics, some of which have the capability of autonomous locomotion and docking [12,19,20,21,22,23].

Both theoretical and experimental aspects of achieving SA and aggregation are of high interest in distributed and modular robotics [20]. Probabilistic models of the SA of mobile robots have been developed in several works [15,24,25,26]. Stochastic distributed control of robotic swarms is studied in [27], employing modeling methods originally developed for chemical systems [28]. In several other studies, the chemical formalism is demonstrated to suit the description of SA, both in real-robot experimental scenarios and in simulation [9,29,30].

2.2. Self-Assembly of Micro-/Nano-Systems
Numerous applications are envisioned in the realm of SA of micro-/nano-systems, which demonstrate the potential and effectiveness of SA in providing alternative fabrication approaches across length scales and material interfaces. Landmark achievements include the SA of electrical networks [31], 3D electric circuits [32], integration of semiconductor devices into substrates [33,34,35,36], flexible LED surfaces [37], polyhedral containers [38] and monocrystalline solar cells [39]. Several types of physical interactions are employed: gravitational [40], hydrophobic [41], steric [42], electric [43], magnetic [44], capillary [45], DNA hybridization-mediated [46] and fluidic [47]. Most of these interactions have been shown to be tunable [48,49]. In almost all cases, the M/NEMS modules are designed such that they are able to scavenge energy from the environment and exploit the information coded in physical templates. Miniature MEMS robots fabricated by micromachining and capable of forming 2D structures several times their body size are demonstrated in [50]. Sub-millimetric MEMS robots utilizing a wireless resonant magnetic microactuator are demonstrated in [51]. An external AC electric field is used in [52] to control the electro-osmotic motion of light-responsive millimetric diodes. Self-folding miniature robots capable of being steered by an external magnetic field and of dissolving into the environment are demonstrated in [53].
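To illustrate the chemical-kinetics formalism referred to above, the snippet below gives a hedged Gillespie-style stochastic simulation of a toy aggregation "reaction" in which two single modules bind into a dimer and dimers occasionally break apart; the reaction set and rate constants are illustrative assumptions, not the models used in the cited works or in this paper.

```python
import numpy as np

# A toy chemical-kinetics model of stochastic self-assembly simulated with the
# Gillespie stochastic simulation algorithm: A + A -> A2 (binding) and
# A2 -> A + A (break-up). Rate constants and module counts are arbitrary.

def gillespie_aggregation(n_single=100, k_bind=0.002, k_break=0.05, t_end=60.0, seed=0):
    rng = np.random.default_rng(seed)
    t, A, A2 = 0.0, n_single, 0
    history = [(t, A, A2)]
    while t < t_end:
        a1 = k_bind * A * (A - 1) / 2      # propensity of a binding event
        a2 = k_break * A2                  # propensity of a break-up event
        a0 = a1 + a2
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)     # waiting time to the next event
        if rng.random() < a1 / a0:         # choose which event fires
            A, A2 = A - 2, A2 + 1
        else:
            A, A2 = A + 2, A2 - 1
        history.append((t, A, A2))
    return history

traj = gillespie_aggregation()
print(traj[-1])                            # final (time, singles, dimers)
```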
[ "20209016", "25124435", "17037977", "10947979", "16968780", "20349446", "20080682", "19708253", "11959929", "19517482", "17293850", "23580796", "17023540" ]
[ { "pmid": "20209016", "title": "Self-assembly from milli- to nanoscales: methods and applications.", "abstract": "The design and fabrication techniques for microelectromechanical systems (MEMS) and nanodevices are progressing rapidly. However, due to material and process flow incompatibilities in the fabrication of sensors, actuators and electronic circuitry, a final packaging step is often necessary to integrate all components of a heterogeneous microsystem on a common substrate. Robotic pick-and-place, although accurate and reliable at larger scales, is a serial process that downscales unfavorably due to stiction problems, fragility and sheer number of components. Self-assembly, on the other hand, is parallel and can be used for device sizes ranging from millimeters to nanometers. In this review, the state-of-the-art in methods and applications for self-assembly is reviewed. Methods for assembling three-dimensional (3D) MEMS structures out of two-dimensional (2D) ones are described. The use of capillary forces for folding 2D plates into 3D structures, as well as assembling parts onto a common substrate or aggregating parts to each other into 2D or 3D structures, is discussed. Shape matching and guided assembly by magnetic forces and electric fields are also reviewed. Finally, colloidal self-assembly and DNA-based self-assembly, mainly used at the nanoscale, are surveyed, and aspects of theoretical modeling of stochastic assembly processes are discussed." }, { "pmid": "25124435", "title": "Robotics. Programmable self-assembly in a thousand-robot swarm.", "abstract": "Self-assembly enables nature to build complex forms, from multicellular organisms to complex animal structures such as flocks of birds, through the interaction of vast numbers of limited and unreliable individuals. Creating this ability in engineered systems poses challenges in the design of both algorithms and physical systems that can operate at such scales. We report a system that demonstrates programmable self-assembly of complex two-dimensional shapes with a thousand-robot swarm. This was enabled by creating autonomous robots designed to operate in large groups and to cooperate through local interactions and by developing a collective algorithm for shape formation that is highly robust to the variability and error characteristic of large-scale decentralized systems. This work advances the aim of creating artificial swarms with the capabilities of natural ones." }, { "pmid": "17037977", "title": "Stochastic simulation of chemical kinetics.", "abstract": "Stochastic chemical kinetics describes the time evolution of a well-stirred chemically reacting system in a way that takes into account the fact that molecules come in whole numbers and exhibit some degree of randomness in their dynamical behavior. Researchers are increasingly using this approach to chemical kinetics in the analysis of cellular systems in biology, where the small molecular populations of only a few reactant species can lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. After reviewing the supporting theory of stochastic chemical kinetics, I discuss some recent advances in methods for using that theory to make numerical simulations. 
These include improvements to the exact stochastic simulation algorithm (SSA) and the approximate explicit tau-leaping procedure, as well as the development of two approximate strategies for simulating systems that are dynamically stiff: implicit tau-leaping and the slow-scale SSA." }, { "pmid": "10947979", "title": "Forming electrical networks in three dimensions by self-assembly", "abstract": "Self-assembly of millimeter-scale polyhedra, with surfaces patterned with solder dots, wires, and light-emitting diodes, generated electrically functional, three-dimensional networks. The patterns of dots and wires controlled the structure of the networks formed; both parallel and serial connections were generated." }, { "pmid": "16968780", "title": "Self-assembled single-crystal silicon circuits on plastic.", "abstract": "We demonstrate the use of self-assembly for the integration of freestanding micrometer-scale components, including single-crystal, silicon field-effect transistors (FETs) and diffusion resistors, onto flexible plastic substrates. Preferential self-assembly of multiple microcomponent types onto a common platform is achieved through complementary shape recognition and aided by capillary, fluidic, and gravitational forces. We outline a microfabrication process that yields single-crystal, silicon FETs in a freestanding, powder-like collection for use with self-assembly. Demonstrations of self-assembled FETs on plastic include logic inverters and measured electron mobility of 592 cm2/V-s. Finally, we extend the self-assembly process to substrates each containing 10,000 binding sites and realize 97% self-assembly yield within 25 min for 100-microm-sized elements. High-yield self-assembly of micrometer-scale functional devices as outlined here provides a powerful approach for production of macroelectronic systems." }, { "pmid": "20349446", "title": "Three-dimensional fabrication at small size scales.", "abstract": "Despite the fact that we live in a 3D world and macroscale engineering is 3D, conventional submillimeter-scale engineering is inherently 2D. New fabrication and patterning strategies are needed to enable truly 3D-engineered structures at small size scales. Here, strategies that have been developed over the past two decades that seek to enable such millimeter to nanoscale 3D fabrication and patterning are reviewed. A focus is the strategy of self-assembly, specifically in a biologically inspired, more deterministic form, known as self-folding. Self-folding methods can leverage the strengths of lithography to enable the construction of precisely patterned 3D structures and \"smart\" components. This self-assembly approach is compared with other 3D fabrication paradigms, and its advantages and disadvantages are discussed." }, { "pmid": "20080682", "title": "Self-assembly of microscopic chiplets at a liquid-liquid-solid interface forming a flexible segmented monocrystalline solar cell.", "abstract": "This paper introduces a method for self-assembling and electrically connecting small (20-60 micrometer) semiconductor chiplets at predetermined locations on flexible substrates with high speed (62500 chips/45 s), accuracy (0.9 micrometer, 0.14 degrees), and yield (> 98%). The process takes place at the triple interface between silicone oil, water, and a penetrating solder-patterned substrate. 
The assembly is driven by a stepwise reduction of interfacial free energy where chips are first collected and preoriented at an oil-water interface before they assemble on a solder-patterned substrate that is pulled through the interface. Patterned transfer occurs in a progressing linear front as the liquid layers recede. The process eliminates the dependency on gravity and sedimentation of prior methods, thereby extending the minimal chip size to the sub-100 micrometer scale. It provides a new route for the field of printable electronics to enable the integration of microscopic high performance inorganic semiconductors on foreign substrates with the freedom to choose target location, pitch, and integration density. As an example we demonstrate a fault-tolerant segmented flexible monocrystalline silicon solar cell, reducing the amount of Si that is used when compared to conventional rigid cells." }, { "pmid": "19708253", "title": "Hydrodynamically tunable affinities for fluidic assembly.", "abstract": "Most current micro- and nanoscale self-assembly methods rely on static, preprogrammed assembly affinities between the assembling components such as capillarity, DNA base pair matching, and geometric interactions. While these techniques have proven successful at creating relatively simple and regular structures, it is difficult to adapt these methods to enable dynamic reconfiguration of the structure or on-the-fly error correction. Here we demonstrate a technique to hydrodynamically tune affinities between assembling components by direct thermal modulation of the local viscosity field surrounding them. This approach is shown here for two-dimensional silicon elements of 500 microm length using a thermorheological fluid that undergoes reversible sol-gel transition on heating. Using this system, we demonstrate the ability to dynamically change the assembly point in a fluidic self-assembly process and selectively attract and reject elements from a larger structure. Although this technique is demonstrated here for a small number of passive mobile components around a fixed structure, it has the potential to overcome some of the limitations of current static affinity based self-assembly." }, { "pmid": "11959929", "title": "Beyond molecules: self-assembly of mesoscopic and macroscopic components.", "abstract": "Self-assembly is a process in which components, either separate or linked, spontaneously form ordered aggregates. Self-assembly can occur with components having sizes from the molecular to the macroscopic, provided that appropriate conditions are met. Although much of the work in self-assembly has focused on molecular components, many of the most interesting applications of self-assembling processes can be found at larger sizes (nanometers to micrometers). These larger systems also offer a level of control over the characteristics of the components and over the interactions among them that makes fundamental investigations especially tractable." }, { "pmid": "19517482", "title": "Nanoscale forces and their uses in self-assembly.", "abstract": "The ability to assemble nanoscopic components into larger structures and materials depends crucially on the ability to understand in quantitative detail and subsequently \"engineer\" the interparticle interactions. This Review provides a critical examination of the various interparticle forces (van der Waals, electrostatic, magnetic, molecular, and entropic) that can be used in nanoscale self-assembly. 
For each type of interaction, the magnitude and the length scale are discussed, as well as the scaling with particle size and interparticle distance. In all cases, the discussion emphasizes characteristics unique to the nanoscale. These theoretical considerations are accompanied by examples of recent experimental systems, in which specific interaction types were used to drive nanoscopic self-assembly. Overall, this Review aims to provide a comprehensive yet easily accessible resource of nanoscale-specific interparticle forces that can be implemented in models or simulations of self-assembly processes at this scale." }, { "pmid": "17293850", "title": "Remotely powered self-propelling particles and micropumps based on miniature diodes.", "abstract": "Microsensors and micromachines that are capable of self-propulsion through fluids could revolutionize many aspects of technology. Few principles to propel such devices and supply them with energy are known. Here, we show that various types of miniature semiconductor diodes floating in water act as self-propelling particles when powered by an external alternating electric field. The millimetre-sized diodes rectify the voltage induced between their electrodes. The resulting particle-localized electro-osmotic flow propels them in the direction of either the cathode or the anode, depending on their surface charge. These rudimentary self-propelling devices can emit light or respond to light and could be controlled by internal logic. Diodes embedded in the walls of microfluidic channels provide locally distributed pumping or mixing functions powered by a global external field. The combined application of a.c. and d.c. fields in such devices allows decoupling of the velocity of the particles and the liquid and could be used for on-chip separations." }, { "pmid": "23580796", "title": "Planning and Control for Microassembly of Structures Composed of Stress-Engineered MEMS Microrobots.", "abstract": "We present control strategies that implement planar microassembly using groups of stress-engineered MEMS microrobots (MicroStressBots) controlled through a single global control signal. The global control signal couples the motion of the devices, causing the system to be highly underactuated. In order for the robots to assemble into arbitrary planar shapes despite the high degree of underactuation, it is desirable that each robot be independently maneuverable (independently controllable). To achieve independent control, we fabricated robots that behave (move) differently from one another in response to the same global control signal. We harnessed this differentiation to develop assembly control strategies, where the assembly goal is a desired geometric shape that can be obtained by connecting the chassis of individual robots. We derived and experimentally tested assembly plans that command some of the robots to make progress toward the goal, while other robots are constrained to remain in small circular trajectories (closed-loop orbits) until it is their turn to move into the goal shape. Our control strategies were tested on systems of fabricated MicroStressBots. The robots are 240-280 μm × 60 μm × 7-20 μm in size and move simultaneously within a single operating environment. We demonstrated the feasibility of our control scheme by accurately assembling five different types of planar microstructures." 
}, { "pmid": "17023540", "title": "Recent progress in understanding hydrophobic interactions.", "abstract": "We present here a brief review of direct force measurements between hydrophobic surfaces in aqueous solutions. For almost 70 years, researchers have attempted to understand the hydrophobic effect (the low solubility of hydrophobic solutes in water) and the hydrophobic interaction or force (the unusually strong attraction of hydrophobic surfaces and groups in water). After many years of research into how hydrophobic interactions affect the thermodynamic properties of processes such as micelle formation (self-assembly) and protein folding, the results of direct force measurements between macroscopic surfaces began to appear in the 1980s. Reported ranges of the attraction between variously prepared hydrophobic surfaces in water grew from the initially reported value of 80-100 Angstrom to values as large as 3,000 Angstrom. Recent improved surface preparation techniques and the combination of surface force apparatus measurements with atomic force microscopy imaging have made it possible to explain the long-range part of this interaction (at separations >200 Angstrom) that is observed between certain surfaces. We tentatively conclude that only the short-range part of the attraction (<100 Angstrom) represents the true hydrophobic interaction, although a quantitative explanation for this interaction will require additional research. Although our force-measuring technique did not allow collection of reliable data at separations <10 Angstrom, it is clear that some stronger force must act in this regime if the measured interaction energy curve is to extrapolate to the measured adhesion energy as the surface separation approaches zero (i.e., as the surfaces come into molecular contact)." } ]
Micromachines
30404351
PMC6190329
10.3390/mi7100176
An On-Chip RBC Deformability Checker Significantly Improves Velocity-Deformation Correlation
An on-chip deformability checker is proposed to improve the velocity–deformation correlation for red blood cell (RBC) evaluation. RBC deformability has been linked to human diseases and, in conventional approaches, is evaluated from the RBC transit velocity through a microfluidic constriction. The correlation between transit velocity and the amount of deformation provides statistical information on RBC deformability. However, such correlations are usually only moderate, or even weak, in practical evaluations because the range of RBC deformation is limited. To solve this issue, we implemented three constrictions of different widths in the proposed checker, so that three different deformation regions can be applied to RBCs. By considering the cell responses from the three regions as a whole, we effectively extend the range of cell deformation covered by the evaluation and thereby resolve the issue of the limited deformation range. RBCs from five volunteer subjects were tested using the proposed checker. The results show that the correlation between cell deformation and transit velocity is significantly improved by the proposed deformability checker: the absolute values of the correlation coefficients increase from an average of 0.54 to 0.92. The effects of cell size, shape, and orientation on the evaluation are discussed based on the experimental results. The proposed checker is expected to be useful for RBC evaluation in medical practice.
2. Related Works
Different approaches for evaluating cell deformability have been developed [11,12,13,14]. For example, Radmacher et al. measured the viscoelastic properties of human platelets with an atomic force microscope (AFM) [15], and Brandao et al. used optical tweezers for the mechanical characterization of human RBCs [16]. Among these approaches, microfluidic devices provide a convenient way to evaluate single-cell deformability at high throughput and in a clean environment [7,17,18,19,20,21,22]. For example, Otto et al. developed real-time deformability cytometry for on-the-fly cell phenotyping [23], and Gossett et al. achieved a rate of 2000 cells per second by hydrodynamic stretching [24]. Parallel constrictions, which are similar in structure to the proposed design, have previously been developed for different purposes [25,26]. For example, Gifford et al. used parallel microchannels with different shapes to measure the mean corpuscular volume of individual RBCs [27], and Young et al. investigated endothelial cell adhesion under different coating conditions with a parallel microfluidic network [28].
To the best of the authors' knowledge, the proposed deformability checker is the first work that aims to improve the velocity–deformation correlation by widening the range of cell deformation. The idea of the checker is simple and straightforward, and according to the experimental results, the method significantly improves the correlation between transit velocity and cell deformation. It is believed that the proposed deformability checker could benefit RBC evaluation in medical practice.
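The abstract above reports that pooling the velocity–deformation samples from three constriction widths raises the absolute correlation coefficient from about 0.54 to 0.92. As a rough illustration of that pooling step, the following Python sketch concatenates per-constriction (velocity, deformation) samples and computes the Pearson coefficient with NumPy; the function name, the three example datasets, and all numeric values are hypothetical and are not taken from the paper.

```python
import numpy as np

def pooled_velocity_deformation_correlation(regions):
    """Pool per-constriction (velocity, deformation) samples and return
    the Pearson correlation coefficient of the combined data.

    `regions` is a list of (velocities, deformations) pairs, one pair per
    constriction width (three in the proposed checker).
    """
    velocities = np.concatenate([np.asarray(v, dtype=float) for v, _ in regions])
    deformations = np.concatenate([np.asarray(d, dtype=float) for _, d in regions])
    # np.corrcoef returns the 2x2 correlation matrix; the off-diagonal entry
    # is the Pearson coefficient between the two pooled variables.
    return np.corrcoef(velocities, deformations)[0, 1]

# Hypothetical usage: three constrictions, each probing a different
# deformation range, are evaluated jointly rather than one at a time.
narrow = ([1.2, 1.5, 1.1], [0.62, 0.70, 0.60])
medium = ([2.0, 2.4, 2.2], [0.45, 0.52, 0.48])
wide   = ([3.1, 3.5, 3.3], [0.30, 0.36, 0.33])
r = pooled_velocity_deformation_correlation([narrow, medium, wide])
print(f"pooled |r| = {abs(r):.2f}")
```

Pooling widens the spread of both variables across the three regions, which is what allows a linear velocity–deformation trend to emerge more clearly than it would within any single constriction.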
[ "16321622", "11807013", "11192248", "2665857", "22581052", "24658243", "10033323", "17257698", "11336534", "20176536", "8770233", "23681312", "21826361", "24463842", "25643151", "22547795", "17366485", "22367556", "12524315", "18030398", "25325848", "22179505", "26865054" ]
[ { "pmid": "16321622", "title": "Mechanical models for living cells--a review.", "abstract": "As physical entities, living cells possess structural and physical properties that enable them to withstand the physiological environment as well as mechanical stimuli occurring within and outside the body. Any deviation from these properties will not only undermine the physical integrity of the cells, but also their biological functions. As such, a quantitative study in single cell mechanics needs to be conducted. In this review, we will examine some mechanical models that have been developed to characterize mechanical responses of living cells when subjected to both transient and dynamic loads. The mechanical models include the cortical shell-liquid core (or liquid drop) models which are widely applied to suspended cells; the solid model which is generally used for adherent cells; the power-law structural damping model which is more suited for studying the dynamic behavior of adherent cells; and finally, the biphasic model which has been widely used to study musculoskeletal cell mechanics. Based upon these models, future attempts can be made to develop even more detailed and accurate mechanical models of living cells once these three factors are adequately addressed: structural heterogeneity, appropriate constitutive relations for each of the distinct subcellular regions and components, and active forces acting within the cell. More realistic mechanical models of living cells can further contribute towards the study of mechanotransduction in cells." }, { "pmid": "11807013", "title": "Contribution of parasite proteins to altered mechanical properties of malaria-infected red blood cells.", "abstract": "Red blood cells (RBCs) parasitized by Plasmodium falciparum are rigid and poorly deformable and show abnormal circulatory behavior. During parasite development, knob-associated histidine-rich protein (KAHRP) and P falciparum erythrocyte membrane protein 3 (PfEMP3) are exported from the parasite and interact with the RBC membrane skeleton. Using micropipette aspiration, the membrane shear elastic modulus of RBCs infected with transgenic parasites (with kahrp or pfemp3 genes deleted) was measured to determine the contribution of these proteins to the increased rigidity of parasitized RBCs (PRBCs). In the absence of either protein, the level of membrane rigidification was significantly less than that caused by the normal parental parasite clone. KAHRP had a significantly greater effect on rigidification than PfEMP3, contributing approximately 51% of the overall increase that occurs in PRBCs compared to 15% for PfEMP3. This study provides the first quantitative information on the contribution of specific parasite proteins to altered mechanical properties of PRBCs." }, { "pmid": "11192248", "title": "Viscoelastic properties of chondrocytes from normal and osteoarthritic human cartilage.", "abstract": "The deformation behavior and mechanical properties of articular chondrocytes are believed to play an important role in their response to mechanical loading of the extracellular matrix. This study utilized the micropipette aspiration test to measure the viscoelastic properties of chondrocytes isolated from macroscopically normal or end-stage osteoarthritic cartilage. A three-parameter standard linear solid was used to model the viscoelastic behavior of the cells. Significant differences were found between the mechanical properties of chondrocytes isolated from normal and osteoarthritic cartilage. 
Specifically, osteoarthritic chondrocytes exhibited a significantly higher equilibrium modulus (0.33 +/- 0.23 compared with 0.24 +/- 0.11 kPa), instantaneous modulus (0.63 +/- 0.51 compared with 0.41 +/- 0.17 kPa), and apparent viscosity (5.8 +/- 6.5 compared with 3.0 +/- 1.8 kPa-s) compared with chondrocytes isolated from macroscopically normal, nonosteoarthritic cartilage. The elastic moduli and relaxation time constant determined experimentally in this study were used to estimate the apparent biphasic properties of the chondrocyte on the basis of the equation for the gel relaxation time of a biphasic material. The differences in viscoelastic properties may reflect alterations in the structure and composition of the chondrocyte cytoskeleton that have previously been associated with osteoarthritic cartilage. Coupled with earlier theoretical models of cell-matrix interactions in articular cartilage, the increased elastic and viscous properties suggest that the mechanical environment of the chondrocyte may be altered in osteoarthritic cartilage." }, { "pmid": "2665857", "title": "Abnormalities in the mechanical properties of red blood cells caused by Plasmodium falciparum.", "abstract": "Although changes in the mechanical properties of infected red cells may contribute to the pathophysiology of malaria, such changes have not previously been described in detail. In this study, the physical properties of individual cells from both clinical and cultured samples infected with Plasmodium falciparum were tested using micropipette aspiration techniques. Cells containing ring forms took about 50% longer to enter 3 microns pipettes compared with nonparasitised cells, and there was a similar increase in the critical pressure required to induce cell entry. These abnormalities were similar in clinical and cultured samples. More mature cultured parasites (ie, trophozoites and schizonts containing pigment) caused much greater loss of deformability, with entry time and pressure increased four to sixfold. The decrease in deformability of the ring forms was attributable to a deficit in cell surface area/volume ratio (based on micropipette measurement of the surface area and volume of individual cells) and slight stiffening of the cell membrane (shear elastic modulus increased 13%, as measured by pipette aspiration of small membrane tongues). Measurement of the rate of cell shape recovery indicated that the membrane of parasitised cells was not more viscous. The main factor in the drastic loss of deformability of the trophozoites and schizonts was the presence of the large very resistant parasite itself. Otherwise, the cell surface area/volume deficit was slightly less and membrane rigidification slightly greater compared with ring forms. The above abnormalities should cause the trophozoites and schizonts to have great difficulty in traversing splenic or marrow sinuses and could contribute to microvascular occlusion and sequestration. On the other hand, the ring forms may be expected to circulate relatively unhindered." }, { "pmid": "22581052", "title": "High-throughput biophysical measurement of human red blood cells.", "abstract": "This paper reports a microfluidic system for biophysical characterization of red blood cells (RBCs) at a speed of 100-150 cells s(-1). Electrical impedance measurement is made when single RBCs flow through a constriction channel that is marginally smaller than RBCs' diameters. 
The multiple parameters quantified as mechanical and electrical signatures of each RBC include transit time, impedance amplitude ratio, and impedance phase increase. Histograms, compiled from 84,073 adult RBCs (from 5 adult blood samples) and 82,253 neonatal RBCs (from 5 newborn blood samples), reveal different biophysical properties across samples and between the adult and neonatal RBC populations. In comparison with previously reported microfluidic devices for single RBC biophysical measurement, this system has a higher throughput, higher signal to noise ratio, and the capability of performing multi-parameter measurements." }, { "pmid": "24658243", "title": "A new dimensionless index for evaluating cell stiffness-based deformability in microchannel.", "abstract": "This paper proposes a new index for evaluating the stiffness-based deformability of a cell using a microchannel. In conventional approaches, the transit time of a cell through a microchannel is often utilized for the evaluation of cell deformability. However, such time includes both the information of cell stiffness and viscosity. In this paper, we eliminate the effect from cell viscosity, and focus on the cell stiffness only. We find that the velocity of a cell varies when it enters a channel, and eventually reaches to equilibrium where the velocity becomes constant. The constant velocity is defined as the equilibrium velocity of the cell, and it is utilized to define the observability of stiffness-based deformability. The necessary and sufficient numbers of sensing points for evaluating stiffness-based deformability are discussed. Through the dimensional analysis on the microchannel system, three dimensionless parameters determining stiffness-based deformability are derived, and a new index is introduced based on these parameters. The experimental study is conducted on the red blood cells from a healthy subject and a diabetes patient. With the proposed index, we showed that the experimental data can be nicely arranged." }, { "pmid": "17257698", "title": "Biomechanics approaches to studying human diseases.", "abstract": "Nanobiomechanics has recently been identified as an emerging field that can potentially make significant contributions in the study of human diseases. Research into biomechanics at the cellular and molecular levels of some human diseases has not only led to a better elucidation of the mechanisms behind disease progression, because diseased cells differ physically from healthy ones, but has also provided important knowledge in the fight against these diseases. This article highlights some of the cell and molecular biomechanics research carried out on human diseases such as malaria, sickle cell anemia and cancer and aims to provide further important insights into the pathophysiology of such diseases. It is hoped that this can lead to new methods of early detection, diagnosis and treatment." }, { "pmid": "11336534", "title": "Direct measurement of erythrocyte deformability in diabetes mellitus with a transparent microchannel capillary model and high-speed video camera system.", "abstract": "To measure erythrocyte deformability in vitro, we made transparent microchannels on a crystal substrate as a capillary model. We observed axisymmetrically deformed erythrocytes and defined a deformation index directly from individual flowing erythrocytes. By appropriate choice of channel width and erythrocyte velocity, we could observe erythrocytes deforming to a parachute-like shape similar to that occurring in capillaries. 
The flowing erythrocytes magnified 200-fold through microscopy were recorded with an image-intensified high-speed video camera system. The sensitivity of deformability measurement was confirmed by comparing the deformation index in healthy controls with erythrocytes whose membranes were hardened by glutaraldehyde. We confirmed that the crystal microchannel system is a valuable tool for erythrocyte deformability measurement. Microangiopathy is a characteristic complication of diabetes mellitus. A decrease in erythrocyte deformability may be part of the cause of this complication. In order to identify the difference in erythrocyte deformability between control and diabetic erythrocytes, we measured erythrocyte deformability using transparent crystal microchannels and a high-speed video camera system. The deformability of diabetic erythrocytes was indeed measurably lower than that of erythrocytes in healthy controls. This result suggests that impaired deformability in diabetic erythrocytes can cause altered viscosity and increase the shear stress on the microvessel wall." }, { "pmid": "20176536", "title": "Mechanical characterization of human red blood cells under different osmotic conditions by robotic manipulation with optical tweezers.", "abstract": "The physiological functions of human red blood cells (RBCs) play a crucial role to human health and are greatly influenced by their mechanical properties. Any alteration of the cell mechanics may cause human diseases. The osmotic condition is an important factor to the physiological environment, but its effect on RBCs has been little studied. To investigate this effect, robotic manipulation technology with optical tweezers is utilized in this paper to characterize the mechanical properties of RBCs in different osmotic conditions. The effectiveness of this technology is demonstrated first in the manipulation of microbeads. Then the optical tweezers are used to stretch RBCs to acquire the force-deformation relationships. To extract cell properties from the experimental data, a mechanical model is developed for RBCs in hypotonic conditions by extending our previous work , and the finite element model is utilized for RBCs in isotonic and hypertonic conditions. Through comparing the modeling results to the experimental data, the shear moduli of RBCs in different osmotic solutions are characterized, which shows that the cell stiffness increases with elevated osmolality. Furthermore, the property variation and potential biomedical significance of this study are discussed. In conclusion, this study indicates that the osmotic stress has a significant effect on the cell properties of human RBCs, which may provide insight into the pathology analysis and therapy of some human diseases." }, { "pmid": "8770233", "title": "Measuring the viscoelastic properties of human platelets with the atomic force microscope.", "abstract": "We have measured force curves as a function of the lateral position on top of human platelets with the atomic force microscope. These force curves show the indentation of the cell as the tip loads the sample. By analyzing these force curves we were able to determine the elastic modulus of the platelet with a lateral resolution of approximately 100 nm. The elastic moduli were in a range of 1-50 kPa measured in the frequency range of 1-50 Hz. Loading forces could be controlled with a resolution of 80 pN and indentations of the platelet could be determined with a resolution of 20 nm." 
}, { "pmid": "23681312", "title": "Recent advances in microfluidic techniques for single-cell biophysical characterization.", "abstract": "Biophysical (mechanical and electrical) properties of living cells have been proven to play important roles in the regulation of various biological activities at the molecular and cellular level, and can serve as promising label-free markers of cells' physiological states. In the past two decades, a number of research tools have been developed for understanding the association between the biophysical property changes of biological cells and human diseases; however, technical challenges of realizing high-throughput, robust and easy-to-perform measurements on single-cell biophysical properties have yet to be solved. In this paper, we review emerging tools enabled by microfluidic technologies for single-cell biophysical characterization. Different techniques are compared. The technical details, advantages, and limitations of various microfluidic devices are discussed." }, { "pmid": "21826361", "title": "Classification of cell types using a microfluidic device for mechanical and electrical measurement on single cells.", "abstract": "This paper presents a microfluidic system for cell type classification using mechanical and electrical measurements on single cells. Cells are aspirated continuously through a constriction channel with cell elongations and impedance profiles measured simultaneously. The cell transit time through the constriction channel and the impedance amplitude ratio are quantified as cell's mechanical and electrical property indicators. The microfluidic device and measurement system were used to characterize osteoblasts (n=206) and osteocytes (n=217), revealing that osteoblasts, compared with osteocytes, have a larger cell elongation length (64.51 ± 14.98 μm vs. 39.78 ± 7.16 μm), a longer transit time (1.84 ± 1.48 s vs. 0.94 ± 1.07 s), and a higher impedance amplitude ratio (1.198 ± 0.071 vs. 1.099 ± 0.038). Pattern recognition using the neural network was applied to cell type classification, resulting in classification success rates of 69.8% (transit time alone), 85.3% (impedance amplitude ratio alone), and 93.7% (both transit time and impedance amplitude ratio as input to neural network) for osteoblasts and osteocytes. The system was also applied to test EMT6 (n=747) and EMT6/AR1.0 cells (n=770, EMT6 treated by doxorubicin) that have a comparable size distribution (cell elongation length: 51.47 ± 11.33 μm vs. 50.09 ± 9.70 μm). The effects of cell size on transit time and impedance amplitude ratio were investigated. Cell classification success rates were 51.3% (cell elongation alone), 57.5% (transit time alone), 59.6% (impedance amplitude ratio alone), and 70.2% (both transit time and impedance amplitude ratio). These preliminary results suggest that biomechanical and bioelectrical parameters, when used in combination, could provide a higher cell classification success rate than using electrical or mechanical parameter alone." }, { "pmid": "24463842", "title": "Red blood cell fatigue evaluation based on the close-encountering point between extensibility and recoverability.", "abstract": "Red blood cells (RBC) circulate the human body several hundred thousand times in their life span. Therefore, their deformability is really important, especially when they pass through a local capillary whose diameter can be as narrow as 3 μm. 
While there have been a number of works discussing the deformability in a simulated capillary such as a microchannel, as far as we examined in the literature, no work focusing on the change of shape after reciprocated mechanical stress has been reported so far. One of the reasons is that there have been no appropriate experimental systems to achieve such a test. This paper presents a new concept of RBC fatigue evaluation. The fatigue state is defined by the time of reciprocated mechanical stress when the extensibility and the recoverability characteristics meet each other. Our challenge is how to construct a system capable of achieving stable and accurate control of RBCs in a microchannel. For this purpose, we newly introduced two fundamental components. One is a robotic pump capable of manipulating a cell in the accuracy of ±0.24 μm in an equilibrium state with a maximum response time of 15 ms. The other is an online high speed camera capable of chasing the position of RBCs with a sampling rate of 1 kHz. By utilizing these components, we could achieve continuous observation of the length of a RBC over a 1000 times reciprocated mechanical stress. Through these experiments, we found that the repeat number that results in the fatigue state has a close correlation with extensibility." }, { "pmid": "25643151", "title": "Real-time deformability cytometry: on-the-fly cell mechanical phenotyping.", "abstract": "We introduce real-time deformability cytometry (RT-DC) for continuous cell mechanical characterization of large populations (>100,000 cells) with analysis rates greater than 100 cells/s. RT-DC is sensitive to cytoskeletal alterations and can distinguish cell-cycle phases, track stem cell differentiation into distinct lineages and identify cell populations in whole blood by their mechanical fingerprints. This technique adds a new marker-free dimension to flow cytometry with diverse applications in biology, biotechnology and medicine." }, { "pmid": "22547795", "title": "Hydrodynamic stretching of single cells for large population mechanical phenotyping.", "abstract": "Cell state is often assayed through measurement of biochemical and biophysical markers. Although biochemical markers have been widely used, intrinsic biophysical markers, such as the ability to mechanically deform under a load, are advantageous in that they do not require costly labeling or sample preparation. However, current techniques that assay cell mechanical properties have had limited adoption in clinical and cell biology research applications. Here, we demonstrate an automated microfluidic technology capable of probing single-cell deformability at approximately 2,000 cells/s. The method uses inertial focusing to uniformly deliver cells to a stretching extensional flow where cells are deformed at high strain rates, imaged with a high-speed camera, and computationally analyzed to extract quantitative parameters. This approach allows us to analyze cells at throughputs orders of magnitude faster than previously reported biophysical flow cytometers and single-cell mechanics tools, while creating easily observable larger strains and limiting user time commitment and bias through automation. Using this approach we rapidly assay the deformability of native populations of leukocytes and malignant cells in pleural effusions and accurately predict disease state in patients with cancer and immune activation with a sensitivity of 91% and a specificity of 86%. 
As a tool for biological research, we show the deformability we measure is an early biomarker for pluripotent stem cell differentiation and is likely linked to nuclear structural changes. Microfluidic deformability cytometry brings the statistical accuracy of traditional flow cytometric techniques to label-free biophysical biomarkers, enabling applications in clinical diagnostics, stem cell characterization, and single-cell biophysics." }, { "pmid": "17366485", "title": "Parallel separation of multiple samples with negative pressure sample injection on a 3-D microfluidic array chip.", "abstract": "A simple and powerful microfluidic array chip-based electrophoresis system, which is composed of a 3-D microfluidic array chip, a microvacuum pump-based negative pressure sampling device, a high-voltage supply and an LIF detector, was developed. The 3-D microfluidic array chip was fabricated with three glass plates, in which a common sample waste bus (SW(bus)) was etched in the bottom layer plate to avoid intersecting with the separation channel array. The negative pressure sampling device consists of a microvacuum air pump, a buffer vessel, a 3-way electromagnet valve, and a vacuum gauge. In the sample loading step, all the six samples and buffer solutions were drawn from their reservoirs across the injection intersections through the SW(bus) toward the common sample waste reservoir (SW(T)) by negative pressure. Only 0.5 s was required to obtain six pinched sample plugs at the channel crossings. By switching the three-way electromagnetic valve to release the vacuum in the reservoir SW(T), six sample plugs were simultaneously injected into the separation channels by EOF and electrophoretic separation was activated. Parallel separations of different analytes are presented on the 3-D array chip by using the newly developed sampling device." }, { "pmid": "22367556", "title": "Improvement in cell capture throughput using parallel bioactivated microfluidic channels.", "abstract": "Optimization of targeted cell capture with microfluidic devices continues to be a challenge. On the one hand, microfluidics allow working with microliter volumes of liquids, whereas various applications in the real world require detection of target analyte in large volumes, such as capture of rare cell types in several ml of blood. This contrast of volumes (microliter vs. ml) has prevented the emergence of microfluidic cell capture sensors in the clinical setting. Here, we study the improvement in cell capture and throughput achieved using parallel bioactivated microfluidic channels. The device consists of channels in parallel with each other tied to a single channel. We discuss fabrication and testing of our devices, and show the ability for an improvement in throughput detection of target cells." }, { "pmid": "12524315", "title": "Parallel microchannel-based measurements of individual erythrocyte areas and volumes.", "abstract": "We describe a microchannel device which utilizes a novel approach to obtain area and volume measurements on many individual red blood cells. Red cells are aspirated into the microchannels much as a single red blood cell is aspirated into a micropipette. Inasmuch as there are thousands of identical microchannels with defined geometry, data for many individual red cells can be rapidly acquired, and the fundamental heterogeneity of cell membrane biophysics can be analyzed. 
Fluorescent labels can be used to quantify red cell surface and cytosolic features of interest simultaneously with the measurement of area and volume for a given cell. Experiments that demonstrate and evaluate the microchannel measuring capabilities are presented and potential improvements and extensions are discussed." }, { "pmid": "18030398", "title": "Matrix-dependent adhesion of vascular and valvular endothelial cells in microfluidic channels.", "abstract": "The interactions between endothelial cells and the underlying extracellular matrix regulate adhesion and cellular responses to microenvironmental stimuli, including flow-induced shear stress. In this study, we investigated the adhesion properties of primary porcine aortic endothelial cells (PAECs) and valve endothelial cells (PAVECs) in a microfluidic network. Taking advantage of the parallel arrangement of the microchannels, we compared adhesion of PAECs and PAVECs to fibronectin and type I collagen, two prominent extracellular matrix proteins, over a broad range of concentrations. Cell spreading was measured morphologically, based on cytoplasmic staining with a vital dye, while adhesion strength was characterized by the number of cells attached after application of shear stresses of 11, 110, and 220 dyn cm(-2). Results showed that PAVECs were more well spread on fibronectin than on type I collagen (P < 0.0001), particularly for coating concentrations of 100, 200, and 500 microg mL(-1). PAVECs also withstood shear significantly better on fibronectin than on collagen for 500 microg mL(-1). PAECs were more well spread on collagen compared to PAVECs (P < 0.0001), but did not have significantly better adhesion strength. These results demonstrate that cell adhesion is both cell-type and matrix dependent. Furthermore, they reveal important phenotypic differences between vascular and valvular endothelium, with implications for endothelial mechanobiology and the design of microdevices and engineered tissues." }, { "pmid": "25325848", "title": "Multiplexed fluidic plunger mechanism for the measurement of red blood cell deformability.", "abstract": "The extraordinary deformability of red blood cells gives them the ability to repeatedly transit through the microvasculature of the human body. The loss of this capability is part of the pathology of a wide range of diseases including malaria, hemoglobinopathies, and micronutrient deficiencies. We report on a technique for multiplexed measurements of the pressure required to deform individual red blood cell through micrometer-scale constrictions. This measurement is performed by first infusing single red blood cells into a parallel array of ~1.7 μm funnel-shaped constrictions. Next, a saw-tooth pressure waveform is applied across the constrictions to squeeze each cell through its constriction. The threshold deformation pressure is then determined by relating the pressure-time data with the video of the deformation process. Our key innovation is a self-compensating fluidic network that ensures identical pressures are applied to each cell regardless of its position, as well as the presence of cells in neighboring constrictions. These characteristics ensure the consistency of the measurement process and robustness against blockages of the constrictions by rigid cells and debris. We evaluate this technique using in vitro cultures of RBCs infected with P. falciparum, the parasite that causes malaria, to demonstrate the ability to profile the deformability signature of a heterogeneous sample." 
}, { "pmid": "22179505", "title": "Design of pressure-driven microfluidic networks using electric circuit analogy.", "abstract": "This article reviews the application of electric circuit methods for the analysis of pressure-driven microfluidic networks with an emphasis on concentration- and flow-dependent systems. The application of circuit methods to microfluidics is based on the analogous behaviour of hydraulic and electric circuits with correlations of pressure to voltage, volumetric flow rate to current, and hydraulic to electric resistance. Circuit analysis enables rapid predictions of pressure-driven laminar flow in microchannels and is very useful for designing complex microfluidic networks in advance of fabrication. This article provides a comprehensive overview of the physics of pressure-driven laminar flow, the formal analogy between electric and hydraulic circuits, applications of circuit theory to microfluidic network-based devices, recent development and applications of concentration- and flow-dependent microfluidic networks, and promising future applications. The lab-on-a-chip (LOC) and microfluidics community will gain insightful ideas and practical design strategies for developing unique microfluidic network-based devices to address a broad range of biological, chemical, pharmaceutical, and other scientific and technical challenges." }, { "pmid": "26865054", "title": "Deformation and internal stress in a red blood cell as it is driven through a slit by an incoming flow.", "abstract": "To understand the deformation and internal stress of a red blood cell when it is pushed through a slit by an incoming flow, we conduct a numerical investigation by combining a fluid-cell interaction model based on boundary-integral equations with a multiscale structural model of the cell membrane that takes into account the detailed molecular architecture of this biological system. Our results confirm the existence of cell 'infolding', during which part of the membrane is inwardly bent to form a concave region. The time histories and distributions of area deformation, shear deformation, and contact pressure during and after the translocation are examined. Most interestingly, it is found that in the recovery phase after the translocation significant dissociation pressure may develop between the cytoskeleton and the lipid bilayer. The magnitude of this pressure is closely related to the locations of the dimple elements during the transit. Large dissociation pressure in certain cases suggests the possibility of mechanically induced structural remodeling and structural damage such as vesiculation. With quantitative knowledge about the stability of intra-protein, inter-protein and protein-to-lipid linkages under dynamic loads, it will be possible to achieve numerical prediction of these processes." } ]
Micromachines
null
PMC6190445
10.3390/mi8030080
Transfer Function of Macro-Micro Manipulation on a PDMS Microfluidic Chip
To achieve fast and accurate cell manipulation in a microfluidic channel, it is essential to know the true nature of its input–output relationship. This paper aims to reveal the transfer function of such micromanipulation when it is controlled by a macro-scale actuator. Both a theoretical model and experimental results for the manipulation are presented. A second-order transfer function is derived from the proposed model, in which the polydimethylsiloxane (PDMS) deformation plays an important role. Experiments are conducted with input frequencies up to 300 Hz. An interesting observation from the experimental results is that the measured frequency response behaves just like a first-order integrator. The role of PDMS deformation in the transfer function is discussed based on the experimentally determined parameters and the proposed model.
2. Related Works
Various approaches have been developed for cell manipulation, for example, flow control in a microfluidic channel [2,6,8,9], optical tweezers [10,11], micro grippers [12,13,14,15], electrical or magnetic forces [16], and acoustic trapping [17,18,19]. A high-speed pump and high-speed vision are often combined for high-speed manipulation in a microfluidic channel; for example, Chen et al. performed high-speed sorting of single bacterial cells with PZT pumping [20]. Such manipulation is especially important for active cell assessments. For example, Monzawa et al. introduced an actuation transmitter for cell manipulation and evaluation [2], Sakuma et al. applied a fatigue test to human red blood cells by imparting periodic mechanical stress [8], and Murakami et al. investigated the shape recovery of a cell by controlling the loading time in a constriction [21]. The throughput and stability of such active assessments depend directly on the manipulation speed and resolution. While the frequency characteristics of such closed-loop manipulation systems have been discussed previously [2,6], knowledge of the open-loop transfer function is essential for designing faster and more accurate cell manipulation systems. To the best of our knowledge, no previous work has discussed the transfer function of an open-loop PDMS microfluidic channel, and this is the first work to reveal the true nature of macro-micro manipulation on a PDMS chip.
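The abstract above states that a second-order transfer function is derived for the PDMS-mediated manipulation, yet the measured frequency response up to 300 Hz looks like a first-order integrator. Since the paper's actual model and parameters are not given here, the sketch below uses a generic, made-up second-order form G(s) = K / (s(τs + 1)) purely to illustrate how such a model collapses to integrator-like behavior when its second pole lies well above the tested band; K, τ, and the frequency grid are assumptions, and scipy.signal is used only for the generic Bode computation.

```python
import numpy as np
from scipy import signal

# Generic second-order model with a free integrator, used only as an
# illustration: G(s) = K / (s * (tau*s + 1)).  K and tau are made-up values,
# not parameters reported by the paper; the pole is assumed to sit well
# above the experimentally tested band.
K, tau = 1.0, 1.0 / (2 * np.pi * 3000)          # assumed pole at ~3 kHz
plant = signal.TransferFunction([K], [tau, 1.0, 0.0])
integrator = signal.TransferFunction([K], [1.0, 0.0])

# Evaluate both magnitude responses over the tested band (1 Hz to 300 Hz).
f = np.logspace(0, np.log10(300), 200)
w = 2 * np.pi * f
_, mag_plant, _ = signal.bode(plant, w)
_, mag_integrator, _ = signal.bode(integrator, w)

# If the extra pole is far above the measured band, the two magnitude
# curves are nearly indistinguishable, i.e. the system "looks like" 1/s.
print(f"max |difference| below 300 Hz: "
      f"{np.max(np.abs(mag_plant - mag_integrator)):.2f} dB")
```

With the assumed pole at 3 kHz, the two magnitude curves differ by well under 1 dB below 300 Hz, which is consistent with the integrator-like behavior reported in the abstract.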
[ "23403762", "25713696", "19190786", "21494795", "23314607", "24463842", "3320757", "16152668", "16834563", "22842855", "22105715", "21809842", "18094770" ]
[ { "pmid": "23403762", "title": "Cell manipulation in microfluidics.", "abstract": "Recent advances in the lab-on-a-chip field in association with nano/microfluidics have been made for new applications and functionalities to the fields of molecular biology, genetic analysis and proteomics, enabling the expansion of the cell biology field. Specifically, microfluidics has provided promising tools for enhancing cell biological research, since it has the ability to precisely control the cellular environment, to easily mimic heterogeneous cellular environment by multiplexing, and to analyze sub-cellular information by high-contents screening assays at the single-cell level. Various cell manipulation techniques in microfluidics have been developed in accordance with specific objectives and applications. In this review, we examine the latest achievements of cell manipulation techniques in microfluidics by categorizing externally applied forces for manipulation: (i) optical, (ii) magnetic, (iii) electrical, (iv) mechanical and (v) other manipulations. We furthermore focus on history where the manipulation techniques originate and also discuss future perspectives with key examples where available." }, { "pmid": "25713696", "title": "On-chip actuation transmitter for enhancing the dynamic response of cell manipulation using a macro-scale pump.", "abstract": "An on-chip actuation transmitter for achieving fast and accurate cell manipulation is proposed. Instead of manipulating cell position by a directly connected macro-scale pump, polydimethylsiloxane deformation is used as a medium to transmit the actuation generated from the pump to control the cell position. This actuation transmitter has three main advantages. First, the dynamic response of cell manipulation is faster than the conventional method with direct flow control based on both the theoretical modeling and experimental results. The cell can be manipulated in a simple harmonic motion up to 130 Hz by the proposed actuation transmitter as opposed to 90 Hz by direct flow control. Second, there is no need to fill the syringe pump with the sample solution because the actuation transmitter physically separates the fluids between the pump and the cell flow, and consequently, only a very small quantity of the sample is required (<1 μl). In addition, such fluid separation makes it easy to keep the experiment platform sterilized because there is no direct fluid exchange between the sample and fluid inside the pump. Third, the fabrication process is simple because of the single-layer design, making it convenient to implement the actuation transmitter in different microfluidic applications. The proposed actuation transmitter is implemented in a lab-on-a-chip system for red blood cell (RBC) evaluation, where the extensibility of red blood cells is evaluated by manipulating the cells through a constriction channel at a constant velocity. The application shows a successful example of implementing the proposed transmitter." }, { "pmid": "19190786", "title": "A microfluidic droplet generator based on a piezoelectric actuator.", "abstract": "Droplet based microfluidic systems have been shown to be most valuable in biology and chemistry research. However droplet modulation and manipulation requires still further improvement in order to make this technology feasible particularly for biological applications. On demand generation of droplets and droplet synchronization, which is crucial for coalescence, remain largely unanswered. 
The present study describes a simple and robust droplet generator based on a piezoelectric actuator which is integrated into a microfluidic device. The droplet generator is able to independently control the droplet size, rate of formation and distance between droplets. Moreover, the droplet uniformity is especially high, deviating from the mean value by less than 0.3%. The cross flow and T-junction configurations are tested and show no significant differences, yet the inlet to main channel ratio is found to be important. As this ratio increases, droplets tend to be generated in bursts instead of individually. The physical mechanisms involved are discussed, providing insight into optimized design of such systems." }, { "pmid": "21494795", "title": "Diaphragm pico-liter pump for single-cell manipulation.", "abstract": "A pico-liter pump is developed and integrated into a robotic manipulation system that automatically selects and transfers individual living cells of interest to analysis locations. The pump is a displacement type pump comprising one cylindrical chamber connected to a capillary micropipette. The top of the chamber is a thin diaphragm which, when deflected, causes the volume of the fluid-filled cylindrical chamber to change thereby causing fluid in the chamber to flow in and out of the micropipette. This enables aspirating and dispensing individual living cells. The diaphragm is deflected by a piezoelectric actuator that pushes against its center. The pump aspirates and dispenses volumes of fluid between 500 pL and 250 nL at flow rates up to 250 nL/s. The piezo-driven diaphragm arrangement provides exquisite control of the flow rate in and out of the capillary orifice. This feature, in turn, allows reduced perturbation of live cells by controlling and minimizing the applied shear stresses." }, { "pmid": "23314607", "title": "On-chip microrobot for investigating the response of aquatic microorganisms to mechanical stimulation.", "abstract": "In this paper, we propose a novel, magnetically driven microrobot equipped with a frame structure to measure the effects of stimulating aquatic microorganisms. The design and fabrication of the force-sensing structure with a displacement magnification mechanism based on beam deformation are described. The microrobot is composed of a Si-Ni hybrid structure constructed using micro-electro-mechanical system (MEMS) technologies. The microrobots with 5 μm-wide force sensors are actuated in a microfluidic chip by permanent magnets so that they can locally stimulate the microorganisms with the desired force within the stable environment of the closed microchip. They afford centimetre-order mobility (untethered drive) and millinewton-order forces (high power) as well as force-sensing. Finally, we apply the developed microrobots for the quantitative evaluation of the stimuation of Pleurosira laevis (P. laevis) and determine the relationship between the applied force and the response of a single cell." }, { "pmid": "24463842", "title": "Red blood cell fatigue evaluation based on the close-encountering point between extensibility and recoverability.", "abstract": "Red blood cells (RBC) circulate the human body several hundred thousand times in their life span. Therefore, their deformability is really important, especially when they pass through a local capillary whose diameter can be as narrow as 3 μm. 
While there have been a number of works discussing the deformability in a simulated capillary such as a microchannel, as far as we examined in the literature, no work focusing on the change of shape after reciprocated mechanical stress has been reported so far. One of the reasons is that there have been no appropriate experimental systems to achieve such a test. This paper presents a new concept of RBC fatigue evaluation. The fatigue state is defined by the time of reciprocated mechanical stress when the extensibility and the recoverability characteristics meet each other. Our challenge is how to construct a system capable of achieving stable and accurate control of RBCs in a microchannel. For this purpose, we newly introduced two fundamental components. One is a robotic pump capable of manipulating a cell in the accuracy of ±0.24 μm in an equilibrium state with a maximum response time of 15 ms. The other is an online high speed camera capable of chasing the position of RBCs with a sampling rate of 1 kHz. By utilizing these components, we could achieve continuous observation of the length of a RBC over a 1000 times reciprocated mechanical stress. Through these experiments, we found that the repeat number that results in the fatigue state has a close correlation with extensibility." }, { "pmid": "3320757", "title": "Optical trapping and manipulation of single cells using infrared laser beams.", "abstract": "Use of optical traps for the manipulation of biological particles was recently proposed, and initial observations of laser trapping of bacteria and viruses with visible argon-laser light were reported. We report here the use of infrared (IR) light to make much improved laser traps with significantly less optical damage to a variety of living cells. Using IR light we have observed the reproduction of Escherichia coli within optical traps at power levels sufficient to give manipulation at velocities up to approximately 500 micron s-1. Reproduction of yeast cells by budding was also achieved in IR traps capable of manipulating individual cells and clumps of cells at velocities of approximately micron s-1. Damage-free trapping and manipulation of suspensions of red blood cells of humans and of organelles located within individual living cells of spirogyra was also achieved, largely as a result of the reduced absorption of haemoglobin and chlorophyll in the IR. Trapping of many types of small protozoa and manipulation of organelles within protozoa is also possible. The manipulative capabilities of optical techniques were exploited in experiments showing separation of individual bacteria from one sample and their introduction into another sample. Optical orientation of individual bacterial cells in space was also achieved using a pair of laser-beam traps. These new manipulative techniques using IR light are capable of producing large forces under damage-free conditions and improve the prospects for wider use of optical manipulation techniques in microbiology." }, { "pmid": "16152668", "title": "Single cell manipulation, analytics, and label-free protein detection in microfluidic devices for systems nanobiology.", "abstract": "Single cell analytics for proteomic analysis is considered a key method in the framework of systems nanobiology which allows a novel proteomics without being subjected to ensemble-averaging, cell-cycle, or cell-population effects. 
We are currently developing a single cell analytical method for protein fingerprinting combining a structured microfluidic device with latest optical laser technology for single cell manipulation (trapping and steering), free-solution electrophoretical protein separation, and (label-free) protein detection. In this paper we report on first results of this novel analytical device focusing on three main issues. First, single biological cells were trapped, injected, steered, and deposited by means of optical tweezers in a poly(dimethylsiloxane) microfluidic device and consecutively lysed with SDS at a predefined position. Second, separation and detection of fluorescent dyes, amino acids, and proteins were achieved with LIF detection in the visible (VIS) (488 nm) as well as in the deep UV (266 nm) spectral range for label-free, native protein detection. Minute concentrations of 100 fM injected fluorescein could be detected in the VIS and a first protein separation and label-free detection could be achieved in the UV spectral range. Third, first analytical experiments with single Sf9 insect cells (Spodoptera frugiperda) in a tailored microfluidic device exhibiting distinct electropherograms of a green fluorescent protein-construct proved the validity of the concept. Thus, the presented microfluidic concept allows novel and fascinating single cell experiments for systems nanobiology in the future." }, { "pmid": "16834563", "title": "Electrical forces for microscale cell manipulation.", "abstract": "Electrical forces for manipulating cells at the microscale include electrophoresis and dielectrophoresis. Electrophoretic forces arise from the interaction of a cell's charge and an electric field, whereas dielectrophoresis arises from a cell's polarizability. Both forces can be used to create microsystems that separate cell mixtures into its component cell types or act as electrical \"handles\" to transport cells or place them in specific locations. This review explores the use of these two forces for microscale cell manipulation. We first examine the forces and electrodes used to create them, then address potential impacts on cell health, followed by examples of devices for both separating cells and handling them." }, { "pmid": "22842855", "title": "Acoustofluidics 17: theory and applications of surface acoustic wave devices for particle manipulation.", "abstract": "In this paper, number 17 of the thematic tutorial series \"Acoustofluidics-exploiting ultrasonic standing waves, forces and acoustic streaming in microfluidic systems for cell and particle manipulation\", we present the theory of surface acoustic waves (SAWs) and some related microfluidic applications. The equations describing SAWs are derived for a solid-vacuum interface before generalisations are made about solid-solid and solid-fluid interfaces. Techniques for SAW generation are discussed before an overview of applications is presented." }, { "pmid": "22105715", "title": "Acoustofluidics 2: perturbation theory and ultrasound resonance modes.", "abstract": "In the second part of the thematic tutorial series \"Acoustofluidics--exploiting ultrasonic standing waves forces and acoustic streaming in microfluidic systems for cell and particle manipulation\", we develop the perturbation theory of the acoustic field in fluids and apply the result in a study of acoustic resonance modes in microfluidic channels." 
}, { "pmid": "21809842", "title": "Specific sorting of single bacterial cells with microfabricated fluorescence-activated cell sorting and tyramide signal amplification fluorescence in situ hybridization.", "abstract": "When attempting to probe the genetic makeup of diverse bacterial communities that elude cell culturing, researchers face two primary challenges: isolation of rare bacteria from microbial samples and removal of contaminating cell-free DNA. We report a compact, low-cost, and high-performance microfabricated fluorescence-activated cell sorting (μFACS) technology in combination with a tyramide signal amplification fluorescence in situ hybridization (TSA-FISH) to address these two challenges. The TSA-FISH protocol that was adapted for flow cytometry yields a 10-30-fold enhancement in fluorescence intensity over standard FISH methods. The μFACS technology, capable of enhancing its sensitivity by ~18 dB through signal processing, was able to enrich TSA-FISH-labeled E. coli cells by 223-fold. The μFACS technology was also used to remove contaminating cell-free DNA. After two rounds of sorting on E. coli mixed with λ-phage DNA (10 ng/μL), we demonstrated over 100,000-fold reduction in λ-DNA concentration. The integrated μFACS and TSA-FISH technologies provide a highly effective and low-cost solution for research on the genomic complexity of bacteria as well as single-cell genomic analysis of other sample types." }, { "pmid": "18094770", "title": "Optical sectioning for microfluidics: secondary flow and mixing in a meandering microchannel.", "abstract": "Secondary flow plays a critical function in a microchannel, such as a micromixer, because it can enhance heat and mass transfer. However, there is no experimental method to visualize the secondary flow and the associated mixing pattern in a microchannel because of difficulties in high-resolution, non-invasive, cross-sectional imaging. Here, we simultaneously imaged and quantified the secondary flow and pattern of two-liquid mixing inside a meandering square microchannel with spectral-domain Doppler optical coherence tomography. We observed an increase in the efficiency of two-liquid mixing when air was injected to produce a bubble-train flow and identified the three-dimensional enhancement mechanism behind the complex mixing phenomena. An alternating pair of counter-rotating and toroidal vortices cooperated to enhance two-liquid mixing." } ]
Micromachines
30404289
PMC6190453
10.3390/mi7070116
Gravity-Based Precise Cell Manipulation System Enhanced by In-Phase Mechanism
This paper proposes a gravity-based system capable of generating high-resolution pressure for precise cell manipulation or evaluation in a microfluidic channel. Because the pressure resolution of conventional pumps for microfluidic applications is typically limited to hundreds of pascals by the resolution of their feedback sensors, precise cell manipulation at the pascal level cannot be achieved with them. The proposed system achieves a resolution of 100 millipascals using water head pressure combined with an in-phase noise cancelation mechanism. The in-phase mechanism suppresses the noise transmitted to the system by ambient vibrations. The proposed pressure system is tested on a microfluidic platform for pressure validation. The experimental results show that the in-phase mechanism effectively reduces pressure fluctuations and that the pressure-driven cell movement matches the theoretical simulations. Preliminary deformability evaluations of red blood cells under pressure increments of one pascal are successfully performed, and different deformation patterns are observed from cell to cell under precise pressure control.
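As a rough back-of-the-envelope illustration (not a figure from the paper), the hydrostatic relation P = ρgh implies that a 100 millipascal resolution corresponds to controlling the effective water head height at the level of about ten micrometers, which makes clear why ambient vibration of the reservoir matters. A minimal Python sketch, assuming pure water near room temperature and standard gravity:

```python
# Hydrostatic pressure of a water column: P = rho * g * h.
# Illustrative numbers only; the constants are textbook values,
# not parameters reported in the paper.
RHO_WATER = 998.2   # kg/m^3, water at ~20 degC (assumed)
G = 9.81            # m/s^2, standard gravity

def head_pressure(height_m: float) -> float:
    """Pressure (Pa) produced by a water column of the given height (m)."""
    return RHO_WATER * G * height_m

def height_for_pressure(pressure_pa: float) -> float:
    """Water head height (m) needed to produce the given pressure (Pa)."""
    return pressure_pa / (RHO_WATER * G)

if __name__ == "__main__":
    # Height change corresponding to the 100 mPa resolution quoted above.
    dh = height_for_pressure(0.1)
    print(f"0.1 Pa  <->  {dh * 1e6:.1f} um of water head")   # ~10.2 um

    # Conversely, a 1 mm reservoir-height disturbance already means ~9.8 Pa,
    # which is why vibrations need to be suppressed mechanically.
    print(f"1 mm of head  <->  {head_pressure(1e-3):.2f} Pa")
```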
2. Related Works
There are various approaches for single-cell evaluation and cell manipulation. For example, Sakuma et al. determined the RBC fatigue state by continuously pushing cells through a narrow channel using a high-speed syringe pump and a high-speed vision system [5]. Tan et al. measured the mechanical characteristics of RBCs under different osmotic pressures with optical tweezers [6]. Avci et al. achieved cell manipulation by dynamic release with chopstick-like microgrippers [7]. Tanyeri et al. developed a microfluidic Wheatstone bridge for rapid sample analysis [8]. Although these approaches demonstrate solid results in system functionality, they require either a costly experimental setup or considerable effort in system tuning and adjustment. On the other hand, gravity-based pressure/flow control for microfluidics has the advantages of low cost, easy control, and stability, and it has also been employed in cell evaluation applications. For example, Zhang et al. manipulated droplets by hanging reservoirs on a turntable [9,10]. Kang et al. controlled a micro-object with simple rotary arms [11]. Yamada et al. used stationary reservoirs as a constant pressure source for cell counting [12]. There are also works that generate pressure from the water head difference using tilted microfluidic chips [13,14].
This paper achieves high-resolution pressure control using a gravity-based system aimed at microfluidic applications, particularly cell deformability testing. In addition, the in-phase noise cancelation mechanism is implemented in the system and experimentally evaluated.
[ "1571406", "20433347", "25713696", "24463842", "20176536", "22030805", "21853976", "15570372", "15565685" ]
[ { "pmid": "1571406", "title": "The clinical importance of erythrocyte deformability, a hemorrheological parameter.", "abstract": "Hemorheology, the science of the flow behavior of blood, has become increasingly important in clinical situations. The rheology of blood is dependent on its viscosity, which in turn is influenced by plasma viscosity, hematocrit, erythrocyte aggregation, and erythrocyte deformability. In recent years it has become apparent that the shape and elasticity of erythrocytes may be important in explaining the etiology of certain pathological situations. Thus, clinicians have become increasingly interested in hemorheology in general and erythrocyte deformability in particular. In the course of time, many clinical studies have been performed, but no concise review has thus far been published. This article encompasses a review of the clinically based literature on this subject." }, { "pmid": "20433347", "title": "Microfluidic platforms for single-cell analysis.", "abstract": "Microfluidics, the study and control of the fluidic behavior in microstructures, has emerged as an important enabling tool for single-cell chemical analysis. The complex procedures for chemical cytometry experiments can be integrated into a single microfabricated device. The capability of handling a volume of liquid as small as picoliters can be utilized to manipulate cells, perform controlled cell lysis and chemical reactions, and efficiently minimize sample dilution after lysis. The separation modalities such as chromatography and electrophoresis within microchannels are incorporated to analyze various types of intracellular components quantitatively. The microfluidic approach offers a rapid, accurate, and cost-effective tool for single-cell biology. We present an overview of the recent developments in microfluidic technology for chemical-content analysis of individual cells." }, { "pmid": "25713696", "title": "On-chip actuation transmitter for enhancing the dynamic response of cell manipulation using a macro-scale pump.", "abstract": "An on-chip actuation transmitter for achieving fast and accurate cell manipulation is proposed. Instead of manipulating cell position by a directly connected macro-scale pump, polydimethylsiloxane deformation is used as a medium to transmit the actuation generated from the pump to control the cell position. This actuation transmitter has three main advantages. First, the dynamic response of cell manipulation is faster than the conventional method with direct flow control based on both the theoretical modeling and experimental results. The cell can be manipulated in a simple harmonic motion up to 130 Hz by the proposed actuation transmitter as opposed to 90 Hz by direct flow control. Second, there is no need to fill the syringe pump with the sample solution because the actuation transmitter physically separates the fluids between the pump and the cell flow, and consequently, only a very small quantity of the sample is required (<1 μl). In addition, such fluid separation makes it easy to keep the experiment platform sterilized because there is no direct fluid exchange between the sample and fluid inside the pump. Third, the fabrication process is simple because of the single-layer design, making it convenient to implement the actuation transmitter in different microfluidic applications. 
The proposed actuation transmitter is implemented in a lab-on-a-chip system for red blood cell (RBC) evaluation, where the extensibility of red blood cells is evaluated by manipulating the cells through a constriction channel at a constant velocity. The application shows a successful example of implementing the proposed transmitter." }, { "pmid": "24463842", "title": "Red blood cell fatigue evaluation based on the close-encountering point between extensibility and recoverability.", "abstract": "Red blood cells (RBC) circulate the human body several hundred thousand times in their life span. Therefore, their deformability is really important, especially when they pass through a local capillary whose diameter can be as narrow as 3 μm. While there have been a number of works discussing the deformability in a simulated capillary such as a microchannel, as far as we examined in the literature, no work focusing on the change of shape after reciprocated mechanical stress has been reported so far. One of the reasons is that there have been no appropriate experimental systems to achieve such a test. This paper presents a new concept of RBC fatigue evaluation. The fatigue state is defined by the time of reciprocated mechanical stress when the extensibility and the recoverability characteristics meet each other. Our challenge is how to construct a system capable of achieving stable and accurate control of RBCs in a microchannel. For this purpose, we newly introduced two fundamental components. One is a robotic pump capable of manipulating a cell in the accuracy of ±0.24 μm in an equilibrium state with a maximum response time of 15 ms. The other is an online high speed camera capable of chasing the position of RBCs with a sampling rate of 1 kHz. By utilizing these components, we could achieve continuous observation of the length of a RBC over a 1000 times reciprocated mechanical stress. Through these experiments, we found that the repeat number that results in the fatigue state has a close correlation with extensibility." }, { "pmid": "20176536", "title": "Mechanical characterization of human red blood cells under different osmotic conditions by robotic manipulation with optical tweezers.", "abstract": "The physiological functions of human red blood cells (RBCs) play a crucial role to human health and are greatly influenced by their mechanical properties. Any alteration of the cell mechanics may cause human diseases. The osmotic condition is an important factor to the physiological environment, but its effect on RBCs has been little studied. To investigate this effect, robotic manipulation technology with optical tweezers is utilized in this paper to characterize the mechanical properties of RBCs in different osmotic conditions. The effectiveness of this technology is demonstrated first in the manipulation of microbeads. Then the optical tweezers are used to stretch RBCs to acquire the force-deformation relationships. To extract cell properties from the experimental data, a mechanical model is developed for RBCs in hypotonic conditions by extending our previous work , and the finite element model is utilized for RBCs in isotonic and hypertonic conditions. Through comparing the modeling results to the experimental data, the shear moduli of RBCs in different osmotic solutions are characterized, which shows that the cell stiffness increases with elevated osmolality. Furthermore, the property variation and potential biomedical significance of this study are discussed. 
In conclusion, this study indicates that the osmotic stress has a significant effect on the cell properties of human RBCs, which may provide insight into the pathology analysis and therapy of some human diseases." }, { "pmid": "22030805", "title": "Microfluidic Wheatstone bridge for rapid sample analysis.", "abstract": "We developed a microfluidic analogue of the classic Wheatstone bridge circuit for automated, real-time sampling of solutions in a flow-through device format. We demonstrate precise control of flow rate and flow direction in the \"bridge\" microchannel using an on-chip membrane valve, which functions as an integrated \"variable resistor\". We implement an automated feedback control mechanism in order to dynamically adjust valve opening, thereby manipulating the pressure drop across the bridge and precisely controlling fluid flow in the bridge channel. At a critical valve opening, the flow in the bridge channel can be completely stopped by balancing the flow resistances in the Wheatstone bridge device, which facilitates rapid, on-demand fluid sampling in the bridge channel. In this article, we present the underlying mechanism for device operation and report key design parameters that determine device performance. Overall, the microfluidic Wheatstone bridge represents a new and versatile method for on-chip flow control and sample manipulation." }, { "pmid": "21853976", "title": "Comprehensive two-dimensional manipulations of picoliter microfluidic droplets sampled from nanoliter samples.", "abstract": "A facile method is presented for achieving comprehensive two-dimensional droplet manipulations in closed microstructures consisting of microwell arrays and a straight microchannel. In this method, picoliter/nanoliter droplets with controllable sizes and numbers are sampled from nanoliter samples/reagents with almost 100% efficiency. Droplet motions are precisely controlled in the ±X-direction and ±Y-direction by managing hydrostatic pressure and magnetic repulsion, respectively. As a demonstration, a fluorescein-labeled droplet and a deionized droplet are successively generated and trapped in adjacent microwells. Then their positions are quickly exchanged without cross-contamination and fusion is implemented on-demand. After operations, hydrophobic ferrofluid can be completely replaced by mineral oil and droplets still remain in microwells safely. A typical fluorescence intensity-based assay is demonstrated: droplet arrays containing copper ion are diluted disproportionately first and then detected by addition of droplet arrays containing Calcein. With the ability of comprehensive two-dimensional droplet manipulations, this method could be used in various miniaturized biochemical analyses including requirements of multistep procedures and in situ monitoring." }, { "pmid": "15570372", "title": "A microfluidic device based on gravity and electric force driving for flow cytometry and fluorescence activated cell sorting.", "abstract": "A novel method based on gravity and electric force driving of cells was developed for flow cytometry and fluorescence activated cell sorting in a microfluidic chip system. In the experiments cells flowed spontaneously under their own gravity in a upright microchip, passed through the detection region and then entered into the sorting electric field one by one at an average velocity of 0.55 mm s(-1) and were fluorescence activated cell sorted (FACS) by a switch-off activation program. 
In order to study the dynamical and kinematic characteristics of single cells in gravity and electric field of microchannels a physical and numerical module based on Newton's Law of motion was established and optimized. Hydroxylpropylmethyl cellulose (HPMC) was used to minimize cell assembling, sedimentation and adsorption to microchannels. This system was applied to estimate the necrotic and apoptotic effects of ultraviolet (UV) light on HeLa cells by exposing them to UV radiation for 10, 20 or 40 min and the results showed that UV radiation induced membrane damage contributed to the apoptosis and necrosis of HeLa cells." }, { "pmid": "15565685", "title": "Gravity-induced convective flow in microfluidic systems: electrochemical characterization and application to enzyme-linked immunosorbent assay tests.", "abstract": "A way of using gravity flow to induce a linear convection within a microfluidic system is presented. It is shown and mathematically supported that tilting a 1 cm long covered microchannel is enough to generate flow rates up to 1000 nL.min(-1), which represents a linear velocity of 2.4 mm.s(-1). This paper also presents a method to monitor the microfluidic events occurring in a covered microchannel when a difference of pressure is applied to force a solution to flow in said covered microchannel, thanks to electrodes inserted in the microfluidic device. Gravity-induced flow monitored electrochemically is applied to the performance of a parallel-microchannel enzyme-linked immunosorbent assay (ELISA) of the thyroid-stimulating hormone (TSH) with electrochemical detection. A simple method for generating and monitoring fluid flows is described, which can, for instance, be used for controlling parallel assays in microsystems." } ]
Frontiers in Neurorobotics
30386227
PMC6198278
10.3389/fnbot.2018.00066
Robot End Effector Tracking Using Predictive Multisensory Integration
We propose a biologically inspired model that enables a humanoid robot to learn how to track its end effector by integrating visual and proprioceptive cues as it interacts with the environment. A key novel feature of this model is the incorporation of sensorimotor prediction, where the robot predicts the sensory consequences of its current body motion as measured by proprioceptive feedback. The robot develops the ability to perform smooth pursuit-like eye movements to track its hand, both in the presence and absence of visual input, and to track exteroceptive visual motions. Our framework makes a number of advances over past work. First, our model does not require a fiducial marker to indicate the robot hand explicitly. Second, it does not require the forward kinematics of the robot arm to be known. Third, it does not depend upon pre-defined visual feature descriptors. These are learned during interaction with the environment. We demonstrate that the use of prediction in multisensory integration enables the agent to integrate information from proprioceptive and visual cues more effectively. The proposed model has properties that are qualitatively similar to the characteristics of human eye-hand coordination.
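The core mechanism described above, predicting the retinal consequences of the robot's own arm motion from proprioception and using the prediction to drive pursuit, can be illustrated with a toy linear model. The sketch below is our own schematic illustration, not the authors' network; the linear predictor, the dimensions, the learning rate, and the delta-rule update are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS, N_SLIP = 4, 2                        # assumed dimensions
W_true = rng.normal(size=(N_SLIP, N_JOINTS))   # unknown arm-to-retina mapping
W = np.zeros((N_SLIP, N_JOINTS))               # learned sensorimotor predictor
eta = 0.05                                     # learning rate (assumed)

for t in range(2000):
    dq = rng.normal(size=N_JOINTS)             # proprioceptive joint velocities
    v_obs = W_true @ dq + 0.05 * rng.normal(size=N_SLIP)  # observed retinal slip

    v_pred = W @ dq                            # predicted visual consequence
    err = v_obs - v_pred                       # sensorimotor prediction error
    W += eta * np.outer(err, dq)               # delta-rule update of the predictor

# After learning, the prediction alone can drive pursuit of the (unseen) hand:
dq = rng.normal(size=N_JOINTS)
eye_cmd = -(W @ dq)                            # counter-rotate the eye to cancel predicted slip
print("predictor weight error:", np.linalg.norm(W - W_true))
print("eye velocity command:", eye_cmd)
```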
Related work
The problem of learning end effector tracking is a part of the larger problem of autonomous learning of the body schema. The body schema is a sensorimotor representation of the body that can be used to direct motion and actions. It integrates multiple cues, including proprioception, vision, audition, vestibular cues, tactile cues, and motor cues, to represent the relations between the spatial positions of the body parts. Knowledge of the body schema can be used in a number of different tasks, e.g., end effector tracking, reaching, posture control, and locomotion.
The review by Hoffmann et al. (2010) classifies body schema representations used in robotics into two classes: explicit and implicit. Both have been used to address the problem of end effector tracking. In the explicit approach (e.g., Bennett et al., 1991; Hollerbach and Wampler, 1996; Gatla et al., 2007), transformations between sensory and motor coordinates are broken down into a chain of closed-form transformations where each link corresponds explicitly to part of the robot structure. The work we present here falls into the class of implicit models, where an implicit representation (e.g., a look-up table or neural network) is used.
Past work has often used a point representation of the end effector, where artificial markers (e.g., color blobs) have been used to enable easy identification of the end effector (Hersch et al., 2008; Sturm et al., 2009). For example, a biologically inspired model to learn visuomotor coordination for the robot Nao was proposed in Schillaci et al. (2014). Learning occurred during motor babbling, which is similar to how infants may learn early eye-hand coordination skills. The proposed method used two Dynamic Self Organizing Maps (DSOMs) to represent the arm and neck position of the robot. The connections between the DSOMs were strengthened if the robot was looking at a fiducial marker positioned on the end effector. After learning, the robot had the ability to track the end effector by controlling the neck joints. One advantage of this model is that it does not assume that the forward arm kinematics of the robot are known. However, one limitation of the approach is that it requires a fiducial marker.
Subsequent work has relaxed the assumption that the end effector is a point and removed the requirement for explicit markers. However, it has still required hard-coded visual feature descriptors. For example, an algorithm to learn the mapping from arm joint space to the corresponding region in image space containing the end effector was proposed in Zhou and Shi (2016), based on a measure of visual consistency defined using SIFT features (Lowe, 2004). This algorithm did not require prior knowledge of the arm model, and was robust to changes in the appearance of the end effector. Other marker-less approaches have relied upon knowledge of a 3D CAD model of the end effector (Vicente et al., 2016; Fantacci et al., 2017). Vicente et al. (2016) eliminated calibration errors using a particle filter. The likelihood associated with each particle was evaluated by comparing the outputs of Canny edge detectors applied to both the real and simulated camera images. Fantacci et al. (2017) extended this particle filter and 3D CAD model based approach to estimate the end effector pose. The likelihood was evaluated using a Histogram of Oriented Gradient (HOG) (Dalal and Triggs, 2005) based transformation to compare the two images.
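As a concrete illustration of the kind of appearance-based particle likelihood sketched in the last two sentences, one can compare HOG descriptors computed on the real camera image and on an image rendered from the CAD model at a particle's hypothesized pose. The snippet below is only a schematic reconstruction of that idea; the Gaussian likelihood form, the HOG parameters, and the synthetic images are assumptions rather than the published settings of Fantacci et al. (2017):

```python
import numpy as np
from skimage.feature import hog

def hog_descriptor(gray_img):
    """HOG feature vector of a grayscale image (Dalal and Triggs, 2005)."""
    return hog(gray_img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def particle_likelihood(real_img, rendered_img, sigma=0.5):
    """Likelihood of one particle: Gaussian in the HOG distance between the
    real camera image and the image rendered at the particle's pose
    (assumed functional form, for illustration only)."""
    d = np.linalg.norm(hog_descriptor(real_img) - hog_descriptor(rendered_img))
    return np.exp(-0.5 * (d / sigma) ** 2)

# Example with synthetic images; in the real system `rendered` would come
# from projecting the 3D CAD model of the end effector at each particle's pose.
rng = np.random.default_rng(1)
real = rng.random((64, 64))
rendered = real + 0.05 * rng.random((64, 64))
print(particle_likelihood(real, rendered))
```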
The approach to bootstrap a kinematic model of a robot arm proposed in Broun et al. (2014) does not require a priori knowledge of a CAD model, as it constructs a model of the end effector on the fly from Kinect point cloud data. However, it still requires a hard-coded optical flow extraction stage to identify the arm in the image through visuomotor correlation.
Some of the limitations in the aforementioned research (e.g., the requirement for a marker and/or hard-coded image features) were addressed in our prior work (Wijesinghe et al., 2017), which proposed a multisensory neural network that combined visual and proprioceptive modalities to track a robot arm. Retinal slip during the motion was represented by encoding two temporally consecutive image frames using a sparse coding algorithm whose basis vectors were learned online (Zhang et al., 2014). The sparse coefficients were combined with proprioceptive input to control the eye to track the arm. This paper extends our previous work by introducing a new model that follows the hypothesis that the brain generates internal predictions of the consequences of its actions.
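To make the sparse-coding step concrete: two consecutive image patches are stacked into a single vector and encoded against a dictionary, and the resulting coefficients (which implicitly carry the retinal slip between the frames) are what gets combined with the proprioceptive input. The sketch below shows only the encoding step, using a fixed random dictionary and a plain ISTA solver; in the actual model the dictionary is learned online, and the patch size and dictionary size here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

PATCH = 8                          # assumed patch size (pixels)
D_IN = 2 * PATCH * PATCH           # two stacked frames
N_BASIS = 100                      # assumed dictionary size

# Fixed random dictionary for illustration; learned online in the actual model.
D = rng.normal(size=(D_IN, N_BASIS))
D /= np.linalg.norm(D, axis=0)

def ista(x, D, lam=0.1, n_iter=200):
    """Basic ISTA solver for min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)              # gradient of the quadratic term
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
    return a

# Two consecutive frames (random patches standing in for camera input).
frame_t, frame_t1 = rng.random((PATCH, PATCH)), rng.random((PATCH, PATCH))
x = np.concatenate([frame_t.ravel(), frame_t1.ravel()])

coeffs = ista(x, D)
# `coeffs` would then be concatenated with the proprioceptive input and fed to
# the stage that produces the eye velocity command.
print("active basis vectors:", np.count_nonzero(coeffs))
```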
[ "28602353", "16672304", "25973550", "27628207", "28264980", "24171930", "2930639", "3208853", "3208852", "22681686", "20510853", "7433622", "11369946", "20188652", "15729913", "10195184", "9835398", "24185423", "6129637", "5378050", "5657071", "19665561", "8871226", "8338495", "15165552", "27974161" ]
[ { "pmid": "28602353", "title": "Visuomotor Coupling Shapes the Functional Development of Mouse Visual Cortex.", "abstract": "The emergence of sensory-guided behavior depends on sensorimotor coupling during development. How sensorimotor experience shapes neural processing is unclear. Here, we show that the coupling between motor output and visual feedback is necessary for the functional development of visual processing in layer 2/3 (L2/3) of primary visual cortex (V1) of the mouse. Using a virtual reality system, we reared mice in conditions of normal or random visuomotor coupling. We recorded the activity of identified excitatory and inhibitory L2/3 neurons in response to transient visuomotor mismatches in both groups of mice. Mismatch responses in excitatory neurons were strongly experience dependent and driven by a transient release from inhibition mediated by somatostatin-positive interneurons. These data are consistent with a model in which L2/3 of V1 computes a difference between an inhibitory visual input and an excitatory locomotion-related input, where the balance between these two inputs is finely tuned by visuomotor experience." }, { "pmid": "16672304", "title": "Smooth pursuit of nonvisual motion.", "abstract": "Unlike saccades, smooth pursuit eye movements (SPEMs) are not under voluntary control and their initiation generally requires a moving visual target. However, there are various reports of limited smooth pursuit of the motion of a subject's own finger in total darkness (pursuit based on proprioceptive feedback) and to the combination of proprioception and tactile motion as an unseen finger was moved voluntarily over a smooth surface. In contrast, SPEMs to auditory motion are not distinguishable from pursuit of imagined motion. These reports of smooth pursuit of nonvisual motion cues used a variety of paradigms and different stimuli. In addition, the results have often relied primarily on qualitative descriptions of the smooth pursuit. Here, we directly compare measurements of smooth pursuit gain (eye velocity/stimulus velocity) to visual, auditory, proprioceptive, tactile, and combined tactile + proprioceptive motion stimuli. The results demonstrate high gains for visual pursuit, low gains for auditory pursuit, and intermediate, statistically indistinguishable gains for tactile, proprioceptive, and proprioceptive + tactile pursuit." }, { "pmid": "25973550", "title": "Learning Slowness in a Sparse Model of Invariant Feature Detection.", "abstract": "Primary visual cortical complex cells are thought to serve as invariant feature detectors and to provide input to higher cortical areas. We propose a single model for learning the connectivity required by complex cells that integrates two factors that have been hypothesized to play a role in the development of invariant feature detectors: temporal slowness and sparsity. This model, the generative adaptive subspace self-organizing map (GASSOM), extends Kohonen's adaptive subspace self-organizing map (ASSOM) with a generative model of the input. Each observation is assumed to be generated by one among many nodes in the network, each being associated with a different subspace in the space of all observations. The generating nodes evolve according to a first-order Markov chain and generate inputs that lie close to the associated subspace. 
This model differs from prior approaches in that temporal slowness is not an externally imposed criterion to be maximized during learning but, rather, an emergent property of the model structure as it seeks a good model of the input statistics. Unlike the ASSOM, the GASSOM does not require an explicit segmentation of the input training vectors into separate episodes. This enables us to apply this model to an unlabeled naturalistic image sequence generated by a realistic eye movement model. We show that the emergence of temporal slowness within the model improves the invariance of feature detectors trained on this input." }, { "pmid": "27628207", "title": "Role of motor execution in the ocular tracking of self-generated movements.", "abstract": "When human observers track the movements of their own hand with their gaze, the eyes can start moving before the finger (i.e., anticipatory smooth pursuit). The signals driving anticipation could come from motor commands during finger motor execution or from motor intention and decision processes associated with self-initiated movements. For the present study, we built a mechanical device that could move a visual target either in the same direction as the participant's hand or in the opposite direction. Gaze pursuit of the target showed stronger anticipation if it moved in the same direction as the hand compared with the opposite direction, as evidenced by decreased pursuit latency, increased positional lead of the eye relative to target, increased pursuit gain, decreased saccade rate, and decreased delay at the movement reversal. Some degree of anticipation occurred for incongruent pursuit, indicating that there is a role for higher-level movement prediction in pursuit anticipation. The fact that anticipation was larger when target and finger moved in the same direction provides evidence for a direct coupling between finger and eye motor commands." }, { "pmid": "28264980", "title": "Locomotion Enhances Neural Encoding of Visual Stimuli in Mouse V1.", "abstract": "Neurons in mouse primary visual cortex (V1) are selective for particular properties of visual stimuli. Locomotion causes a change in cortical state that leaves their selectivity unchanged but strengthens their responses. Both locomotion and the change in cortical state are thought to be initiated by projections from the mesencephalic locomotor region, the latter through a disinhibitory circuit in V1. By recording simultaneously from a large number of single neurons in alert mice viewing moving gratings, we investigated the relationship between locomotion and the information contained within the neural population. We found that locomotion improved encoding of visual stimuli in V1 by two mechanisms. First, locomotion-induced increases in firing rates enhanced the mutual information between visual stimuli and single neuron responses over a fixed window of time. Second, stimulus discriminability was improved, even for fixed population firing rates, because of a decrease in noise correlations across the population. These two mechanisms contributed differently to improvements in discriminability across cortical layers, with changes in firing rates most important in the upper layers and changes in noise correlations most important in layer V. Together, these changes resulted in a threefold to fivefold reduction in the time needed to precisely encode grating direction and orientation. 
These results support the hypothesis that cortical state shifts during locomotion to accommodate an increased load on the visual system when mice are moving.SIGNIFICANCE STATEMENT This paper contains three novel findings about the representation of information in neurons within the primary visual cortex of the mouse. First, we show that locomotion reduces by at least a factor of 3 the time needed for information to accumulate in the visual cortex that allows the distinction of different visual stimuli. Second, we show that the effect of locomotion is to increase information in cells of all layers of the visual cortex. Third, we show that the means by which information is enhanced by locomotion differs between the upper layers, where the major effect is the increasing of firing rates, and in layer V, where the major effect is the reduction in noise correlations." }, { "pmid": "24171930", "title": "Kinesthesis can make an invisible hand visible.", "abstract": "Self-generated body movements have reliable visual consequences. This predictive association between vision and action likely underlies modulatory effects of action on visual processing. However, it is unknown whether actions can have generative effects on visual perception. We asked whether, in total darkness, self-generated body movements are sufficient to evoke normally concomitant visual perceptions. Using a deceptive experimental design, we discovered that waving one's own hand in front of one's covered eyes can cause visual sensations of motion. Conjecturing that these visual sensations arise from multisensory connectivity, we showed that grapheme-color synesthetes experience substantially stronger kinesthesis-induced visual sensations than nonsynesthetes do. Finally, we found that the perceived vividness of kinesthesis-induced visual sensations predicted participants' ability to smoothly track self-generated hand movements with their eyes in darkness, which indicates that these sensations function like typical retinally driven visual sensations. Evidently, even in the complete absence of external visual input, the brain predicts visual consequences of actions." }, { "pmid": "2930639", "title": "Interaction of visual and non-visual signals in the initiation of smooth pursuit eye movements in primates.", "abstract": "The initiation of smooth pursuit eye movements (PEM) by visual and non-visual signals was analysed in humans and monkeys. While PEM latency ranged around 150 ms when a purely visual target was provided, it often dropped to about 0 ms, or even became negative, when target movement was coupled to the subject's arm; this suggests that signals about the intention to move the arm can be evaluated for PEM control. Eye movements always started in the visually correct direction, independent of the sign of coupling between arm and target; from this we conclude that intentional signals are not mere triggers, but also convey directional information. Short-latency PEM trials were intermixed with those characterized by normal latencies, which often resulted in bimodal latency distributions; this suggests that visual and intentional signals compete for the control of PEM." }, { "pmid": "3208853", "title": "Oculo-manual tracking of visual targets in monkey: role of the arm afferent information in the control of arm and eye movements.", "abstract": "The study was aimed at defining the role of hand (and arm) kinaesthetic information in coordination control of the visuo-oculo-manual tracking system. 
Baboons were trained to follow slow-moving and stepping visual targets either with the eyes alone or with the eyes and a lever moved by the forelimb about the vertical axis. A LED was attached to the lever extremity. Four oculo-manual tracking conditions were tested and compared to eye-alone tracking: Eye and hand tracking of a visual target presented on a screen, eye tracking of the hand, and eye tracking of an imaginary target actively moved by the arm. The performance of the animals evaluated in terms of latency, and velocity and position precision for both eye and hand movements was seen to be equivalent to that of humans in similar situations. After dorsal root rhizotomy (C1-T2) the animals were unable to produce slow arm motion in response to slow-moving targets. Instead, they produced successions of ballistic-like motions whose amplitude decreased as retraining proceeded. In addition, the animals could no longer respond with smooth pursuit eye movements to an imaginary target actively displaced by the animal's forelimb. It was concluded that the absence of ocular smooth pursuit after lesion results from the disruption of a signal derived from arm kinaesthetic information and addresses to the oculomotor system. This signal is likely to be used in the control of coordination between arm and eye movements during visuo-oculo-manual tracking tasks. One cause of the animal's inability to achieve slow arm movement in response to slow target motion is thought to be due to a lesion-induced alteration of the spinal common pathway dynamics which normally integrate the velocity signal descending from the arm movement command system." }, { "pmid": "3208852", "title": "Oculo-manual tracking of visual targets: control learning, coordination control and coordination model.", "abstract": "The processes which develop to coordinate eye and hand movements in response to motion of a visual target were studied in young children and adults. We have shown that functional maturation of the coordination control between eye and hand takes place as a result of training. We observed, in the trained child and in the adult, that when the hand is used either as a target or to track a visual target, the dynamic characteristics of the smooth pursuit system are markedly improved: the eye to target delay is decreased from 150 ms in eye alone tracking to 30 ms, and smooth pursuit maximum velocity is increased by 100%. Coordination signals between arm and eye motor systems may be responsible for smooth pursuit eye movements which occur during self-tracking of hand or finger in darkness. These signals may also account for the higher velocity smooth pursuit eye movements and the shortened tracking delay when the hand is used as a target, as well as for the synkinetic eye-arm motions observed at the early stage of oculo-manual tracking training in children. We propose a model to describe the interaction which develops between two systems involved in the execution of a common sensorimotor task. The model applies to the visuo-oculo-manual tracking system, but it may be generalized to other coordinated systems. According to our definition, coordination control results from the reciprocal transfer of sensory and motor information between two or more systems involved in the execution of single, goal-directed or conjugate actions. This control, originating in one or more highly specialized structures of the central nervous system, combines with the control processes normally operating in each system. 
Our model relies on two essential notions which describe the dynamic and static aspects of coordination control: timing and mutual coupling." }, { "pmid": "22681686", "title": "Sensorimotor mismatch signals in primary visual cortex of the behaving mouse.", "abstract": "Studies in anesthetized animals have suggested that activity in early visual cortex is mainly driven by visual input and is well described by a feedforward processing hierarchy. However, evidence from experiments on awake animals has shown that both eye movements and behavioral state can strongly modulate responses of neurons in visual cortex; the functional significance of this modulation, however, remains elusive. Using visual-flow feedback manipulations during locomotion in a virtual reality environment, we found that responses in layer 2/3 of mouse primary visual cortex are strongly driven by locomotion and by mismatch between actual and expected visual feedback. These data suggest that processing in visual cortex may be based on predictive coding strategies that use motor-related and visual input to detect mismatches between predicted and actual visual feedback." }, { "pmid": "20510853", "title": "Visual guidance of smooth-pursuit eye movements: sensation, action, and what happens in between.", "abstract": "Smooth-pursuit eye movements transform 100 ms of visual motion into a rapid initiation of smooth eye movement followed by sustained accurate tracking. Both the mean and variation of the visually driven pursuit response can be accounted for by the combination of the mean tuning curves and the correlated noise within the sensory representation of visual motion in extrastriate visual area MT. Sensory-motor and motor circuits have both housekeeping and modulatory functions, implemented in the cerebellum and the smooth eye movement region of the frontal eye fields. The representation of pursuit is quite different in these two regions of the brain, but both regions seem to control pursuit directly with little or no noise added downstream. Finally, pursuit exhibits a number of voluntary characteristics that happen on short timescales. These features make pursuit an excellent exemplar for understanding the general properties of sensory-motor processing in the brain." }, { "pmid": "11369946", "title": "The cerebellum coordinates eye and hand tracking movements.", "abstract": "The cerebellum is thought to help coordinate movement. We tested this using functional magnetic resonance imaging (fMRI) of the human brain during visually guided tracking tasks requiring varying degrees of eye-hand coordination. The cerebellum was more active during independent rather than coordinated eye and hand tracking. However, in three further tasks, we also found parametric increases in cerebellar blood oxygenation signal (BOLD) as eye-hand coordination increased. Thus, the cerebellar BOLD signal has a non-monotonic relationship to tracking performance, with high activity during both coordinated and independent conditions. These data provide the most direct evidence from functional imaging that the cerebellum supports motor coordination. Its activity is consistent with roles in coordinating and learning to coordinate eye and hand movement." }, { "pmid": "20188652", "title": "Modulation of visual responses by behavioral state in mouse visual cortex.", "abstract": "Studies of visual processing in rodents have conventionally been performed on anesthetized animals, precluding examination of the effects of behavior on visually evoked responses. 
We have now studied the response properties of neurons in primary visual cortex of awake mice that were allowed to run on a freely rotating spherical treadmill with their heads fixed. Most neurons showed more than a doubling of visually evoked firing rate as the animal transitioned from standing still to running, without changes in spontaneous firing or stimulus selectivity. Tuning properties in the awake animal were similar to those measured previously in anesthetized animals. Response magnitude in the lateral geniculate nucleus did not increase with locomotion, demonstrating that the striking change in responsiveness did not result from peripheral effects at the eye. Interestingly, some narrow-spiking cells were spontaneously active during running but suppressed by visual stimuli. These results demonstrate powerful cell-type-specific modulation of visual processing by behavioral state in awake mice." }, { "pmid": "15729913", "title": "A biologically inspired algorithm for the recovery of shading and reflectance images.", "abstract": "We present an algorithm for separating the shading and reflectance images of photographed natural scenes. The algorithm exploits the constraint that in natural scenes chromatic and luminance variations that are co-aligned mainly arise from changes in surface reflectance, whereas near-pure luminance variations mainly arise from shading and shadows. The novel aspect of the algorithm is the initial separation of the image into luminance and chromatic image planes that correspond to the luminance, red-green, and blue-yellow channels of the primate visual system. The red-green and blue-yellow image planes are analysed to provide a map of the changes in surface reflectance, which is then used to separate the reflectance from shading changes in both the luminance and chromatic image planes. The final reflectance image is obtained by reconstructing the chromatic and luminance-reflectance-change maps, while the shading image is obtained by subtracting the reconstructed luminance-reflectance image from the original luminance image. A number of image examples are included to illustrate the successes and limitations of the algorithm." }, { "pmid": "10195184", "title": "Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects.", "abstract": "We describe a model of visual processing in which feedback connections from a higher- to a lower-order visual cortical area carry predictions of lower-level neural activities, whereas the feedforward connections carry the residual errors between the predictions and the actual lower-level activities. When exposed to natural images, a hierarchical network of model neurons implementing such a model developed simple-cell-like receptive fields. A subset of neurons responsible for carrying the residual errors showed endstopping and other extra-classical receptive-field effects. These results suggest that rather than being exclusively feedforward phenomena, nonclassical surround effects in the visual cortex may also result from cortico-cortical feedback as a consequence of the visual system using an efficient hierarchical strategy for encoding natural images." }, { "pmid": "9835398", "title": "Self-perception and action in infancy.", "abstract": "By 2-3 months, infants engage in exploration of their own body as it moves and acts in the environment. 
They babble and touch their own body, attracted and actively involved in investigating the rich intermodal redundancies, temporal contingencies, and spatial congruence of self-perception. Recent research is presented, which investigates the spatial and temporal determinants of self-perception and action infancy. This research shows that, in the course of the first weeks of life, infants develop an ability to detect intermodal invariants and regularities in their sensorimotor experience, which specify themselves as separate entities agent in the environment. Recent observations on the detection of intermodal invariants regarding self-produced leg movements and auditory feedback of sucking by young infants are reported. These observations demonstrate that, early in development and long before mirror self-recognition, infants develop a perceptual ability to specify themselves. It is tentatively proposed that young infants' propensity to engage in self-perception and systematic exploration of the perceptual consequences of their own action plays an important role in the intermodal calibration of the body and is probably at the origin of an early sense of self: the ecological self." }, { "pmid": "24185423", "title": "Integration of visual motion and locomotion in mouse visual cortex.", "abstract": "Successful navigation through the world requires accurate estimation of one's own speed. To derive this estimate, animals integrate visual speed gauged from optic flow and run speed gauged from proprioceptive and locomotor systems. The primary visual cortex (V1) carries signals related to visual speed, and its responses are also affected by run speed. To study how V1 combines these signals during navigation, we recorded from mice that traversed a virtual environment. Nearly half of the V1 neurons were reliably driven by combinations of visual speed and run speed. These neurons performed a weighted sum of the two speeds. The weights were diverse across neurons, and typically positive. As a population, V1 neurons predicted a linear combination of visual and run speeds better than either visual or run speeds alone. These data indicate that V1 in the mouse participates in a multimodal processing system that integrates visual motion and locomotion during navigation." }, { "pmid": "6129637", "title": "Predictive coding: a fresh view of inhibition in the retina.", "abstract": "Interneurons exhibiting centre--surround antagonism within their receptive fields are commonly found in peripheral visual pathways. We propose that this organization enables the visual system to encode spatial detail in a manner that minimizes the deleterious effects of intrinsic noise, by exploiting the spatial correlation that exists within natural scenes. The antagonistic surround takes a weighted mean of the signals in neighbouring receptors to generate a statistical prediction of the signal at the centre. The predicted value is subtracted from the actual centre signal, thus minimizing the range of outputs transmitted by the centre. In this way the entire dynamic range of the interneuron can be devoted to encoding a small range of intensities, thus rendering fine detail detectable against intrinsic noise injected at later stages in processing. This predictive encoding scheme also reduces spatial redundancy, thereby enabling the array of interneurons to transmit a larger number of distinguishable images, taking into account the expected structure of the visual world. 
The profile of the required inhibitory field is derived from statistical estimation theory. This profile depends strongly upon the signal: noise ratio and weakly upon the extent of lateral spatial correlation. The receptive fields that are quantitatively predicted by the theory resemble those of X-type retinal ganglion cells and show that the inhibitory surround should become weaker and more diffuse at low intensities. The latter property is unequivocally demonstrated in the first-order interneurons of the fly's compound eye. The theory is extended to the time domain to account for the phasic responses of fly interneurons. These comparisons suggest that, in the early stages of processing, the visual system is concerned primarily with coding the visual image to protect against subsequent intrinsic noise, rather than with reconstructing the scene or extracting specific features from it. The treatment emphasizes that a neuron's dynamic range should be matched to both its receptive field and the statistical properties of the visual pattern expected within this field. Finally, the analysis is synthetic because it is an extension of the background suppression hypothesis (Barlow & Levick 1976), satisfies the redundancy reduction hypothesis (Barlow 1961 a, b) and is equivalent to deblurring under certain conditions (Ratliff 1965)." }, { "pmid": "5657071", "title": "Eye tracking of observer-generated target movements.", "abstract": "When an observer moves his arm he shows more precise visual tracking of a target mounted on his fingertip-the eye lags behind the target less and makes fewer corrective saccades-than when he relaxes his arm and the experimenter moves it in a similar manner. Apparently the control system for eye movements can use outflow (efferent) signals in order to anticipate motion of the self-moved target." }, { "pmid": "19665561", "title": "Body schema learning for robotic manipulators from visual self-perception.", "abstract": "We present an approach to learning the kinematic model of a robotic manipulator arm from scratch using self-observation via a single monocular camera. We introduce a flexible model based on Bayesian networks that allows a robot to simultaneously identify its kinematic structure and to learn the geometrical relationships between its body parts as a function of the joint angles. Further, we show how the robot can monitor the prediction quality of its internal kinematic model and how to adapt it when its body changes-for example due to failure, repair, or material fatigue. In experiments carried out both on real and simulated robotic manipulators, we verified the validity of our approach for real-world problems such as end-effector pose prediction and end-effector pose control." }, { "pmid": "8871226", "title": "Self-moved target eye tracking in control and deafferented subjects: roles of arm motor command and proprioception in arm-eye coordination.", "abstract": "1. When a visual target is moved by the subject's hand (self-moved target tracking), smooth pursuit (SP) characteristics differ from eye-alone tracking: SP latency is shorter and maximal eye velocity is higher in self-moved target tracking than in eye-alone tracking. The aim of this study was to determine which signals (motor command and/or proprioception) generated during arm motion are responsible for the decreased time interval between arm and eye motion onsets in self-moved target tracking. 2. 
Six control subjects tracked a visual target whose motion was generated by active or passive movements of the observer's arm in order to determine the role played by arm proprioception in the arm-eye coordination. In a second experiment, the participation of two subjects suffering complete loss of proprioception allowed us to assess the contribution of arm motor command signals. 3. In control subjects, passive movement of the arm led to eye latencies significantly longer (130 ms) than when the arm was actively self-moved (-5 ms:negative values meaning that the eyes actually started to move before the target) but slightly shorter than in eye-alone tracking (150 ms). These observations indicate that active movement of the arm is necessary to trigger short-latency SP of self-moved targets. 4. Despite the lack of proprioceptive information about arm motion, the two deafferented subjects produced early SP (-8 ms on average) when they actively moved their arms. In this respect they did not differ from control subjects. Active control of the arm is thus sufficient to trigger short-latency SP. However, in contrast with control subjects, in deafferented subjects SP gain declined with increasing target motion frequency more rapidly in self-moved target tracking than in eye-alone tracking. 5. The deafferented subjects also tracked a self-moved target while the relationship between arm and target motions was altered either by introducing a delay between arm motion and target motion or by reversing target motion relative to arm motion. As with control subjects, delayed target motion did not affect SP latency. Furthermore, the deafferented subjects adapted to the reversed arm-target relationship faster than control subjects. 6. The results suggest that arm motor command is necessary for the eye-to-arm motion onset synchronization, because eye tracking of the passively moved arm was performed by control subjects with a latency comparable with that of eye-alone tracking of an external target. On the other hand, as evidenced by the data from the deafferented subjects, afferent information does not appear to be necessary for reducing the time between arm motion and SP onsets. However, afferent information appears to contribute to the parametric adjustment between arm motor command and visual information about arm motion." }, { "pmid": "8338495", "title": "Dynamic analysis of human visuo-oculo-manual coordination control in target tracking tasks.", "abstract": "Human subjects tracked a visual target controlled either by a function generator (sine wave at different frequencies) or directly by the observer's arm. Gain and phase curves of the oculomotor response as a function of target frequency were determined. Data show that the upper frequency limit of smooth pursuit is higher when the target is driven by the observer's hand, confirming previous reports that smooth pursuit can reach higher velocities when tracking self-moved targets. Comparative analysis of ocular tracking with and without manual target control showed that subjects could be classified into two groups. One group exhibited an increase in gain at high frequency, but showed no significant phase changes. Conversely, the reverse was found in the other group: a significant decrease of phase lag at high frequency and no change in gain. 
These results demonstrate the existence, within the oculo-manual coordination control system, of at least two separate mechanisms (or strategies), tending either to synchronize the eye and arm motor activities (timing coordination) or to adjust their gain (spatial coordination)." }, { "pmid": "15165552", "title": "An action perspective on motor development.", "abstract": "Motor development has all too often been considered as a set of milestones with little significance for the psychology of the child. Nothing could be more wrong. From an action perspective, motor development is at the heart of development and reflects all its different aspects, including perception, planning and motivation. Recent converging evidence demonstrates that, from birth onwards, children are agents who act on the world. Even in the newborn child, their movements are never just reflexes. On the contrary, they are purposeful goal-directed actions that foresee events in the world. Thus, motor development is not just a question of gaining control over muscles; equally important are questions such as why a particular movement is made, how the movements are planned, and how they anticipate what is going to happen next." }, { "pmid": "27974161", "title": "Mismatch Receptive Fields in Mouse Visual Cortex.", "abstract": "In primary visual cortex, a subset of neurons responds when a particular stimulus is encountered in a certain location in visual space. This activity can be modeled using a visual receptive field. In addition to visually driven activity, there are neurons in visual cortex that integrate visual and motor-related input to signal a mismatch between actual and predicted visual flow. Here we show that these mismatch neurons have receptive fields and signal a local mismatch between actual and predicted visual flow in restricted regions of visual space. These mismatch receptive fields are aligned to the retinotopic map of visual cortex and are similar in size to visual receptive fields. Thus, neurons with mismatch receptive fields signal local deviations of actual visual flow from visual flow predicted based on self-motion and could therefore underlie the detection of objects moving relative to the visual flow caused by self-motion. VIDEO ABSTRACT." } ]
Frontiers in Neurorobotics
30405387
PMC6206748
10.3389/fnbot.2018.00068
Learning Inverse Statics Models Efficiently With Symmetry-Based Exploration
Learning (inverse) kinematics and dynamics models of dexterous robots for the entire action or observation space is challenging and costly. Sampling the entire space is usually intractable in terms of time, wear, and tear. We propose an efficient approach to learn inverse statics models—primarily for gravity compensation—by exploring only a small part of the configuration space and exploiting the symmetry properties of the inverse statics mapping. In particular, there exist symmetric configurations that require the same absolute motor torques to be maintained. We show that those symmetric configurations can be discovered, and that the functional relations between them can be learned and exploited to generate multiple training samples from one sampled configuration-torque pair. This strategy drastically reduces the number of samples required for learning inverse statics models. Moreover, we demonstrate that exploiting symmetries for learning inverse statics models is a generally applicable strategy for online and offline learning algorithms. We exemplify this by two different learning approaches. First, we modify the Direction Sampling approach for learning inverse statics models online, in a plain exploratory fashion, from scratch and without using a closed-loop controller. Second, we show that inverse statics mappings can be efficiently learned offline utilizing lattice sampling. Results for a 2R planar robot and a 3R simplified human arm demonstrate that their inverse statics mappings can be learned successfully for the entire configuration space. Furthermore, we demonstrate that the number of samples required for learning inverse statics mappings for 2R and 3R manipulators can be reduced at least by factors of approximately 8 and 16, respectively, depending on the number of discovered symmetries.
2. Related work
Our main goal is increasing the efficiency of learning models, in particular for learning inverse statics. As learning ISMs has previously been done only offline, we modified the Direction Sampling method (Rolf, 2013) for learning ISMs online as well. This paper therefore discusses three major points: learning efficiently, learning inverse statics models, and online goal-directed approaches. This section presents the previous related work.
2.1. Learning efficiently
Various approaches have previously been proposed for tackling the efficiency problem of learning. Some previous research proposed exploring the observation space instead of the action space to avoid the curse of dimensionality. For instance, IK can be learned by exploring the observation space (Cartesian space) and learning only one configuration for each pose, mimicking infants' efficient sensorimotor learning (e.g., Rolf et al., 2011; Rolf and Steil, 2014; Rayyes and Steil, 2016), instead of learning forward kinematics mappings by exploring the higher-dimensional action space (configuration space), e.g., via Motor Babbling (Demiris and Meltzoff, 2008).
Other research proposed that online learning of inverse models can be restricted to part of the workspace in order to increase efficiency and reduce the number of required samples (Rolf et al., 2011; Baranes and Oudeyer, 2013), since online learning approaches tend to require more samples than offline methods. Efficient exploration by efficient sampling (active policy iteration) was proposed in Akiyama et al. (2010); however, it was proposed for batch learning only. Efficient learning has also been addressed for solving different tasks (e.g., Şimşek and Barto, 2006) based on Markov decision processes and reward functions. In this paper, we propose symmetry-based exploration to learn ISMs for the entire configuration space effectively by exploring a small part of it and exploiting the symmetries of ISMs, which reduces the number of required samples. The proposed strategy is applicable to online and offline learning schemes.
2.2. Learning inverse statics models
Compensating forces and torques due to gravity is very important for advanced model-based robot control. The gravitational terms of the inverse dynamics models are usually computed either by estimating the inertial parameters of the links or from CAD data of the robot. However, if no appropriate model exists, e.g., for advanced complex robots or for soft robots, or if no prior knowledge of the inertial parameters of the links is available, learning these gravitational terms is a promising option. Previous research on learning ISMs has been done offline, using a closed-loop controller to collect training data and often to enhance existing (parametric) models (e.g., Luca and Panzieri, 1993; Xie et al., 2008). Early data-driven gravity compensation approaches are based on iterative procedures for end-point regulation (De Luca and Panzieri, 1994; De Luca and Panzieri, 1996). Recent works (Giorelli et al., 2015; Thuruthel et al., 2016b) have proposed data-driven learning techniques to control the end-point of continuum robots in task space, where ISMs map between the desired end-effector poses and the cable tensions. However, feedback controllers and inefficient Motor Babbling were used to obtain the training data, and the ISMs were learned offline only. In contrast, we propose learning ISMs online, in an exploratory fashion, from scratch and without using a closed-loop controller. 
In addition, we exploit the symmetry properties of ISMs to learn them efficiently online and offline for the entire configuration space.
2.3. Goal babbling and direction sampling
Various schemes have been proposed to replicate human movement skill learning and human motor control based on internal models (Wolpert et al., 1998), i.e., learning forward models (e.g., Motor Babbling; Demiris and Meltzoff, 2008) and inverse models (e.g., distal teachers, Jordan and Rumelhart, 1992, and feedback error learning, Gomi and Kawato, 1993). In contrast to Motor Babbling, where the robot executes random motor commands and the outcomes are observed, there is evidence that even infants do not behave randomly but rather demonstrate goal-directed motion already a few days after birth (von Hofsten, 1982). They learn how to reach by trying to reach, and they iterate their trials to adapt their motion. Hence, Goal Babbling was proposed, inspired by infant motor learning skills, for direct learning of IK within a few hundred samples (Rolf et al., 2010, 2011). Various other schemes were proposed for learning IK, e.g., direct learning of IK (D'Souza et al., 2001; Thuruthel et al., 2016a) and incremental learning of IK (Vijayakumar et al., 2005; Baranes and Oudeyer, 2013).
To apply Goal Babbling, a set of predefined targets, e.g., a set of positions to be reached, is required and then used to obtain the IK, which is valid only in the predefined area. Direction Sampling (Rolf, 2013) has been proposed as an extension of Goal Babbling to overcome the need for predefined targets and gradually discover the entire workspace. The targets are generated while exploring, and the IK is learned simultaneously. In previous work, we already illustrated the scalability of online Goal Babbling with Direction Sampling in higher-dimensional sensorimotor spaces up to the 9-DoF COMAN floating base (Rayyes and Steil, 2016). Goal Babbling has also been extended to learn IK in restricted areas (Loviken and Hemion, 2017) and to other domains, e.g., speech production (Moulin-Frier et al., 2013; Philippsen et al., 2016) and tool usage (Forestier and Oudeyer, 2016). It has also been applied to soft robots (Rolf and Steil, 2014). However, it is striking that none of these schemes have been extended or transferred to learn forward or inverse dynamics. As Goal Babbling shows high scalability and adaptability in a "learning while behaving" fashion, we focus in this paper on learning ISMs, as a first step toward exploratory dynamics learning, by modifying the previously proposed Direction Sampling based on online Goal Babbling.
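To make the symmetry idea concrete, the following minimal sketch shows how one explored configuration-torque pair of a 2R planar arm under gravity can be turned into two training samples by mirroring the configuration about the gravity axis. The link parameters, the analytic torque model (standing in for torques that would be measured on hardware), and this particular mirror symmetry are illustrative assumptions and not necessarily the exact formulation used in the paper.

```python
import numpy as np

# Illustrative parameters for a 2R planar arm in a vertical plane:
# link masses, first link length, centre-of-mass offsets, gravity.
m1, m2 = 1.0, 0.8
l1 = 0.5
lc1, lc2 = 0.25, 0.2
g = 9.81

def gravity_torques(q):
    """Analytic gravity torques of the 2R arm; on a real robot these
    would instead be read from the motors while holding the pose."""
    q1, q2 = q
    tau1 = (m1 * lc1 + m2 * l1) * g * np.cos(q1) + m2 * lc2 * g * np.cos(q1 + q2)
    tau2 = m2 * lc2 * g * np.cos(q1 + q2)
    return np.array([tau1, tau2])

def mirror_about_gravity(q):
    """Configuration mirrored about the gravity axis: it requires torques
    of the same magnitude but opposite sign (one example of a symmetry)."""
    q1, q2 = q
    return np.array([np.pi - q1, -q2])

# Explore only a small region, then augment each sample with its mirror image.
rng = np.random.default_rng(0)
samples = []
for _ in range(100):
    q = rng.uniform([-np.pi / 2, -np.pi / 2], [np.pi / 2, np.pi / 2])
    tau = gravity_torques(q)                           # or: measured motor torques
    samples.append((q, tau))
    samples.append((mirror_about_gravity(q), -tau))    # extra pair at no sampling cost

print(len(samples), "training pairs obtained from 100 explored configurations")
```

The augmented pairs can then be fed to any regression learner of the inverse statics mapping, whether trained online or offline.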
[ "20080026", "18458795", "24474941", "16212764", "12662752", "21227230" ]
[ { "pmid": "20080026", "title": "Efficient exploration through active learning for value function approximation in reinforcement learning.", "abstract": "Appropriately designing sampling policies is highly important for obtaining better control policies in reinforcement learning. In this paper, we first show that the least-squares policy iteration (LSPI) framework allows us to employ statistical active learning methods for linear regression. Then we propose a design method of good sampling policies for efficient exploration, which is particularly useful when the sampling cost of immediate rewards is high. The effectiveness of the proposed method, which we call active policy iteration (API), is demonstrated through simulations with a batting robot." }, { "pmid": "18458795", "title": "The Robot in the Crib: A Developmental Analysis of Imitation Skills in Infants and Robots.", "abstract": "Interesting systems, whether biological or artificial, develop. Starting from some initial conditions, they respond to environmental changes, and continuously improve their capabilities. Developmental psychologists have dedicated significant effort to studying the developmental progression of infant imitation skills, because imitation underlies the infant's ability to understand and learn from his or her social environment. In a converging intellectual endeavour, roboticists have been equipping robots with the ability to observe and imitate human actions because such abilities can lead to rapid teaching of robots to perform tasks. We provide here a comparative analysis between studies of infants imitating and learning from human demonstrators, and computational experiments aimed at equipping a robot with such abilities. We will compare the research across the following two dimensions: (a) initial conditions-what is innate in infants, and what functionality is initially given to robots, and (b) developmental mechanisms-how does the performance of infants improve over time, and what mechanisms are given to robots to achieve equivalent behaviour. Both developmental science and robotics are critically concerned with: (a) how their systems can and do go 'beyond the stimulus' given during the demonstration, and (b) how the internal models used in this process are acquired during the lifetime of the system." }, { "pmid": "24474941", "title": "Self-organization of early vocal development in infants and machines: the role of intrinsic motivation.", "abstract": "vocal development and intrinsic motivation. We propose and experimentally test the hypothesis that general mechanisms of intrinsically motivated spontaneous exploration, also called curiosity-driven learning, can self-organize developmental stages during early vocal learning. We introduce a computational model of intrinsically motivated vocal exploration, which allows the learner to autonomously structure its own vocal experiments, and thus its own learning schedule, through a drive to maximize competence progress. This model relies on a physical model of the vocal tract, the auditory system and the agent's motor control as well as vocalizations of social peers. We present computational experiments that show how such a mechanism can explain the adaptive transition from vocal self-exploration with little influence from the speech environment, to a later stage where vocal exploration becomes influenced by vocalizations of peers. 
Within the initial self-exploration phase, we show that a sequence of vocal production stages self-organizes, and shares properties with data from infant developmental psychology: the vocal learner first discovers how to control phonation, then focuses on vocal variations of unarticulated sounds, and finally automatically discovers and focuses on babbling with articulated proto-syllables. As the vocal learner becomes more proficient at producing complex sounds, imitating vocalizations of peers starts to provide high learning progress explaining an automatic shift from self-exploration to vocal imitation." }, { "pmid": "16212764", "title": "Incremental online learning in high dimensions.", "abstract": "Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of-possibly redundant-inputs, as shown in various empirical evaluations with up to 90 dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces." }, { "pmid": "12662752", "title": "Multiple paired forward and inverse models for motor control.", "abstract": "Humans demonstrate a remarkable ability to generate accurate and appropriate motor behavior under many different and often uncertain environmental conditions. In this paper, we propose a modular approach to such motor learning and control. We review the behavioral evidence and benefits of modularity, and propose a new architecture based on multiple pairs of inverse (controller) and forward (predictor) models. Within each pair, the inverse and forward models are tightly coupled both during their acquisition, through motor learning, and use, during which the forward models determine the contribution of each inverse model's output to the final motor command. This architecture can simultaneously learn the multiple inverse models necessary for control as well as how to select the inverse models appropriate for a given environment. Finally, we describe specific predictions of the model, which can be tested experimentally." 
}, { "pmid": "21227230", "title": "Internal models in the cerebellum.", "abstract": "This review will focus on the possibility that the cerebellum contains an internal model or models of the motor apparatus. Inverse internal models can provide the neural command necessary to achieve some desired trajectory. First, we review the necessity of such a model and the evidence, based on the ocular following response, that inverse models are found within the cerebellar circuitry. Forward internal models predict the consequences of actions and can be used to overcome time delays associated with feedback control. Secondly, we review the evidence that the cerebellum generates predictions using such a forward model. Finally, we review a computational model that includes multiple paired forward and inverse models and show how such an arrangement can be advantageous for motor learning and control." } ]
Micromachines
30424467
PMC6215188
10.3390/mi9100534
MagIO: Magnetic Field Strength Based Indoor-Outdoor Detection with a Commercial Smartphone
A wide range of localization techniques that leverage smartphone sensors has been proposed recently. Context awareness serves as the backbone of these localization techniques, helping them switch between localization technologies to improve efficiency and energy utilization. Indoor-outdoor (IO) context sensing plays a vital role for such systems, which serve both indoor and outdoor localization. IO systems work with collaborative technologies including the Global Positioning System (GPS), cellular tower signals, Wi-Fi, Bluetooth and a variety of smartphone sensors. GPS- and Wi-Fi-based systems are power hungry, and their accuracy is degraded by limiting factors like multipath, shadowing, etc. On the other hand, various built-in smartphone sensors can be deployed for environmental sensing. Although these sensors can play a crucial role, they have been studied far less. This research aims at investigating the use of ambient magnetic field data alone from a smartphone for IO detection. The research first investigates the feasibility of utilizing magnetic field data alone for IO detection and then extracts different features suitable for IO detection to be used in machine learning-based classifiers to discriminate between indoor and outdoor environments. The experiments are performed at three different places including a subway station, a shopping mall and Yeungnam University (YU), Korea. The training data are collected from one spot on the campus, and testing is performed with data from various locations of the above-mentioned places. The experiment involves Samsung Galaxy S8, LG G6 and Samsung Galaxy Round smartphones. The results show that the magnetic data from the smartphone magnetic sensor embody enough information and can discriminate the indoor environment from the outdoor environment. Naive Bayes (NB) performs best, with a classification accuracy of 83.26%, as against support vector machines (SVM), random induction (RI), gradient boosting machines (GBM), random forest (RF), k-nearest neighbor (kNN) and decision trees (DT), whose accuracies are 67.21%, 73.38%, 73.40%, 78.59%, 69.53% and 68.60%, respectively. kNN, SVM and DT do not perform well when noisy data are used for classification. Additionally, other dynamic scenarios affect the attitude of the magnetic data and degrade the performance of SVM, RI and GBM. NB and RF prove to be more noise tolerant and environment adaptable and perform very well in dynamic scenarios. Keeping in view the performance of these classifiers, an ensemble-based stacking scheme is presented, which utilizes DT and RI as the base learners and naive Bayes as the ensemble classifier. This approach is able to achieve an accuracy of 85.30% using the magnetic data of the smartphone magnetic sensor. Moreover, with an increase in training data, the accuracy of the stacking scheme can be increased by a further 0.83%. The performance of the proposed approach is compared with GPS-, Wi-Fi- and light sensor-based IO detection.
3. Related Work
Research on indoor and outdoor localization is increasing exponentially, especially research that utilizes smartphone sensors [25,26,27,28,29]. Such localization systems mostly work under the assumption that the system clearly knows whether it is in an indoor or outdoor environment, which is hardly true in practice. Detecting the indoor/outdoor environment is needed to enable smooth switching of modes in many localization systems. A few research efforts that aim at detecting the IO environment are discussed here.
Zhou et al. in [23] made use of smartphone sensors including the accelerometer, proximity, light sensor, cell tower RSS and magnetometer to distinguish between outdoor, semi-outdoor and indoor environments. The patterns of different signals are studied under various weather conditions and at different times of the day. The patterns are used to define thresholds for various signals, and those thresholds are then integrated using a hidden Markov model. Radu Valentin et al. in [13] introduced the concept of machine learning for the IO detection problem. The study considered both supervised and semi-supervised training methods for the said problem. The semi-supervised learning approach is particularly useful when enough labeled data are not available for training purposes. It can improve classification accuracy by considering unlabeled data from unfamiliar environments. The concept of co-training is utilized, which can accomplish an accuracy of 92.33% for IO detection in comparison to GPS (75.23%), IO Detector (48.51%) and naive Bayes (81.29%). Data were collected for this study from smartphone sensors including light, proximity, magnetic, microphone, cell, Wi-Fi, battery thermometer and GPS.
Bhargava et al. in [30] proposed a scheme named SenseMe, which can sense environmental context as well as context-aware location. The system used data from GPS in addition to smartphone sensors including the gyroscope, accelerometer and the Bluetooth module. Three different environments, indoor, outdoor and indoor-outdoor, were detected using the C4.5 classifier on data collected over a time span of 1 min. The proposed scheme was able to achieve a classification accuracy of 98.4%, 93% and 82% for indoor, outdoor and indoor-outdoor environments, respectively. Canovas et al. in [31] presented a binary classification technique that utilized the RSSI (received signal strength indicator) from 802.11 access points to identify a device's indoor or outdoor state. RSSI features from different APs were fed as input to a weak learner during the offline phase. The proposed method was compared with the nearest neighbor and naive Bayes classifiers to show its performance.
Sung et al. in [32] proposed an outdoor-indoor environment detection technique based on a chirp sound probe. A binary classification method was used to determine the indoor and outdoor environments using a reverberation score, which was calculated from the envelope of the sound. An empirical threshold value for indoor and outdoor environments was applied to the reverberation scores for classification. The proposed method achieved the transition detection in only 3.81 s on average. Liu et al. in [33] proposed a method to detect whether a mobile phone user is currently in an indoor or outdoor environment. The proposed method was based on a cell identity map in addition to the light and proximity sensors of a mobile phone. 
The light sensor was used mainly for IO detection, with a threshold value for each environment. However, in case the light sensor was absent, the cell identity map was utilized for the said purpose. The cell identity map was built based on the indoor cells deployed to increase mobile broadband capacity in 3G and 4G networks. Cell identity and its relevant RSS were used to identify the indoor and outdoor environment. The proposed scheme was able to achieve 98% accuracy.
Okamoto et al. in [34] presented a system that complements GPS-based IO detection by adding moving-direction information. The GPS-based environment detection was done with support vector machines by utilizing the S/N ratio. It achieved an accuracy of 96%. Later, direction sensing was done with a compass, and environment sensing was performed for building canyons. The method with direction information performed well compared to a GPS-based system in canyons. Bisio et al. in [35] presented a method based on ultrasonic signals sensed by a smartphone. The phone's built-in speakers and microphone were used to obtain the features required for IO classification. The features were then fed into a support vector machine and a naive Bayes classifier for classification. The ultrasonic-based environment sensing system achieved an accuracy of 88.9% for fast latency and 92.7% for slow latency. An indoor-outdoor detection technique was presented by Zou et al. in [36], which leveraged the low-power iBeacon technology. The environment was divided into outdoor, semi-outdoor and indoor classes. For the outdoor environment, GPS was used; it was turned off once the semi-outdoor state was confirmed, a transition triggered by a decrease in the mean GPS signals. iBeacon was utilized to discriminate between semi-outdoor and indoor environments. Two BLE beacons were installed at each entrance point, which marked the transitions between semi-outdoor and indoor environments. The user's state of entering or leaving an indoor environment was established based on the RSS values of the BLE beacons. The environment detection accuracy of the proposed scheme was 96.2%.
Wang et al. in [37] utilized machine learning algorithms to identify the user's context of being indoors or outdoors. The signal strength of Global System for Mobile (GSM) communication cellular base stations was exploited for that purpose. Since the signal strength is affected differently by various environments, the signal strength characteristic was utilized. The data were collected at 2 Hz to classify four environments: deep indoor, semi-outdoor, light indoor and open outdoor. Machine learning algorithms including support vector machine, k nearest neighbor, decision trees, naive Bayes, logistic regression, nearest neighbor and random forest were tested on 8 s of data for this purpose. Random forest was reported to achieve the highest accuracy of 95.3% on the GSM data with four nearby satellites. He et al. in [38] formulated IO environment detection as one-class inside-outside region detection. A Wi-Fi radio map was built during the offline phase, which was later utilized in different machine learning techniques. Support vector data description, self-organizing map, mini-max probability machine and principal component analysis were used for classification. The measured signal was mapped against the pre-built database. 
If the measured signal resembled a fingerprint (indoor region), then it was highly likely to belong to the indoor region, and to the outdoor region otherwise. Principal component analysis (PCA) achieved the highest accuracy of 95.69% for IO classification.
Li et al. in [39] presented a lightweight IO detector based on Wi-Fi RSS signals together with a light sensor. The light sensor was assisted by the proximity sensor to check whether the light sensor was blocked by an obstacle. Various thresholds were used for the light sensor for day and night and for the indoor-outdoor environment. Wi-Fi RSS and the light sensor detected the environment separately, and their aggregate was used in a semi-CRF (conditional random field) algorithm to generate the integrated output. The proposed system achieved an accuracy of 96.67%.
The discussed research works are constrained by one or more limitations. For example, [23] made use of the variance of the magnetic component for data taken over 10 s. First, the variance alone is not a good parameter, as it is abruptly affected in the outdoor environment as well, due to the presence of vehicles. Secondly, data were collected over 8 to 10 s, which consumes a substantial amount of energy. Similarly, IO systems that utilize cell tower data or GPS data have the conspicuous drawback of energy consumption. Systems based on iBeacon and Wi-Fi are susceptible to infrastructure changes and rely on additional hardware and software. We therefore focus on the magnetic field, which is both energy efficient and omnipresent and needs no additional infrastructure. In addition, we work with magnetic field data collected over only 2 s, which is robust and energy efficient.
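For illustration, the sketch below outlines a stacking scheme of the kind described above: simple statistics of the magnetic-field magnitude over 2 s windows are fed to tree-based base learners with a naive Bayes meta-classifier. The 50 Hz sampling rate and the exact feature set are assumptions, and "random induction" is approximated by a random forest because it is not a standard scikit-learn estimator; this is a sketch, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def window_features(mag_xyz, fs=50, win_s=2.0):
    """Statistics of magnetic-field magnitude over non-overlapping 2 s windows.
    mag_xyz: (N, 3) magnetometer samples from the smartphone."""
    mag = np.linalg.norm(mag_xyz, axis=1)
    win = int(fs * win_s)
    feats = []
    for start in range(0, len(mag) - win + 1, win):
        w = mag[start:start + win]
        feats.append([w.mean(), w.std(), w.min(), w.max(), w.max() - w.min()])
    return np.array(feats)

# Stacking: tree-based base learners, naive Bayes as the ensemble (meta) classifier.
stack = StackingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=8)),
        ("rf", RandomForestClassifier(n_estimators=100)),  # stand-in for "random induction"
    ],
    final_estimator=GaussianNB(),
    cv=5,
)

# X: window features, y: 0 = indoor, 1 = outdoor (labels collected as in the survey above)
# stack.fit(X_train, y_train); accuracy = stack.score(X_test, y_test)
```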
[ "29165131", "29891788", "29883386", "30011927", "26184230", "28587088", "26907295", "27669252", "26353228", "21637787", "18397250" ]
[ { "pmid": "29165131", "title": "Classification of indoor-outdoor location using combined global positioning system (GPS) and temperature data for personal exposure assessment.", "abstract": "OBJECTIVES\nThe objectives of this study was to determine the accuracy of indoor-outdoor classification based on GPS and temperature data in three different seasons.\n\n\nMETHODS\nIn the present study, a global positioning system (GPS) was used alongside temperature data collected in the field by a technician who visited 53 different indoor locations during summer, autumn and winter. The indoor-outdoor location was determined by GPS data alone, and in combination with temperature data.\n\n\nRESULTS\nDetermination of location by the GPS signal alone, based on the loss of GPS signal and using the used number of satellites (NSAT) signal factor, simple percentage agreements of 73.6 ± 2.9%, 72.9 ± 3.4%, and 72.1 ± 3.1% were obtained for summer, autumn, and winter, respectively. However, when temperature and GPS data were combined, simple percentage agreements were significantly improved (87.9 ± 3.3%, 84.1 ± 2.8%, and 86.3 ± 3.1%, respectively). A temperature criterion for indoor-outdoor determination of ~ Δ 2°C for 2 min could be applied during all three seasons.\n\n\nCONCLUSION\nThe results showed that combining GPS and temperature data improved the accuracy of indoor-outdoor determination." }, { "pmid": "29891788", "title": "An IBeacon-Based Location System for Smart Home Control.", "abstract": "Indoor location and intelligent control system can bring convenience to people&rsquo;s daily life. In this paper, an indoor control system is designed to achieve equipment remote control by using low-energy Bluetooth (BLE) beacon and Internet of Things (IoT) technology. The proposed system consists of five parts: web server, home gateway, smart terminal, smartphone app and BLE beacons. In the web server, fingerprint matching based on RSSI stochastic characteristic and posture recognition model based on geomagnetic sensing are used to establish a more efficient equipment control system, combined with Pedestrian Dead Reckoning (PDR) technology to improve the accuracy of location. A personalized menu of remote &ldquo;one-click&rdquo; control is finally offered to users in a smartphone app. This smart home control system has been implemented by hardware, and precision and stability tests have been conducted, which proved the practicability and good user experience of this solution." }, { "pmid": "29883386", "title": "An Interactive Real-Time Locating System Based on Bluetooth Low-Energy Beacon Network †.", "abstract": "The ubiquity of Bluetooth-enabled smartphones and peripherals has brought tremendous convenience to our daily life. In recent years, Bluetooth beacons have also been gaining popularity in implementing a variety of innovative location-based services such as self-guided systems in exhibition centers. However, the broadcast-based beacon technology can only provide unidirectional communication. In case smartphone users would like to respond to the beacon messages, they have to rely on their own mobile Internet connections to send the information back to the backend system. Nevertheless, mobile Internet services may not be always available or too costly. In this work, we develop a real-time locating system based only on the Bluetooth low energy (BLE) technology to support interactive communications by combining the broadcast and mesh topology options to extend the applicability of beacon solutions. 
Specifically, we turn the smartphone into a beacon device and augment the beacon devices with the capability of forming a mesh network. The implementation result shows that our beacon devices can detect the presence of specific users at specific locations, and then the presence state can be sent to the application server via the relay of beacon devices. Moreover, the application server can send personalized location-based messages to the users, again via the relay of beacon devices. With the capability of relaying messages between the beacon devices, it would be convenient for developers to implement a variety of interactive applications such as tracking VIP customers at the airport, or tracking an elder with Alzheimer&rsquo;s disease in the neighborhood." }, { "pmid": "30011927", "title": "mPILOT-Magnetic Field Strength Based Pedestrian Indoor Localization.", "abstract": "An indoor localization system based on off-the-shelf smartphone sensors is presented which employs the magnetometer to find user location. Further assisted by the accelerometer and gyroscope, the proposed system is able to locate the user without any prior knowledge of user initial position. The system exploits the fingerprint database approach for localization. Traditional fingerprinting technology stores data intensity values in database such as RSSI (Received Signal Strength Indicator) values in the case of WiFi fingerprinting and magnetic flux intensity values in the case of geomagnetic fingerprinting. The down side is the need to update the database periodically and device heterogeneity. We solve this problem by using the fingerprint database of patterns formed by magnetic flux intensity values. The pattern matching approach solves the problem of device heterogeneity and the algorithm's performance with Samsung Galaxy S8 and LG G6 is comparable. A deep learning based artificial neural network is adopted to identify the user state of walking and stationary and its accuracy is 95%. The localization is totally infrastructure independent and does not require any other technology to constraint the search space. The experiments are performed to determine the accuracy in three buildings of Yeungnam University, Republic of Korea with different path lengths and path geometry. The results demonstrate that the error is 2⁻3 m for 50 percentile with various buildings. Even though many locations in the same building exhibit very similar magnetic attitude, the algorithm achieves an accuracy of 4 m for 75 percentile irrespective of the device used for localization." }, { "pmid": "26184230", "title": "MagicFinger: 3D Magnetic Fingerprints for Indoor Location.", "abstract": "Given the indispensable role of mobile phones in everyday life, phone-centric sensing systems are ideal candidates for ubiquitous observation purposes. This paper presents a novel approach for mobile phone-centric observation applied to indoor location. The approach involves a location fingerprinting methodology that takes advantage of the presence of magnetic field anomalies inside buildings. Unlike existing work on the subject, which uses the intensity of magnetic field for fingerprinting, our approach uses all three components of the measured magnetic field vectors to improve accuracy. By using adequate soft computing techniques, it is possible to adequately balance the constraints of common solutions. The resulting system does not rely on any infrastructure devices and therefore is easy to manage and deploy. 
The proposed system consists of two phases: the offline phase and the online phase. In the offline phase, magnetic field measurements are taken throughout the building, and 3D maps are generated. Then, during the online phase, the user's location is estimated through the best estimator for each zone of the building. Experimental evaluations carried out in two different buildings confirm the satisfactory performance of indoor location based on magnetic field vectors. These evaluations provided an error of (11.34 m, 4.78 m) in the (x; y) components of the estimated positions in the first building where the experiments were carried out, with a standard deviation of (3.41 m, 4.68 m); and in the second building, an error of (4 m, 2.98 m) with a deviation of (2.64 m, 2.33 m)." }, { "pmid": "28587088", "title": "LOCALI: Calibration-Free Systematic Localization Approach for Indoor Positioning.", "abstract": "Recent advancements in indoor positioning systems are based on infrastructure-free solutions, aimed at improving the location accuracy in complex indoor environments without the use of specialized resources. A popular infrastructure-free solution for indoor positioning is a calibration-based positioning, commonly known as fingerprinting. Fingerprinting solutions require extensive and error-free surveys of environments to build radio-map databases, which play a key role in position estimation. Fingerprinting also requires random updates of the database, when there are significant changes in the environment or a decrease in the accuracy. The calibration of the fingerprinting database is a time-consuming and laborious effort that prevents the extensive adoption of this technique. In this paper, we present a systematic LOCALIzation approach, \"LOCALI\", for indoor positioning, which does not require a calibration database and extensive updates. The LOCALI exploits the floor plan/wall map of the environment to estimate the target position by generating radio maps by integrating path-losses over certain trajectories in complex indoor environments, where triangulation using time information or the received signal strength level is highly erroneous due to the fading effects caused by multi-path propagation or absorption by environmental elements or varying antenna alignment. Experimental results demonstrate that by using the map information and environmental parameters, a significant level of accuracy in indoor positioning can be achieved. Moreover, this process requires considerably lesser effort compared to the calibration-based techniques." }, { "pmid": "26907295", "title": "BlueDetect: An iBeacon-Enabled Scheme for Accurate and Energy-Efficient Indoor-Outdoor Detection and Seamless Location-Based Service.", "abstract": "The location and contextual status (indoor or outdoor) is fundamental and critical information for upper-layer applications, such as activity recognition and location-based services (LBS) for individuals. In addition, optimizations of building management systems (BMS), such as the pre-cooling or heating process of the air-conditioning system according to the human traffic entering or exiting a building, can utilize the information, as well. The emerging mobile devices, which are equipped with various sensors, become a feasible and flexible platform to perform indoor-outdoor (IO) detection. However, power-hungry sensors, such as GPS and WiFi, should be used with caution due to the constrained battery storage on mobile device. 
We propose BlueDetect: an accurate, fast response and energy-efficient scheme for IO detection and seamless LBS running on the mobile device based on the emerging low-power iBeacon technology. By leveraging the on-broad Bluetooth module and our proposed algorithms, BlueDetect provides a precise IO detection service that can turn on/off on-board power-hungry sensors smartly and automatically, optimize their performances and reduce the power consumption of mobile devices simultaneously. Moreover, seamless positioning and navigation services can be realized by it, especially in a semi-outdoor environment, which cannot be achieved by GPS or an indoor positioning system (IPS) easily. We prototype BlueDetect on Android mobile devices and evaluate its performance comprehensively. The experimental results have validated the superiority of BlueDetect in terms of IO detection accuracy, localization accuracy and energy consumption." }, { "pmid": "27669252", "title": "Indoor-Outdoor Detection Using a Smart Phone Sensor.", "abstract": "In the era of mobile internet, Location Based Services (LBS) have developed dramatically. Seamless Indoor and Outdoor Navigation and Localization (SNAL) has attracted a lot of attention. No single positioning technology was capable of meeting the various positioning requirements in different environments. Selecting different positioning techniques for different environments is an alternative method. Detecting the users' current environment is crucial for this technique. In this paper, we proposed to detect the indoor/outdoor environment automatically without high energy consumption. The basic idea was simple: we applied a machine learning algorithm to classify the neighboring Global System for Mobile (GSM) communication cellular base station's signal strength in different environments, and identified the users' current context by signal pattern recognition. We tested the algorithm in four different environments. The results showed that the proposed algorithm was capable of identifying open outdoors, semi-outdoors, light indoors and deep indoors environments with 100% accuracy using the signal strength of four nearby GSM stations. The required hardware and signal are widely available in our daily lives, implying its high compatibility and availability." }, { "pmid": "26353228", "title": "Learning Nonlinear Functions Using Regularized Greedy Forest.", "abstract": "We consider the problem of learning a forest of nonlinear decision rules with general loss functions. The standard methods employ boosted decision trees such as Adaboost for exponential loss and Friedman's gradient boosting for general loss. In contrast to these traditional boosting algorithms that treat a tree learner as a black box, the method we propose directly learns decision forests via fully-corrective regularized greedy search using the underlying forest structure. Our method achieves higher accuracy and smaller models than gradient boosting on many of the datasets we have tested on." }, { "pmid": "21637787", "title": "Multi-scale approach for predicting fish species distributions across coral reef seascapes.", "abstract": "Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. 
Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high resolution bathymetry at five spatial scales (5-300 m radii) surrounding fish survey sites. Model performance and map accuracy was assessed for two high performing machine-learning algorithms: Boosted Regression Trees (BRT) and Maximum Entropy Species Distribution Modelling (MaxEnt). The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contribution differed among species, yet rarely changed across spatial scales. BRT provided 'outstanding' model predictions (AUC = >0.9) for three of five fish species. MaxEnt provided 'outstanding' model predictions for two of five species, with the remaining three models considered 'excellent' (AUC = 0.8-0.9). In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy) than BRT (68% map accuracy). We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between the geographical location across the shelf and the topographic heterogeneity of seafloor structure. This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support conservation prioritization in marine protected area design, zoning in marine spatial planning, and ecosystem-based fisheries management." }, { "pmid": "18397250", "title": "A working guide to boosted regression trees.", "abstract": "1. Ecologists use statistical models for both explanation and prediction, and need techniques that are flexible enough to express typical features of their data, such as nonlinearities and interactions. 2. This study provides a working guide to boosted regression trees (BRT), an ensemble method for fitting statistical models that differs fundamentally from conventional techniques that aim to fit a single parsimonious model. Boosted regression trees combine the strengths of two algorithms: regression trees (models that relate a response to their predictors by recursive binary splits) and boosting (an adaptive method for combining many simple models to give improved predictive performance). The final BRT model can be understood as an additive regression model in which individual terms are simple trees, fitted in a forward, stagewise fashion. 3. Boosted regression trees incorporate important advantages of tree-based methods, handling different types of predictor variables and accommodating missing data. They have no need for prior data transformation or elimination of outliers, can fit complex nonlinear relationships, and automatically handle interaction effects between predictors. Fitting multiple trees in BRT overcomes the biggest drawback of single tree models: their relatively poor predictive performance. Although BRT models are complex, they can be summarized in ways that give powerful ecological insight, and their predictive performance is superior to most traditional modelling methods. 4. The unique features of BRT raise a number of practical issues in model fitting. 
We demonstrate the practicalities and advantages of using BRT through a distributional analysis of the short-finned eel (Anguilla australis Richardson), a native freshwater fish of New Zealand. We use a data set of over 13 000 sites to illustrate effects of several settings, and then fit and interpret a model using a subset of the data. We provide code and a tutorial to enable the wider use of BRT by ecologists." } ]
Scientific Reports
30397240
PMC6218547
10.1038/s41598-018-34688-x
AutoImpute: Autoencoder based imputation of single-cell RNA-seq data
The emergence of single-cell RNA sequencing (scRNA-seq) technologies has enabled us to measure the expression levels of thousands of genes at single-cell resolution. However, insufficient quantities of starting RNA in the individual cells cause significant dropout events, introducing a large number of zero counts in the expression matrix. To circumvent this, we developed an autoencoder-based sparse gene expression matrix imputation method, AutoImpute, which learns the inherent distribution of the input scRNA-seq data and imputes the missing values accordingly with minimal modification to the biologically silent genes. When tested on real scRNA-seq datasets, AutoImpute performed competitively with respect to the existing single-cell imputation methods in terms of expression recovery from subsampled data, cell-clustering accuracy, variance stabilization and cell-type separability.
Related Work
Recently, attempts have been made to devise imputation methods for single-cell RNA sequencing data; the most notable among these are MAGIC, scImpute, and drImpute11–13. MAGIC uses a neighborhood-based Markov-affinity matrix and shares weight information across cells to generate an imputed count matrix.
On the other hand, for a zero expression value, scImpute first estimates the probability of it being a dropout. It uses a Gamma-Normal mixture model to take into account the dropout events. Zero expressions that are likely to be dropouts are then estimated by borrowing information from similar cells. scImpute has been shown to be superior to MAGIC. Another method, drImpute, repeatedly identifies similar cells based on clustering and performs imputation multiple times by averaging the expression values from similar cells.
Our approach, AutoImpute, is motivated by a similar problem14 of sparse matrix imputation frequently encountered in recommender systems, a.k.a. collaborative filtering, in information retrieval. The problem is well illustrated with the following example. When designing a recommender system for movies (as in Netflix), we are given a user-movie rating matrix in which each entry (i, j) represents the rating of movie j by user i if user i has watched movie j, and is missing otherwise. The problem then is to predict the remaining entries of the user-movie matrix in order to make suitable movie recommendations to the users.
With the aim of imputing the sparse user-movie rating matrix in the aforementioned problem, various algorithms have been proposed, the most popular of which are Matrix Factorization15,16 and Neighborhood Models17. The use of latent factor models, such as those based on autoencoders18, has been rising, stemming from the recent successes of (deep) neural network models for vision and speech tasks. Justifying their popularity in recent years, autoencoder-based matrix imputation methods outperform the current state-of-the-art methods. We therefore adopt and deploy this idea to address the problem of dropouts in scRNA-seq data.
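To illustrate the general idea of autoencoder-based imputation with a masked reconstruction loss, the sketch below trains a small autoencoder on a cells-by-genes matrix and computes the loss only on observed (non-zero) entries, then fills in the zeros from the reconstruction. The layer sizes, optimizer settings, and the simple zero-masking rule are assumptions for illustration and do not reproduce AutoImpute's exact architecture or objective.

```python
import torch
import torch.nn as nn

class ExprAutoencoder(nn.Module):
    """Small dense autoencoder over gene expression profiles."""
    def __init__(self, n_genes, hidden=256, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_genes, hidden), nn.ReLU(),
                                 nn.Linear(hidden, latent), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_genes), nn.Softplus())

    def forward(self, x):
        return self.dec(self.enc(x))

def impute(expr, epochs=200, lr=1e-3):
    """expr: (cells x genes) log-transformed counts with many zeros."""
    x = torch.as_tensor(expr, dtype=torch.float32)
    mask = (x > 0).float()                       # trust only observed (non-zero) entries
    model = ExprAutoencoder(x.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        recon = model(x)
        loss = ((recon - x) ** 2 * mask).sum() / mask.sum()   # masked reconstruction error
        loss.backward()
        opt.step()
    with torch.no_grad():
        recon = model(x)
    # keep observed values, fill only the zeros with reconstructed estimates
    return torch.where(mask.bool(), x, recon).numpy()
```

Restricting the loss to observed entries is what keeps the zeros that may be dropouts from dominating training, while leaving measured values and biologically silent genes largely untouched at output time.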
[ "26000846", "24836921", "25628217", "24747814", "27052890", "29788498", "27317252", "26431182", "26060301", "25420068", "25700174" ]
[ { "pmid": "26000846", "title": "The technology and biology of single-cell RNA sequencing.", "abstract": "The differences between individual cells can have profound functional consequences, in both unicellular and multicellular organisms. Recently developed single-cell mRNA-sequencing methods enable unbiased, high-throughput, and high-resolution transcriptomic analysis of individual cells. This provides an additional dimension to transcriptomic information relative to traditional methods that profile bulk populations of cells. Already, single-cell RNA-sequencing methods have revealed new biology in terms of the composition of tissues, the dynamics of transcription, and the regulatory relationships between genes. Rapid technological developments at the level of cell capture, phenotyping, molecular biology, and bioinformatics promise an exciting future with numerous biological and medical applications." }, { "pmid": "24836921", "title": "Bayesian approach to single-cell differential expression analysis.", "abstract": "Single-cell data provide a means to dissect the composition of complex tissues and specialized cellular environments. However, the analysis of such measurements is complicated by high levels of technical noise and intrinsic biological variability. We describe a probabilistic model of expression-magnitude distortions typical of single-cell RNA-sequencing measurements, which enables detection of differential expression signatures and identification of subpopulations of cells in a way that is more tolerant of noise." }, { "pmid": "25628217", "title": "Computational and analytical challenges in single-cell transcriptomics.", "abstract": "The development of high-throughput RNA sequencing (RNA-seq) at the single-cell level has already led to profound new discoveries in biology, ranging from the identification of novel cell types to the study of global patterns of stochastic gene expression. Alongside the technological breakthroughs that have facilitated the large-scale generation of single-cell transcriptomic data, it is important to consider the specific computational and analytical challenges that still have to be overcome. Although some tools for analysing RNA-seq data from bulk cell populations can be readily applied to single-cell RNA-seq data, many new computational strategies are required to fully exploit this data type and to enable a comprehensive yet detailed study of gene expression at the single-cell level." }, { "pmid": "24747814", "title": "Validation of noise models for single-cell transcriptomics.", "abstract": "Single-cell transcriptomics has recently emerged as a powerful technology to explore gene expression heterogeneity among single cells. Here we identify two major sources of technical variability: sampling noise and global cell-to-cell variation in sequencing efficiency. We propose noise models to correct for this, which we validate using single-molecule FISH. We demonstrate that gene expression variability in mouse embryonic stem cells depends on the culture condition." }, { "pmid": "27052890", "title": "Design and computational analysis of single-cell RNA-sequencing experiments.", "abstract": "Single-cell RNA-sequencing (scRNA-seq) has emerged as a revolutionary tool that allows us to address scientific questions that eluded examination just a few years ago. With the advantages of scRNA-seq come computational challenges that are just beginning to be addressed. 
In this article, we highlight the computational methods available for the design and analysis of scRNA-seq experiments, their advantages and disadvantages in various settings, the open questions for which novel methods are needed, and expected future developments in this exciting area." }, { "pmid": "29788498", "title": "CellAtlasSearch: a scalable search engine for single cells.", "abstract": "Owing to the advent of high throughput single cell transcriptomics, past few years have seen exponential growth in production of gene expression data. Recently efforts have been made by various research groups to homogenize and store single cell expression from a large number of studies. The true value of this ever increasing data deluge can be unlocked by making it searchable. To this end, we propose CellAtlasSearch, a novel search architecture for high dimensional expression data, which is massively parallel as well as light-weight, thus infinitely scalable. In CellAtlasSearch, we use a Graphical Processing Unit (GPU) friendly version of Locality Sensitive Hashing (LSH) for unmatched speedup in data processing and query. Currently, CellAtlasSearch features over 300 000 reference expression profiles including both bulk and single-cell data. It enables the user query individual single cell transcriptomes and finds matching samples from the database along with necessary meta information. CellAtlasSearch aims to assist researchers and clinicians in characterizing unannotated single cells. It also facilitates noise free, low dimensional representation of single-cell expression profiles by projecting them on a wide variety of reference samples. The web-server is accessible at: http://www.cellatlassearch.com." }, { "pmid": "27317252", "title": "Gene expression prediction using low-rank matrix completion.", "abstract": "BACKGROUND\nAn exponential growth of high-throughput biological information and data has occurred in the past decade, supported by technologies, such as microarrays and RNA-Seq. Most data generated using such methods are used to encode large amounts of rich information, and determine diagnostic and prognostic biomarkers. Although data storage costs have reduced, process of capturing data using aforementioned technologies is still expensive. Moreover, the time required for the assay, from sample preparation to raw value measurement is excessive (in the order of days). There is an opportunity to reduce both the cost and time for generating such expression datasets.\n\n\nRESULTS\nWe propose a framework in which complete gene expression values can be reliably predicted in-silico from partial measurements. This is achieved by modelling expression data as a low-rank matrix and then applying recently discovered techniques of matrix completion by using nonlinear convex optimisation. We evaluated prediction of gene expression data based on 133 studies, sourced from a combined total of 10,921 samples. It is shown that such datasets can be constructed with a low relative error even at high missing value rates (>50 %), and that such predicted datasets can be reliably used as surrogates for further analysis.\n\n\nCONCLUSION\nThis method has potentially far-reaching applications including how bio-medical data is sourced and generated, and transcriptomic prediction by optimisation. We show that gene expression data can be computationally constructed, thereby potentially reducing the costs of gene expression profiling. 
In conclusion, this method shows great promise of opening new avenues in research on low-rank matrix completion in biological sciences." }, { "pmid": "26431182", "title": "Single Cell RNA-Sequencing of Pluripotent States Unlocks Modular Transcriptional Variation.", "abstract": "Embryonic stem cell (ESC) culture conditions are important for maintaining long-term self-renewal, and they influence cellular pluripotency state. Here, we report single cell RNA-sequencing of mESCs cultured in three different conditions: serum, 2i, and the alternative ground state a2i. We find that the cellular transcriptomes of cells grown in these conditions are distinct, with 2i being the most similar to blastocyst cells and including a subpopulation resembling the two-cell embryo state. Overall levels of intercellular gene expression heterogeneity are comparable across the three conditions. However, this masks variable expression of pluripotency genes in serum cells and homogeneous expression in 2i and a2i cells. Additionally, genes related to the cell cycle are more variably expressed in the 2i and a2i conditions. Mining of our dataset for correlations in gene expression allowed us to identify additional components of the pluripotency network, including Ptma and Zfp640, illustrating its value as a resource for future discovery." }, { "pmid": "26060301", "title": "A survey of human brain transcriptome diversity at the single cell level.", "abstract": "The human brain is a tissue of vast complexity in terms of the cell types it comprises. Conventional approaches to classifying cell types in the human brain at single cell resolution have been limited to exploring relatively few markers and therefore have provided a limited molecular characterization of any given cell type. We used single cell RNA sequencing on 466 cells to capture the cellular complexity of the adult and fetal human brain at a whole transcriptome level. Healthy adult temporal lobe tissue was obtained during surgical procedures where otherwise normal tissue was removed to gain access to deeper hippocampal pathology in patients with medical refractory seizures. We were able to classify individual cells into all of the major neuronal, glial, and vascular cell types in the brain. We were able to divide neurons into individual communities and show that these communities preserve the categorization of interneuron subtypes that is typically observed with the use of classic interneuron markers. We then used single cell RNA sequencing on fetal human cortical neurons to identify genes that are differentially expressed between fetal and adult neurons and those genes that display an expression gradient that reflects the transition between replicating and quiescent fetal neuronal populations. Finally, we observed the expression of major histocompatibility complex type I genes in a subset of adult neurons, but not fetal neurons. The work presented here demonstrates the applicability of single cell RNA sequencing on the study of the adult human brain and constitutes a first step toward a comprehensive cellular atlas of the human brain." }, { "pmid": "25420068", "title": "Unbiased classification of sensory neuron types by large-scale single-cell RNA sequencing.", "abstract": "The primary sensory system requires the integrated function of multiple cell types, although its full complexity remains unclear. 
We used comprehensive transcriptome analysis of 622 single mouse neurons to classify them in an unbiased manner, independent of any a priori knowledge of sensory subtypes. Our results reveal eleven types: three distinct low-threshold mechanoreceptive neurons, two proprioceptive, and six principal types of thermosensitive, itch sensitive, type C low-threshold mechanosensitive and nociceptive neurons with markedly different molecular and operational properties. Confirming previously anticipated major neuronal types, our results also classify and provide markers for new, functionally distinct subtypes. For example, our results suggest that itching during inflammatory skin diseases such as atopic dermatitis is linked to a distinct itch-generating type. We demonstrate single-cell RNA-seq as an effective strategy for dissecting sensory responsive cells into distinct neuronal types. The resulting catalog illustrates the diversity of sensory types and the cellular complexity underlying somatic sensation." }, { "pmid": "25700174", "title": "Brain structure. Cell types in the mouse cortex and hippocampus revealed by single-cell RNA-seq.", "abstract": "The mammalian cerebral cortex supports cognitive functions such as sensorimotor integration, memory, and social behaviors. Normal brain function relies on a diverse set of differentiated cell types, including neurons, glia, and vasculature. Here, we have used large-scale single-cell RNA sequencing (RNA-seq) to classify cells in the mouse somatosensory cortex and hippocampal CA1 region. We found 47 molecularly distinct subclasses, comprising all known major cell types in the cortex. We identified numerous marker genes, which allowed alignment with known cell types, morphology, and location. We found a layer I interneuron expressing Pax6 and a distinct postmitotic oligodendrocyte subclass marked by Itpr2. Across the diversity of cortical cell types, transcription factors formed a complex, layered regulatory code, suggesting a mechanism for the maintenance of adult cell type identity." } ]
PLoS Computational Biology
30359364
PMC6219815
10.1371/journal.pcbi.1006518
Modeling sensory-motor decisions in natural behavior
Although a standard reinforcement learning model can capture many aspects of reward-seeking behaviors, it may not be practical for modeling human natural behaviors because of the richness of dynamic environments and limitations in cognitive resources. We propose a modular reinforcement learning model that addresses these factors. Based on this model, a modular inverse reinforcement learning algorithm is developed to estimate both the rewards and discount factors from human behavioral data, which allows predictions of human navigation behaviors in virtual reality with high accuracy across different subjects and with different tasks. Complex human navigation trajectories in novel environments can be reproduced by an artificial agent that is based on the modular model. This model provides a strategy for estimating the subjective value of actions and how they influence sensory-motor decisions in natural behavior.
Related work in reinforcement learning

The proposed modular IRL algorithm is an extension and refinement of [19], which introduced the first modular IRL and demonstrated its effectiveness using a simulated avatar. The navigation tasks are similar, but we use data from actual human subjects. Whereas that work used a simulated human avatar moving along a straight path, our curved path proves quite different in practice and significantly more challenging for both humans and virtual agents. We then generalize the state space to let the agent consider multiple objects for each module, whereas the original work assumes the agent considers only the nearest object of each module.

Bayesian IRL was first introduced by [36] as a principled way of approaching an ill-posed reward-learning problem. Existing work using Bayesian IRL usually experiments in discretized gridworlds with no more than 1,000 states; an exception is the work of [39], which was able to test on a goal-oriented MDP with 20,518 states using hierarchical Bayesian IRL.

The modular RL architecture proposed in this work is most similar to recent work in [40], which decomposes the reward function in the same way as modular reinforcement learning. Their focus is not on modeling human behavior, but rather on using deep reinforcement learning to learn a separate value function for each subtask and combining them to obtain a good policy. Other examples of divide-and-conquer approaches in RL include factored MDPs [41] and co-articulation [42].

Hierarchical RL [43, 44] uses the idea of temporal abstraction to allow more efficient computation of the policy. The authors of [45] analyze human decision data in spatial navigation tasks and the Tower of Hanoi; they suggest that human subjects learn to decompose tasks and construct action hierarchies in an optimal way. In contrast with that approach, modular RL assumes a parallel decomposition of the task. The difference can be visualized in Fig 7. These two approaches are complementary, and both are important for understanding and reproducing natural behaviors. For example, a hierarchical RL agent could have multiple concurrent options [43, 44] executing at a given time for different behavioral objectives. Another possibility is to extend modular RL into a two-level hierarchical system, in which learned module policies are stored and a higher-level scheduler or arbitrator decides which modules to activate or deactivate given the current context, as well as the protocol for synthesizing module policies. An example of this type of architecture can be found in [2].

Fig 7. Modular reinforcement learning (left) vs. hierarchical reinforcement learning (right). Modular RL assumes modules run concurrently and do not extend over multiple time steps; hierarchical RL assumes that a single option may extend over multiple time steps.
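To make the module-combination idea concrete, the sketch below illustrates one common way modular RL policies are synthesized: each module carries its own reward weight and discount factor (the quantities a modular IRL procedure would estimate from observed trajectories), per-module action values are summed, and an action is drawn from a softmax over the combined values. This is a minimal illustrative sketch, not the authors' implementation; the module names (`path`, `obstacle`, `target`), the toy distance-based value function, and all parameter values are assumptions introduced here purely for illustration.

```python
# Illustrative sketch (not the paper's code): modular RL action selection.
# Each module m has its own reward weight w[m] and discount gamma[m]; the
# agent combines per-module action values additively before choosing.
import numpy as np

def module_q(distances, w, gamma, actions):
    """Toy per-module Q-values: value decays with the (discounted) distance
    an action leaves between the agent and the module's relevant object."""
    return np.array([w * (gamma ** max(distances[a], 0.0)) for a in actions])

def select_action(state, modules, actions, beta=5.0, rng=None):
    """Sum module Q-values and sample an action with a softmax rule."""
    rng = rng or np.random.default_rng(0)
    q_total = np.zeros(len(actions))
    for m in modules:
        q_total += module_q(state[m["name"]], m["w"], m["gamma"], actions)
    p = np.exp(beta * (q_total - q_total.max()))
    p /= p.sum()
    return rng.choice(actions, p=p), q_total

# Hypothetical state: distance-to-nearest-object per module, per action.
actions = [0, 1, 2]                      # e.g., veer left / go straight / veer right
state = {"path":     {0: 2.0, 1: 0.5, 2: 2.0},
         "obstacle": {0: 1.0, 1: 3.0, 2: 4.0},   # larger distance is safer
         "target":   {0: 3.0, 1: 1.5, 2: 0.5}}
modules = [{"name": "path",     "w":  1.0, "gamma": 0.7},
           {"name": "obstacle", "w": -1.0, "gamma": 0.9},
           {"name": "target",   "w":  1.5, "gamma": 0.8}]
action, q = select_action(state, modules, actions)
print(action, np.round(q, 3))
```

In an inverse-RL setting, the per-module weights and discount factors would instead be treated as unknowns and fitted, for example by maximizing the likelihood of the observed human actions under the resulting softmax policy, which is the direction taken by the modular IRL work discussed above.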
[ "28715958", "18217811", "22647641", "14973239", "12374324", "17374483", "10706212", "22462543", "16938431", "21435563", "18368048", "15235607", "25004371", "23832417", "14692633", "23713205", "19864565", "18434531", "24659960", "19309533", "25291805", "25122479", "22766486", "27389780", "27190012", "19439601", "25719670", "24395971", "25392517", "29657116" ]
[ { "pmid": "28715958", "title": "Vision and Action.", "abstract": "Investigation of natural behavior has contributed a number of insights to our understanding of visual guidance of actions by highlighting the importance of behavioral goals and focusing attention on how vision and action play out in time. In this context, humans make continuous sequences of sensory-motor decisions to satisfy current behavioral goals, and the role of vision is to provide the relevant information for making good decisions in order to achieve those goals. This conceptualization of visually guided actions as a sequence of sensory-motor decisions has been formalized within the framework of statistical decision theory, which structures the problem and provides the context for much recent progress in vision and action. Components of a good decision include the task, which defines the behavioral goals, the rewards and costs associated with those goals, uncertainty about the state of the world, and prior knowledge." }, { "pmid": "18217811", "title": "Task and context determine where you look.", "abstract": "The deployment of human gaze has been almost exclusively studied independent of any specific ongoing task and limited to two-dimensional picture viewing. This contrasts with its use in everyday life, which mostly consists of purposeful tasks where gaze is crucially involved. To better understand deployment of gaze under such circumstances, we devised a series of experiments, in which subjects navigated along a walkway in a virtual environment and executed combinations of approach and avoidance tasks. The position of the body and the gaze were monitored during the execution of the task combinations and dependence of gaze on the ongoing tasks as well as the visual features of the scene was analyzed. Gaze distributions were compared to a random gaze allocation strategy as well as a specific \"saliency model.\" Gaze distributions showed high similarity across subjects. Moreover, the precise fixation locations on the objects depended on the ongoing task to the point that the specific tasks could be predicted from the subject's fixation data. By contrast, gaze allocation according to a random or a saliency model did not predict the executed fixations or the observed dependence of fixation locations on the specific task." }, { "pmid": "22647641", "title": "Motor control is decision-making.", "abstract": "Motor behavior may be viewed as a problem of maximizing the utility of movement outcome in the face of sensory, motor and task uncertainty. Viewed in this way, and allowing for the availability of prior knowledge in the form of a probability distribution over possible states of the world, the choice of a movement plan and strategy for motor control becomes an application of statistical decision theory. This point of view has proven successful in recent years in accounting for movement under risk, inferring the loss function used in motor tasks, and explaining motor behavior in a wide variety of circumstances." }, { "pmid": "14973239", "title": "A neural correlate of reward-based behavioral learning in caudate nucleus: a functional magnetic resonance imaging study of a stochastic decision task.", "abstract": "Humans can acquire appropriate behaviors that maximize rewards on a trial-and-error basis. Recent electrophysiological and imaging studies have demonstrated that neural activity in the midbrain and ventral striatum encodes the error of reward prediction. 
However, it is yet to be examined whether the striatum is the main locus of reward-based behavioral learning. To address this, we conducted functional magnetic resonance imaging (fMRI) of a stochastic decision task involving monetary rewards, in which subjects had to learn behaviors involving different task difficulties that were controlled by probability. We performed a correlation analysis of fMRI data by using the explanatory variables derived from subject behaviors. We found that activity in the caudate nucleus was correlated with short-term reward and, furthermore, paralleled the magnitude of a subject's behavioral change during learning. In addition, we confirmed that this parallelism between learning and activity in the caudate nucleus is robustly maintained even when we vary task difficulty by controlling the probability. These findings suggest that the caudate nucleus is one of the main loci for reward-based behavioral learning." }, { "pmid": "12374324", "title": "The neural basis of human error processing: reinforcement learning, dopamine, and the error-related negativity.", "abstract": "The authors present a unified account of 2 neural systems concerned with the development and expression of adaptive behaviors: a mesencephalic dopamine system for reinforcement learning and a \"generic\" error-processing system associated with the anterior cingulate cortex. The existence of the error-processing system has been inferred from the error-related negativity (ERN), a component of the event-related brain potential elicited when human participants commit errors in reaction-time tasks. The authors propose that the ERN is generated when a negative reinforcement learning signal is conveyed to the anterior cingulate cortex via the mesencephalic dopamine system and that this signal is used by the anterior cingulate cortex to modify performance on the task at hand. They provide support for this proposal using both computational modeling and psychophysiological experimentation." }, { "pmid": "17374483", "title": "Efficient reinforcement learning: computational theories, neuroscience and robotics.", "abstract": "Reinforcement learning algorithms have provided some of the most influential computational theories for behavioral learning that depends on reward and penalty. After briefly reviewing supporting experimental data, this paper tackles three difficult theoretical issues that remain to be explored. First, plain reinforcement learning is much too slow to be considered a plausible brain model. Second, although the temporal-difference error has an important role both in theory and in experiments, how to compute it remains an enigma. Third, function of all brain areas, including the cerebral cortex, cerebellum, brainstem and basal ganglia, seems to necessitate a new computational framework. Computational studies that emphasize meta-parameters, hierarchy, modularity and supervised learning to resolve these issues are reviewed here, together with the related experimental data." }, { "pmid": "10706212", "title": "A model of hippocampally dependent navigation, using the temporal difference learning rule.", "abstract": "This paper presents a model of how hippocampal place cells might be used for spatial navigation in two watermaze tasks: the standard reference memory task and a delayed matching-to-place task. 
In the reference memory task, the escape platform occupies a single location and rats gradually learn relatively direct paths to the goal over the course of days, in each of which they perform a fixed number of trials. In the delayed matching-to-place task, the escape platform occupies a novel location on each day, and rats gradually acquire one-trial learning, i.e., direct paths on the second trial of each day. The model uses a local, incremental, and statistically efficient connectionist algorithm called temporal difference learning in two distinct components. The first is a reinforcement-based \"actor-critic\" network that is a general model of classical and instrumental conditioning. In this case, it is applied to navigation, using place cells to provide information about state. By itself, the actor-critic can learn the reference memory task, but this learning is inflexible to changes to the platform location. We argue that one-trial learning in the delayed matching-to-place task demands a goal-independent representation of space. This is provided by the second component of the model: a network that uses temporal difference learning and self-motion information to acquire consistent spatial coordinates in the environment. Each component of the model is necessary at a different stage of the task; the actor-critic provides a way of transferring control to the component that performs best. The model successfully captures gradual acquisition in both tasks, and, in particular, the ultimate development of one-trial learning in the delayed matching-to-place task. Place cells report a form of stable, allocentric information that is well-suited to the various kinds of learning in the model." }, { "pmid": "22462543", "title": "Neural basis of reinforcement learning and decision making.", "abstract": "Reinforcement learning is an adaptive process in which an animal utilizes its previous experience to improve the outcomes of future choices. Computational theories of reinforcement learning play a central role in the newly emerging areas of neuroeconomics and decision neuroscience. In this framework, actions are chosen according to their value functions, which describe how much future reward is expected from each action. Value functions can be adjusted not only through reward and penalty, but also by the animal's knowledge of its current environment. Studies have revealed that a large proportion of the brain is involved in representing and updating value functions and using them to choose an action. However, how the nature of a behavioral task affects the neural mechanisms of reinforcement learning remains incompletely understood. Future studies should uncover the principles by which different computational elements of reinforcement learning are dynamically coordinated across the entire brain." }, { "pmid": "16938431", "title": "Neural systems implicated in delayed and probabilistic reinforcement.", "abstract": "This review considers the theoretical problems facing agents that must learn and choose on the basis of reward or reinforcement that is uncertain or delayed, in implicit or procedural (stimulus-response) representational systems and in explicit or declarative (action-outcome-value) representational systems. Individual differences in sensitivity to delays and uncertainty may contribute to impulsivity and risk taking. Learning and choice with delayed and uncertain reinforcement are related but in some cases dissociable processes. 
The contributions to delay and uncertainty discounting of neuromodulators including serotonin, dopamine, and noradrenaline, and of specific neural structures including the nucleus accumbens core, nucleus accumbens shell, orbitofrontal cortex, basolateral amygdala, anterior cingulate cortex, medial prefrontal (prelimbic/infralimbic) cortex, insula, subthalamic nucleus, and hippocampus are examined." }, { "pmid": "21435563", "title": "Model-based influences on humans' choices and striatal prediction errors.", "abstract": "The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making." }, { "pmid": "18368048", "title": "Modulators of decision making.", "abstract": "Human and animal decisions are modulated by a variety of environmental and intrinsic contexts. Here I consider computational factors that can affect decision making and review anatomical structures and neurochemical systems that are related to contextual modulation of decision making. Expectation of a high reward can motivate a subject to go for an action despite a large cost, a decision that is influenced by dopamine in the anterior cingulate cortex. Uncertainty of action outcomes can promote risk taking and exploratory choices, in which norepinephrine and the orbitofrontal cortex appear to be involved. Predictable environments should facilitate consideration of longer-delayed rewards, which depends on serotonin in the dorsal striatum and dorsal prefrontal cortex. This article aims to sort out factors that affect the process of decision making from the viewpoint of reinforcement learning theory and to bridge between such computational needs and their neurophysiological substrates." }, { "pmid": "15235607", "title": "Prediction of immediate and future rewards differentially recruits cortico-basal ganglia loops.", "abstract": "Evaluation of both immediate and future outcomes of one's actions is a critical requirement for intelligent behavior. Using functional magnetic resonance imaging (fMRI), we investigated brain mechanisms for reward prediction at different time scales in a Markov decision task. When human subjects learned actions on the basis of immediate rewards, significant activity was seen in the lateral orbitofrontal cortex and the striatum. When subjects learned to act in order to obtain large future rewards while incurring small immediate losses, the dorsolateral prefrontal cortex, inferior parietal cortex, dorsal raphe nucleus and cerebellum were also activated. 
Computational model-based regression analysis using the predicted future rewards and prediction errors estimated from subjects' performance data revealed graded maps of time scale within the insula and the striatum: ventroanterior regions were involved in predicting immediate rewards and dorsoposterior regions were involved in predicting future rewards. These results suggest differential involvement of the cortico-basal ganglia loops in reward prediction at different time scales." }, { "pmid": "25004371", "title": "Modeling task control of eye movements.", "abstract": "In natural behavior, visual information is actively sampled from the environment by a sequence of gaze changes. The timing and choice of gaze targets, and the accompanying attentional shifts, are intimately linked with ongoing behavior. Nonetheless, modeling of the deployment of these fixations has been very difficult because they depend on characterizing the underlying task structure. Recently, advances in eye tracking during natural vision, together with the development of probabilistic modeling techniques, have provided insight into how the cognitive agenda might be included in the specification of fixations. These techniques take advantage of the decomposition of complex behaviors into modular components. A particular subset of these models casts the role of fixation as that of providing task-relevant information that is rewarding to the agent, with fixation being selected on the basis of expected reward and uncertainty about environmental state. We review this work here and describe how specific examples can reveal general principles in gaze control." }, { "pmid": "23832417", "title": "Modular inverse reinforcement learning for visuomotor behavior.", "abstract": "In a large variety of situations one would like to have an expressive and accurate model of observed animal or human behavior. While general purpose mathematical models may capture successfully properties of observed behavior, it is desirable to root models in biological facts. Because of ample empirical evidence for reward-based learning in visuomotor tasks, we use a computational model based on the assumption that the observed agent is balancing the costs and benefits of its behavior to meet its goals. This leads to using the framework of reinforcement learning, which additionally provides well-established algorithms for learning of visuomotor task solutions. To quantify the agent's goals as rewards implicit in the observed behavior, we propose to use inverse reinforcement learning, which quantifies the agent's goals as rewards implicit in the observed behavior. Based on the assumption of a modular cognitive architecture, we introduce a modular inverse reinforcement learning algorithm that estimates the relative reward contributions of the component tasks in navigation, consisting of following a path while avoiding obstacles and approaching targets. It is shown how to recover the component reward weights for individual tasks and that variability in observed trajectories can be explained succinctly through behavioral goals. It is demonstrated through simulations that good estimates can be obtained already with modest amounts of observation data, which in turn allows the prediction of behavior in novel configurations." 
}, { "pmid": "14692633", "title": "Inter-module credit assignment in modular reinforcement learning.", "abstract": "Critical issues in modular or hierarchical reinforcement learning (RL) are (i) how to decompose a task into sub-tasks, (ii) how to achieve independence of learning of sub-tasks, and (iii) how to assure optimality of the composite policy for the entire task. The second and last requirements are often under trade-off. We propose a method for propagating the reward for the entire task achievement between modules. This is done in the form of a 'modular reward', which is calculated from the temporal difference of the module gating signal and the value of the succeeding module. We implement modular reward for a multiple model-based reinforcement learning (MMRL) architecture and show its effectiveness in simulations of a pursuit task with hidden states and a continuous-time non-linear control task." }, { "pmid": "23713205", "title": "A hierarchical modular architecture for embodied cognition.", "abstract": "Cognition can appear complex owing to the fact that the brain is capable of an enormous repertoire of behaviors. However, this complexity can be greatly reduced when constraints of time and space are taken into account. The brain is constrained by the body to limit its goal-directed behaviors to just a few independent tasks over the scale of 1-2 min, and can pursue only a very small number of independent agendas. These limitations have been characterized from a number of different vantage points such as attention, working memory and dual task performance. It may be possible that the disparate perspectives of all these methodologies can be unified if behaviors can be seen as modular and hierarchically organized. From this vantage point, cognition can be seen as having a central problem of scheduling behaviors to achieve short-term goals. Thus dual-task paradigms can be seen as studying the concurrent management of simultaneous, competing agendas. Attention can be seen as focusing on the decision as to whether to interrupt the current agenda or persevere. Working memory can be seen as the bookkeeping necessary to manage the state of the current active agenda items." }, { "pmid": "19864565", "title": "Human reinforcement learning subdivides structured action spaces by learning effector-specific values.", "abstract": "Humans and animals are endowed with a large number of effectors. Although this enables great behavioral flexibility, it presents an equally formidable reinforcement learning problem of discovering which actions are most valuable because of the high dimensionality of the action space. An unresolved question is how neural systems for reinforcement learning-such as prediction error signals for action valuation associated with dopamine and the striatum-can cope with this \"curse of dimensionality.\" We propose a reinforcement learning framework that allows for learned action valuations to be decomposed into effector-specific components when appropriate to a task, and test it by studying to what extent human behavior and blood oxygen level-dependent (BOLD) activity can exploit such a decomposition in a multieffector choice task. Subjects made simultaneous decisions with their left and right hands and received separate reward feedback for each hand movement. 
We found that choice behavior was better described by a learning model that decomposed the values of bimanual movements into separate values for each effector, rather than a traditional model that treated the bimanual actions as unitary with a single value. A decomposition of value into effector-specific components was also observed in value-related BOLD signaling, in the form of lateralized biases in striatal correlates of prediction error and anticipatory value correlates in the intraparietal sulcus. These results suggest that the human brain can use decomposed value representations to \"divide and conquer\" reinforcement learning over high-dimensional action spaces." }, { "pmid": "18434531", "title": "Low-serotonin levels increase delayed reward discounting in humans.", "abstract": "Previous animal experiments have shown that serotonin is involved in the control of impulsive choice, as characterized by high preference for small immediate rewards over larger delayed rewards. Previous human studies under serotonin manipulation, however, have been either inconclusive on the effect on impulsivity or have shown an effect in the speed of action-reward learning or the optimality of action choice. Here, we manipulated central serotonergic levels of healthy volunteers by dietary tryptophan depletion and loading. Subjects performed a \"dynamic\" delayed reward choice task that required a continuous update of the reward value estimates to maximize total gain. By using a computational model of delayed reward choice learning, we estimated the parameters governing the subjects' reward choices in low-, normal, and high-serotonin conditions. We found an increase of proportion in small reward choices, together with an increase in the rate of discounting of delayed rewards in the low-serotonin condition compared with the control and high-serotonin conditions. There were no significant differences between conditions in the speed of learning of the estimated delayed reward values or in the variability of reward choice. Therefore, in line with previous animal experiments, our results show that low-serotonin levels steepen delayed reward discounting in humans. The combined results of our previous and current studies suggest that serotonin may adjust the rate of delayed reward discounting via the modulation of specific loops in parallel corticobasal ganglia circuits." }, { "pmid": "24659960", "title": "Does temporal discounting explain unhealthy behavior? A systematic review and reinforcement learning perspective.", "abstract": "The tendency to make unhealthy choices is hypothesized to be related to an individual's temporal discount rate, the theoretical rate at which they devalue delayed rewards. Furthermore, a particular form of temporal discounting, hyperbolic discounting, has been proposed to explain why unhealthy behavior can occur despite healthy intentions. We examine these two hypotheses in turn. We first systematically review studies which investigate whether discount rates can predict unhealthy behavior. These studies reveal that high discount rates for money (and in some instances food or drug rewards) are associated with several unhealthy behaviors and markers of health status, establishing discounting as a promising predictive measure. We secondly examine whether intention-incongruent unhealthy actions are consistent with hyperbolic discounting. 
We conclude that intention-incongruent actions are often triggered by environmental cues or changes in motivational state, whose effects are not parameterized by hyperbolic discounting. We propose a framework for understanding these state-based effects in terms of the interplay of two distinct reinforcement learning mechanisms: a \"model-based\" (or goal-directed) system and a \"model-free\" (or habitual) system. Under this framework, while discounting of delayed health may contribute to the initiation of unhealthy behavior, with repetition, many unhealthy behaviors become habitual; if health goals then change, habitual behavior can still arise in response to environmental cues. We propose that the burgeoning development of computational models of these processes will permit further identification of health decision-making phenotypes." }, { "pmid": "19309533", "title": "Image statistics at the point of gaze during human navigation.", "abstract": "Theories of efficient sensory processing have considered the regularities of image properties due to the structure of the environment in order to explain properties of neuronal representations of the visual world. The regularities imposed on the input to the visual system due to the regularities of the active selection process mediated by the voluntary movements of the eyes have been considered to a much lesser degree. This is surprising, given that the active nature of vision is well established. The present article investigates statistics of image features at the center of gaze of human subjects navigating through a virtual environment and avoiding and approaching different objects. The analysis shows that contrast can be significantly higher or lower at fixation location compared to random locations, depending on whether subjects avoid or approach targets. Similarly, significant differences in the distribution of responses of model simple and complex cells between horizontal and vertical orientations are found over timescales of tens of seconds. By clustering the model simple cell responses, it is established that gaze was directed toward three distinct features of intermediate complexity the vast majority of time. Thus, this study demonstrates and quantifies how the visuomotor tasks of approaching and avoiding objects during navigation determine feature statistics of the input to the visual system through the combined influence on body and eye movements." }, { "pmid": "25291805", "title": "Hierarchical Bayesian inverse reinforcement learning.", "abstract": "Inverse reinforcement learning (IRL) is the problem of inferring the underlying reward function from the expert's behavior data. The difficulty in IRL mainly arises in choosing the best reward function since there are typically an infinite number of reward functions that yield the given behavior data as optimal. Another difficulty comes from the noisy behavior data due to sub-optimal experts. We propose a hierarchical Bayesian framework, which subsumes most of the previous IRL algorithms as well as models the sub-optimality of the expert's behavior. Using a number of experiments on a synthetic problem, we demonstrate the effectiveness of our approach including the robustness of our hierarchical Bayesian framework to the sub-optimal expert behavior data. Using a real dataset from taxi GPS traces, we additionally show that our approach predicts the driving behavior with a high accuracy." 
}, { "pmid": "25122479", "title": "Optimal behavioral hierarchy.", "abstract": "Human behavior has long been recognized to display hierarchical structure: actions fit together into subtasks, which cohere into extended goal-directed activities. Arranging actions hierarchically has well established benefits, allowing behaviors to be represented efficiently by the brain, and allowing solutions to new tasks to be discovered easily. However, these payoffs depend on the particular way in which actions are organized into a hierarchy, the specific way in which tasks are carved up into subtasks. We provide a mathematical account for what makes some hierarchies better than others, an account that allows an optimal hierarchy to be identified for any set of tasks. We then present results from four behavioral experiments, suggesting that human learners spontaneously discover optimal action hierarchies." }, { "pmid": "22766486", "title": "The root of all value: a neural common currency for choice.", "abstract": "How do humans make choices between different types of rewards? Economists have long argued on theoretical grounds that humans typically make these choices as if the values of the options they consider have been mapped to a single common scale for comparison. Neuroimaging studies in humans have recently begun to suggest the existence of a small group of specific brain sites that appear to encode the subjective values of different types of rewards on a neural common scale, almost exactly as predicted by theory. We have conducted a meta analysis using data from thirteen different functional magnetic resonance imaging studies published in recent years and we show that the principle brain area associated with this common representation is a subregion of the ventromedial prefrontal cortex (vmPFC)/orbitofrontal cortex (OFC). The data available today suggest that this common valuation path is a core system that participates in day-to-day decision making suggesting both a neurobiological foundation for standard economic theory and a tool for measuring preferences neurobiologically. Perhaps even more exciting is the possibility that our emerging understanding of the neural mechanisms for valuation and choice may provide fundamental insights into pathological choice behaviors like addiction, obesity and gambling." }, { "pmid": "27389780", "title": "Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection.", "abstract": "The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes' equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called 'prototypic' and 'arkypallidal' neurons, that have different response properties in vivo and distinct connections. 
We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions." }, { "pmid": "27190012", "title": "The human subthalamic nucleus encodes the subjective value of reward and the cost of effort during decision-making.", "abstract": "Adaptive behaviour entails the capacity to select actions as a function of their energy cost and expected value and the disruption of this faculty is now viewed as a possible cause of the symptoms of Parkinson's disease. Indirect evidence points to the involvement of the subthalamic nucleus-the most common target for deep brain stimulation in Parkinson's disease-in cost-benefit computation. However, this putative function appears at odds with the current view that the subthalamic nucleus is important for adjusting behaviour to conflict. Here we tested these contrasting hypotheses by recording the neuronal activity of the subthalamic nucleus of patients with Parkinson's disease during an effort-based decision task. Local field potentials were recorded from the subthalamic nucleus of 12 patients with advanced Parkinson's disease (mean age 63.8 years ± 6.8; mean disease duration 9.4 years ± 2.5) both OFF and ON levodopa while they had to decide whether to engage in an effort task based on the level of effort required and the value of the reward promised in return. The data were analysed using generalized linear mixed models and cluster-based permutation methods. Behaviourally, the probability of trial acceptance increased with the reward value and decreased with the required effort level. Dopamine replacement therapy increased the rate of acceptance for efforts associated with low rewards. When recording the subthalamic nucleus activity, we found a clear neural response to both reward and effort cues in the 1-10 Hz range. In addition these responses were informative of the subjective value of reward and level of effort rather than their actual quantities, such that they were predictive of the participant's decisions. OFF levodopa, this link with acceptance was weakened. Finally, we found that these responses did not index conflict, as they did not vary as a function of the distance from indifference in the acceptance decision. These findings show that low-frequency neuronal activity in the subthalamic nucleus may encode the information required to make cost-benefit comparisons, rather than signal conflict. The link between these neural responses and behaviour was stronger under dopamine replacement therapy. Our findings are consistent with the view that Parkinson's disease symptoms may be caused by a disruption of the processes involved in balancing the value of actions with their associated effort cost." }, { "pmid": "19439601", "title": "Adaptive gaze control in natural environments.", "abstract": "The sequential acquisition of visual information from scenes is a fundamental component of natural visually guided behavior. However, little is known about the control mechanisms responsible for the eye movement sequences that are executed in the service of such behavior. 
Theoretical attempts to explain gaze patterns have almost exclusively concerned two-dimensional displays that do not accurately reflect the demands of natural behavior in dynamic environments or the importance of the observer's behavioral goals. A difficult problem for all models of gaze control, intrinsic to selective perceptual systems, is how to detect important but unexpected stimuli without consuming excessive computational resources. We show, in a real walking environment, that human gaze patterns are remarkably sensitive to the probabilistic structure of the environment, suggesting that observers handle the uncertainty of the natural world by proactively allocating gaze on the basis of learned statistical structure. This is consistent with the role of reward in the oculomotor neural circuitry and supports a reinforcement learning approach to understanding gaze control in natural environments." }, { "pmid": "25719670", "title": "Human-level control through deep reinforcement learning.", "abstract": "The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks." }, { "pmid": "24395971", "title": "Predicting human visuomotor behaviour in a driving task.", "abstract": "The sequential deployment of gaze to regions of interest is an integral part of human visual function. Owing to its central importance, decades of research have focused on predicting gaze locations, but there has been relatively little formal attempt to predict the temporal aspects of gaze deployment in natural multi-tasking situations. 
We approach this problem by decomposing complex visual behaviour into individual task modules that require independent sources of visual information for control, in order to model human gaze deployment on different task-relevant objects. We introduce a softmax barrier model for gaze selection that uses two key elements: a priority parameter that represents task importance per module, and noise estimates that allow modules to represent uncertainty about the state of task-relevant visual information. Comparisons with human gaze data gathered in a virtual driving environment show that the model closely approximates human performance." }, { "pmid": "25392517", "title": "Attention, reward, and information seeking.", "abstract": "Decision making is thought to be guided by the values of alternative options and involve the accumulation of evidence to an internal bound. However, in natural behavior, evidence accumulation is an active process whereby subjects decide when and which sensory stimulus to sample. These sampling decisions are naturally served by attention and rapid eye movements (saccades), but little is known about how saccades are controlled to guide future actions. Here we review evidence that was discussed at a recent symposium, which suggests that information selection involves basal ganglia and cortical mechanisms and that, across different contexts, it is guided by two central factors: the gains in reward and gains in information (uncertainty reduction) associated with sensory cues." }, { "pmid": "29657116", "title": "Gaze and the Control of Foot Placement When Walking in Natural Terrain.", "abstract": "Human locomotion through natural environments requires precise coordination between the biomechanics of the bipedal gait cycle and the eye movements that gather the information needed to guide foot placement. However, little is known about how the visual and locomotor systems work together to support movement through the world. We developed a system to simultaneously record gaze and full-body kinematics during locomotion over different outdoor terrains. We found that not only do walkers tune their gaze behavior to the specific information needed to traverse paths of varying complexity but that they do so while maintaining a constant temporal look-ahead window across all terrains. This strategy allows walkers to use gaze to tailor their energetically optimal preferred gait cycle to the upcoming path in order to balance between the drive to move efficiently and the need to place the feet in stable locations. Eye movements and locomotion are intimately linked in a way that reflects the integration of energetic costs, environmental uncertainty, and momentary informational demands of the locomotor task. Thus, the relationship between gaze and gait reveals the structure of the sensorimotor decisions that support successful performance in the face of the varying demands of the natural world. VIDEO ABSTRACT." } ]
PLoS Computational Biology
30408028
PMC6224031
10.1371/journal.pcbi.1006504
The Cultural Brain Hypothesis: How culture drives brain expansion, sociality, and life history
In the last few million years, the hominin brain more than tripled in size. Comparisons across evolutionary lineages suggest that this expansion may be part of a broader trend toward larger, more complex brains in many taxa. Efforts to understand the evolutionary forces driving brain expansion have focused on climatic, ecological, and social factors. Here, building on existing research on learning, we analytically and computationally model the predictions of two closely related hypotheses: The Cultural Brain Hypothesis and the Cumulative Cultural Brain Hypothesis. The Cultural Brain Hypothesis posits that brains have been selected for their ability to store and manage information, acquired through asocial or social learning. The model of the Cultural Brain Hypothesis reveals relationships between brain size, group size, innovation, social learning, mating structures, and the length of the juvenile period that are supported by the existing empirical literature. From this model, we derive a set of predictions—the Cumulative Cultural Brain Hypothesis—for the conditions that favor an autocatalytic take-off characteristic of human evolution. This narrow evolutionary pathway, created by cumulative cultural evolution, may help explain the rapid expansion of human brains and other aspects of our species’ life history and psychology.
Related work

Under the broad rubric of the Social Brain or Social Intelligence Hypothesis, different researchers have highlighted different underlying evolutionary mechanisms [35–38, 40]. These models have had differing levels of success in accounting for empirical phenomena, but they highlight the need to be specific in identifying the driving processes that underlie brain evolution in general, and the human brain specifically. From the perspective of the CBH, these models have been limited in their success because they tell only part of the story. Our results suggest that the CBH can account for all the empirical relationships emphasized by the Social Brain Hypotheses, plus other empirical patterns not tackled by the SBH. Moreover, our approach specifies a clear 'take-off' mechanism for human evolution that can account for our oversized crania; our heavy reliance on social learning with sophisticated forms of oblique transmission (and possibly the emergence of adolescence as a human life history stage); the empirically established relationship between group size and toolkit size/complexity [76]; and, of course, our species' extreme reliance on cumulative culture for survival [19].

Our results echo some of the predictions of models of learning and levels of competition. In particular, an early paper by Gavrilets and Vose [40], pitched as a model of Machiavellian intelligence, might equally be viewed as a model of culture, showing a similar co-evolution of brain size, adaptive knowledge, and learning ability. A more recent paper by Gavrilets [38] modeled socio-cognitive competencies in competition between groups, between individuals, and against the environment. That model showed how socio-cognitive competencies were enhanced with weaker individual-level selection, which is echoed in the CBH predictions. Finally, a recent paper by González-Forero and Gardner [39] models the energy tradeoff between brains, bodies, and reproduction under different challenges and costs. This energy model takes a different approach to how the variables and parameters are specified, particularly in tracking ratios of brain size and energy-extraction efficiency, making it difficult to compare directly to the CBH and CCBH. While the mapping is not perfect, these are potentially complementary models, particularly in the overall result that humans emerge where competition is 60% ecological, 30% cooperative, and 10% between groups, with little individual-level competition, reflecting the importance of a high λ and low φ in our model. The authors conclude by noting how their model may intersect with a model of culture like the CBH in how social learning and life history interact with ecological factors and in the relationship between adaptive knowledge and survival.

Our simulation's predictions are consistent with other theoretical work on cultural evolution and culture-gene coevolution. For example, several researchers have argued for a causal effect of sociality on both the complexity and quantity of adaptive knowledge [77, 78]. Similarly, several researchers have argued for the importance of high-fidelity transmission for the rise of cumulative cultural evolution [29, 48, 79].

Cultural variation is common among many animals (e.g., rats, pigeons, chimpanzees, and octopuses), but cumulative cultural evolution is rare [24, 80]. Boyd and Richerson [24] have argued that although learning mechanisms such as local enhancement (often classified as a type of social learning) can maintain cultural variation, observational learning is required for cumulative cultural evolution. Moreover, the fitness valley between culture and cumulative culture grows larger as social learning becomes rarer. Our model supports both arguments by showing that only high-fidelity social learning gives rise to cumulative cultural evolution and that the parameter range for entering this realm expands if social learning is more common (see Fig 15). In our model, cumulative cultural evolution exerts a selection pressure for larger brains that, in turn, allows more culture to accumulate. Prior research has identified many mechanisms, such as teaching, imitation, and theory of mind, underlying high-fidelity transmission and cumulative cultural evolution [18, 28, 81]. Our model reveals that, in general, social learning leads to more adaptive knowledge and larger brain sizes, but shows that asocial learning can also lead to increased brain size. Further, our model indicates that asocial learning may provide a foundation for the evolution of larger-brained social learners. These findings are consistent with Reader et al. [20], who argue for a primate general intelligence that may be a precursor to cultural intelligence and also correlates with absolute brain volume. And, though more speculative, key mutations, such as the recently discovered NOTCH2NL genes [82, 83], may have allowed for the transition from smart asocial learners to larger-brained social learners, as specified in the narrow pathway of the CCBH.

The CBH is consistent with much existing work on comparative cognition across diverse taxa. For example, in a study of 36 species across many taxa, MacLean et al. [84] show that brain size correlates with the ability to monitor food locations when the food was moved by experimenters and to avoid a transparent barrier to acquire snacks, using previously acquired knowledge. The authors also show that brain size predicts dietary breadth, which was itself an independent predictor of performance on these tasks. Brain size did not predict group size across all these species (some of which relied heavily on asocial learning). This alternative pathway of asocial learning is consistent with emerging evidence from other taxa. For example, in mammalian carnivores, brain size predicts greater problem-solving ability, but not necessarily social cognition [85, 86]. These results are precisely what one would expect based on the Cultural Brain Hypothesis: brains have primarily evolved to acquire, store, and manage adaptive knowledge, which can be acquired socially or asocially (or both). The Cultural Brain Hypothesis predicts a strong relationship between brain size and group size among social-learning species, but a weaker or non-existent relationship among species that rely heavily on asocial learning.

Our simulation results are also consistent with empirical data on the relationships between brain size, sociality, culture, and life history among extant primates [e.g., 87] and even cetaceans [34], but suggest a different pathway for humans. In our species, the need to socially acquire, store, and organize an ever-expanding body of cultural know-how resulted in a runaway coevolution of brains, learning, sociality, and life history. Of course, this hypothesis should be kept separate from the CBH: at the point of the human take-off, brain size may have already been pushed up by the coordination demands of large groups, Machiavellian competition, or asocial learning opportunities [19]. For example, Machiavellian competition may have elevated mentalizing abilities in our primate ancestors that were later hijacked, or repurposed, by selective pressure associated with the CCBH to improve social learning by raising transmission fidelity, thereby creating cumulative cultural evolution. Thus, the CBH and CCBH should be evaluated independently.

Note that, in interpreting these results, it is worth remembering that our model assumes a relationship between brain size and adaptive knowledge capacity (but not adaptive knowledge itself); between adaptive knowledge and carrying capacity (but not population size); between brain size and decreased survival; and between adaptive knowledge and increased survival. These tradeoffs and co-evolutionary dynamics help us understand why we see stronger or weaker relationships among social and asocial species.

Synthesis and naming

These ideas, which have been developed concurrently by researchers in different fields, are sufficiently new that naming and labeling conventions have not yet converged. We use Cultural Brain Hypothesis and Cumulative Cultural Brain Hypothesis for the ideas embodied in our formal model. We nevertheless emphasize that we are building directly on a wide variety of prior work that has used various naming conventions, including the Cultural Intelligence Hypothesis [21] and the Vygotskian Intelligence Hypothesis [88]. And, of course, Humphrey [41] originally described the importance of social learning in his paper on the social functions of intellect, though subsequent work has shifted the emphasis away from social learning and toward both Machiavellian strategizing and the management of social relationships. Whiten and Van Schaik [21] first used the term "Cultural Intelligence Hypothesis" to argue that culture may have driven the evolution of brain size in non-human great apes. Later, Herrmann, Call and colleagues [26] used the same term to argue that humans have a suite of cognitive abilities that have allowed for the acquisition of culture. Supporting data for both uses of the term are consistent with the CBH and the CCBH [for a rich set of data and analyses, see 20]. We use two new terms not to neologize, but because, though our approach is clearly related to these other efforts, it contains novel elements and distinctions not clarified or formalized in earlier formulations.
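The modeling assumptions summarized above (brain size caps adaptive knowledge, knowledge raises survival while larger brains lower it, and knowledge is acquired socially or asocially) lend themselves to a toy agent-based sketch. The snippet below is not the published CBH model and does not reproduce its equations or parameter values; every constant, function name, and update rule here is an assumption introduced purely to illustrate how such a co-evolutionary feedback could be simulated.

```python
# Toy sketch of the qualitative assumptions listed above (NOT the published
# CBH model): brain size b sets adaptive-knowledge capacity, knowledge a
# raises survival while a larger brain lowers it, and offspring acquire a
# either socially (copying a knowledgeable model with some fidelity) or
# asocially (discovering it themselves). All constants are arbitrary.
import random

CAPACITY_PER_BRAIN = 1.0   # assumed: knowledge capacity scales linearly with b
BRAIN_COST = 0.04          # assumed: survival cost per unit of brain size
KNOWLEDGE_BENEFIT = 0.06   # assumed: survival benefit per unit of knowledge
FIDELITY = 0.9             # assumed: social-learning transmission fidelity

def survival_prob(b, a):
    """Larger brains reduce survival; more adaptive knowledge increases it."""
    return max(0.05, min(0.95, 0.6 - BRAIN_COST * b + KNOWLEDGE_BENEFIT * a))

def learn(child_b, survivors, social_learner):
    """Knowledge is capped by brain capacity. Social learners copy the most
    knowledgeable surviving model; asocial learners discover on their own."""
    cap = CAPACITY_PER_BRAIN * child_b
    if social_learner and survivors:
        model = max(survivors, key=lambda ind: ind["a"])   # oblique transmission
        return min(cap, FIDELITY * model["a"])
    return min(cap, 0.3 * child_b)                         # asocial discovery

def generation(population, rng):
    survivors = [ind for ind in population
                 if rng.random() < survival_prob(ind["b"], ind["a"])]
    children = []
    for parent in survivors:
        b = max(0.1, parent["b"] + rng.gauss(0.0, 0.1))    # mutate brain size
        social = parent["social"] if rng.random() > 0.05 else not parent["social"]
        children.append({"b": b, "a": learn(b, survivors, social), "social": social})
    return (survivors + children)[:200]                    # crude carrying cap

rng = random.Random(1)
pop = [{"b": 1.0, "a": 0.2, "social": False} for _ in range(50)]
for _ in range(200):
    pop = generation(pop, rng)
    if not pop:
        break
if pop:
    print(round(sum(i["b"] for i in pop) / len(pop), 2),           # mean brain size
          round(sum(i["social"] for i in pop) / len(pop), 2))      # share of social learners
```

Under these toy assumptions, runs in which high-fidelity social learners spread tend to show knowledge ratcheting up toward the brain-imposed capacity and a corresponding drift toward larger brains, which is the qualitative autocatalytic pattern the CCBH describes; none of the quantitative behavior should be read as a result of the published model.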
[ "17823343", "15866152", "23903660", "23290552", "17148287", "26926282", "22513173", "17823346", "24033987", "22383851", "26254515", "19575315", "23804623", "25551149", "29795254", "5938775", "22658335", "15514155", "27791123", "27791123", "8728982", "22230623", "19732937", "9210020", "16890272", "19692401", "18703660", "15795887", "16365310", "20378813", "17908248", "17412589", "7569951", "20392733", "19498164", "22555004", "29856954", "29856955", "24753565", "28479979", "9144286" ]
[ { "pmid": "17823343", "title": "Evolution in the social brain.", "abstract": "The evolution of unusually large brains in some groups of animals, notably primates, has long been a puzzle. Although early explanations tended to emphasize the brain's role in sensory or technical competence (foraging skills, innovations, and way-finding), the balance of evidence now clearly favors the suggestion that it was the computational demands of living in large, complex societies that selected for large brains. However, recent analyses suggest that it may have been the particular demands of the more intense forms of pairbonding that was the critical factor that triggered this evolutionary development. This may explain why primate sociality seems to be so different from that found in most other birds and mammals: Primate sociality is based on bonded relationships of a kind that are found only in pairbonds in other taxa." }, { "pmid": "15866152", "title": "Evolution of the brain and intelligence.", "abstract": "Intelligence has evolved many times independently among vertebrates. Primates, elephants and cetaceans are assumed to be more intelligent than 'lower' mammals, the great apes and humans more than monkeys, and humans more than the great apes. Brain properties assumed to be relevant for intelligence are the (absolute or relative) size of the brain, cortex, prefrontal cortex and degree of encephalization. However, factors that correlate better with intelligence are the number of cortical neurons and conduction velocity, as the basis for information-processing capacity. Humans have more cortical neurons than other mammals, although only marginally more than whales and elephants. The outstanding intelligence of humans appears to result from a combination and enhancement of properties found in non-human primates, such as theory of mind, imitation and language, rather than from 'unique' properties." }, { "pmid": "23903660", "title": "Evolutionary origins of the avian brain.", "abstract": "Features that were once considered exclusive to modern birds, such as feathers and a furcula, are now known to have first appeared in non-avian dinosaurs. However, relatively little is known of the early evolutionary history of the hyperinflated brain that distinguishes birds from other living reptiles and provides the important neurological capablities required by flight. Here we use high-resolution computed tomography to estimate and compare cranial volumes of extant birds, the early avialan Archaeopteryx lithographica, and a number of non-avian maniraptoran dinosaurs that are phylogenetically close to the origins of both Avialae and avian flight. Previous work established that avian cerebral expansion began early in theropod history and that the cranial cavity of Archaeopteryx was volumetrically intermediate between these early forms and modern birds. Our new data indicate that the relative size of the cranial cavity of Archaeopteryx is reflective of a more generalized maniraptoran volumetric signature and in several instances is actually smaller than that of other non-avian dinosaurs. Thus, bird-like encephalization indices evolved multiple times, supporting the conclusion that if Archaeopteryx had the neurological capabilities required of flight, so did at least some other non-avian maniraptorans. This is congruent with recent findings that avialans were not unique among maniraptorans in their ability to fly in some form." 
}, { "pmid": "23290552", "title": "Artificial selection on relative brain size in the guppy reveals costs and benefits of evolving a larger brain.", "abstract": "The large variation in brain size that exists in the animal kingdom has been suggested to have evolved through the balance between selective advantages of greater cognitive ability and the prohibitively high energy demands of a larger brain (the \"expensive-tissue hypothesis\"). Despite over a century of research on the evolution of brain size, empirical support for the trade-off between cognitive ability and energetic costs is based exclusively on correlative evidence, and the theory remains controversial. Here we provide experimental evidence for costs and benefits of increased brain size. We used artificial selection for large and small brain size relative to body size in a live-bearing fish, the guppy (Poecilia reticulata), and found that relative brain size evolved rapidly in response to divergent selection in both sexes. Large-brained females outperformed small-brained females in a numerical learning assay designed to test cognitive ability. Moreover, large-brained lines, especially males, developed smaller guts, as predicted by the expensive-tissue hypothesis, and produced fewer offspring. We propose that the evolution of brain size is mediated by a functional trade-off between increased cognitive ability and reproductive performance and discuss the implications of these findings for vertebrate brain evolution." }, { "pmid": "17148287", "title": "Metabolic costs of brain size evolution.", "abstract": "In the ongoing discussion about brain evolution in vertebrates, the main interest has shifted from theories focusing on energy balance to theories proposing social or ecological benefits of enhanced intellect. With the availability of a wealth of new data on basal metabolic rate (BMR) and brain size and with the aid of reliable techniques of comparative analysis, we are able to show that in fact energetics is an issue in the maintenance of a relatively large brain, and that brain size is positively correlated with the BMR in mammals, controlling for body size effects. We conclude that attempts to explain brain size variation in different taxa must consider the ability to sustain the energy costs alongside cognitive benefits." }, { "pmid": "26926282", "title": "Innovation in the collective brain.", "abstract": "Innovation is often assumed to be the work of a talented few, whose products are passed on to the masses. Here, we argue that innovations are instead an emergent property of our species' cultural learning abilities, applied within our societies and social networks. Our societies and social networks act as collective brains. We outline how many human brains, which evolved primarily for the acquisition of culture, together beget a collective brain. Within these collective brains, the three main sources of innovation are serendipity, recombination and incremental improvement. We argue that rates of innovation are heavily influenced by (i) sociality, (ii) transmission fidelity, and (iii) cultural variance. We discuss some of the forces that affect these factors. These factors can also shape each other. For example, we provide preliminary evidence that transmission efficiency is affected by sociality--languages with more speakers are more efficient. We argue that collective brains can make each of their constituent cultural brains more innovative. 
This perspective sheds light on traits, such as IQ, that have been implicated in innovation. A collective brain perspective can help us understand otherwise puzzling findings in the IQ literature, including group differences, heritability differences and the dramatic increase in IQ test scores over time." }, { "pmid": "22513173", "title": "Explaining brain size variation: from social to cultural brain.", "abstract": "Although the social brain hypothesis has found near-universal acceptance as the best explanation for the evolution of extensive variation in brain size among mammals, it faces two problems. First, it cannot account for grade shifts, where species or complete lineages have a very different brain size than expected based on their social organization. Second, it cannot account for the observation that species with high socio-cognitive abilities also excel in general cognition. These problems may be related. For birds and mammals, we propose to integrate the social brain hypothesis into a broader framework we call cultural intelligence, which stresses the importance of the high costs of brain tissue, general behavioral flexibility and the role of social learning in acquiring cognitive skills." }, { "pmid": "17823346", "title": "Humans have evolved specialized skills of social cognition: the cultural intelligence hypothesis.", "abstract": "Humans have many cognitive skills not possessed by their nearest primate relatives. The cultural intelligence hypothesis argues that this is mainly due to a species-specific set of social-cognitive skills, emerging early in ontogeny, for participating and exchanging knowledge in cultural groups. We tested this hypothesis by giving a comprehensive battery of cognitive tests to large numbers of two of humans' closest primate relatives, chimpanzees and orangutans, as well as to 2.5-year-old human children before literacy and schooling. Supporting the cultural intelligence hypothesis and contradicting the hypothesis that humans simply have more \"general intelligence,\" we found that the children and chimpanzees had very similar cognitive skills for dealing with the physical world but that the children had more sophisticated cognitive skills than either of the ape species for dealing with the social world." }, { "pmid": "24033987", "title": "Human cumulative culture: a comparative perspective.", "abstract": "Many animals exhibit social learning and behavioural traditions, but human culture exhibits unparalleled complexity and diversity, and is unambiguously cumulative in character. These similarities and differences have spawned a debate over whether animal traditions and human culture are reliant on homologous or analogous psychological processes. Human cumulative culture combines high-fidelity transmission of cultural knowledge with beneficial modifications to generate a 'ratcheting' in technological complexity, leading to the development of traits far more complex than one individual could invent alone. Claims have been made for cumulative culture in several species of animals, including chimpanzees, orangutans and New Caledonian crows, but these remain contentious. 
Whilst initial work on the topic of cumulative culture was largely theoretical, employing mathematical methods developed by population biologists, in recent years researchers from a wide range of disciplines, including psychology, biology, economics, biological anthropology, linguistics and archaeology, have turned their attention to the experimental investigation of cumulative culture. We review this literature, highlighting advances made in understanding the underlying processes of cumulative culture and emphasising areas of agreement and disagreement amongst investigators in separate fields." }, { "pmid": "22383851", "title": "Identification of the social and cognitive processes underlying human cumulative culture.", "abstract": "The remarkable ecological and demographic success of humanity is largely attributed to our capacity for cumulative culture, with knowledge and technology accumulating over time, yet the social and cognitive capabilities that have enabled cumulative culture remain unclear. In a comparative study of sequential problem solving, we provided groups of capuchin monkeys, chimpanzees, and children with an experimental puzzlebox that could be solved in three stages to retrieve rewards of increasing desirability. The success of the children, but not of the chimpanzees or capuchins, in reaching higher-level solutions was strongly associated with a package of sociocognitive processes-including teaching through verbal instruction, imitation, and prosociality-that were observed only in the children and covaried with performance." }, { "pmid": "26254515", "title": "A large head circumference is more strongly associated with unplanned cesarean or instrumental delivery and neonatal complications than high birthweight.", "abstract": "OBJECTIVE\nFetal size impacts on perinatal outcomes. We queried whether the fetal head, as the fetal part interfacing with the birth canal, might impact on obstetric outcomes more than birthweight (BW). We examined associations between neonatal head circumference (HC) and delivery mode and risk of perinatal complications as compared to high BW.\n\n\nSTUDY DESIGN\nThis was an electronic medical records-based study of term singleton births (37-42 weeks' gestation) from January 2010 through December 2012 (N = 24,780, 6343 primiparae). We assessed risks of unplanned cesarean or instrumental delivery and maternal and fetal complications in cases with HC or BW ≥95th centile (large HC, high BW) vs those with parameters <95th centile (normal). Newborns were stratified into 4 subgroups: normal HC/normal BW (reference, n = 22,548, primiparae 5862); normal HC/high BW (n = 817, P = 213); large HC/normal BW (n = 878, P = 265); and large HC/high BW (n = 537, P = 103). Multinomial multivariable regression provided adjusted odds ratio (aOR) while controlling for potential confounders.\n\n\nRESULTS\nInfants with HC ≥95th centile (n = 1415) were delivered vaginally in 62% of cases, unplanned cesarean delivery 16%, and instrumental delivery 11.2%; 78.4% of infants with HC <95th centile were delivered vaginally, 7.8% unplanned cesarean, and 6.7% instrumental delivery. Odds ratio (OR) for unplanned cesarean was 2.58 (95% confidence interval [CI], 2.22-3.01) and for instrumental delivery OR was 2.13 (95% CI, 1.78-2.54). 
In contrast, in those with BW ≥95th centile (n = 1354) 80.3% delivered vaginally, 10.2% by unplanned cesarean (OR, 1.2; 95% CI, 1.01-1.44), and 3.4% instrumental delivery (OR, 0.46; 95% CI, 0.34-0.62) compared to infants with BW <95th centile: spontaneous vaginal delivery, 77.3%, unplanned cesarean 8.2%, instrumental 7.1%. Multinomial regression with normal HC/normal BW as reference group showed large HC/normal BW infants were more likely to be delivered by unplanned cesarean (aOR, 3.08; 95% CI, 2.52-3.75) and instrumental delivery (aOR, 3.03; 95% CI, 2.46-3.75). Associations were strengthened in primiparae. Normal HC/high BW was not associated with unplanned cesarean (aOR, 1.18; 95% CI, 0.91-1.54), while large HC/high BW was (aOR, 1.93; 95% CI, 1.47-2.52). Analysis of unplanned cesarean indications showed large HC infants had more failure to progress (27.7% vs 14.1%, P < .001), while smaller HC infants had more fetal distress (23.4% vs 16.9%, P < .05).\n\n\nCONCLUSION\nA large HC is more strongly associated with unplanned cesarean and instrumental delivery than high BW. Prospective studies are needed to test fetal HC as a predictive parameter for prelabor counseling of women with \"big babies.\"" }, { "pmid": "19575315", "title": "The social brain hypothesis and its implications for social evolution.", "abstract": "The social brain hypothesis was proposed as an explanation for the fact that primates have unusually large brains for body size compared to all other vertebrates: Primates evolved large brains to manage their unusually complex social systems. Although this proposal has been generalized to all vertebrate taxa as an explanation for brain evolution, recent analyses suggest that the social brain hypothesis takes a very different form in other mammals and birds than it does in anthropoid primates. In primates, there is a quantitative relationship between brain size and social group size (group size is a monotonic function of brain size), presumably because the cognitive demands of sociality place a constraint on the number of individuals that can be maintained in a coherent group. In other mammals and birds, the relationship is a qualitative one: Large brains are associated with categorical differences in mating system, with species that have pairbonded mating systems having the largest brains. It seems that anthropoid primates may have generalized the bonding processes that characterize monogamous pairbonds to other non-reproductive relationships ('friendships'), thereby giving rise to the quantitative relationship between group size and brain size that we find in this taxon. This raises issues about why bonded relationships are cognitively so demanding (and, indeed, raises questions about what a bonded relationship actually is), and when and why primates undertook this change in social style." }, { "pmid": "23804623", "title": "Processing power limits social group size: computational evidence for the cognitive costs of sociality.", "abstract": "Sociality is primarily a coordination problem. However, the social (or communication) complexity hypothesis suggests that the kinds of information that can be acquired and processed may limit the size and/or complexity of social groups that a species can maintain. We use an agent-based model to test the hypothesis that the complexity of information processed influences the computational demands involved. 
We show that successive increases in the kinds of information processed allow organisms to break through the glass ceilings that otherwise limit the size of social groups: larger groups can only be achieved at the cost of more sophisticated kinds of information processing that are disadvantageous when optimal group size is small. These results simultaneously support both the social brain and the social complexity hypotheses." }, { "pmid": "25551149", "title": "Collective action and the collaborative brain.", "abstract": "Humans are unique both in their cognitive abilities and in the extent of cooperation in large groups of unrelated individuals. How our species evolved high intelligence in spite of various costs of having a large brain is perplexing. Equally puzzling is how our ancestors managed to overcome the collective action problem and evolve strong innate preferences for cooperative behaviour. Here, I theoretically study the evolution of social-cognitive competencies as driven by selection emerging from the need to produce public goods in games against nature or in direct competition with other groups. I use collaborative ability in collective actions as a proxy for social-cognitive competencies. My results suggest that collaborative ability is more likely to evolve first by between-group conflicts and then later be utilized and improved in games against nature. If collaborative abilities remain low, the species is predicted to become genetically dimorphic with a small proportion of individuals contributing to public goods and the rest free-riding. Evolution of collaborative ability creates conditions for the subsequent evolution of collaborative communication and cultural learning." }, { "pmid": "29795254", "title": "Inference of ecological and social drivers of human brain-size evolution.", "abstract": "The human brain is unusually large. It has tripled in size from Australopithecines to modern humans 1 and has become almost six times larger than expected for a placental mammal of human size 2 . Brains incur high metabolic costs 3 and accordingly a long-standing question is why the large human brain has evolved 4 . The leading hypotheses propose benefits of improved cognition for overcoming ecological5-7, social8-10 or cultural11-14 challenges. However, these hypotheses are typically assessed using correlative analyses, and establishing causes for brain-size evolution remains difficult15,16. Here we introduce a metabolic approach that enables causal assessment of social hypotheses for brain-size evolution. Our approach yields quantitative predictions for brain and body size from formalized social hypotheses given empirical estimates of the metabolic costs of the brain. Our model predicts the evolution of adult Homo sapiens-sized brains and bodies when individuals face a combination of 60% ecological, 30% cooperative and 10% between-group competitive challenges, and suggests that between-individual competition has been unimportant for driving human brain-size evolution. Moreover, our model indicates that brain expansion in Homo was driven by ecological rather than social challenges, and was perhaps strongly promoted by culture. Our metabolic approach thus enables causal assessments that refine, refute and unify hypotheses of brain-size evolution." }, { "pmid": "5938775", "title": "Lemur social behavior and primate intelligence.", "abstract": "Our human intellect has resulted from an enormous leap in capacity above the level of monkeys and apes. 
Earlier, though, Old and New World monkeys' intelligence outdistanced that of other mammals, including the prosimian primates. This first great advance in intelligence probably was selected through interspecific competition on the large continents. However, even at this early stage, primate social life provided the evolutionary context of primate intelligence. Two arguments support this conclusion. One is ontogenetic: modern monkeys learn so much of their social behavior, and learn their behavior toward food and toward other species through social example. The second is phylogenetic: some prosimians, the social lemurs, have evolved the usual primate type of society and social learning without the capacity to manipulate objects as monkeys do. It thus seems likely that the rudiments of primate society preceded the growth of primate intelligence, made it possible, and determined its nature." }, { "pmid": "22658335", "title": "Social organization and the evolution of cumulative technology in apes and hominins.", "abstract": "Culturally supported accumulation (or ratcheting) of technological complexity is widely seen as characterizing hominin technology relative to that of the extant great apes, and thus as representing a threshold in cultural evolution. To explain this divide, we modeled the process of cultural accumulation of technology, which we defined as adding new actions to existing ones to create new functional combinations, based on a model for great ape tool use. The model shows that intraspecific and interspecific variation in the presence of simple and cumulative technology among extant orangutans and chimpanzees is largely due to variation in sociability, and hence opportunities for social learning. The model also suggests that the adoption of extensive allomaternal care (cooperative breeding) in early Pleistocene Homo, which led to an increase in sociability and to teaching, and hence increased efficiency of social learning, was enough to facilitate technological ratcheting. Hence, socioecological changes, rather than advances in cognitive abilities, can account for the cumulative cultural changes seen until the origin of the Acheulean. The consequent increase in the reliance on technology could have served as the pacemaker for increased cognitive abilities. Our results also suggest that a more important watershed in cultural evolution was the rise of donated culture (technology or concepts), in which technology or concepts was transferred to naïve individuals, allowing them to skip many learning steps, and specialization arose, which allowed individuals to learn only a subset of the population's skills." }, { "pmid": "15514155", "title": "The evolutionary origin of cooperators and defectors.", "abstract": "Coexistence of cooperators and defectors is common in nature, yet the evolutionary origin of such social diversification is unclear. Many models have been studied on the basis of the assumption that benefits of cooperative acts only accrue to others. Here, we analyze the continuous snowdrift game, in which cooperative investments are costly but yield benefits to others as well as to the cooperator. Adaptive dynamics of investment levels often result in evolutionary diversification from initially uniform populations to a stable state in which cooperators making large investments coexist with defectors who invest very little. Thus, when individuals benefit from their own actions, large asymmetries in cooperative investments can evolve." 
}, { "pmid": "8728982", "title": "Neocortex size and behavioural ecology in primates.", "abstract": "The neocortex is widely held to have been the focus of mammalian brain evolution, but what selection pressures explain the observed diversity in its size and structure? Among primates, comparative studies suggest that neocortical evolution is related to the cognitive demands of sociality, and here I confirm that neocortex size and social group size are positively correlated once phylogenetic associations and overall brain size are taken into account. This association holds within haplorhine but not strepsirhine primates. In addition, the neocortex is larger in diurnal than in nocturnal primates, and among diurnal haplorhines its size is positively correlated with the degree of frugivory. These ecological correlates reflect the diverse sensory-cognitive functions of the neocortex." }, { "pmid": "22230623", "title": "Embracing covariation in brain evolution: large brains, extended development, and flexible primate social systems.", "abstract": "Brain size, body size, developmental length, life span, costs of raising offspring, behavioral complexity, and social structures are correlated in mammals due to intrinsic life-history requirements. Dissecting variation and direction of causation in this web of relationships often draw attention away from the factors that correlate with basic life parameters. We consider the \"social brain hypothesis,\" which postulates that overall brain and the isocortex are selectively enlarged to confer social abilities in primates, as an example of this enterprise and pitfalls. We consider patterns of brain scaling, modularity, flexibility of brain organization, the \"leverage,\" and direction of selection on proposed dimensions. We conclude that the evidence supporting selective changes in isocortex or brain size for the isolated ability to manage social relationships is poor. Strong covariation in size and developmental duration coupled with flexible brains allow organisms to adapt in variable social and ecological environments across the life span and in evolution." }, { "pmid": "19732937", "title": "The Expensive Brain: a framework for explaining evolutionary changes in brain size.", "abstract": "To explain variation in relative brain size among homoiothermic vertebrates, we propose the Expensive Brain hypothesis as a unifying explanatory framework. It claims that the costs of a relatively large brain must be met by any combination of increased total energy turnover or reduced energy allocation to another expensive function such as digestion, locomotion, or production (growth and reproduction). Focusing on the energetic costs of brain enlargement, a comparative analysis of the largest mammalian sample assembled to date shows that an increase in brain size leads to larger neonates among all mammals and a longer period of immaturity among monotokous precocial species, but not among the polytokous altricial ones, who instead reduce their litter size. Relatively large brained mammals, altricial and precocial, also show reduced annual fertility rates as compared to their smaller brained relatives, but allomaternal energy inputs allow some cooperatively breeding altricial carnivores to produce even more offspring in a shorter time despite having a relatively large brain. Thus, the Expensive Brain framework explains why brain size is linked to life history pace in some, but not all mammalian lineages. 
This framework encompasses other hypotheses of energetic constraints on brain size variation and is also compatible with the Brain Malnutrition Risk hypothesis, but the absence of a mammal-wide correlation between brain size and immature period argues against the Needing-to-Learn explanation for slower development among large brained mammals." }, { "pmid": "9210020", "title": "Social pressures have selected for an extended juvenile period in primates.", "abstract": "Primates are highly social animals. As such, they utilize a large repertoire of social skills to manage their complex and dynamic social environments. In order to acquire complex social skills, primates require an extended learning period. Here 1 perform a comparative analysis using independent contrasts to show that social pressures have favored an extension in the proportion of time primates spend as juveniles." }, { "pmid": "16890272", "title": "Evolution of brain size and juvenile periods in primates.", "abstract": "This paper assesses selective pressures that shaped primate life histories, with particular attention to the evolution of longer juvenile periods and increased brain sizes. We evaluate the effects of social complexity (as indexed by group size) and foraging complexity (as indexed by percent fruit and seeds in the diet) on the length of the juvenile period, brain size, and brain ratios (neocortex and executive brain ratios) while controlling for positive covariance among body size, life span, and home range. Results support strong components of diet, life span, and population density acting on juvenile periods and of home range acting on relative brain sizes. Social-complexity arguments for the evolution of primate intelligence are compelling given strong positive correlations between brain ratios and group size while controlling for potential confounding variables. We conclude that both social and ecological components acting at variable intensities in different primate clades are important for understanding variation in primate life histories." }, { "pmid": "19692401", "title": "Cooperative breeding in South American hunter-gatherers.", "abstract": "Evolutionary researchers have recently suggested that pre-modern human societies habitually practised cooperative breeding and that this feature helps explain human prosocial tendencies. Despite circumstantial evidence that post-reproductive females and extra-pair males both provide resources required for successful reproduction by mated pairs, no study has yet provided details about the flow of food resources by different age and sex categories to breeders and offspring, nor documented the ratio of helpers to breeders. Here, we show in two hunter-gatherer societies of South America that each breeding pair with dependent offspring on average obtained help from approximately 1.3 non-reproductive adults. Young married males and unmarried males of all ages were the main food providers, accounting for 93-100% of all excess food production available to breeding pairs and their offspring. Thus, each breeding pair with dependants was provisioned on average by 0.8 adult male helpers. The data provide no support for the hypothesis that post-reproductive females are the main provisioners of younger reproductive-aged kin in hunter-gatherer societies. 
Demographic and food acquisition data show that most breeding pairs can expect food deficits owing to foraging luck, health disabilities and accumulating dependency ratio of offspring in middle age, and that extra-pair provisioning may be essential to the evolved human life history." }, { "pmid": "18703660", "title": "Male dominance rarely skews the frequency distribution of Y chromosome haplotypes in human populations.", "abstract": "A central tenet of evolutionary social science holds that behaviors, such as those associated with social dominance, produce fitness effects that are subject to cultural selection. However, evidence for such selection is inconclusive because it is based on short-term statistical associations between behavior and fertility. Here, we show that the evolutionary effects of dominance at the population level can be detected using noncoding regions of DNA. Highly variable polymorphisms on the nonrecombining portion of the Y chromosome can be used to trace lines of descent from a common male ancestor. Thus, it is possible to test for the persistence of differential fertility among patrilines. We examine haplotype distributions defined by 12 short tandem repeats in a sample of 1269 men from 41 Indonesian communities and test for departures from neutral mutation-drift equilibrium based on the Ewens sampling formula. Our tests reject the neutral model in only 5 communities. Analysis and simulations show that we have sufficient power to detect such departures under varying demographic conditions, including founder effects, bottlenecks, and migration, and at varying levels of social dominance. We conclude that patrilines seldom are dominant for more than a few generations, and thus traits or behaviors that are strictly paternally inherited are unlikely to be under strong cultural selection." }, { "pmid": "15795887", "title": "Cross-cultural estimation of the human generation interval for use in genetics-based population divergence studies.", "abstract": "The length of the human generation interval is a key parameter when using genetics to date population divergence events. However, no consensus exists regarding the generation interval length, and a wide variety of interval lengths have been used in recent studies. This makes comparison between studies difficult, and questions the accuracy of divergence date estimations. Recent genealogy-based research suggests that the male generation interval is substantially longer than the female interval, and that both are greater than the values commonly used in genetics studies. This study evaluates each of these hypotheses in a broader cross-cultural context, using data from both nation states and recent hunter-gatherer societies. Both hypotheses are supported by this study; therefore, revised estimates of male, female, and overall human generation interval lengths are proposed. The nearly universal, cross-cultural nature of the evidence justifies using these proposed estimates in Y-chromosomal, mitochondrial, and autosomal DNA-based population divergence studies." }, { "pmid": "16365310", "title": "Placing confidence limits on the molecular age of the human-chimpanzee divergence.", "abstract": "Molecular clocks have been used to date the divergence of humans and chimpanzees for nearly four decades. Nonetheless, this date and its confidence interval remain to be firmly established. 
In an effort to generate a genomic view of the human-chimpanzee divergence, we have analyzed 167 nuclear protein-coding genes and built a reliable confidence interval around the calculated time by applying a multifactor bootstrap-resampling approach. Bayesian and maximum likelihood analyses of neutral DNA substitutions show that the human-chimpanzee divergence is close to 20% of the ape-Old World monkey (OWM) divergence. Therefore, the generally accepted range of 23.8-35 millions of years ago for the ape-OWM divergence yields a range of 4.98-7.02 millions of years ago for human-chimpanzee divergence. Thus, the older time estimates for the human-chimpanzee divergence, from molecular and paleontological studies, are unlikely to be correct. For a given the ape-OWM divergence time, the 95% confidence interval of the human-chimpanzee divergence ranges from -12% to 19% of the estimated time. Computer simulations suggest that the 95% confidence intervals obtained by using a multifactor bootstrap-resampling approach contain the true value with >95% probability, whether deviations from the molecular clock are random or correlated among lineages. Analyses revealed that the use of amino acid sequence differences is not optimal for dating human-chimpanzee divergence and that the inclusion of additional genes is unlikely to narrow the confidence interval significantly. We conclude that tests of hypotheses about the timing of human-chimpanzee divergence demand more precise fossil-based calibrations." }, { "pmid": "20378813", "title": "Why copy others? Insights from the social learning strategies tournament.", "abstract": "Social learning (learning through observation or interaction with other individuals) is widespread in nature and is central to the remarkable success of humanity, yet it remains unclear why copying is profitable and how to copy most effectively. To address these questions, we organized a computer tournament in which entrants submitted strategies specifying how to use social learning and its asocial alternative (for example, trial-and-error learning) to acquire adaptive behavior in a complex environment. Most current theory predicts the emergence of mixed strategies that rely on some combination of the two types of learning. In the tournament, however, strategies that relied heavily on social learning were found to be remarkably successful, even when asocial information was no more costly than social information. Social learning proved advantageous because individuals frequently demonstrated the highest-payoff behavior in their repertoire, inadvertently filtering information for copiers. The winning strategy (discountmachine) relied nearly exclusively on social learning and weighted information according to the time since acquisition." }, { "pmid": "17908248", "title": "Evidence for coevolution of sociality and relative brain size in three orders of mammals.", "abstract": "As the brain is responsible for managing an individual's behavioral response to its environment, we should expect that large relative brain size is an evolutionary response to cognitively challenging behaviors. The \"social brain hypothesis\" argues that maintaining group cohesion is cognitively demanding as individuals living in groups need to be able to resolve conflicts that impact on their ability to meet resource requirements. If sociality does impose cognitive demands, we expect changes in relative brain size and sociality to be coupled over evolutionary time. 
In this study, we analyze data on sociality and relative brain size for 206 species of ungulates, carnivores, and primates and provide, for the first time, evidence that changes in sociality and relative brain size are closely correlated over evolutionary time for all three mammalian orders. This suggests a process of coevolution and provides support for the social brain theory. However, differences between taxonomic orders in the stability of the transition between small-brained/nonsocial and large-brained/social imply that, although sociality is cognitively demanding, sociality and relative brain size can become decoupled in some cases. Carnivores seem to have been especially prone to this." }, { "pmid": "17412589", "title": "Delayed breeding affects lifetime reproductive success differently in male and female green woodhoopoes.", "abstract": "In cooperatively breeding species, many individuals only start breeding long after reaching physiological maturity [1], and this delay is expected to reduce lifetime reproductive success (LRS) [1-3]. Although many studies have investigated how nonbreeding helpers might mitigate the assumed cost of delayed breeding (reviewed in [3]), few have directly quantified the cost itself [4, 5] (but see [6, 7]). Moreover, although life-history tradeoffs frequently influence the sexes in profoundly different ways [8, 9], it has been generally assumed that males and females are similarly affected by a delayed start to breeding [7]. Here, we use 24 years of data to investigate the sex-specific cost of delayed breeding in the cooperatively breeding green woodhoopoe (Phoeniculus purpureus) and show that age at first breeding is related to LRS differently in males and females. As is traditionally expected, males that started to breed earlier in life had greater LRS than those that started later. However, females showed the opposite pattern: Those individuals that started to breed later in life actually had greater LRS than those that started earlier. In both sexes, the association between age at first breeding and LRS was driven by differences in breeding-career length, rather than per-season productivity. We hypothesize that the high mortality rate of young female breeders, and thus their short breeding careers, is related to a reduced ability to deal with the high physiological costs of reproduction in this species. These results demonstrate the importance of considering sex-specific reproductive costs when estimating the payoffs of life-history decisions and bring into question the long-held assumption that delayed breeding is necessarily costly." }, { "pmid": "7569951", "title": "Plio-Pleistocene African climate.", "abstract": "Marine records of African climate variability document a shift toward more arid conditions after 2.8 million years ago (Ma), evidently resulting from remote forcing by cold North Atlantic sea-surface temperatures associated with the onset of Northern Hemisphere glacial cycles. African climate before 2.8 Ma was regulated by low-latitude insolation forcing of monsoonal climate due to Earth orbital precession. Major steps in the evolution of African hominids and other vertebrates are coincident with shifts to more arid, open conditions near 2.8 Ma, 1.7 Ma, and 1.0 Ma, suggesting that some Pliocene (Plio)-Pleistocene speciation events may have been climatically mediated." 
}, { "pmid": "20392733", "title": "Population size predicts technological complexity in Oceania.", "abstract": "Much human adaptation depends on the gradual accumulation of culturally transmitted knowledge and technology. Recent models of this process predict that large, well-connected populations will have more diverse and complex tool kits than small, isolated populations. While several examples of the loss of technology in small populations are consistent with this prediction, it found no support in two systematic quantitative tests. Both studies were based on data from continental populations in which contact rates were not available, and therefore these studies do not provide a test of the models. Here, we show that in Oceania, around the time of early European contact, islands with small populations had less complicated marine foraging technology. This finding suggests that explanations of existing cultural variation based on optimality models alone are incomplete because demography plays an important role in generating cumulative cultural adaptation. It also indicates that hominin populations with similar cognitive abilities may leave very different archaeological records, a conclusion that has important implications for our understanding of the origin of anatomically modern humans and their evolved psychology." }, { "pmid": "19498164", "title": "Late Pleistocene demography and the appearance of modern human behavior.", "abstract": "The origins of modern human behavior are marked by increased symbolic and technological complexity in the archaeological record. In western Eurasia this transition, the Upper Paleolithic, occurred about 45,000 years ago, but many of its features appear transiently in southern Africa about 45,000 years earlier. We show that demography is a major determinant in the maintenance of cultural complexity and that variation in regional subpopulation density and/or migratory activity results in spatial structuring of cultural skill accumulation. Genetic estimates of regional population size over time show that densities in early Upper Paleolithic Europe were similar to those in sub-Saharan Africa when modern behavior first appeared. Demographic factors can thus explain geographic variation in the timing of the first appearance of modern behavior without invoking increased cognitive capacity." }, { "pmid": "22555004", "title": "Innovativeness, population size and cumulative cultural evolution.", "abstract": "Henrich [Henrich, J., 2004. Demography and cultural evolution: how adaptive cultural processes can produce maladaptive losses-the Tasmanian case. Am. Antiquity 69, 197-214] proposed a model designed to show that larger population size facilitates cumulative cultural evolution toward higher skill levels. In this model, each newborn attempts to imitate the most highly skilled individual of the parental generation by directly-biased social learning, but the skill level he/she acquires deviates probabilistically from that of the exemplar (cultural parent). The probability that the skill level of the imitator exceeds that of the exemplar can be regarded as the innovation rate. After reformulating Henrich's model rigorously, we introduce an overlapping-generations analog based on the Moran model and derive an approximate formula for the expected change per generation of the highest skill level in the population. For large population size, our overlapping-generations model predicts a much larger effect of population size than Henrich's discrete-generations model. 
We then investigate by way of Monte Carlo simulations the case where each newborn chooses as his/her exemplar the most highly skilled individual from among a limited number of acquaintances. When the number of acquaintances is small relative to the population size, we find that a change in the innovation rate contributes more than a proportional change in population size to the cumulative cultural evolution of skill level." }, { "pmid": "29856954", "title": "Human-Specific NOTCH2NL Genes Affect Notch Signaling and Cortical Neurogenesis.", "abstract": "Genetic changes causing brain size expansion in human evolution have remained elusive. Notch signaling is essential for radial glia stem cell proliferation and is a determinant of neuronal number in the mammalian cortex. We find that three paralogs of human-specific NOTCH2NL are highly expressed in radial glia. Functional analysis reveals that different alleles of NOTCH2NL have varying potencies to enhance Notch signaling by interacting directly with NOTCH receptors. Consistent with a role in Notch signaling, NOTCH2NL ectopic expression delays differentiation of neuronal progenitors, while deletion accelerates differentiation into cortical neurons. Furthermore, NOTCH2NL genes provide the breakpoints in 1q21.1 distal deletion/duplication syndrome, where duplications are associated with macrocephaly and autism and deletions with microcephaly and schizophrenia. Thus, the emergence of human-specific NOTCH2NL genes may have contributed to the rapid evolution of the larger human neocortex, accompanied by loss of genomic stability at the 1q21.1 locus and resulting recurrent neurodevelopmental disorders." }, { "pmid": "29856955", "title": "Human-Specific NOTCH2NL Genes Expand Cortical Neurogenesis through Delta/Notch Regulation.", "abstract": "The cerebral cortex underwent rapid expansion and increased complexity during recent hominid evolution. Gene duplications constitute a major evolutionary force, but their impact on human brain development remains unclear. Using tailored RNA sequencing (RNA-seq), we profiled the spatial and temporal expression of hominid-specific duplicated (HS) genes in the human fetal cortex and identified a repertoire of 35 HS genes displaying robust and dynamic patterns during cortical neurogenesis. Among them NOTCH2NL, human-specific paralogs of the NOTCH2 receptor, stood out for their ability to promote cortical progenitor maintenance. NOTCH2NL promote the clonal expansion of human cortical progenitors, ultimately leading to higher neuronal output. At the molecular level, NOTCH2NL function by activating the Notch pathway through inhibition of cis Delta/Notch interactions. Our study uncovers a large repertoire of recently evolved genes active during human corticogenesis and reveals how human-specific NOTCH paralogs may have contributed to the expansion of the human cortex." }, { "pmid": "24753565", "title": "The evolution of self-control.", "abstract": "Cognition presents evolutionary research with one of its greatest challenges. Cognitive evolution has been explained at the proximate level by shifts in absolute and relative brain volume and at the ultimate level by differences in social and dietary complexity. However, no study has integrated the experimental and phylogenetic approach at the scale required to rigorously test these explanations. Instead, previous research has largely relied on various measures of brain size as proxies for cognitive abilities. 
We experimentally evaluated these major evolutionary explanations by quantitatively comparing the cognitive performance of 567 individuals representing 36 species on two problem-solving tasks measuring self-control. Phylogenetic analysis revealed that absolute brain volume best predicted performance across species and accounted for considerably more variance than brain volume controlling for body mass. This result corroborates recent advances in evolutionary neurobiology and illustrates the cognitive consequences of cortical reorganization through increases in brain volume. Within primates, dietary breadth but not social group size was a strong predictor of species differences in self-control. Our results implicate robust evolutionary relationships between dietary breadth, absolute brain volume, and self-control. These findings provide a significant first step toward quantifying the primate cognitive phenome and explaining the process of cognitive evolution." }, { "pmid": "28479979", "title": "The evolution of intelligence in mammalian carnivores.", "abstract": "Although intelligence should theoretically evolve to help animals solve specific types of problems posed by the environment, it is unclear which environmental challenges favour enhanced cognition, or how general intelligence evolves along with domain-specific cognitive abilities. The social intelligence hypothesis posits that big brains and great intelligence have evolved to cope with the labile behaviour of group mates. We have exploited the remarkable convergence in social complexity between cercopithecine primates and spotted hyaenas to test predictions of the social intelligence hypothesis in regard to both cognition and brain size. Behavioural data indicate that there has been considerable convergence between primates and hyaenas with respect to their social cognitive abilities. Moreover, compared with other hyaena species, spotted hyaenas have larger brains and expanded frontal cortex, as predicted by the social intelligence hypothesis. However, broader comparative study suggests that domain-general intelligence in carnivores probably did not evolve in response to selection pressures imposed specifically in the social domain. The cognitive buffer hypothesis, which suggests that general intelligence evolves to help animals cope with novel or changing environments, appears to offer a more robust explanation for general intelligence in carnivores than any hypothesis invoking selection pressures imposed strictly by sociality or foraging demands." }, { "pmid": "9144286", "title": "Body mass and encephalization in Pleistocene Homo.", "abstract": "Many dramatic changes in morphology within the genus Homo have occurred over the past 2 million years or more, including large increases in absolute brain size and decreases in postcanine dental size and skeletal robusticity. Body mass, as the 'size' variable against which other morphological features are usually judged, has been important for assessing these changes. Yet past body mass estimates for Pleistocene Homo have varied greatly, sometimes by as much as 50% for the same individuals. Here we show that two independent methods of body-mass estimation yield concordant results when applied to Pleistocene Homo specimens. On the basis of an analysis of 163 individuals, body mass in Pleistocene Homo averaged significantly (about 10%) larger than a representative sample of living humans. Relative to body mass, brain mass in late archaic H. 
sapiens (Neanderthals) was slightly smaller than in early 'anatomically modern' humans, but the major increase in encephalization within Homo occurred earlier during the Middle Pleistocene (600-150 thousand years before present (kyr BP)), preceded by a long period of stasis extending through the Early Pleistocene (1,800 kyr BP)." } ]
PLoS Computational Biology
30372442
PMC6224120
10.1371/journal.pcbi.1006538
Computational discovery of dynamic cell line specific Boolean networks from multiplex time-course data
Protein signaling networks are static views of dynamic processes where proteins go through many biochemical modifications such as ubiquitination and phosphorylation to propagate signals that regulate cells and can act as feedback systems. Understanding the precise mechanisms underlying protein interactions can elucidate how signaling and cell cycle progression occur within cells in different diseases such as cancer. Large-scale protein signaling networks contain a large number of experimentally verified protein relations but lack the capability to predict the outcomes of the system and therefore cannot be trained with respect to experimental measurements. Boolean Networks (BNs) are a simple yet powerful framework to study and model the dynamics of protein signaling networks. While many BN approaches exist to model biological systems, they focus mainly on system properties, and few exist to integrate experimental data into them. In this work, we show an application of a method conceived to integrate time series phosphoproteomic data into protein signaling networks. We use a large-scale real case study from the HPN-DREAM Breast Cancer challenge. Our efficient and parameter-free method combines logic programming and model-checking to infer a family of BNs from multiple perturbation time series data of four breast cancer cell lines given a prior protein signaling network. Because each predicted BN family is cell line specific, our method highlights commonalities and discrepancies between the four cell lines. Our models have a Root Mean Square Error (RMSE) of 0.31 with respect to the testing data, while the best-performing method in this HPN-DREAM challenge had an RMSE of 0.47. To further validate our results, BNs are compared with the canonical mTOR pathway, showing an AUROC score (0.77) comparable to those of the top-performing HPN-DREAM teams. In addition, our approach can also be used as a complementary method to identify erroneous experiments. These results establish our methodology as an efficient dynamic model discovery method for multiple perturbation time course experimental data of large-scale signaling networks. The software and data are publicly available at https://github.com/misbahch6/caspo-ts.
Related work
Regarding the training of BNs with respect to multiple perturbation datasets, CellNOpt (CNO) [16] assembles BNs from a Prior Knowledge Network (PKN) and phosphoproteomic datasets. Their tool has been implemented using stochastic search algorithms (more precisely, a genetic algorithm) to suggest multiple BNs explaining the data [17]. However, stochastic search methods cannot generate a complete set of solutions, and hence they cannot guarantee a globally optimal solution. In [11, 12], the authors overcome this problem by proposing caspo, an approach based on ASP to infer BNs explaining the underlying protein signaling network. In contrast to the CellNOpt approach, it can generate all possible optimal Boolean models. The authors in [14] presented a framework based on integer linear programming (ILP) to learn the subset of interactions best fitted to the experimental data. Recently, another ILP-based approach was proposed to reconstruct BNs from experimental data; its learning procedure does not require information about the activation/repression properties of the network's edges [13].
The methods mentioned above are very useful but restrict themselves to learning from only two time points, assuming the system has reached an early steady state when the measurements are performed. This assumption prevents us from capturing interesting characteristics like loops [3]. To overcome this issue, the caspo time series (caspo-ts) method was proposed in [8]. This method learns BNs from multiple perturbation phosphoproteomic time series data given a PKN. It is based on ASP, and a model-checking step is needed to detect true positive BNs. The authors tested their approach on synthetic data for a small PKN (≈17 nodes and ≈50 edges) [8]. More recently, an approach based on genetic algorithms was proposed to learn context-specific networks given a PKN and experimental information about stable states and their transitions, but it does not scale well with large networks and finding a global optimum is not guaranteed [18].
Caspo-ts modeling framework
We chose the caspo-ts method [7, 8] for the inference of BNs. This method was tailored to handle protein phosphoproteomic time series data. The input of the method consists of a PKN and normalized phosphoproteomic time series data under different perturbations; the output is a family of BNs whose structure is compatible with the PKN and which can also reproduce the patterns observed in the experimental data. In the following, we develop the main notions of this framework.
Prior knowledge network
The PKN is one input of caspo-ts and is modeled as a labeled (or colored) directed graph (V, E, σ) with V = {v_1, v_2, …, v_n} the set of nodes, E ⊆ V × V the set of directed edges, and σ ⊆ E × {+1, −1} the signs of the edges. The set of nodes is denoted by V = S ∪ I ∪ R ∪ U, where S are stimuli, I are inhibitors, R are readouts, and U are unobserved nodes. Stimuli, inhibitors, readouts, and unobserved nodes are encoded by different colors in the graphs presented in this case study: stimuli are shown in green, inhibitors in red, readouts in blue, and unobserved nodes in white (Fig 1). Moreover, the subsets S, I, R, U are all pairwise disjoint except for I and R, because a protein can be inhibited as well as measured. Stimuli are used to bound the system and also serve as interaction points of the system; these nodes can be experimentally stimulated, e.g. cellular receptors. Inhibitors are those nodes which remain inactive or blocked, by small molecule inhibitors, over all time points of the experiment. Stimuli and inhibitor nodes take Boolean values {0, 1}, representing the fact that the node was stimulated (1) or inhibited (0). Readouts are experimentally measured given a combination of stimuli and inhibitors; they usually take continuous values in [0; 1] after normalization. Unobserved nodes are neither measured nor experimentally manipulated. In this study, we use the term perturbation to refer to the combination of stimuli and inhibitors, similarly to other studies such as [19–21].
Inhibitors are nodes that are kept inactive (blocked) by small-molecule inhibitors over all time points of the experiment. Stimulus and inhibitor nodes take Boolean values $\{0, 1\}$ representing whether the node was stimulated (1) or inhibited (0). Readouts are experimentally measured given a combination of stimuli and inhibitors; they usually take continuous values in $[0;1]$ after normalization. Unobserved nodes are neither measured nor experimentally manipulated. In this study, we use the term perturbation to refer to a combination of stimuli and inhibitors, similarly to other studies such as [19–21].

Fig 1. Caspo-ts workflow. Caspo-ts receives as input a prior knowledge network (PKN) and a discretized phosphoproteomic dataset. In this example the phosphoproteomic data consists of two perturbations involving akt (inhibitor) and hgf (stimulus): 1) akt = 0, hgf = 1 and 2) akt = 1, hgf = 0. A black colored perturbation means the inhibitor or stimulus was perturbed (1), while white represents the opposite (0). Readouts are specified in blue and describe the time series under the given perturbations. Using this input, caspo-ts performs two steps: ASP solving and model checking. In the ASP solving step: (i) a set of BNs compatible with the PKN is generated; (ii) an over-approximation constraint is imposed upon each candidate BN to filter out invalid BNs that do not result in an over-approximation of the reachability between the Boolean states given by the phosphoproteomic dataset; and finally (iii) BNs are optimized using an objective function minimizing the distance to the experimental measures. The ASP step also introduces repairs in data points of the time series that added penalties to the objective function; these corrected traces are given to the model checker. In the model checking step, the exact reachability of all the (binarized and corrected) time series traces in the family of BNs is verified.

Phosphoproteomic time series data

This is the second input of caspo-ts and consists of temporal changes in phosphorylated proteins under a perturbation (Fig 1). Without loss of generality, we assume that the time series data are related to the observation of $m \leq n$ nodes, namely $\{v_1, \ldots, v_m\}$ (so the nodes $\{v_{m+1}, \ldots, v_n\}$ are not observed). The observations consist of normalized continuous values: a time series of $k$ data points is denoted by $T_P = (t_P^1, \ldots, t_P^k)$, where $P \subseteq S \cup I$ is a perturbation and $t_P^j \in [0;1]^m$ for $1 \leq j \leq k$. These data are discretized in order to link them with the subsequent BN discovery steps (ASP solving and model checking).

Boolean Network

A Boolean Network is the output of caspo-ts. A Boolean Network (BN) [22, 23] is defined as a pair $B = (N, F)$, where $N = \{v_1, \ldots, v_n\}$ is a finite set of nodes (or variables/proteins/genes) and $F = \{f_1, \ldots, f_n\}$ is a set of Boolean functions (regulatory functions) $f_i : \mathbb{B}^k \to \mathbb{B}$, with $\mathbb{B} = \{0, 1\}$, describing the evolution of variable $v_i$. A vector (or state) $x = (x_1, \ldots, x_n)$ captures the values of all nodes in $N$ at a time step, where $x_i$ represents the value of node $v_i$ and is either 1 or 0. There are up to $2^n$ possible distinct states at each time step. Next, we define the transition $x \to x'$ between two states of a BN. If there is no update for node $v_i$ then $x_i' = x_i$. If there is an update for node $v_i$ then its state at the next time step is determined by $x_i' = f_i(x_1, \ldots, x_n)$. Note that usually only a subset of the nodes influences the evolution of node $v_i$; these nodes are called the regulatory nodes of $v_i$.
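To make this definition concrete, the following minimal Python sketch represents a BN as a mapping from each node to its regulatory function and applies one synchronous transition step (update schedules are discussed next). This is a toy illustration only, not the caspo-ts implementation; the node names and update rules are hypothetical and loosely mirror the akt/hgf example of Fig 1.

```python
# Toy Boolean Network B = (N, F): each node is mapped to its regulatory
# function f_i, evaluated on the current state. Names and rules are hypothetical.
bn = {
    "hgf":  lambda s: s["hgf"],                    # stimulus, clamped by the perturbation
    "akti": lambda s: s["akti"],                   # inhibitor flag, clamped by the perturbation
    "akt":  lambda s: s["hgf"] and not s["akti"],  # f_akt
    "mtor": lambda s: s["akt"],                    # f_mtor (readout)
}

def synchronous_step(state):
    """One transition x -> x': every node i is updated to x_i' = f_i(x)."""
    return {node: int(f(state)) for node, f in bn.items()}

state = {"hgf": 1, "akti": 0, "akt": 0, "mtor": 0}
print(synchronous_step(state))  # {'hgf': 1, 'akti': 0, 'akt': 1, 'mtor': 0}
```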
The state of each node can be updated in a synchronous (parallel) or asynchronous fashion. In the synchronous update schedule, the states of all nodes are updated at once, while in the asynchronous update schedule, the state of one node is updated at a time. The work presented in this article is independent of the update schedule, hence any number of nodes can be updated at a time.

ASP solving

Given a PKN and a phosphoproteomic dataset, a family of candidate BNs compatible with this PKN is exhaustively enumerated, including the main nodes (the sets $S$, $I$, $R$) of the experimental data. We refer the reader to [12] for a detailed description of a BN's compatibility with a PKN. Afterwards an over-approximation constraint (see Materials and methods) is imposed upon each candidate BN to filter out invalid BNs [8], that is, those that do not result in an over-approximation of the reachability between the Boolean states given by the phosphoproteomic dataset. Finally, an optimization step is performed to select those BNs having a minimal distance between the actual time series $T_P$ and the over-approximated time series $Y_P$. We have adopted the Root Mean Square Error (RMSE) as the objective function:

$$\mathrm{RMSE} = \sqrt{\frac{1}{m \cdot k \cdot |\mathcal{P}|} \sum_{i=1}^{m} \sum_{j=1}^{k} \sum_{P \in \mathcal{P}} \left( (t_P^j)_i - (y_P^j)_i \right)^2} \qquad (1)$$

where $m$ is the number of observed nodes, $k$ is the number of time points, and $\mathcal{P}$ is the set of perturbations (a small numerical sketch of this objective is given below). In addition, the optimization step highlights the data points in the time series which added penalties to the RMSE; such data points are automatically corrected before the model checking step.

All the analyses described in this step are performed using ASP, namely the clingo 4.5.4 solver [15]. This solver guarantees finding optimal solutions, and all BNs output by the ASP solving step are identically optimal. For the HPN-DREAM case study, the full enumeration of optimal BNs yields billions of BNs, and since the next (model checking) step can take days of computation depending on the verified BN, we chose to limit this enumeration to a fixed number of BNs.

Model checking and true positive BNs

The ASP solving step produces a set of optimal BNs that over-approximate the phosphoproteomic time series data. This set of BNs is verified with exact model checking to detect true positive (TP) BNs. TP BNs are guaranteed to reproduce all the (binarized) trajectories under all perturbations, as verified by exact reachability in the BN state graph. For this, we have used computation tree logic (CTL) implemented in NuSMV 2.6.0 [24], a symbolic model checker.

Caspo-ts workflow

The caspo-ts workflow is shown in Fig 1. It consists of two main steps, ASP solving and model checking, as described previously.
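As a concrete illustration of Eq (1), the following Python sketch computes the RMSE between measured traces and over-approximated (Boolean) traces stored as arrays indexed by perturbation, time point and observed node. This is a minimal toy example, not code from caspo-ts; the array shapes and values are hypothetical, and the square-root reading of Eq (1) is assumed.

```python
import numpy as np

def rmse(measured, predicted):
    """Eq (1): RMSE over all perturbations, time points and observed nodes.

    Both arguments have shape (|P|, k, m), i.e. (perturbations, time points,
    observed nodes), with measured values normalized to [0, 1] and predicted
    values taken from the Boolean (over-approximated) traces.
    """
    diff = np.asarray(measured, dtype=float) - np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy data: 2 perturbations, 3 time points, 2 observed nodes (values hypothetical).
measured = np.array([[[0.1, 0.9], [0.2, 0.8], [0.7, 0.3]],
                     [[0.0, 1.0], [0.5, 0.5], [0.9, 0.1]]])
predicted = np.array([[[0, 1], [0, 1], [1, 0]],
                      [[0, 1], [0, 1], [1, 0]]])
print(rmse(measured, predicted))  # ≈ 0.26
```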
[ "18468563", "23803171", "22871648", "23072820", "27484338", "23853063", "23286509", "19997482", "23079107", "27716031", "26901648", "28017544", "5803332", "15932879", "20530665", "23000897", "25692714", "10592173", "14681407", "22096230", "16381927", "27924014", "10592249", "14681466", "14681455", "11911893", "18823568", "14597658", "26058016" ]
[ { "pmid": "18468563", "title": "Logic models of pathway biology.", "abstract": "Living systems seamlessly perform complex information processing and control tasks using combinatorially complex sets of biochemical reactions. Drugs that therapeutically modulate the biological processes of disease are developed using single protein target strategies, often with limited knowledge of the complex underlying role of the targets. Approaches that attempt to consider the combinatorial complexity from the outset might help identify any causal relationships that could lead to undesirable or adverse side effects earlier in the development pipeline. Such approaches, in particular logic methodologies, might also aid pathway selection and multiple target strategies during the drug discovery phase. Here, we describe the use of logic as a tractable and informative approach to modelling biological pathways that can allow us to improve our understanding of the dependencies in complex biological processes." }, { "pmid": "23803171", "title": "Modeling approaches for qualitative and semi-quantitative analysis of cellular signaling networks.", "abstract": "A central goal of systems biology is the construction of predictive models of bio-molecular networks. Cellular networks of moderate size have been modeled successfully in a quantitative way based on differential equations. However, in large-scale networks, knowledge of mechanistic details and kinetic parameters is often too limited to allow for the set-up of predictive quantitative models.Here, we review methodologies for qualitative and semi-quantitative modeling of cellular signal transduction networks. In particular, we focus on three different but related formalisms facilitating modeling of signaling processes with different levels of detail: interaction graphs, logical/Boolean networks, and logic-based ordinary differential equations (ODEs). Albeit the simplest models possible, interaction graphs allow the identification of important network properties such as signaling paths, feedback loops, or global interdependencies. Logical or Boolean models can be derived from interaction graphs by constraining the logical combination of edges. Logical models can be used to study the basic input-output behavior of the system under investigation and to analyze its qualitative dynamic properties by discrete simulations. They also provide a suitable framework to identify proper intervention strategies enforcing or repressing certain behaviors. Finally, as a third formalism, Boolean networks can be transformed into logic-based ODEs enabling studies on essential quantitative and dynamic features of a signaling network, where time and states are continuous.We describe and illustrate key methods and applications of the different modeling formalisms and discuss their relationships. In particular, as one important aspect for model reuse, we will show how these three modeling approaches can be combined to a modeling pipeline (or model hierarchy) allowing one to start with the simplest representation of a signaling network (interaction graph), which can later be refined to logical and eventually to logic-based ODE models. Importantly, systems and network properties determined in the rougher representation are conserved during these transformations." }, { "pmid": "22871648", "title": "State-time spectrum of signal transduction logic models.", "abstract": "Despite the current wealth of high-throughput data, our understanding of signal transduction is still incomplete. 
Mathematical modeling can be a tool to gain an insight into such processes. Detailed biochemical modeling provides deep understanding, but does not scale well above relatively a few proteins. In contrast, logic modeling can be used where the biochemical knowledge of the system is sparse and, because it is parameter free (or, at most, uses relatively a few parameters), it scales well to large networks that can be derived by manual curation or retrieved from public databases. Here, we present an overview of logic modeling formalisms in the context of training logic models to data, and specifically the different approaches to modeling qualitative to quantitative data (state) and dynamics (time) of signal transduction. We use a toy model of signal transduction to illustrate how different logic formalisms (Boolean, fuzzy logic and differential equations) treat state and time. Different formalisms allow for different features of the data to be captured, at the cost of extra requirements in terms of computational power and data quality and quantity. Through this demonstration, the assumptions behind each formalism are discussed, as well as their advantages and disadvantages and possible future developments." }, { "pmid": "23072820", "title": "Logic-based models in systems biology: a predictive and parameter-free network analysis method.", "abstract": "Highly complex molecular networks, which play fundamental roles in almost all cellular processes, are known to be dysregulated in a number of diseases, most notably in cancer. As a consequence, there is a critical need to develop practical methodologies for constructing and analysing molecular networks at a systems level. Mathematical models built with continuous differential equations are an ideal methodology because they can provide a detailed picture of a network's dynamics. To be predictive, however, differential equation models require that numerous parameters be known a priori and this information is almost never available. An alternative dynamical approach is the use of discrete logic-based models that can provide a good approximation of the qualitative behaviour of a biochemical system without the burden of a large parameter space. Despite their advantages, there remains significant resistance to the use of logic-based models in biology. Here, we address some common concerns and provide a brief tutorial on the use of logic-based models, which we motivate with biological examples." }, { "pmid": "27484338", "title": "Boolean network identification from perturbation time series data combining dynamics abstraction and logic programming.", "abstract": "Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time-points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. 
Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data and provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7min of computation. We quantified the gain in our method predictions precision compared to learning approaches based on static data. Finally, as an application, our method proposes erroneous time-points in the time series data with respect to the optimal learned logic models." }, { "pmid": "23853063", "title": "Exhaustively characterizing feasible logic models of a signaling network using Answer Set Programming.", "abstract": "MOTIVATION\nLogic modeling is a useful tool to study signal transduction across multiple pathways. Logic models can be generated by training a network containing the prior knowledge to phospho-proteomics data. The training can be performed using stochastic optimization procedures, but these are unable to guarantee a global optima or to report the complete family of feasible models. This, however, is essential to provide precise insight in the mechanisms underlaying signal transduction and generate reliable predictions.\n\n\nRESULTS\nWe propose the use of Answer Set Programming to explore exhaustively the space of feasible logic models. Toward this end, we have developed caspo, an open-source Python package that provides a powerful platform to learn and characterize logic models by leveraging the rich modeling language and solving technologies of Answer Set Programming. We illustrate the usefulness of caspo by revisiting a model of pro-growth and inflammatory pathways in liver cells. We show that, if experimental error is taken into account, there are thousands (11 700) of models compatible with the data. Despite the large number, we can extract structural features from the models, such as links that are always (or never) present or modules that appear in a mutual exclusive fashion. To further characterize this family of models, we investigate the input-output behavior of the models. We find 91 behaviors across the 11 700 models and we suggest new experiments to discriminate among them. Our results underscore the importance of characterizing in a global and exhaustive manner the family of feasible models, with important implications for experimental design.\n\n\nAVAILABILITY\ncaspo is freely available for download (license GPLv3) and as a web service at http://caspo.genouest.org/.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary materials are available at Bioinformatics online.\n\n\nCONTACT\[email protected]." }, { "pmid": "23286509", "title": "Reconstructing Boolean models of signaling.", "abstract": "Since the first emergence of protein-protein interaction networks more than a decade ago, they have been viewed as static scaffolds of the signaling-regulatory events taking place in cells, and their analysis has been mainly confined to topological aspects. Recently, functional models of these networks have been suggested, ranging from Boolean to constraint-based methods. However, learning such models from large-scale data remains a formidable task, and most modeling approaches rely on extensive human curation. 
Here we provide a generic approach to learning Boolean models automatically from data. We apply our approach to growth and inflammatory signaling systems in humans and show how the learning phase can improve the fit of the model to experimental data, remove spurious interactions, and lead to better understanding of the system at hand." }, { "pmid": "19997482", "title": "Identifying drug effects via pathway alterations using an integer linear programming optimization formulation on phosphoproteomic data.", "abstract": "Understanding the mechanisms of cell function and drug action is a major endeavor in the pharmaceutical industry. Drug effects are governed by the intrinsic properties of the drug (i.e., selectivity and potency) and the specific signaling transduction network of the host (i.e., normal vs. diseased cells). Here, we describe an unbiased, phosphoproteomic-based approach to identify drug effects by monitoring drug-induced topology alterations. With our proposed method, drug effects are investigated under diverse stimulations of the signaling network. Starting with a generic pathway made of logical gates, we build a cell-type specific map by constraining it to fit 13 key phopshoprotein signals under 55 experimental conditions. Fitting is performed via an Integer Linear Program (ILP) formulation and solution by standard ILP solvers; a procedure that drastically outperforms previous fitting schemes. Then, knowing the cell's topology, we monitor the same key phosphoprotein signals under the presence of drug and we re-optimize the specific map to reveal drug-induced topology alterations. To prove our case, we make a topology for the hepatocytic cell-line HepG2 and we evaluate the effects of 4 drugs: 3 selective inhibitors for the Epidermal Growth Factor Receptor (EGFR) and a non-selective drug. We confirm effects easily predictable from the drugs' main target (i.e., EGFR inhibitors blocks the EGFR pathway) but we also uncover unanticipated effects due to either drug promiscuity or the cell's specific topology. An interesting finding is that the selective EGFR inhibitor Gefitinib inhibits signaling downstream the Interleukin-1alpha (IL1alpha) pathway; an effect that cannot be extracted from binding affinity-based approaches. Our method represents an unbiased approach to identify drug effects on small to medium size pathways which is scalable to larger topologies with any type of signaling interventions (small molecules, RNAi, etc). The method can reveal drug effects on pathways, the cornerstone for identifying mechanisms of drug's efficacy." }, { "pmid": "23079107", "title": "CellNOptR: a flexible toolkit to train protein signaling networks to data using multiple logic formalisms.", "abstract": "BACKGROUND\nCells process signals using complex and dynamic networks. Studying how this is performed in a context and cell type specific way is essential to understand signaling both in physiological and diseased situations. Context-specific medium/high throughput proteomic data measured upon perturbation is now relatively easy to obtain but formalisms that can take advantage of these features to build models of signaling are still comparatively scarce.\n\n\nRESULTS\nHere we present CellNOptR, an open-source R software package for building predictive logic models of signaling networks by training networks derived from prior knowledge to signaling (typically phosphoproteomic) data. 
CellNOptR features different logic formalisms, from Boolean models to differential equations, in a common framework. These different logic model representations accommodate state and time values with increasing levels of detail. We provide in addition an interface via Cytoscape (CytoCopteR) to facilitate use and integration with Cytoscape network-based capabilities.\n\n\nCONCLUSIONS\nModels generated with this pipeline have two key features. First, they are constrained by prior knowledge about the network but trained to data. They are therefore context and cell line specific, which results in enhanced predictive and mechanistic insights. Second, they can be built using different logic formalisms depending on the richness of the available data. Models built with CellNOptR are useful tools to understand how signals are processed by cells and how this is altered in disease. They can be used to predict the effect of perturbations (individual or in combinations), and potentially to engineer therapies that have differential effects/side effects depending on the cell type or context." }, { "pmid": "27716031", "title": "Boolean regulatory network reconstruction using literature based knowledge with a genetic algorithm optimization method.", "abstract": "BACKGROUND\nPrior knowledge networks (PKNs) provide a framework for the development of computational biological models, including Boolean models of regulatory networks which are the focus of this work. PKNs are created by a painstaking process of literature curation, and generally describe all relevant regulatory interactions identified using a variety of experimental conditions and systems, such as specific cell types or tissues. Certain of these regulatory interactions may not occur in all biological contexts of interest, and their presence may dramatically change the dynamical behaviour of the resulting computational model, hindering the elucidation of the underlying mechanisms and reducing the usefulness of model predictions. Methods are therefore required to generate optimized contextual network models from generic PKNs.\n\n\nRESULTS\nWe developed a new approach to generate and optimize Boolean networks, based on a given PKN. Using a genetic algorithm, a model network is built as a sub-network of the PKN and trained against experimental data to reproduce the experimentally observed behaviour in terms of attractors and the transitions that occur between them under specific perturbations. The resulting model network is therefore contextualized to the experimental conditions and constitutes a dynamical Boolean model closer to the observed biological process used to train the model than the original PKN. Such a model can then be interrogated to simulate response under perturbation, to detect stable states and their properties, to get insights into the underlying mechanisms and to generate new testable hypotheses.\n\n\nCONCLUSIONS\nGeneric PKNs attempt to synthesize knowledge of all interactions occurring in a biological process of interest, irrespective of the specific biological context. This limits their usefulness as a basis for the development of context-specific, predictive dynamical Boolean models. The optimization method presented in this article produces specific, contextualized models from generic PKNs. These contextualized models have improved utility for hypothesis generation and experimental design. 
The general applicability of this methodological approach makes it suitable for a variety of biological systems and of general interest for biological and medical research. Our method was implemented in the software optimusqual, available online at http://www.vital-it.ch/software/optimusqual/ ." }, { "pmid": "26901648", "title": "Inferring causal molecular networks: empirical assessment through a community-based effort.", "abstract": "It remains unclear whether causal, rather than merely correlational, relationships in molecular networks can be inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge, which focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective, and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess inferred molecular networks in a causal sense." }, { "pmid": "28017544", "title": "Context Specificity in Causal Signaling Networks Revealed by Phosphoprotein Profiling.", "abstract": "Signaling networks downstream of receptor tyrosine kinases are among the most extensively studied biological networks, but new approaches are needed to elucidate causal relationships between network components and understand how such relationships are influenced by biological context and disease. Here, we investigate the context specificity of signaling networks within a causal conceptual framework using reverse-phase protein array time-course assays and network analysis approaches. We focus on a well-defined set of signaling proteins profiled under inhibition with five kinase inhibitors in 32 contexts: four breast cancer cell lines (MCF7, UACC812, BT20, and BT549) under eight stimulus conditions. The data, spanning multiple pathways and comprising ∼70,000 phosphoprotein and ∼260,000 protein measurements, provide a wealth of testable, context-specific hypotheses, several of which we experimentally validate. Furthermore, the data provide a unique resource for computational methods development, permitting empirical assessment of causal network learning in a complex, mammalian setting." }, { "pmid": "15932879", "title": "Mechanism of constitutive phosphoinositide 3-kinase activation by oncogenic mutants of the p85 regulatory subunit.", "abstract": "p85/p110 phosphoinositide 3-kinases regulate multiple cell functions and are frequently mutated in human cancer. The p85 regulatory subunit stabilizes and inhibits the p110 catalytic subunit. The minimal fragment of p85 capable of regulating p110 is the N-terminal SH2 domain linked to the coiled-coil iSH2 domain (referred to as p85ni). We have previously proposed that the conformationally rigid iSH2 domain tethers p110 to p85, facilitating regulatory interactions between p110 and the p85 nSH2 domain. 
In an oncogenic mutant of murine p85, truncation at residue 571 leads to constitutively increased phosphoinositide 3-kinase activity, which has been proposed to result from either loss of an inhibitory Ser-608 autophosphorylation site or altered interactions with cellular regulatory factors. We have examined this mutant (referred to as p65) in vitro and find that p65 binds but does not inhibit p110, leading to constitutive p110 activity. This activated phenotype is observed with recombinant proteins in the absence of cellular factors. Importantly, this effect is also produced by truncating p85ni at residue 571. Thus, the phenotype is not because of loss of the Ser-608 inhibitory autophosphorylation site, which is not present in p85ni. To determine the structural basis for the phenotype of p65, we used a broadly applicable spin label/NMR approach to define the positioning of the nSH2 domain relative to the iSH2 domain. We found that one face of the nSH2 domain packs against the 581-593 region of the iSH2 domain. The loss of this interaction in the truncated p65 would remove the orienting constraints on the nSH2 domain, leading to a loss of p110 regulation by the nSH2. Based on these findings, we propose a general model for oncogenic mutants of p85 and p110 in which disruption of nSH2-p110 regulatory contacts leads to constitutive p110 activity." }, { "pmid": "20530665", "title": "The phosphoinositide 3-kinase regulatory subunit p85alpha can exert tumor suppressor properties through negative regulation of growth factor signaling.", "abstract": "Phosphoinositide 3-kinase (PI3K) plays a critical role in tumorigenesis, and the PI3K p85 regulatory subunit exerts both positive and negative effects on signaling. Expression of Pik3r1, the gene encoding p85, is decreased in human prostate, lung, ovarian, bladder, and liver cancers, consistent with the possibility that p85 has tumor suppressor properties. We tested this hypothesis by studying mice with a liver-specific deletion of the Pik3r1 gene. These mice exhibited enhanced insulin and growth factor signaling and progressive changes in hepatic pathology, leading to the development of aggressive hepatocellular carcinomas with pulmonary metastases. Liver tumors that arose exhibited markedly elevated levels of phosphatidylinositol (3,4,5)-trisphosphate, along with Akt activation and decreased PTEN expression, at both the mRNA and protein levels. Together, these results substantiate the concept that the p85 subunit of PI3K has a tumor-suppressive role in the liver and possibly other tissues." }, { "pmid": "23000897", "title": "Comprehensive molecular portraits of human breast tumours.", "abstract": "We analysed primary breast cancers by genomic DNA copy number arrays, DNA methylation, exome sequencing, messenger RNA arrays, microRNA sequencing and reverse-phase protein arrays. Our ability to integrate information across platforms provided key insights into previously defined gene expression subtypes and demonstrated the existence of four main breast cancer classes when combining data from five platforms, each of which shows significant molecular heterogeneity. Somatic mutations in only three genes (TP53, PIK3CA and GATA3) occurred at >10% incidence across all breast cancers; however, there were numerous subtype-associated and novel gene mutations including the enrichment of specific mutations in GATA3, PIK3CA and MAP3K1 with the luminal A subtype. 
We identified two novel protein-expression-defined subgroups, possibly produced by stromal/microenvironmental elements, and integrated analyses identified specific signalling pathways dominant in each molecular subtype including a HER2/phosphorylated HER2/EGFR/phosphorylated EGFR signature within the HER2-enriched expression subtype. Comparison of basal-like breast tumours with high-grade serous ovarian tumours showed many molecular commonalities, indicating a related aetiology and similar therapeutic opportunities. The biological finding of the four main breast cancer subtypes caused by different subsets of genetic and epigenetic abnormalities raises the hypothesis that much of the clinically observable plasticity and heterogeneity occurs within, and not across, these major biological subtypes of breast cancer." }, { "pmid": "25692714", "title": "The roles of post-translational modifications in the context of protein interaction networks.", "abstract": "Among other effects, post-translational modifications (PTMs) have been shown to exert their function via the modulation of protein-protein interactions. For twelve different main PTM-types and associated subtypes and across 9 diverse species, we investigated whether particular PTM-types are associated with proteins with specific and possibly \"strategic\" placements in the network of all protein interactions by determining informative network-theoretic properties. Proteins undergoing a PTM were observed to engage in more interactions and positioned in more central locations than non-PTM proteins. Among the twelve considered PTM-types, phosphorylated proteins were identified most consistently as being situated in central network locations and with the broadest interaction spectrum to proteins carrying other PTM-types, while glycosylated proteins are preferentially located at the network periphery. For the human interactome, proteins undergoing sumoylation or proteolytic cleavage were found with the most characteristic network properties. PTM-type-specific protein interaction network (PIN) properties can be rationalized with regard to the function of the respective PTM-carrying proteins. For example, glycosylation sites were found enriched in proteins with plasma membrane localizations and transporter or receptor activity, which generally have fewer interacting partners. The involvement in disease processes of human proteins undergoing PTMs was also found associated with characteristic PIN properties. By integrating global protein interaction networks and specific PTMs, our study offers a novel approach to unraveling the role of PTMs in cellular processes." }, { "pmid": "10592173", "title": "KEGG: kyoto encyclopedia of genes and genomes.", "abstract": "KEGG (Kyoto Encyclopedia of Genes and Genomes) is a knowledge base for systematic analysis of gene functions, linking genomic information with higher order functional information. The genomic information is stored in the GENES database, which is a collection of gene catalogs for all the completely sequenced genomes and some partial genomes with up-to-date annotation of gene functions. The higher order functional information is stored in the PATHWAY database, which contains graphical representations of cellular processes, such as metabolism, membrane transport, signal transduction and cell cycle. 
The PATHWAY database is supplemented by a set of ortholog group tables for the information about conserved subpathways (pathway motifs), which are often encoded by positionally coupled genes on the chromosome and which are especially useful in predicting gene functions. A third database in KEGG is LIGAND for the information about chemical compounds, enzyme molecules and enzymatic reactions. KEGG provides Java graphics tools for browsing genome maps, comparing two genome maps and manipulating expression maps, as well as computational tools for sequence comparison, graph comparison and path computation. The KEGG databases are daily updated and made freely available (http://www. genome.ad.jp/kegg/)." }, { "pmid": "14681407", "title": "The Gene Ontology (GO) database and informatics resource.", "abstract": "The Gene Ontology (GO) project (http://www. geneontology.org/) provides structured, controlled vocabularies and classifications that cover several domains of molecular and cellular biology and are freely available for community use in the annotation of genes, gene products and sequences. Many model organism databases and genome annotation groups use the GO and contribute their annotation sets to the GO resource. The GO database integrates the vocabularies and contributed annotations and provides full access to this information in several formats. Members of the GO Consortium continually work collectively, involving outside experts as needed, to expand and update the GO vocabularies. The GO Web resource also provides access to extensive documentation about the GO project and links to applications that use GO data for functional analyses." }, { "pmid": "22096230", "title": "WikiPathways: building research communities on biological pathways.", "abstract": "Here, we describe the development of WikiPathways (http://www.wikipathways.org), a public wiki for pathway curation, since it was first published in 2008. New features are discussed, as well as developments in the community of contributors. New features include a zoomable pathway viewer, support for pathway ontology annotations, the ability to mark pathways as private for a limited time and the availability of stable hyperlinks to pathways and the elements therein. WikiPathways content is freely available in a variety of formats such as the BioPAX standard, and the content is increasingly adopted by external databases and tools, including Wikipedia. A recent development is the use of WikiPathways as a staging ground for centrally curated databases such as Reactome. WikiPathways is seeing steady growth in the number of users, page views and edits for each pathway. To assess whether the community curation experiment can be considered successful, here we analyze the relation between use and contribution, which gives results in line with other wiki projects. The novel use of pathway pages as supplementary material to publications, as well as the addition of tailored content for research domains, is expected to stimulate growth further." }, { "pmid": "16381927", "title": "BioGRID: a general repository for interaction datasets.", "abstract": "Access to unified datasets of protein and genetic interactions is critical for interrogation of gene/protein function and analysis of global network properties. BioGRID is a freely accessible database of physical and genetic interactions available at http://www.thebiogrid.org. 
BioGRID release version 2.0 includes >116 000 interactions from Saccharomyces cerevisiae, Caenorhabditis elegans, Drosophila melanogaster and Homo sapiens. Over 30 000 interactions have recently been added from 5778 sources through exhaustive curation of the Saccharomyces cerevisiae primary literature. An internally hyper-linked web interface allows for rapid search and retrieval of interaction data. Full or user-defined datasets are freely downloadable as tab-delimited text files and PSI-MI XML. Pre-computed graphical layouts of interactions are available in a variety of file formats. User-customized graphs with embedded protein, gene and interaction attributes can be constructed with a visualization system called Osprey that is dynamically linked to the BioGRID." }, { "pmid": "27924014", "title": "The STRING database in 2017: quality-controlled protein-protein association networks, made broadly accessible.", "abstract": "A system-wide understanding of cellular function requires knowledge of all functional interactions between the expressed proteins. The STRING database aims to collect and integrate this information, by consolidating known and predicted protein-protein association data for a large number of organisms. The associations in STRING include direct (physical) interactions, as well as indirect (functional) interactions, as long as both are specific and biologically meaningful. Apart from collecting and reassessing available experimental data on protein-protein interactions, and importing known pathways and protein complexes from curated databases, interaction predictions are derived from the following sources: (i) systematic co-expression analysis, (ii) detection of shared selective signals across genomes, (iii) automated text-mining of the scientific literature and (iv) computational transfer of interaction knowledge between organisms based on gene orthology. In the latest version 10.5 of STRING, the biggest changes are concerned with data dissemination: the web frontend has been completely redesigned to reduce dependency on outdated browser technologies, and the database can now also be queried from inside the popular Cytoscape software framework. Further improvements include automated background analysis of user inputs for functional enrichments, and streamlined download options. The STRING resource is available online, at http://string-db.org/." }, { "pmid": "10592249", "title": "DIP: the database of interacting proteins.", "abstract": "The Database of Interacting Proteins (DIP; http://dip.doe-mbi.ucla.edu) is a database that documents experimentally determined protein-protein interactions. This database is intended to provide the scientific community with a comprehensive and integrated tool for browsing and efficiently extracting information about protein interactions and interaction networks in biological processes. Beyond cataloging details of protein-protein interactions, the DIP is useful for understanding protein function and protein-protein relationships, studying the properties of networks of interacting proteins, benchmarking predictions of protein-protein interactions, and studying the evolution of protein-protein interactions." }, { "pmid": "14681466", "title": "Human protein reference database as a discovery resource for proteomics.", "abstract": "The rapid pace at which genomic and proteomic data is being generated necessitates the development of tools and resources for managing data that allow integration of information from disparate sources. 
The Human Protein Reference Database (http://www.hprd.org) is a web-based resource based on open source technologies for protein information about several aspects of human proteins including protein-protein interactions, post-translational modifications, enzyme-substrate relationships and disease associations. This information was derived manually by a critical reading of the published literature by expert biologists and through bioinformatics analyses of the protein sequence. This database will assist in biomedical discoveries by serving as a resource of genomic and proteomic information and providing an integrated view of sequence, structure, function and protein networks in health and disease." }, { "pmid": "14681455", "title": "IntAct: an open source molecular interaction database.", "abstract": "IntAct provides an open source database and toolkit for the storage, presentation and analysis of protein interactions. The web interface provides both textual and graphical representations of protein interactions, and allows exploring interaction networks in the context of the GO annotations of the interacting proteins. A web service allows direct computational access to retrieve interaction networks in XML format. IntAct currently contains approximately 2200 binary and complex interactions imported from the literature and curated in collaboration with the Swiss-Prot team, making intensive use of controlled vocabularies to ensure data consistency. All IntAct software, data and controlled vocabularies are available at http://www.ebi.ac.uk/intact." }, { "pmid": "11911893", "title": "MINT: a Molecular INTeraction database.", "abstract": "Protein interaction databases represent unique tools to store, in a computer readable form, the protein interaction information disseminated in the scientific literature. Well organized and easily accessible databases permit the easy retrieval and analysis of large interaction data sets. Here we present MINT, a database (http://cbm.bio.uniroma2.it/mint/index.html) designed to store data on functional interactions between proteins. Beyond cataloguing binary complexes, MINT was conceived to store other types of functional interactions, including enzymatic modifications of one of the partners. Release 1.0 of MINT focuses on experimentally verified protein-protein interactions. Both direct and indirect relationships are considered. Furthermore, MINT aims at being exhaustive in the description of the interaction and, whenever available, information about kinetic and binding constants and about the domains participating in the interaction is included in the entry. MINT consists of entries extracted from the scientific literature by expert curators assisted by 'MINT Assistant', a software that targets abstracts containing interaction information and presents them to the curator in a user-friendly format. The interaction data can be easily extracted and viewed graphically through 'MINT Viewer'. Presently MINT contains 4568 interactions, 782 of which are indirect or genetic interactions." }, { "pmid": "18823568", "title": "iRefIndex: a consolidated protein interaction database with provenance.", "abstract": "BACKGROUND\nInteraction data for a given protein may be spread across multiple databases. 
We set out to create a unifying index that would facilitate searching for these data and that would group together redundant interaction data while recording the methods used to perform this grouping.\n\n\nRESULTS\nWe present a method to generate a key for a protein interaction record and a key for each participant protein. These keys may be generated by anyone using only the primary sequence of the proteins, their taxonomy identifiers and the Secure Hash Algorithm. Two interaction records will have identical keys if they refer to the same set of identical protein sequences and taxonomy identifiers. We define records with identical keys as a redundant group. Our method required that we map protein database references found in interaction records to current protein sequence records. Operations performed during this mapping are described by a mapping score that may provide valuable feedback to source interaction databases on problematic references that are malformed, deprecated, ambiguous or unfound. Keys for protein participants allow for retrieval of interaction information independent of the protein references used in the original records.\n\n\nCONCLUSION\nWe have applied our method to protein interaction records from BIND, BioGrid, DIP, HPRD, IntAct, MINT, MPact, MPPI and OPHID. The resulting interaction reference index is provided in PSI-MITAB 2.5 format at http://irefindex.uio.no. This index may form the basis of alternative redundant groupings based on gene identifiers or near sequence identity groupings." }, { "pmid": "14597658", "title": "Cytoscape: a software environment for integrated models of biomolecular interaction networks.", "abstract": "Cytoscape is an open source software project for integrating biomolecular interaction networks with high-throughput expression data and other molecular states into a unified conceptual framework. Although applicable to any system of molecular components and interactions, Cytoscape is most powerful when used in conjunction with large databases of protein-protein, protein-DNA, and genetic interactions that are increasingly available for humans and model organisms. Cytoscape's software Core provides basic functionality to layout and query the network; to visually integrate the network with expression profiles, phenotypes, and other molecular states; and to link the network to databases of functional annotations. The Core is extensible through a straightforward plug-in architecture, allowing rapid development of additional computational analyses and features. Several case studies of Cytoscape plug-ins are surveyed, including a search for interaction pathways correlating with changes in gene expression, a study of protein complexes involved in cellular recovery to DNA damage, inference of a combined physical/functional interaction network for Halobacterium, and an interface to detailed stochastic/kinetic gene regulatory models." }, { "pmid": "26058016", "title": "Discrete Logic Modelling Optimization to Contextualize Prior Knowledge Networks Using PRUNET.", "abstract": "High-throughput technologies have led to the generation of an increasing amount of data in different areas of biology. Datasets capturing the cell's response to its intra- and extra-cellular microenvironment allows such data to be incorporated as signed and directed graphs or influence networks. These prior knowledge networks (PKNs) represent our current knowledge of the causality of cellular signal transduction. 
New signalling data is often examined and interpreted in conjunction with PKNs. However, different biological contexts, such as cell type or disease states, may have distinct variants of signalling pathways, resulting in the misinterpretation of new data. The identification of inconsistencies between measured data and signalling topologies, as well as the training of PKNs using context specific datasets (PKN contextualization), are necessary conditions to construct reliable, predictive models, which are current challenges in the systems biology of cell signalling. Here we present PRUNET, a user-friendly software tool designed to address the contextualization of a PKNs to specific experimental conditions. As the input, the algorithm takes a PKN and the expression profile of two given stable steady states or cellular phenotypes. The PKN is iteratively pruned using an evolutionary algorithm to perform an optimization process. This optimization rests in a match between predicted attractors in a discrete logic model (Boolean) and a Booleanized representation of the phenotypes, within a population of alternative subnetworks that evolves iteratively. We validated the algorithm applying PRUNET to four biological examples and using the resulting contextualized networks to predict missing expression values and to simulate well-characterized perturbations. PRUNET constitutes a tool for the automatic curation of a PKN to make it suitable for describing biological processes under particular experimental conditions. The general applicability of the implemented algorithm makes PRUNET suitable for a variety of biological processes, for instance cellular reprogramming or transitions between healthy and disease states." } ]
Royal Society Open Science
30473797
PMC6227951
10.1098/rsos.171920
Evaluating prose style transfer with the Bible
In the prose style transfer task a system, provided with text input and a target prose style, produces output which preserves the meaning of the input text but alters the style. These systems require parallel data for evaluation of results and usually make use of parallel data for training. Currently, there are few publicly available corpora for this task. In this work, we identify a high-quality source of aligned, stylistically distinct text in different versions of the Bible. We provide a standardized split, into training, development and testing data, of the public domain versions in our corpus. This corpus is highly parallel since many Bible versions are included. Sentences are aligned due to the presence of chapter and verse numbers within all versions of the text. In addition to the corpus, we present the results, as measured by the BLEU and PINC metrics, of several models trained on our data which can serve as baselines for future research. While we present these data as a style transfer corpus, we believe they are of unmatched quality and may be useful for other natural language tasks as well.
2. Related work

2.1. Style transfer datasets

Ours is clearly not the first parallel dataset created for style transfer, and the existing datasets have their own strengths and weaknesses.

One of the most used style transfer corpora was built using articles from Wikipedia and Simple Wikipedia to collect examples of sentences and their simplified versions [25]. These sources were further used with improved sentence alignment techniques to produce another dataset which included a classification of each parallel sentence pair's quality [26]. More recently, word embeddings were used to inform alignment and yet another Wikipedia simplification dataset was released [17].

The use of Wikipedia for text simplification has been criticized generally, and some of the released corpora have been denounced for more specific and severe issues with their sentence alignments [27]. The same paper also proposed the use of the Newsela corpus for text simplification. These data consist of 1130 news articles, each professionally rewritten four times to target different reading levels.

A new dataset targeting another aspect of style, namely formality, should soon be made publicly available [18]. Grammarly's Yahoo Answers Formality Corpus (GYAFC) was constructed by identifying 110 000 informal responses containing between 5 and 25 words on Yahoo Answers. Each of these was then rewritten to use more formal language by Amazon Mechanical Turk workers.

While these datasets can all be viewed as representing different styles, simplicity and formality are only two aspects of a broader definition of style. The first work to attempt this more general problem introduced a corpus of Shakespeare plays and their modern translations for the task [10]. This corpus contains 17 plays and their modernizations from http://nfs.sparknotes.com and versions of eight of these plays from http://enotes.com. While the alignments appear mostly to be of high quality, they were still produced using automatic sentence alignment, which may not perform the task as proficiently as a human. The larger sparknotes dataset contains about 21 000 aligned sentences. This magnitude is sufficient for the statistical machine translation methods used in their paper, but is not comparable to the corpora usually employed by neural machine translation systems.

Most of these existing parallel corpora were not created for the general task of style transfer [17,18,25–27]. A system targeting only one aspect of style may use techniques specific to that task, such as simplification-specific objective functions [19]. So while we can view simplification and formalization as types of style transfer, we cannot always directly apply the same methods to the more general problem.

The Shakespeare dataset [10], which does not focus on only simplicity or formality, still contains only two (or three, if each modern source is considered individually) distinct styles. Standard machine translation corpora, such as WMT-14 (http://www.statmt.org/wmt14/translation-task.html), have parallel data across many languages. A multilingual corpus not only provides the ability to test how generalizable a system is, but can also be leveraged to improve results even when considering a single source and target language [28].

Some of these existing corpora require researchers to request access to the data [18,27]. Access to high-quality data is certainly worth this extra step, but sometimes response times to these requests can be slow.
We experienced a delay of several months between requesting some of these data and receiving them. With the current speed of innovation in machine translation, such delays in access to data may make these corpora less practical than those with free immediate access.

2.2. Machine translation and style transfer models

As mentioned, style transfer has obvious connections to work in traditional language-to-language translation. The Seq2Seq model was first created and used in conjunction with statistical methods to perform machine translation [29]. The model consists of a recurrent neural network acting as an encoder, which produces an embedding of the full sequence of inputs. This sentence embedding is then used by another recurrent neural network which acts as a decoder and produces a sequence corresponding to the original input sequence.

Long short-term memory (LSTM) [30] was introduced to allow a recurrent neural network to store information for an extended period of time. Using a formulation of LSTM which differs slightly from the original [31], the Seq2Seq model was adapted to use multiple LSTM layers on both the encoding and decoding sides [32]. This model demonstrated near state-of-the-art results on the WMT-14 English-to-French translation task. In another modification, an attention mechanism was introduced [33] which again achieved near state-of-the-art results on English-to-French translation.

Other papers proposed versions of the model which could translate into many languages [33,34], including one which could translate from many source languages to many target languages, even if the source–target pair was never seen during training [28]. The authors of this work make no major changes to the Seq2Seq architecture, but introduce special tokens at the start of each input sentence indicating the target language. The model can learn to translate between two languages which never appeared as a pair in the training data, provided it has seen each of the languages paired with others. The idea of using these artificially added tags was applied to related tasks such as targeting the level of formality or the use of active or passive voice in produced translations [6,7] (a small preprocessing sketch of this tagging idea is given at the end of this section).

This work on machine translation is relevant for paraphrase generation framed as a form of monolingual translation. In this context, statistical machine translation techniques were used to generate novel paraphrases [35]. More recently, phrase-based statistical machine translation software was used to create paraphrases [36].

Tasks such as text simplification [5,16] can be viewed as a form of style transfer, but generating paraphrases targeting a more general interpretation of style was first attempted in 2012 [10]. All of these results employed statistical machine translation methods.

The advances mentioned previously in neural machine translation have only started to be applied to general stylistic paraphrasing. One approach proposed the training of a neural model which would ‘disentangle’ stylistic and semantic features, but did not publish any results [37]. Another attempt at text simplification as stylistic paraphrasing is [38]. They generate artificial data and show that the model performs well, but perform no experiments with human-produced corpora. The Shakespeare dataset [10] was recently used with a Seq2Seq model [11]. Their results are impressive, showing improvement over statistical machine translation methods as measured by automatic metrics.
They experiment with many settings, but in order to overcome the small amount of training data, their best models all require the integration of a human-produced dictionary which translates approximately 1500 Shakespearean words to their modern equivalent.
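The tag-based control of output style described above (a special token prepended to each source sentence, as in the multilingual and formality-targeted translation systems cited) can be made concrete with a short sketch. The code below is only an illustration, not the implementation of any of the cited systems: the vocabulary size, layer sizes, special token ids such as TAG_MODERN and TAG_SHAKESPEARE, and the toy batch are all invented for the example, and attention is omitted for brevity.

```python
# Minimal sketch (not the cited authors' implementation) of a Seq2Seq model
# whose source sentence starts with a style tag, in the spirit of the
# target-language tokens described above. All sizes and ids are made up.
import torch
import torch.nn as nn

PAD, BOS, EOS, TAG_MODERN, TAG_SHAKESPEARE = 0, 1, 2, 3, 4  # hypothetical special ids
VOCAB, EMB, HID = 1000, 64, 128


class TaggedSeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB, padding_idx=PAD)
        self.encoder = nn.LSTM(EMB, HID, batch_first=True)
        self.decoder = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src, tgt_in):
        # src already begins with a style tag token, e.g. TAG_MODERN
        _, state = self.encoder(self.embed(src))       # summary of the tagged source
        dec_out, _ = self.decoder(self.embed(tgt_in), state)
        return self.out(dec_out)                       # logits over the vocabulary


def add_style_tag(token_ids, tag):
    """Prepend the target-style tag, mirroring the multilingual-NMT trick."""
    return [tag] + token_ids


if __name__ == "__main__":
    model = TaggedSeq2Seq()
    src = torch.tensor([add_style_tag([10, 11, 12, EOS], TAG_MODERN)])  # 1 x 5 token ids
    tgt_in = torch.tensor([[BOS, 20, 21, 22]])                          # teacher-forcing input
    tgt_out = torch.tensor([[20, 21, 22, EOS]])                         # shifted target
    logits = model(src, tgt_in)
    loss = nn.CrossEntropyLoss(ignore_index=PAD)(logits.transpose(1, 2), tgt_out)
    loss.backward()
    print("toy loss:", float(loss))
```

In practice one such tag would be reserved per target style, and a single encoder-decoder would be trained on all style pairs at once, which is what allows transfer between style pairs never observed together during training.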
[ "22547796", "9377276" ]
[ { "pmid": "22547796", "title": "Quantitative patterns of stylistic influence in the evolution of literature.", "abstract": "Literature is a form of expression whose temporal structure, both in content and style, provides a historical record of the evolution of culture. In this work we take on a quantitative analysis of literary style and conduct the first large-scale temporal stylometric study of literature by using the vast holdings in the Project Gutenberg Digital Library corpus. We find temporal stylistic localization among authors through the analysis of the similarity structure in feature vectors derived from content-free word usage, nonhomogeneous decay rates of stylistic influence, and an accelerating rate of decay of influence among modern authors. Within a given time period we also find evidence for stylistic coherence with a given literary topic, such that writers in different fields adopt different literary styles. This study gives quantitative support to the notion of a literary \"style of a time\" with a strong trend toward increasingly contemporaneous stylistic influence." }, { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." } ]
Royal Society Open Science
30473806
PMC6227980
10.1098/rsos.180296
Autonomously shaping natural climbing plants: a bio-hybrid approach
Plant growth is a self-organized process incorporating distributed sensing, internal communication and morphology dynamics. We develop a distributed mechatronic system that autonomously interacts with natural climbing plants, steering their behaviours to grow user-defined shapes and patterns. Investigating this bio-hybrid system paves the way towards the development of living adaptive structures and grown building components. In this new application domain, challenges include sensing, actuation and the combination of engineering methods and natural plants in the experimental set-up. By triggering behavioural responses in the plants through light spectra stimuli, we use static mechatronic nodes to grow climbing plants in a user-defined pattern at a two-dimensional plane. The experiments show successful growth over periods up to eight weeks. Results of the stimuli-guided experiments are substantially different from the control experiments. Key limitations are the number of repetitions performed and the scale of the systems tested. Recommended future research would investigate the use of similar bio-hybrids to connect construction elements and grow shapes of larger size.
2. Related work

Autonomous bio-hybrid systems have been investigated before, with notable focus on animals. For instance, in a collective bio-hybrid system of cockroaches and robots, the robots were able to influence the social behaviour of the cockroaches [8,9], whose aggregation dynamics were significantly changed. A similar study was done showing robotic influence on crickets [10]. In larger animals, robotic interaction has been demonstrated with young chickens [11]. Forms of communication between robots and animals were further developed in the project ASSISI|bf, specifically in bio-hybrid systems of robots with honeybees [12] and robots with zebrafish [13].

Plants generally grow and move slowly, compared to the quick mobility of animals. Plants adapt their shape during growth according to surrounding environmental conditions [14]. Their slow feedback is a challenge when combined with autonomous mechatronics in a bio-hybrid system. However, as reported in previous works [15–17], robots were able to control the directional growth of plant shoots by introducing changes to their environment. In these previous works, the light stimuli were autonomously used to shape young, unsupported plants with a stem length less than 30 cm (a simplified conceptual sketch of such stimulus-based steering is given after this section). In this paper, a mechanical and mechatronic set-up is engineered to extend this state-of-the-art to much larger plants, demonstrated with supported climbing plants of a stem length over 2 m. This engineered system contributes to open challenges that have been previously identified for bio-hybrids in autonomous construction [18,19].

Coupling natural plants and autonomous mechatronics for long periods of time makes it necessary to monitor and sustain plant health. Following common practices from indoor gardening [1,3,20], many environmental variables can be regulated to ensure the well-being of plants, see §3.3.2. Existing research extends such gardening practices, for example, to grow plants in space [21]. For more typical indoor gardening, a cognitive approach uses Artificial Intelligence techniques to treat each plant individually according to its history [22]. A well-developed outdoor approach is precision farming [2,4,5], where an array of smart sensors equipped with GPS monitors many variables (e.g. soil moisture and pH levels). These readings are combined with local and satellite imagery to ensure the soil and crops receive precisely their needed resources for each location’s optimum health and fertility.

In bioinspired engineering research, a variety of plants’ climbing mechanisms are viewed as applicable to materials and actuation [23,24], including microscopic hairs augmenting the twining mechanism of the common bean plant, used in this paper. Climbing plants that use tendrils have been studied to develop climbing robots [25] and linear actuators [26], and those using microscopic hooks on the surface of leaves have been mimicked for a dry adhesive [27]. Engineered systems incorporating biological tissues or organisms have been developed for several robotic tasks (e.g. locomotion guidance [28]), but to the authors’ knowledge have not yet been developed for climbing, although investigation into the attachment strength of certain climbing plants has progressed [29].

Construction incorporating biological organisms has been pursued by architects and artisans (e.g. with insects, algae or fungi [30–32]). There is also existing research on self-repair of structures using biological organisms, through biocementation and bioremediation of concrete structures with certain types of bacteria [33]. These approaches have not included any autonomous technology to shape the biological material; if shaping is involved, it is enacted manually, often by moulding. Specifically, in growing structures from plants, the literature shows several artisan approaches for shaping, using combinations of mechanical constraint and rearrangement by hand (figure 1). In one approach, woody plants have been manually constrained to a building-sized mechanical frame during early growth, so that the plants’ constrained positions will form part of a façade, or grow to become part of the structural frame [34]. At the size of furniture and small products, this moulding method has been combined with grafting to achieve an agricultural process nearing mass production (see the UK firm ‘Full Grown’ [35]). In another approach, woody plants have been manually constrained to one another or bundled together into small house-sized structural frames (i.e. without a separate mechanical frame), to both keep them in position and create enough stiffness for them to collectively perform a structural role [36]. This approach has sometimes been extended by natural or artificial grafting, see figure 1a, often termed ‘arborsculpture’ by practitioners [37–39]. Another approach, perhaps the most labour intensive, employs weekly or monthly manual rearrangement of new root growth over a time period of several decades. An indigenous technique developed in Meghalaya, India, this approach is used to construct bridges over rivers or canyons, termed ‘Living Root Bridges’ (figure 1b). Because of the heavy moisture and flash flooding in that area, these plant bridges have been found to outlast steel suspension bridges [6]. These various approaches give evidence for the feasibility of plants performing structural and building envelope roles. The labour-intensive processes they rely on for shaping plants could be made more convenient and scalable by introducing automation.

Figure 1. Artisan and indigenous construction of furniture and structures from living plants, using manual methods. (a) ‘Arborsculpture’ chair by Cook and Northey; image used with licence. (b) ‘Living Root Bridge’ made by indigenous construction methods in Meghalaya, India; image used with licence.
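As referenced above, the steering principle behind light-stimulated shaping of plants can be summarized in a deliberately simplified sketch: static nodes take turns emitting an attracting light stimulus so that the growing tip follows a user-defined sequence of waypoints. The code below is a conceptual illustration only and not the controller used in this or the cited works; the node layout, the waypoint representation, the 5 cm arrival radius and the externally supplied tip position are all assumptions made for the example.

```python
# Hypothetical, simplified sketch of stimulus-based steering of a climbing plant.
# NOT the controller from the paper; positions, the arrival threshold and the
# tip-detection input are invented for illustration.
from dataclasses import dataclass
from math import dist
from typing import List, Tuple


@dataclass
class StimulusNode:
    name: str
    position: Tuple[float, float]  # metres, on the 2D growth plane
    light_on: bool = False


def update_stimuli(nodes: List[StimulusNode],
                   pattern: List[int],
                   tip_xy: Tuple[float, float],
                   arrival_radius: float = 0.05) -> int:
    """Switch on only the node for the next unreached waypoint in the pattern.

    `tip_xy` would come from some plant-tip sensing step (e.g. photodiodes or a
    camera); here it is simply passed in. Returns the index of the active node,
    or -1 once the whole pattern has been grown.
    """
    for node in nodes:
        node.light_on = False
    for idx in pattern:
        if dist(tip_xy, nodes[idx].position) > arrival_radius:
            nodes[idx].light_on = True  # attract growth towards this waypoint
            return idx
    return -1  # pattern completed


if __name__ == "__main__":
    nodes = [StimulusNode("A", (0.0, 0.5)), StimulusNode("B", (0.5, 1.0)),
             StimulusNode("C", (1.0, 1.5))]
    zigzag = [0, 1, 2]  # user-defined shape expressed as a node sequence
    active = update_stimuli(nodes, zigzag, tip_xy=(0.02, 0.48))
    print("active node:", nodes[active].name if active >= 0 else "done")
```

In a real set-up the tip position would be estimated by the nodes' own sensing, and the stimulus would be a particular light spectrum rather than a boolean flag.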
[ "18006751", "29188422", "27387948", "19818882", "25678588", "23243099", "27923062", "23048015", "26795156", "24481074", "25638281", "22623496", "23048016", "24399823", "24323503", "27060719", "23281393", "25904120", "27040840" ]
[ { "pmid": "18006751", "title": "Social integration of robots into groups of cockroaches to control self-organized choices.", "abstract": "Collective behavior based on self-organization has been shown in group-living animals from insects to vertebrates. These findings have stimulated engineers to investigate approaches for the coordination of autonomous multirobot systems based on self-organization. In this experimental study, we show collective decision-making by mixed groups of cockroaches and socially integrated autonomous robots, leading to shared shelter selection. Individuals, natural or artificial, are perceived as equivalent, and the collective decision emerges from nonlinear feedbacks based on local interactions. Even when in the minority, robots can modulate the collective decision-making process and produce a global pattern not observed in their absence. These results demonstrate the possibility of using intelligent autonomous devices to study and control self-organized behavioral patterns in group-living animals." }, { "pmid": "29188422", "title": "Climbing plants: attachment adaptations and bioinspired innovations.", "abstract": "Climbing plants have unique adaptations to enable them to compete for sunlight, for which they invest minimal resources for vertical growth. Indeed, their stems bear relatively little weight, as they traverse their host substrates skyward. Climbers possess high tensile strength and flexibility, which allows them to utilize natural and manmade structures for support and growth. The climbing strategies of plants have intrigued scientists for centuries, yet our understanding about biochemical adaptations and their molecular undergirding is still in the early stages of research. Nonetheless, recent discoveries are promising, not only from a basic knowledge perspective, but also for bioinspired product development. Several adaptations, including nanoparticle and adhesive production will be reviewed, as well as practical translation of these adaptations to commercial applications. We will review the botanical literature on the modes of adaptation to climb, as well as specialized organs-and cellular innovations. Finally, recent molecular and biochemical data will be reviewed to assess the future needs and new directions for potential practical products that may be bioinspired by climbing plants." }, { "pmid": "27387948", "title": "Phototactic guidance of a tissue-engineered soft-robotic ray.", "abstract": "Inspired by the relatively simple morphological blueprint provided by batoid fish such as stingrays and skates, we created a biohybrid system that enables an artificial animal--a tissue-engineered ray--to swim and phototactically follow a light cue. By patterning dissociated rat cardiomyocytes on an elastomeric body enclosing a microfabricated gold skeleton, we replicated fish morphology at 1/10 scale and captured basic fin deflection patterns of batoid fish. Optogenetics allows for phototactic guidance, steering, and turning maneuvers. Optical stimulation induced sequential muscle activation via serpentine-patterned muscle circuits, leading to coordinated undulatory swimming. The speed and direction of the ray was controlled by modulating light frequency and by independently eliciting right and left fins, allowing the biohybrid machine to maneuver through an obstacle course." 
}, { "pmid": "19818882", "title": "Quantifying the attachment strength of climbing plants: a new approach.", "abstract": "In order to grow vertically, it is essential for climbing plants to firmly attach to their supporting structures. In climbing plants, different strategies for permanent attachment can be distinguished. Besides twining stems and tendrils, many plants use attachment pads or attachment roots for this purpose. Using a novel custom-built tensile testing setup, the mechanical properties of different permanent attachment structures of self-clinging plant species were investigated, namely the attachment pads of Boston ivy (Parthenocissus tricuspidata), the attachment roots of ivy (Hedera helix) and the clustered attachment roots of trumpet creeper (Campsis radicans). Force-displacement measurements of individual attachment pads as well as of complete structures consisting of several pads or roots were conducted for both natural and laboratory growth conditions. The shapes of the curves and the maximum forces determined indicate clear differences in the detachment process for the different plants and structures tested. Based on these findings, it is argued that the attachment structures are displacement-optimized rather than force-optimized." }, { "pmid": "25678588", "title": "The behavioural ecology of climbing plants.", "abstract": "Climbing plants require an external support to grow vertically and enhance light acquisition. Vines that find a suitable support have greater performance and fitness than those that remain prostrate. Therefore, the location of a suitable support is a key process in the life history of climbing plants. Numerous studies on climbing plant behaviour have elucidated mechanistic details of support searching and attachment. Far fewer studies have addressed the ecological significance of support-finding behaviour and the factors that affect it. Without this knowledge, little progress can be made in the understanding of the evolution of support-finding behaviour in climbers. Here I review studies addressing ecological causes and consequences of support finding and use by climbing plants. I also propose the use of behavioural ecology theoretical frameworks to study climbing plant behaviour. I show how host tree attributes may determine the probability of successful colonization for the different types of climbers, and examine the evidence of environmental and genetic control of circumnutation behaviour and phenotypic responses to support availability. Cases of oriented vine growth towards supports are highlighted. I discuss functional responses of vines to the interplay between herbivory and support availability under different abiotic environments, illustrating with one study case how results comply with a theoretical framework of behavioural ecology originally conceived for animals. I conclude stressing that climbing plants are suitable study subjects for the application of behavioural-ecological theory. Further research under this framework should aim at characterizing the different stages of the support-finding process in terms of their fit with the different climbing modes and environmental settings. In particular, cost-benefit analysis of climbing plant behaviour should be helpful to infer the selective pressures that have operated to shape current climber ecological communities." 
}, { "pmid": "23243099", "title": "Circumnutation as an autonomous root movement in plants.", "abstract": "Although publications on circumnutation of the aerial parts of flowering plants are numerous and primarily from the time between Darwin (1880) and the 1950s, reports on circumnutation of roots are scarce. With the introduction of modern molecular biology techniques, many topics in the plant sciences have been revitalized; among these is root circumnutation. The most important research in this area has been done on Arabidopsis thaliana, which has roots that behave differently from those of many other plants; roots grown on inclined agar dishes produce a pattern of half waves slanted to one side. When grown instead on horizontally set dishes, the roots grow in loops or in tight right-handed coils that are characterized by a tight torsion to the left-hand. The roots of the few plants that differ from Arabidopsis and have been similarly tested do not present such patterns, because even if they circumnutate generally in a helical pattern, they subsequently straighten. Research on plants in space or on a clinostat has allowed the testing of these roots in a habitat lacking gravity or simulating the lack. Recently, molecular geneticists have started to connect various root behaviors to specific groups of genes. For example, anomalies in auxin responses caused by some genes can be overcome by complementation with wild-type genes. Such important studies contribute to understanding the mechanisms of growth and elongation, processes that are only superficially understood." }, { "pmid": "27923062", "title": "The Kinematics of Plant Nutation Reveals a Simple Relation between Curvature and the Orientation of Differential Growth.", "abstract": "Nutation is an oscillatory movement that plants display during their development. Despite its ubiquity among plants movements, the relation between the observed movement and the underlying biological mechanisms remains unclear. Here we show that the kinematics of the full organ in 3D give a simple picture of plant nutation, where the orientation of the curvature along the main axis of the organ aligns with the direction of maximal differential growth. Within this framework we reexamine the validity of widely used experimental measurements of the apical tip as markers of growth dynamics. We show that though this relation is correct under certain conditions, it does not generally hold, and is not sufficient to uncover the specific role of each mechanism. As an example we re-interpret previously measured experimental observations using our model." }, { "pmid": "23048015", "title": "Gravity sensing and signal transduction in vascular plant primary roots.", "abstract": "During gravitropism, the potential energy of gravity is converted into a biochemical signal. How this transfer occurs remains one of the most exciting mysteries in plant cell biology. New experiments are filling in pieces of the puzzle. In this review, we introduce gravitropism and give an overview of what we know about gravity sensing in roots of vascular plants, with special highlight on recent papers. When plant roots are reoriented sideways, amyloplast resedimentation in the columella cells is a key initial step in gravity sensing. This process somehow leads to cytoplasmic alkalinization of these cells followed by relocalization of auxin efflux carriers (PINs). 
This changes auxin flow throughout the root, generating a lateral gradient of auxin across the cap that upon transmission to the elongation zone leads to differential cell elongation and gravibending. We will present the evidence for and against the following players having a role in transferring the signal from the amyloplast sedimentation into the auxin signaling cascade: mechanosensitive ion channels, actin, calcium ions, inositol trisphosphate, receptors/ligands, ARG1/ARL2, spermine, and the TOC complex. We also outline auxin transport and signaling during gravitropism." }, { "pmid": "26795156", "title": "Space, the final frontier: A critical review of recent experiments performed in microgravity.", "abstract": "Space biology provides an opportunity to study plant physiology and development in a unique microgravity environment. Recent space studies with plants have provided interesting insights into plant biology, including discovering that plants can grow seed-to-seed in microgravity, as well as identifying novel responses to light. However, spaceflight experiments are not without their challenges, including limited space, limited access, and stressors such as lack of convection and cosmic radiation. Therefore, it is important to design experiments in a way to maximize the scientific return from research conducted on orbiting platforms such as the International Space Station. Here, we provide a critical review of recent spaceflight experiments and suggest ways in which future experiments can be designed to improve the value and applicability of the results generated. These potential improvements include: utilizing in-flight controls to delineate microgravity versus other spaceflight effects, increasing scientific return via next-generation sequencing technologies, and utilizing multiple genotypes to ensure results are not unique to one genetic background. Space experiments have given us new insights into plant biology. However, to move forward, special care should be given to maximize science return in understanding both microgravity itself as well as the combinatorial effects of living in space." }, { "pmid": "24481074", "title": "Phototropism: growing towards an understanding of plant movement.", "abstract": "Phototropism, or the differential cell elongation exhibited by a plant organ in response to directional blue light, provides the plant with a means to optimize photosynthetic light capture in the aerial portion and water and nutrient acquisition in the roots. Tremendous advances have been made in our understanding of the molecular, biochemical, and cellular bases of phototropism in recent years. Six photoreceptors and their associated signaling pathways have been linked to phototropic responses under various conditions. Primary detection of directional light occurs at the plasma membrane, whereas secondary modulatory photoreception occurs in the cytoplasm and nucleus. Intracellular responses to light cues are processed to regulate cell-to-cell movement of auxin to allow establishment of a trans-organ gradient of the hormone. Photosignaling also impinges on the transcriptional regulation response established as a result of changes in local auxin concentrations. Three additional phytohormone signaling pathways have also been shown to influence phototropic responsiveness, and these pathways are influenced by the photoreceptor signaling as well. 
Here, we will discuss this complex dance of intra- and intercellular responses that are regulated by these many systems to give rise to a rapid and robust adaptation response observed as organ bending." }, { "pmid": "25638281", "title": "Sensing the light environment in plants: photoreceptors and early signaling steps.", "abstract": "Plants must constantly adapt to a changing light environment in order to optimize energy conversion through the process of photosynthesis and to limit photodamage. In addition, plants use light cues for timing of key developmental transitions such as initiation of reproduction (transition to flowering). Plants are equipped with a battery of photoreceptors enabling them to sense a very broad light spectrum spanning from UV-B to far-red wavelength (280-750nm). In this review we briefly describe the different families of plant photosensory receptors and the mechanisms by which they transduce environmental information to influence numerous aspects of plant growth and development throughout their life cycle." }, { "pmid": "22623496", "title": "Photosynthetic quantum yield dynamics: from photosystems to leaves.", "abstract": "The mechanisms underlying the wavelength dependence of the quantum yield for CO(2) fixation (α) and its acclimation to the growth-light spectrum are quantitatively addressed, combining in vivo physiological and in vitro molecular methods. Cucumber (Cucumis sativus) was grown under an artificial sunlight spectrum, shade light spectrum, and blue light, and the quantum yield for photosystem I (PSI) and photosystem II (PSII) electron transport and α were simultaneously measured in vivo at 20 different wavelengths. The wavelength dependence of the photosystem excitation balance was calculated from both these in vivo data and in vitro from the photosystem composition and spectroscopic properties. Measuring wavelengths overexciting PSI produced a higher α for leaves grown under the shade light spectrum (i.e., PSI light), whereas wavelengths overexciting PSII produced a higher α for the sun and blue leaves. The shade spectrum produced the lowest PSI:PSII ratio. The photosystem excitation balance calculated from both in vivo and in vitro data was substantially similar and was shown to determine α at those wavelengths where absorption by carotenoids and nonphotosynthetic pigments is insignificant (i.e., >580 nm). We show quantitatively that leaves acclimate their photosystem composition to their growth light spectrum and how this changes the wavelength dependence of the photosystem excitation balance and quantum yield for CO(2) fixation. This also proves that combining different wavelengths can enhance quantum yields substantially." }, { "pmid": "23048016", "title": "Shoot phototropism in higher plants: new light through old concepts.", "abstract": "Light is a key environmental factor that drives many aspects of plant growth and development. Phototropism, the reorientation of growth toward or away from light, represents one of these important adaptive processes. Modern studies of phototropism began with experiments conducted by Charles Darwin demonstrating that light perception at the shoot apex of grass coleoptiles induces differential elongation in the lower epidermal cells. This led to the discovery of the plant growth hormone auxin and the Cholodny-Went hypothesis attributing differential tropic bending to lateral auxin relocalization. 
In the past two decades, molecular-genetic analyses in the model flowering plant Arabidopsis thaliana has identified the principal photoreceptors for phototropism and their mechanism of activation. In addition, several protein families of auxin transporters have been identified. Despite extensive efforts, however, it still remains unclear as to how photoreceptor activation regulates lateral auxin transport to establish phototropic growth. This review aims to summarize major developments from over the last century and how these advances shape our current understanding of higher plant phototropism. Recent progress in phototropism research and the way in which this research is shedding new light on old concepts, including the Cholodny-Went hypothesis, is also highlighted." }, { "pmid": "24323503", "title": "Shade avoidance: phytochrome signalling and other aboveground neighbour detection cues.", "abstract": "Plants compete with neighbouring vegetation for limited resources. In competition for light, plants adjust their architecture to bring the leaves higher in the vegetation where more light is available than in the lower strata. These architectural responses include accelerated elongation of the hypocotyl, internodes and petioles, upward leaf movement (hyponasty), and reduced shoot branching and are collectively referred to as the shade avoidance syndrome. This review discusses various cues that plants use to detect the presence and proximity of neighbouring competitors and respond to with the shade avoidance syndrome. These cues include light quality and quantity signals, mechanical stimulation, and plant-emitted volatile chemicals. We will outline current knowledge about each of these signals individually and discuss their possible interactions. In conclusion, we will make a case for a whole-plant, ecophysiology approach to identify the relative importance of the various neighbour detection cues and their possible interactions in determining plant performance during competition." }, { "pmid": "27060719", "title": "Photoreceptor crosstalk in shade avoidance.", "abstract": "Plants integrate a variety of environmental signals to determine the threat of competitor shading and use this information to initiate escape responses, termed shade avoidance. Photoreceptor-mediated light signals are central to this process. Encroaching vegetation is sensed as a reduction in the ratio of red to far-red wavebands (R:FR) by phytochromes. Plants shaded within a canopy will also perceive reduced blue light signals and possibly enriched green light through cryptochromes. The detection of canopy gaps may be further facilitated by blue light sensing phototropins and the UV-B photoreceptor, UVR8. Once sunlight has been reached, phytochrome and UVR8 inhibit shade avoidance. Accumulating evidence suggests that multiple plant photoreceptors converge on a shared signalling network to regulate responses to shade." }, { "pmid": "23281393", "title": "Contributions of green light to plant growth and development.", "abstract": "Light passing through or reflected from adjacent foliage provides a developing plant with information that is used to guide specific genetic and physiological processes. Changes in gene expression underlie adaptation to, or avoidance of, the light-compromised environment. These changes have been well described and are mostly attributed to a decrease in the red light to far-red light ratio and/or a reduction in blue light fluence rate. 
In most cases, these changes rely on the integration of red/far-red/blue light signals, leading to changes in phytohormone levels. Studies over the last decade have described distinct responses to green light and/or a shift of the blue-green, or red-green ratio. Responses to green light are typically low-light responses, suggesting that they may contribute to the adaptation to growth under foliage or within close proximity to other plants. This review summarizes the growth responses in artificially manipulated light environments with an emphasis on the roles of green wavebands. The information may be extended to understanding the influence of green light in shade avoidance responses as well as other plant developmental and physiological processes." }, { "pmid": "25904120", "title": "Cytokinin is required for escape but not release from auxin mediated apical dominance.", "abstract": "Auxin produced by an active primary shoot apex is transported down the main stem and inhibits the growth of the axillary buds below it, contributing to apical dominance. Here we use Arabidopsis thaliana cytokinin (CK) biosynthetic and signalling mutants to probe the role of CK in this process. It is well established that bud outgrowth is promoted by CK, and that CK synthesis is inhibited by auxin, leading to the hypothesis that release from apical dominance relies on an increased supply of CK to buds. Our data confirm that decapitation induces the expression of at least one ISOPENTENYLTRANSFERASE (IPT) CK biosynthetic gene in the stem. We further show that transcript abundance of a clade of the CK-responsive type-A Arabidopsis response regulator (ARR) genes increases in buds following CK supply, and that, contrary to their typical action as inhibitors of CK signalling, these genes are required for CK-mediated bud activation. However, analysis of the relevant arr and ipt multiple mutants demonstrates that defects in bud CK response do not affect auxin-mediated bud inhibition, and increased IPT transcript levels are not needed for bud release following decapitation. Instead, our data suggest that CK acts to overcome auxin-mediated bud inhibition, allowing buds to escape apical dominance under favourable conditions, such as high nitrate availability." }, { "pmid": "27040840", "title": "The importance of strigolactone transport regulation for symbiotic signaling and shoot branching.", "abstract": "This review presents the role of strigolactone transport in regulating plant root and shoot architecture, plant-fungal symbiosis and the crosstalk with several phytohormone pathways. The authors, based on their data and recently published results, suggest that long-distance, as well local strigolactone transport might occur in a cell-to-cell manner rather than via the xylem stream. Strigolactones (SLs) are recently characterized carotenoid-derived phytohormones. They play multiple roles in plant architecture and, once exuded from roots to soil, in plant-rhizosphere interactions. Above ground SLs regulate plant developmental processes, such as lateral bud outgrowth, internode elongation and stem secondary growth. Below ground, SLs are involved in lateral root initiation, main root elongation and the establishment of the plant-fungal symbiosis known as mycorrhiza. Much has been discovered on players and patterns of SL biosynthesis and signaling and shown to be largely conserved among different plant species, however little is known about SL distribution in plants and its transport from the root to the soil. 
At present, the only characterized SL transporters are the ABCG protein PLEIOTROPIC DRUG RESISTANCE 1 from Petunia axillaris (PDR1) and, in less detail, its close homologue from Nicotiana tabacum PLEIOTROPIC DRUG RESISTANCE 6 (PDR6). PDR1 is a plasma membrane-localized SL cellular exporter, expressed in root cortex and shoot axils. Its expression level is regulated by its own substrate, but also by the phytohormone auxin, soil nutrient conditions (mainly phosphate availability) and mycorrhization levels. Hence, PDR1 integrates information from nutrient availability and hormonal signaling, thus synchronizing plant growth with nutrient uptake. In this review we discuss the effects of PDR1 de-regulation on plant development and mycorrhization, the possible cross-talk between SLs and other phytohormone transporters and finally the need for SL transporters in different plant species." } ]
Frontiers in Neuroscience
30459544
PMC6232272
10.3389/fnins.2018.00781
Automatic Human Sleep Stage Scoring Using Deep Neural Networks
The classification of sleep stages is the first and an important step in the quantitative analysis of polysomnographic recordings. Sleep stage scoring relies heavily on visual pattern recognition by a human expert and is time consuming and subjective. Thus, there is a need for automatic classification. In this work we developed machine learning algorithms for sleep classification: random forest (RF) classification based on features and artificial neural networks (ANNs) working both with features and raw data. We tested our methods in healthy subjects and in patients. Most algorithms yielded good results comparable to human interrater agreement. Our study revealed that deep neural networks (DNNs) working with raw data performed better than feature-based methods. We also demonstrated that taking the local temporal structure of sleep into account a priori is important. Our results demonstrate the utility of neural network architectures for the classification of sleep.
Related Work

Martin et al. (1972) applied a simple decision tree using EEG and EOG data for scoring. A decision-tree-like algorithm was also used by Louis et al. (2004). Stanus et al. (1987) developed and compared two methods for automatic sleep scoring: one based on an autoregressive model and another one based on spectral bands and Bayesian decision theory. Both methods used one EEG, two EOG and an EMG channel. The EOG was needed to detect eye movements and the EMG to assess the muscle tone. Fell et al. (1996) examined automatic sleep scoring using additional non-linear features (correlation dimension, Kolmogorov entropy, Lyapunov exponent) and concluded that such measures carry additional information not captured with spectral features. Park et al. (2000) built a hybrid rule- and case-based system and reported high agreement with human scorers. They also claimed that such a system works well to score patients with sleep disorders.

One of the commercially successful attempts to perform automatic scoring evolved from the SIESTA project (Klosh et al., 2001). The corresponding software of the SIESTA group was named Somnolyzer 24x7. It includes a quality check of the data based on histograms. The software extracts features based on a single EEG channel, two EOG channels and one EMG channel and predicts sleep stages using a decision tree (Anderer et al., 2005). The software was validated on a database containing 90 patients with various sleep disorders and ∼200 controls. Several experts scored sleep in the database and Somnolyzer 24x7 showed good agreement with consensus scoring (Anderer et al., 2005).

Newer and more sophisticated approaches were based on artificial neural networks (ANNs). Schaltenbrand et al. (1993), for example, applied ANNs for sleep stage classification using 17 features extracted from PSG signals and reported an accuracy close to 90%. Pardey et al. (1996) combined ANNs with fuzzy logic and Längkvist et al. (2012) applied restricted Boltzmann machines to solve the sleep classification problem, to mention just a few approaches.

The methods mentioned above require carefully engineered features (a toy sketch of such a feature-based pipeline is given after this section). It is possible to avoid this step using novel deep learning methods. ANNs in the form of convolutional neural networks (CNNs) were recently applied to the raw sleep EEG by Tsinalis et al. (2016). CNNs are especially promising because they can learn complex patterns and ‘look’ at the data in a similar way to a ‘real brain’ (Fukushima and Miyake, 1982). However, working with raw data requires a huge amount of training data and computational resources.

Sequences of epochs are considered by a human expert according to the scoring manuals. Therefore, we assume that learning local temporal structure is an important aspect in automatic sleep scoring. Temporal patterns have previously been addressed by applying a hidden Markov model (HMM) (Doroshenkov et al., 2007; Pan et al., 2012). In the last few years, recurrent neural networks (RNNs) have demonstrated better performance than “classical” machine learning methods on datasets with a temporal structure (Mikolov et al., 2010; Graves et al., 2013; Karpathy and Fei-Fei, 2015). One of the most common and well-studied RNNs is the Long Short-Term Memory (LSTM) neural network (Hochreiter and Schmidhuber, 1997). Such networks have been successfully applied to EEG data in general (Davidson et al., 2006) as well as to sleep data (Supratak et al., 2017).

Artificial neural networks using raw data have shown performance comparable to that of the best ANNs using engineered features and the best classical machine learning methods (Davidson et al., 2006; Tsinalis et al., 2016; Supratak et al., 2017; Chambon et al., 2018; Phan et al., 2018; Sors et al., 2018). See Section “Discussion” for more details.

The above-mentioned approaches were based on supervised learning. There have also been several attempts to perform unsupervised automatic sleep scoring in humans (Gath and Geva, 1989; Agarwal and Gotman, 2001; Grube et al., 2002) and in animals (Sunagawa et al., 2013; Libourel et al., 2015).
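As noted above, the feature-based pipelines reviewed here compute hand-crafted descriptors (typically spectral band powers) per 30-s epoch and feed them to a classifier such as a random forest. The toy sketch below illustrates only that general pipeline; it is not the method evaluated in this study. The synthetic signals, the assumed 128 Hz sampling rate, the band edges and the random labels are placeholders chosen for the example.

```python
# Toy sketch of a feature-based sleep scorer of the kind reviewed above:
# band-power features from single-channel EEG epochs fed to a random forest.
# Synthetic data; the 128 Hz / 30 s epoching and the band edges are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

FS = 128                 # sampling rate in Hz (assumed)
EPOCH = 30 * FS          # 30-s epochs, as in standard scoring manuals
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "sigma": (12, 16), "beta": (16, 30)}


def band_power_features(epoch: np.ndarray) -> np.ndarray:
    """Relative power in classical EEG bands for one 30-s epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=4 * FS)
    total = np.trapz(psd, freqs)
    feats = [np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                      freqs[(freqs >= lo) & (freqs < hi)]) / total
             for lo, hi in BANDS.values()]
    return np.array(feats)


# Synthetic stand-in for labelled PSG epochs (5 stages: W, N1, N2, N3, REM).
rng = np.random.default_rng(0)
n_epochs = 500
X = np.stack([band_power_features(rng.standard_normal(EPOCH)) for _ in range(n_epochs)])
y = rng.integers(0, 5, size=n_epochs)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("toy accuracy (chance level is about 0.2):", clf.score(X_te, y_te))
```

A raw-data approach of the kind discussed above would instead feed the epoch samples to a CNN, optionally followed by an LSTM over neighbouring epochs to capture the local temporal structure of sleep.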
[ "11759922", "15838184", "24759284", "29641380", "19250176", "14996037", "8647043", "4348642", "9377276", "4182894", "13790851", "11446210", "4192812", "25325478", "14757347", "29516562", "4111497", "16229049", "17468046", "29391413", "22908930", "9065871", "11017725", "23319911", "23319910", "8477587", "8853216", "4105870", "28678710", "2435525", "29029305", "23621645", "28678710", "25377287", "28429067", "18282838", "29351821", "27070243" ]
[ { "pmid": "11759922", "title": "Computer-assisted sleep staging.", "abstract": "To address the subjectivity in manual scoring of polysomnograms, a computer-assisted sleep staging method is presented in this paper. The method uses the principles of segmentation and self-organization (clustering) based on primitive sleep-related features to find the pseudonatural stages present in the record. Sample epochs of these natural stages are presented to the user, who can classify them according to the Rechtschaffen and Kales (RK) or any other standard. The method then learns from these samples to complete the classification. This step allows the active participation of the operator in order to customize the staging to his/her preferences. The method was developed and tested using 12 records of varying types (normal, abnormal, male, female, varying age groups). Results showed an overall concurrence of 80.6% with manual scoring of 20-s epochs according to RK standard. The greatest amount of errors occurred in the identification of the highly transitional Stage 1, 54% of which was misclassified into neighboring stages 2 or Wake." }, { "pmid": "15838184", "title": "An E-health solution for automatic sleep classification according to Rechtschaffen and Kales: validation study of the Somnolyzer 24 x 7 utilizing the Siesta database.", "abstract": "To date, the only standard for the classification of sleep-EEG recordings that has found worldwide acceptance are the rules published in 1968 by Rechtschaffen and Kales. Even though several attempts have been made to automate the classification process, so far no method has been published that has proven its validity in a study including a sufficiently large number of controls and patients of all adult age ranges. The present paper describes the development and optimization of an automatic classification system that is based on one central EEG channel, two EOG channels and one chin EMG channel. It adheres to the decision rules for visual scoring as closely as possible and includes a structured quality control procedure by a human expert. The final system (Somnolyzer 24 x 7) consists of a raw data quality check, a feature extraction algorithm (density and intensity of sleep/wake-related patterns such as sleep spindles, delta waves, SEMs and REMs), a feature matrix plausibility check, a classifier designed as an expert system, a rule-based smoothing procedure for the start and the end of stages REM, and finally a statistical comparison to age- and sex-matched normal healthy controls (Siesta Spot Report). The expert system considers different prior probabilities of stage changes depending on the preceding sleep stage, the occurrence of a movement arousal and the position of the epoch within the NREM/REM sleep cycles. Moreover, results obtained with and without using the chin EMG signal are combined. The Siesta polysomnographic database (590 recordings in both normal healthy subjects aged 20-95 years and patients suffering from organic or nonorganic sleep disorders) was split into two halves, which were randomly assigned to a training and a validation set, respectively. The final validation revealed an overall epoch-by-epoch agreement of 80% (Cohen's kappa: 0.72) between the Somnolyzer 24 x 7 and the human expert scoring, as compared with an inter-rater reliability of 77% (Cohen's kappa: 0.68) between two human experts scoring the same dataset. 
Two Somnolyzer 24 x 7 analyses (including a structured quality control by two human experts) revealed an inter-rater reliability close to 1 (Cohen's kappa: 0.991), which confirmed that the variability induced by the quality control procedure, whereby approximately 1% of the epochs (in 9.5% of the recordings) are changed, can definitely be neglected. Thus, the validation study proved the high reliability and validity of the Somnolyzer 24 x 7 and demonstrated its applicability in clinical routine and sleep studies." }, { "pmid": "24759284", "title": "A review of multitaper spectral analysis.", "abstract": "Nonparametric spectral estimation is a widely used technique in many applications ranging from radar and seismic data analysis to electroencephalography (EEG) and speech processing. Among the techniques that are used to estimate the spectral representation of a system based on finite observations, multitaper spectral estimation has many important optimality properties, but is not as widely used as it possibly could be. We give a brief overview of the standard nonparametric spectral estimation theory and the multitaper spectral estimation, and give two examples from EEG analyses of anesthesia and sleep." }, { "pmid": "29641380", "title": "A Deep Learning Architecture for Temporal Sleep Stage Classification Using Multivariate and Multimodal Time Series.", "abstract": "Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns to each 30 s of the signal of a sleep stage, based on the visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decisions trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields the state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance measured with balanced accuracy is to use 6 EEG with 2 EOG (left and right) and 3 EMG chin channels. Also exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels are available. As sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver the state-of-the-art classification performance with a small computational cost." }, { "pmid": "19250176", "title": "Interrater reliability for sleep scoring according to the Rechtschaffen & Kales and the new AASM standard.", "abstract": "Interrater variability of sleep stage scorings has an essential impact not only on the reading of polysomnographic sleep studies (PSGs) for clinical trials but also on the evaluation of patients' sleep. 
With the introduction of a new standard for sleep stage scorings (AASM standard) there is a need for studies on interrater reliability (IRR). The SIESTA database resulting from an EU-funded project provides a large number of studies (n = 72; 56 healthy controls and 16 subjects with different sleep disorders, mean age +/- SD: 57.7 +/- 18.7, 34 females) for which scorings according to both standards (AASM and R&K) were done. Differences in IRR were analysed at two levels: (1) based on quantitative sleep parameter by means of intraclass correlations; and (2) based on an epoch-by-epoch comparison by means of Cohen's kappa and Fleiss' kappa. The overall agreement was for the AASM standard 82.0% (Cohen's kappa = 0.76) and for the R&K standard 80.6% (Cohen's kappa = 0.68). Agreements increased from R&K to AASM for all sleep stages, except N2. The results of this study underline that the modification of the scoring rules improve IRR as a result of the integration of occipital, central and frontal leads on the one hand, but decline IRR on the other hand specifically for N2, due to the new rule that cortical arousals with or without concurrent increase in submental electromyogram are critical events for the end of N2." }, { "pmid": "14996037", "title": "Interrater reliability between scorers from eight European sleep laboratories in subjects with different sleep disorders.", "abstract": "Interrater variability of sleep stage scorings is a well-known phenomenon. The SIESTA project offered the opportunity to analyse interrater reliability (IRR) between experienced scorers from eight European sleep laboratories within a large sample of patients with different (sleep) disorders: depression, general anxiety disorder with and without non-organic insomnia, Parkinson's disease, period limb movements in sleep and sleep apnoea. The results were based on 196 recordings from 98 patients (73 males: 52.3 +/- 12.1 years and 25 females: 49.5 +/- 11.9 years) for which two independent expert scorings from two different laboratories were available. Cohen's kappa was used to evaluate the IRR on the basis of epochs and intraclass correlation was used to analyse the agreement on quantitative sleep parameters. The overall level of agreement when five different stages were distinguished was kappa = 0.6816 (76.8%), which in terms of kappa reflects a 'substantial' agreement (Landis and Koch, 1977). For different groups of patients kappa values varied from 0.6138 (Parkinson's disease) to 0.8176 (generalized anxiety disorder). With regard to (sleep) stages, the IRR was highest for rapid eye movement (REM), followed by Wake, slow-wave sleep (SWS), non-rapid eye movement 2 (NREM2) and NREM1. The results of regression analysis showed that age and sex only had a statistically significant effect on kappa when the (sleep) stages are considered separately. For NREM2 and SWS a statistically significant decrease of IRR with age has been observed and the IRR for SWS was lower for males than for females. These variations of IRR most probably reflect changes of the sleep electroencephalography (EEG) with age and gender." }, { "pmid": "8647043", "title": "Discrimination of sleep stages: a comparison between spectral and nonlinear EEG measures.", "abstract": "During recent years, methods from nonlinear dynamics were introduced into the analysis of EEG signals. 
Although from a theoretical point of view nonlinear measures quantify properties being independent from conventional spectral measures, it is a crucial question whether in practice nonlinear EEG measures yield additional information, which is not redundant to the information gained by spectral analysis. Therefore, we compared the ability of several spectral and nonlinear measures to discriminate different sleep stages. We evaluated spectral measures (relative delta power, spectral edge, spectral entropy and first spectral moment), and nonlinear measures (correlation dimension D2, largest Lyapunov exponent LI, and approximated Kolmogorof entropy K2), and additionally the stochastic time domain based measure entropy of amplitudes. For 12 healthy subjects these measures were calculated from sleep EEG segments of 2:44 min duration, each segment unambiguously corresponding to one of the sleep stages I, II, SWS and REM. Results were statistically evaluated by multivariate and univariate analyses of variance and by discriminant analyses. Generally, nonlinear measures (D2 and L1) performed better in discriminating sleep stages I and II, whereas spectral measures showed advantages in discriminating stage II and SWS. Combinations of spectral and nonlinear measures yielded a better overall discrimination of sleep stages than spectral measures alone. The best overall discrimination was reached even without inclusion of any of the spectral measures. It can be concluded that nonlinear measures yield additional information, which improves the ability to discriminate sleep stages and which may in general improve the ability to distinguish different psychophysiological states. This confirms the importance and practical reliability of the application of nonlinear methods to EEG analysis." }, { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." 
}, { "pmid": "25325478", "title": "Unsupervised online classifier in sleep scoring for sleep deprivation studies.", "abstract": "STUDY OBJECTIVE\nThis study was designed to evaluate an unsupervised adaptive algorithm for real-time detection of sleep and wake states in rodents.\n\n\nDESIGN\nWe designed a Bayesian classifier that automatically extracts electroencephalogram (EEG) and electromyogram (EMG) features and categorizes non-overlapping 5-s epochs into one of the three major sleep and wake states without any human supervision. This sleep-scoring algorithm is coupled online with a new device to perform selective paradoxical sleep deprivation (PSD).\n\n\nSETTINGS\nControlled laboratory settings for chronic polygraphic sleep recordings and selective PSD.\n\n\nPARTICIPANTS\nTen adult Sprague-Dawley rats instrumented for chronic polysomnographic recordings.\n\n\nMEASUREMENTS\nThe performance of the algorithm is evaluated by comparison with the score obtained by a human expert reader. Online detection of PS is then validated with a PSD protocol with duration of 72 hours.\n\n\nRESULTS\nOur algorithm gave a high concordance with human scoring with an average κ coefficient > 70%. Notably, the specificity to detect PS reached 92%. Selective PSD using real-time detection of PS strongly reduced PS amounts, leaving only brief PS bouts necessary for the detection of PS in EEG and EMG signals (4.7 ± 0.7% over 72 h, versus 8.9 ± 0.5% in baseline), and was followed by a significant PS rebound (23.3 ± 3.3% over 150 minutes).\n\n\nCONCLUSIONS\nOur fully unsupervised data-driven algorithm overcomes some limitations of the other automated methods such as the selection of representative descriptors or threshold settings. When used online and coupled with our sleep deprivation device, it represents a better option for selective PSD than other methods like the tedious gentle handling or the platform method." }, { "pmid": "14757347", "title": "Design and validation of a computer-based sleep-scoring algorithm.", "abstract": "A computer-based sleep scoring algorithm was devised for the real time scoring of sleep-wake state in Wistar rats. Electroencephalogram (EEG) amplitude (microV(rms)) was measured in the following frequency bands: delta (delta; 1.5-6 Hz), theta (Theta; 6-10 Hz), alpha (alpha; 10.5-15 Hz), beta (beta; 22-30 Hz), and gamma (gamma; 35-45 Hz). Electromyographic (EMG) signals (microV(rms)) were recorded from the levator auris longus (neck) muscle, as this yielded a significantly higher algorithm accuracy than the spinodeltoid (shoulder) or temporalis (head) muscle EMGs (ANOVA; P=0.009). Data were obtained using either tethers (n=10) or telemetry (n=4). We developed a simple three-step algorithm that categorizes behavioural state as wake, non-rapid eye movement (NREM) sleep, rapid eye movement (REM) sleep, based on thresholds set during a manually-scored 90-min preliminary recording. Behavioural state was assigned in 5-s epochs. EMG amplitude and ratios of EEG frequency band amplitudes were measured, and compared with empirical thresholds in each animal.STEP 1: EMG amplitude greater than threshold? Yes: \"active\" wake, no: sleep or \"quiet\" wake. STEP 2: EEG amplitude ratio (delta x alpha)/(beta x gamma) greater than threshold? Yes: NREM, no: REM or \"quiet\" wake. STEP 3: EEG amplitude ratio Theta(2)/(delta x alpha) greater than threshold? Yes: REM, no: \"quiet\" wake. The algorithm was validated with one, two and three steps. 
The overall accuracy in discriminating wake and sleep (NREM and REM combined) using step one alone was found to be 90.1%. Overall accuracy using the first two steps was found to be 87.5% in scoring wake, NREM and REM sleep. When all three steps were used, overall accuracy in scoring wake, NREM and REM sleep was determined to be 87.9%. All accuracies were derived from comparisons with unequivocally-scored epochs from four 90-min recordings as defined by an experienced human rater. The algorithms were as reliable as the agreement between three human scorers (88%)." }, { "pmid": "29516562", "title": "Automatic artefact detection in single-channel sleep EEG recordings.", "abstract": "Quantitative electroencephalogram analysis (e.g. spectral analysis) has become an important tool in sleep research and sleep medicine. However, reliable results are only obtained if artefacts are removed or excluded. Artefact detection is often performed manually during sleep stage scoring, which is time consuming and prevents application to large datasets. We aimed to test the performance of mostly simple algorithms of artefact detection in polysomnographic recordings, derive optimal parameters and test their generalization capacity. We implemented 14 different artefact detection methods, optimized parameters for derivation C3A2 using receiver operator characteristic curves of 32 recordings, and validated them on 21 recordings of healthy participants and 10 recordings of patients (different laboratory) and considered the methods as generalizable. We also compared average power density spectra with artefacts excluded based on algorithms and expert scoring. Analyses were performed retrospectively. We could reliably identify artefact contaminated epochs in sleep electroencephalogram recordings of two laboratories (healthy participants and patients) reaching good sensitivity (specificity 0.9) with most algorithms. The best performance was obtained using fixed thresholds of the electroencephalogram slope, high-frequency power (25-90 Hz or 45-90 Hz) and residuals of adaptive autoregressive models. Artefacts in electroencephalogram data can be reliably excluded by simple algorithms with good performance, and average electroencephalogram power density spectra with artefact exclusion based on algorithms and manual scoring are very similar in the frequency range relevant for most applications in sleep research and sleep medicine, allowing application to large datasets as needed to address questions related to genetics, epidemiology or precision medicine." }, { "pmid": "16229049", "title": "Antidepressants and their effect on sleep.", "abstract": "Given the relationship between sleep and depression, there is inevitably going to be an effect of antidepressants on sleep. Current evidence suggests that this effect depends on the class of antidepressant used and the dosage. The extent of variation between the effects of antidepressants and sleep may relate to their mechanism of action. This systematic review examines randomised-controlled trials (RCTs) that have reported the effect that antidepressants appear to have on sleep. RCTs are not restricted to depressed populations, since several studies provide useful information about the effects on sleep in other groups. Nevertheless, the distinction is made between those studies because the participant's health may influence the baseline sleep profiles and the effect of the antidepressant. 
Insomnia is often seen with monoamine oxidase inhibitors (MAOIs), with all tricyclic antidepressants (TCAs) except amitriptyline, and all selective serotonin reuptake inhibitors (SSRIs) with venlafaxine and moclobemide as well. Sedation has been reported with all TCAs except desipramine, with mirtazapine and nefazodone, the TCA-related maprotiline, trazodone and mianserin, and with all MAOIs. REM sleep suppression has been observed with all TCAs except trimipramine, but especially clomipramine, with all MAOIs and SSRIs and with venlafaxine, trazodone and bupropion. However, the effect on sleep varies between compounds within antidepressant classes, differences relating to the amount of sedative or alerting (insomnia) effects, changes to baseline sleep parameters, differences relating to REM sleep, and the degree of sleep-related side effects." }, { "pmid": "17468046", "title": "Neurobiology of REM and NREM sleep.", "abstract": "This paper presents an overview of the current knowledge of the neurophysiology and cellular pharmacology of sleep mechanisms. It is written from the perspective that recent years have seen a remarkable development of knowledge about sleep mechanisms, due to the capability of current cellular neurophysiological, pharmacological and molecular techniques to provide focused, detailed, and replicable studies that have enriched and informed the knowledge of sleep phenomenology and pathology derived from electroencephalographic (EEG) analysis. This chapter has a cellular and neurophysiological/neuropharmacological focus, with an emphasis on rapid eye movement (REM) sleep mechanisms and non-REM (NREM) sleep phenomena attributable to adenosine. The survey of neuronal and neurotransmitter-related brainstem mechanisms of REM includes monoamines, acetylcholine, the reticular formation, a new emphasis on GABAergic mechanisms and a discussion of the role of orexin/hypcretin in diurnal consolidation of REM sleep. The focus of the NREM sleep discussion is on the basal forebrain and adenosine as a mediator of homeostatic control. Control is through basal forebrain extracellular adenosine accumulation during wakefulness and inhibition of wakefulness-active neurons. Over longer periods of sleep loss, there is a second mechanism of homeostatic control through transcriptional modification. Adenosine acting at the A1 receptor produces an up-regulation of A1 receptors, which increases inhibition for a given level of adenosine, effectively increasing the gain of the sleep homeostat. This second mechanism likely occurs in widespread cortical areas as well as in the basal forebrain. Finally, the results of a new series of experimental paradigms in rodents to measure the neurocognitive effects of sleep loss and sleep interruption (modeling sleep apnea) provide animal model data congruent with those in humans." }, { "pmid": "29391413", "title": "The Effect of a Slowly Rocking Bed on Sleep.", "abstract": "Rocking movements appear to affect human sleep. Recent research suggested a facilitated transition from wake to sleep and a boosting of slow oscillations and sleep spindles due to lateral rocking movements during an afternoon nap. This study aimed at investigating the effect of vestibular stimulation on sleep onset, nocturnal sleep and its potential to increase sleep spindles and slow waves, which could influence memory performance. 
Polysomnography was recorded in 18 males (age: 20-28 years) during three nights: movement until sleep onset (C1), movement for 2 hours (C2), and one baseline (B) without motion. Sleep dependent changes in memory performance were assessed with a word-pair learning task. Although subjects preferred nights with vestibular stimulation, a facilitated sleep onset or a boost in slow oscillations was not observed. N2 sleep and the total number of sleep spindles increased during the 2 h with vestibular stimulation (C2) but not over the entire night. Memory performance increased over night but did not differ between conditions. The lack of an effect might be due to the already high sleep efficiency (96%) and sleep quality of our subjects during baseline. Nocturnal sleep in good sleepers might not benefit from the potential facilitating effects of vestibular stimulation." }, { "pmid": "22908930", "title": "A transition-constrained discrete hidden Markov model for automatic sleep staging.", "abstract": "BACKGROUND\nApproximately one-third of the human lifespan is spent sleeping. To diagnose sleep problems, all-night polysomnographic (PSG) recordings including electroencephalograms (EEGs), electrooculograms (EOGs) and electromyograms (EMGs), are usually acquired from the patient and scored by a well-trained expert according to Rechtschaffen & Kales (R&K) rules. Visual sleep scoring is a time-consuming and subjective process. Therefore, the development of an automatic sleep scoring method is desirable.\n\n\nMETHOD\nThe EEG, EOG and EMG signals from twenty subjects were measured. In addition to selecting sleep characteristics based on the 1968 R&K rules, features utilized in other research were collected. Thirteen features were utilized including temporal and spectrum analyses of the EEG, EOG and EMG signals, and a total of 158  hours of sleep data were recorded. Ten subjects were used to train the Discrete Hidden Markov Model (DHMM), and the remaining ten were tested by the trained DHMM for recognition. Furthermore, the 2-fold cross validation was performed during this experiment.\n\n\nRESULTS\nOverall agreement between the expert and the results presented is 85.29%. With the exception of S1, the sensitivities of each stage were more than 81%. The most accurate stage was SWS (94.9%), and the least-accurately classified stage was S1 (<34%). In the majority of cases, S1 was classified as Wake (21%), S2 (33%) or REM sleep (12%), consistent with previous studies. However, the total time of S1 in the 20 all-night sleep recordings was less than 4%.\n\n\nCONCLUSION\nThe results of the experiments demonstrate that the proposed method significantly enhances the recognition rate when compared with prior studies." }, { "pmid": "9065871", "title": "A new approach to the analysis of the human sleep/wakefulness continuum.", "abstract": "The conventional approach to the analysis of human sleep uses a set of pre-defined rules to allocate each 20 or 30-s epoch to one of six main sleep stages. The application of these rules is performed either manually, by visual inspection of the electroencephalogram and related signals, or, more recently, by a software implementation of these rules on a computer. This article evaluates the limitations of rule-based sleep staging and then presents a new method of sleep analysis that makes no such use of pre-defined rules and stages, tracking instead the dynamic development of sleep on a continuous scale. 
The extraction of meaningful features from the electroencephalogram is first considered, and for this purpose a technique called autoregressive modelling was preferred to the more commonly-used methods of band-pass filtering or the fast Fourier transform. This is followed by a qualitative investigation into the dynamics of the electroencephalogram during sleep using a technique for data visualization known as a self-organizing feature map. The insights gained using this map led to the subsequent development of a new, quantitative method of sleep analysis that utilizes the pattern recognition capabilities of an artificial neural network. The outputs from this network provide a second-by-second quantification of the sleep/wakefulness continuum with a resolution that far exceeds that of rule-based sleep staging. This is demonstrated by the neural network's ability to pinpoint micro-arousals and highlight periods of severely disturbed sleep caused by certain sleep disorders. Both these phenomena are of considerable clinical value, but neither is scored satisfactorily using rule-based sleep staging." }, { "pmid": "11017725", "title": "Automated sleep stage scoring using hybrid rule- and case-based reasoning.", "abstract": "We propose an automated method for sleep stage scoring using hybrid rule- and case-based reasoning. The system first performs rule-based sleep stage scoring, according to the Rechtschaffen and Kales' sleep-scoring rule (1968), and then supplements the scoring with case-based reasoning. This method comprises a signal processing unit, a rule-based scoring unit, and a case-based scoring unit. We applied this methodology to three recordings of normal sleep and three recordings of obstructive sleep apnea (OSA). Average agreement rate in normal recordings was 87.5% and case-based scoring enhanced the agreement rate by 5.6%. This architecture showed several advantages over the other analytical approaches in sleep scoring: high performance on sleep disordered recordings, the explanation facility, and the learning ability. The results suggest that the combination of rule-based reasoning and case-based reasoning is promising for automated sleep scoring and it is also considered to be a good model of the cognitive scoring process." }, { "pmid": "23319910", "title": "The American Academy of Sleep Medicine inter-scorer reliability program: sleep stage scoring.", "abstract": "STUDY OBJECTIVES\nThe program provides a unique opportunity to compare a large number of scorers with varied levels of experience to determine sleep stage scoring agreement. The objective is to examine areas of disagreement to inform future revisions of the AASM Manual for the Scoring of Sleep and Associated Events.\n\n\nMETHODS\nThe sample included 9 record fragments, 1,800 epochs and more than 3,200,000 scoring decisions. More than 2,500 scorers, most with 3 or more years of experience, participated. The analysis determined agreement with the score chosen by the majority of scorers.\n\n\nRESULTS\nSleep stage agreement averaged 82.6%. Agreement was highest for stage R sleep with stages N2 and W approaching the same level. Scoring agreement for stage N3 sleep was 67.4% and was lowest for stage N1 at 63.0%. Scorers had particular difficulty with the last epoch of stage W before sleep onset, the first epoch of stage N2 after stage N1 and the first epoch of stage R after stage N2.
Discrimination between stages N2 and N3 was particularly difficult for scorers.\n\n\nCONCLUSIONS\nThese findings suggest that with current rules, inter-scorer agreement in a large group is approximately 83%, a level similar to that reported for agreement between expert scorers. Agreement in the scoring of stages N1 and N3 sleep was low. Modifications to the scoring rules to improve scoring during sleep stage transitions may result in improvement." }, { "pmid": "8477587", "title": "Neural network model: application to automatic analysis of human sleep.", "abstract": "We describe an approach to automatic all-night sleep analysis based on neural network models and simulated on a digital computer. First, automatic sleep stage scoring was performed using a multilayer feedforward network. Second, supervision of the automatic decision was achieved using ambiguity rejection and artifact rejection. Then, numerical analysis of sleep was carried out using all-night spectral analysis for the background activity of the EEG and sleep pattern detectors for the transient activity. Computerized analysis of sleep recordings may be considered as an essential tool to describe the sleep process and to reflect the dynamical organization of human sleep." }, { "pmid": "8853216", "title": "The effects of paroxetine and nefazodone on sleep: a placebo controlled trial.", "abstract": "We studied the effect of acute (1 day) and subacute (16 days) administration of the new antidepressant, nefazodone (400 mg daily), and the selective serotonin re-uptake inhibitor (SSRI), paroxetine (30 mg daily), on the sleep polysomnogram of 37 healthy volunteers using a random allocation, double-blind, placebo-controlled design. Compared to placebo, paroxetine lowered rapid eye movement (REM) sleep and increased REM latency. In addition, paroxetine increased awakenings and reduced Actual Sleep Time and Sleep Efficiency. In contrast, nefazodone did not alter REM sleep and had little effect on measures of sleep continuity. We conclude that in contrast to typical SSRIs, nefazodone administration has little effect on sleep architecture in healthy volunteers." }, { "pmid": "28678710", "title": "DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG.", "abstract": "This paper proposes a deep learning model, named DeepSleepNet, for automatic sleep stage scoring based on raw single-channel EEG. Most of the existing methods rely on hand-engineered features, which require prior knowledge of sleep analysis. Only a few of them encode the temporal information, such as transition rules, which is important for identifying the next sleep stages, into the extracted features. In the proposed model, we utilize convolutional neural networks to extract time-invariant features, and bidirectional-long short-term memory to learn transition rules among sleep stages automatically from EEG epochs. We implement a two-step training algorithm to train our model efficiently. We evaluated our model using different single-channel EEGs (F4-EOG (left), Fpz-Cz, and Pz-Oz) from two public sleep data sets, that have different properties (e.g., sampling rate) and scoring standards (AASM and R&K). The results showed that our model achieved similar overall accuracy and macro F1-score (MASS: 86.2%-81.7, Sleep-EDF: 82.0%-76.9) compared with the state-of-the-art methods (MASS: 85.9%-80.5, Sleep-EDF: 78.9%-73.7) on both data sets. 
This demonstrated that, without changing the model architecture and the training algorithm, our model could automatically learn features for sleep stage scoring from different raw single-channel EEGs from different data sets without utilizing any hand-engineered features." }, { "pmid": "2435525", "title": "Automated sleep scoring: a comparative reliability study of two algorithms.", "abstract": "In the present study, deterministic and stochastic sleep staging (DSS and SSS) methods were compared with expert visual analysis in order to provide reliability estimates under strict conditions of comparison. Thirty polygraphic records (15 controls, 15 patients) have been investigated, including artefacts and doubtful periods. Average agreement rates of both methods compared to expert visual scoring were very similar, although a few specifics occasionally appeared for partial sleep stages. The comparison of more than 40,000 sleep decisions (on 20 sec epochs) yielded 75% absolute reliability for normal controls and 70% for pathological cases. However, if the agreement rate obtained for routine visual scoring (82%) in our sleep laboratory is considered as satisfactory, our system is then 90% satisfactory. Finally, complementary aspects outlined in the two automatic scoring systems suggested the development of a unique algorithm on the basis of these methods. Keeping in mind the size of the test sample and the strict procedure of comparison, the two automated staging systems described in this study can be used with reasonable confidence for large scale investigations of sleep in man." }, { "pmid": "29029305", "title": "Large-Scale Automated Sleep Staging.", "abstract": "Study Objectives\nAutomated sleep staging has been previously limited by a combination of clinical and physiological heterogeneity. Both factors are in principle addressable with large data sets that enable robust calibration. However, the impact of sample size remains uncertain. The objectives are to investigate the extent to which machine learning methods can approximate the performance of human scorers when supplied with sufficient training cases and to investigate how staging performance depends on the number of training patients, contextual information, model complexity, and imbalance between sleep stage proportions.\n\n\nMethods\nA total of 102 features were extracted from six electroencephalography (EEG) channels in routine polysomnography. Two thousand nights were partitioned into equal (n = 1000) training and testing sets for validation. We used epoch-by-epoch Cohen's kappa statistics to measure the agreement between classifier output and human scorer according to American Academy of Sleep Medicine scoring criteria.\n\n\nResults\nEpoch-by-epoch Cohen's kappa improved with increasing training EEG recordings until saturation occurred (n = ~300). The kappa value was further improved by accounting for contextual (temporal) information, increasing model complexity, and adjusting the model training procedure to account for the imbalance of stage proportions. The final kappa on the testing set was 0.68. Testing on more EEG recordings leads to kappa estimates with lower variance.\n\n\nConclusion\nTraining with a large data set enables automated sleep staging that compares favorably with human scorers. Because testing was performed on a large and heterogeneous data set, the performance estimate has low variance and is likely to generalize broadly." 
}, { "pmid": "23621645", "title": "FASTER: an unsupervised fully automated sleep staging method for mice.", "abstract": "Identifying the stages of sleep, or sleep staging, is an unavoidable step in sleep research and typically requires visual inspection of electroencephalography (EEG) and electromyography (EMG) data. Currently, scoring is slow, biased and prone to error by humans and thus is the most important bottleneck for large-scale sleep research in animals. We have developed an unsupervised, fully automated sleep staging method for mice that allows less subjective and high-throughput evaluation of sleep. Fully Automated Sleep sTaging method via EEG/EMG Recordings (FASTER) is based on nonparametric density estimation clustering of comprehensive EEG/EMG power spectra. FASTER can accurately identify sleep patterns in mice that have been perturbed by drugs or by genetic modification of a clock gene. The overall accuracy is over 90% in every group. 24-h data are staged by a laptop computer in 10 min, which is faster than an experienced human rater. Dramatically improving the sleep staging process in both quality and throughput FASTER will open the door to quantitative and comprehensive animal sleep research." }, { "pmid": "28678710", "title": "DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG.", "abstract": "This paper proposes a deep learning model, named DeepSleepNet, for automatic sleep stage scoring based on raw single-channel EEG. Most of the existing methods rely on hand-engineered features, which require prior knowledge of sleep analysis. Only a few of them encode the temporal information, such as transition rules, which is important for identifying the next sleep stages, into the extracted features. In the proposed model, we utilize convolutional neural networks to extract time-invariant features, and bidirectional-long short-term memory to learn transition rules among sleep stages automatically from EEG epochs. We implement a two-step training algorithm to train our model efficiently. We evaluated our model using different single-channel EEGs (F4-EOG (left), Fpz-Cz, and Pz-Oz) from two public sleep data sets, that have different properties (e.g., sampling rate) and scoring standards (AASM and R&K). The results showed that our model achieved similar overall accuracy and macro F1-score (MASS: 86.2%-81.7, Sleep-EDF: 82.0%-76.9) compared with the state-of-the-art methods (MASS: 85.9%-80.5, Sleep-EDF: 78.9%-73.7) on both data sets. This demonstrated that, without changing the model architecture and the training algorithm, our model could automatically learn features for sleep stage scoring from different raw single-channel EEGs from different data sets without utilizing any hand-engineered features." }, { "pmid": "25377287", "title": "[Sleep habits, sleep quality and sleep medicine use of the Swiss population result].", "abstract": "A survey in a representative sample of the Swiss population revealed an average sleep duration of 7.5 hours on workdays and of 8.5 hours on free days, which reflected a more than half an hour (38 min) shorter sleep duration than 28 years ago. The mean time in bed was between 22:41 and 06:37 on workdays and between 23:29 and 08:27 on free days. On workdays as well as on free days the bedtime was delayed by 47 minutes in comparison to a similar survey 28 years ago. By contrast, the mean rise times on workdays and free days did not change. 
The sleep duration required to feel refreshed was indicated with 7 hours, which was 41 minutes less than 28 years ago. Roughly 90 % of the interviewees answered that they felt healthy, and 75 % described their sleep as good or very good compared to 79 % 28 years ago. The most frequent reasons stated for bad sleep were personal problems and strain at the workplace. The effect of bad quality sleep on every day functioning was considered as essential by 65 % of the respondents compared to 69 % 28 years ago. The use of medication to improve sleep was declared by 2.8 % (2.7 % 28 years ago), most often benzodiazepines, but also Valerian products and so-called z-drugs. In comparison with similar surveys in other countries (France, Great Britain and USA), Swiss residents slept roughly half an hour longer, but these other countries alike showed a sizable shortening of their habitual sleep duration over the last decades." }, { "pmid": "28429067", "title": "Neuronal oscillations and synchronicity associated with gamma-hydroxybutyrate during resting-state in healthy male volunteers.", "abstract": "RATIONALE\nGamma-hydroxybutyrate (GHB) is a putative neurotransmitter, a drug of abuse, an anesthetic agent, and a treatment for neuropsychiatric disorders. In previous electroencephalography (EEG) studies, GHB was shown to induce an electrophysiological pattern of \"paradoxical EEG-behavioral dissociation\" characterized by increased delta and theta oscillations usually associated with sleep during awake states. However, no detailed source localization of these alterations and no connectivity analyses have been performed yet.\n\n\nOBJECTIVES AND METHODS\nWe tested the effects of GHB (20 and 35 mg/kg, p.o.) on current source density (CSD), lagged phase synchronization (LPS), and global omega complexity (GOC) of neuronal oscillations in a randomized, double-blind, placebo-controlled, balanced cross-over study in 19 healthy, male participants using exact low-resolution electromagnetic tomography (eLORETA) of resting-state high-density EEG recordings.\n\n\nRESULTS\nCompared to placebo, GHB increased CSD of theta oscillations (5-7 Hz) in the posterior cingulate cortex (PCC) and alpha1 (8-10 Hz) oscillations in the anterior cingulate cortex. Higher blood plasma values were associated with higher LPS values of delta (2-4 Hz) oscillations between the PCC and the right inferior parietal lobulus. Additionally, GHB decreased GOC of alpha1 oscillations.\n\n\nCONCLUSION\nThese findings indicate that alterations in neuronal oscillations in the PCC mediate the psychotropic effects of GHB. Theta oscillations emerging from the PCC in combination with stability of functional connectivity within the default mode network might explain the GHB-related \"paradoxical EEG-behavioral dissociation.\" Our findings related to GOC suggest a reduced number of relatively independent neuronal processes, an effect that has also been demonstrated for other anesthetic agents." }, { "pmid": "18282838", "title": "A novel objective function for improved phoneme recognition using time-delay neural networks.", "abstract": "Single-speaker and multispeaker recognition results are presented for the voice-stop consonants /b,d,g/ using time-delay neural networks (TDNNs) with a number of enhancements, including a new objective function for training these networks. 
The new objective function, called the classification figure of merit (CFM), differs markedly from the traditional mean-squared-error (MSE) objective function and the related cross entropy (CE) objective function. Where the MSE and CE objective functions seek to minimize the difference between each output node and its ideal activation, the CFM function seeks to maximize the difference between the output activation of the node representing the correct classification and that of the best of the nodes representing incorrect classifications. A simple arbitration mechanism is used with all three objective functions to achieve a median 30% reduction in the number of misclassifications when compared to TDNNs trained with the traditional MSE back-propagation objective function alone." }, { "pmid": "29351821", "title": "Reliability of the American Academy of Sleep Medicine Rules for Assessing Sleep Depth in Clinical Practice.", "abstract": "STUDY OBJECTIVES\nThe American Academy of Sleep Medicine has published manuals for scoring polysomnograms that recommend time spent in non-rapid eye movement sleep stages (stage N1, N2, and N3 sleep) be reported. Given the well-established large interrater variability in scoring stage N1 and N3 sleep, we determined the range of time in stage N1 and N3 sleep scored by a large number of technologists when compared to reasonably estimated true values.\n\n\nMETHODS\nPolysomnograms of 70 females were scored by 10 highly trained sleep technologists, two each from five different academic sleep laboratories. Range and confidence interval (CI = difference between the 5th and 95th percentiles) of the 10 times spent in stage N1 and N3 sleep assigned in each polysomnogram were determined. Average values of times spent in stage N1 and N3 sleep generated by the 10 technologists in each polysomnogram were considered representative of the true values for the individual polysomnogram. Accuracy of different technologists in estimating delta wave duration was determined by comparing their scores to digitally determined durations.\n\n\nRESULTS\nThe CI range of the ten N1 scores was 4 to 39 percent of total sleep time (% TST) in different polysomnograms (mean CI ± standard deviation = 11.1 ± 7.1 % TST). Corresponding range for N3 was 1 to 28 % TST (14.4 ± 6.1 % TST). For stage N1 and N3 sleep, very low or very high values were reported for virtually all polysomnograms by different technologists. Technologists varied widely in their assignment of stage N3 sleep, scoring that stage when the digitally determined time of delta waves ranged from 3 to 17 seconds.\n\n\nCONCLUSIONS\nManual scoring of non-rapid eye movement sleep stages is highly unreliable among highly trained, experienced technologists. Measures of sleep continuity and depth that are reliable and clinically relevant should be a focus of clinical research." }, { "pmid": "27070243", "title": "Staging Sleep in Polysomnograms: Analysis of Inter-Scorer Variability.", "abstract": "STUDY OBJECTIVES\nTo determine the reasons for inter-scorer variability in sleep staging of polysomnograms (PSGs).\n\n\nMETHODS\nFifty-six PSGs were scored (5-stage sleep scoring) by 2 experienced technologists (first manual, M1). Months later, the technologists edited their own scoring (second manual, M2) based upon feedback from the investigators that highlighted differences between their scoring. The PSGs were then scored with an automatic system (Auto) and the technologists edited them, epoch-by-epoch (Edited-Auto). This resulted in 6 different manual scores for each PSG.
Epochs were classified as scorer errors (one M1 score differed from the other 5 scores), scorer bias (all 3 scores of each technologist were similar, but differed from the other technologist) and equivocal (sleep scoring was inconsistent within and between technologists).\n\n\nRESULTS\nPercent agreement after M1 was 78.9% ± 9.0% and was unchanged after M2 (78.1% ± 9.7%) despite numerous edits (≈40/PSG) by the scorers. Agreement in Edited-Auto was higher (86.5% ± 6.4%, p < 1E-9). Scorer errors (< 2% of epochs) and scorer bias (3.5% ± 2.3% of epochs) together accounted for < 20% of M1 disagreements. A large number of epochs (92 ± 44/PSG) with scoring agreement in M1 were subsequently changed in M2 and/or Edited-Auto. Equivocal epochs, which showed scoring inconsistency, accounted for 28% ± 12% of all epochs, and up to 76% of all epochs in individual patients. Disagreements were largely between awake/NREM, N1/N2, and N2/N3 sleep.\n\n\nCONCLUSION\nInter-scorer variability is largely due to epochs that are difficult to classify. Availability of digitally identified events (e.g., spindles) or calculated variables (e.g., depth of sleep, delta wave duration) during scoring may greatly reduce scoring variability." } ]
BMC Medical Informatics and Decision Making
30424769
PMC6234630
10.1186/s12911-018-0679-6
A decision support system to follow up and diagnose primary headache patients using semantically enriched data
Background: Headache disorders are an important health burden with a large health-economic impact worldwide. Current treatment and follow-up processes are often archaic, creating opportunities for computer-aided and decision support systems to increase their efficiency. Existing systems are mostly completely data-driven, and the underlying models are black boxes, which deteriorates the interpretability and transparency that are key factors for deployment in a clinical setting. Methods: In this paper, a decision support system is proposed, composed of three components: (i) a cross-platform mobile application to capture the data required from patients to formulate a diagnosis, (ii) an automated diagnosis support module that generates an interpretable decision tree, based on data semantically annotated with expert knowledge, in order to support physicians in formulating the correct diagnosis, and (iii) a web application that lets the physician efficiently interpret the captured data and the learned insights by means of visualizations. Results: We show that decision tree induction techniques achieve accuracy rates competitive with other black- and white-box techniques on a publicly available dataset, referred to as migbase. Migbase contains aggregated information on headache attacks from 849 patients, and each sample is labeled with one of three possible primary headache disorders. We demonstrate that balancing the dataset using prior expert knowledge reduces the classification error by more than 10%, a statistically significant improvement (p ≤ 0.05). Furthermore, we achieve high accuracy rates using features extracted with the Weisfeiler-Lehman kernel, which is completely unsupervised, making it an ideal approach to address a potential cold-start problem. Conclusion: Decision trees are an ideal candidate for the automated diagnosis support module: they achieve predictive performance competitive with other techniques on the migbase dataset and are, foremost, completely interpretable. Moreover, the incorporation of prior knowledge increases both the predictive performance and the transparency of the resulting predictive model on the studied dataset.
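As a concrete illustration of the automated diagnosis support module sketched in the Methods above, the following minimal Python example trains and prints an interpretable decision tree for primary headache classification. It is a sketch only, not the authors' implementation: the file name migbase.csv, the "diagnosis" column name, the use of scikit-learn's CART-based DecisionTreeClassifier instead of the semantically enriched induction described in the paper, and class_weight="balanced" as a crude stand-in for the knowledge-driven balancing are all assumptions.

```python
# Minimal sketch (not the authors' code): train an interpretable decision tree
# on a hypothetical CSV export of the migbase dataset, where each row describes
# one patient's aggregated headache attacks and a "diagnosis" label
# (migraine / tension-type / cluster).
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("migbase.csv")                      # hypothetical file name
X = pd.get_dummies(df.drop(columns=["diagnosis"]))   # one-hot encode symptom columns
y = df["diagnosis"]

# A shallow tree keeps the learned rules short enough for a physician to read;
# class_weight="balanced" only roughly mimics the paper's expert-driven balancing.
clf = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=42)
print("10-fold CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())

# The fitted tree can be dumped as plain-text rules for inspection.
clf.fit(X, y)
print(export_text(clf, feature_names=list(X.columns)))
```

Capping the tree depth is the design choice that preserves interpretability; the resulting accuracy can then be compared against black-box baselines, in the spirit of the Results above.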
Related work
It can be hard to get a clear and high-quality clinical picture of a patient from a consultation alone. Therefore, some form of self-monitoring is preferred, where the patient keeps track of his or her headache attacks over time [19–21]. Clearly, a mobile application is more user-friendly than a paper calendar [22], since it allows patients to register information at any time or place, without having to worry about losing the calendar or forgetting to bring it to a consultation. Several mobile headache diary applications are already commercially available [23]. The most popular ones, in terms of number of downloads and ratings in the Android Play and Apple App Stores, include Migraine Buddy [24] and Headache Diary Lite/Pro [25]. Unfortunately, while many solutions exist for patients to keep track of all headache information, the number of solutions that allow physicians to efficiently interpret all collected data is very limited. Most mobile applications provide an export functionality, which allows users to print out a certain representation of their data that can be brought to a consultation. This is still archaic, and it does not solve the problem that patients can forget to bring this printed version to a consultation. Moreover, a physician can only analyze the data, whose representation is completely determined by the mobile application developers, when the patient provides it. A custom-made application that visualizes all collected data allows physicians to analyze patient data whenever they want, and allows them to tailor the data representation to their own needs [26, 27].
A few researchers have already shown the potential that machine learning techniques can offer in diagnosing a headache disorder. In Keight et al. [28], nine different classifiers were compared on a dataset consisting of 836 primary headache cases, each containing 65 different variables. Each case is labeled as one of five classes (tension-type, chronic tension-type, migraine with or without aura, and trigeminal autonomic cephalalgia), collected from two Turkish medical institutions. They show that a stacking classifier achieves the best predictive performance, at the cost of very limited interpretability. The power of ensembles for headache classification has also been confirmed by Jackowski et al. [29]. Krawczyk et al. [30] present a taxonomy of headache disorders, along with the corresponding diagnostic criteria from the ICHD document. They compare six different classifiers and three feature selection techniques with each other, and with the performance of a physician, on a labeled dataset of 579 subjects covering three classes (migraine, tension-type and cluster headache). They show that reducing the feature set can increase the predictive performance, and that the automated feature selection techniques select a better subset of features than a physician does, in terms of the resulting predictive performance. Moreover, they show that the predictive performance of C4.5, a decision tree induction algorithm, closely matches that of black-box counterparts. Celik et al. [31] introduce an artificial immune algorithm that achieves high predictive performance on a dataset of 849 samples with three classes (migraine, tension-type and cluster headache). The dataset is made publicly available and is used in this study to allow for comparison with their work and with possible future studies.
Furthermore, they present a web-based application that allows patients to register information concerning their headache attacks and physicians to consult these data. In 2017, an extension was released, in which they evaluated an ant colony optimization algorithm on their dataset. More importantly, they give a clear overview of all prior research on primary headache disorder classification [32]. Yin et al. [33] propose a rule-based and case-based reasoner, an extension of a previously proposed system [34], and show that these reasoners outperform machine learning classifiers in terms of both precision and recall on their dataset. Finally, Garcia-Chimeno et al. show that ensemble techniques combined with feature selection can drastically improve predictive performance for headache classification [35], confirming the findings of Jackowski et al. and Keight et al. While the discussed papers provide interesting insights into different methodologies applied to headache disorder classification, none of them, except for the research by Celik et al., uses a publicly available dataset or discusses an end-to-end application with components for both patient and physician.
As opposed to Celik et al., we advocate the use of a white-box approach, since interpretability and transparency are important factors in boosting the physician's trust in the decision support system. To stimulate transparency, we incorporate existing expert knowledge of the headache disorder diagnosis domain into the different phases of our machine learning approach. This is in contrast with a purely data-driven method, where existing knowledge is completely neglected. This hybrid mix of knowledge-driven and data-driven techniques has advantages other than better interpretability alone: it requires far less labeled data and is often faster than the expensive training phase of purely data-driven methods. On the downside, the predictive performance of the resulting model depends entirely on the quality of the incorporated knowledge [36–38]. Fortunately, expert knowledge in the headache disorder domain is of high quality and can easily be encoded in a machine-interpretable format, as has been shown by Yin et al. The added value of incorporating prior knowledge into the different steps of a machine learning pipeline has already been demonstrated for medical tasks in other domains by multiple studies [39–41].
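The comparison that recurs in the studies above, an interpretable decision tree versus a black-box ensemble, with and without feature selection, can be sketched in a few lines. This is a hypothetical illustration rather than code from any of the cited works: the dataset file and column names are assumed, scikit-learn's CART tree stands in for C4.5, and a chi-squared univariate selector stands in for the feature selection techniques evaluated by Krawczyk et al.

```python
# Hypothetical white-box vs. black-box comparison on a headache dataset;
# file name, column names and model choices are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("migbase.csv")                      # hypothetical export
X = pd.get_dummies(df.drop(columns=["diagnosis"]))   # one-hot encode; chi2 needs non-negative features
y = df["diagnosis"]

models = {
    "decision_tree (white-box)": DecisionTreeClassifier(max_depth=5, random_state=0),
    "random_forest (black-box)": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    # Univariate feature selection, since several cited studies report that a
    # reduced feature set can improve accuracy.
    pipe = make_pipeline(SelectKBest(chi2, k=min(20, X.shape[1])), model)
    score = cross_val_score(pipe, X, y, cv=10).mean()
    print(f"{name}: mean 10-fold CV accuracy = {score:.3f}")
```

Comparable cross-validated scores for the two models are the kind of evidence the cited studies use to argue that a fully interpretable tree need not sacrifice much accuracy.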
[ "25888584", "16643310", "21924589", "16622551", "28919118", "24330723", "19438916", "22360745", "19402567", "12876249", "25178541", "17221341", "12654956", "16886925", "22842873", "25834725", "29036404", "26306246" ]
[ { "pmid": "25888584", "title": "Quality of life in primary headache disorders: A review.", "abstract": "BACKGROUND\nHealth-related quality of life (HRQoL) is emerging as an important element of clinical research in primary headache disorders, allowing a measure of the impact of headache on patients' well-being and daily life. A better understanding of this may contribute to improved resource allocations and treatment approaches.\n\n\nOBJECTIVE\nThe objective of this study is to review available data on HRQoL in primary headache disorders and identify any influencing factors.\n\n\nMETHODS\nDatabase searches including MEDLINE, PsycINFO and EMBASE were performed. Studies that investigated HRQoL in patients with primary headache disorders were included and reviewed. Trials that evaluated the efficacy of medications or interventions were excluded.\n\n\nRESULTS\nA total of 80 articles were included in the review. Both physical and emotional/mental aspects of HRQoL were impaired across headache subtypes, although the extent varied depending on headache type. A number of factors influencing HRQoL were also identified.\n\n\nCONCLUSION\nThis narrative review suggests that headache, particularly in its chronic form, has a great impact on HRQoL. Clinical practice should not solely focus on pain alleviation but rather adopt routine assessment of HRQoL. Furthermore, identification and management of associated psychological comorbidities, which can significantly influence HRQoL in headache sufferers, are essential for optimal clinical management." }, { "pmid": "16643310", "title": "Epidemiology of headache in Europe.", "abstract": "The present review of epidemiologic studies on migraine and headache in Europe is part of a larger initiative by the European Brain Council to estimate the costs incurred because of brain disorders. Summarizing the data on 1-year prevalence, the proportion of adults in Europe reporting headache was 51%, migraine 14%, and 'chronic headache' (i.e. > or =15 days/month or 'daily') 4%. Generally, migraine, and to a lesser degree headache, are most prevalent during the most productive years of adulthood, from age 20 to 50 years. Several European studies document the negative influence of headache disorders on the quality of life, and health-economic studies indicate that 15% of adults were absent from work during the last year because of headache. Very few studies have been performed in Eastern Europe, and there are also surprisingly little data on tension-type headache from any country. Although the methodology and the quality of the published studies vary considerably, making direct comparisons between different countries difficult, the present review clearly demonstrates that headache disorders are extremely prevalent and have a vast impact on public health. The data collected should be used as arguments to increase resources to headache research and care for headache patients all over the continent." }, { "pmid": "21924589", "title": "Cost of disorders of the brain in Europe 2010.", "abstract": "BACKGROUND\nThe spectrum of disorders of the brain is large, covering hundreds of disorders that are listed in either the mental or neurological disorder chapters of the established international diagnostic classification systems. These disorders have a high prevalence as well as short- and long-term impairments and disabilities. Therefore they are an emotional, financial and social burden to the patients, their families and their social network. 
In a 2005 landmark study, we estimated for the first time the annual cost of 12 major groups of disorders of the brain in Europe and gave a conservative estimate of €386 billion for the year 2004. This estimate was limited in scope and conservative due to the lack of sufficiently comprehensive epidemiological and/or economic data on several important diagnostic groups. We are now in a position to substantially improve and revise the 2004 estimates. In the present report we cover 19 major groups of disorders, 7 more than previously, of an increased range of age groups and more cost items. We therefore present much improved cost estimates. Our revised estimates also now include the new EU member states, and hence a population of 514 million people.\n\n\nAIMS\nTo estimate the number of persons with defined disorders of the brain in Europe in 2010, the total cost per person related to each disease in terms of direct and indirect costs, and an estimate of the total cost per disorder and country.\n\n\nMETHODS\nThe best available estimates of the prevalence and cost per person for 19 groups of disorders of the brain (covering well over 100 specific disorders) were identified via a systematic review of the published literature. Together with the twelve disorders included in 2004, the following range of mental and neurologic groups of disorders is covered: addictive disorders, affective disorders, anxiety disorders, brain tumor, childhood and adolescent disorders (developmental disorders), dementia, eating disorders, epilepsy, mental retardation, migraine, multiple sclerosis, neuromuscular disorders, Parkinson's disease, personality disorders, psychotic disorders, sleep disorders, somatoform disorders, stroke, and traumatic brain injury. Epidemiologic panels were charged to complete the literature review for each disorder in order to estimate the 12-month prevalence, and health economic panels were charged to estimate best cost-estimates. A cost model was developed to combine the epidemiologic and economic data and estimate the total cost of each disorder in each of 30 European countries (EU27+Iceland, Norway and Switzerland). The cost model was populated with national statistics from Eurostat to adjust all costs to 2010 values, converting all local currencies to Euro, imputing costs for countries where no data were available, and aggregating country estimates to purchasing power parity adjusted estimates for the total cost of disorders of the brain in Europe 2010.\n\n\nRESULTS\nThe total cost of disorders of the brain was estimated at €798 billion in 2010. Direct costs constitute the majority of costs (37% direct healthcare costs and 23% direct non-medical costs) whereas the remaining 40% were indirect costs associated with patients' production losses. On average, the estimated cost per person with a disorder of the brain in Europe ranged between €285 for headache and €30,000 for neuromuscular disorders. The European per capita cost of disorders of the brain was €1550 on average but varied by country. 
The cost (in billion €PPP 2010) of the disorders of the brain included in this study was as follows: addiction: €65.7; anxiety disorders: €74.4; brain tumor: €5.2; child/adolescent disorders: €21.3; dementia: €105.2; eating disorders: €0.8; epilepsy: €13.8; headache: €43.5; mental retardation: €43.3; mood disorders: €113.4; multiple sclerosis: €14.6; neuromuscular disorders: €7.7; Parkinson's disease: €13.9; personality disorders: €27.3; psychotic disorders: €93.9; sleep disorders: €35.4; somatoform disorder: €21.2; stroke: €64.1; traumatic brain injury: €33.0. It should be noted that the revised estimate of those disorders included in the previous 2004 report constituted €477 billion, by and large confirming our previous study results after considering the inflation and population increase since 2004. Further, our results were consistent with administrative data on the health care expenditure in Europe, and comparable to previous studies on the cost of specific disorders in Europe. Our estimates were lower than comparable estimates from the US.\n\n\nDISCUSSION\nThis study was based on the best currently available data in Europe and our model enabled extrapolation to countries where no data could be found. Still, the scarcity of data is an important source of uncertainty in our estimates and may imply over- or underestimations in some disorders and countries. Even though this review included many disorders, diagnoses, age groups and cost items that were omitted in 2004, there are still remaining disorders that could not be included due to limitations in the available data. We therefore consider our estimate of the total cost of the disorders of the brain in Europe to be conservative. In terms of the health economic burden outlined in this report, disorders of the brain likely constitute the number one economic challenge for European health care, now and in the future. Data presented in this report should be considered by all stakeholder groups, including policy makers, industry and patient advocacy groups, to reconsider the current science, research and public health agenda and define a coordinated plan of action of various levels to address the associated challenges.\n\n\nRECOMMENDATIONS\nPolitical action is required in light of the present high cost of disorders of the brain. Funding of brain research must be increased; care for patients with brain disorders as well as teaching at medical schools and other health related educations must be quantitatively and qualitatively improved, including psychological treatments. The current move of the pharmaceutical industry away from brain related indications must be halted and reversed. Continued research into the cost of the many disorders not included in the present study is warranted. It is essential that not only the EU but also the national governments forcefully support these initiatives." }, { "pmid": "16622551", "title": "Epidemiology of primary and secondary headaches in a Brazilian tertiary-care center.", "abstract": "OBJECTIVE\nTo analyze the demographic features of the population sample, the time of headache complaint until first consultation and the diagnosis of primary and secondary headaches.\n\n\nMETHOD\n3328 patients were analyzed retrospectively and divided according to gender, age, race, school instruction, onset of headache until first consultation and diagnosis(ICHD-II, 2004).\n\n\nRESULTS\nSex ratio (Female/Male) was 4:1, and the mean age was 40.7+/-15 years, without statistical differences between sexes. 
Approximately 65% of the patients were white and 55% had less than eight years of school instruction. Headache complaint until first consultation ranged from 1 to 5 years in 32.99% patients. The most prevalent diagnosis were migraine (37.98%), tension-type headache-TTH (22.65%) and cluster headache (2.73%).\n\n\nCONCLUSION\nThere are few data on epidemiological features of headache clinic populations, mainly in developing countries. According to the literature, migraine was more frequent than TTH. It is noteworthy the low school instruction of this sample and time patient spent to seek for specialized attention. Hypnic headache syndrome was seen with an unusual frequency." }, { "pmid": "28919118", "title": "Global, regional, and national disability-adjusted life-years (DALYs) for 333 diseases and injuries and healthy life expectancy (HALE) for 195 countries and territories, 1990-2016: a systematic analysis for the Global Burden of Disease Study 2016.", "abstract": "BACKGROUND\nMeasurement of changes in health across locations is useful to compare and contrast changing epidemiological patterns against health system performance and identify specific needs for resource allocation in research, policy development, and programme decision making. Using the Global Burden of Diseases, Injuries, and Risk Factors Study 2016, we drew from two widely used summary measures to monitor such changes in population health: disability-adjusted life-years (DALYs) and healthy life expectancy (HALE). We used these measures to track trends and benchmark progress compared with expected trends on the basis of the Socio-demographic Index (SDI).\n\n\nMETHODS\nWe used results from the Global Burden of Diseases, Injuries, and Risk Factors Study 2016 for all-cause mortality, cause-specific mortality, and non-fatal disease burden to derive HALE and DALYs by sex for 195 countries and territories from 1990 to 2016. We calculated DALYs by summing years of life lost and years of life lived with disability for each location, age group, sex, and year. We estimated HALE using age-specific death rates and years of life lived with disability per capita. We explored how DALYs and HALE differed from expected trends when compared with the SDI: the geometric mean of income per person, educational attainment in the population older than age 15 years, and total fertility rate.\n\n\nFINDINGS\nThe highest globally observed HALE at birth for both women and men was in Singapore, at 75·2 years (95% uncertainty interval 71·9-78·6) for females and 72·0 years (68·8-75·1) for males. The lowest for females was in the Central African Republic (45·6 years [42·0-49·5]) and for males was in Lesotho (41·5 years [39·0-44·0]). From 1990 to 2016, global HALE increased by an average of 6·24 years (5·97-6·48) for both sexes combined. Global HALE increased by 6·04 years (5·74-6·27) for males and 6·49 years (6·08-6·77) for females, whereas HALE at age 65 years increased by 1·78 years (1·61-1·93) for males and 1·96 years (1·69-2·13) for females. Total global DALYs remained largely unchanged from 1990 to 2016 (-2·3% [-5·9 to 0·9]), with decreases in communicable, maternal, neonatal, and nutritional (CMNN) disease DALYs offset by increased DALYs due to non-communicable diseases (NCDs). The exemplars, calculated as the five lowest ratios of observed to expected age-standardised DALY rates in 2016, were Nicaragua, Costa Rica, the Maldives, Peru, and Israel. 
The leading three causes of DALYs globally were ischaemic heart disease, cerebrovascular disease, and lower respiratory infections, comprising 16·1% of all DALYs. Total DALYs and age-standardised DALY rates due to most CMNN causes decreased from 1990 to 2016. Conversely, the total DALY burden rose for most NCDs; however, age-standardised DALY rates due to NCDs declined globally.\n\n\nINTERPRETATION\nAt a global level, DALYs and HALE continue to show improvements. At the same time, we observe that many populations are facing growing functional health loss. Rising SDI was associated with increases in cumulative years of life lived with disability and decreases in CMNN DALYs offset by increased NCD DALYs. Relative compression of morbidity highlights the importance of continued health interventions, which has changed in most locations in pace with the gross domestic product per person, education, and family planning. The analysis of DALYs and HALE and their relationship to SDI represents a robust framework with which to benchmark location-specific health performance. Country-specific drivers of disease burden, particularly for causes with higher-than-expected DALYs, should inform health policies, health system improvement initiatives, targeted prevention efforts, and development assistance for health, including financial and research investments for all countries, regardless of their level of sociodemographic development. The presence of countries that substantially outperform others suggests the need for increased scrutiny for proven examples of best practices, which can help to extend gains, whereas the presence of underperforming countries suggests the need for devotion of extra attention to health systems that need more robust support.\n\n\nFUNDING\nBill & Melinda Gates Foundation." }, { "pmid": "24330723", "title": "Migraine misdiagnosis as a sinusitis, a delay that can last for many years.", "abstract": "BACKGROUND\nSinusitis is the most frequent misdiagnosis given to patients with migraine.Therefore we decided to estimate the frequency of misdiagnosis of sinusitis among migraine patients.\n\n\nMETHODS\nThe study included migraine patients with a past history of sinusitis. All included cases fulfilled the International Classification of Headache Disorders, 3rd edition (ICHD-III- beta) criteria. We excluded patients with evidence of sinusitis within the past 6 months of evaluation. Demographic data, headache history, medical consultation, and medication intake for headache and effectiveness of therapy before and after diagnosis were collected.\n\n\nRESULTS\nA total of 130 migraine patients were recruited. Of these patients 106 (81.5%) were misdiagnosed as sinusitis. The mean time delay of migraine diagnosis was (7.75 ± 6.29, range 1 to 38 years). Chronic migraine was significantly higher (p < 0.02) in misdiagnosed patients than in patients with proper diagnosis. Medication overuse headache (MOH) was reported only in patients misdiagnosed as sinusitis. The misdiagnosed patients were treated either medically 87.7%, or surgically12.3% without relieve of their symptoms in 84.9% and 76.9% respectively. However, migraine headache improved in 68.9% after proper diagnosis and treatment.\n\n\nCONCLUSIONS\nMany migraine patients were misdiagnosed as sinusitis. Strict adherence to the diagnostic criteria will prevent the delay in migraine diagnosis and help to prevent chronification of the headache and possible MOH." 
}, { "pmid": "19438916", "title": "Underdiagnosis and undertreatment of migraine in Italy: a survey of patients attending for the first time 10 headache centres.", "abstract": "The aim of this study was to asses the clinical features, pattern of healthcare and drug utilization of migraine patients attending 10 Italian headache centres (HC). Migraine is underdiagnosed and undertreated everywhere throughout the world, despite its considerable burden. Migraine sufferers often deal with their problem alone using self-prescribing drugs, whereas triptans are used by a small proportion of patients. All patients attending for the first time 10 Italian HCs over a 3-month period were screened for migraine. Migraine patients underwent a structured direct interview about previous migraine diagnosis, comorbidity, headache treatments and their side-effects and healthcare utilization for migraine. Patient satisfaction with their usual therapy for the migraine attack was evaluated with the Migraine-Assessment of Current Therapy (ACT) questionnaire. The quality of life of migraine patients was assessed by mean of Short Form (SF)-12 and Migraine-Specific Quality of life (MSQ) version 2.1 questionnaires. Of the 2675 patients who attended HCs for the first time during the study period, 71% received a diagnosis of migraine and the first 953 subjects completed the study out of 1025 patients enrolled. Only 26.8% of migraine patients had a previous diagnosis of migraine; 62.4% of them visited their general practitioner (GP) in the last year, 38.2% saw a specialist for headache, 23% attended an Emergency Department and 4.5% were admitted to hospital for migraine; 82.8% of patients used non-specific drugs for migraine attacks, whereas 17.2% used triptans and only 4.8% used a preventive migraine medication. Triptans were used by 46.4% of patients with a previous diagnosis of migraine. About 80% of migraine patients took over-the-counter medications. The Migraine-ACT revealed that 60% of patients needed a change in their treatment of migraine attacks, 85% of whom took non-specific drugs. Both the MSQ version 2.1 and the SF-12 questionnaires indicated a poor quality of life of most patients. Migraine represents the prevalent headache diagnosis in Italian HCs. Migraine is still underdiagnosed in Italy and migraine patients receive a suboptimal medical approach in our country, despite the healthcare utilization of migraine subjects being noteworthy. A cooperative network involving GPs, neurologists and headache specialists is strongly desirable in order to improve long-term migraine management in Italy." }, { "pmid": "22360745", "title": "Self-medication of regular headache: a community pharmacy-based survey.", "abstract": "BACKGROUND\nThis observational community pharmacy-based study aimed to investigate headache characteristics and medication use of persons with regular headache presenting for self-medication.\n\n\nMETHODS\nParticipants (n = 1205) completed (i) a questionnaire to assess current headache medication and previous physician diagnosis, (ii) the ID Migraine Screener (ID-M), and (iii) the Migraine Disability Assessment questionnaire.\n\n\nRESULTS\nForty-four percentage of the study population (n = 528) did not have a physician diagnosis of their headache, and 225 of them (225/528, 42.6%) were found to be ID-M positive. The most commonly used acute headache drugs were paracetamol (used by 62% of the study population), NSAIDs (39%), and combination analgesics (36%). 
Only 12% of patients physician-diagnosed with migraine used prophylactic migraine medication, and 25% used triptans. About 24% of our sample (n = 292) chronically overused acute medication, which was combination analgesic overuse (n = 166), simple analgesic overuse (n = 130), triptan overuse (n = 19), ergot overuse (n = 6), and opioid overuse (n = 5). Only 14.5% was ever advised to limit intake frequency of acute headache treatments.\n\n\nCONCLUSIONS\nThis study identified underdiagnosis of migraine, low use of migraine prophylaxis and triptans, and high prevalence of medication overuse amongst subjects seeking self-medication for regular headache. Community pharmacists have a strategic position in education and referral of these self-medicating headache patients." }, { "pmid": "19402567", "title": "Diagnostic and therapeutic trajectory of cluster headache patients in Flanders.", "abstract": "OBJECTIVE\nA fraction of cluster headache (CH) patients face diagnostic delay, misdiagnosis, undertreatment and mismanagement. Specific data for Flanders are warranted.\n\n\nMETHODS\nData on CH characteristics, diagnostic process and treatment history were gathered using a self-administered questionnaire with 90 items in CH patients that presented to 4 neurology outpatient clinics.\n\n\nRESULTS\nData for 85 patients (77 men) with a mean age of 44 years (range 23-69) were analysed. 79% suffered from episodic CH and 21% from chronic CH. A mean diagnostic delay of 44 months was reported. 31% of patients had to wait more than 4 years for the CH diagnosis. 52% of patients consulted at least 3 physicians prior to CH diagnosis. Most common misdiagnoses were migraine (45%), sinusitis (23%), tooth/jaw problems (23%), tension-type headache (16%) and trigeminal neuralgia (16%). A significant percentage of patients had never received access to injectable sumatriptan (26%) or oxygen (31%). Most prescribed preventative drugs after the CH diagnosis were verapamil (82%), lithium (35%), methysergide (31%) and topiramate (22%). Despite the CH diagnosis, ineffective preventatives were still used in some, including propranolol (12%), amitriptyline (9%) and carbamazepine (12%). 31% of patients had undergone invasive therapy prior to CH diagnosis, including dental procedures (21%) and sinus surgery (10%).\n\n\nCONCLUSION\nDespite the obvious methodological limitations of this study, the need for better medical education on CH is evident to optimize CH management in Flanders." }, { "pmid": "12876249", "title": "Features involved in the diagnostic delay of cluster headache.", "abstract": "BACKGROUND\nCluster headache (CH) is a comparatively rare, very severe primary headache. Although circumscript and recognisable criteria are available, the diagnosis is often missed or delayed. Besides, while adequate and evidence based treatment is available in diagnosed cases, CH seems to be poorly managed. The authors performed a nationwide survey among CH patients, and looked for factors involved in the diagnostic delay.\n\n\nMETHODS\nThe authors performed a nationwide mailing to all Dutch general practitioners (about 5800), and neurologists (about 560) and invited them to refer patients in whom the diagnosis CH was made or considered. Patients could also apply via the Dutch Headache Patients Society. A variety of clinical characteristics were assessed by means of questionnaires. 
Specifically, patients were asked about the time between their first episode and the diagnosis.\n\n\nRESULTS\nThe IHS criteria for CH were met by 1429 of 2001 responders, and 1163 of these filled in an extended questionnaire. The male to female ratio was 3.7:1. Mean age at onset was 32 (SD 14) years. Seventy three per cent had episodic CH, 21% had chronic CH, and in 6% the periodicity was undetermined. The time between the first episode and the diagnosis ranged from 1 week to 48 years (median 3 years): 34% had consulted a dentist and 33% an ENT specialist before the diagnosis was established. Among factors that increased the diagnostic delay were the presence of photophobia or phonophobia, nausea, an episodic attack pattern and a low age at onset (p<0.01). Sex or presence of restlessness during episodes did not influence the diagnostic delay.\n\n\nCONCLUSION\nCH remains unrecognised or misdiagnosed in many cases for many years. Photophobia or phonophobia and nausea were in part responsible for this delay, and should be recognised as part of the clinical spectrum of CH. Many patients were first seen by a dentist or ENT specialist for their CH episodes, so more attention should be paid to educate first line physicians to recognise CH, to improve the diagnostic process and so to expose patients to earlier and better treatment of CH." }, { "pmid": "25178541", "title": "Diagnostic and therapeutic errors in cluster headache: a hospital-based study.", "abstract": "BACKGROUND\nCluster headache (CH) is a severe, disabling form of headache. Even though CH has a typical clinical picture it seems that its diagnosis is often missed or delayed in clinical practice. CH patients may thus face: misdiagnosis, unnecessary investigations and delays in accessing adequate treatment. This study was conducted to investigate the occurrence of diagnostic and therapeutic errors with a view to improving the clinical and instrumental work-up in affected patients.\n\n\nMETHODS\nOur study comprised 144 episodic CH patients: 116 from Italy and 28 from Eastern European countries (Moldova, Ukraine, Bulgaria). One hundred six patients (73.6%) were examined personally and 38 (26.4%) were evaluated through telephone interviews conducted by headache specialists using an ad hoc questionnaire developed by the authors.\n\n\nRESULTS\nThe sample was predominantly male (M:F ratio 2.79:1) and had a mean age of 42.4 ± 9.8 years; approximately 76% of the patients had already consulted a physician about their CH at the onset of the disease. The mean interval between onset of the disease and first consultation at a headache center was 4.1 ± 5.6 years. The patients had consulted different specialists prior to receiving their CH diagnosis: neurologists (49%), primary care physicians (35%), ENT specialists (10%), dentists (3%), etc. Misdiagnoses at first consultation were recorded in 77% of the cases: trigeminal neuralgia (22%), migraine without aura (19%), sinusitis (15%), etc. The average \"diagnostic delay\" was 5.3 ± 6.4 years and the condition was diagnosed approximately (\"doctor delay\": one year). Instrumental and laboratory investigations were carried out in 93% of the patients prior to diagnosis of CH. Some of the patients had never received abortive or preventive medications, either before or after diagnosis. 
Medical prescription compliance: 88% of the cases.\n\n\nCONCLUSIONS\nOur results emphasize the need to improve specialist education in this field in order to improve recognition of the clinical picture of CH and increase knowledge of the proper medical treatments for de novo CH. Continuous medical education on CH should target general neurologists, primary care physicians, ENT specialists and dentists. A study on a larger population of CH patients may further improve error-avoidance strategies." }, { "pmid": "12654956", "title": "Premonitory symptoms in migraine: an electronic diary study.", "abstract": "BACKGROUND\nMigraine is frequently associated with nonheadache symptoms before, during, and after the headache. Premonitory symptoms occurring before the attack have not been rigorously studied. Should these symptoms accurately predict headache, there are considerable implications for the pathophysiology and management of migraine.\n\n\nMETHODS\nElectronic diaries were used in a 3-month multicenter study to record nonheadache symptoms before, during, and after migraine. The authors recruited subjects who reported nonheadache symptoms in at least two of three attacks that they believed predicted headache. Symptoms were entered in the diaries by patient initiation and through prompted entries at random times daily. Entries could not be altered retrospectively. Data recorded included nonheadache symptoms occurring during all three phases of the migraine, prediction of the attack from premonitory symptoms, general state of health, and action taken to prevent the headache.\n\n\nRESULTS\nOne hundred twenty patients were recruited: 97 provided usable data. Patients correctly predicted migraine headaches from 72% of diary entries with premonitory symptoms. A range of cognitive and physical symptoms was reported at a similar rate through all three phases of the migraine. The most common premonitory symptoms were feeling tired and weary (72% of attacks with warning features), having difficulty concentrating (51%), and a stiff neck (50%). Subjects who functioned poorly in the premonitory phase were the most likely to correctly predict headache.\n\n\nCONCLUSIONS\nUsing an electronic diary system, the authors show that migraineurs who report premonitory symptoms can accurately predict the full-blown headache." }, { "pmid": "16886925", "title": "Diaries and calendars for migraine. A review.", "abstract": "Headache is one of the most common types of pain and, in the absence of biological markers, headache diagnosis depends only on information obtained from clinical interviews and physical and neurological examinations. Headache diaries make it possible to record prospectively the characteristics of every attack and the use of headache calendars is indicated for evaluating the time pattern of headache, identifying aggravating factors and evaluating the efficacy of preventive treatment. This may reduce the recall bias and increase accuracy in the description. The use of diagnostic headache diaries does have some limitations because the patient's general acceptance is still limited and some subjects are not able to fill in a diary. In this review, we considered diaries and calendars especially designed for migraine and, in particular, we aimed at: (i) determining what instruments are available in clinical practice for diagnosis and follow-up of treatments; and (ii) describing the tools that have been developed for research and their main applications in the headache field. 
In addition to the literature review, we added two paragraphs concerning the authors' experience of the use of diaries and calendars in headache centres and their proposals for future areas of research." }, { "pmid": "22842873", "title": "An electronic diary on a palm device for headache monitoring: a preliminary experience.", "abstract": "Patients suffering from headache are usually asked to use charts to allow monitoring of their disease. These diaries, providing they are regularly filled in, become crucial in the diagnosis and management of headache disorders because they provide further information on attack frequency and temporal pattern, drug intake, trigger factors, and short-/long-term responses to treatment. Electronic tools could facilitate diary monitoring and thus the management of headaches. Medication overuse headache (MOH) is a chronic and disabling condition that can be treated by withdrawing the overused drug(s) and adopting specific approaches that focus on the development of a close doctor-patient relationship in the post-withdrawal phase. Although the headache diary is, in this context, an essential tool for the constant, reliable monitoring of these patients to prevent relapses, very little is known about the applicability of electronic diaries in MOH patients. The purpose of this study was to evaluate the acceptability of and patient compliance with an electronic headache diary (palm device) as compared with a traditional diary chart in a group of headache inpatients with MOH. A palm diary device, developed in accordance with the ICHD-II criteria, was given to 85 MOH inpatients during the detoxification phase. On the first day of hospitalization, the patients were instructed in the use of the diary and were then required to fill it in daily for the following 7 days. Data on the patients' opinions on the electronic diary and the instructions given, its screen and layout, as well as its convenience and ease of use, in comparison with the traditional paper version, were collected using a numerical rating scale. A total of 504 days with headache were recorded in both the electronic and the traditional headache diaries simultaneously. The level of patient compliance was good. The patients appreciated the electronic headache diary, deeming it easy to understand and to use (fill in); most of the patients rated the palm device handier than the traditional paper version." }, { "pmid": "25834725", "title": "Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.", "abstract": "As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. 
To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. Roadblocks to this vision of integration and interoperability include ethical, legal, and logistical concerns. Ensuring data security and protection of patient rights while simultaneously facilitating standardization is paramount to maintaining public support. The capabilities of supercomputing need to be applied strategically. A standardized, methodological implementation must be applied to developed artificial intelligence systems with the ability to integrate data and information into clinically relevant knowledge. Ultimately, the integration of bioinformatics and clinical data in a clinical decision support system promises precision medicine and cost effective and personalized patient care." }, { "pmid": "29036404", "title": "The value of prior knowledge in machine learning of complex network systems.", "abstract": "MOTIVATION\nOur overall goal is to develop machine-learning approaches based on genomics and other relevant accessible information for use in predicting how a patient will respond to a given proposed drug or treatment. Given the complexity of this problem, we begin by developing, testing and analyzing learning methods using data from simulated systems, which allows us access to a known ground truth. We examine the benefits of using prior system knowledge and investigate how learning accuracy depends on various system parameters as well as the amount of training data available.\n\n\nRESULTS\nThe simulations are based on Boolean networks-directed graphs with 0/1 node states and logical node update rules-which are the simplest computational systems that can mimic the dynamic behavior of cellular systems. Boolean networks can be generated and simulated at scale, have complex yet cyclical dynamics and as such provide a useful framework for developing machine-learning algorithms for modular and hierarchical networks such as biological systems in general and cancer in particular. We demonstrate that utilizing prior knowledge (in the form of network connectivity information), without detailed state equations, greatly increases the power of machine-learning algorithms to predict network steady-state node values ('phenotypes') and perturbation responses ('drug effects').\n\n\nAVAILABILITY AND IMPLEMENTATION\nLinks to codes and datasets here: https://gray.mgh.harvard.edu/people-directory/71-david-craft-phd.\n\n\nCONTACT\[email protected].\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." 
}, { "pmid": "26306246", "title": "Leveraging Expert Knowledge to Improve Machine-Learned Decision Support Systems.", "abstract": "While the use of machine learning methods in clinical decision support has great potential for improving patient care, acquiring standardized, complete, and sufficient training data presents a major challenge for methods relying exclusively on machine learning techniques. Domain experts possess knowledge that can address these challenges and guide model development. We present Advice-Based-Learning (ABLe), a framework for incorporating expert clinical knowledge into machine learning models, and show results for an example task: estimating the probability of malignancy following a non-definitive breast core needle biopsy. By applying ABLe to this task, we demonstrate a statistically significant improvement in specificity (24.0% with p=0.004) without missing a single malignancy." } ]
BMC Medical Informatics and Decision Making
30424756
PMC6234631
10.1186/s12911-018-0682-y
X-search: an open access interface for cross-cohort exploration of the National Sleep Research Resource
Background: The National Sleep Research Resource (NSRR) is a large-scale, openly shared data repository of de-identified, highly curated clinical sleep data from multiple NIH-funded epidemiological studies. Although many data repositories allow users to browse their content, few support fine-grained, cross-cohort query and exploration at the study-subject level. We introduce a cross-cohort query and exploration system, called X-search, to enable researchers to query patient cohort counts across a growing number of completed, NIH-funded studies in NSRR and to explore the feasibility or likelihood of reusing the data for research studies. Methods: X-search is designed as a general framework with two loosely coupled components: a semantically annotated data repository and a cross-cohort exploration engine. The semantically annotated data repository comprises a canonical data dictionary, data sources each with their own data dictionary, and mappings between each individual data dictionary and the canonical data dictionary. The cross-cohort exploration engine consists of five modules: query builder, graphical exploration, case-control exploration, query translation, and query execution. The canonical data dictionary serves as the unified metadata that drives the visual exploration interfaces and facilitates query translation through the mappings. Results: X-search is publicly available at https://www.x-search.net/ with nine NSRR datasets comprising over 26,000 unique subjects. The canonical data dictionary contains over 900 common data elements across the datasets. X-search has received over 1800 cross-cohort queries from users in 16 countries. Conclusions: X-search provides a powerful cross-cohort exploration interface for querying and exploring heterogeneous datasets in the NSRR data repository, enabling researchers to evaluate the feasibility of potential research studies and to generate hypotheses using the NSRR data.
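As a rough illustration of the two-component design described in the abstract, the sketch below shows how a canonical data dictionary plus per-dataset mappings can drive query translation and per-cohort counting. The element names, dataset labels, column mappings, and query format are hypothetical stand-ins for illustration only; they are not the actual NSRR data dictionaries or the X-search implementation.

```python
# Hypothetical sketch of X-search-style cross-cohort query translation.
# Canonical elements, dataset names, and column mappings are invented for
# illustration; they are not the real NSRR data dictionaries.

# Canonical data dictionary: one entry per common data element.
CANONICAL = {
    "age": {"type": "numeric"},
    "gender": {"type": "categorical", "values": ["male", "female"]},
    "ahi": {"type": "numeric"},  # apnea-hypopnea index
}

# Mappings from each dataset's local data dictionary to the canonical one.
MAPPINGS = {
    "dataset_a": {"age": "age_visit1", "gender": "sex", "ahi": "ahi_total"},
    "dataset_b": {"age": "age_years", "gender": "gender", "ahi": "ah_index"},
}

def translate(canonical_query, dataset):
    """Rewrite a canonical query into the dataset's local column names."""
    mapping = MAPPINGS[dataset]
    return [(mapping[elem], op, value) for elem, op, value in canonical_query]

def execute(local_query, rows):
    """Count subjects (rows) satisfying every (column, op, value) condition."""
    ops = {">=": lambda a, b: a >= b, "<": lambda a, b: a < b,
           "==": lambda a, b: a == b}
    return sum(
        all(ops[op](row[col], val) for col, op, val in local_query)
        for row in rows
    )

# A query built in the visual query builder: subjects aged >= 60 with AHI >= 15.
query = [("age", ">=", 60), ("ahi", ">=", 15)]

# Toy per-dataset records keyed by their *local* variable names.
data = {
    "dataset_a": [{"age_visit1": 63, "sex": "male", "ahi_total": 22.1},
                  {"age_visit1": 48, "sex": "female", "ahi_total": 9.4}],
    "dataset_b": [{"age_years": 7, "gender": "male", "ah_index": 3.2}],
}

for ds, rows in data.items():
    print(ds, execute(translate(query, ds), rows))  # per-cohort counts
```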
Comparison to related work
To support fast generation of hypotheses and assessment of the feasibility of research studies, various cohort discovery tools have been developed to facilitate the identification of potential research subjects satisfying certain characteristics.
Murphy et al. [13] developed a cohort selection (or counting) system for the Informatics for Integrating Biology and the Bedside (i2b2) project, which has been widely adopted for querying the count of eligible patients in a single clinical data repository. To support patient cohort identification from multiple data sources, Weber et al. [12] developed the Shared Health Research Information Network (SHRINE) on top of i2b2. SHRINE requires the underlying data sources to share the same i2b2-based data structure. Distinct from SHRINE, our X-search was designed to query multiple data sources with heterogeneous data structures.
Bache et al. [15] defined and validated an adaptable architecture (we refer to it as Bache's architecture) for identifying patient cohorts from multiple heterogeneous data sources. Bache's architecture supports multiple data sources with heterogeneous data structures and handles the heterogeneity in the query translation step. Our X-search differs in that it handles the data heterogeneity in the data loading step, which spares users the wait for query translation at query time.
Zhang et al. [14] designed and implemented VISAGE (VISual AGgregator and Explorer), a query interface for querying patient cohorts. X-search shares a similar visual interface design with VISAGE (e.g., checkboxes for categorical variables and slider bars for numerical variables), but differs from VISAGE in that it adopts a data warehouse approach that harmonizes data sources before querying, rather than a federated approach that queries the data sources directly.
In addition, the above-mentioned systems were designed for private use in clinical settings, whereas X-search was designed for free, open public use, with the aim of making data elements more findable in data-sharing repositories. To the best of our knowledge, X-search is the first cross-cohort exploration system that is open to public access. X-search also provides more data exploration functionalities, including graphical exploration and case-control exploration.
Another related work is the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) [22] for representing healthcare data from diverse sources in a standardized way; it is open source and maintained by an international collaborative, the Observational Health Data Sciences and Informatics (OHDSI) program [23]. The OMOP CDM standardizes data structures and common vocabularies (e.g., SNOMED CT, ICD9CM, RxNorm) across disparate sources such as electronic health records, administrative claims, and clinical data. A natural question is whether the OMOP CDM could be directly used to model the NSRR datasets. However, significant effort would be needed to transform the NSRR datasets to use standardized vocabularies, and a direct transformation may not exist because of the fine-grained, sleep-related data elements.
It would be interesting to explore the generalizability of the OMOP CDM using the NSRR datasets.
There are existing tools for standardizing and harmonizing data elements for clinical research studies, such as eleMAP [24] and D2Refine [25], which enable researchers to harmonize local data elements to existing metadata and terminology standards such as the caDSR (Cancer Data Standards Registry and Repository) [26] and the NCI Thesaurus [27]. Such tools may be useful for mapping NSRR data elements to existing standards.
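To make the contrast drawn above between the data warehouse approach (harmonize at load time) and the federated approach (translate at query time) concrete, the following sketch loads two hypothetical sources into a single harmonized table. The source tables, column names, and value recodings are invented for illustration and are not the real NSRR dictionaries; the point is only that once heterogeneity is resolved during loading, a cross-cohort count becomes a plain filter with no query-time translation.

```python
import pandas as pd

# Hypothetical local-to-canonical column mappings and value recodings for two
# sources; real NSRR dictionaries are far larger and sleep-specific.
SOURCES = {
    "cohort_a": {"columns": {"age_yrs": "age", "sex_code": "gender", "ahi3p": "ahi"},
                 "recode": {"gender": {1: "male", 2: "female"}}},
    "cohort_b": {"columns": {"age_at_visit": "age", "gender": "gender", "ahi": "ahi"},
                 "recode": {}},
}

def load_harmonized(raw_tables):
    """ETL step: rename and recode each source onto canonical elements."""
    frames = []
    for name, df in raw_tables.items():
        spec = SOURCES[name]
        out = df.rename(columns=spec["columns"])
        for col, codes in spec["recode"].items():
            out[col] = out[col].map(codes)
        out["dataset"] = name
        frames.append(out[["dataset", "age", "gender", "ahi"]])
    return pd.concat(frames, ignore_index=True)

raw = {
    "cohort_a": pd.DataFrame({"age_yrs": [63, 48], "sex_code": [1, 2], "ahi3p": [22.1, 9.4]}),
    "cohort_b": pd.DataFrame({"age_at_visit": [71], "gender": ["female"], "ahi": [31.0]}),
}

warehouse = load_harmonized(raw)
# Because heterogeneity was resolved at load time, a cross-cohort count is a
# plain filter on the harmonized table -- no query-time translation needed.
counts = warehouse[(warehouse.age >= 60) & (warehouse.ahi >= 15)].groupby("dataset").size()
print(counts)
```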
[ "24482835", "23508736", "26107811", "26978244", "27899622", "19483092", "26048618", "27070134", "29860441", "19567788", "20190053", "24064442", "23750107", "28527878", "19960649", "22037893", "26262116", "28815140" ]
[ { "pmid": "26107811", "title": "Biomedical Data Sharing and Reuse: Attitudes and Practices of Clinical and Scientific Research Staff.", "abstract": "BACKGROUND\nSignificant efforts are underway within the biomedical research community to encourage sharing and reuse of research data in order to enhance research reproducibility and enable scientific discovery. While some technological challenges do exist, many of the barriers to sharing and reuse are social in nature, arising from researchers' concerns about and attitudes toward sharing their data. In addition, clinical and basic science researchers face their own unique sets of challenges to sharing data within their communities. This study investigates these differences in experiences with and perceptions about sharing data, as well as barriers to sharing among clinical and basic science researchers.\n\n\nMETHODS\nClinical and basic science researchers in the Intramural Research Program at the National Institutes of Health were surveyed about their attitudes toward and experiences with sharing and reusing research data. Of 190 respondents to the survey, the 135 respondents who identified themselves as clinical or basic science researchers were included in this analysis. Odds ratio and Fisher's exact tests were the primary methods to examine potential relationships between variables. Worst-case scenario sensitivity tests were conducted when necessary.\n\n\nRESULTS AND DISCUSSION\nWhile most respondents considered data sharing and reuse important to their work, they generally rated their expertise as low. Sharing data directly with other researchers was common, but most respondents did not have experience with uploading data to a repository. A number of significant differences exist between the attitudes and practices of clinical and basic science researchers, including their motivations for sharing, their reasons for not sharing, and the amount of work required to prepare their data.\n\n\nCONCLUSIONS\nEven within the scope of biomedical research, addressing the unique concerns of diverse research communities is important to encouraging researchers to share and reuse data. Efforts at promoting data sharing and reuse should be aimed at solving not only technological problems, but also addressing researchers' concerns about sharing their data. Given the varied practices of individual researchers and research communities, standardizing data practices like data citation and repository upload could make sharing and reuse easier." }, { "pmid": "26978244", "title": "The FAIR Guiding Principles for scientific data management and stewardship.", "abstract": "There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders-representing academia, industry, funding agencies, and scholarly publishers-have come together to design and jointly endorse a concise and measureable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community." 
}, { "pmid": "27899622", "title": "UniProt: the universal protein knowledgebase.", "abstract": "The UniProt knowledgebase is a large resource of protein sequences and associated detailed annotation. The database contains over 60 million sequences, of which over half a million sequences have been curated by experts who critically review experimental and predicted data for each protein. The remainder are automatically annotated based on rule systems that rely on the expert curated knowledge. Since our last update in 2014, we have more than doubled the number of reference proteomes to 5631, giving a greater coverage of taxonomic diversity. We implemented a pipeline to remove redundant highly similar proteomes that were causing excessive redundancy in UniProt. The initial run of this pipeline reduced the number of sequences in UniProt by 47 million. For our users interested in the accessory proteomes, we have made available sets of pan proteome sequences that cover the diversity of sequences for each species that is found in its strains and sub-strains. To help interpretation of genomic variants, we provide tracks of detailed protein information for the major genome browsers. We provide a SPARQL endpoint that allows complex queries of the more than 22 billion triples of data in UniProt (http://sparql.uniprot.org/). UniProt resources can be accessed via the website at http://www.uniprot.org/." }, { "pmid": "19483092", "title": "BioPortal: ontologies and integrated data resources at the click of a mouse.", "abstract": "Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, data annotation, natural-language processing and decision support. BioPortal (http://bioportal.bioontology.org) is an open repository of biomedical ontologies that provides access via Web services and Web browsers to ontologies developed in OWL, RDF, OBO format and Protégé frames. BioPortal functionality includes the ability to browse, search and visualize ontologies. The Web interface also facilitates community-based participation in the evaluation and evolution of ontology content by providing features to add notes to ontology terms, mappings between terms and ontology reviews based on criteria such as usability, domain coverage, quality of content, and documentation and support. BioPortal also enables integrated search of biomedical data resources such as the Gene Expression Omnibus (GEO), ClinicalTrials.gov, and ArrayExpress, through the annotation and indexing of these resources with ontologies in BioPortal. Thus, BioPortal not only provides investigators, clinicians, and developers 'one-stop shopping' to programmatically access biomedical ontologies, but also provides support to integrate data from a variety of biomedical resources." }, { "pmid": "26048618", "title": "OpenfMRI: Open sharing of task fMRI data.", "abstract": "OpenfMRI is a repository for the open sharing of task-based fMRI data. Here we outline its goals, architecture, and current status of the repository, as well as outlining future plans for the project." }, { "pmid": "27070134", "title": "Scaling Up Scientific Discovery in Sleep Medicine: The National Sleep Research Resource.", "abstract": "Professional sleep societies have identified a need for strategic research in multiple areas that may benefit from access to and aggregation of large, multidimensional datasets. 
Technological advances provide opportunities to extract and analyze physiological signals and other biomedical information from datasets of unprecedented size, heterogeneity, and complexity. The National Institutes of Health has implemented a Big Data to Knowledge (BD2K) initiative that aims to develop and disseminate state of the art big data access tools and analytical methods. The National Sleep Research Resource (NSRR) is a new National Heart, Lung, and Blood Institute resource designed to provide big data resources to the sleep research community. The NSRR is a web-based data portal that aggregates, harmonizes, and organizes sleep and clinical data from thousands of individuals studied as part of cohort studies or clinical trials and provides the user a suite of tools to facilitate data exploration and data visualization. Each deidentified study record minimally includes the summary results of an overnight sleep study; annotation files with scored events; the raw physiological signals from the sleep record; and available clinical and physiological data. NSRR is designed to be interoperable with other public data resources such as the Biologic Specimen and Data Repository Information Coordinating Center Demographics (BioLINCC) data and analyzed with methods provided by the Research Resource for Complex Physiological Signals (PhysioNet). This article reviews the key objectives, challenges and operational solutions to addressing big data opportunities for sleep research in the context of the national sleep research agenda. It provides information to facilitate further interactions of the user community with NSRR, a community resource." }, { "pmid": "29860441", "title": "The National Sleep Research Resource: towards a sleep data commons.", "abstract": "Objective\nThe gold standard for diagnosing sleep disorders is polysomnography, which generates extensive data about biophysical changes occurring during sleep. We developed the National Sleep Research Resource (NSRR), a comprehensive system for sharing sleep data. The NSRR embodies elements of a data commons aimed at accelerating research to address critical questions about the impact of sleep disorders on important health outcomes.\n\n\nApproach\nWe used a metadata-guided approach, with a set of common sleep-specific terms enforcing uniform semantic interpretation of data elements across three main components: (1) annotated datasets; (2) user interfaces for accessing data; and (3) computational tools for the analysis of polysomnography recordings. We incorporated the process for managing dataset-specific data use agreements, evidence of Institutional Review Board review, and the corresponding access control in the NSRR web portal. The metadata-guided approach facilitates structural and semantic interoperability, ultimately leading to enhanced data reusability and scientific rigor.\n\n\nResults\nThe authors curated and deposited retrospective data from 10 large, NIH-funded sleep cohort studies, including several from the Trans-Omics for Precision Medicine (TOPMed) program, into the NSRR. The NSRR currently contains data on 26 808 subjects and 31 166 signal files in European Data Format. Launched in April 2014, over 3000 registered users have downloaded over 130 terabytes of data.\n\n\nConclusions\nThe NSRR offers a use case and an example for creating a full-fledged data commons. 
It provides a single point of access to analysis-ready physiological signals from polysomnography obtained from multiple sources, and a wide variety of clinical data to facilitate sleep research." }, { "pmid": "19567788", "title": "The Shared Health Research Information Network (SHRINE): a prototype federated query tool for clinical data repositories.", "abstract": "The authors developed a prototype Shared Health Research Information Network (SHRINE) to identify the technical, regulatory, and political challenges of creating a federated query tool for clinical data repositories. Separate Institutional Review Boards (IRBs) at Harvard's three largest affiliated health centers approved use of their data, and the Harvard Medical School IRB approved building a Query Aggregator Interface that can simultaneously send queries to each hospital and display aggregate counts of the number of matching patients. Our experience creating three local repositories using the open source Informatics for Integrating Biology and the Bedside (i2b2) platform can be used as a road map for other institutions. The authors are actively working with the IRBs and regulatory groups to develop procedures that will ultimately allow investigators to obtain identified patient data and biomaterials through SHRINE. This will guide us in creating a future technical architecture that is scalable to a national level, compliant with ethical guidelines, and protective of the interests of the participating hospitals." }, { "pmid": "20190053", "title": "Serving the enterprise and beyond with informatics for integrating biology and the bedside (i2b2).", "abstract": "Informatics for Integrating Biology and the Bedside (i2b2) is one of seven projects sponsored by the NIH Roadmap National Centers for Biomedical Computing (http://www.ncbcs.org). Its mission is to provide clinical investigators with the tools necessary to integrate medical record and clinical research data in the genomics age, a software suite to construct and integrate the modern clinical research chart. i2b2 software may be used by an enterprise's research community to find sets of interesting patients from electronic patient medical record data, while preserving patient privacy through a query tool interface. Project-specific mini-databases (\"data marts\") can be created from these sets to make highly detailed data available on these specific patients to the investigators on the i2b2 platform, as reviewed and restricted by the Institutional Review Board. The current version of this software has been released into the public domain and is available at the URL: http://www.i2b2.org/software." }, { "pmid": "24064442", "title": "An adaptable architecture for patient cohort identification from diverse data sources.", "abstract": "OBJECTIVE\nWe define and validate an architecture for systems that identify patient cohorts for clinical trials from multiple heterogeneous data sources. This architecture has an explicit query model capable of supporting temporal reasoning and expressing eligibility criteria independently of the representation of the data used to evaluate them.\n\n\nMETHOD\nThe architecture has the key feature that queries defined according to the query model are both pre and post-processed and this is used to address both structural and semantic heterogeneity. The process of extracting the relevant clinical facts is separated from the process of reasoning about them. 
A specific instance of the query model is then defined and implemented.\n\n\nRESULTS\nWe show that the specific instance of the query model has wide applicability. We then describe how it is used to access three diverse data warehouses to determine patient counts.\n\n\nDISCUSSION\nAlthough the proposed architecture requires greater effort to implement the query model than would be the case for using just SQL and accessing a data-based management system directly, this effort is justified because it supports both temporal reasoning and heterogeneous data sources. The query model only needs to be implemented once no matter how many data sources are accessed. Each additional source requires only the implementation of a lightweight adaptor.\n\n\nCONCLUSIONS\nThe architecture has been used to implement a specific query model that can express complex eligibility criteria and access three diverse data warehouses thus demonstrating the feasibility of this approach in dealing with temporal reasoning and data heterogeneity." }, { "pmid": "23750107", "title": "Hypertension and obstructive sleep apnea.", "abstract": "Obstructive sleep apnea (OSA) is increasingly being recognized as a major health burden with strong focus on the associated cardiovascular risk. Studies from the last two decades have provided strong evidence for a causal role of OSA in the development of systemic hypertension. The acute physiological changes that occur during apnea promote nocturnal hypertension and may lead to the development of sustained daytime hypertension via the pathways of sympathetic activation, inflammation, oxidative stress, and endothelial dysfunction. This review will focus on the acute hemodynamic disturbances and associated intermittent hypoxia that characterize OSA and the potential pathophysiological mechanisms responsible for the development of hypertension in OSA. In addition the epidemiology of OSA and hypertension, as well as the role of treatment of OSA, in improving blood pressure control will be examined." }, { "pmid": "28527878", "title": "Obstructive Sleep Apnea and Diabetes: A State of the Art Review.", "abstract": "OSA is a chronic treatable sleep disorder and a frequent comorbidity in patients with type 2 diabetes. Cardinal features of OSA, including intermittent hypoxemia and sleep fragmentation, have been linked to abnormal glucose metabolism in laboratory-based experiments. OSA has also been linked to the development of incident type 2 diabetes. The relationship between OSA and type 2 diabetes may be bidirectional in nature given that diabetic neuropathy can affect central control of respiration and upper airway neural reflexes, promoting sleep-disordered breathing. Despite the strong association between OSA and type 2 diabetes, the effect of treatment with CPAP on markers of glucose metabolism has been conflicting. Variability with CPAP adherence may be one of the key factors behind these conflicting results. Finally, accumulating data suggest an association between OSA and type 1 diabetes as well as gestational diabetes. This review explores the role of OSA in the pathogenesis of type 2 diabetes, glucose metabolism dysregulation, and the impact of OSA treatment on glucose metabolism. The association between OSA and diabetic complications as well as gestational diabetes is also reviewed." 
}, { "pmid": "19960649", "title": "Clinical guideline for the evaluation, management and long-term care of obstructive sleep apnea in adults.", "abstract": "BACKGROUND\nObstructive sleep apnea (OSA) is a common chronic disorder that often requires lifelong care. Available practice parameters provide evidence-based recommendations for addressing aspects of care.\n\n\nOBJECTIVE\nThis guideline is designed to assist primary care providers as well as sleep medicine specialists, surgeons, and dentists who care for patients with OSA by providing a comprehensive strategy for the evaluation, management and long-term care of adult patients with OSA.\n\n\nMETHODS\nThe Adult OSA Task Force of the American Academy of Sleep Medicine (AASM) was assembled to produce a clinical guideline from a review of existing practice parameters and available literature. All existing evidence-based AASM practice parameters relevant to the evaluation and management of OSA in adults were incorporated into this guideline. For areas not covered by the practice parameters, the task force performed a literature review and made consensus recommendations using a modified nominal group technique.\n\n\nRECOMMENDATIONS\nQuestions regarding OSA should be incorporated into routine health evaluations. Suspicion of OSA should trigger a comprehensive sleep evaluation. The diagnostic strategy includes a sleep-oriented history and physical examination, objective testing, and education of the patient. The presence or absence and severity of OSA must be determined before initiating treatment in order to identify those patients at risk of developing the complications of sleep apnea, guide selection of appropriate treatment, and to provide a baseline to establish the effectiveness of subsequent treatment. Once the diagnosis is established, the patient should be included in deciding an appropriate treatment strategy that may include positive airway pressure devices, oral appliances, behavioral treatments, surgery, and/or adjunctive treatments. OSA should be approached as a chronic disease requiring long-term, multidisciplinary management. For each treatment option, appropriate outcome measures and long-term follow-up are described." }, { "pmid": "22037893", "title": "Validation of a common data model for active safety surveillance research.", "abstract": "OBJECTIVE\nSystematic analysis of observational medical databases for active safety surveillance is hindered by the variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill suited to support their analytic approaches. Further, some models do not facilitate the computations required for systematic analysis across many interventions and outcomes for large datasets. Translating the data from these idiosyncratic data models to a common data model (CDM) could facilitate both the analysts' understanding and the suitability for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM has to faithfully represent the source observational database. 
Before beginning to use the Observational Medical Outcomes Partnership (OMOP) CDM and a related dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example.\n\n\nVALIDATION BY EXAMPLE\nTo validate the OMOP CDM, the model was instantiated into a relational database, data from 10 different observational healthcare databases were loaded into separate instances, a comprehensive array of analytic methods that operate on the data model was created, and these methods were executed against the databases to measure performance.\n\n\nCONCLUSION\nThere was acceptable representation of the data from 10 observational databases in the OMOP CDM using the standardized terminologies selected, and a range of analytic methods was developed and executed with sufficient performance to be useful for active safety surveillance." }, { "pmid": "26262116", "title": "Observational Health Data Sciences and Informatics (OHDSI): Opportunities for Observational Researchers.", "abstract": "The vision of creating accessible, reliable clinical evidence by accessing the clincial experience of hundreds of millions of patients across the globe is a reality. Observational Health Data Sciences and Informatics (OHDSI) has built on learnings from the Observational Medical Outcomes Partnership to turn methods research and insights into a suite of applications and exploration tools that move the field closer to the ultimate goal of generating evidence about all aspects of healthcare to serve the needs of patients, clinicians and all other decision-makers around the world." }, { "pmid": "28815140", "title": "D2Refine: A Platform for Clinical Research Study Data Element Harmonization and Standardization.", "abstract": "In this paper, we present a platform known as D2Refine for facilitating clinical research study data element harmonization and standardization. D2Refine is developed on top of OpenRefine (formerly Google Refine) and leverages simple interface and extensible architecture of OpenRefine. D2Refine empowers the tabular representation of clinical research study data element definitions by allowing it to be easily organized and standardized using reconciliation services. D2Refine builds on valuable built-in data transformation features of OpenRefine to bring source data sets to a finer state quickly. We implemented the reconciliation services and search capabilities based on the standard Common Terminology Services 2 (CTS2) and the serialization of clinical research study data element definitions into standard representation using clinical information modeling technology for semantic interoperability. We demonstrate that D2Refine is a useful and promising platform that would help address the emergent needs for clinical research study data element harmonization and standardization." } ]
Digital Health
30479828
PMC6240967
10.1177/2055207618811555
Social cognitive determinants of exercise behavior in the context of behavior modeling: a mixed method approach
Research has shown that persuasive technologies aimed at behavior change will be more effective if behavioral determinants are targeted. However, research on the determinants of bodyweight exercise performance in the context of behavior modeling in fitness apps is scarce. To bridge this gap, we conducted an empirical study among 659 participants resident in North America using social cognitive theory as a framework to uncover the determinants of the performance of bodyweight exercise behavior. To contextualize our study, we modeled, in a hypothetical context, two popular bodyweight exercise behaviors – push ups and squats – featured in most fitness apps on the market using a virtual coach (aka behavior model). Our social cognitive model shows that users’ perceived self-efficacy (βT = 0.23, p < 0.001) and perceived social support (βT = 0.23, p < 0.001) are the strongest determinants of bodyweight exercise behavior, followed by outcome expectation (βT = 0.11, p < 0.05). However, users’ perceived self-regulation (βT = –0.07, p = n.s.) turns out to be a non-determinant of bodyweight exercise behavior. Comparatively, our model shows that perceived self-efficacy has a stronger direct effect on exercise behavior for men (β = 0.31, p < 0.001) than for women (β = 0.10, p = n.s.). In contrast, perceived social support has a stronger direct effect on exercise behavior for women (β = 0.15, p < 0.05) than for men (β = −0.01, p = n.s.). Based on these findings and qualitative analysis of participants’ comments, we provide a set of guidelines for the design of persuasive technologies for promoting regular exercise behavior.
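The βT values reported above are total effects; in a structural equation model a total effect decomposes into a direct path plus indirect paths through mediators. As a hedged illustration (the abstract does not spell out the exact mediation structure), if self-efficacy (SE) influences exercise behavior (EB) directly and also through self-regulation (SR) and outcome expectation (OE), the standardized total effect is:

```latex
\beta_{T}^{SE \to EB}
  = \underbrace{\beta_{SE \to EB}}_{\text{direct}}
  + \underbrace{\beta_{SE \to SR}\,\beta_{SR \to EB}
  + \beta_{SE \to OE}\,\beta_{OE \to EB}}_{\text{indirect via mediators}}
```

This is why the pooled total effect of self-efficacy (βT = 0.23) can differ from the gender-specific direct effects reported separately for men (β = 0.31) and women (β = 0.10).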
Related work
A number of studies have examined the social cognitive model of behavior using SEM analysis. We provide a cross-section of these studies in the domain of physical activity. Rovniak et al.24 presented a social cognitive model of the physical activity of college students from Virginia Polytechnic Institute and State University in the United States (USA). In their study, they measured self-efficacy, self-regulation, social support and outcome expectation at baseline and used them to predict physical activity 8 weeks later. They found that self-efficacy was the strongest determinant of physical activity, followed by self-regulation and social support. Similarly, Oyibo and colleagues30,31 modeled the physical activity of two different college student populations in Canada and Nigeria using the SCT as a theoretical framework. The authors measured all four main determinants of physical activity and used them to predict participants' reported level of physical activity in the past 7 days. They found that self-efficacy and self-regulation had the strongest total effects on physical activity in the Canadian group, while social support and body image had the strongest total effects in the Nigerian group.
Resnick32 presented a social cognitive model of the current exercise of older adults living in a continuing care retirement community in the USA. The author found that self-efficacy, outcome expectation and prior exercise were among the strongest determinants of current exercise. Similarly, Anderson et al.33 modeled the physical activity of adults from 14 southwestern Virginia churches in the USA using the SCT as a theoretical framework. They found that self-regulation, self-efficacy and social support were the strongest determinants of the adults' physical activity. Moreover, Anderson-Bill et al.34 investigated the determinants of physical activity among web-health users resident in the USA and Canada. Their model was based on the SCT and focused on walking as the target behavior. Specifically, they used pedometers to track participants' daily steps and minutes walked over a 7-day period. They found that, overall, self-efficacy, self-regulation and social support were the determinants of participants' physical activity, with self-efficacy being the strongest. Finally, in the context of behavior modeling in a fitness app, Oyibo et al.20 investigated the perceived effect of behavior modeling on the SCT factors. They found that the perceived persuasiveness of the exercise behavior model design had a significant direct effect on self-regulation, outcome expectation and self-efficacy, with a stronger effect on the first two SCT factors than on the third.
The major limitation of the above studies, apart from the last one reviewed, is that most of them used convenience samples, especially student populations, which may limit generalization to more diverse populations.24 Moreover, none of the previous studies investigated the SCT determinants of exercise behavior in the context of behavior modeling (in a fitness app), even though modeling is one of the main sources of self-efficacy35 and of the other core social cognitive factors.20 In addition, most previous studies did not use a mixed-method approach comprising quantitative and qualitative analyses. Our study aims to fill this gap by using a mixed-method approach and providing evidence-based design guidelines for developing more effective fitness apps in the future.
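For readers unfamiliar with how such path coefficients are obtained, the sketch below estimates direct, indirect, and total effects with simple regression-based path analysis using statsmodels. It is a simplified stand-in for the full structural equation modeling used in the studies reviewed above, and the column names and the assumed mediation structure (social support → self-efficacy → exercise) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

def standardize(df):
    """Z-score all columns so regression coefficients are standardized paths."""
    return (df - df.mean()) / df.std()

def fit_paths(df):
    """Toy SCT path model: social support -> self-efficacy -> exercise,
    with self-efficacy, self-regulation and outcome expectation also
    predicting exercise directly."""
    z = standardize(df)
    structural = smf.ols(
        "exercise ~ self_efficacy + self_regulation + outcome_expectation + social_support",
        data=z).fit()
    mediation = smf.ols("self_efficacy ~ social_support", data=z).fit()
    direct = structural.params["social_support"]
    indirect = mediation.params["social_support"] * structural.params["self_efficacy"]
    return {"direct": direct, "indirect": indirect, "total": direct + indirect}

# Usage with a survey dataframe whose columns match the hypothetical names above:
# effects = fit_paths(pd.read_csv("sct_survey.csv"))
# print(effects)
```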
[ "21645392", "24842742", "18624603", "12237980", "12054320", "16846326", "19181688", "15090118" ]
[ { "pmid": "21645392", "title": "Prevalence of physical inactivity and barriers to physical activity among obese attendants at a community health-care center in Karachi, Pakistan.", "abstract": "BACKGROUND\nOverweight and obesity are significant public health problems worldwide with serious health consequences. With increasing urbanization and modernization there has been an increase in prevalence of obesity that is attributed to reduced levels of physical activity (PA). However, little is known about the prevalence of physical inactivity and factors that prohibit physical activity among Pakistani population. This cross-sectional study is aimed at estimating the prevalence of physical inactivity, and determining associated barriers in obese attendants accompanying patients coming to a Community Health Center in Karachi, Pakistan.\n\n\nFINDINGS\nPA was assessed by using international physical activity questionnaire (IPAQ). Barriers to PA were also assessed in inactive obese attendants. A pre-tested questionnaire was used to collect data from a total of 350 obese attendants. Among 350 study participants 254 (72.6%) were found to be physically inactive (95% CI: 68.0%, 77.2%). Multivariable logistic regression analysis indicated that age greater than 33 years, BMI greater than 33 kg/m2 and family history of obesity were independently and significantly associated with physical inactivity. Moreover, there was a significant interaction between family structure and gender; females living in extended families were about twice more likely to be inactive, whereas males from extended families were six times more likely to be inactive relative to females from nuclear families. Lack of information, motivation and skills, spouse & family support, accessibility to places for physical activity, cost effective facilities and time were found to be important barriers to PA.\n\n\nCONCLUSIONS\nConsidering the public health implications of physical inactivity it is essential to promote PA in context of an individual's health and environment. Findings highlight considerable barriers to PA among obese individuals that need to be addressed during counseling sessions with physicians." }, { "pmid": "24842742", "title": "Behavior change techniques in top-ranked mobile apps for physical activity.", "abstract": "BACKGROUND\nMobile applications (apps) have potential for helping people increase their physical activity, but little is known about the behavior change techniques marketed in these apps.\n\n\nPURPOSE\nThe aim of this study was to characterize the behavior change techniques represented in online descriptions of top-ranked apps for physical activity.\n\n\nMETHODS\nTop-ranked apps (n=167) were identified on August 28, 2013, and coded using the Coventry, Aberdeen and London-Revised (CALO-RE) taxonomy of behavior change techniques during the following month. Analyses were conducted during 2013.\n\n\nRESULTS\nMost descriptions of apps incorporated fewer than four behavior change techniques. The most common techniques involved providing instruction on how to perform exercises, modeling how to perform exercises, providing feedback on performance, goal-setting for physical activity, and planning social support/change. A latent class analysis revealed the existence of two types of apps, educational and motivational, based on their configurations of behavior change techniques.\n\n\nCONCLUSIONS\nBehavior change techniques are not widely marketed in contemporary physical activity apps. 
Based on the available descriptions and functions of the observed techniques in contemporary health behavior theories, people may need multiple apps to initiate and maintain behavior change. This audit provides a starting point for scientists, developers, clinicians, and consumers to evaluate and enhance apps in this market." }, { "pmid": "18624603", "title": "A taxonomy of behavior change techniques used in interventions.", "abstract": "OBJECTIVE\nWithout standardized definitions of the techniques included in behavior change interventions, it is difficult to faithfully replicate effective interventions and challenging to identify techniques contributing to effectiveness across interventions. This research aimed to develop and test a theory-linked taxonomy of generally applicable behavior change techniques (BCTs).\n\n\nDESIGN\nTwenty-six BCTs were defined. Two psychologists used a 5-page coding manual to independently judge the presence or absence of each technique in published intervention descriptions and in intervention manuals.\n\n\nRESULTS\nThree systematic reviews yielded 195 published descriptions. Across 78 reliability tests (i.e., 26 techniques applied to 3 reviews), the average kappa per technique was 0.79, with 93% of judgments being agreements. Interventions were found to vary widely in the range and type of techniques used, even when targeting the same behavior among similar participants. The average agreement for intervention manuals was 85%, and a comparison of BCTs identified in 13 manuals and 13 published articles describing the same interventions generated a technique correspondence rate of 74%, with most mismatches (73%) arising from identification of a technique in the manual but not in the article.\n\n\nCONCLUSIONS\nThese findings demonstrate the feasibility of developing standardized definitions of BCTs included in behavioral interventions and highlight problematic variability in the reporting of intervention content." }, { "pmid": "12237980", "title": "Building a practically useful theory of goal setting and task motivation. A 35-year odyssey.", "abstract": "The authors summarize 35 years of empirical research on goal-setting theory. They describe the core findings of the theory, the mechanisms by which goals operate, moderators of goal effects, the relation of goals and satisfaction, and the role of goals as mediators of incentives. The external validity and practical significance of goal-setting theory are explained, and new directions in goal-setting research are discussed. The relationships of goal setting to other theories are described as are the theory's limitations." }, { "pmid": "12054320", "title": "Social cognitive determinants of physical activity in young adults: a prospective structural equation analysis.", "abstract": "This study used a prospective design to test a model of the relation between social cognitive variables and physical activity in a sample of 277 university students. Social support, self-efficacy, outcome expectations, and self-regulation were measured at baseline and used to predict physical activity 8 weeks later. Results of structural equation modeling indicated a good fit of the social cognitive model to the data. Within the model, self-efficacy had the greatest total effect on physical activity, mediated largely by self-regulation, which directly predicted physical activity. Social support indirectly predicted physical activity through its effect on self-efficacy. 
Outcome expectations had a small total effect on physical activity, which did not reach significance. The social cognitive model explained 55% of the variance observed in physical activity." }, { "pmid": "16846326", "title": "Social-cognitive determinants of physical activity: the influence of social support, self-efficacy, outcome expectations, and self-regulation among participants in a church-based health promotion study.", "abstract": "A social-cognitive model of physical activity was tested, using structural equation analysis of data from 999 adults (21% African American; 66% female; 38% inactive) recruited from 14 southwestern Virginia churches participating in the baseline phase of a health promotion study. Within the model, age, race, social support, self-efficacy, and self-regulation contributed to participants' physical activity levels, but outcome expectations did not. Of the social-cognitive variables, self-regulation exerted the strongest effect on physical activity. Independent of self-regulation, self-efficacy had little effect. Social support influenced physical activity as a direct precursor to self-efficacy and self-regulation. The model provided a good fit to the data and explained 46% of the variance in physical activity among the diverse group of adults." }, { "pmid": "19181688", "title": "Assessing outcome expectations in older adults: the multidimensional outcome expectations for exercise scale.", "abstract": "Outcome expectations, an important element of social cognitive theory, have been associated with physical activity in older adults. Yet, the measurement of this construct has often adopted a unidimensional approach. We examined the validity of a theoretically consistent three-factor (physical, social, and self-evaluative) outcome expectations exercise scale in middle-aged and older adults (N = 320; M age = 63.8). Participants completed questionnaires assessing outcome expectations, physical activity, self-efficacy, and health status. Comparisons of the hypothesized factor structure with competing models indicated that a three-factor model provided the best fit for the data. Construct validity was further demonstrated by significant association with physical activity and self-efficacy and differential associations with age and health status. Further evidence of validity and application to social cognitive models of physical activity is warranted." }, { "pmid": "15090118", "title": "Health promotion by social cognitive means.", "abstract": "This article examines health promotion and disease prevention from the perspective of social cognitive theory. This theory posits a multifaceted causal structure in which self-efficacy beliefs operate together with goals, outcome expectations, and perceived environmental impediments and facilitators in the regulation of human motivation, behavior, and well-being. Belief in one's efficacy to exercise control is a common pathway through which psychosocial influences affect health functioning. This core belief affects each of the basic processes of personal change--whether people even consider changing their health habits, whether they mobilize the motivation and perseverance needed to succeed should they do so, their ability to recover from setbacks and relapses, and how well they maintain the habit changes they have achieved. Human health is a social matter, not just an individual one. 
A comprehensive approach to health promotion also requires changing the practices of social systems that have widespread effects on human health." } ]
Frontiers in Neuroscience
30524230
PMC6258738
10.3389/fnins.2018.00857
Low Cost Interconnected Architecture for the Hardware Spiking Neural Networks
A novel low cost interconnected architecture (LCIA) is proposed in this paper, which is an efficient solution for the neuron interconnections of hardware spiking neural networks (SNNs). It is based on an all-to-all connection scheme that takes each pair of input and output nodes of the multi-layer SNN as the source and destination of a connection. The aim is to maintain efficient routing performance with low hardware overhead. A Networks-on-Chip (NoC) router is proposed as the fundamental component of the LCIA, where an effective scheduler is designed to address the traffic challenge caused by irregular spikes. The router can find requests rapidly, make arbitration decisions promptly, and provide equal service to different network traffic requests. Experimental results show that the LCIA can manage the intercommunication of multi-layer neural networks efficiently and has a low hardware overhead, which maintains the scalability of hardware SNNs.
Related Works
In this section, a brief review of various SNN implementations is presented. In particular, current NoC-based interconnection strategies for hardware SNN implementations are discussed, and their suitability for supporting SNN hardware implementations is highlighted.

Summary of Various SNN Implementation Approaches
Various approaches have been explored for SNN implementation, including software, application-specific integrated circuits (ASICs), GPUs, field-programmable gate arrays (FPGAs), and so on. Current software approaches based on traditional von Neumann computing paradigms are too slow for SNN simulation and suffer from limited scalability, as SNN systems are inherently parallel (Furber et al., 2013; Lagorce et al., 2015). Another approach is the GPU-based architecture, which provides a fine-grained parallel architecture and achieves computing acceleration compared to CPU-based solutions; e.g., Fidjeland and Shanahan (2010) and Wang et al. (2011) proposed simulation frameworks for SNNs on the GPU platform. However, the main drawback of this technology is that high-end computers (GPUs included) are generally costly in terms of power consumption (Moctezuma et al., 2015; Kwon et al., 2018). In addition, the limited memory bandwidth constrains the data transfer rate between the GPU and CPU (Nageswaran et al., 2009). These are currently the major drawbacks for realizing large-scale SNN systems. Recently, researchers have attempted to use custom hardware to design SNNs, e.g., ASIC and FPGA devices. For the former, many approaches have been proposed, e.g., the TrueNorth chips (Merolla et al., 2014; Akopyan et al., 2015), a neuromorphic analog chip (Basu et al., 2010), and Neurogrid, a large-scale neural simulator based on a mixed analog-digital multichip system (Benjamin et al., 2014). The main disadvantage of using ASIC devices is the high cost of development and chip manufacturing, as a tiny change would lead to a new development cycle (Pande et al., 2013). For the latter, the ability to reconfigure FPGA logic blocks has attracted researchers to explore the mapping of SNNs to FPGAs (Graas et al., 2004; Upegui et al., 2005; Morgan et al., 2009; Cawley et al., 2011; Ang et al., 2012; Pande et al., 2013). For example, the ENABLE machine, a systolic second-level trigger processor for track finding, was implemented on a Xilinx FPGA device in the approach of Klefenz et al. (1992); it used regular interconnections for the communication between building blocks. A reconfigurable point-to-point interconnect was proposed in the approach of Abdali et al. (2017) to provide a lightweight interconnect system. However, previous work in Harkin et al. (2009) and Carrillo et al. (2012) has highlighted the challenges of supporting the irregular communication patterns of SNNs with such Manhattan-style interconnections. In addition, it has been demonstrated that a bus topology is not scalable for hardware SNNs, as the number of required buses is proportional to the number of neurons (Carrillo et al., 2012). Therefore, it is necessary to look into new full-custom hardware architectures to address the interconnection problems of hardware SNNs.

Current NoC-Based Spiking Neural Network Approaches
In hardware SNNs, the NoC interconnection strategy is used to support the communication requirements of SNNs.
The advantages of using NoCs for SNNs have been discussed in previous works (Schemmel et al., 2008; Harkin et al., 2009; Carrillo et al., 2012, 2013; Painkras et al., 2013; Liu et al., 2015). The following text summarizes the current state-of-the-art NoC-based hardware SNN architectures. The SpiNNaker platform, proposed in Jin et al. (2010), is based on a multiprocessor architecture; it uses ARM968 processor cores as the computational elements and a triangular torus topology to connect the processors. It has been used for the simulation of a cortical microcircuit with ∼80,000 neurons and 0.3 billion synapses (Van Albada et al., 2018). The FACETS system in Schemmel et al. (2008) was based on a 2D torus that provided the connection of several FACETS wafers. Routing architectures based on a two-dimensional (2D) mesh were proposed in the approaches of Harkin et al. (2009), Carrillo et al. (2012), and Liu et al. (2015). Additionally, a hierarchical NoC architecture for hardware SNNs was proposed in the approach of Carrillo et al. (2013), which combined the mesh and star topologies for different layers of the SNNs. Most of these systems used either the baseline or some variation of the well-known mesh topology to connect the neurons together. However, for a large-scale SNN, as the size of the NoC increases, the average communication latency increases due to the large number of indirect connections in the mesh topology (Mohammadi et al., 2015). For instance, when a spike event needs to be forwarded to the neurons in the next layer of the SNN, intermediate nodes are required for the transmission, which increases the delay. In addition, multi-layer SNNs are generally based on fully connected communication; mapping this onto a regular topology leads to a high hardware area overhead for the interconnection fabric, which constrains scalability. Therefore, in this paper, the LCIA is proposed to provide an efficient communication mechanism for SNNs with a low hardware cost and high scalability.
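To make the latency argument above concrete, the following minimal sketch (not taken from the article; the layer sizes, the 8x8 mesh dimensions, and the row-major neuron-to-tile mapping are illustrative assumptions) estimates the average number of router hops needed to deliver a spike from every neuron in one layer to every neuron in the next, on a 2D mesh with XY routing versus a direct all-to-all interconnect of the kind the LCIA targets.

```python
# Minimal sketch (not from the paper): compares the average number of router
# hops needed to deliver a spike from every neuron in one SNN layer to every
# neuron in the next layer, (a) on a 2D mesh NoC with XY routing and
# (b) on a direct all-to-all interconnect. The layer sizes and the row-major
# mapping of neurons to mesh tiles are illustrative assumptions.

from itertools import product


def mesh_hops(src: int, dst: int, width: int) -> int:
    """Manhattan distance between two tiles of a width x width mesh (XY routing)."""
    sx, sy = src % width, src // width
    dx, dy = dst % width, dst // width
    return abs(sx - dx) + abs(sy - dy)


def average_hops(n_pre: int, n_post: int, width: int):
    """Average hops for all pre->post spike deliveries: (mesh, all-to-all)."""
    pairs = list(product(range(n_pre), range(n_pre, n_pre + n_post)))
    mesh = sum(mesh_hops(s, d, width) for s, d in pairs) / len(pairs)
    direct = 1.0  # a dedicated link per (source, destination) pair: one hop
    return mesh, direct


if __name__ == "__main__":
    # Assumed example: two fully connected layers of 32 neurons each,
    # mapped row-major onto an 8x8 mesh (64 tiles).
    mesh_avg, direct_avg = average_hops(n_pre=32, n_post=32, width=8)
    print(f"average hops on 8x8 mesh : {mesh_avg:.2f}")
    print(f"average hops, all-to-all : {direct_avg:.2f}")
```

Under these assumptions the mesh averages several hops per spike delivery, while the dedicated all-to-all links keep every delivery at a single hop; this is the gap the LCIA aims to close at low hardware cost.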
[ "23853376", "22561008", "20652429", "18368364", "29311619", "15800372", "30008668", "26106288", "25104385", "11545701", "29875620", "27853419" ]
[ { "pmid": "23853376", "title": "Neural dynamics in reconfigurable silicon.", "abstract": "A neuromorphic analog chip is presented that is capable of implementing massively parallel neural computations while retaining the programmability of digital systems. We show measurements from neurons with Hopf bifurcations and integrate and fire neurons, excitatory and inhibitory synapses, passive dendrite cables, coupled spiking neurons, and central pattern generators implemented on the chip. This chip provides a platform for not only simulating detailed neuron dynamics but also uses the same to interface with actual cells in applications such as a dynamic clamp. There are 28 computational analog blocks (CAB), each consisting of ion channels with tunable parameters, synapses, winner-take-all elements, current sources, transconductance amplifiers, and capacitors. There are four other CABs which have programmable bias generators. The programmability is achieved using floating gate transistors with on-chip programming control. The switch matrix for interconnecting the components in CABs also consists of floating-gate transistors. Emphasis is placed on replicating the detailed dynamics of computational neural models. Massive computational area efficiency is obtained by using the reconfigurable interconnect as synaptic weights, resulting in more than 50 000 possible 9-b accurate synapses in 9 mm(2)." }, { "pmid": "22561008", "title": "Advancing interconnect density for spiking neural network hardware implementations using traffic-aware adaptive network-on-chip routers.", "abstract": "The brain is highly efficient in how it processes information and tolerates faults. Arguably, the basic processing units are neurons and synapses that are interconnected in a complex pattern. Computer scientists and engineers aim to harness this efficiency and build artificial neural systems that can emulate the key information processing principles of the brain. However, existing approaches cannot provide the dense interconnect for the billions of neurons and synapses that are required. Recently a reconfigurable and biologically inspired paradigm based on network-on-chip (NoC) and spiking neural networks (SNNs) has been proposed as a new method of realising an efficient, robust computing platform. However, the use of the NoC as an interconnection fabric for large-scale SNNs demands a good trade-off between scalability, throughput, neuron/synapse ratio and power consumption. This paper presents a novel traffic-aware, adaptive NoC router, which forms part of a proposed embedded mixed-signal SNN architecture called EMBRACE (EMulating Biologically-inspiRed ArChitectures in hardwarE). The proposed adaptive NoC router provides the inter-neuron connectivity for EMBRACE, maintaining router communication and avoiding dropped router packets by adapting to router traffic congestion. Results are presented on throughput, power and area performance analysis of the adaptive router using a 90 nm CMOS technology which outperforms existing NoCs in this domain. The adaptive behaviour of the router is also verified on a Stratix II FPGA implementation of a 4 × 2 router array with real-time traffic congestion. The presented results demonstrate the feasibility of using the proposed adaptive NoC router within the EMBRACE architecture to realise large-scale SNNs on embedded hardware." 
}, { "pmid": "20652429", "title": "Acoustic thoracic image of crackle sounds using linear and nonlinear processing techniques.", "abstract": "In this study, a novel approach is proposed, the imaging of crackle sounds distribution on the thorax based on processing techniques that could contend with the detection and count of crackles; hence, the normalized fractal dimension (NFD), the univariate AR modeling combined with a supervised neural network (UAR-SNN), and the time-variant autoregressive (TVAR) model were assessed. The proposed processing schemes were tested inserting simulated crackles in normal lung sounds acquired by a multichannel system on the posterior thoracic surface. In order to evaluate the robustness of the processing schemes, different scenarios were created by manipulating the number of crackles, the type of crackles, the spatial distribution, and the signal to noise ratio (SNR) at different pulmonary regions. The results indicate that TVAR scheme showed the best performance, compared with NFD and UAR-SNN schemes, for detecting and counting simulated crackles with an average specificity very close to 100%, and average sensitivity of 98 ± 7.5% even with overlapped crackles and with SNR corresponding to a scaling factor as low as 1.5. Finally, the performance of the TVAR scheme was tested against a human expert using simulated and real acoustic information. We conclude that a confident image of crackle sounds distribution by crackles counting using TVAR on the thoracic surface is thoroughly possible. The crackles imaging might represent an aid to the clinical evaluation of pulmonary diseases that produce this sort of adventitious discontinuous lung sounds." }, { "pmid": "18368364", "title": "Simulator for neural networks and action potentials.", "abstract": "A key challenge for neuroinformatics is to devise methods for representing, accessing, and integrating vast amounts of diverse and complex data. A useful approach to represent and integrate complex data sets is to develop mathematical models [Arbib (The Handbook of Brain Theory and Neural Networks, pp. 741-745, 2003); Arbib and Grethe (Computing the Brain: A Guide to Neuroinformatics, 2001); Ascoli (Computational Neuroanatomy: Principles and Methods, 2002); Bower and Bolouri (Computational Modeling of Genetic and Biochemical Networks, 2001); Hines et al. (J. Comput. Neurosci. 17, 7-11, 2004); Shepherd et al. (Trends Neurosci. 21, 460-468, 1998); Sivakumaran et al. (Bioinformatics 19, 408-415, 2003); Smolen et al. (Neuron 26, 567-580, 2000); Vadigepalli et al. (OMICS 7, 235-252, 2003)]. Models of neural systems provide quantitative and modifiable frameworks for representing data and analyzing neural function. These models can be developed and solved using neurosimulators. One such neurosimulator is simulator for neural networks and action potentials (SNNAP) [Ziv (J. Neurophysiol. 71, 294-308, 1994)]. SNNAP is a versatile and user-friendly tool for developing and simulating models of neurons and neural networks. SNNAP simulates many features of neuronal function, including ionic currents and their modulation by intracellular ions and/or second messengers, and synaptic transmission and synaptic plasticity. SNNAP is written in Java and runs on most computers. Moreover, SNNAP provides a graphical user interface (GUI) and does not require programming skills. This chapter describes several capabilities of SNNAP and illustrates methods for simulating neurons and neural networks. 
SNNAP is available at http://snnap.uth.tmc.edu ." }, { "pmid": "29311619", "title": "In situ immune response and mechanisms of cell damage in central nervous system of fatal cases microcephaly by Zika virus.", "abstract": "Zika virus (ZIKV) has recently caused a pandemic disease, and many cases of ZIKV infection in pregnant women resulted in abortion, stillbirth, deaths and congenital defects including microcephaly, which now has been proposed as ZIKV congenital syndrome. This study aimed to investigate the in situ immune response profile and mechanisms of neuronal cell damage in fatal Zika microcephaly cases. Brain tissue samples were collected from 15 cases, including 10 microcephalic ZIKV-positive neonates with fatal outcome and five neonatal control flavivirus-negative neonates that died due to other causes, but with preserved central nervous system (CNS) architecture. In microcephaly cases, the histopathological features of the tissue samples were characterized in three CNS areas (meninges, perivascular space, and parenchyma). The changes found were mainly calcification, necrosis, neuronophagy, gliosis, microglial nodules, and inflammatory infiltration of mononuclear cells. The in situ immune response against ZIKV in the CNS of newborns is complex. Despite the predominant expression of Th2 cytokines, other cytokines such as Th1, Th17, Treg, Th9, and Th22 are involved to a lesser extent, but are still likely to participate in the immunopathogenic mechanisms of neural disease in fatal cases of microcephaly caused by ZIKV." }, { "pmid": "15800372", "title": "An FPGA-based approach to high-speed simulation of conductance-based neuron models.", "abstract": "The constant requirement for greater performance in neural model simulation has created the need for high-speed simulation platforms. We present a generalized, scalable field programmable gate array (FPGA)-based architecture for fast computation of neural models and focus on the steps involved in implementing a single-compartment and a two-compartment neuron model. Based on timing tests, it is shown that FPGAs can outperform traditional desktop computers in simulating these fairly simple models and would most likely provide even larger performance gains over computers in simulating more complex models. The potential of this method for improving neural modeling and dynamic clamping is discussed. In particular, it is believed that this approach could greatly speed up simulations of both highly complex single neuron models and networks of neurons. Additionally, our design is particularly well suited to automated parameter searches for tuning model behavior and to real-time simulation." }, { "pmid": "30008668", "title": "Corrigendum: Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.", "abstract": "[This corrects the article DOI: 10.3389/fninf.2018.00002.]." }, { "pmid": "26106288", "title": "Breaking the millisecond barrier on SpiNNaker: implementing asynchronous event-based plastic models with microsecond resolution.", "abstract": "Spike-based neuromorphic sensors such as retinas and cochleas, change the way in which the world is sampled. Instead of producing data sampled at a constant rate, these sensors output spikes that are asynchronous and event driven. The event-based nature of neuromorphic sensors implies a complete paradigm shift in current perception algorithms toward those that emphasize the importance of precise timing. 
The spikes produced by these sensors usually have a time resolution in the order of microseconds. This high temporal resolution is a crucial factor in learning tasks. It is also widely used in the field of biological neural networks. Sound localization for instance relies on detecting time lags between the two ears which, in the barn owl, reaches a temporal resolution of 5 μs. Current available neuromorphic computation platforms such as SpiNNaker often limit their users to a time resolution in the order of milliseconds that is not compatible with the asynchronous outputs of neuromorphic sensors. To overcome these limitations and allow for the exploration of new types of neuromorphic computing architectures, we introduce a novel software framework on the SpiNNaker platform. This framework allows for simulations of spiking networks and plasticity mechanisms using a completely asynchronous and event-based scheme running with a microsecond time resolution. Results on two example networks using this new implementation are presented." }, { "pmid": "25104385", "title": "Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.", "abstract": "Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts." }, { "pmid": "11545701", "title": "Neuromorphic hardware databases for exploring structure-function relationships in the brain.", "abstract": "Neuromorphic hardware is the term used to describe full custom-designed integrated circuits, or silicon 'chips', that are the product of neuromorphic engineering--a methodology for the synthesis of biologically inspired elements and systems, such as individual neurons, retinae, cochleas, oculomotor systems and central pattern generators. We focus on the implementation of neurons and networks of neurons, designed to illuminate structure-function relationships. Neuromorphic hardware can be constructed with either digital or analogue circuitry or with mixed-signal circuitry--a hybrid of the two. Currently, most examples of this type of hardware are constructed using analogue circuits, in complementary metal-oxide-semiconductor technology. The correspondence between these circuits and neurons, or networks of neurons, can exist at a number of levels. At the lowest level, this correspondence is between membrane ion channels and field-effect transistors. At higher levels, the correspondence is between whole conductances and firing behaviour, and filters and amplifiers, devices found in conventional integrated circuit design. Similarly, neuromorphic engineers can choose to design Hodgkin-Huxley model neurons, or reduced models, such as integrate-and-fire neurons. 
In addition to the choice of level, there is also choice within the design technique itself; for example, resistive and capacitive properties of the neuronal membrane can be constructed with extrinsic devices, or using the intrinsic properties of the materials from which the transistors themselves are composed. So, silicon neurons can be built, with dendritic, somatic and axonal structures, and endowed with ionic, synaptic and morphological properties. Examples of the structure-function relationships already explored using neuromorphic hardware include correlation detection and direction selectivity. Establishing a database for this hardware is valuable for two reasons: first, independently of neuroscientific motivations, the field of neuromorphic engineering would benefit greatly from a resource in which circuit designs could be stored in a form appropriate for reuse and re-fabrication. Analogue designers would benefit particularly from such a database, as there are no equivalents to the algorithmic design methods available to designers of digital circuits. Second, and more importantly for the purpose of this theme issue, is the possibility of a database of silicon neuron designs replicating specific neuronal types and morphologies. In the future, it may be possible to use an automated process to translate morphometric data directly into circuit design compatible formats. The question that needs to be addressed is: what could a neuromorphic hardware database contribute to the wider neuroscientific community that a conventional database could not? One answer is that neuromorphic hardware is expected to provide analogue sensory-motor systems for interfacing the computational power of symbolic, digital systems with the external, analogue environment. It is also expected to contribute to ongoing work in neural-silicon interfaces and prosthetics. Finally, there is a possibility that the use of evolving circuits, using reconfigurable hardware and genetic algorithms, will create an explosion in the number of designs available to the neuroscience community. All this creates the need for a database to be established, and it would be advantageous to set about this while the field is relatively young. This paper outlines a framework for the construction of a neuromorphic hardware database, for use in the biological exploration of structure-function relationships." }, { "pmid": "29875620", "title": "Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model.", "abstract": "The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. 
With approximately 80, 000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks." }, { "pmid": "27853419", "title": "Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation.", "abstract": "Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing are now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarksand that uses digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance. 
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field." } ]
BMC Medical Informatics and Decision Making
30509272
PMC6278016
10.1186/s12911-018-0699-2
Combination of conditional random field with a rule based method in the extraction of PICO elements
Background
Extracting primary care information in terms of Patient/Problem, Intervention, Comparison and Outcome, known as PICO elements, is difficult as the volume of medical information expands and the health semantics needed to capture it from unstructured information is complex. Combining machine learning methods (MLMs) with rule-based methods (RBMs) could facilitate and improve PICO extraction. This paper studies PICO element extraction methods. The goal is to combine the MLMs with the RBMs to extract PICO elements from medical papers so as to facilitate answering clinical questions formulated with the PICO framework.

Methods
First, we analyze the aspects of the MLM model that influence the quality of the PICO element extraction. Secondly, we combine the MLM approach with the RBMs in order to improve the PICO element retrieval process. To conduct our experiments, we use a corpus of 1000 abstracts.

Results
We obtain an F-score of 80% for the P element, 64% for the I element and 92% for the O element. Given the nature of the training corpus, where P and I elements represent only 6.5 and 5.8% of the total sentences respectively, the results are competitive with previously published ones.

Conclusions
Our study of PICO element extraction shows that the task is very challenging. The MLMs tend to have an acceptable precision rate but a low recall rate when the corpus is not representative. The RBMs backed up the MLMs to increase the recall rate, and consequently the combination of the two methods gave better results.
Related work
There is a significant body of research on extracting PICO elements from abstracts of clinical documents [3–12]. The recent trend is toward using machine learning methods that apply a statistical model to classify sentences according to the PICO framework [2]; this trend is motivated by the robustness of the MLMs and their strong learning capability.

The accuracy of a PICO statistical model depends heavily on the quality of the training corpus. Though it is difficult to specify minimal quality requirements, we consider that most of the training corpora used in the literature are either not representative in terms of size [8, 10, 13] or not well balanced in terms of (i) the distribution of PICO elements [11, 12, 14] or (ii) the abstract types, structured vs. unstructured [5–7, 9].

Table 1 shows an overview of the corpora used in the literature; the training corpus is usually built manually by medical experts who label it with the different PICO elements. A corpus is mixed when it contains a mixture of structured and unstructured abstracts.

Table 1. Literature review summary of the used corpora
Reference | Training corpus | Testing corpus
[8] | 275, manual and mixed | 358, mixed
[10] | 148, manual and mixed | 75, mixed
[13] | 50, manual and mixed | 156, mixed
[12] | 800, manual and mixed | 200, mixed
[11, 14, 30, 31] | 1000, manual and mixed | 200, mixed
[9] | 1575 to 2280, automatic, structured abstracts only | 318, mixed
[5–7] | 2394 to 14,279, automatic, structured abstracts only | 2394 to 14,279, structured only

The sizes of the corpora used in [8, 10, 13] are small, and it is difficult to generalize these results. In [11, 12, 14] the distribution of PICO elements is not balanced: the P element sentences represent only 6.8% and the I sentences only 5.8%, whereas the O sentences are more dominant with 36.6%. Such a distribution has a significant impact on the recall rate because the model does not learn enough about the P and I elements. In [5–7] and [9], the authors got around the difficulty of manually constructing a large training corpus by using the information encapsulated in MEDLINE structured abstracts, which contain headings corresponding to the PICO elements. In this case, there is no dependence on an expert of the medical domain, but the learning process is restricted to certain headings. Recently, [4] proposed a novel approach for PICO extraction based on an improved Distant Supervision [15, 16]. The learning model is based on a large structured database (Cochrane), a large amount of unstructured data, and a small amount of manually labeled unstructured data used to reduce the noise in the distantly derived annotations. Notably, their Supervised Distant Supervision model automatically extracts PICO sentences from full texts, whereas in the rest of the literature PICO extraction was limited to paper abstracts.

Most research on PICO element extraction with MLMs uses unrealistic data collections; consequently, the extraction performance is affected and the results are not consistent. For example, some studies state that the use of medical semantic features is useful [7, 8, 17], while others deny the pertinence of semantic features [12, 14].
In addition, the proposed MLM methods perform inadequately with unstructured abstracts. Generally, most of these researchers reported a precision over 70% (Table 2); however, we observed that the recall is usually not as high as the precision, especially when the training corpus is unbalanced in terms of PICO elements or the MLM features are not rigorous enough.

Table 2. Examples of precisions and recalls reported in the literature
Ref. | Population precision (%) | Population recall (%) | Intervention precision (%) | Intervention recall (%)
[9] | 56–77 | 37–40 | 77–87 | 71–80
[13] | NA | NA | 76–89 | 58–65
[17] | 70 | 24 | 74–78 | 56–58
[10] | 97 | 74 | NA | NA
[7] | 66–94 | 61–84 | 50–79 | 26–65

In order to reduce the impact of the unavailability of a representative and balanced corpus and the lack of well-designed MLM aspects, we propose a PICO element extraction system based on:

1) An MLM (CRF [18]) with well-designed aspects; these aspects include the CRF parameter settings, information redundancy, the type of feature values, feature concordance, and standardization of the abstract structure.

2) A new set of RBM rules based on the MLM features to facilitate the integration of the two methods. RBMs can have a high degree of PICO element coverage; therefore, they can complement the MLMs to improve the recall rate.

3) A hybrid combination of MLMs and RBMs. Some authors have suggested combining the two methods. In [8], the authors extract the I and P elements using a set of RBMs that rely heavily on UMLS concepts, while they use MLMs to extract the O element, because the O element has no corresponding UMLS concept, which makes it difficult to craft an efficient extraction rule. In [19], the authors use the two methods to extract the key characteristics of clinical trials from full-text journal articles reporting on RCTs. In a first stage, they use an MLM based on the SVM algorithm to locate the sentences that have the highest probability of describing a trial characteristic; in a second stage, they apply simple rules to these sentences to extract text fragments containing the target answer. In our case, we complement the MLM method with RBMs to extract PICO elements: we take advantage of the robustness of the MLM method to extract the majority of the potential PICO sentences (coarse-grained), and then apply a set of RBM rules (fine-grained), designed with the MLM features, to extract the PICO sentences missed by the MLM stage (see the sketch after this section).

4) The cTAKES (Apache clinical Text Analysis and Knowledge Extraction System) medical pipeline [20]. cTAKES is an open source natural language processing system for information extraction from clinical natural text. It provides a type system based on the Clinical Element Model (CEM) [21] that targets and facilitates the deep semantics of the medical field; for example, it can identify clinical named entities from various dictionaries, including the UMLS.

The proposed system improves the PICO extraction process and facilitates the validation of the answers to clinical questions formulated with the PICO framework.
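The following minimal sketch illustrates the coarse-then-fine combination described in point 3 above. It is an assumption-laden illustration, not the authors' implementation: the crf_label stub stands in for a trained CRF tagger, and the regex rules are hypothetical placeholders for the RBM rules built on the MLM features.

```python
# Minimal sketch (assumptions, not the authors' implementation) of the
# coarse-then-fine combination: an MLM stage first labels candidate PICO
# sentences, then rule-based patterns (RBMs) recover sentences the
# statistical stage missed. The crf_label stub and the regex rules are
# illustrative placeholders only.

import re
from typing import Callable, Dict, List

# Toy rule set: one hypothetical regex per PICO element.
RBM_RULES: Dict[str, re.Pattern] = {
    "P": re.compile(r"\b(\d+\s+(patients|participants|subjects)|aged\s+\d+)", re.I),
    "I": re.compile(r"\b(received|randomi[sz]ed to|treated with|intervention)\b", re.I),
    "O": re.compile(r"\b(primary outcome|mortality|improvement|reduction in)\b", re.I),
}


def combine(sentences: List[str],
            crf_label: Callable[[str], str]) -> Dict[str, List[str]]:
    """Label each sentence with the MLM stage, then let the rules add misses."""
    extracted: Dict[str, List[str]] = {"P": [], "I": [], "O": []}
    for sent in sentences:
        label = crf_label(sent)          # coarse-grained statistical decision
        if label in extracted:
            extracted[label].append(sent)
            continue
        for element, rule in RBM_RULES.items():   # fine-grained rule backup
            if rule.search(sent):
                extracted[element].append(sent)
                break
    return extracted


if __name__ == "__main__":
    abstract = [
        "A total of 120 patients aged 45 to 70 were enrolled.",
        "Participants were randomized to tizanidine or placebo.",
        "The primary outcome was a reduction in pain scores at 8 weeks.",
    ]
    # Stand-in for a trained CRF model: here it abstains on every sentence,
    # so all extractions come from the rule-based backup stage.
    dummy_crf = lambda sentence: "none"
    print(combine(abstract, dummy_crf))
```

In this toy run the CRF stub abstains on every sentence, so all three example sentences are recovered by the rule-based backup stage, which mirrors how the RBM layer is meant to lift the recall whenever the statistical stage misses a PICO sentence.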
[ "8555924", "17612476", "20470429", "19208256", "18852316", "24530879", "19166975", "20920176", "21682870", "9586400", "10709685", "18671474", "7711374", "8865908" ]
[ { "pmid": "17612476", "title": "The identification of clinically important elements within medical journal abstracts: Patient-Population-Problem, Exposure-Intervention, Comparison, Outcome, Duration and Results (PECODR).", "abstract": "BACKGROUND\nInformation retrieval in primary care is becoming more difficult as the volume of medical information held in electronic databases expands. The lexical structure of this information might permit automatic indexing and improved retrieval.\n\n\nOBJECTIVE\nTo determine the possibility of identifying the key elements of clinical studies, namely Patient-Population-Problem, Exposure-Intervention, Comparison, Outcome, Duration and Results (PECODR), from abstracts of medical journals.\n\n\nMETHODS\nWe used a convenience sample of 20 synopses from the journal Evidence-Based Medicine (EBM) and their matching original journal article abstracts obtained from PubMed. Three independent primary care professionals identified PECODR-related extracts of text. Rules were developed to define each PECODR element and the selection process of characters, words, phrases and sentences. From the extracts of text related to PECODR elements, potential lexical patterns that might help identify those elements were proposed and assessed using NVivo software.\n\n\nRESULTS\nA total of 835 PECODR-related text extracts containing 41,263 individual text characters were identified from 20 EBM journal synopses. There were 759 extracts in the corresponding PubMed abstracts containing 31,947 characters. PECODR elements were found in nearly all abstracts and synopses with the exception of duration. There was agreement on 86.6% of the extracts from the 20 EBM synopses and 85.0% on the corresponding PubMed abstracts. After consensus this rose to 98.4% and 96.9% respectively. We found potential text patterns in the Comparison, Outcome and Results elements of both EBM synopses and PubMed abstracts. Some phrases and words are used frequently and are specific for these elements in both synopses and abstracts.\n\n\nCONCLUSIONS\nResults suggest a PECODR-related structure exists in medical abstracts and that there might be lexical patterns specific to these elements. More sophisticated computer-assisted lexical-semantic analysis might refine these results, and pave the way to automating PECODR indexing, and improve information retrieval in primary care." }, { "pmid": "20470429", "title": "Combining classifiers for robust PICO element detection.", "abstract": "BACKGROUND\nFormulating a clinical information need in terms of the four atomic parts which are Population/Problem, Intervention, Comparison and Outcome (known as PICO elements) facilitates searching for a precise answer within a large medical citation database. However, using PICO defined items in the information retrieval process requires a search engine to be able to detect and index PICO elements in the collection in order for the system to retrieve relevant documents.\n\n\nMETHODS\nIn this study, we tested multiple supervised classification algorithms and their combinations for detecting PICO elements within medical abstracts. 
Using the structural descriptors that are embedded in some medical abstracts, we have automatically gathered large training/testing data sets for each PICO element.\n\n\nRESULTS\nCombining multiple classifiers using a weighted linear combination of their prediction scores achieves promising results with an f-measure score of 86.3% for P, 67% for I and 56.6% for O.\n\n\nCONCLUSIONS\nOur experiments on the identification of PICO elements showed that the task is very challenging. Nevertheless, the performance achieved by our identification method is competitive with previously published results and shows that this task can be achieved with a high accuracy for the P element but lower ones for I and O elements." }, { "pmid": "19208256", "title": "Sentence retrieval for abstracts of randomized controlled trials.", "abstract": "BACKGROUND\nThe practice of evidence-based medicine (EBM) requires clinicians to integrate their expertise with the latest scientific research. But this is becoming increasingly difficult with the growing numbers of published articles. There is a clear need for better tools to improve clinician's ability to search the primary literature. Randomized clinical trials (RCTs) are the most reliable source of evidence documenting the efficacy of treatment options. This paper describes the retrieval of key sentences from abstracts of RCTs as a step towards helping users find relevant facts about the experimental design of clinical studies.\n\n\nMETHOD\nUsing Conditional Random Fields (CRFs), a popular and successful method for natural language processing problems, sentences referring to Intervention, Participants and Outcome Measures are automatically categorized. This is done by extending a previous approach for labeling sentences in an abstract for general categories associated with scientific argumentation or rhetorical roles: Aim, Method, Results and Conclusion. Methods are tested on several corpora of RCT abstracts. First structured abstracts with headings specifically indicating Intervention, Participant and Outcome Measures are used. Also a manually annotated corpus of structured and unstructured abstracts is prepared for testing a classifier that identifies sentences belonging to each category.\n\n\nRESULTS\nUsing CRFs, sentences can be labeled for the four rhetorical roles with F-scores from 0.93-0.98. This outperforms the use of Support Vector Machines. Furthermore, sentences can be automatically labeled for Intervention, Participant and Outcome Measures, in unstructured and structured abstracts where the section headings do not specifically indicate these three topics. F-scores of up to 0.83 and 0.84 are obtained for Intervention and Outcome Measure sentences.\n\n\nCONCLUSION\nResults indicate that some of the methodological elements of RCTs are identifiable at the sentence level in both structured and unstructured abstract reports. This is promising in that sentences labeled automatically could potentially form concise summaries, assist in information retrieval and finer-grained extraction." }, { "pmid": "18852316", "title": "A method of extracting the number of trial participants from abstracts describing randomized controlled trials.", "abstract": "We have developed a method for extracting the number of trial participants from abstracts describing randomized controlled trials (RCTs); the number of trial participants may be an indication of the reliability of the trial. The method depends on statistical natural language processing. 
The number of interest was determined by a binary supervised classification based on a support vector machine algorithm. The method was trialled on 223 abstracts in which the number of trial participants was identified manually to act as a gold standard. Automatic extraction resulted in 2 false-positive and 19 false-negative classifications. The algorithm was capable of extracting the number of trial participants with an accuracy of 97% and an F-measure of 0.84. The algorithm may improve the selection of relevant articles in regard to question-answering, and hence may assist in decision-making." }, { "pmid": "24530879", "title": "Identifying scientific artefacts in biomedical literature: the Evidence Based Medicine use case.", "abstract": "Evidence Based Medicine (EBM) provides a framework that makes use of the current best evidence in the domain to support clinicians in the decision making process. In most cases, the underlying foundational knowledge is captured in scientific publications that detail specific clinical studies or randomised controlled trials. Over the course of the last two decades, research has been performed on modelling key aspects described within publications (e.g., aims, methods, results), to enable the successful realisation of the goals of EBM. A significant outcome of this research has been the PICO (Population/Problem-Intervention-Comparison-Outcome) structure, and its refined version PIBOSO (Population-Intervention-Background-Outcome-Study Design-Other), both of which provide a formalisation of these scientific artefacts. Subsequently, using these schemes, diverse automatic extraction techniques have been proposed to streamline the knowledge discovery and exploration process in EBM. In this paper, we present a Machine Learning approach that aims to classify sentences according to the PIBOSO scheme. We use a discriminative set of features that do not rely on any external resources to achieve results comparable to the state of the art. A corpus of 1000 structured and unstructured abstracts - i.e., the NICTA-PIBOSO corpus - is used for training and testing. Our best CRF classifier achieves a micro-average F-score of 90.74% and 87.21%, respectively, over structured and unstructured abstracts, which represents an increase of 25.48 percentage points and 26.6 percentage points in F-score when compared to the best existing approaches." }, { "pmid": "19166975", "title": "Towards identifying intervention arms in randomized controlled trials: extracting coordinating constructions.", "abstract": "BACKGROUND\nLarge numbers of reports of randomized controlled trials (RCTs) are published each year, and it is becoming increasingly difficult for clinicians practicing evidence-based medicine to find answers to clinical questions. The automatic machine extraction of RCT experimental details, including design methodology and outcomes, could help clinicians and reviewers locate relevant studies more rapidly and easily.\n\n\nAIM\nThis paper investigates how the comparison of interventions is documented in the abstracts of published RCTs. The ultimate goal is to use automated text mining to locate each intervention arm of a trial. 
This preliminary work aims to identify coordinating constructions, which are prevalent in the expression of intervention comparisons.\n\n\nMETHODS AND RESULTS\nAn analysis of the types of constructs that describe the allocation of intervention arms is conducted, revealing that the compared interventions are predominantly embedded in coordinating constructions. A method is developed for identifying the descriptions of the assignment of treatment arms in clinical trials, using a full sentence parser to locate coordinating constructions and a statistical classifier for labeling positive examples. Predicate-argument structures are used along with other linguistic features with a maximum entropy classifier. An F-score of 0.78 is obtained for labeling relevant coordinating constructions in an independent test set.\n\n\nCONCLUSIONS\nThe intervention arms of a randomized controlled trials can be identified by machine extraction incorporating syntactic features derived from full sentence parsing." }, { "pmid": "20920176", "title": "ExaCT: automatic extraction of clinical trial characteristics from journal publications.", "abstract": "BACKGROUND\nClinical trials are one of the most important sources of evidence for guiding evidence-based practice and the design of new trials. However, most of this information is available only in free text - e.g., in journal publications - which is labour intensive to process for systematic reviews, meta-analyses, and other evidence synthesis studies. This paper presents an automatic information extraction system, called ExaCT, that assists users with locating and extracting key trial characteristics (e.g., eligibility criteria, sample size, drug dosage, primary outcomes) from full-text journal articles reporting on randomized controlled trials (RCTs).\n\n\nMETHODS\nExaCT consists of two parts: an information extraction (IE) engine that searches the article for text fragments that best describe the trial characteristics, and a web browser-based user interface that allows human reviewers to assess and modify the suggested selections. The IE engine uses a statistical text classifier to locate those sentences that have the highest probability of describing a trial characteristic. Then, the IE engine's second stage applies simple rules to these sentences to extract text fragments containing the target answer. The same approach is used for all 21 trial characteristics selected for this study.\n\n\nRESULTS\nWe evaluated ExaCT using 50 previously unseen articles describing RCTs. The text classifier (first stage) was able to recover 88% of relevant sentences among its top five candidates (top5 recall) with the topmost candidate being relevant in 80% of cases (top1 precision). Precision and recall of the extraction rules (second stage) were 93% and 91%, respectively. Together, the two stages of the extraction engine were able to provide (partially) correct solutions in 992 out of 1050 test tasks (94%), with a majority of these (696) representing fully correct and complete answers.\n\n\nCONCLUSIONS\nOur experiments confirmed the applicability and efficacy of ExaCT. Furthermore, they demonstrated that combining a statistical method with 'weak' extraction rules can identify a variety of study characteristics. The system is flexible and can be extended to handle other characteristics and document types (e.g., study protocols)." 
}, { "pmid": "21682870", "title": "The Global Evidence Mapping Initiative: scoping research in broad topic areas.", "abstract": "BACKGROUND\nEvidence mapping describes the quantity, design and characteristics of research in broad topic areas, in contrast to systematic reviews, which usually address narrowly-focused research questions. The breadth of evidence mapping helps to identify evidence gaps, and may guide future research efforts. The Global Evidence Mapping (GEM) Initiative was established in 2007 to create evidence maps providing an overview of existing research in Traumatic Brain Injury (TBI) and Spinal Cord Injury (SCI).\n\n\nMETHODS\nThe GEM evidence mapping method involved three core tasks:1. Setting the boundaries and context of the map: Definitions for the fields of TBI and SCI were clarified, the prehospital, acute inhospital and rehabilitation phases of care were delineated and relevant stakeholders (patients, carers, clinicians, researchers and policymakers) who could contribute to the mapping were identified. Researchable clinical questions were developed through consultation with key stakeholders and a broad literature search. 2. Searching for and selection of relevant studies: Evidence search and selection involved development of specific search strategies, development of inclusion and exclusion criteria, searching of relevant databases and independent screening and selection by two researchers. 3. Reporting on yield and study characteristics: Data extraction was performed at two levels - 'interventions and study design' and 'detailed study characteristics'. The evidence map and commentary reflected the depth of data extraction.\n\n\nRESULTS\nOne hundred and twenty-nine researchable clinical questions in TBI and SCI were identified. These questions were then prioritised into high (n = 60) and low (n = 69) importance by the stakeholders involved in question development. Since 2007, 58 263 abstracts have been screened, 3 731 full text articles have been reviewed and 1 644 relevant neurotrauma publications have been mapped, covering fifty-three high priority questions.\n\n\nCONCLUSIONS\nGEM Initiative evidence maps have a broad range of potential end-users including funding agencies, researchers and clinicians. Evidence mapping is at least as resource-intensive as systematic reviewing. The GEM Initiative has made advancements in evidence mapping, most notably in the area of question development and prioritisation. Evidence mapping complements other review methods for describing existing research, informing future research efforts, and addressing evidence gaps." }, { "pmid": "9586400", "title": "[Familial predisposition to breast cancer. Review].", "abstract": "An estimated 20% of all breast cancer or ovarian and breast cancer cases have familial aggregation. Today it is known that approximately 10% of these cases are attributable to inherited mutations of a predisposition gene that confers a high risk of developing the disease. Several genes have been identified that differ in the risks which they determine, the proportion of cases they explain and the other cancers they may cause. Out of all the genes reported so far, BRCA1 and BRCA2 are the most important ones. The mutations in the remaining genes are rare or involve moderate or low risks. The possibility to detect breast and/or ovarian cancer susceptibility genes in high risk families poses serious challenges that must be faced by health professionals related with this field." 
}, { "pmid": "10709685", "title": "Pacing for patients with congestive heart failure and dilated cardiomyopathy.", "abstract": "Considerable evidence has now accumulated that permanent pacing may provide symptomatic benefit for at least some patients with CHF. Recently, the most promising results with left ventricular or biventricular pacing have been obtained. The data for improvement in survival with pacing is less compelling. The mortality of CHF associated with systolic dysfunction of the left ventricle remains high and arrhythmic deaths are frequent. Clinical trials such as the Sudden Cardiac Death Heart Failure Trial (SCD-HeFT) are currently underway to investigate the role of the implantable defibrillator in patients with heart failure. The development and general availability of ICDs with biventricular pacing capability may play an increasingly important role in the overall therapeutic plan for this group of patients to allow for optimization of functional status with pacing and protection from sudden cardiac death with defibrillation." }, { "pmid": "18671474", "title": "Update on tizanidine for muscle spasticity and emerging indications.", "abstract": "BACKGROUND\nTizanidine hydrochloride, an alpha(2)-adrenergic receptor agonist, is a widely used medication for the treatment of muscle spasticity. Clinical studies have supported its use in the management of spasticity caused by multiple sclerosis (MS), acquired brain injury or spinal cord injury. It has also been shown to be clinically effective in the management of pain syndromes, such as: myofascial pain, lower back pain and trigeminal neuralgia. This review summarizes the recent findings on the clinical application of tizanidine.\n\n\nOBJECTIVE\nOur objective was to review and summarize the medical literature regarding the evidence for the usefulness of tizanidine in the management of spasticity and in pain syndromes such as myofascial pain.\n\n\nMETHODS\nWe reviewed the current medical and pharmacology literature through various internet literature searches. This information was then synthesized and presented in paragraph and table form.\n\n\nRESULTS/CONCLUSION\nTizanidine hydrochloride is a very useful medication in patients suffering from spasticity caused by MS, acquired brain injury or spinal cord injury. It can also be helpful in patients suffering from chronic neck and/or lower back pain who have a myofascial component to their pain. Doses should be started at low dose and gradually titrated to effect." }, { "pmid": "7711374", "title": "Dorsal ramus irritation associated with recurrent low back pain and its relief with local anesthetic or training therapy.", "abstract": "Nerves leave the spinal cord as mainly motor primary rootlets and sensory rootlets. These join to nerve root before leaving the spinal canal. After the root canal, the nerve root branches into the ventral root, which contains sensory and motor fibers innervating the extremities, and the dorsal root, that is, the dorsal ramus, which innervates the posterior structures, for example, back muscles: the dorsal ramus itself may become irritated (dorsal ramus syndrome). Especially predisposed to entrapment is the medial branch of the dorsal ramus, which innervates the multifidus muscle and also contains pain fibers. 
Here we describe the influence of local anesthesia and back-muscle-training therapy on subjective and objective pain parameters in 21 low-back-pain patients who had similar clinical status and neurophysiologic findings and whose recurrent low back pain was most apparently associated with dorsal ramus neuropathy, without any radiologic or neurophysiologic evidence of more proximal ventral nerve root damage in the spinal cord or at the nerve root origin. After treatment, all were pain free and back muscle activity during lumbar-pelvic rhythm was normalized." }, { "pmid": "8865908", "title": "Use of laparoscopy in the management of malfunctioning peritoneal dialysis catheters.", "abstract": "The proper function of peritoneal dialysis (PD) catheters can be compromised by catheter malposition, fibrin clot, or omental wrapping. The purpose of this study was to determine the efficacy of laparoscopy in the treatment of malfunctioning PD catheters. All patients undergoing laparoscopy for catheter dysfunction at MetroHealth Medical Center in Cleveland, Ohio, from 1991 to 1995, were reviewed. Twenty-six laparoscopies were performed in 22 patients, for malfunction occurring an average of 3.9 months following insertion (range 0.5-18 months). Omental and/or small bowel wrapping was present in all but three cases. Lysis of adhesions was required in 19 of 26 cases, with repositioning only in seven. Eight patients had failed attempts at stiff wire manipulation prior to laparoscopy. Perioperative complications occurred in seven cases, consisting of temporary dialysate leakage (2), enterotomy (1), and early reocclusion (4). Repeat laparoscopy was successful in three of these four reocclusions. The overall success rate (catheter function > 30 days after laparoscopy) was 21/22 (96%). Laparoscopy is highly accurate and effective in the management of peritoneal dialysis catheter dysfunction and results in prolongation of catheter life." } ]
BMC Medical Informatics and Decision Making
30509279
PMC6278134
10.1186/s12911-018-0702-y
Towards stroke prediction using electronic health records
Background: As of 2014, stroke is the fourth leading cause of death in Japan. Predicting a future diagnosis of stroke would better enable proactive healthcare measures to be taken. We aim to predict a diagnosis of stroke within one year of the patient’s last set of exam results or medical diagnoses. Methods: Around 8000 electronic health records were provided by Tsuyama Jifukai Tsuyama Chuo Hospital in Japan. These records contained non-homogeneous temporal data which were first transformed into a form usable by an algorithm. The transformed data were used as input to several neural network architectures designed to evaluate both the efficacy of the supplied data and the networks’ capability to exploit relationships that could underlie the data. The low prevalence of stroke cases resulted in imbalanced classes, which in turn biased the trained neural network models towards negative predictions. To address this issue, we designed and incorporated regularization terms into the standard cross-entropy loss function; these terms penalized false positive and false negative predictions. We evaluated the performance of our trained models using Receiver Operating Characteristic (ROC) analysis. Results: The best neural network incorporated and combined the different sources of temporal data through a dual-input topology. This network attained an area under the ROC curve of 0.669. The custom regularization terms had a positive effect on the training process when compared against the standard cross-entropy loss function. Conclusions: The techniques we describe in this paper are viable, and the developed models form part of the foundation of a national clinical decision support system.
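The loss-function modification described in the abstract can be illustrated with a minimal sketch. The exact regularization terms used in the paper are not reproduced here; the penalties below on the probability mass assigned to false positives and false negatives, and the weights lam_fp and lam_fn, are assumptions for illustration only.

```python
import numpy as np

def penalized_cross_entropy(y_true, p_pred, lam_fp=1.0, lam_fn=1.0, eps=1e-7):
    """Binary cross-entropy plus separate penalties on false-positive and
    false-negative mass (one plausible form of the idea, not the paper's exact terms)."""
    p = np.clip(p_pred, eps, 1.0 - eps)
    bce = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
    fp_term = np.mean((1 - y_true) * p)      # probability wrongly assigned to negatives
    fn_term = np.mean(y_true * (1 - p))      # probability missed on positives
    return bce + lam_fp * fp_term + lam_fn * fn_term

y = np.array([0, 0, 0, 0, 1])                 # imbalanced toy labels (stroke is rare)
p = np.array([0.05, 0.40, 0.10, 0.20, 0.30])  # toy predicted probabilities
print(penalized_cross_entropy(y, p, lam_fp=0.5, lam_fn=2.0))
```

Raising lam_fn relative to lam_fp pushes an imbalanced model away from the all-negative solution, which is the failure mode the abstract describes.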
Related work

Various techniques have been employed to predict the risk of stroke. Cox proportional hazards regression is one technique used to develop such statistical models; its application to the Framingham Study cohort is one prominent example [3]. This application, developed on data collected from a sample population over a span of 36 years, described a formula that estimated the probability of stroke based on pre-determined risk factors such as age, systolic blood pressure, and presence of diabetes mellitus. Cox proportional hazards regression is commonly used to develop stroke risk models tailored towards a cohort of interest [4, 5]. Globorisk [6] was another model developed with Cox proportional hazards regression, with the aim of producing a formula that can be recalibrated and updated for use in different countries.

Bayesian Rule Lists (BRL) were used to develop an interpretable model to predict the risk of stroke within a year for patients diagnosed with atrial fibrillation [7]. BRL produces a hierarchy of decision lists (chains of if-then rules) ordered by posterior consequent distributions. The model used drug prescriptions, medical conditions, age, and gender to form these decision lists.

Support Vector Machines have been used in a study to predict the occurrence of stroke within five years after a set of baseline measurements [8]. The study also identified issues pertaining to missing data and a large number of input features. Missing data was addressed through median imputation after a comparison with other methods. A novel feature selection algorithm was used to reduce the number of input features.

For the diagnosis of other kinds of disorders, or the development of decision support systems, different techniques have also been used. One of the first diagnosis systems was developed in 1961 [9]. This system focused on the diagnosis of congenital heart disease from clinical data, using a diagnostic model derived using Bayes’ theorem. In 1988, the earliest known disease diagnostic system using a multi-layer neural network was developed [10]. This system used over 200 questionnaire responses as inputs and supported 23 diseases as the diagnosis output.

More recently, other types of neural network architectures have been investigated for creating predictions from EHRs. Doctor AI [11] employed a Recurrent Neural Network (RNN) architecture to process patient temporal medical events and prescriptions, which were both represented as categorical features. The model was used to predict future diagnoses, medication orders and visit time.

ICD-9 label assignments can be used to classify medical notes using a bag-of-words model combined with an RNN [12]. The methods used to represent medical notes have a demonstrable effect on performance when evaluated on event prediction tasks such as patient mortality and emergency room visits [13].

On the topic of EHR representation, Deepr [14] drew on natural language processing techniques to represent diagnoses and treatments as a sequence of tokens. Temporal features were also discretized and represented as tokens. These tokens were then transformed into a continuous vector space through an embedding process. This representation was fed into a Convolutional Neural Network (CNN) architecture and evaluated on its ability to predict future hospital readmission risk.

Autoencoders can be used to derive compressed representations of physiological features as an input pre-processing step [15].
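As a minimal illustration of the Cox-based absolute-risk equations cited at the start of this section (Framingham [3], Globorisk [6]), the sketch below computes a 10-year risk from a linear predictor and a baseline survival probability. The coefficients, covariate means, and baseline survival are hypothetical placeholders, not the published values.

```python
import math

# Hypothetical coefficients, covariate means, and baseline survival --
# placeholders for illustration, not the published Framingham or Globorisk values.
BETA = {"age": 0.065, "sbp": 0.017, "diabetes": 0.55, "smoker": 0.50}
MEAN = {"age": 60.0, "sbp": 135.0, "diabetes": 0.10, "smoker": 0.25}
S0_10YR = 0.95  # assumed baseline 10-year stroke-free survival

def ten_year_stroke_risk(profile):
    """Cox-style absolute risk: 1 - S0(t) ** exp(beta . (x - x_mean))."""
    lp = sum(BETA[k] * (profile[k] - MEAN[k]) for k in BETA)
    return 1.0 - S0_10YR ** math.exp(lp)

print(ten_year_stroke_risk({"age": 72, "sbp": 160, "diabetes": 1, "smoker": 0}))
```

Recalibrating such a model for a new country, as done in Globorisk [6], essentially amounts to replacing the baseline survival and covariate means while keeping the hazard-ratio coefficients.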
The word2vec algorithm [16] can also be applied to generate an embedding over patient diagnoses and medications [17]. In that work, the embedded representation was fed into a CNN, in which event temporality was handled with one-dimensional convolutions over the temporal dimension of the input matrix.
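A compact sketch of the embedding-plus-convolution idea behind Deepr [14] and the word2vec-based model [17] is given below, using PyTorch as an assumed framework and a randomly initialized embedding in place of pretrained word2vec vectors; the layer sizes and vocabulary are arbitrary.

```python
import torch
import torch.nn as nn

class EventSequenceCNN(nn.Module):
    """Embed a sequence of discrete medical-event codes and convolve over time."""
    def __init__(self, vocab_size=500, emb_dim=32, n_filters=16, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)           # code id -> vector
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size)   # 1-D conv over time
        self.pool = nn.AdaptiveMaxPool1d(1)                      # max over the sequence
        self.fc = nn.Linear(n_filters, 1)                        # risk logit

    def forward(self, codes):                  # codes: (batch, seq_len) of int64 ids
        x = self.embed(codes)                  # (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)                  # Conv1d expects (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)
        return torch.sigmoid(self.fc(x))

model = EventSequenceCNN()
fake_patients = torch.randint(0, 500, (4, 20))  # 4 synthetic patients, 20 events each
print(model(fake_patients).shape)               # torch.Size([4, 1])
```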
[ "20489172", "2003301", "17586511", "20671251", "25819778", "13783190", "27913366", "22566982", "1955687", "19028393", "20123229", "20200384", "28959475" ]
[ { "pmid": "20489172", "title": "Lifetime risk of stroke in Japan.", "abstract": "BACKGROUND AND PURPOSE\nLifetime risk (LTR) is an epidemiologic measure that expresses the probability of disease in the remaining lifetime for an index age. The LTR for stroke has not been reported for the Japanese population.\n\n\nMETHODS\nWe included all participants from the Suita Study who were cardiovascular disease-free at baseline. Age (in years) was used as the time scale. Age-specific stroke incidence and all-cause mortality were calculated with the person-year method, and we estimated the sex- and index age-specific LTRs of first-ever stroke and its subtypes, taking into account the competing risk of death.\n\n\nRESULTS\nWe followed up 5498 participants from 1989 to 2005 for a total of 67 475 person-years. At age 55 years, the LTR for stroke, after accounting for competing risks of death, was 18.3% for men and 19.6% for women. The LTR for cerebral infarction was 14.6% for men and 15.5% for women, and the LTR for intracerebral hemorrhage was 2.4% for men and 1.4% for women at the index age of 55 years. The LTR for stroke remained similar across other index ages of 45, 55, and 65 years.\n\n\nCONCLUSIONS\nThe observed probabilities illustrate that approximately 1 in 5 men and women of middle age will experience stroke in their remaining lifetime. This easy understandable information can be used as an important index to assist in public health education and planning." }, { "pmid": "2003301", "title": "Probability of stroke: a risk profile from the Framingham Study.", "abstract": "A health risk appraisal function has been developed for the prediction of stroke using the Framingham Study cohort. The stroke risk factors included in the profile are age, systolic blood pressure, the use of antihypertensive therapy, diabetes mellitus, cigarette smoking, prior cardiovascular disease (coronary heart disease, cardiac failure, or intermittent claudication), atrial fibrillation, and left ventricular hypertrophy by electrocardiogram. Based on 472 stroke events occurring during 10 years' follow-up from biennial examinations 9 and 14, stroke probabilities were computed using the Cox proportional hazards model for each sex based on a point system. On the basis of the risk factors in the profile, which can be readily determined on routine physical examination in a physician's office, stroke risk can be estimated. An individual's risk can be related to the average risk of stroke for persons of the same age and sex. The information that one's risk of stroke is several times higher than average may provide the impetus for risk factor modification. It may also help to identify persons at substantially increased stroke risk resulting from borderline levels of multiple risk factors such as those with mild or borderline hypertension and facilitate multifactorial risk factor modification." }, { "pmid": "17586511", "title": "Stroke risk prediction model: a risk profile from the Korean study.", "abstract": "BACKGROUND AND PURPOSE\nThe objective of this study was to develop the stroke risk prediction model among Korean population with high risk of stroke.\n\n\nMETHODS\nThe data in this prospective cohort study came from 47,233 stroke events occurring over 13 years among 1,223,740 Koreans, aged 30-84 years, who were insured by the National Health Insurance Corporation (NHIC) and take a biennial medical examination from 1992 to 1995. 
The Cox proportional Hazard Model was used to develop the Korean Stroke Risk Prediction (KSRP) model for each sex. Also, the split-half method was applied for developing a model with the first half and for testing with the rest.\n\n\nRESULTS\nThe average 10-year risk for stroke was 3.52% for men and 3.66% for women. In general, actual stroke event rates were similar to the event rates predicted by the KSRP model. The discrimination using the KSRP model in the Korean cohort was high: the area under the receiver operating characteristic curve was 0.8165 [95% confidence interval (CI), 0.7993-0.8337] for men and 0.8095 (0.7875-0.8315) for women. A graded association between predicted stroke risk and actual stroke event was observed in men [highest versus lowest deciles of the predicted risk (hazard ratio (HR) 63.17; 95% confidence interval (CI), 52.30-76.31)] and in women (HR, 120.34; 95% CI, 85.31-169.77).\n\n\nCONCLUSIONS\nThe KSRP model could be used to predict the risk of stroke and would provide a useful guide to identify the groups at high risk for stroke among Korean." }, { "pmid": "20671251", "title": "Constructing the prediction model for the risk of stroke in a Chinese population: report from a cohort study in Taiwan.", "abstract": "BACKGROUND AND PURPOSE\nPrediction rules for the risk of stroke have been proposed. However, most studies were conducted with whites or for secondary prevention, and it is not clear whether these models apply to the Chinese population. The purpose of this study was to construct a simple points-based clinical model for predicting incident stroke among Chinese adults in Taiwan.\n\n\nMETHODS\nWe estimated the 10-year risk of stroke in a cohort study of middle-aged and elderly participants who were free from stroke at baseline. Multivariate Cox model-derived coefficients were used to construct the simple points-based clinical and biochemical model and the prediction measures using the area under the receive operating characteristic curve, net reclassification improvement, and integrated discrimination improvement statistics were applied.\n\n\nRESULTS\nOf the 3513 participants without stroke at baseline, 240 incident cases of stroke were documented for a median 15.9-year follow-up. Age (8 points), gender (1 point), systolic blood pressure (3 points), diastolic blood pressure (2 points), family history of stroke (1 point), atrial fibrillation (3 points), and diabetes (1 point) were found to significantly predict stroke events. The estimated area under the receive operating characteristic curve for this clinical points-based model was 0.772 (95% CI, 0.744 to 0.799). The discrimination ability of this clinical model was similar to the coefficients-based models and better than available stroke models.\n\n\nCONCLUSIONS\nWe have constructed a model for predicting 15-year incidence of stroke in Chinese adults and this model may be useful in identifying individuals at high risk of stroke." }, { "pmid": "25819778", "title": "A novel risk score to predict cardiovascular disease risk in national populations (Globorisk): a pooled analysis of prospective cohorts and health examination surveys.", "abstract": "BACKGROUND\nTreatment of cardiovascular risk factors based on disease risk depends on valid risk prediction equations. 
We aimed to develop, and apply in example countries, a risk prediction equation for cardiovascular disease (consisting here of coronary heart disease and stroke) that can be recalibrated and updated for application in different countries with routinely available information.\n\n\nMETHODS\nWe used data from eight prospective cohort studies to estimate coefficients of the risk equation with proportional hazard regressions. The risk prediction equation included smoking, blood pressure, diabetes, and total cholesterol, and allowed the effects of sex and age on cardiovascular disease to vary between cohorts or countries. We developed risk equations for fatal cardiovascular disease and for fatal plus non-fatal cardiovascular disease. We validated the risk equations internally and also using data from three cohorts that were not used to create the equations. We then used the risk prediction equation and data from recent (2006 or later) national health surveys to estimate the proportion of the population at different levels of cardiovascular disease risk in 11 countries from different world regions (China, Czech Republic, Denmark, England, Iran, Japan, Malawi, Mexico, South Korea, Spain, and USA).\n\n\nFINDINGS\nThe risk score discriminated well in internal and external validations, with C statistics generally 70% or more. At any age and risk factor level, the estimated 10 year fatal cardiovascular disease risk varied substantially between countries. The prevalence of people at high risk of fatal cardiovascular disease was lowest in South Korea, Spain, and Denmark, where only 5-10% of men and women had more than a 10% risk, and 62-77% of men and 79-82% of women had less than a 3% risk. Conversely, the proportion of people at high risk of fatal cardiovascular disease was largest in China and Mexico. In China, 33% of men and 28% of women had a 10-year risk of fatal cardiovascular disease of 10% or more, whereas in Mexico, the prevalence of this high risk was 16% for men and 11% for women. The prevalence of less than a 3% risk was 37% for men and 42% for women in China, and 55% for men and 69% for women in Mexico.\n\n\nINTERPRETATION\nWe developed a cardiovascular disease risk equation that can be recalibrated for application in different countries with routinely available information. The estimated percentage of people at high risk of fatal cardiovascular disease was higher in low-income and middle-income countries than in high-income countries.\n\n\nFUNDING\nUS National Institutes of Health, UK Medical Research Council, Wellcome Trust." }, { "pmid": "27913366", "title": "$\\mathtt {Deepr}$: A Convolutional Net for Medical Records.", "abstract": "Feature engineering remains a major bottleneck when creating predictive systems from electronic medical records. At present, an important missing element is detecting predictive regular clinical motifs from irregular episodic records. We present Deepr (short for Deep record), a new end-to-end deep learning system that learns to extract features from medical records and predicts future risk automatically. Deepr transforms a record into a sequence of discrete elements separated by coded time gaps and hospital transfers. On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk. Deepr permits transparent inspection and visualization of its inner working. We validate Deepr on hospital data to predict unplanned readmission after discharge. 
Deepr achieves superior accuracy compared to traditional techniques, detects meaningful clinical motifs, and uncovers the underlying structure of the disease and intervention space." }, { "pmid": "22566982", "title": "Lower hemoglobin correlates with larger stroke volumes in acute ischemic stroke.", "abstract": "BACKGROUND\nHemoglobin tetramers are the major oxygen-carrying molecules within the blood. We hypothesized that a lower hemoglobin level and its reduced oxygen-carrying capacity would associate with larger infarction in acute ischemic stroke patients.\n\n\nMETHODS\nWe studied 135 consecutive patients with acute ischemic stroke and perfusion brain MRI. We explored the association of admission hemoglobin with initial infarct volumes on acute images and the volume of infarct expansion on follow-up images. Multivariable linear regression was performed to analyze the independent effect of hemoglobin on imaging outcomes.\n\n\nRESULTS\nBivariate analyses showed a significant inverse correlation between hemoglobin and initial volume in diffusion-weighted imaging (r = -0.20, p = 0.02) and absolute infarct growth (r = -0.20, p = 0.02). Multivariable linear regression modeling revealed that hemoglobin remained independently predictive of larger infarct volumes acutely (p < 0.005) and with greater infarct expansion (p < 0.01) after adjusting for known covariates.\n\n\nCONCLUSIONS\nHemoglobin level at the time of acute ischemic stroke associates with larger infarcts and increased infarct growth. Clarification of the mechanism of this effect may yield novel insights for therapy." }, { "pmid": "1955687", "title": "The red blood cell distribution width.", "abstract": "The availability of automated blood cell analyzers that provide an index of red blood cell distribution width (RDW) has lead to new approaches to patients with anemia. While the emergency physician is primarily responsible for the detection of patients with anemia, the inclusion of the RDW in the complete blood count has made diagnosing certain anemias easier, especially those that are microcytic. The derivation of the RDW and its clinical application to emergency physicians is discussed and a categorization of anemias based on the mean corpuscular volume (MCV) and RDW is included." }, { "pmid": "19028393", "title": "Elevated red blood cell distribution width predicts mortality in persons with known stroke.", "abstract": "BACKGROUND\nRed cell distribution width (RDW) is a hematological parameter routinely obtained as part of the complete blood count. Recently, RDW has emerged as a potential independent predictor of clinical outcome in patients with established cardiovascular disease. However, little is known about the role of RDW as a prognosticator among persons with stroke, especially with regard to an incontrovertible endpoint like mortality. We assessed the association of RDW with stroke, and its effect on mortality among persons with stroke.\n\n\nMETHODS\nData from the National Health and Nutrition Examination Survey (NHANES) a nationally representative sample of United States adults were analyzed. The study population consisted of 480 individuals aged > or =25 years with a baseline history of stroke followed-up from survey participation (1988-1994) through mortality assessment in 2000. Proportional hazard regression (Cox) was utilized to explore the independent relationship between RDW and mortality after adjusting for potential confounders.\n\n\nRESULTS\nAmong the cohort, 52.4% were female, 64% aged > or =65 years. 
Mean RDW was significantly higher among persons with stroke compared to individuals without a stroke (13.7% vs.13.2%,p<0.001). Baseline RDW was higher among persons with known stroke who later died vs. remained alive (13.9% vs.13.4%,p<0.001). After adjusting for confounders, those with elevated RDW (fourth vs. first quartile) were more likely to have experienced a stroke (OR 1.71, CI=1.20-2.45). Higher RDW level (fourth vs. first quartile) among those with known stroke independently predicted subsequent cardiovascular deaths (HR=2.38 and CI=1.41-4.01) and all-cause deaths (HR=2.0, CI=1.25-3.20).\n\n\nCONCLUSIONS\nElevated RDW is associated with stroke occurrence and strongly predicts both cardiovascular and all-cause deaths in persons with known stroke." }, { "pmid": "20123229", "title": "Prognostic role of mean platelet volume and platelet count in ischemic and hemorrhagic stroke.", "abstract": "BACKGROUND\nMean platelet volume (MPV) is an indicator of platelet function or reactivity. Platelets play an important role in the pathophysiology of ischemic stroke but the effect of platelet count (PC) and dysfunction in the pathogenesis of hemorrhagic stroke is poorly understood. We have investigated the possibility of MPV and PC being an independent risk factor of ischemic and haemorrhagic stroke and their effect on prognosis.\n\n\nMETHODS\nWe prospectively studied 692 patients with either ischemic or hemorrhagic stroke and compared them with 208 control subjects with similar risk factors, but without evidence of vascular events. The association of MPV and PC with cause, localization, and size of the infarct or hemorrhage was examined. Prognosis was determined by Glasgow Outcome Scale. By multivariate logistic regression analysis, the influence of MPV and PC on stroke subtype and prognosis was determined.\n\n\nRESULTS\nMPV and PC were observed as independent risk factors for ischemic stroke (P = .007, odds ratio [OR] = 0.866; P = .000, OR = 0.996; 95% confidence interval [CI], respectively). There was a negative and significant correlation between PC and hemorrhagic stroke (P = .001), but no association was found with MPV (P > .05). MPV and PC were not statistically significant related to etiological subgroups, localization, and size of the infarct or hemorrhage (P > .05). Ischemic group MPV (P = .013, OR = 1.02, 95% CI) and hemorrhagic group PC were in correlation with worse outcome (P = .001, OR = 1.004, 95% CI).\n\n\nCONCLUSION\nMPV, may be an early and important predictor for the prognosis of ischemic stroke, whereas for hemorrhagic stroke PC has a role for outcome." }, { "pmid": "20200384", "title": "Glycated hemoglobin, diabetes, and cardiovascular risk in nondiabetic adults.", "abstract": "BACKGROUND\nFasting glucose is the standard measure used to diagnose diabetes in the United States. Recently, glycated hemoglobin was also recommended for this purpose.\n\n\nMETHODS\nWe compared the prognostic value of glycated hemoglobin and fasting glucose for identifying adults at risk for diabetes or cardiovascular disease. We measured glycated hemoglobin in whole-blood samples from 11,092 black or white adults who did not have a history of diabetes or cardiovascular disease and who attended the second visit (occurring in the 1990-1992 period) of the Atherosclerosis Risk in Communities (ARIC) study.\n\n\nRESULTS\nThe glycated hemoglobin value at baseline was associated with newly diagnosed diabetes and cardiovascular outcomes. 
For glycated hemoglobin values of less than 5.0%, 5.0 to less than 5.5%, 5.5 to less than 6.0%, 6.0 to less than 6.5%, and 6.5% or greater, the multivariable-adjusted hazard ratios (with 95% confidence intervals) for diagnosed diabetes were 0.52 (0.40 to 0.69), 1.00 (reference), 1.86 (1.67 to 2.08), 4.48 (3.92 to 5.13), and 16.47 (14.22 to 19.08), respectively. For coronary heart disease, the hazard ratios were 0.96 (0.74 to 1.24), 1.00 (reference), 1.23 (1.07 to 1.41), 1.78 (1.48 to 2.15), and 1.95 (1.53 to 2.48), respectively. The hazard ratios for stroke were similar. In contrast, glycated hemoglobin and death from any cause were found to have a J-shaped association curve. All these associations remained significant after adjustment for the baseline fasting glucose level. The association between the fasting glucose levels and the risk of cardiovascular disease or death from any cause was not significant in models with adjustment for all covariates as well as glycated hemoglobin. For coronary heart disease, measures of risk discrimination showed significant improvement when glycated hemoglobin was added to models including fasting glucose.\n\n\nCONCLUSIONS\nIn this community-based population of nondiabetic adults, glycated hemoglobin was similarly associated with a risk of diabetes and more strongly associated with risks of cardiovascular disease and death from any cause as compared with fasting glucose. These data add to the evidence supporting the use of glycated hemoglobin as a diagnostic test for diabetes." }, { "pmid": "28959475", "title": "High HbA1c is associated with higher risk of ischaemic stroke in Pakistani population without diabetes.", "abstract": "CONTEXT\nThe role of glycated haemoglobin (HbA1c) in the prediction of ischaemic stroke in individuals without diabetes is underestimated.\n\n\nAIMS\nWe performed a study to analyse the role of HbA1c in the risk prediction of ischaemic stroke in Pakistani population without diabetes. We further studied the difference between HbA1c values of individuals with diabetes and without diabetes with stroke.\n\n\nSETTINGS AND DESIGN\nSingle centre, case-control.\n\n\nMATERIALS AND METHODS\nIn phase I, a total of 233 patients without diabetes with ischaemic stroke and 245 as controls were enrolled. Association of HbA1c levels, lipid profiles and blood pressure recordings with ischaemic stroke was analysed. In phase II, comparison was done between diabetics and non-diabetics with stroke.\n\n\nSTATISTICAL ANALYSIS\nComparison of the mean variables was performed with Student's t-tests. Logistic regression analysis with ischaemic stroke as the dependent variable was performed for phase I.\n\n\nRESULTS\nIn phase I, the ischaemic stroke group had significantly higher HbA1c levels (5.9±2.9% vs 5.5±1.6%) compared with controls (p<0.05). Triglyceride cholesterol, high-density lipoprotein cholesterol, systolic blood pressure, diastolic blood pressure and HbA1c were the significant determinants of stroke (p<0.05). In phase II, mean HbA1c values were significantly higher in the diabetes group (7.6±2.1 vs 6.1±2.3) (p<0.05) but other parameters were not statistically significantly different (p>0.05).\n\n\nCONCLUSIONS\nHigher HbA1c indicated a significantly increased risk for ischaemic stroke. An HbA1c value above 5.6% (prediabetic range) predicted future risk of stroke and efforts to maintain glucose level within the normal range (≤5.6%) in individuals with high cardiovascular risk are important." } ]
Frontiers in Psychology
30546339
PMC6279862
10.3389/fpsyg.2018.02396
EmojiGrid: A 2D Pictorial Scale for the Assessment of Food Elicited Emotions
Research on food experience is typically challenged by the way questions are worded. We therefore developed the EmojiGrid: a graphical (language-independent) intuitive self-report tool to measure food-related valence and arousal. In a first experiment participants rated the valence and the arousing quality of 60 food images, using either the EmojiGrid or two independent visual analog scales (VAS). The valence ratings obtained with both tools strongly agree. However, the arousal ratings only agree for pleasant food items, but not for unpleasant ones. Furthermore, the results obtained with the EmojiGrid show the typical universal U-shaped relation between the mean valence and arousal that is commonly observed for a wide range of (visual, auditory, tactile, olfactory) affective stimuli, while the VAS tool yields a positive linear association between valence and arousal. We hypothesized that this disagreement reflects a lack of proper understanding of the arousal concept in the VAS condition. In a second experiment we attempted to clarify the arousal concept by asking participants to rate the valence and intensity of the taste associated with the perceived food items. After this adjustment the VAS and EmojiGrid yielded similar valence and arousal ratings (both showing the universal U-shaped relation between the valence and arousal). A comparison with the results from the first experiment showed that VAS arousal ratings strongly depended on the actual wording used, while EmojiGrid ratings were not affected by the framing of the associated question. This suggests that the EmojiGrid is largely self-explaining and intuitive. To test this hypothesis, we performed a third experiment in which participants rated food images using the EmojiGrid without an associated question, and we compared the results to those of the first two experiments. The EmojiGrid ratings obtained in all three experiments closely agree. We conclude that the EmojiGrid appears to be a valid and intuitive affective self-report tool that does not rely on written instructions and that can efficiently be used to measure food-related emotions.
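The linear-versus-U-shaped contrast reported in the abstract can be made concrete with a small curve-fitting sketch on synthetic ratings; the data below are generated for illustration and are not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
valence = np.linspace(1, 9, 25)                                       # mean valence per food image
arousal = 2.0 + 0.25 * (valence - 5.0) ** 2 + rng.normal(0, 0.3, 25)  # synthetic U-shaped arousal

lin = np.polyfit(valence, arousal, 1)    # linear model (the pattern seen with the VAS wording)
quad = np.polyfit(valence, arousal, 2)   # quadratic model (the U-shape seen with the EmojiGrid)

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

print("linear R^2:   ", round(r2(arousal, np.polyval(lin, valence)), 3))
print("quadratic R^2:", round(r2(arousal, np.polyval(quad, valence)), 3))
```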
Related Work

Affective Self-Report Through Cartoon Characters

The Affect Grid (Russell et al., 1989) is a two-dimensional labeled visual scale to assess affect along the principal dimensions of valence and arousal, based on Russell (1980)’s circumplex model of affect. The horizontal valence scale ranges from “unpleasant” (low negative valence) to “pleasant” (high positive valence). The vertical arousal scale ranges from “sleepiness” (low intensity – no arousal) to “high arousal” (high intensity). Four additional labels (“stress,” “excitement,” “depression,” and “relaxation”) clarify the meaning of the extreme emotions represented by the corners of the grid. Users mark the location on the grid that best corresponds to their affective state after perceiving a given stimulus. Hybrid abstract and pictorial versions of the Affect Grid have been created by labeling its axes either with icons of faces showing different emotional expressions (Schubert, 1999) or with abstract cartoon characters (Swindells et al., 2006; Cai and Lin, 2011). Although the Affect Grid has been applied to measure food-elicited emotions (Einöther et al., 2015; den Uijl et al., 2016b), none of these tools has been specifically designed to assess food-related emotions.

Other affective self-report tools use cartoon characters that express specific emotions through facial and bodily expressions. The rationale for their use is twofold. First, people can accurately identify discrete emotions from bodily signals such as facial expressions (Ekman, 1994) and body language (Wallbott, 1998) across cultures (Ekman and Friesen, 1971). Second, visually expressed emotions are hypothesized to more closely resemble intuitively experienced emotions (Dalenberg et al., 2014). Evidence for this hypothesis stems from EEG experiments showing that emotion processing is faster for facial expressions than for emotional words (Schacht and Sommer, 2009; Frühholz et al., 2011; Rellecke et al., 2011). Although none of the currently available cartoon-based self-assessment tools have been designed to measure food-related emotions, we first give a brief overview of the existing methods since they are closely related to the new tool presented later in Section 2.

The Self-Assessment Manikin (SAM; Bradley and Lang, 1994) is a pictorial assessment technique that enables users to report their momentary feelings of valence, arousal, and dominance by selecting, for each factor, the figure from a set of humanoid figures of varying intensity that best expresses their own feeling. Muñoz et al. (2010) introduced an additional SAM scale to measure food-related craving (the desire to consume; see also Miccoli et al., 2014). Although the SAM is widely used and extensively validated, it is generally acknowledged to have several serious drawbacks. First, users often misunderstand the depicted emotions; children in particular have difficulty understanding the SAM (Yusoff et al., 2013; Hayashi et al., 2016). While the valence dimension of the SAM is quite intuitive (depicted as the figure’s facial expression going from a frown to a smile), the dominance dimension (depicted as the size of the figure) is much harder to explain, and the arousal dimension (depicted as an “explosion” in the stomach area) is often misinterpreted (Broekens and Brinkman, 2013; Betella and Verschure, 2016; Chen et al., 2018).
Second, the method still requires a successive assessment of the stimulus on multiple dimensions separately.

Product Emotion Measurement Instrument (PrEmo) is a non-verbal, cross-culturally validated self-report instrument to measure 14 distinct emotions visualized by an animated cartoon character (Desmet et al., 2000; Laurans and Desmet, 2012). Users rate to what extent the animated figures express their feelings elicited by a stimulus, using a five-point scale. Although PrEmo has been applied to measure food-elicited emotions (Dalenberg et al., 2014; Gutjar et al., 2015; den Uijl et al., 2016b; He et al., 2016a,b), it was not designed for this purpose, and most of the displayed emotions (e.g., pride, hope, fascination, shame, fear, sadness) therefore have no evident relation to food experiences. Similar cartoon-based self-report tools representing a limited set of emotions are the Pictorial Mood Reporting Instrument (PMRI; Vastenburg et al., 2011), the pictorial ERF (Emotion Rating Figurines; Obaid et al., 2015), the LEMtool (Layered Emotion Measurement tool; Huisman and van Hout, 2008; Huisman et al., 2013), and Pick-A-Mood (Desmet et al., 2016). The Affective Slider is a digital scale composed of two vertically aligned sliders labeled with stylized facial expressions that represent pleasure and arousal (Betella and Verschure, 2016). Unlike the previous methods, the AffectButton (Broekens and Brinkman, 2013) and EMuJoy (Emotion measurement with Music by using a Joystick; Nagel et al., 2007) allow users to continuously adjust the emotional expression of a cartoon character (by moving a mouse-controlled cursor). However, these tools require the user to successively explore the entire affective space to find the desired expression each time a response is given, unlike the other graphical tools that provide an instantaneous overview of the affective input space.

Affective Self-Report Through Emoji

Emoji are pictographs or ideograms representing emotions, concepts, and ideas. They are widely used in electronic messages and Web pages to supplement or substitute for written text (Danesi, 2016). Facial emoji are typically used to change or accentuate the tone or meaning of a message, and they can help users express and transmit their intention more clearly and explicitly in computer-mediated communication (dos Reis et al., 2018). Emoji span a broad range of emotions, varying in valence (e.g., smiling face vs. angry face) and arousal (e.g., sleepy face vs. face with stuck-out tongue and winking eye). Although some facial emoji are open to multiple interpretations (Miller et al., 2016; Tigwell and Flatla, 2016), it has been found that emoji with similar facial expressions are typically attributed similar meanings (Moore et al., 2013; Jaeger and Ares, 2017) that are also to a large extent language independent (Kralj Novak et al., 2015). Emoji can elicit the same range of emotional responses as photographs of human faces (Moore et al., 2013). In contrast to photographs of human faces, emoji are not associated with overgeneralization (the misattribution of emotions and traits to neutral human faces that merely bear a subtle structural resemblance to emotional expressions; Said et al., 2009), or with racial, cultural, and sexual biases.

For a study on children’s sensitivity to mood in music, Giomo (1993) developed a non-verbal response instrument using schematic faces arranged in a semantic differential format along three lines corresponding to each of the three musical mood dimensions defined by Wedin (1972).
By marking the most appropriate facial expression, children used the tool to report the mood they perceived in musical pieces.

Schubert (1999) developed the interactive Two-Dimensional Emotion-Space (2DES) graphic response tool to enable continuous measurement of perceived emotions in music. The 2DES tool consists of a square Affect Grid (with valence along the horizontal and arousal along the vertical axis) with schematic faces (showing only eyes and a mouth) arranged at the corners and the midpoints of the four sides of the grid. No further labels are provided. The human–computer interface records cursor movements within the square. The schematic faces represent the arousal dimension by the size of the mouth and the eye opening, while the valence dimension is represented by the concavity of the mouth. These features are based on the literature on facial expression (Ekman et al., 1971). An extensive evaluation study showed that the instrument was intuitive to use and had significant reliability and validity (Schubert, 1999). The author suggested that the tool could be applied to measure emotion felt in response to a stimulus rather than emotion expressed by the stimulus (Schubert, 1999).

Russkman (Russell and Ekman; Sánchez et al., 2006) is an interactive graphic response tool consisting of a set of emoji expressing 28 affective states on three levels of intensity. Russkman is based on Russell (1980)’s circumplex model of affect and Ekman’s facial Action Coding System (FACS; Ekman and Rosenberg, 2004) and was developed to convey mood and emotion in instant messaging. The user can select a specific emotion by moving a cursor on top of one of the four icons representing the quadrants of an Affect Grid, which then expands, making all icons in that quadrant available for selection.

To make the SAM more accessible to children, Hayashi et al. (2016) replaced the cartoon characters with emoji. Their five-point “emoti-SAM” was quickly grasped by children and effectively used in both an online and a paper version.

Swaney-Stueve et al. (2018) developed a seven-point bipolar valence scale labeled with emoji. They compared this scale to a nine-point verbal liking scale in an online experiment in which children reported their affective responses to different pizza flavors and situations. Both scales yielded similar response distributions with a strong positive linear correlation (R2 > 0.99 for both pizza flavors and situations). They concluded that further research was needed to extend their unidimensional emoji scale into a two-dimensional one that also measures arousal.

Emoji-based rating tools are increasingly popular as self-report instruments (Kaye et al., 2017) to measure, for instance, user and consumer experience (e.g., www.emojiscore.com). For example, Moore et al. (2013) developed a nine-point emoji scale to measure users’ affective responses to an online training simulation, and Alismail and Zhang (2018) used a five-point emoji scale to assess user experience with electronic questionnaires. While emoji typically express different degrees of valence and arousal (Moore et al., 2013), previous studies only validated (Aluja et al., 2018) and used (Moore et al., 2013; Alismail and Zhang, 2018) the valence dimension.

While people do not easily name food-related emotions, they appear to use emoji in a spontaneous and intuitive way to communicate food-related emotional experiences (Vidal et al., 2016).
Previous studies found that emoji can serve as a direct self-report tool for measuring food-related affective feelings (Vidal et al., 2016; Ares and Jaeger, 2017; Gallo et al., 2017; Jaeger et al., 2017b, 2018a; Schouteten et al., 2018). However, these previous studies used subsets of the most popular and currently available emoji, most of which show facial expressions that have no clear relation to food experiences. Also, the size of these sets (33 emoji: Ares and Jaeger, 2017; Jaeger et al., 2017b, 2018a; Schouteten et al., 2018; 25–39 emoji: Jaeger et al., 2017a; and 50 emoji: Gallo et al., 2017) is rather overwhelming and comparable to the large number of words typically used in emotional lexicons to measure emotional associations to food and beverages (e.g., King and Meiselman, 2010; Spinelli et al., 2014; Nestrud et al., 2016). These large set sizes make emoji-based rating or selection procedures quite inefficient. Sets of emoji were used in both check-all-that-apply (CATA) (Ares and Jaeger, 2017; Jaeger et al., 2017a,b; Schouteten et al., 2018) and rate-all-that-apply (RATA; Ares and Jaeger, 2017) questionnaires. In general, these studies found that emoji are able to discriminate well between hedonically diverse stimuli, while the reproducibility of the emotional profiles was quite high (Jaeger et al., 2017b). Compared with other non-verbal methods that use cartoon figures to represent different emotions (e.g., Desmet et al., 2012; Laurans and Desmet, 2012; Huisman et al., 2013), emoji characters appear to have the advantage of being more familiar to users. It seems that users easily connect emoji to food-elicited emotions, even without any explicit reference to feelings in the wording of the associated question (Ares and Jaeger, 2017). Given that emotions in facial expressions, gestures, and body postures are similarly perceived across different cultures (Ekman and Friesen, 1971; Ekman, 1994), cross-cultural differences in the interpretation of emoji could also be smaller than the influences of culture and language on verbal affective self-report tasks (Torrico et al., 2018). Also, emoji provide a visual display of emotion, making them beneficial for use with children who may not have the vocabulary to convey all their emotions (Gallo et al., 2017; Schouteten et al., 2018).

For repeated or routine testing in applied settings, selecting emoji from a long list of possible candidates may be too demanding a task, and shorter tests are therefore required. The emoji used to measure food-related emotions in previous studies (Ares and Jaeger, 2017; Gallo et al., 2017; Jaeger et al., 2017a,b; Schouteten et al., 2018) were not specifically developed for this purpose but were merely selected as the most appropriate ones from the general set of available emoji. As a result, several emoji were obviously out of context and had no relevance for the description of food-related affective associations (Jaeger et al., 2017b). Also, the most frequently used emoji are primarily associated with positive emotional experiences, reflecting the dominance of positive emotions in food consumption (hedonic asymmetry; Desmet and Schifferstein, 2008). Hence, there is a need for a set of emoji that (1) specifically relate to food experience and (2) span the entire hedonic continuum from negative to positive emotions.
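The grid-based tools reviewed in this section (the Affect Grid, 2DES, and the EmojiGrid itself) all reduce a marked position to a pair of valence and arousal scores. A minimal sketch of that mapping is shown below; the 1–9 response scale and the top-left screen origin are assumptions, since the individual tools differ in their exact conventions.

```python
def grid_to_scores(x_px, y_px, width, height, lo=1.0, hi=9.0):
    """Map a marked position on a valence-arousal grid to (valence, arousal) scores.
    Assumes a top-left pixel origin and a lo..hi response scale."""
    valence = lo + (x_px / width) * (hi - lo)         # left -> right: unpleasant -> pleasant
    arousal = lo + (1.0 - y_px / height) * (hi - lo)  # bottom -> top: calm -> aroused
    return valence, arousal

print(grid_to_scores(450, 60, width=600, height=600))  # upper-right area: pleasant and arousing
```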
[ "29455232", "12536208", "28784478", "3367283", "23055170", "12379594", "26849361", "25009514", "7962581", "26344127", "15701224", "25521352", "15944134", "17945385", "8165272", "5542557", "23459781", "21440031", "19653766", "14992636", "18279324", "18056086", "27340136", "25336280", "30007739", "29803492", "29937744", "28107838", "27330520", "26641093", "23231533", "27102867", "30122793", "12124723", "22617651", "29148306", "28382006", "23996831", "28544868", "27513636", "25490404", "17695356", "24709484", "21794970", "16224608", "28316576", "18824074", "19348537", "19097677", "17418416", "18839484", "15703257", "12925283", "28632746", "12899361", "17495173", "26873934", "21787078", "4645509", "21742041", "26181746", "16192380", "26244107", "27978493", "24141714" ]
[ { "pmid": "29455232", "title": "Startle reflex modulation by affective face \"Emoji\" pictographs.", "abstract": "The current research was designed to assess possible differences in the emotional content of pleasant and unpleasant face emoji using acoustically evoked eyeblink startle reflex response. Stimuli were selected from Emojipedia Webpage. First, we assessed these stimuli with a previous independent sample of 190 undergraduate students (46 males and 144 females) mean age of 21.43 years (SD 3.89). A principal axis method was performed using the 30 selected emoji faces, extracting two factors (15 pleasant and 15 unpleasant emoji). Second, we measured the acoustic startle reflex modulation in 53 young adult women [mean age 22.13 years (SD 4.3)] during the viewing of each of the 30 emoji emotional faces in the context of the theory of motivation and emotion proposed by Lang (1995), but considering only the valence dimension. We expected to find higher acoustically evoked startle responses when viewing unpleasant emoji and lower responses for pleasant ones, similarly to the results obtained in the studies using human faces as emotional stimulus. An ANOVA was conducted to compare acoustic startle responses associated with pleasant and unpleasant emoji. Results yielded main effects for picture valence (λ = 0.80, F(1, 50) = 12.80, p = .001, η2 = 0.20). Post-hoc t test analysis indicated significant differences in the startle response between unpleasant (50.95 ± 1.75) and pleasant (49.14 ± 2.49) emoji (t (52) = 3.59, p = .001), with a Cohen's d = 0.495. Viewing affective facial emoji expressions modulates the acoustic startle reflex response according to their emotional content." }, { "pmid": "12536208", "title": "Dissociated neural representations of intensity and valence in human olfaction.", "abstract": "Affective experience has been described in terms of two primary dimensions: intensity and valence. In the human brain, it is intrinsically difficult to dissociate the neural coding of these affective dimensions for visual and auditory stimuli, but such dissociation is more readily achieved in olfaction, where intensity and valence can be manipulated independently. Using event-related functional magnetic resonance imaging (fMRI), we found amygdala activation to be associated with intensity, and not valence, of odors. Activity in regions of orbitofrontal cortex, in contrast, were associated with valence independent of intensity. These findings show that distinct olfactory regions subserve the analysis of the degree and quality of olfactory stimulation, suggesting that the affective representations of intensity and valence draw upon dissociable neural substrates." }, { "pmid": "28784478", "title": "A comparison of five methodological variants of emoji questionnaires for measuring product elicited emotional associations: An application with seafood among Chinese consumers.", "abstract": "Product insights beyond hedonic responses are increasingly sought and include emotional associations. Various word-based questionnaires for direct measurement exist and an emoji variant was recently proposed. Herein, emotion words are replaced with emoji conveying a range of emotions. Further assessment of emoji questionnaires is needed to establish their relevance in food-related consumer research. Methodological research contributes hereto and in the present research the effects of question wording and response format are considered. 
Specifically, a web study was conducted with Chinese consumers (n=750) using four seafood names as stimuli (mussels, lobster, squid and abalone). Emotional associations were elicited using 33 facial emoji. Explicit reference to \"how would you feel?\" in the question wording changed product emoji profiles minimally. Consumers selected only a few emoji per stimulus when using CATA (check-all-that-apply) questions, and layout of the CATA question had only a small impact on responses. A comparison of CATA questions with forced yes/no questions and RATA (rate-all-that-apply) questions revealed an increase in frequency of emoji use for yes/no questions, but not a corresponding improvement in sample discrimination. For the stimuli in this research, which elicited similar emotional associations, RATA was probably the best methodological choice, with 8.5 emoji being used per stimulus, on average, and increased sample discrimination relative to CATA (12% vs. 6-8%). The research provided additional support for the potential of emoji surveys as a method for measurement of emotional associations to foods and beverages and began contributing to development of guidelines for implementation." }, { "pmid": "3367283", "title": "The recognition of threatening facial stimuli.", "abstract": "Two studies examined the information that defines a threatening facial display. The first study identified those facial characteristics that distinguish between representations of threatening and nonthreatening facial displays. Masks that presented either threatening or nonthreatening facial displays were obtained from a number of non-Western cultures and scored for the presence of those facial features that discriminated between such displays in the drawings of two American samples. Threatening masks contained a significantly higher number of these characteristics across all cultures examined. The second study determined whether the information provided by the facial display might be more primary nonrepresentational visual patterns than facial features with obvious denotative meaning (e.g., diagonal lines rather than downturned eyebrows). The subjective response to sets of diagonal, angular, and curvilinear visual stimuli revealed that the nonrepresentational features of angularity and diagonality in the visual stimulus appeared to have the ability to evoke the subjective responses that convey the meaning of threat." }, { "pmid": "23055170", "title": "Seriousness checks are useful to improve data validity in online research.", "abstract": "Nonserious answering behavior increases noise and reduces experimental power; it is therefore one of the most important threats to the validity of online research. A simple way to address the problem is to ask respondents about the seriousness of their participation and to exclude self-declared nonserious participants from analysis. To validate this approach, a survey was conducted in the week prior to the German 2009 federal election to the Bundestag. Serious participants answered a number of attitudinal and behavioral questions in a more consistent and predictively valid manner than did nonserious participants. We therefore recommend routinely employing seriousness checks in online surveys to improve data validity." 
}, { "pmid": "12379594", "title": "Autonomic nervous system responses to odours: the role of pleasantness and arousal.", "abstract": "Perception of odours can provoke explicit reactions such as judgements of intensity or pleasantness, and implicit output such as skin conductance or heart rate variations. The main purpose of the present experiment was to ascertain: (i) the correlation between odour ratings (intensity, arousal, pleasantness and familiarity) and activation of the autonomic nervous system, and (ii) the inter-correlation between self-report ratings on intensity, arousal, pleasantness and familiarity dimensions in odour perception. Twelve healthy volunteers were tested in two separate sessions. Firstly, subjects were instructed to smell six odorants (isovaleric acid, thiophenol, pyridine, L-menthol, isoamyl acetate, and 1-8 cineole), while skin conductance and heart rate variations were being measured. During this phase, participants were not asked to give any judgement about the odorants. Secondly, subjects were instructed to rate the odorants on dimensions of intensity, pleasantness, arousal and familiarity (self-report ratings), by giving a mark between 1 (not at all intense, arousing, pleasant or familiar) and 9 (extremely intense, arousing, pleasant or familiar). Results indicated: (i) a pleasantness factor correlated with heart rate variations, (ii) an arousal factor correlated with skin conductance variations, and (iii) a strong correlation between the arousal and intensity dimensions. In conclusion, given that these correlations are also found in other studies using visual and auditory stimuli, these findings provide preliminary information suggesting that autonomic variations in response to olfactory stimuli are probably not modality specific, and may be organized along two main dimensions of pleasantness and arousal, at least for the parameters considered (i.e. heart rate and skin conductance)." }, { "pmid": "26849361", "title": "The Affective Slider: A Digital Self-Assessment Scale for the Measurement of Human Emotions.", "abstract": "Self-assessment methods are broadly employed in emotion research for the collection of subjective affective ratings. The Self-Assessment Manikin (SAM), a pictorial scale developed in the eighties for the measurement of pleasure, arousal, and dominance, is still among the most popular self-reporting tools, despite having been conceived upon design principles which are today obsolete. By leveraging on state-of-the-art user interfaces and metacommunicative pictorial representations, we developed the Affective Slider (AS), a digital self-reporting tool composed of two slider controls for the quick assessment of pleasure and arousal. To empirically validate the AS, we conducted a systematic comparison between AS and SAM in a task involving the emotional assessment of a series of images taken from the International Affective Picture System (IAPS), a database composed of pictures representing a wide range of semantic categories often used as a benchmark in psychological studies. Our results show that the AS is equivalent to SAM in the self-assessment of pleasure and arousal, with two added advantages: the AS does not require written instructions and it can be easily reproduced in latest-generation digital devices, including smartphones and tablets. Moreover, we compared new and normative IAPS ratings and found a general drop in reported arousal of pictorial stimuli. 
Not only do our results demonstrate that legacy scales for the self-report of affect can be replaced with new measurement tools developed in accordance with modern design principles, but also that standardized sets of stimuli which are widely adopted in research on human emotion are not as effective as they were in the past due to a general desensitization towards highly arousing content." }, { "pmid": "25009514", "title": "Food-pics: an image database for experimental research on eating and appetite.", "abstract": "Our current environment is characterized by the omnipresence of food cues. The sight and smell of real foods, but also graphical depictions of appetizing foods, can guide our eating behavior, for example, by eliciting food craving and influencing food choice. The relevance of visual food cues on human information processing has been demonstrated by a growing body of studies employing food images across the disciplines of psychology, medicine, and neuroscience. However, currently used food image sets vary considerably across laboratories and image characteristics (contrast, brightness, etc.) and food composition (calories, macronutrients, etc.) are often unspecified. These factors might have contributed to some of the inconsistencies of this research. To remedy this, we developed food-pics, a picture database comprising 568 food images and 315 non-food images along with detailed meta-data. A total of N = 1988 individuals with large variance in age and weight from German speaking countries and North America provided normative ratings of valence, arousal, palatability, desire to eat, recognizability and visual complexity. Furthermore, data on macronutrients (g), energy density (kcal), and physical image characteristics (color composition, contrast, brightness, size, complexity) are provided. The food-pics image database is freely available under the creative commons license with the hope that the set will facilitate standardization and comparability across studies and advance experimental research on the determinants of eating behavior." }, { "pmid": "7962581", "title": "Measuring emotion: the Self-Assessment Manikin and the Semantic Differential.", "abstract": "The Self-Assessment Manikin (SAM) is a non-verbal pictorial assessment technique that directly measures the pleasure, arousal, and dominance associated with a person's affective reaction to a wide variety of stimuli. In this experiment, we compare reports of affective experience obtained using SAM, which requires only three simple judgments, to the Semantic Differential scale devised by Mehrabian and Russell (An approach to environmental psychology, 1974) which requires 18 different ratings. Subjective reports were measured to a series of pictures that varied in both affective valence and intensity. Correlations across the two rating methods were high both for reports of experienced pleasure and felt arousal. Differences obtained in the dominance dimension of the two instruments suggest that SAM may better track the personal response to an affective stimulus. SAM is an inexpensive, easy method for quickly assessing reports of affective response in many contexts." }, { "pmid": "26344127", "title": "Standardized food images: A photographing protocol and image database.", "abstract": "The regulation of food intake has gained much research interest because of the current obesity epidemic.
For research purposes, food images are a good and convenient alternative for real food because many dietary decisions are made based on the sight of foods. Food pictures are assumed to elicit anticipatory responses similar to real foods because of learned associations between visual food characteristics and post-ingestive consequences. In contemporary food science, a wide variety of images are used which introduces between-study variability and hampers comparison and meta-analysis of results. Therefore, we created an easy-to-use photographing protocol which enables researchers to generate high resolution food images appropriate for their study objective and population. In addition, we provide a high quality standardized picture set which was characterized in seven European countries. With the use of this photographing protocol a large number of food images were created. Of these images, 80 were selected based on their recognizability in Scotland, Greece and The Netherlands. We collected image characteristics such as liking, perceived calories and/or perceived healthiness ratings from 449 adults and 191 children. The majority of the foods were recognized and liked at all sites. The differences in liking ratings, perceived calories and perceived healthiness between sites were minimal. Furthermore, perceived caloric content and healthiness ratings correlated strongly (r ≥ 0.8) with actual caloric content in both adults and children. The photographing protocol as well as the images and the data are freely available for research use on http://nutritionalneuroscience.eu/. By providing the research community with standardized images and the tools to create their own, comparability between studies will be improved and a head-start is made for a world-wide standardized food image database." }, { "pmid": "15701224", "title": "Implicit and explicit evaluation: FMRI correlates of valence, emotional intensity, and control in the processing of attitudes.", "abstract": "Previous work suggests that explicit and implicit evaluations (good-bad) involve somewhat different neural circuits that process different dimensions such as valence, emotional intensity, and complexity. To better understand these differences, we used functional magnetic resonance imaging to identify brain regions that respond differentially to such dimensions depending on whether or not an explicit evaluation is required. Participants made either good-bad judgments (evaluative) or abstract-concrete judgments (not explicitly evaluative) about socially relevant concepts (e.g., \"murder,\" \"happiness,\" \"abortion,\" \"welfare\"). After scanning, participants rated the concepts for goodness, badness, emotional intensity, and how much they tried to control their evaluation of the concept. Amygdala activation correlated with emotional intensity and right insula activation correlated with valence in both tasks, indicating that these aspects of stimuli were processed by these areas regardless of intention. In contrast, for the explicitly evaluative good-bad task only, activity in the anterior cingulate, frontal pole, and lateral areas of the orbital frontal cortex correlated with ratings of control, which in turn were correlated with a measure of ambivalence. These results highlight that evaluations are the consequence of complex circuits that vary depending on task demands."
}, { "pmid": "25521352", "title": "Evoked emotions predict food choice.", "abstract": "In the current study we show that non-verbal food-evoked emotion scores significantly improve food choice prediction over merely liking scores. Previous research has shown that liking measures correlate with choice. However, liking is no strong predictor for food choice in real life environments. Therefore, the focus within recent studies shifted towards using emotion-profiling methods that successfully can discriminate between products that are equally liked. However, it is unclear how well scores from emotion-profiling methods predict actual food choice and/or consumption. To test this, we proposed to decompose emotion scores into valence and arousal scores using Principal Component Analysis (PCA) and apply Multinomial Logit Models (MLM) to estimate food choice using liking, valence, and arousal as possible predictors. For this analysis, we used an existing data set comprised of liking and food-evoked emotions scores from 123 participants, who rated 7 unlabeled breakfast drinks. Liking scores were measured using a 100-mm visual analogue scale, while food-evoked emotions were measured using 2 existing emotion-profiling methods: a verbal and a non-verbal method (EsSense Profile and PrEmo, respectively). After 7 days, participants were asked to choose 1 breakfast drink from the experiment to consume during breakfast in a simulated restaurant environment. Cross validation showed that we were able to correctly predict individualized food choice (1 out of 7 products) for over 50% of the participants. This number increased to nearly 80% when looking at the top 2 candidates. Model comparisons showed that evoked emotions better predict food choice than perceived liking alone. However, the strongest predictive strength was achieved by the combination of evoked emotions and liking. Furthermore we showed that non-verbal food-evoked emotion scores more accurately predict food choice than verbal food-evoked emotions scores." }, { "pmid": "15944134", "title": "Cognitive modulation of olfactory processing.", "abstract": "We showed how cognitive, semantic information modulates olfactory representations in the brain by providing a visual word descriptor, \"cheddar cheese\" or \"body odor,\" during the delivery of a test odor (isovaleric acid with cheddar cheese flavor) and also during the delivery of clean air. Clean air labeled \"air\" was used as a control. Subjects rated the affective value of the test odor as significantly more unpleasant when labeled \"body odor\" than when labeled \"cheddar cheese.\" In an event-related fMRI design, we showed that the rostral anterior cingulate cortex (ACC)/medial orbitofrontal cortex (OFC) was significantly more activated by the test stimulus and by clean air when labeled \"cheddar cheese\" than when labeled \"body odor,\" and the activations were correlated with the pleasantness ratings. This cognitive modulation was also found for the test odor (but not for the clean air) in the amygdala bilaterally." }, { "pmid": "17945385", "title": "Sources of positive and negative emotions in food experience.", "abstract": "Emotions experienced by healthy individuals in response to tasting or eating food were examined in two studies. In the first study, 42 participants reported the frequency with which 22 emotion types were experienced in everyday interactions with food products, and the conditions that elicited these emotions. 
In the second study, 124 participants reported the extent to which they experienced each emotion type during sample tasting tests for sweet bakery snacks, savoury snacks, and pasta meals. Although all emotions occurred from time to time in response to eating or tasting food, pleasant emotions were reported more often than unpleasant ones. Satisfaction, enjoyment, and desire were experienced most often, and sadness, anger, and jealousy least often. Participants reported a wide variety of eliciting conditions, including statements that referred directly to sensory properties and experienced consequences, and statements that referred to more indirect conditions, such as expectations and associations. Five different sources of food emotions are proposed to represent the various reported eliciting conditions: sensory attributes, experienced consequences, anticipated consequences, personal or cultural meanings, and actions of associated agents." }, { "pmid": "8165272", "title": "Strong evidence for universals in facial expressions: a reply to Russell's mistaken critique.", "abstract": "J. A. Russell (1994) misrepresents what universality means, misinterprets the evidence from past studies, and fails to consider or report findings that disagree with his position. New data are introduced that decisively answer the central question that Russell raises about the use of a forced-choice format in many of the past studies. This article also shows that his many other qualms about other aspects of the design of the studies of literate cultures have no merit. Russell's critique of the preliterate cultures is inaccurate; he does not fully disclose what those who studied preliterate subjects did or what they concluded that they had found. Taking account of all of Russell's qualms, my analysis shows that the evidence from both literate and preliterate cultures is overwhelming in support of universals in facial expressions." }, { "pmid": "23459781", "title": "The FoodCast research image database (FRIDa).", "abstract": "In recent years we have witnessed an increasing interest in food processing and eating behaviors. This is probably due to several reasons. The biological relevance of food choices, the complexity of the food-rich environment in which we presently live (making food-intake regulation difficult), and the increasing health care cost due to illness associated with food (food hazards, food contamination, and aberrant food-intake). Despite the importance of the issues and the relevance of this research, comprehensive and validated databases of stimuli are rather limited, outdated, or not available for non-commercial purposes to independent researchers who aim at developing their own research program. The FoodCast Research Image Database (FRIDa) we present here includes 877 images belonging to eight different categories: natural-food (e.g., strawberry), transformed-food (e.g., french fries), rotten-food (e.g., moldy banana), natural-non-food items (e.g., pinecone), artificial food-related objects (e.g., teacup), artificial objects (e.g., guitar), animals (e.g., camel), and scenes (e.g., airport). FRIDa has been validated on a sample of healthy participants (N = 73) on standard variables (e.g., valence, familiarity, etc.) as well as on other variables specifically related to food items (e.g., perceived calorie content); it also includes data on the visual features of the stimuli (e.g., brightness, high frequency power, etc.). 
FRIDa is a well-controlled, flexible, validated, and freely available (http://foodcast.sissa.it/neuroscience/) tool for researchers in a wide range of academic fields and industry." }, { "pmid": "21440031", "title": "Time course of implicit processing and explicit processing of emotional faces and emotional words.", "abstract": "Facial expressions are important emotional stimuli during social interactions. Symbolic emotional cues, such as affective words, also convey information regarding emotions that is relevant for social communication. Various studies have demonstrated fast decoding of emotions from words, as was shown for faces, whereas others report a rather delayed decoding of information about emotions from words. Here, we introduced an implicit (color naming) and explicit task (emotion judgment) with facial expressions and words, both containing information about emotions, to directly compare the time course of emotion processing using event-related potentials (ERP). The data show that only negative faces affected task performance, resulting in increased error rates compared to neutral faces. Presentation of emotional faces resulted in a modulation of the N170, the EPN and the LPP components and these modulations were found during both the explicit and implicit tasks. Emotional words only affected the EPN during the explicit task, but a task-independent effect on the LPP was revealed. Finally, emotional faces modulated source activity in the extrastriate cortex underlying the generation of the N170, EPN and LPP components. Emotional words led to a modulation of source activity corresponding to the EPN and LPP, but they also affected the N170 source on the right hemisphere. These data show that facial expressions affect earlier stages of emotion processing compared to emotional words, but the emotional value of words may have been detected at early stages of emotional processing in the visual cortex, as was indicated by the extrastriate source activity." }, { "pmid": "19653766", "title": "How liked and disliked foods affect time perception.", "abstract": "The purpose of this study was to investigate the influence on time perception of pictures showing liked or disliked foods in comparison with a neutral picture. Healthy adults performed a temporal bisection task in which they had to categorize the presentation duration of pictures (neutral, liked, and disliked foods) as more similar to a short (400 ms) or to a long (1,600 ms) standard duration. The data revealed that the presentation duration of food pictures was underestimated compared with the presentation duration of the neutral picture, and that this underestimation was more marked for the disliked than for the liked food pictures. These results are consistent with the idea that this time underestimation arises from an attentional-bias mechanism. The food pictures, and particularly those depicting disliked food items, distracted attention away from the processing of time." }, { "pmid": "14992636", "title": "Should we trust web-based studies? A comparative analysis of six preconceptions about internet questionnaires.", "abstract": "The rapid growth of the Internet provides a wealth of new research opportunities for psychologists. Internet data collection methods, with a focus on self-report questionnaires from self-selected samples, are evaluated and compared with traditional paper-and-pencil methods. 
Six preconceptions about Internet samples and data quality are evaluated by comparing a new large Internet sample (N = 361,703) with a set of 510 published traditional samples. Internet samples are shown to be relatively diverse with respect to gender, socioeconomic status, geographic region, and age. Moreover, Internet findings generalize across presentation formats, are not adversely affected by nonserious or repeat responders, and are consistent with findings from traditional methods. It is concluded that Internet methods can contribute to many areas of psychology." }, { "pmid": "18279324", "title": "Selective attention to affective value alters how the brain processes taste stimuli.", "abstract": "How does selective attention to affect influence sensory processing? In an fMRI investigation, when subjects were instructed to remember and rate the pleasantness of a taste stimulus, 0.1 M monosodium glutamate, activations were greater in the medial orbitofrontal and pregenual cingulate cortex than when subjects were instructed to remember and rate the intensity of the taste. When the subjects were instructed to remember and rate the intensity, activations were greater in the insular taste cortex. An interaction analysis showed that this dissociation of taste processing, depending on whether attention to pleasantness or intensity was relevant, was highly significant (P < 0.0002). Thus, depending on the context in which tastes are presented and whether affect is relevant, the brain responds to a taste differently. These findings show that, when attention is paid to affective value, the brain systems engaged to represent the sensory stimulus of taste are different from those engaged when attention is directed to the physical properties of a stimulus such as its intensity. This differential biasing of brain regions engaged in processing a sensory stimulus, depending on whether the cognitive demand is for affect-related vs. more sensory-related processing, may be an important aspect of cognition and attention. This has many implications for understanding the effects not only of taste but also of other sensory stimuli." }, { "pmid": "18056086", "title": "How cognition modulates affective responses to taste and flavor: top-down influences on the orbitofrontal and pregenual cingulate cortices.", "abstract": "How cognition influences the affective brain representations of the taste and flavor of a food is important not only for understanding top-down influences in the brain, but also in relation to the topical issues of appetite control and obesity. We found using functional magnetic resonance imaging that activations related to the affective value of umami taste and flavor (as shown by correlations with pleasantness ratings) in the orbitofrontal cortex were modulated by word-level descriptors. Affect-related activations to taste were modulated in a region that receives from the orbitofrontal cortex, the pregenual cingulate cortex, and to taste and flavor in another region that receives from the orbitofrontal cortex, the ventral striatum. Affect-related cognitive modulations were not found in the insular taste cortex, where the intensity but not the pleasantness of the taste was represented. We conclude that top-down language-level cognitive effects reach far down into the earliest cortical areas that represent the appetitive value of taste and flavor. This is an important way in which cognition influences the neural mechanisms that control appetite." 
}, { "pmid": "27340136", "title": "Implicit and Explicit Measurements of Affective Responses to Food Odors.", "abstract": "One of the main functions of olfaction is to activate approach/avoidance behavior, toward or away from people, foods, or other odor sources. These behaviors are partly automated and therefore poorly accessible via introspection. Explicit tests need therefore be complemented by implicit tests to provide additional insights into the underlying processes of these behaviors. Affective responses to seven food odors plus one control nonodor were assessed in 28 female participants (18-30 years) using explicit tests [pleasantness, intensity, and non-verbal emotional ratings (PrEmo)] as well as implicit tests that reflect dynamic expressive emotional reactions (facial expressions) as well as behavioral-preparation responses (autonomic nervous system responses: heart rate, skin conductance, and skin temperature). Explicit tests showed significant differences in pleasantness (P < 0.05), and all PrEmo emotions (P < 0.05) except shame. Explicit emotional responses were summarized by valence (explaining 83% of the responses variance) and arousal (14%) as principal components. Early implicit facial and ANS responses (after 1s) seem to reflect the odors' arousal, whereas later ANS responses (after 3-4s) reflected the odors' valence. The results suggest that explicit measures primarily reflect the odors' valence, as result of from relatively long (conscious) processing, which may be less relevant for odor acceptance in the real world where fast and automated processes based on arousal may play a larger role." }, { "pmid": "25336280", "title": "Modulation of eyeblink and postauricular reflexes during the anticipation and viewing of food images.", "abstract": "One of the goals of neuroscience research on the reward system is to fractionate its functions into meaningful subcomponents. To this end, the present study examined emotional modulation of the eyeblink and postauricular components of startle in 60 young adults during anticipation and viewing of food images. Appetitive and disgusting photos served as rewards and punishments in a guessing game. Reflexes evoked during anticipation were not influenced by valence, consistent with the prevailing view that startle modulation indexes hedonic impact (liking) rather than incentive salience (wanting). During the slide-viewing period, postauricular reflexes were larger for correct than incorrect feedback, whereas the reverse was true for blink reflexes. Probes were delivered in brief trains, but only the first response exhibited this pattern. The specificity of affective startle modification makes it a valuable tool for studying the reward system." }, { "pmid": "30007739", "title": "Measuring consumers' product associations with emoji and emotion word questionnaires: case studies with tasted foods and written stimuli.", "abstract": "Measurement of emotional associations to food/beverage stimuli and consumption situations provide consumer insights that extend beyond hedonic responses. The aim of this research was to compare emoji, a novel approach in product-focused emotion research, with emotion words, an established approach. Focus was directed to questionnaires, which are popular in this field of research. The questionnaires were overall comparable in the meanings conveyed by the emoji/emotion words, and matched for length. 
Eight studies with a total of 1121 consumers in New Zealand and China were conducted with tasted foods and written stimuli. The studies were diverse and compatible with an explorative research strategy. While emoji, overall, were more discriminative than emotion words, the findings were highly study-specific. When tasted foods with medium/large sample differences were used, emoji and emotion words showed similar performance overall, although emotion words better discriminated between the most liked samples and emoji better discriminated between the lesser liked samples. When samples were more similar, emoji generally were more discriminative, although emotion words still discriminated well for the pairs of most liked samples. Among Chinese consumers, there was some evidence to suggest less suitability of emotion words to characterise and discriminate written stimuli that elicited negative emotions. Emoji profiles, on the other hand, fitted expectations, and this difference could be linked to the influence of national culture. Taken together, the results from this research suggest that emoji questionnaires can have some advantages. However, their multiple meanings can be an obstacle. Overall, practitioners are advised not to select the emotion questionnaire method independently from other experimental factors, but to make an informed study-specific decision as to the choice of emoji or emotion word questionnaires. Additional research that eliminates some of the differences between the studies in this research is recommended to corroborate the present conclusions." }, { "pmid": "29803492", "title": "Linking product-elicited emotional associations and sensory perceptions through a circumplex model based on valence and arousal: Five consumer studies.", "abstract": "Sensory product characterisation by consumers is increasingly supplemented by measurement of emotional associations. However, studies that link products' sensory perception and emotional associations are still scarce. Five consumer studies were conducted using cashew nuts, peanuts, chocolate, fruit and processed tomatoes as the product categories. Consumers (n = 685) completed check-all-that-apply (CATA) questions to obtain sensory product perceptions and associations with emotion words. The latter were conceptualised and interpreted through a circumplex emotion model spanned by the dimensions of valence (pleasure to displeasure) and arousal (activation to deactivation). Through regression analysis, sensory terms were mapped to the circumplex model to represent statistical linkages with emotion words. Within a given study, the linkages were generally interpretable. The most notable finding was the highly study-specific nature of the linkages, which was mainly attributed to the influence of product category. Methodological choices may also have been partly responsible for the differences. Three studies used a general emotion vocabulary (EsSense Profile®) and an identical number of sensory terms (n = 39). The less complete coverage of the emotional circumplex and the presence of synonymous sensory terms could have diminished the ability to interpret the results. Conversely, two studies used fewer emotion words and sensory terms and these, furthermore, were purposefully selected for the focal sets of samples. The linkages in these latter studies were more interpretable and this could suggest that customised vocabularies of modest length may be desirable when seeking to establish linkages between emotional associations and sensory characteristics of food/beverage stimuli.
Purposeful inclusion of emotion words that fully span the circumplex emotion model may also be desirable. Overall, the research represents a new method for establishing linkages between the sensory properties and emotional associations to food and beverage products." }, { "pmid": "29937744", "title": "Methods for Evaluating Emotions Evoked by Food Experiences: A Literature Review.", "abstract": "Besides sensory characteristics of food, food-evoked emotion is a crucial factor in predicting consumer's food preference and therefore in developing new products. Many measures have been developed to assess food-evoked emotions. The aim of this literature review is (i) to give an exhaustive overview of measures used in current research and (ii) to categorize these methods along measurement level (physiological, behavioral, and cognitive) and emotional processing level (unconscious sensory, perceptual/early cognitive, and conscious/decision making). This 3 × 3 categorization may help researchers to compile a set of complementary measures (\"toolbox\") for their studies. We included 101 peer-reviewed articles that evaluate consumer's emotions and were published between 1997 and 2016, providing us with 59 different measures. More than 60% of these measures are based on self-reported, subjective ratings and questionnaires (cognitive measurement level) and assess the conscious/decision-making level of emotional processing. This multitude of measures and their overrepresentation in a single category hinders the comparison of results across studies and building a complete multi-faceted picture of food-evoked emotions. We recommend (1) to use widely applied, validated measures only, (2) to refrain from using (highly correlated) measures from the same category but use measures from different categories instead, preferably covering all three emotional processing levels, and (3) to acquire and share simultaneously collected physiological, behavioral, and cognitive datasets to improve the predictive power of food choice and other models." }, { "pmid": "28107838", "title": "Emojis: Insights, Affordances, and Possibilities for Psychological Science.", "abstract": "We live in a digital society that provides a range of opportunities for virtual interaction. Consequently, emojis have become popular for clarifying online communication. This presents an exciting opportunity for psychologists, as these prolific online behaviours can be used to help reveal something unique about contemporary human behaviour." }, { "pmid": "27330520", "title": "A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research.", "abstract": "OBJECTIVE\nIntraclass correlation coefficient (ICC) is a widely used reliability index in test-retest, intrarater, and interrater reliability analyses. This article introduces the basic concept of ICC in the context of reliability analysis.\n\n\nDISCUSSION FOR RESEARCHERS\nThere are 10 forms of ICCs. Because each form involves distinct assumptions in their calculation and will lead to different interpretations, researchers should explicitly specify the ICC form they used in their calculation. A thorough review of the research design is needed in selecting the appropriate form of ICC to evaluate reliability.
The best practice of reporting ICC should include software information, \"model,\" \"type,\" and \"definition\" selections.\n\n\nDISCUSSION FOR READERS\nWhen coming across an article that includes ICC, readers should first check whether information about the ICC form has been reported and if an appropriate ICC form was used. Based on the 95% confidence interval of the ICC estimate, values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.90 are indicative of poor, moderate, good, and excellent reliability, respectively.\n\n\nCONCLUSION\nThis article provides a practical guideline for clinical researchers to choose the correct form of ICC and suggests the best practice of reporting ICC parameters in scientific publications. This article also gives readers an appreciation for what to look for when coming across ICC while reading an article." }, { "pmid": "26641093", "title": "Sentiment of Emojis.", "abstract": "There is a new generation of emoticons, called emojis, that is increasingly being used in mobile communications and social media. In the past two years, over ten billion emojis were used on Twitter. Emojis are Unicode graphic symbols, used as a shorthand to express concepts and ideas. In contrast to the small number of well-known emoticons that carry clear emotional contents, there are hundreds of emojis. But what are their emotional contents? We provide the first emoji sentiment lexicon, called the Emoji Sentiment Ranking, and draw a sentiment map of the 751 most frequently used emojis. The sentiment of the emojis is computed from the sentiment of the tweets in which they occur. We engaged 83 human annotators to label over 1.6 million tweets in 13 European languages by the sentiment polarity (negative, neutral, or positive). About 4% of the annotated tweets contain emojis. The sentiment analysis of the emojis allows us to draw several interesting conclusions. It turns out that most of the emojis are positive, especially the most popular ones. The sentiment distribution of the tweets with and without emojis is significantly different. The inter-annotator agreement on the tweets with emojis is higher. Emojis tend to occur at the end of the tweets, and their sentiment polarity increases with the distance. We observe no significant differences in the emoji rankings between the 13 languages and the Emoji Sentiment Ranking. Consequently, we propose our Emoji Sentiment Ranking as a European language-independent resource for automated sentiment analysis. Finally, the paper provides a formalization of sentiment and a novel visualization in the form of a sentiment bar." }, { "pmid": "23231533", "title": "The relation between valence and arousal in subjective experience.", "abstract": "Affect is basic to many if not all psychological phenomena. This article examines 2 of the most fundamental properties of affective experience--valence and arousal--asking how they are related to each other on a moment to moment basis. Over the past century, 6 distinct types of relations have been suggested or implicitly presupposed in the literature. We critically review the available evidence for each proposal and argue that the evidence does not provide a conclusive answer. Next, we use statistical modeling to verify the different proposals in 8 data sets (with Ns ranging from 80 to 1,417) where participants reported their affective experiences in response to experimental stimuli in laboratory settings or as momentary or remembered in natural settings.
We formulate 3 key conclusions about the relation between valence and arousal: (a) on average, there is a weak but consistent V-shaped relation of arousal as a function of valence, but (b) there is large variation at the individual level, so that (c) valence and arousal can in principle show a variety of relations depending on person or circumstances. This casts doubt on the existence of a static, lawful relation between valence and arousal. The meaningfulness of the observed individual differences is supported by their personality and cultural correlates. The malleability and individual differences found in the structure of affect must be taken into account when studying affect and its role in other psychological phenomena." }, { "pmid": "27102867", "title": "The Relation Between Valence and Arousal in Subjective Experience Varies With Personality and Culture.", "abstract": "OBJECTIVE\nWhile in general arousal increases with positive or negative valence (a so-called V-shaped relation), there are large differences among individuals in how these two fundamental dimensions of affect are related in people's experience. In two studies, we examined two possible sources of this variation: personality and culture.\n\n\nMETHOD\nIn Study 1, participants (Belgian university students) recalled a recent event that was characterized by high or low valence or arousal and reported on their feelings and their personality in terms of the Five-Factor Model. In Study 2, participants from Canada, China/Hong Kong, Japan, Korea, and Spain reported on their feelings in a thin slice of time and on their personality.\n\n\nRESULTS\nIn Study 1, we replicated the V-shape as characterizing the relation between valence and arousal, and identified personality correlates of experiencing particular valence-arousal combinations. In Study 2, we documented how the V-shaped relation varied as a function of Western versus Eastern cultural background and personality.\n\n\nCONCLUSIONS\nThe results showed that the steepness of the V-shaped relation between valence and arousal increases with Extraversion within cultures, and with a West-East distinction between cultures. Implications for the personality-emotion link and research on cultural differences in affect are discussed." }, { "pmid": "30122793", "title": "Simple geometric shapes are implicitly associated with affective value.", "abstract": "Growing evidence suggests that the underlying geometry of a visual image is an effective mechanism for conveying the affective meaning of a scene or object. Indeed, even very simple context-free geometric shapes have been shown to signal emotion. Specifically, downward-pointing V's are perceived as threatening and curvilinear forms are perceived as pleasant. As these shapes are thought to be primitive cues for decoding emotion, we sought to assess whether they are evaluated as affective even without extended cognitive processing. Using an Implicit Association Test to examine associations between three shapes (downward- and upward-pointing triangles, circles) and pleasant, unpleasant, and neutral scenes, in two studies we found that participants were faster to categorize downward-pointing triangles as unpleasant compared to neutral or pleasant. These findings were specific to downward-pointing shapes containing an acute angle. The present findings support the hypothesis that simple geometric forms convey emotion and that this perception does not require explicit judgment." 
}, { "pmid": "12124723", "title": "Cultural differences in responses to a Likert scale.", "abstract": "Cultural differences in responses to a Likert scale were examined. Self-identified Chinese, Japanese, and Americans (N=136, 323, and 160, respectively) recruited at ethnic or general supermarkets in Southern California completed a 13-question Sense of Coherence scale with a choice of either four, five, or seven responses in either Chinese, Japanese, or English. The Japanese respondents more frequently reported difficulty with the scale, the Chinese more frequently skipped questions, and both these groups selected the midpoint more frequently on items that involved admitting to a positive emotion than did the Americans, who were more likely to indicate a positive emotion. Construct validity of the scale tended to be better for the Chinese and the Americans when there were four response choices and for the Japanese when there were seven. Although culture affected response patterns, the association of sense of coherence and health was positive in all three cultural groups." }, { "pmid": "22617651", "title": "The brain basis of emotion: a meta-analytic review.", "abstract": "Researchers have wondered how the brain creates emotions since the early days of psychological science. With a surge of studies in affective neuroscience in recent decades, scientists are poised to answer this question. In this target article, we present a meta-analytic summary of the neuroimaging literature on human emotion. We compare the locationist approach (i.e., the hypothesis that discrete emotion categories consistently and specifically correspond to distinct brain regions) with the psychological constructionist approach (i.e., the hypothesis that discrete emotion categories are constructed of more general brain networks not specific to those categories) to better understand the brain basis of emotion. We review both locationist and psychological constructionist hypotheses of brain-emotion correspondence and report meta-analytic findings bearing on these hypotheses. Overall, we found little evidence that discrete emotion categories can be consistently and specifically localized to distinct brain regions. Instead, we found evidence that is consistent with a psychological constructionist approach to the mind: A set of interacting brain regions commonly involved in basic psychological operations of both an emotional and non-emotional nature are active during emotion experience and perception across a range of discrete emotion categories." }, { "pmid": "29148306", "title": "The face of wrath: The role of features and configurations in conveying social threat.", "abstract": "We examined the role of single features and feature configurations in the effect of schematic faces on rated threat. A total of 101 medical students rated their emotional impression of schematic facial stimuli using semantic differential scales (Activity, Negative Valence, and Potency). In different parts of the experiment, the ratings concerned single features, eyebrow-mouth configurations, or complete faces. Although eyebrows emerged as the overall most important feature, the effect of features was modulated by configuration. Simple configurations of eyebrows and mouth appeared to convey threat and nonthreat in a way highly similar to that of complete faces. In most cases, the configurations of eyebrows and mouth could significantly predict the effect of the complete faces." 
}, { "pmid": "28382006", "title": "Conducting Online Behavioral Research Using Crowdsourcing Services in Japan.", "abstract": "Recent research on human behavior has often collected empirical data from the online labor market, through a process known as crowdsourcing. As well as the United States and the major European countries, there are several crowdsourcing services in Japan. For research purpose, Amazon's Mechanical Turk (MTurk) is the widely used platform among those services. Previous validation studies have shown many commonalities between MTurk workers and participants from traditional samples based on not only personality but also performance on reasoning tasks. The present study aims to extend these findings to non-MTurk (i.e., Japanese) crowdsourcing samples in which workers have different ethnic backgrounds from those of MTurk. We conducted three surveys (N = 426, 453, 167, respectively) designed to compare Japanese crowdsourcing workers and university students in terms of their demographics, personality traits, reasoning skills, and attention to instructions. The results generally align with previous studies and suggest that non-MTurk participants are also eligible for behavioral research. Furthermore, small screen devices are found to impair participants' attention to instructions. Several recommendations concerning this sample are presented." }, { "pmid": "23996831", "title": "The Nencki Affective Picture System (NAPS): introduction to a novel, standardized, wide-range, high-quality, realistic picture database.", "abstract": "Selecting appropriate stimuli to induce emotional states is essential in affective research. Only a few standardized affective stimulus databases have been created for auditory, language, and visual materials. Numerous studies have extensively employed these databases using both behavioral and neuroimaging methods. However, some limitations of the existing databases have recently been reported, including limited numbers of stimuli in specific categories or poor picture quality of the visual stimuli. In the present article, we introduce the Nencki Affective Picture System (NAPS), which consists of 1,356 realistic, high-quality photographs that are divided into five categories (people, faces, animals, objects, and landscapes). Affective ratings were collected from 204 mostly European participants. The pictures were rated according to the valence, arousal, and approach-avoidance dimensions using computerized bipolar semantic slider scales. Normative ratings for the categories are presented for each dimension. Validation of the ratings was obtained by comparing them to ratings generated using the Self-Assessment Manikin and the International Affective Picture System. In addition, the physical properties of the photographs are reported, including luminance, contrast, and entropy. The new database, with accompanying ratings and image parameters, allows researchers to select a variety of visual stimulus materials specific to their experimental questions of interest. The NAPS system is freely accessible to the scientific community for noncommercial use by request at http://naps.nencki.gov.pl ." }, { "pmid": "28544868", "title": "A Mathematical Model Captures the Structure of Subjective Affect.", "abstract": "Although it is possible to observe when another person is having an emotional moment, we also derive information about the affective states of others from what they tell us they are feeling. 
In an effort to distill the complexity of affective experience, psychologists routinely focus on a simplified subset of subjective rating scales (i.e., dimensions) that capture considerable variability in reported affect: reported valence (i.e., how good or bad?) and reported arousal (e.g., how strong is the emotion you are feeling?). Still, existing theoretical approaches address the basic organization and measurement of these affective dimensions differently. Some approaches organize affect around the dimensions of bipolar valence and arousal (e.g., the circumplex model), whereas alternative approaches organize affect around the dimensions of unipolar positivity and unipolar negativity (e.g., the bivariate evaluative model). In this report, we (a) replicate the data structure observed when collected according to the two approaches described above, and reinterpret these data to suggest that the relationship between each pair of affective dimensions is conditional on valence ambiguity, and (b) formalize this structure with a mathematical model depicting a valence ambiguity dimension that decreases in range as arousal decreases (a triangle). This model captures variability in affective ratings better than alternative approaches, increasing variance explained from ~60% to over 90% without adding parameters." }, { "pmid": "27513636", "title": "Affective Pictures and the Open Library of Affective Foods (OLAF): Tools to Investigate Emotions toward Food in Adults.", "abstract": "Recently, several sets of standardized food pictures have been created, supplying both food images and their subjective evaluations. However, to date only the OLAF (Open Library of Affective Foods), a set of food images and ratings we developed in adolescents, has the specific purpose of studying emotions toward food. Moreover, some researchers have argued that food evaluations are not valid across individuals and groups, unless feelings toward food cues are compared with feelings toward intense experiences unrelated to food, that serve as benchmarks. Therefore the OLAF presented here, comprising a set of original food images and a group of standardized highly emotional pictures, is intended to provide valid between-group judgments in adults. Emotional images (erotica, mutilations, and neutrals from the International Affective Picture System/IAPS) additionally ensure that the affective ratings are consistent with emotion research. The OLAF depicts high-calorie sweet and savory foods and low-calorie fruits and vegetables, portraying foods within natural scenes matching the IAPS features. An adult sample evaluated both food and affective pictures in terms of pleasure, arousal, dominance, and food craving, following standardized affective rating procedures. The affective ratings for the emotional pictures corroborated previous findings, thus confirming the reliability of evaluations for the food images. Among the OLAF images, high-calorie sweet and savory foods elicited the greatest pleasure, although they elicited, as expected, less arousal than erotica. The observed patterns were consistent with research on emotions and confirmed the reliability of OLAF evaluations. The OLAF and affective pictures constitute a sound methodology to investigate emotions toward food within a wider motivational framework. The OLAF is freely accessible at digibug.ugr.es." }, { "pmid": "25490404", "title": "Meet OLAF, a good friend of the IAPS! 
The Open Library of Affective Foods: a tool to investigate the emotional impact of food in adolescents.", "abstract": "In the last decades, food pictures have been repeatedly employed to investigate the emotional impact of food on healthy participants as well as individuals who suffer from eating disorders and obesity. However, despite their widespread use, food pictures are typically selected according to each researcher's personal criteria, which make it difficult to reliably select food images and to compare results across different studies and laboratories. Therefore, to study affective reactions to food, it becomes pivotal to identify the emotional impact of specific food images based on wider samples of individuals. In the present paper we introduce the Open Library of Affective Foods (OLAF), which is a set of original food pictures created to reliably select food pictures based on the emotions they prompt, as indicated by affective ratings of valence, arousal, and dominance and by an additional food craving scale. OLAF images were designed to allow simultaneous use with affective images from the International Affective Picture System (IAPS), which is a well-known instrument to investigate emotional reactions in the laboratory. The ultimate goal of the OLAF is to contribute to understanding how food is emotionally processed in healthy individuals and in patients who suffer from eating and weight-related disorders. The present normative data, which was based on a large sample of an adolescent population, indicate that when viewing affective non-food IAPS images, valence, arousal, and dominance ratings were in line with expected patterns based on previous emotion research. Moreover, when viewing food pictures, affective and food craving ratings were consistent with research on food cue processing. As a whole, the data supported the methodological and theoretical reliability of the OLAF ratings, therefore providing researchers with a standardized tool to reliably investigate the emotional and motivational significance of food. The OLAF database is publicly available at zenodo.org." }, { "pmid": "17695356", "title": "EMuJoy: software for continuous measurement of perceived emotions in music.", "abstract": "An adequate study of emotions in music and film should be based on the real-time measurement of self-reported data using a continuous-response method. The recording system discussed in this article reflects two important aspects of such research: First, for a better comparison of results, experimental and technical standards for continuous measurement should be taken into account, and second, the recording system should be open to the inclusion of multimodal stimuli. In light of these two considerations, our article addresses four basic principles of the continuous measurement of emotions: (1) the dimensionality of the emotion space, (2) data acquisition (e.g., the synchronization of media and the self-reported data), (3) interface construction for emotional responses, and (4) the use of multiple stimulus modalities. Researcher-developed software (EMuJoy) is presented as a freeware solution for the continuous measurement of responses to different media, along with empirical data from the self-reports of 38 subjects listening to emotional music and viewing affective pictures." }, { "pmid": "24709484", "title": "\"Yummy\" versus \"Yucky\"! 
Frontiers in Neurorobotics
30546302
PMC6279894
10.3389/fnbot.2018.00078
Lifelong Learning of Spatiotemporal Representations With Dual-Memory Recurrent Self-Organization
Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting, in which novel sensory experience interferes with existing representations and leads to abrupt decreases in performance on previously acquired knowledge. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with fixed datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. Therefore, specialized neural network mechanisms are required that adapt to novel sequential experience while preventing disruptive interference with existing representations. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion, while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that it significantly outperforms current lifelong learning methods in three different incremental learning scenarios.
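The dual-memory interplay summarized in the abstract can be pictured with a deliberately simplified Python sketch. Everything below (the class name PrototypeMemory, the thresholds, the replay routine) is a hypothetical illustration of the division of labor between an instance-level episodic memory, a category-level semantic memory, and periodic replay; it is not the architecture specified in the paper, and in particular it does not implement the task-driven regulation of structural plasticity or the recurrent spatiotemporal representations described there.

```python
import numpy as np

class PrototypeMemory:
    """Toy growing memory: a list of prototypes that either adapt toward the
    input or grow by one unit when the input is too dissimilar."""
    def __init__(self, insertion_threshold, learning_rate=0.1):
        self.prototypes, self.labels = [], []
        self.insertion_threshold = insertion_threshold
        self.learning_rate = learning_rate

    def update(self, x, label=None):
        x = np.asarray(x, dtype=float)
        if not self.prototypes:                      # first input: just store it
            self.prototypes.append(x.copy())
            self.labels.append(label)
            return
        distances = [np.linalg.norm(x - p) for p in self.prototypes]
        best = int(np.argmin(distances))
        activity = np.exp(-distances[best])          # in (0, 1]; low = novel input
        if activity < self.insertion_threshold:      # novel input -> grow one unit
            self.prototypes.append(x.copy())
            self.labels.append(label)
        else:                                        # familiar input -> adapt winner
            self.prototypes[best] += self.learning_rate * (x - self.prototypes[best])
            if label is not None:
                self.labels[best] = label            # label bookkeeping only

# Episodic memory: grows easily, keeping fine-grained instance prototypes.
episodic = PrototypeMemory(insertion_threshold=0.5)
# Semantic memory: grows reluctantly, keeping a more compact set of prototypes.
semantic = PrototypeMemory(insertion_threshold=0.2)

def train_on_sequence(feature_frames, category):
    for x in feature_frames:
        # Labels are stored alongside the prototypes for later replay;
        # they do not drive the prototype updates in this toy version.
        episodic.update(x, label=category)           # instance-level representation
        semantic.update(x, label=category)           # category-level representation

def replay():
    # Consolidation without new sensory input: reactivate stored episodic
    # prototypes and feed them back into the semantic memory.
    for prototype, label in zip(episodic.prototypes, episodic.labels):
        semantic.update(prototype, label=label)
```

In the actual model, both memories are growing recurrent self-organizing networks and replay operates on trajectories of neural reactivations rather than on individual prototypes.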
2. Related work

The complementary learning systems (CLS) theory (McClelland et al., 1995) provides the basis for computational frameworks that aim to generalize across experiences while retaining specific memories in a lifelong fashion. Early computational attempts include French (1997), who developed a dual-memory framework using pseudo-rehearsal (Robins, 1995) to transfer memories, i.e., the training samples are not explicitly kept in memory but drawn from a probabilistic model. However, there is no empirical evidence showing that this or similar contemporaneous approaches (see O'Reilly and Norman, 2002 for a review) scale up to large-scale image and video benchmark datasets. More recently, Gepperth and Karaoguz (2015) proposed two approaches for incremental learning using a modified self-organizing map (SOM) and a SOM extended with a short-term memory (STM). We refer to these two approaches as GeppNet and GeppNet+STM, respectively. In GeppNet, task-relevant feedback from a regression layer is used to decide whether learning in the self-organizing hidden layer takes place. In GeppNet+STM, the STM is used to store novel knowledge, which is occasionally played back to the GeppNet layer during sleep phases interleaved with training phases. This latter approach yields better performance and faster convergence in incremental learning tasks on the MNIST dataset. However, the STM has limited capacity, so newly learned knowledge can overwrite older knowledge. In both cases, the learning process is divided into an initialization phase and the actual incremental learning phase. Furthermore, GeppNet and GeppNet+STM require storing the entire training dataset during training. Kemker and Kanan (2018) proposed the FearNet model for incremental class learning, inspired by studies of memory recall and consolidation in the mammalian brain during fear conditioning (Kitamura et al., 2017). FearNet uses a hippocampal network capable of immediately recalling new examples, a prefrontal cortex (PFC) network for long-term memories, and a third neural network inspired by the basolateral amygdala for determining whether the system should use the PFC or hippocampal network for a particular example. FearNet consolidates information from its hippocampal network to its PFC network during sleep phases. Kamra et al. (2018) presented a similar dual-memory framework for lifelong learning that uses a variational autoencoder as a generative model for pseudo-rehearsal. Their framework generates a short-term memory module for each new task; however, prior to consolidation, predictions are made using an oracle, i.e., the system knows which module contains the associated memory.

A different family of methods relies on regularization techniques that impose constraints on the updates of the neural weights. This is inspired by neuroscience findings suggesting that consolidated knowledge can be protected from interference via changing levels of synaptic plasticity (Benna and Fusi, 2016), and is typically modeled by adding regularization terms that penalize changes in the mapping function of a neural network. For instance, Li and Hoiem (2016) proposed a convolutional neural network (CNN) architecture in which the network that predicts the previously learned tasks is encouraged to remain similar to the network that also predicts the current task by using knowledge distillation, i.e., the transfer of knowledge from a large, highly regularized model to a smaller model.
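The distillation term mentioned above can be made concrete with a short numerical sketch. The temperature-softened cross-entropy below illustrates the general knowledge-distillation recipe; it is not claimed to be the exact objective of Li and Hoiem (2016), and the function names and the temperature value are placeholders.

```python
import numpy as np

def softmax(logits, temperature=2.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)       # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(old_logits, new_logits, temperature=2.0):
    """Cross-entropy between the old network's soft targets and the new
    network's soft predictions on the same inputs, averaged over the batch."""
    soft_targets = softmax(old_logits, temperature)
    soft_preds = softmax(new_logits, temperature)
    return float(-np.mean(np.sum(soft_targets * np.log(soft_preds + 1e-12), axis=-1)))

# Schematically, the total objective combines the new-task loss with the penalty:
#   loss = task_loss(new_logits, labels) + lambda_old * distillation_loss(old_logits, new_logits)
```

During training, this term is typically added to the standard supervised loss on the new task, so the network fits the new labels while reproducing the old network's responses on the same data.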
The approach of Li and Hoiem (2016), known as learning without forgetting (LwF), has two drawbacks: it depends strongly on how relevant the new tasks are to the old ones, and the training time for one task increases linearly with the number of old tasks. Kirkpatrick et al. (2017) proposed elastic weight consolidation (EWC), which adds a penalty term to the loss function and constrains the weight parameters that are relevant for retaining previously learned tasks. However, this approach requires a diagonal weighting over the parameters of the learned tasks that is proportional to the diagonal of the Fisher information matrix; since the synaptic importance is computed offline, its application is limited to low-dimensional output spaces. Zenke et al. (2017b) proposed to alleviate catastrophic forgetting by allowing individual synapses to estimate their importance for solving a learned task. Similar to Kirkpatrick et al. (2017), this approach penalizes changes to the most relevant synapses so that new tasks can be learned with minimal interference. In this case, the synaptic importance is computed in an online fashion over the learning trajectory in parameter space.

In general, regularization approaches introduce additional loss terms to protect consolidated knowledge, which, given a limited amount of neural resources, leads to a trade-off between the performance on old and novel tasks. Other approaches expand the neural architecture to accommodate novel knowledge. Rusu et al. (2016) proposed to block any changes to the network trained on previous knowledge and to expand the architecture by allocating novel sub-networks with a fixed capacity to be trained on the new information. This prevents catastrophic forgetting, but the complexity of the architecture grows with the number of learned tasks. Draelos et al. (2017) trained an autoencoder incrementally, using the reconstruction error to show whether the older digits were retained. Their model adds new neural units to the autoencoder to accommodate new MNIST digits. Rebuffi et al. (2017) proposed the iCaRL approach, which stores example data points that are used along with new data to dynamically adapt the weights of a feature extractor. By combining new and old data, iCaRL prevents catastrophic forgetting, but at the expense of a higher memory footprint.

The approaches described above are designed for the classification of static images, often exposing the learning algorithm to training samples in a random order. Conversely, in more natural settings, we make use of the spatiotemporal structure of the input. In previous research (Parisi et al., 2017), we showed that the lifelong learning of action sequences can be achieved in terms of prediction-driven neural dynamics, with internal representations emerging in a hierarchy of recurrent self-organizing networks. These networks can dynamically allocate neural resources and update connectivity patterns through competitive Hebbian learning: each input is evaluated on the basis of its similarity to existing knowledge, and interference is minimized by creating new neurons whenever they are required. This approach has shown competitive results with batch learning methods on action benchmark datasets. However, neural growth and update are driven by the minimization of the bottom-up reconstruction error and thus do not take into account top-down, task-relevant signals that could regulate the plasticity-stability balance.
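The growth mechanism of such self-organizing networks can be illustrated with a "grow when required"-style insertion rule (cf. the self-organising network that grows when required included in the reference list below): a new neuron is created only when the best-matching unit both responds too weakly to the input and has already been trained sufficiently often. The sketch below is a simplified, non-authoritative rendering of this idea; edge creation, neighbor adaptation, and node removal are omitted, and all constants are illustrative placeholders.

```python
import numpy as np

class GrowWhenRequired:
    """Minimal growing network: grow a node when the winner is both a poor
    match (low activity) and already well trained (low firing counter)."""
    def __init__(self, dim, activity_threshold=0.35, firing_threshold=0.1,
                 learning_rate=0.1, firing_decay=0.3):
        self.weights = [np.random.rand(dim), np.random.rand(dim)]  # two seed nodes
        self.firing = [1.0, 1.0]        # 1.0 = untrained; decays with each win
        self.a_T = activity_threshold
        self.h_T = firing_threshold
        self.lr = learning_rate
        self.decay = firing_decay

    def update(self, x):
        x = np.asarray(x, dtype=float)
        distances = [np.linalg.norm(x - w) for w in self.weights]
        best = int(np.argmin(distances))
        activity = np.exp(-distances[best])              # response of the winner
        if activity < self.a_T and self.firing[best] < self.h_T:
            # Poor match from an already-trained unit: allocate a new neuron
            # halfway between the input and the current winner.
            self.weights.append(0.5 * (self.weights[best] + x))
            self.firing.append(1.0)
        else:
            # Otherwise adapt the winner toward the input, modulated by how
            # much it has already fired, and decay its firing counter.
            self.weights[best] += self.lr * self.firing[best] * (x - self.weights[best])
            self.firing[best] *= (1.0 - self.decay)
        return best
```

In the networks discussed above, this bottom-up criterion is what drives growth; the architecture proposed in this paper additionally lets task-relevant signals modulate how much the semantic memory is allowed to grow.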
Furthermore, the model of Parisi et al. (2017) cannot learn in the absence of external sensory input, which leads to a non-negligible degree of disruptive interference during incremental learning tasks.
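For reference, the penalty used by elastic weight consolidation, discussed above, can be written compactly. In the standard formulation of Kirkpatrick et al. (2017), the loss of a new task B is augmented with a quadratic term that anchors each parameter theta_i to the value it had after learning a previous task A, weighted by the corresponding diagonal Fisher information estimate F_i and a hyperparameter lambda:

```latex
\mathcal{L}(\theta) = \mathcal{L}_{B}(\theta) + \sum_{i} \frac{\lambda}{2}\, F_{i}\,\bigl(\theta_{i} - \theta^{*}_{A,i}\bigr)^{2}
```

Regularization of this kind protects a fixed set of weights, whereas the dual-memory architecture proposed in this paper instead uses task-relevant signals to regulate structural plasticity in networks that are allowed to grow.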
[ "16732202", "19186162", "27694992", "16097870", "29625071", "9758214", "21270783", "24695697", "16021516", "9809557", "18772395", "7375607", "17964756", "23149242", "29089520", "19525943", "28292907", "28386011", "24858841", "10234037", "27315762", "22775499", "26017442", "24435505", "12416693", "7624455", "23935590", "9472486", "21609825", "24462102", "12475710", "29017140", "26106323", "27911497", "18267801", "29513649", "179015", "21788086", "28431369" ]
[ { "pmid": "16732202", "title": "Potential role for adult neurogenesis in the encoding of time in new memories.", "abstract": "The dentate gyrus in the hippocampus is one of two brain regions with lifelong neurogenesis in mammals. Despite an increasing amount of information about the characteristics of the newborn granule cells, the specific contribution of their robust generation to memory formation by the hippocampus remains unclear. We describe here a possible role that this population of young granule cells may have in the formation of temporal associations in memory. Neurogenesis is a continuous process; the newborn population is only composed of the same cells for a short period of time. As time passes, the young neurons mature or die and others are born, gradually changing the identity of this young population. We discuss the possibility that one cognitive impact of this gradually changing population on hippocampal memory formation is the formation of the temporal clusters of long-term episodic memories seen in some human psychological studies." }, { "pmid": "19186162", "title": "Computational influence of adult neurogenesis on memory encoding.", "abstract": "Adult neurogenesis in the hippocampus leads to the incorporation of thousands of new granule cells into the dentate gyrus every month, but its function remains unclear. Here, we present computational evidence that indicates that adult neurogenesis may make three separate but related contributions to memory formation. First, immature neurons introduce a degree of similarity to memories learned at the same time, a process we refer to as pattern integration. Second, the extended maturation and change in excitability of these neurons make this added similarity a time-dependent effect, supporting the possibility that temporal information is included in new hippocampal memories. Finally, our model suggests that the experience-dependent addition of neurons results in a dentate gyrus network well suited for encoding new memories in familiar contexts while treating novel contexts differently. Taken together, these results indicate that new granule cells may affect hippocampal function in several unique and previously unpredicted ways." }, { "pmid": "27694992", "title": "Computational principles of synaptic memory consolidation.", "abstract": "Memories are stored and retained through complex, coupled processes operating on multiple timescales. To understand the computational principles behind these intricate networks of interactions, we construct a broad class of synaptic models that efficiently harness biological complexity to preserve numerous memories by protecting them against the adverse effects of overwriting. The memory capacity scales almost linearly with the number of synapses, which is a substantial improvement over the square root scaling of previous models. This was achieved by combining multiple dynamical processes that initially store memories in fast variables and then progressively transfer them to slower variables. Notably, the interactions between fast and slow variables are bidirectional. The proposed models are robust to parameter perturbations and can explain several properties of biological memory, including delayed expression of synaptic modifications, metaplasticity, and spacing effects." 
}, { "pmid": "16097870", "title": "Slow feature analysis yields a rich repertoire of complex cell properties.", "abstract": "In this study we investigate temporal slowness as a learning principle for receptive fields using slow feature analysis, a new algorithm to determine functions that extract slowly varying signals from the input data. We find a good qualitative and quantitative match between the set of learned functions trained on image sequences and the population of complex cells in the primary visual cortex (V1). The functions show many properties found also experimentally in complex cells, such as direction selectivity, non-orthogonal inhibition, end-inhibition, and side-inhibition. Our results demonstrate that a single unsupervised learning principle can account for such a rich repertoire of receptive field properties." }, { "pmid": "29625071", "title": "Human Hippocampal Neurogenesis Persists throughout Aging.", "abstract": "Adult hippocampal neurogenesis declines in aging rodents and primates. Aging humans are thought to exhibit waning neurogenesis and exercise-induced angiogenesis, with a resulting volumetric decrease in the neurogenic hippocampal dentate gyrus (DG) region, although concurrent changes in these parameters are not well studied. Here we assessed whole autopsy hippocampi from healthy human individuals ranging from 14 to 79 years of age. We found similar numbers of intermediate neural progenitors and thousands of immature neurons in the DG, comparable numbers of glia and mature granule neurons, and equivalent DG volume across ages. Nevertheless, older individuals have less angiogenesis and neuroplasticity and a smaller quiescent progenitor pool in anterior-mid DG, with no changes in posterior DG. Thus, healthy older subjects without cognitive impairment, neuropsychiatric disease, or treatment display preserved neurogenesis. It is possible that ongoing hippocampal neurogenesis sustains human-specific cognitive function throughout life and that declines may be linked to compromised cognitive-emotional resilience." }, { "pmid": "9758214", "title": "View-invariant representations of familiar objects by neurons in the inferior temporal visual cortex.", "abstract": "A view-invariant representation of objects in the brain would have many computational advantages. Here we describe a population of single neurons in the temporal visual cortex (IT) that have view-invariant representations of familiar objects. Ten real plastic objects were placed in the monkeys' home cages for a period of time before neurophysiological experiments in which neuronal responses were measured to four views of each object. The macaques performed a visual fixation task, and had never been trained in object discrimination. The majority of the visual neurons recorded were responsive to some views of some objects and/or to the control stimuli, as would be expected from previous studies. However, a small subset of these neurons were responsive to all views of one or more of the objects, providing evidence that these neurons were coding for objects, rather than simply for individual views or visual features within the image. This result was confirmed by information theoretic analyses, which showed that the neurons provided information about which object was being seen, independently of the view. The coding scheme was shown to be sparse distributed, with relatively independent information being provided by the different neurons. Hypotheses about how these view-invariant cells are formed are described." 
}, { "pmid": "21270783", "title": "Hippocampal replay in the awake state: a potential substrate for memory consolidation and retrieval.", "abstract": "The hippocampus is required for the encoding, consolidation and retrieval of event memories. Although the neural mechanisms that underlie these processes are only partially understood, a series of recent papers point to awake memory replay as a potential contributor to both consolidation and retrieval. Replay is the sequential reactivation of hippocampal place cells that represent previously experienced behavioral trajectories and occurs frequently in the awake state, particularly during periods of relative immobility. Awake replay may reflect trajectories through either the current environment or previously visited environments that are spatially remote. The repetition of learned sequences on a compressed time scale is well suited to promote memory consolidation in distributed circuits beyond the hippocampus, suggesting that consolidation occurs in both the awake and sleeping animal. Moreover, sensory information can influence the content of awake replay, suggesting a role for awake replay in memory retrieval." }, { "pmid": "24695697", "title": "Object-specific semantic coding in human perirhinal cortex.", "abstract": "Category-specificity has been demonstrated in the human posterior ventral temporal cortex for a variety of object categories. Although object representations within the ventral visual pathway must be sufficiently rich and complex to support the recognition of individual objects, little is known about how specific objects are represented. Here, we used representational similarity analysis to determine what different kinds of object information are reflected in fMRI activation patterns and uncover the relationship between categorical and object-specific semantic representations. Our results show a gradient of informational specificity along the ventral stream from representations of image-based visual properties in early visual cortex, to categorical representations in the posterior ventral stream. A key finding showed that object-specific semantic information is uniquely represented in the perirhinal cortex, which was also increasingly engaged for objects that are more semantically confusable. These findings suggest a key role for the perirhinal cortex in representing and processing object-specific semantic information that is more critical for highly confusable objects. Our findings extend current distributed models by showing coarse dissociations between objects in posterior ventral cortex, and fine-grained distinctions between objects supported by the anterior medial temporal lobes, including the perirhinal cortex, which serve to integrate complex object information." }, { "pmid": "16021516", "title": "Learning viewpoint invariant object representations using a temporal coherence principle.", "abstract": "Invariant object recognition is arguably one of the major challenges for contemporary machine vision systems. In contrast, the mammalian visual system performs this task virtually effortlessly. How can we exploit our knowledge on the biological system to improve artificial systems? Our understanding of the mammalian early visual system has been augmented by the discovery that general coding principles could explain many aspects of neuronal response properties. How can such schemes be transferred to system level performance? 
In the present study we train cells on a particular variant of the general principle of temporal coherence, the \"stability\" objective. These cells are trained on unlabeled real-world images without a teaching signal. We show that after training, the cells form a representation that is largely independent of the viewpoint from which the stimulus is looked at. This finding includes generalization to previously unseen viewpoints. The achieved representation is better suited for view-point invariant object classification than the cells' input patterns. This property to facilitate view-point invariant classification is maintained even if training and classification take place in the presence of an--also unlabeled--distractor object. In summary, here we show that unsupervised learning using a general coding principle facilitates the classification of real-world objects, that are not segmented from the background and undergo complex, non-isomorphic, transformations." }, { "pmid": "9809557", "title": "Neurogenesis in the adult human hippocampus.", "abstract": "The genesis of new cells, including neurons, in the adult human brain has not yet been demonstrated. This study was undertaken to investigate whether neurogenesis occurs in the adult human brain, in regions previously identified as neurogenic in adult rodents and monkeys. Human brain tissue was obtained postmortem from patients who had been treated with the thymidine analog, bromodeoxyuridine (BrdU), that labels DNA during the S phase. Using immunofluorescent labeling for BrdU and for one of the neuronal markers, NeuN, calbindin or neuron specific enolase (NSE), we demonstrate that new neurons, as defined by these markers, are generated from dividing progenitor cells in the dentate gyrus of adult humans. Our results further indicate that the human hippocampus retains its ability to generate neurons throughout life." }, { "pmid": "18772395", "title": "Internally generated reactivation of single neurons in human hippocampus during free recall.", "abstract": "The emergence of memory, a trace of things past, into human consciousness is one of the greatest mysteries of the human mind. Whereas the neuronal basis of recognition memory can be probed experimentally in human and nonhuman primates, the study of free recall requires that the mind declare the occurrence of a recalled memory (an event intrinsic to the organism and invisible to an observer). Here, we report the activity of single neurons in the human hippocampus and surrounding areas when subjects first view cinematic episodes consisting of audiovisual sequences and again later when they freely recall these episodes. A subset of these neurons exhibited selective firing, which often persisted throughout and following specific episodes for as long as 12 seconds. Verbal reports of memories of these specific episodes at the time of free recall were preceded by selective reactivation of the same hippocampal and entorhinal cortex neurons. We suggest that this reactivation is an internally generated neuronal correlate for the subjective experience of spontaneous emergence of human recollection." }, { "pmid": "17964756", "title": "Consciousness CLEARS the mind.", "abstract": "A full understanding of consciousness requires that we identify the brain processes from which conscious experiences emerge. What are these processes, and what is their utility in supporting successful adaptive behaviors? 
Adaptive Resonance Theory (ART) predicted a functional link between processes of Consciousness, Learning, Expectation, Attention, Resonance and Synchrony (CLEARS), including the prediction that \"all conscious states are resonant states\". This connection clarifies how brain dynamics enable a behaving individual to autonomously adapt in real time to a rapidly changing world. The present article reviews theoretical considerations that predicted these functional links, how they work, and some of the rapidly growing body of behavioral and brain data that have provided support for these predictions. The article also summarizes ART models that predict functional roles for identified cells in laminar thalamocortical circuits, including the six layered neocortical circuits and their interactions with specific primary and higher-order specific thalamic nuclei and nonspecific nuclei. These predictions include explanations of how slow perceptual learning can occur without conscious awareness, and why oscillation frequencies in the lower layers of neocortex are sometimes slower beta oscillations, rather than the higher-frequency gamma oscillations that occur more frequently in superficial cortical layers. ART traces these properties to the existence of intracortical feedback loops, and to reset mechanisms whereby thalamocortical mismatches use circuits such as the one from specific thalamic nuclei to nonspecific thalamic nuclei and then to layer 4 of neocortical areas via layers 1-to-5-to-6-to-4." }, { "pmid": "23149242", "title": "Adaptive Resonance Theory: how a brain learns to consciously attend, learn, and recognize a changing world.", "abstract": "Adaptive Resonance Theory, or ART, is a cognitive and neural theory of how the brain autonomously learns to categorize, recognize, and predict objects and events in a changing world. This article reviews classical and recent developments of ART, and provides a synthesis of concepts, principles, mechanisms, architectures, and the interdisciplinary data bases that they have helped to explain and predict. The review illustrates that ART is currently the most highly developed cognitive and neural theory available, with the broadest explanatory and predictive range. Central to ART's predictive power is its ability to carry out fast, incremental, and stable unsupervised and supervised learning in response to a changing world. ART specifies mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony during both unsupervised and supervised learning. ART provides functional and mechanistic explanations of such diverse topics as laminar cortical circuitry; invariant object and scenic gist learning and recognition; prototype, surface, and boundary attention; gamma and beta oscillations; learning of entorhinal grid cells and hippocampal place cells; computation of homologous spatial and temporal mechanisms in the entorhinal-hippocampal system; vigilance breakdowns during autism and medial temporal amnesia; cognitive-emotional interactions that focus attention on valued objects in an adaptively timed way; item-order-rank working memories and learned list chunks for the planning and control of sequences of linguistic, spatial, and motor information; conscious speech percepts that are influenced by future context; auditory streaming in noise during source segregation; and speaker normalization. 
Brain regions that are functionally described include visual and auditory neocortex; specific and nonspecific thalamic nuclei; inferotemporal, parietal, prefrontal, entorhinal, hippocampal, parahippocampal, perirhinal, and motor cortices; frontal eye fields; supplementary eye fields; amygdala; basal ganglia: cerebellum; and superior colliculus. Due to the complementary organization of the brain, ART does not describe many spatial and motor behaviors whose matching and learning laws differ from those of ART. ART algorithms for engineering and technology are listed, as are comparisons with other types of models." }, { "pmid": "29089520", "title": "Invariant object recognition is a personalized selection of invariant features in humans, not simply explained by hierarchical feed-forward vision models.", "abstract": "One key ability of human brain is invariant object recognition, which refers to rapid and accurate recognition of objects in the presence of variations such as size, rotation and position. Despite decades of research into the topic, it remains unknown how the brain constructs invariant representations of objects. Providing brain-plausible object representations and reaching human-level accuracy in recognition, hierarchical models of human vision have suggested that, human brain implements similar feed-forward operations to obtain invariant representations. However, conducting two psychophysical object recognition experiments on humans with systematically controlled variations of objects, we observed that humans relied on specific (diagnostic) object regions for accurate recognition which remained relatively consistent (invariant) across variations; but feed-forward feature-extraction models selected view-specific (non-invariant) features across variations. This suggests that models can develop different strategies, but reach human-level recognition performance. Moreover, human individuals largely disagreed on their diagnostic features and flexibly shifted their feature extraction strategy from view-invariant to view-specific when objects became more similar. This implies that, even in rapid object recognition, rather than a set of feed-forward mechanisms which extract diagnostic features from objects in a hard-wired fashion, the bottom-up visual pathways receive, through top-down connections, task-related information possibly processed in prefrontal cortex." }, { "pmid": "19525943", "title": "Awake replay of remote experiences in the hippocampus.", "abstract": "Hippocampal replay is thought to be essential for the consolidation of event memories in hippocampal-neocortical networks. Replay is present during both sleep and waking behavior, but although sleep replay involves the reactivation of stored representations in the absence of specific sensory inputs, awake replay is thought to depend on sensory input from the current environment. Here, we show that stored representations are reactivated during both waking and sleep replay. We found frequent awake replay of sequences of rat hippocampal place cells from a previous experience. This spatially remote replay was as common as local replay of the current environment and was more robust when the rat had recently been in motion than during extended periods of quiescence. Our results indicate that the hippocampus consistently replays past experiences during brief pauses in waking behavior, suggesting a role for waking replay in memory consolidation and retrieval." 
}, { "pmid": "28292907", "title": "Overcoming catastrophic forgetting in neural networks.", "abstract": "The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially." }, { "pmid": "28386011", "title": "Engrams and circuits crucial for systems consolidation of a memory.", "abstract": "Episodic memories initially require rapid synaptic plasticity within the hippocampus for their formation and are gradually consolidated in neocortical networks for permanent storage. However, the engrams and circuits that support neocortical memory consolidation have thus far been unknown. We found that neocortical prefrontal memory engram cells, which are critical for remote contextual fear memory, were rapidly generated during initial learning through inputs from both the hippocampal-entorhinal cortex network and the basolateral amygdala. After their generation, the prefrontal engram cells, with support from hippocampal memory engram cells, became functionally mature with time. Whereas hippocampal engram cells gradually became silent with time, engram cells in the basolateral amygdala, which were necessary for fear memory, were maintained. Our data provide new insights into the functional reorganization of engrams and circuits underlying systems consolidation of memory." }, { "pmid": "24858841", "title": "Structural synaptic plasticity has high memory capacity and can explain graded amnesia, catastrophic forgetting, and the spacing effect.", "abstract": "Although already William James and, more explicitly, Donald Hebb's theory of cell assemblies have suggested that activity-dependent rewiring of neuronal networks is the substrate of learning and memory, over the last six decades most theoretical work on memory has focused on plasticity of existing synapses in prewired networks. Research in the last decade has emphasized that structural modification of synaptic connectivity is common in the adult brain and tightly correlated with learning and memory. Here we present a parsimonious computational model for learning by structural plasticity. The basic modeling units are \"potential synapses\" defined as locations in the network where synapses can potentially grow to connect two neurons. This model generalizes well-known previous models for associative learning based on weight plasticity. Therefore, existing theory can be applied to analyze how many memories and how much information structural plasticity can store in a synapse. Surprisingly, we find that structural plasticity largely outperforms weight plasticity and can achieve a much higher storage capacity per synapse. The effect of structural plasticity on the structure of sparsely connected networks is quite intuitive: Structural plasticity increases the \"effectual network connectivity\", that is, the network wiring that specifically supports storage and recall of the memories. 
Further, this model of structural plasticity produces gradients of effectual connectivity in the course of learning, thereby explaining various cognitive phenomena including graded amnesia, catastrophic forgetting, and the spacing effect." }, { "pmid": "10234037", "title": "Reactivation of hippocampal cell assemblies: effects of behavioral state, experience, and EEG dynamics.", "abstract": "During slow wave sleep (SWS), traces of neuronal activity patterns from preceding behavior can be observed in rat hippocampus and neocortex. The spontaneous reactivation of these patterns is manifested as the reinstatement of the distribution of pairwise firing-rate correlations within a population of simultaneously recorded neurons. The effects of behavioral state [quiet wakefulness, SWS, and rapid eye movement (REM)], interactions between two successive spatial experiences, and global modulation during 200 Hz electroencephalographic (EEG) \"ripples\" on pattern reinstatement were studied in CA1 pyramidal cell population recordings. Pairwise firing-rate correlations during often repeated experiences accounted for a significant proportion of the variance in these interactions in subsequent SWS or quiet wakefulness and, to a lesser degree, during SWS before the experience on a given day. The latter effect was absent for novel experiences, suggesting that a persistent memory trace develops with experience. Pattern reinstatement was strongest during sharp wave-ripple oscillations, suggesting that these events may reflect system convergence onto attractor states corresponding to previous experiences. When two different experiences occurred in succession, the statistically independent effects of both were evident in subsequent SWS. Thus, the patterns of neural activity reemerge spontaneously, and in an interleaved manner, and do not necessarily reflect persistence of an active memory (i.e., reverberation). Firing-rate correlations during REM sleep were not related to the preceding familiar experience, possibly as a consequence of trace decay during the intervening SWS. REM episodes also did not detectably influence the correlation structure in subsequent SWS, suggesting a lack of strengthening of memory traces during REM sleep, at least in the case of familiar experiences." }, { "pmid": "27315762", "title": "What Learning Systems do Intelligent Agents Need? Complementary Learning Systems Theory Updated.", "abstract": "We update complementary learning systems (CLS) theory, which holds that intelligent agents must possess two learning systems, instantiated in mammalians in neocortex and hippocampus. The first gradually acquires structured knowledge representations while the second quickly learns the specifics of individual experiences. We broaden the role of replay of hippocampal memories in the theory, noting that replay allows goal-dependent weighting of experience statistics. We also address recent challenges to the theory and extend it by showing that recurrent activation of hippocampal traces can support some forms of generalization and that neocortical learning can be rapid for information that is consistent with known structure. Finally, we note the relevance of the theory to the design of artificial intelligent agents, highlighting connections between neuroscience and machine learning." 
}, { "pmid": "22775499", "title": "Generalization through the recurrent interaction of episodic memories: a model of the hippocampal system.", "abstract": "In this article, we present a perspective on the role of the hippocampal system in generalization, instantiated in a computational model called REMERGE (recurrency and episodic memory results in generalization). We expose a fundamental, but neglected, tension between prevailing computational theories that emphasize the function of the hippocampus in pattern separation (Marr, 1971; McClelland, McNaughton, & O'Reilly, 1995), and empirical support for its role in generalization and flexible relational memory (Cohen & Eichenbaum, 1993; Eichenbaum, 1999). Our account provides a means by which to resolve this conflict, by demonstrating that the basic representational scheme envisioned by complementary learning systems theory (McClelland et al., 1995), which relies upon orthogonalized codes in the hippocampus, is compatible with efficient generalization-as long as there is recurrence rather than unidirectional flow within the hippocampal circuit or, more widely, between the hippocampus and neocortex. We propose that recurrent similarity computation, a process that facilitates the discovery of higher-order relationships between a set of related experiences, expands the scope of classical exemplar-based models of memory (e.g., Nosofsky, 1984) and allows the hippocampus to support generalization through interactions that unfold within a dynamically created memory space." }, { "pmid": "26017442", "title": "Deep learning.", "abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech." }, { "pmid": "24435505", "title": "Early experience and multisensory perceptual narrowing.", "abstract": "Perceptual narrowing reflects the effects of early experience and contributes in key ways to perceptual and cognitive development. Previous studies have found that unisensory perceptual sensitivity in young infants is broadly tuned such that they can discriminate native as well as non-native sensory inputs but that it is more narrowly tuned in older infants such that they only respond to native inputs. Recently, my coworkers and I discovered that multisensory perceptual sensitivity narrows as well. The present article reviews this new evidence in the general context of multisensory perceptual development and the effects of early experience. Together, the evidence on unisensory and multisensory narrowing shows that early experience shapes the emergence of perceptual specialization and expertise." }, { "pmid": "12416693", "title": "A self-organising network that grows when required.", "abstract": "The ability to grow extra nodes is a potentially useful facility for a self-organising neural network. 
A network that can add nodes into its map space can approximate the input space more accurately, and often more parsimoniously, than a network with predefined structure and size, such as the Self-Organising Map. In addition, a growing network can deal with dynamic input distributions. Most of the growing networks that have been proposed in the literature add new nodes to support the node that has accumulated the highest error during previous iterations or to support topological structures. This usually means that new nodes are added only when the number of iterations is an integer multiple of some pre-defined constant, A. This paper suggests a way in which the learning algorithm can add nodes whenever the network in its current state does not sufficiently match the input. In this way the network grows very quickly when new data is presented, but stops growing once the network has matched the data. This is particularly important when we consider dynamic data sets, where the distribution of inputs can change to a new regime after some time. We also demonstrate the preservation of neighbourhood relations in the data by the network. The new network is compared to an existing growing network, the Growing Neural Gas (GNG), on a artificial dataset, showing how the network deals with a change in input distribution after some time. Finally, the new network is applied to several novelty detection tasks and is compared with both the GNG and an unsupervised form of the Reduced Coulomb Energy network on a robotic inspection task and with a Support Vector Machine on two benchmark novelty detection tasks." }, { "pmid": "7624455", "title": "Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory.", "abstract": "Damage to the hippocampal system disrupts recent memory but leaves remote memory intact. The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical synapses change a little on each reinstatement, and that remote memory is based on accumulated neocortical changes. Models that learn via changes to connections help explain this organization. These models discover the structure in ensembles of items if learning of each item is gradual and interleaved with learning about other items. This suggests that the neocortex learns slowly to discover the structure in ensembles of experiences. The hippocampal system permits rapid learning of new items without disrupting this structure, and reinstatement of new memories interleaves them with others to integrate them into structured neocortical memory systems." }, { "pmid": "9472486", "title": "Analysis of direction selectivity arising from recurrent cortical interactions.", "abstract": "The relative contributions of feedforward and recurrent connectivity to the direction-selective responses of cells in layer IVB of primary visual cortex are currently the subject of debate in the neuroscience community. Recently, biophysically detailed simulations have shown that realistic direction-selective responses can be achieved via recurrent cortical interactions between cells with nondirection-selective feedforward input (Suarez et al., 1995; Maex & Orban, 1996). Unfortunately these models, while desirable for detailed comparison with biology, are complex and thus difficult to analyze mathematically. 
In this article, a relatively simple cortical dynamical model is used to analyze the emergence of direction-selective responses via recurrent interactions. A comparison between a model based on our analysis and physiological data is presented. The approach also allows analysis of the recurrently propagated signal, revealing the predictive nature of the implementation." }, { "pmid": "21609825", "title": "Adult neurogenesis in the mammalian brain: significant answers and significant questions.", "abstract": "Adult neurogenesis, a process of generating functional neurons from adult neural precursors, occurs throughout life in restricted brain regions in mammals. The past decade has witnessed tremendous progress in addressing questions related to almost every aspect of adult neurogenesis in the mammalian brain. Here we review major advances in our understanding of adult mammalian neurogenesis in the dentate gyrus of the hippocampus and from the subventricular zone of the lateral ventricle, the rostral migratory stream to the olfactory bulb. We highlight emerging principles that have significant implications for stem cell biology, developmental neurobiology, neural plasticity, and disease mechanisms. We also discuss remaining questions related to adult neural stem cells and their niches, underlying regulatory mechanisms, and potential functions of newborn neurons in the adult brain. Building upon the recent progress and aided by new technologies, the adult neurogenesis field is poised to leap forward in the next decade." }, { "pmid": "24462102", "title": "CA3 retrieves coherent representations from degraded input: direct evidence for CA3 pattern completion and dentate gyrus pattern separation.", "abstract": "Theories of associative memory suggest that successful memory storage and recall depend on a balance between two complementary processes: pattern separation (to minimize interference) and pattern completion (to retrieve a memory when presented with partial or degraded input cues). Putative attractor circuitry in the hippocampal CA3 region is thought to be the final arbiter between these two processes. Here we present direct, quantitative evidence that CA3 produces an output pattern closer to the originally stored representation than its degraded input patterns from the dentate gyrus (DG). We simultaneously recorded activity from CA3 and DG of behaving rats when local and global reference frames were placed in conflict. CA3 showed a coherent population response to the conflict (pattern completion), even though its DG inputs were severely disrupted (pattern separation). The results thus confirm the hallmark predictions of a longstanding computational model of hippocampal memory processing." }, { "pmid": "12475710", "title": "Hippocampal and neocortical contributions to memory: advances in the complementary learning systems framework.", "abstract": "The complementary learning systems framework provides a simple set of principles, derived from converging biological, psychological and computational constraints, for understanding the differential contributions of the neocortex and hippocampus to learning and memory. The central principles are that the neocortex has a low learning rate and uses overlapping distributed representations to extract the general statistical structure of the environment, whereas the hippocampus learns rapidly using separated representations to encode the details of specific events while minimizing interference. 
In recent years, we have instantiated these principles in working computational models, and have used these models to address human and animal learning and memory findings, across a wide range of domains and paradigms. Here, we review a few representative applications of our models, focusing on two domains: recognition memory and animal learning in the fear-conditioning paradigm. In both domains, the models have generated novel predictions that have been tested and confirmed." }, { "pmid": "29017140", "title": "Lifelong learning of human actions with deep neural network self-organization.", "abstract": "Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning but rather learn a batch of training data with a predefined number of action classes and samples. Thus, there is the need to develop learning systems with the ability to incrementally process available perceptual cues and to adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved in terms of prediction-driven neural dynamics in which the growth and the adaptation of the recurrent networks are driven by their capability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art methods for batch learning also when a significant number of sample labels are missing or corrupted during training sessions. Additional experiments show the ability of our model to adapt to non-stationary input avoiding catastrophic interference." }, { "pmid": "26106323", "title": "Self-organizing neural integration of pose-motion features for human action recognition.", "abstract": "The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. 
During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions." }, { "pmid": "27911497", "title": "Neural plasticity across the lifespan.", "abstract": "An essential feature of the brain is its capacity to change. Neuroscientists use the term 'plasticity' to describe the malleability of neuronal connectivity and circuitry. How does plasticity work? A review of current data suggests that plasticity encompasses many distinct phenomena, some of which operate across most or all of the lifespan, and others that operate exclusively in early development. This essay surveys some of the key concepts related to neural plasticity, beginning with how current patterns of neural activity (e.g., as you read this essay) come to impact future patterns of activity (e.g., your memory of this essay), and then extending this framework backward into more development-specific mechanisms of plasticity. WIREs Dev Biol 2017, 6:e216. doi: 10.1002/wdev.216 For further resources related to this article, please visit the WIREs website." }, { "pmid": "18267801", "title": "An analysis of the gamma memory in dynamic neural networks.", "abstract": "Presents a vector space framework to study short-term memory filters in dynamic neural networks. The authors define parameters to quantify the function of feedforward and recursive linear memory filters. They show, using vector spaces, what is the optimization problem solved by the PEs of the first hidden layer of the single input focused network architecture. Due to the special properties of the gamma bases, recursion brings an extra parameter lambda (the time constant of the leaky integrator) that displaces the memory manifold towards the desired signal when the mean square error is minimized. In contrast, for the feedforward memory filter the angle between the desired signal and the memory manifold is fixed for a given memory order. The adaptation of the feedback parameter can be done using gradient descent, but the optimization is nonconvex." }, { "pmid": "29513649", "title": "Human hippocampal neurogenesis drops sharply in children to undetectable levels in adults.", "abstract": "New neurons continue to be generated in the subgranular zone of the dentate gyrus of the adult mammalian hippocampus. This process has been linked to learning and memory, stress and exercise, and is thought to be altered in neurological disease. In humans, some studies have suggested that hundreds of new neurons are added to the adult dentate gyrus every day, whereas other studies find many fewer putative new neurons. Despite these discrepancies, it is generally believed that the adult human hippocampus continues to generate new neurons. Here we show that a defined population of progenitor cells does not coalesce in the subgranular zone during human fetal or postnatal development. 
We also find that the number of proliferating progenitors and young neurons in the dentate gyrus declines sharply during the first year of life and only a few isolated young neurons are observed by 7 and 13 years of age. In adult patients with epilepsy and healthy adults (18-77 years; n = 17 post-mortem samples from controls; n = 12 surgical resection samples from patients with epilepsy), young neurons were not detected in the dentate gyrus. In the monkey (Macaca mulatta) hippocampus, proliferation of neurons in the subgranular zone was found in early postnatal life, but this diminished during juvenile development as neurogenesis decreased. We conclude that recruitment of young neurons to the primate hippocampus decreases rapidly during the first years of life, and that neurogenesis in the dentate gyrus does not continue, or is extremely rare, in adult humans. The early decline in hippocampal neurogenesis raises questions about how the function of the dentate gyrus differs between humans and other species in which adult hippocampal neurogenesis is preserved." }, { "pmid": "21788086", "title": "Pattern separation in the hippocampus.", "abstract": "The ability to discriminate among similar experiences is a crucial feature of episodic memory. This ability has long been hypothesized to require the hippocampus, and computational models suggest that it is dependent on pattern separation. However, empirical data for the role of the hippocampus in pattern separation have not been available until recently. This review summarizes data from electrophysiological recordings, lesion studies, immediate-early gene imaging, transgenic mouse models, as well as human functional neuroimaging, that provide convergent evidence for the involvement of particular hippocampal subfields in this key process. We discuss the impact of aging and adult neurogenesis on pattern separation, and also highlight several challenges to linking across species and approaches, and suggest future directions for investigation." }, { "pmid": "28431369", "title": "The temporal paradox of Hebbian learning and homeostatic plasticity.", "abstract": "Hebbian plasticity, a synaptic mechanism which detects and amplifies co-activity between neurons, is considered a key ingredient underlying learning and memory in the brain. However, Hebbian plasticity alone is unstable, leading to runaway neuronal activity, and therefore requires stabilization by additional compensatory processes. Traditionally, a diversity of homeostatic plasticity phenomena found in neural circuits is thought to play this role. However, recent modelling work suggests that the slow evolution of homeostatic plasticity, as observed in experiments, is insufficient to prevent instabilities originating from Hebbian plasticity. To remedy this situation, we suggest that homeostatic plasticity is complemented by additional rapid compensatory processes, which rapidly stabilize neuronal activity on short timescales." } ]
BMC Medical Informatics and Decision Making
30526592
PMC6284263
10.1186/s12911-018-0690-y
SBLC: a hybrid model for disease named entity recognition based on semantic bidirectional LSTMs and conditional random fields
BackgroundDisease named entity recognition (NER) is a fundamental step in information processing of medical texts. However, disease NER involves complex issues such as descriptive modifiers in actual practice. The accurate identification of disease named entities is still an open and essential research problem in medical information extraction and text mining tasks.MethodsA hybrid model named Semantics Bidirectional LSTM and CRF (SBLC) for the disease named entity recognition task is proposed. The model leverages word embeddings, Bidirectional Long Short Term Memory networks and Conditional Random Fields. A publicly available NCBI disease dataset is used to evaluate the model by comparing it with nine state-of-the-art baseline methods, including cTAKES, MetaMap, DNorm, C-Bi-LSTM-CRF, TaggerOne and DNER.ResultsThe results show that the SBLC model achieves an F1 score of 0.862 and outperforms the other methods. In addition, the model does not rely on external domain dictionaries, so it can be applied more conveniently in many aspects of medical text processing.ConclusionsAccording to the performance comparison, the proposed SBLC model achieved the best performance, demonstrating its effectiveness in disease named entity recognition.
Related workDisease NERIn the medical domain, most existing studies on disease NER mainly used machine learning methods with supervised, unsupervised or semi-supervised training. For example, Dogan et al. [2] proposed an inference-based method which linked disease names mentioned in medical texts with their corresponding medical lexical entries. The method, for the first time, applied the Unified Medical Language System (UMLS) [13], developed by the National Library of Medicine, to the NCBI disease corpus. Several similar systems, such as MetaMap [14], cTAKES [15], MedLEE [16], SymText/MPlus [17], KnowledgeMap [18] and HITEx [19], have been developed utilizing the UMLS. Although the UMLS covers a wide range of medical mentions, many of these methods failed to identify disease mentions not appearing in the UMLS. In addition, their NER accuracy was not sufficiently high for practical usage. For example, the F1 score of the official MetaMap on the NCBI dataset was only 0.559, as reported in [2].DNorm [3] was one of the recent studies using the NCBI disease corpus and the MEDIC vocabulary, which combined MeSH [20] and OMIM [21]. DNorm learned the similarity between disease names directly from training data, based on pairwise learning to rank (pLTR) for string normalization. Instead of relying solely on medical lexical resources, DNorm adopted a machine learning approach including pattern matching, dictionary searching and heuristic rules. By defining a vector space, it converted disease mentions and concepts into vectors. DNorm achieved an F1 score of 0.809 on the NCBI disease corpus.In 2016, Leaman and Lu proposed TaggerOne [22], a joint model that combined NER and normalization machine learning during training and prediction to overcome the cascading errors of DNorm. TaggerOne consisted of a semi-Markov structured linear classifier for NER and supervised semantic indexing for normalization, and ensured high throughput. Based on the same NCBI disease corpus, TaggerOne achieved an F1 score of 0.829.With respect to methods applying deep learning to NER, neural network models that can automatically extract word representation features from raw texts have been widely used in the NER field (e.g., [23]). Using deep learning, sequence labeling methods were also proposed and applied to disease NER tasks (e.g., [24, 25]). As a typical example, Pyysalo et al. [12] used word2vec to train word vectors on a collection of medical resources and obtained better performance on the NCBI disease corpus. Recently, Wei et al. proposed a multi-layer neural network, DNER [24], which used the GENIA Tagger [26] to extract a number of word features including words, part-of-speech tags, word chunking information, glyphs, morphological features, word embeddings, and so on. After extraction, the word features were fed as inputs to a bidirectional Recurrent Neural Network model, and other features such as POS tags were used for a CRF model. A normalization method combining dictionary matching and the vector space model (VSM) was used to generate optimized outputs. The overall performance of the model in terms of F1 score was 0.843 on the NCBI disease corpus. To our knowledge, DNER was the best-performing deep learning-based method.Motivated by the benefits of word embeddings and deep learning in the existing research, we intend to utilize external medical resources for word representation and to combine a bidirectional LSTM and a CRF for disease NER.
We use a large collection of medical resources to train the word embedding model in an unsupervised manner, and combine it with deep learning techniques for disease NER tasks.Word embedding trainingThe success of machine learning algorithms usually depends on appropriate data representation, since different representations can capture different features of the data. Distributed word representation, proposed by Hinton [27], has been widely used. The distributional hypothesis holds that words appearing in similar contexts have similar meanings, which conveys similarity in semantic dimensions. Along with the recent development of machine learning techniques, more and more complex models have been trained on larger datasets and achieved superior performance [28].Mikolov et al. [29] proposed the skip-gram method for calculating vector representations of words in large data sets. Disease named entities often contain rare medical words. In order to improve computational efficiency, the skip-gram model removed the hidden layer so that all words in the input layer shared a mapping layer. In the skip-gram method, Negative Sampling (NEG) was used; it is a simplified version of Noise Contrastive Estimation (NCE) [30]. NEG simplified NCE while preserving word vector quality and improving training speed. NEG no longer relied on a relatively complex Huffman tree, but rather on simple random negative sampling, which can be used as an alternative to hierarchical softmax.Motivated by the related work, particularly that of Mikolov et al. [9, 29], we apply the NEG skip-gram method for disease NER. The method is described as follows. Given a training text sequence w_1, …, w_T, at position t the distribution score s(w_t, c_t; θ) for the true probability model is calculated using Eq. (1), where the targets of w_t are the context words w_{t−n}, …, w_{t−1}, w_{t+1}, …, w_{t+n}:

$$ s(w_t, c_t; \theta) = v_{w_t}^{T}\, v'_{w_{t+j}}, \quad -n \le j \le n,\; j \ne 0 \qquad (1) $$

When using the negative sampling method, k negative cases $\tilde{w}_{t,i}, 1 \le i \le k$ are randomly sampled from the noise distribution Q(w) for each positive case (w_t, c_t). With σ denoting the logistic function, the objective over the negative samples is shown in Eq. (2):

$$ L_{\theta}(w_t, c_t) = \log P(y=1 \mid w_t, c_t) + \sum_{i=1}^{k} \log\bigl(1 - P(y=1 \mid \tilde{w}_{t,i}, c_t)\bigr) = \log \sigma\bigl(s(w_t, c_t; \theta)\bigr) + \sum_{i=1}^{k} \log \sigma\bigl(-s(\tilde{w}_{t,i}, c_t; \theta)\bigr) \qquad (2) $$

The value of k is determined by the size of the data.
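The skip-gram objective with negative sampling described above can be reproduced with an off-the-shelf implementation. The following is a minimal, illustrative sketch using gensim's Word2Vec (recent 4.x parameter names); the corpus file, vector dimensionality, window size and negative-sample count are assumptions made for illustration rather than the settings used for SBLC.

```python
# Minimal sketch: skip-gram word embeddings with negative sampling (gensim).
# The corpus file and hyperparameters below are illustrative assumptions.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# One tokenized sentence per line, e.g. pre-processed PubMed abstracts (hypothetical file).
sentences = LineSentence("medical_corpus.txt")

model = Word2Vec(
    sentences,
    vector_size=200,   # dimensionality of the word vectors
    window=5,          # context window size n
    sg=1,              # 1 = skip-gram (0 would be CBOW)
    negative=5,        # number of negative samples per positive pair
    min_count=5,       # ignore rare tokens
    workers=4,
    epochs=5,
)

model.save("disease_word2vec.model")
# Sanity check on a medical term (assumes the term occurs in the corpus).
print(model.wv.most_similar("carcinoma", topn=5))
```

Here `negative=5` plays the role of k in Eq. (2).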
Normally, k ranges within [5, 20] for small-scale data and decreases to [2, 5] for large-scale data [9]. Equation (2) can be optimized by stochastic gradient ascent.Bi-LSTM & CRFAs a typical deep learning method, the long short-term memory network (LSTM) [10] is widely used for text sequence labeling tasks. LSTM, as shown in Eq. (3), can capture long-distance information by adding several gating units which control the contribution of each memory cell. Therefore, LSTM enhances the ability to retain long-distance context information, and longer contextual information helps the model learn semantics more precisely.

$$ \begin{aligned} i_t &= \sigma(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i) \\ c_t &= (1 - i_t) \odot c_{t-1} + i_t \odot \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c) \\ o_t &= \sigma(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t + b_o) \\ h_t &= o_t \odot \tanh(c_t) \end{aligned} \qquad (3) $$

Bidirectional LSTM (Bi-LSTM) can simultaneously learn forward and backward information of input sentences and enhance the ability of entity classification. A sentence X containing multiple words can be represented as a sequence of vectors (x_1, x_2, …, x_n). $\overrightarrow{y}_t$ denotes the output of the forward LSTM and $\overleftarrow{y}_t$ denotes the output of the backward LSTM; they are calculated by capturing the preceding and following information of the word t, respectively. The overall representation is obtained by combining the outputs of the two directions over the same sequence. This pair of forward and backward LSTMs forms the Bi-LSTM, and the resulting representation preserves the context information for the word t.Since more and more research has focused on combining Bi-LSTM with Conditional Random Fields (CRF) for NER tasks, the rest of this subsection describes the CRF. It was first introduced as a sequence labeling model by Lafferty et al. [11]. Considering that the target of NER is a label sequence, a linear-chain CRF can compute the globally optimal label sequence, and thus it is widely used to solve NER problems.
The objective function of a linear-chain CRF is the conditional probability of the state sequence y given the input sequence x, as shown in Eq. (4):

$$ P(y \mid x) = \frac{1}{Z(x)} \exp\!\left( \sum_{k=1}^{K} \lambda_k f_k(y_t, y_{t-1}, x_t) \right) \qquad (4) $$

Here f_k(y_t, y_{t−1}, x_t) is a feature function and λ_k denotes the learned weight of that feature, while y_{t−1} and y_t refer to the previous and the current states, respectively. Z(x) is the normalization factor over all state sequences, as shown in Eq. (5):

$$ Z(x) = \sum_{y} \exp\!\left( \sum_{k=1}^{K} \lambda_k f_k(y_t, y_{t-1}, x_t) \right) \qquad (5) $$

The maximum likelihood method and the numerical optimization algorithm L-BFGS are used to estimate the parameter vector $\overrightarrow{\lambda} = \{\lambda_1, \dots, \lambda_K\}$ during training. The Viterbi algorithm is used to find the most likely hidden state sequence given the observed sequence [31].
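To make the Bi-LSTM + CRF combination above concrete, the following is a minimal, illustrative PyTorch sketch (not the SBLC implementation): word embeddings are fed through a bidirectional LSTM to produce per-tag emission scores, and a linear-chain CRF sits on top for training and Viterbi decoding. The CRF layer is taken from the third-party pytorch-crf package, and the vocabulary size, dimensions and tag set are assumptions.

```python
# Minimal Bi-LSTM-CRF tagger sketch (illustrative only).
# Assumes the third-party `pytorch-crf` package (pip install pytorch-crf).
import torch
import torch.nn as nn
from torchcrf import CRF


class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=200, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Bidirectional LSTM: forward and backward hidden states are concatenated.
        self.lstm = nn.LSTM(emb_dim, hidden_dim // 2, num_layers=1,
                            bidirectional=True, batch_first=True)
        # Projects each Bi-LSTM state to per-tag emission scores.
        self.hidden2tag = nn.Linear(hidden_dim, num_tags)
        # Linear-chain CRF over the tag sequence (Eq. (4)-(5) style scoring).
        self.crf = CRF(num_tags, batch_first=True)

    def _emissions(self, token_ids):
        embeds = self.embedding(token_ids)      # (batch, seq, emb_dim)
        lstm_out, _ = self.lstm(embeds)         # (batch, seq, hidden_dim)
        return self.hidden2tag(lstm_out)        # (batch, seq, num_tags)

    def loss(self, token_ids, tags, mask):
        # Negative log-likelihood of the gold tag sequence under the CRF.
        emissions = self._emissions(token_ids)
        return -self.crf(emissions, tags, mask=mask, reduction="mean")

    def decode(self, token_ids, mask):
        # Viterbi decoding of the most likely tag sequence.
        emissions = self._emissions(token_ids)
        return self.crf.decode(emissions, mask=mask)


# Toy usage with random data; in practice token ids come from the embedding
# vocabulary and tags from a BIO-style disease tag set (e.g. O, B-Disease, I-Disease).
model = BiLSTMCRF(vocab_size=10000, num_tags=3)
token_ids = torch.randint(1, 10000, (2, 12))      # batch of 2 sentences, length 12
tags = torch.randint(0, 3, (2, 12))
mask = torch.ones(2, 12, dtype=torch.bool)

loss = model.loss(token_ids, tags, mask)
loss.backward()
print(model.decode(token_ids, mask))              # list of predicted tag-id sequences
```

In the SBLC setting, the input representations would be initialized from the pre-trained medical word embeddings described above rather than learned from scratch.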
[ "24393765", "23969135", "20678228", "18614002", "9377276", "20819853", "20442141", "12668688", "16872495", "10928714", "27283952", "28502909", "23787338", "18817555", "25879978" ]
[ { "pmid": "24393765", "title": "NCBI disease corpus: a resource for disease name recognition and concept normalization.", "abstract": "Information encoded in natural language in biomedical literature publications is only useful if efficient and reliable ways of accessing and analyzing that information are available. Natural language processing and text mining tools are therefore essential for extracting valuable information, however, the development of powerful, highly effective tools to automatically detect central biomedical concepts such as diseases is conditional on the availability of annotated corpora. This paper presents the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. Each PubMed abstract was manually annotated by two annotators with disease mentions and their corresponding concepts in Medical Subject Headings (MeSH®) or Online Mendelian Inheritance in Man (OMIM®). Manual curation was performed using PubTator, which allowed the use of pre-annotations as a pre-step to manual annotations. Fourteen annotators were randomly paired and differing annotations were discussed for reaching a consensus in two annotation phases. In this setting, a high inter-annotator agreement was observed. Finally, all results were checked against annotations of the rest of the corpus to assure corpus-wide consistency. The public release of the NCBI disease corpus contains 6892 disease mentions, which are mapped to 790 unique disease concepts. Of these, 88% link to a MeSH identifier, while the rest contain an OMIM identifier. We were able to link 91% of the mentions to a single disease concept, while the rest are described as a combination of concepts. In order to help researchers use the corpus to design and test disease identification methods, we have prepared the corpus as training, testing and development sets. To demonstrate its utility, we conducted a benchmarking experiment where we compared three different knowledge-based disease normalization methods with a best performance in F-measure of 63.7%. These results show that the NCBI disease corpus has the potential to significantly improve the state-of-the-art in disease name recognition and normalization research, by providing a high-quality gold standard thus enabling the development of machine-learning based approaches for such tasks. The NCBI disease corpus, guidelines and other associated resources are available at: http://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/." }, { "pmid": "23969135", "title": "DNorm: disease name normalization with pairwise learning to rank.", "abstract": "MOTIVATION\nDespite the central role of diseases in biomedical research, there have been much fewer attempts to automatically determine which diseases are mentioned in a text-the task of disease name normalization (DNorm)-compared with other normalization tasks in biomedical text mining research.\n\n\nMETHODS\nIn this article we introduce the first machine learning approach for DNorm, using the NCBI disease corpus and the MEDIC vocabulary, which combines MeSH® and OMIM. Our method is a high-performing and mathematically principled framework for learning similarities between mentions and concept names directly from training data. 
The technique is based on pairwise learning to rank, which has not previously been applied to the normalization task but has proven successful in large optimization problems for information retrieval.\n\n\nRESULTS\nWe compare our method with several techniques based on lexical normalization and matching, MetaMap and Lucene. Our algorithm achieves 0.782 micro-averaged F-measure and 0.809 macro-averaged F-measure, an increase over the highest performing baseline method of 0.121 and 0.098, respectively.\n\n\nAVAILABILITY\nThe source code for DNorm is available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/DNorm, along with a web-based demonstration and links to the NCBI disease corpus. Results on PubMed abstracts are available in PubTator: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator ." }, { "pmid": "20678228", "title": "Automatic de-identification of textual documents in the electronic health record: a review of recent research.", "abstract": "BACKGROUND\nIn the United States, the Health Insurance Portability and Accountability Act (HIPAA) protects the confidentiality of patient data and requires the informed consent of the patient and approval of the Internal Review Board to use data for research purposes, but these requirements can be waived if data is de-identified. For clinical data to be considered de-identified, the HIPAA \"Safe Harbor\" technique requires 18 data elements (called PHI: Protected Health Information) to be removed. The de-identification of narrative text documents is often realized manually, and requires significant resources. Well aware of these issues, several authors have investigated automated de-identification of narrative text documents from the electronic health record, and a review of recent research in this domain is presented here.\n\n\nMETHODS\nThis review focuses on recently published research (after 1995), and includes relevant publications from bibliographic queries in PubMed, conference proceedings, the ACM Digital Library, and interesting publications referenced in already included papers.\n\n\nRESULTS\nThe literature search returned more than 200 publications. The majority focused only on structured data de-identification instead of narrative text, on image de-identification, or described manual de-identification, and were therefore excluded. Finally, 18 publications describing automated text de-identification were selected for detailed analysis of the architecture and methods used, the types of PHI detected and removed, the external resources used, and the types of clinical documents targeted. All text de-identification systems aimed to identify and remove person names, and many included other types of PHI. Most systems used only one or two specific clinical document types, and were mostly based on two different groups of methodologies: pattern matching and machine learning. Many systems combined both approaches for different types of PHI, but the majority relied only on pattern matching, rules, and dictionaries.\n\n\nCONCLUSIONS\nIn general, methods based on dictionaries performed better with PHI that is rarely mentioned in clinical text, but are more difficult to generalize. Methods based on machine learning tend to perform better, especially with PHI that is not mentioned in the dictionaries used. Finally, the issues of anonymization, sufficient performance, and \"over-scrubbing\" are discussed in this publication." 
}, { "pmid": "18614002", "title": "Seeking a new biology through text mining.", "abstract": "Tens of thousands of biomedical journals exist, and the deluge of new articles in the biomedical sciences is leading to information overload. Hence, there is much interest in text mining, the use of computational tools to enhance the human ability to parse and understand complex text." }, { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." }, { "pmid": "20819853", "title": "Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications.", "abstract": "We aim to build and evaluate an open-source natural language processing system for information extraction from electronic medical record clinical free-text. We describe and evaluate our system, the clinical Text Analysis and Knowledge Extraction System (cTAKES), released open-source at http://www.ohnlp.org. The cTAKES builds on existing open-source technologies-the Unstructured Information Management Architecture framework and OpenNLP natural language processing toolkit. Its components, specifically trained for the clinical domain, create rich linguistic and semantic annotations. Performance of individual components: sentence boundary detector accuracy=0.949; tokenizer accuracy=0.949; part-of-speech tagger accuracy=0.936; shallow parser F-score=0.924; named entity recognizer and system-level evaluation F-score=0.715 for exact and 0.824 for overlapping spans, and accuracy for concept mapping, negation, and status attributes for exact and overlapping spans of 0.957, 0.943, 0.859, and 0.580, 0.939, and 0.839, respectively. Overall performance is discussed against five applications. The cTAKES annotations are the foundation for methods and modules for higher-level semantic processing of clinical free-text." }, { "pmid": "20442141", "title": "Automated evaluation of electronic discharge notes to assess quality of care for cardiovascular diseases using Medical Language Extraction and Encoding System (MedLEE).", "abstract": "The objective of this study was to develop and validate an automated acquisition system to assess quality of care (QC) measures for cardiovascular diseases. 
This system combining searching and retrieval algorithms was designed to extract QC measures from electronic discharge notes and to estimate the attainment rates to the current standards of care. It was developed on the patients with ST-segment elevation myocardial infarction and tested on the patients with unstable angina/non-ST-segment elevation myocardial infarction, both diseases sharing almost the same QC measures. The system was able to reach a reasonable agreement (kappa value) with medical experts from 0.65 (early reperfusion rate) to 0.97 (beta-blockers and lipid-lowering agents before discharge) for different QC measures in the test set, and then applied to evaluate QC in the patients who underwent coronary artery bypass grafting surgery. The result has validated a new tool to reliably extract QC measures for cardiovascular diseases." }, { "pmid": "12668688", "title": "\"Understanding\" medical school curriculum content using KnowledgeMap.", "abstract": "OBJECTIVE\nTo describe the development and evaluation of computational tools to identify concepts within medical curricular documents, using information derived from the National Library of Medicine's Unified Medical Language System (UMLS). The long-term goal of the KnowledgeMap (KM) project is to provide faculty and students with an improved ability to develop, review, and integrate components of the medical school curriculum.\n\n\nDESIGN\nThe KM concept identifier uses lexical resources partially derived from the UMLS (SPECIALIST lexicon and Metathesaurus), heuristic language processing techniques, and an empirical scoring algorithm. KM differentiates among potentially matching Metathesaurus concepts within a source document. The authors manually identified important \"gold standard\" biomedical concepts within selected medical school full-content lecture documents and used these documents to compare KM concept recognition with that of a known state-of-the-art \"standard\"-the National Library of Medicine's MetaMap program.\n\n\nMEASUREMENTS\nThe number of \"gold standard\" concepts in each lecture document identified by either KM or MetaMap, and the cause of each failure or relative success in a random subset of documents.\n\n\nRESULTS\nFor 4,281 \"gold standard\" concepts, MetaMap matched 78% and KM 82%. Precision for \"gold standard\" concepts was 85% for MetaMap and 89% for KM. The heuristics of KM accurately matched acronyms, concepts underspecified in the document, and ambiguous matches. The most frequent cause of matching failures was absence of target concepts from the UMLS Metathesaurus.\n\n\nCONCLUSION\nThe prototypic KM system provided an encouraging rate of concept extraction for representative medical curricular texts. Future versions of KM should be evaluated for their ability to allow administrators, lecturers, and students to navigate through the medical curriculum to locate redundancies, find interrelated information, and identify omissions. In addition, the ability of KM to meet specific, personal information needs should be assessed." }, { "pmid": "16872495", "title": "Extracting principal diagnosis, co-morbidity and smoking status for asthma research: evaluation of a natural language processing system.", "abstract": "BACKGROUND\nThe text descriptions in electronic medical records are a rich source of information. 
We have developed a Health Information Text Extraction (HITEx) tool and used it to extract key findings for a research study on airways disease.\n\n\nMETHODS\nThe principal diagnosis, co-morbidity and smoking status extracted by HITEx from a set of 150 discharge summaries were compared to an expert-generated gold standard.\n\n\nRESULTS\nThe accuracy of HITEx was 82% for principal diagnosis, 87% for co-morbidity, and 90% for smoking status extraction, when cases labeled \"Insufficient Data\" by the gold standard were excluded.\n\n\nCONCLUSION\nWe consider the results promising, given the complexity of the discharge summaries and the extraction tasks." }, { "pmid": "27283952", "title": "TaggerOne: joint named entity recognition and normalization with semi-Markov Models.", "abstract": "MOTIVATION\nText mining is increasingly used to manage the accelerating pace of the biomedical literature. Many text mining applications depend on accurate named entity recognition (NER) and normalization (grounding). While high performing machine learning methods trainable for many entity types exist for NER, normalization methods are usually specialized to a single entity type. NER and normalization systems are also typically used in a serial pipeline, causing cascading errors and limiting the ability of the NER system to directly exploit the lexical information provided by the normalization.\n\n\nMETHODS\nWe propose the first machine learning model for joint NER and normalization during both training and prediction. The model is trainable for arbitrary entity types and consists of a semi-Markov structured linear classifier, with a rich feature approach for NER and supervised semantic indexing for normalization. We also introduce TaggerOne, a Java implementation of our model as a general toolkit for joint NER and normalization. TaggerOne is not specific to any entity type, requiring only annotated training data and a corresponding lexicon, and has been optimized for high throughput.\n\n\nRESULTS\nWe validated TaggerOne with multiple gold-standard corpora containing both mention- and concept-level annotations. Benchmarking results show that TaggerOne achieves high performance on diseases (NCBI Disease corpus, NER f-score: 0.829, normalization f-score: 0.807) and chemicals (BioCreative 5 CDR corpus, NER f-score: 0.914, normalization f-score 0.895). These results compare favorably to the previous state of the art, notwithstanding the greater flexibility of the model. We conclude that jointly modeling NER and normalization greatly improves performance.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe TaggerOne source code and an online demonstration are available at: http://www.ncbi.nlm.nih.gov/bionlp/taggerone\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "28502909", "title": "Character-level neural network for biomedical named entity recognition.", "abstract": "Biomedical named entity recognition (BNER), which extracts important named entities such as genes and proteins, is a challenging task in automated systems that mine knowledge in biomedical texts. The previous state-of-the-art systems required large amounts of task-specific knowledge in the form of feature engineering, lexicons and data pre-processing to achieve high performance. 
In this paper, we introduce a novel neural network architecture that benefits from both word- and character-level representations automatically, by using a combination of bidirectional long short-term memory (LSTM) and conditional random field (CRF) eliminating the need for most feature engineering tasks. We evaluate our system on two datasets: JNLPBA corpus and the BioCreAtIvE II Gene Mention (GM) corpus. We obtained state-of-the-art performance by outperforming the previous systems. To the best of our knowledge, we are the first to investigate the combination of deep neural networks, CRF, word embeddings and character-level representation in recognizing biomedical named entities." }, { "pmid": "23787338", "title": "Representation learning: a review and new perspectives.", "abstract": "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning." }, { "pmid": "18817555", "title": "Abbreviation definition identification based on automatic precision estimates.", "abstract": "BACKGROUND\nThe rapid growth of biomedical literature presents challenges for automatic text processing, and one of the challenges is abbreviation identification. The presence of unrecognized abbreviations in text hinders indexing algorithms and adversely affects information retrieval and extraction. Automatic abbreviation definition identification can help resolve these issues. However, abbreviations and their definitions identified by an automatic process are of uncertain validity. Due to the size of databases such as MEDLINE only a small fraction of abbreviation-definition pairs can be examined manually. An automatic way to estimate the accuracy of abbreviation-definition pairs extracted from text is needed. In this paper we propose an abbreviation definition identification algorithm that employs a variety of strategies to identify the most probable abbreviation definition. In addition our algorithm produces an accuracy estimate, pseudo-precision, for each strategy without using a human-judged gold standard. The pseudo-precisions determine the order in which the algorithm applies the strategies in seeking to identify the definition of an abbreviation.\n\n\nRESULTS\nOn the Medstract corpus our algorithm produced 97% precision and 85% recall which is higher than previously reported results. We also annotated 1250 randomly selected MEDLINE records as a gold standard. On this set we achieved 96.5% precision and 83.2% recall. 
This compares favourably with the well known Schwartz and Hearst algorithm.\n\n\nCONCLUSION\nWe developed an algorithm for abbreviation identification that uses a variety of strategies to identify the most probable definition for an abbreviation and also produces an estimated accuracy of the result. This process is purely automatic." }, { "pmid": "25879978", "title": "SimConcept: a hybrid approach for simplifying composite named entities in biomedical text.", "abstract": "One particular challenge in biomedical named entity recognition (NER) and normalization is the identification and resolution of composite named entities, where a single span refers to more than one concept (e.g., BRCA1/2). Previous NER and normalization studies have either ignored composite mentions, used simple ad hoc rules, or only handled coordination ellipsis, making a robust approach for handling multitype composite mentions greatly needed. To this end, we propose a hybrid method integrating a machine-learning model with a pattern identification strategy to identify the individual components of each composite mention. Our method, which we have named SimConcept, is the first to systematically handle many types of composite mentions. The technique achieves high performance in identifying and resolving composite mentions for three key biological entities: genes (90.42% in F-measure), diseases (86.47% in F-measure), and chemicals (86.05% in F-measure). Furthermore, our results show that using our SimConcept method can subsequently improve the performance of gene and disease concept recognition and normalization. SimConcept is available for download at: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/SimConcept/." } ]
BMC Medical Informatics and Decision Making
30537977
PMC6290509
10.1186/s12911-018-0677-8
Improving palliative care with deep learning
BackgroundAccess to palliative care is a key quality metric which most healthcare organizations strive to improve. The primary challenges to increasing palliative care access are a combination of physicians over-estimating patient prognoses and a general shortage of palliative care staff. This, in combination with treatment inertia, can result in a mismatch between patients' wishes and their actual care towards the end of life.MethodsIn this work, we address this problem, with Institutional Review Board approval, using machine learning and Electronic Health Record (EHR) data of patients. We train a Deep Neural Network model on the EHR data of patients from previous years to predict mortality within the next 3-12 month period. This prediction is used as a proxy decision for identifying patients who could benefit from palliative care.ResultsThe EHR data of all admitted patients are evaluated every night by this algorithm, and the palliative care team is automatically notified of the list of patients with a positive prediction. In addition, we present a novel technique for decision interpretation, which we use to provide explanations for the model's predictions.ConclusionThe automatic screening and notification saves the palliative care team the burden of time-consuming chart reviews of all patients, and allows them to take a proactive approach in reaching out to such patients rather than relying on referrals from the treating physicians.
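As a rough, hypothetical illustration of the kind of model described above (a deep neural network over coded EHR features predicting mortality in a 3-12 month window), the sketch below shows a small feed-forward binary classifier in PyTorch. The feature dimension, layer sizes and training loop are assumptions for illustration, not the architecture reported by the authors.

```python
# Illustrative sketch of an EHR-feature mortality classifier (not the authors' model).
# Inputs are assumed to be fixed-length count/indicator vectors of coded EHR events
# (diagnoses, procedures, medications) from an observation window before admission.
import torch
import torch.nn as nn

class MortalityClassifier(nn.Module):
    def __init__(self, num_features=5000, hidden=512, dropout=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden, hidden // 2), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden // 2, 1),   # single logit: risk of death in the target window
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = MortalityClassifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Toy batch: 32 patients, binary label = death within the target window.
x = torch.rand(32, 5000)
y = torch.randint(0, 2, (32,)).float()

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```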
Related work
Accurate prognostic information is valuable to patients and caregivers (for setting expectations, planning for care and end of life), and to clinicians (for planning treatment) [7, 8]. Several studies have shown that clinicians generally tend to be over-optimistic in their estimates of the prognoses of terminally ill patients [5, 9–11]. It has also been shown that no subset of clinicians is better at late-stage prognostication than others [12, 13]. In practice, however, the most common method of predicting survival remains the clinician’s subjective judgment [12]. Several solutions exist that attempt to make patient prognosis more objective and automated. Many of these solutions are models that produce a score based on the patient’s clinical and biological parameters, which can then be mapped to an expected survival rate.
Prognostic tools in palliative care
The Palliative Performance Scale [14] was developed as a modification of the Karnofsky Performance Status Scale (KPS) [15] to the palliative care setting, and is calculated from observable factors such as degree of ambulation, ability to do activities, ability to do self-care, food and fluid intake, and state of consciousness. The Palliative Prognostic Score (PPS) was also constructed for the palliative care setting, focusing on terminally ill cancer patients [16]. The PPS is calculated with multiple regression analysis based on the following variables: Clinical Prediction of Survival (CPS), Karnofsky Performance Status (KPS), anorexia, dyspnea, total white blood count (WBC), and lymphocyte percentage. The Palliative Prognostic Index (PPI), developed around the same time as the PPS, also calculates a multiple-regression-based score using performance status, oral intake, edema, dyspnea at rest, and delirium. These scores are difficult to implement at scale since they require face-to-face clinical assessment and a clinician’s prediction of survival. Furthermore, these scores were designed to be used within the palliative care setting, where the patient is already in an advanced stage of the disease, as opposed to identifying such patients earlier.
Prognostic tools in the intensive care unit
There are also prognostic scoring models that are commonly used in the Intensive Care Unit. The APACHE-II (Acute Physiology, Age, Chronic Health Evaluation) Score predicts hospital mortality risk for critically ill hospitalized adults in the ICU [17]. This model has more recently been refined with the APACHE-III Score, which uses factors such as major medical and surgical disease categories, acute physiologic abnormalities, age, preexisting functional limitations, major comorbidities, and treatment location immediately prior to ICU admission [18]. Another commonly used scoring system in the ICU is the Simplified Acute Physiology Score, or SAPS II [19], which is calculated from the patient’s physiological and underlying disease variables. While these scores are useful for the treatment team when the patient is already in the ICU, they are of limited use for identifying patients at risk of longer-term mortality while those patients are still capable of having a meaningful discussion of their goals and values, so that they can be set on an alternative path of care.
Prognostic tools for early identification
A number of studies and tools have been developed that aim to identify terminally ill patients early enough for an end-of-life plan and care to be meaningful. CriSTAL (Criteria for Screening and Triaging to Appropriate aLternative care) was developed to identify elderly patients nearing end of life, and quantifies the risk of death in the hospital or soon after discharge [20]. CriSTAL provides a checklist of eighteen predictors with the goal of identifying the dying patient. CARING is a tool that was developed to identify patients who could benefit from palliative care [21]. The goal was to use six simple criteria to identify patients who were at risk of death within 1 year. PREDICT [22] is a screening tool also based on six prognostic indicators, which were refined from CARING. The model was derived from 976 patients. The Intermountain Mortality Risk Score is an all-cause mortality prediction based on common laboratory tests [23]. The model provides scores for 30-day, 1-year, and 5-year mortality risk. It was trained on a population of 71,921 and tested on 47,458. Cowen et al. [24] proposed using a twenty-four-factor prediction rule at the time of hospital admission to identify patients at high risk of 30-day mortality, and to organize care activities using this prediction as a context. One of their motivations was to have a rule built from a single set of factors rather than being disease-specific. The model was derived from 56,003 patients. Meffert et al. [25] proposed a scoring method based on logistic regression on six factors to identify hospitalized patients in need of palliative care. In this prospective study, they asked the treating physician at the time of discharge whether the patient had palliative care needs. The trained model was then used to identify such patients at the time of admission. The model was derived from 39,849 patients. Ramachandran et al. [26] developed a 30-day mortality prediction tool for hospitalized cancer patients. Their model used eight variables based on information from the first 24 h of admission, laboratory results, and vital signs. A logistic regression model was developed from these eight variables and used as a scoring function. The model was derived from 3062 patients. Amarasingham et al. [27] built a tool to screen patients admitted with heart failure and identify those at risk of 30-day readmission or death. Their regression model uses a combination of the Tabak Mortality Score [28], markers of social, behavioral, and utilization activity that could be obtained electronically, ICD-9-CM codes specific to depression and anxiety, and billing and administrative data. Though this study was not specifically focused on palliative care, the methodology of using EHR system data is relevant to our work. The model was derived from 1372 patients. Makar et al. [29] used only Medicare claims data on an older population (≥ 65 years) to predict mortality within six months. By limiting their model to administrative data, they hypothesized an easier deployment scenario, thereby making automated prognostic models more prevalent. The model was derived separately on four cohorts (one per disease type) with 20,000 patients per cohort.
Prognosis in the age of big data
The rapid rise and proliferation of EHR systems in healthcare over the past couple of decades, combined with advances in machine learning techniques on high-dimensional data, provides a unique opportunity to make contributions in healthcare, especially in precision medicine and disease prognosis [30, 31]. All the tools described above, and those we reviewed [32–36], have at least one of the following limitations: they were derived from small data sets (limited to specific studies or cohorts), used too few variables (intentionally, to make the model portable or to avoid overfitting), were too simple to capture the complexities and subtleties of human health, or were limited to certain sub-populations (based on disease type, age, etc.). We address these limitations in our work.
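Several of the tools surveyed above share the same pattern: a small set of admission-time factors entering a logistic regression whose output is used as a risk score and validated with threshold-free discrimination measures such as the AUC. The sketch below illustrates that pattern on synthetic data; the factor names, coefficients, and data are illustrative assumptions and do not reproduce any of the published scores.

```python
# Illustrative "few factors -> logistic regression -> risk score" sketch.
# All factor names and the synthetic outcome are assumptions for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(40, 95, n),          # age
    rng.integers(0, 2, n),            # advanced cancer (yes/no)
    rng.integers(0, 2, n),            # ICU admission this year
    rng.integers(0, 2, n),            # >1 hospitalization in past year
    rng.normal(3.8, 0.6, n),          # serum albumin (g/dL)
    rng.integers(0, 2, n),            # nursing facility resident
])
# Synthetic 1-year mortality outcome loosely tied to the factors.
logit = (-6 + 0.05 * X[:, 0] + 1.2 * X[:, 1] + 0.8 * X[:, 2]
         + 0.6 * X[:, 3] - 0.7 * (X[:, 4] - 3.8) + 0.5 * X[:, 5])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
score_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
risk = score_model.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, risk))  # threshold-free discrimination
```

The learned coefficients play the role of the point weights in checklist-style scores, and the validation step mirrors the discrimination analyses reported for the tools above.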
[ "26417923", "10369435", "10391577", "27560380", "9616448", "17040144", "3928249", "1959406", "25062815", "20940649", "17667314", "28018571", "26809201", "9040894", "26017442", "25941668" ]
[ { "pmid": "26417923", "title": "The Growth of Palliative Care in U.S. Hospitals: A Status Report.", "abstract": "BACKGROUND\nPalliative care is expanding rapidly in the United States.\n\n\nOBJECTIVE\nTo examine variation in access to hospital palliative care.\n\n\nMETHODS\nData were obtained from the American Hospital Association (AHA) Annual Surveys™ for Fiscal Years 2012 and 2013, the National Palliative Care Registry™, the Dartmouth Atlas of Healthcare, the American Census Bureau's American Community Survey (ACS), web searches, and telephone interviews of hospital administrators and program directors. Multivariable logistic regression was used to examine predictors of hospital palliative care programs.\n\n\nRESULTS\nSixty-seven percent of hospitals with 50 or more total facility beds reported a palliative care program. Institutional characteristics were strongly associated with the presence of a hospital palliative care program. Ninety percent of hospitals with 300 beds or more were found to have palliative care programs as compared to 56% of hospitals with fewer than 300 beds. Tax status was also a significant predictor. Not-for-profit hospitals and public hospitals were, respectively, 4.8 times and 7.1 times more likely to have a palliative care program as compared to for-profit hospitals. Palliative care penetration was highest in the New England (88% of hospitals), Pacific (77% of hospitals), and mid-Atlantic (77% of hospitals) states and lowest in the west south central (43% of hospitals) and east south central (42% of hospitals) states.\n\n\nCONCLUSIONS\nThis study demonstrates continued steady growth in the number of hospital palliative care programs in the United States, with almost universal access to services in large U.S. hospitals and academic medical centers. Nevertheless access to palliative care remains uneven and depends on accidents of geography and hospital ownership." }, { "pmid": "10369435", "title": "Information needs in terminal illness.", "abstract": "Despite evidence that doctor-patient communication affects important patient outcomes, patient expectations are often not met. Communication is especially important in terminal illness, when the appropriate course of action may depend more on patient values than on medical dogma. We sought to describe the issues important to terminally ill patients receiving palliative care and to determine whether patient characteristics influence the needs of these patients. We utilized a multimethod approach, first conducting interviews with 22 terminally ill individuals, then using these data to develop a more structured instrument which was administered to a second population of 56 terminally ill patients. Patient needs and concerns were described and associations between patient characteristics and issues of importance were evaluated. Seven key issues were identified in the initial interviews: change in functional status or activity level; role change; symptoms, especially pain; stress of the illness on family members; loss of control; financial burden and conflict between wanting to know what is going on and fearing bad news. Overall, respondent needs were both disease- and illness-oriented. Few easily identifiable patient characteristics were associated with expressed concerns or needs, suggesting that physicians need to individually assess patient needs. Terminally ill patients receiving palliative care had needs that were broad in scope. 
Given that few patient characteristics predicted responses, and that the majority opinion may not accurately reflect that of an individual patient, health care providers must be aware of the diverse concerns among this population and individualize assessment of each patient's needs and expectations." }, { "pmid": "10391577", "title": "The relative accuracy of the clinical estimation of the duration of life for patients with end of life cancer.", "abstract": "BACKGROUND\nAlthough the prediction of the duration of life of patients with end of life cancer most often relies on the clinical estimation of survival (CES) made by the treating physician, the accuracy and practical value of CES remains controversial.\n\n\nMETHODS\nThe authors prospectively evaluated the accuracy of CES in an inception and population-based cohort of 233 cancer patients who were seen at the onset of their terminal phase. They also systematically reviewed the literature on CES in advanced or end-stage cancer patients in MEDLINE, CANCERLIT, and EMBASE data bases, using two search strategies developed by a research librarian.\n\n\nRESULTS\nCES had low sensitivity in detecting patients who died within shorter time frames (< or =2 months), and a tendency to overestimate survival was noted. A moderate correlation was observed between actual survival (AS) and CES (Pearson correlation coefficient = 0.47, intraclass correlation coefficient = 0.46, weighted kappa coefficient = 0.42).\n\n\nCONCLUSIONS\nTreating physicians appear to overestimate the duration of life of end of life ill cancer patients, particularly those patients who die early in the terminal phase and who may potentially benefit from earlier participation in palliative care programs. CES should be considered one of many criteria, rather than a unique criterion, by which to choose therapeutic intervention or health care programs for patients in the end of life cancer phase." }, { "pmid": "27560380", "title": "A Systematic Review of Predictions of Survival in Palliative Care: How Accurate Are Clinicians and Who Are the Experts?", "abstract": "BACKGROUND\nPrognostic accuracy in palliative care is valued by patients, carers, and healthcare professionals. Previous reviews suggest clinicians are inaccurate at survival estimates, but have only reported the accuracy of estimates on patients with a cancer diagnosis.\n\n\nOBJECTIVES\nTo examine the accuracy of clinicians' estimates of survival and to determine if any clinical profession is better at doing so than another.\n\n\nDATA SOURCES\nMEDLINE, Embase, CINAHL, and the Cochrane Database of Systematic Reviews and Trials. All databases were searched from the start of the database up to June 2015. Reference lists of eligible articles were also checked.\n\n\nELIGIBILITY CRITERIA\n\n\n\nINCLUSION CRITERIA\npatients over 18, palliative population and setting, quantifiable estimate based on real patients, full publication written in English.\n\n\nEXCLUSION CRITERIA\nif the estimate was following an intervention, such as surgery, or the patient was artificially ventilated or in intensive care.\n\n\nSTUDY APPRAISAL AND SYNTHESIS METHODS\nA quality assessment was completed with the QUIPS tool. Data on the reported accuracy of estimates and information about the clinicians were extracted. 
Studies were grouped by type of estimate: categorical (the clinician had a predetermined list of outcomes to choose from), continuous (open-ended estimate), or probabilistic (likelihood of surviving a particular time frame).\n\n\nRESULTS\n4,642 records were identified; 42 studies fully met the review criteria. Wide variation was shown with categorical estimates (range 23% to 78%) and continuous estimates ranged between an underestimate of 86 days to an overestimate of 93 days. The four papers which used probabilistic estimates tended to show greater accuracy (c-statistics of 0.74-0.78). Information available about the clinicians providing the estimates was limited. Overall, there was no clear \"expert\" subgroup of clinicians identified.\n\n\nLIMITATIONS\nHigh heterogeneity limited the analyses possible and prevented an overall accuracy being reported. Data were extracted using a standardised tool, by one reviewer, which could have introduced bias. Devising search terms for prognostic studies is challenging. Every attempt was made to devise search terms that were sufficiently sensitive to detect all prognostic studies; however, it remains possible that some studies were not identified.\n\n\nCONCLUSION\nStudies of prognostic accuracy in palliative care are heterogeneous, but the evidence suggests that clinicians' predictions are frequently inaccurate. No sub-group of clinicians was consistently shown to be more accurate than any other.\n\n\nIMPLICATIONS OF KEY FINDINGS\nFurther research is needed to understand how clinical predictions are formulated and how their accuracy can be improved." }, { "pmid": "17040144", "title": "Use of Palliative Performance Scale in end-of-life prognostication.", "abstract": "BACKGROUND\nCurrent literature suggests clinicians are not accurate in prognostication when estimating survival times of palliative care patients. There are reported studies in which the Palliative Performance Scale (PPS) is used as a prognostic tool to predict survival of these patients. Yet, their findings are different in terms of the presence of distinct PPS survival profiles and significant covariates.\n\n\nOBJECTIVE\nThis study investigates the use of PPS as a prognostication tool for estimating survival times of patients with life-limiting illness in a palliative care unit. These findings are compared to those from earlier studies in terms of PPS survival profiles and covariates.\n\n\nMETHODS\nThis is a retrospective cohort study in which the admission PPS scores of 733 palliative care patients admitted between March 3, 2000 and August 9, 2002 were examined for survival patterns. Other predictors for survival included were age, gender, and diagnosis.\n\n\nRESULTS\nStudy findings revealed that admission PPS score was a strong predictor of survival in patients already identified as palliative, along with gender and age, but diagnosis was not significantly related to survival. We also found that scores of PPS 10% through PPS 50% led to distinct survival curves, and male patients had consistently lower survival rates than females regardless of PPS score.\n\n\nCONCLUSION\nOur findings differ somewhat from earlier studies that suggested the presence of three distinct PPS survival profiles or bands, with diagnosis and noncancer as significant covariates. Such differences are likely attributed to the size and characteristics of the patient populations involved and further analysis with larger patient samples may help clarify PPS use in prognosis." 
}, { "pmid": "3928249", "title": "APACHE II: a severity of disease classification system.", "abstract": "This paper presents the form and validation results of APACHE II, a severity of disease classification system. APACHE II uses a point score based upon initial values of 12 routine physiologic measurements, age, and previous health status to provide a general measure of severity of disease. An increasing score (range 0 to 71) was closely correlated with the subsequent risk of hospital death for 5815 intensive care admissions from 13 hospitals. This relationship was also found for many common diseases. When APACHE II scores are combined with an accurate description of disease, they can prognostically stratify acutely ill patients and assist investigators comparing the success of new or differing forms of therapy. This scoring index can be used to evaluate the use of hospital resources and compare the efficacy of intensive care in different hospitals or over time." }, { "pmid": "1959406", "title": "The APACHE III prognostic system. Risk prediction of hospital mortality for critically ill hospitalized adults.", "abstract": "The objective of this study was to refine the APACHE (Acute Physiology, Age, Chronic Health Evaluation) methodology in order to more accurately predict hospital mortality risk for critically ill hospitalized adults. We prospectively collected data on 17,440 unselected adult medical/surgical intensive care unit (ICU) admissions at 40 US hospitals (14 volunteer tertiary-care institutions and 26 hospitals randomly chosen to represent intensive care services nationwide). We analyzed the relationship between the patient's likelihood of surviving to hospital discharge and the following predictive variables: major medical and surgical disease categories, acute physiologic abnormalities, age, preexisting functional limitations, major comorbidities, and treatment location immediately prior to ICU admission. The APACHE III prognostic system consists of two options: (1) an APACHE III score, which can provide initial risk stratification for severely ill hospitalized patients within independently defined patient groups; and (2) an APACHE III predictive equation, which uses APACHE III score and reference data on major disease categories and treatment location immediately prior to ICU admission to provide risk estimates for hospital mortality for individual ICU patients. A five-point increase in APACHE III score (range, 0 to 299) is independently associated with a statistically significant increase in the relative risk of hospital death (odds ratio, 1.10 to 1.78) within each of 78 major medical and surgical disease categories. The overall predictive accuracy of the first-day APACHE III equation was such that, within 24 h of ICU admission, 95 percent of ICU admissions could be given a risk estimate for hospital death that was within 3 percent of that actually observed (r2 = 0.41; receiver operating characteristic = 0.90). Recording changes in the APACHE III score on each subsequent day of ICU therapy provided daily updates in these risk estimates. When applied across the individual ICUs, the first-day APACHE III equation accounted for the majority of variation in observed death rates (r2 = 0.90, p less than 0.0001)." 
}, { "pmid": "25062815", "title": "PREDICT: a diagnostic accuracy study of a tool for predicting mortality within one year: who should have an advance healthcare directive?", "abstract": "BACKGROUND\nCARING is a screening tool developed to identify patients who have a high likelihood of death in 1 year.\n\n\nAIM\nThis study sought to validate a modified CARING tool (termed PREDICT) using a population of patients presenting to the Emergency Department.\n\n\nSETTING/PARTICIPANTS\nIn total, 1000 patients aged over 55 years who were admitted to hospital via the Emergency Department between January and June 2009 were eligible for inclusion in this study.\n\n\nDESIGN\nData on the six prognostic indicators comprising PREDICT were obtained retrospectively from patient records. One-year mortality data were obtained from the State Death Registry. Weights were applied to each PREDICT criterion, and its final score ranged from 0 to 44. Receiver operator characteristic analyses and diagnostic accuracy statistics were used to assess the accuracy of PREDICT in identifying 1-year mortality.\n\n\nRESULTS\nThe sample comprised 976 patients with a median (interquartile range) age of 71 years (62-81 years) and a 1-year mortality of 23.4%. In total, 50% had ≥1 PREDICT criteria with a 1-year mortality of 40.4%. Receiver operator characteristic analysis gave an area under the curve of 0.86 (95% confidence interval: 0.83-0.89). Using a cut-off of 13 points, PREDICT had a 95.3% (95% confidence interval: 93.6-96.6) specificity and 53.9% (95% confidence interval: 47.5-60.3) sensitivity for predicting 1-year mortality. PREDICT was simpler than the CARING criteria and identified 158 patients per 1000 admitted who could benefit from advance care planning.\n\n\nCONCLUSION\nPREDICT was successfully applied to the Australian healthcare system with findings similar to the original CARING study conducted in the United States. This tool could improve end-of-life care by identifying who should have advance care planning or an advance healthcare directive." }, { "pmid": "20940649", "title": "An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data.", "abstract": "BACKGROUND\nA real-time electronic predictive model that identifies hospitalized heart failure (HF) patients at high risk for readmission or death may be valuable to clinicians and hospitals who care for these patients.\n\n\nMETHODS\nAn automated predictive model for 30-day readmission and death was derived and validated from clinical and nonclinical risk factors present on admission in 1372 HF hospitalizations to a major urban hospital between January 2007 and August 2008. Data were extracted from an electronic medical record. The performance of the electronic model was compared with mortality and readmission models developed by the Center for Medicaid and Medicare Services (CMS models) and a HF mortality model derived from the Acute Decompensated Heart Failure Registry (ADHERE model).\n\n\nRESULTS\nThe 30-day mortality and readmission rates were 3.1% and 24.1% respectively. The electronic model demonstrated good discrimination for 30 day mortality (C statistic 0.86) and readmission (C statistic 0.72) and performed as well, or better than, the ADHERE model and CMS models for both outcomes (C statistic ranges: 0.72-0.73 and 0.56-0.66 for mortality and readmissions respectively; P < 0.05 in all comparisons). 
Markers of social instability and lower socioeconomic status improved readmission prediction in the electronic model (C statistic 0.72 vs. 0.61, P < 0.05).\n\n\nCONCLUSIONS\nClinical and social factors available within hours of hospital presentation and extractable from an EMR predicted mortality and readmission at 30 days. Incorporating complex social factors increased the model's accuracy, suggesting that such factors could enhance risk adjustment models designed to compare hospital readmission rates." }, { "pmid": "17667314", "title": "Using automated clinical data for risk adjustment: development and validation of six disease-specific mortality predictive models for pay-for-performance.", "abstract": "BACKGROUND\nClinically plausible risk-adjustment methods are needed to implement pay-for-performance protocols. Because billing data lacks clinical precision, may be gamed, and chart abstraction is costly, we sought to develop predictive models for mortality that maximally used automated laboratory data and intentionally minimized the use of administrative data (Laboratory Models). We also evaluated the additional value of vital signs and altered mental status (Full Models).\n\n\nMETHODS\nSix models predicting in-hospital mortality for ischemic and hemorrhagic stroke, pneumonia, myocardial infarction, heart failure, and septicemia were derived from 194,903 admissions in 2000-2003 across 71 hospitals that imported laboratory data. Demographics, admission-based labs, International Classification of Diseases (ICD)-9 variables, vital signs, and altered mental status were sequentially entered as covariates. Models were validated using abstractions (629,490 admissions) from 195 hospitals. Finally, we constructed hierarchical models to compare hospital performance using the Laboratory Models and the Full Models.\n\n\nRESULTS\nModel c-statistics ranged from 0.81 to 0.89. As constructed, laboratory findings contributed more to the prediction of death compared with any other risk factor characteristic groups across most models except for stroke, where altered mental status was more important. Laboratory variables were between 2 and 67 times more important in predicting mortality than ICD-9 variables. The hospital-level risk-standardized mortality rates derived from the Laboratory Models were highly correlated with the results derived from the Full Models (average rho = 0.92).\n\n\nCONCLUSIONS\nMortality can be well predicted using models that maximize reliance on objective pathophysiologic variables whereas minimizing input from billing data. Such models should be less susceptible to the vagaries of billing information and inexpensive to implement." }, { "pmid": "28018571", "title": "Short-term Mortality Prediction for Elderly Patients Using Medicare Claims Data.", "abstract": "Risk prediction is central to both clinical medicine and public health. While many machine learning models have been developed to predict mortality, they are rarely applied in the clinical literature, where classification tasks typically rely on logistic regression. One reason for this is that existing machine learning models often seek to optimize predictions by incorporating features that are not present in the databases readily available to providers and policy makers, limiting generalizability and implementation. 
Here we tested a number of machine learning classifiers for prediction of six-month mortality in a population of elderly Medicare beneficiaries, using an administrative claims database of the kind available to the majority of health care payers and providers. We show that machine learning classifiers substantially outperform current widely-used methods of risk prediction-but only when used with an improved feature set incorporating insights from clinical medicine, developed for this study. Our work has applications to supporting patient and provider decision making at the end of life, as well as population health-oriented efforts to identify patients at high risk of poor outcomes." }, { "pmid": "9040894", "title": "An evaluation of machine-learning methods for predicting pneumonia mortality.", "abstract": "This paper describes the application of eight statistical and machine-learning methods to derive computer models for predicting mortality of hospital patients with pneumonia from their findings at initial presentation. The eight models were each constructed based on 9847 patient cases and they were each evaluated on 4352 additional cases. The primary evaluation metric was the error in predicted survival as a function of the fraction of patients predicted to survive. This metric is useful in assessing a model's potential to assist a clinician in deciding whether to treat a given patient in the hospital or at home. We examined the error rates of the models when predicting that a given fraction of patients will survive. We examined survival fractions between 0.1 and 0.6. Over this range, each model's predictive error rate was within 1% of the error rate of every other model. When predicting that approximately 30% of the patients will survive, all the models have an error rate of less than 1.5%. The models are distinguished more by the number of variables and parameters that they contain than by their error rates; these differences suggest which models may be the most amenable to future implementation as paper-based guidelines." }, { "pmid": "26017442", "title": "Deep learning.", "abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech." }, { "pmid": "25941668", "title": "Threshold-free measures for assessing the performance of medical screening tests.", "abstract": "BACKGROUND\nThe area under the receiver operating characteristic curve (AUC) is frequently used as a performance measure for medical tests. It is a threshold-free measure that is independent of the disease prevalence rate. 
We evaluate the utility of the AUC against an alternate measure called the average positive predictive value (AP), in the setting of many medical screening programs where the disease has a low prevalence rate.\n\n\nMETHODS\nWe define the two measures using a common notation system and show that both measures can be expressed as a weighted average of the density function of the diseased subjects. The weights for the AP include prevalence in some form, but those for the AUC do not. These measures are compared using two screening test examples under rare and common disease prevalence rates.\n\n\nRESULTS\nThe AP measures the predictive power of a test, which varies when the prevalence rate changes, unlike the AUC, which is prevalence independent. The relationship between the AP and the prevalence rate depends on the underlying screening/diagnostic test. Therefore, the AP provides relevant information to clinical researchers and regulators about how a test is likely to perform in a screening population.\n\n\nCONCLUSION\nThe AP is an attractive alternative to the AUC for the evaluation and comparison of medical screening tests. It could improve the effectiveness of screening programs during the planning stage." } ]
Environmental Health and Preventive Medicine
30547743
PMC6293619
10.1186/s12199-018-0753-9
Association between time-related work factors and dietary behaviors: results from the Japan Environment and Children’s Study (JECS)
Background: Few studies have examined the association of workhours and shift work (referred to here as “time-related work factors”) with dietary behaviors. We aimed to investigate this association, as well as the dietary behaviors among individuals with occupations characterized by time-related work factors. Methods: A cross-sectional study was performed using data from the Japan Environment and Children’s Study. The study included 39,315 working men. Dietary behaviors (i.e., skipping breakfast, eating out, eating instant food, overeating, and eating fast) were assessed with self-reported information from the Food Frequency Questionnaire. Logistic regression analysis was conducted to examine the associations of time-related work factors with dietary behaviors, and the dietary behavior tendencies among those in occupations characterized by long workhours and/or shift work. Results: Long workhours were associated with high frequencies of skipping breakfast, eating out, eating instant food, overeating, and eating fast. The frequency of shift work was associated with high frequencies of skipping breakfast, eating out, and eating instant food. Several occupations involving long workhours and/or shift work showed specific dietary behaviors; in some occupations, the level of significance changed after adjusting for time-related work factors in addition to other potential confounding factors. Conclusions: Time-related work factors may help explain workers’ dietary behaviors. Long workhours and shift work may lead to poor dietary behaviors. Other factors influenced by occupation itself, such as food environment, may also influence workers’ dietary behaviors. Workhours and/or shift work, and these other work factors, should be given attention in workplace health promotion.
Time-related work factors: workhours and shift work
The number of weekly workhours was calculated from the answers to the questions “How many hours do you work per day?” and “How many days do you work per week?” Accordingly, workhours were categorized into six groups: equal to or less than 40 h; > 40, ≦ 45 h; > 45, ≦ 50 h; > 50, ≦ 55 h; > 55, ≦ 65 h; and more than 65 h per week. Information on shift work was assessed using the question “How often do you have shifts other than the day shift?” Based on the responses, the frequency of shift work was categorized into three groups: none (with “zero” as the answer), > 0, ≦ 8 times, and more than 8 times per month.
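As a concrete illustration of the derivation described above, the sketch below computes weekly workhours from the two survey answers and applies the six workhour bins and three shift-work bins defined in the text. The column names and example values are hypothetical; only the cut points come from the study description.

```python
# Sketch of the exposure coding, with hypothetical column names and example rows.
import pandas as pd

df = pd.DataFrame({
    "hours_per_day": [8, 10, 12, 9],
    "days_per_week": [5, 5, 6, 6],
    "night_shifts_per_month": [0, 4, 12, 0],
})

# Weekly workhours = daily hours x working days per week.
df["weekly_hours"] = df["hours_per_day"] * df["days_per_week"]

# Six workhour groups; pd.cut uses right-closed intervals, matching "> a, <= b".
df["workhour_group"] = pd.cut(
    df["weekly_hours"],
    bins=[0, 40, 45, 50, 55, 65, float("inf")],
    labels=["<=40", ">40-45", ">45-50", ">50-55", ">55-65", ">65"],
)

# Three shift-work groups: none, >0 and <=8, more than 8 times per month.
df["shift_group"] = pd.cut(
    df["night_shifts_per_month"],
    bins=[-1, 0, 8, float("inf")],
    labels=["none", ">0-8", ">8"],
)

print(df[["weekly_hours", "workhour_group", "shift_group"]])
```

The resulting categorical variables are the exposures that would then enter the logistic regression models described in the Methods.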
[ "28800293", "9740562", "9222722", "23026037", "20143038", "23964589", "27091251", "15921512", "21669600", "30272860", "29093304", "24410977", "25912098", "27064130", "12828387", "24988105", "14972072", "23850518" ]
[ { "pmid": "28800293", "title": "Rotating shift work associated with obesity in men from northeastern Ontario.", "abstract": "INTRODUCTION\nWhile some studies have suggested associations between shift work and obesity, few have been population-based or considered multiple shift schedules. Since obesity is linked with several chronic health conditions, understanding which types of shift work influence obesity is important and additional work with more detailed exposure assessment of shift work is warranted.\n\n\nMETHODS\nUsing multivariate polytomous logistic regression, we investigated the associations between shift work (evening/night, rotating and other shift schedules) and overweight and obesity as measured by body mass index cross-sectionally among 1561 men. These men had previously participated as population controls in a prostate cancer case-control study conducted in northeastern Ontario from 1995 to 1999. We obtained information on work history (including shift work), height and weight from the existing self-reported questionnaire data.\n\n\nRESULTS\nWe observed an association for ever (vs. never) having been employed in rotating shift work for both the overweight (OR [odds ratio] = 1.34; 95% CI [confidence interval]: 1.05-1.73) and obese (OR = 1.57; 95% CI: 1.12-2.21) groups. We also observed nonsignificant associations for ever (vs. never) having been employed in permanent evening/night shifts. In addition, we found a significant trend of increased risk for both overweight and obesity with increasing duration of rotating shift work.\n\n\nCONCLUSION\nBoth the positive association between rotating shift work and obesity and the suggested positive association for permanent evening/night shift work in this study are consistent with previous findings. Future population-based research that is able to build on our results while examining additional shift work characteristics will further clarify whether some shift patterns have a greater impact on obesity than others." }, { "pmid": "9740562", "title": "Working hours as a risk factor for acute myocardial infarction in Japan: case-control study.", "abstract": "OBJECTIVE\nTo clarify the extent to which working hours affect the risk of acute myocardial infarction, independent of established risk factors and occupational conditions.\n\n\nDESIGN\nCase-control study.\n\n\nSETTING\nUniversity and general hospitals and routine medical examinations at workplaces in Japan.\n\n\nSUBJECTS\nCases were 195 men aged 30-69 years admitted to hospital with acute myocardial infarction during 1990-3. Controls were 331 men matched at group level for age and occupation who were judged to be free of coronary heart diseases at routine medical examinations in the workplace.\n\n\nMAIN OUTCOME MEASURES\nOdds ratios for myocardial infarction in relation to previous mean daily working hours in a month and changes in mean working hours during previous year.\n\n\nRESULTS\nCompared with men with mean working hours of >7-9 hours, the odds ratio of acute myocardial infarction (adjusted for age and occupation) for men with working hours of >11 hours was 2.44 (95% confidence interval 1.26 to 4.73) and for men with working hours of <=7 hours was 3.07 (1.77 to 5.32). Compared with men who experienced an increase of <=1 hour in mean working hours, the adjusted odds ratio of myocardial infarction for men who experienced an increase of >3 hours was 2.53 (1.34 to 4. 77). 
No appreciable change was observed when odds ratios were adjusted for established and psychosocial risk factors for myocardial infarction.\n\n\nCONCLUSION\nThere was a U shaped relation between the mean working hours and the risk of acute myocardial infarction. There also seemed to be a trend for the risk of infarction to increase with greater increases in mean working hours." }, { "pmid": "9222722", "title": "Difficulties in trying to eat healthier: descriptive analysis of perceived barriers for healthy eating.", "abstract": "OBJECTIVE\nTo determine the factors which are perceived to be important barriers to healthy eating among European adults.\n\n\nDESIGN\nA cross sectional study in which quota-controlled, nationally-representative samples of approximately 1000 adults from each country completed a face-to-face interview-assisted questionnaire.\n\n\nSETTING\nThe survey was conducted between October 1995 and February 1996 in the 15 member states of the European Union.\n\n\nSUBJECTS\n14,331 subjects (aged 15 y upwards) completed the questionnaire. Data were weighted by population size for each country and by sex, age and regional distribution within each member state.\n\n\nRESULTS\nThe study demonstrates a great variability in the perceived barriers to healthy eating between different EU countries. Lack of time was the most frequently mentioned difficulty among EU subjects for not following nutritional advice (24% of total EU sample). This barrier was frequently reported by the younger and the higher education people. Other frequently reported barriers were giving up favourite foods (23%) and willpower (18%). Thus healthy diets do not appear to be viewed as an easy or attractive alternative to current diets. There was wide geographical variation in the number of subjects mentioning price as an important barrier to healthy eating (15% in overall EU sample) ranging from less than 10% in Germany and Italy to 23% in the UK and 24% in Luxembourg." }, { "pmid": "23026037", "title": "Work hours and perceived time barriers to healthful eating among young adults.", "abstract": "OBJECTIVE\nTo describe time-related beliefs and behaviors regarding healthful eating, indicators of dietary intake, and their associations with the number of weekly hours of paid work among young adults.\n\n\nMETHODS\nPopulation-based study in a diverse cohort (N=2287).\n\n\nRESULTS\nWorking > 40 hours per week was associated with time-related barriers to healthful eating most persistently among young adult men. Associations were found among females working both part-time and > 40 hours per week with both time-related barriers and dietary intake.\n\n\nCONCLUSIONS\nFindings indicate that intervention strategies, ideally those addressing time burden, are needed to promote healthful eating among young, working adults." }, { "pmid": "20143038", "title": "Eating and shift work - effects on habits, metabolism and performance.", "abstract": "Compared to individuals who work during the day, shift workers are at higher risk of a range of metabolic disorders and diseases (eg, obesity, cardiovascular disease, peptic ulcers, gastrointestinal problems, failure to control blood sugar levels, and metabolic syndrome). At least some of these complaints may be linked to the quality of the diet and irregular timing of eating, however other factors that affect metabolism are likely to play a part, including psychosocial stress, disrupted circadian rhythms, sleep debt, physical inactivity, and insufficient time for rest and revitalization. 
In this overview, we examine studies on food and nutrition among shift workers [ie, dietary assessment (designs, methods, variables) and the factors that might influence eating habits and metabolic parameters]. The discussion focuses on the quality of existing dietary assessment data, nutritional status parameters (particularly in obesity), the effect of circadian disruptions, and the possible implications for performance at work. We conclude with some dietary guidelines as a basis for managing the nutrition of shift workers." }, { "pmid": "23964589", "title": "Dietary patterns, metabolic markers and subjective sleep measures in resident physicians.", "abstract": "Shiftwork is common in medical training and is necessary for 24-h hospital coverage. Shiftwork poses difficulties not only because of the loss of actual sleep hours but also because it can affect other factors related to lifestyle, such as food intake, physical activity level, and, therefore, metabolic patterns. However, few studies have investigated the nutritional and metabolic profiles of medical personnel receiving training who are participating in shiftwork. The aim of the present study was to identify the possible negative effects of food intake, anthropometric variables, and metabolic and sleep patterns of resident physicians and establish the differences between genders. The study included 72 resident physicians (52 women and 20 men) who underwent the following assessments: nutritional assessment (3-day dietary recall evaluated by the Adapted Healthy Eating Index), anthropometric variables (height, weight, body mass index, and waist circumference), fasting metabolism (lipids, cortisol, high-sensitivity C-reactive protein [hs-CRP], glucose, and insulin), physical activity level (Baecke questionnaire), sleep quality (Pittsburgh Sleep Quality Index; PSQI), and sleepiness (Epworth Sleepiness Scale; ESS). We observed a high frequency of residents who were overweight or obese (65% for men and 21% for women; p = 0.004). Men displayed significantly greater body mass index (BMI) values (p = 0.002) and self-reported weight gain after the beginning of residency (p = 0.008) than women. Poor diet was observed for both genders, including the low intake of vegetables and fruits and the high intake of sweets, saturated fat, cholesterol, and caffeine. The PSQI global scores indicated significant differences between genders (5.9 vs. 7.5 for women and men, respectively; p = 0.01). Women had significantly higher mean high-density lipoprotein cholesterol (HDL-C; p < 0.005), hs-CRP (p = 0.04), and cortisol (p = 0.009) values than men. The elevated prevalence of hypertriglyceridemia and abnormal values of low-density lipoprotein cholesterol (LDL-C; >100 mg/dL) were observed in most individuals. Higher than recommended hs-CRP levels were observed in 66% of the examined resident physicians. Based on current recommendations, a high prevalence of low sleep quality and excessive daytime sleepiness was identified. These observations indicate the need to monitor health status and develop actions to reassess the workload of medical residency and the need for permission to perform extra night shifts for medical residents to avoid worsening health problems in these individuals." }, { "pmid": "27091251", "title": "Poor dietary behaviors among hospital nurses in Seoul, South Korea.", "abstract": "BACKGROUND\nNurses reportedly practice unhealthy behaviors due to unfavorable work schedules. 
Korean nurses are particularly vulnerable to dietary and health behaviors due to high patient-to-nurse ratios; however, there are few studies on Korean hospital nurses' health behaviors.\n\n\nPURPOSE\nTo investigate the dietary and health behaviors of Korean hospital nurses according to their work schedule type.\n\n\nMETHOD\nThis was a cross-sectional descriptive study using survey data from 340 hospital nurses. Nurses' dietary and health behaviors were evaluated across different work schedules and compared to the general Korean female population.\n\n\nRESULTS\nNurses with rotating night shift schedules were more often underweight than nurses without night shifts and had more unhealthy dietary behaviors, such as skipping breakfast and eating late night snacks. Nonetheless, Korean nurses practiced healthy behaviors, such as engaging actively in physical activity.\n\n\nCONCLUSIONS\nHospitals should create policies to provide healthy schedules for nurses to mitigate the negative effects of rotating and night shifts. However, these management-led measures will be effective only if individual nurses realize and take responsibility for their health behaviors and choices." }, { "pmid": "15921512", "title": "Accumulation of health risk behaviours is associated with lower socioeconomic status and women's urban residence: a multilevel analysis in Japan.", "abstract": "BACKGROUND\nLittle is known about the socioeconomic differences in health-related behaviours in Japan. The present study was performed to elucidate the effects of individual and regional socioeconomic factors on selected health risk behaviours among Japanese adults, with a particular focus on regional variations.\n\n\nMETHODS\nIn a nationally representative sample aged 25 to 59 years old (20,030 men and 21,076 women), the relationships between six risk behaviours (i.e., current smoking, excessive alcohol consumption, poor dietary habits, physical inactivity, stress and non-attendance of health check-ups), individual characteristics (i.e., age, marital status, occupation and household income) and regional (N = 60) indicators (per capita income and unemployment rate) were examined by multilevel analysis.\n\n\nRESULTS\nDivorce, employment in women, lower occupational class and lower household income were generally associated with a higher likelihood of risk behaviour. The degrees of regional variation in risk behaviour and the influence of regional indicators were greater in women than in men: higher per capita income was significantly associated with current smoking, excessive alcohol consumption, stress and non-attendance of health check-ups in women.\n\n\nCONCLUSION\nIndividual lower socioeconomic status was a substantial predictor of risk behaviour in both sexes, while a marked regional influence was observed only in women. The accumulation of risk behaviours in individuals with lower socioeconomic status and in women in areas with higher income, reflecting an urban context, may contribute to their higher mortality rates." }, { "pmid": "21669600", "title": "2005-2008 Nutrition and Health Survey in Taiwan: the nutrition knowledge, attitude and behavior of 19-64 year old adults.", "abstract": "The purpose of this study is to understand nutrition knowledge, attitude, and behavior in Taiwanese adults. Results indicated that adults' knowledge on 'relationship between diet and disease' and 'comparison of foods in terms of specific nutrients' is acceptable. 
However, they lack knowledge on 'daily serving requirements' and 'weight and weight loss'. Although they recognize the importance of nutrition, nutrition was not the major concern of food selection. Significant differences were found among gender and age groups. Females of most age groups are better than males in many aspects of nutrition knowledge, attitude and behavior except emotional and external eating behavior. Young (age 19-30) and prime (age 31-44) adults have better knowledge than that of middle adults (age 45-64), while prime adults hold a more positive attitude than young adults. As for nutrition behavior, prime and middle adults are better than young adults. Nutrition knowledge and attitude of adults in urban areas is generally better than those in suburban and remote areas. However, adults in urban areas perform 'emotional and external cued eating' more frequently than those in suburban and remote areas. There are significantly positive correlations among nutrition knowledge, attitude and behavior; and attitude has stronger correlation (r=0.42) with behavior than knowledge does (r=0.27). Therefore, to achieve desirable eating behaviors, the adult nutrition education program should include knowledge of what constitutes a balanced diet and what constitutes being overweight. Proper strategies to enhance the behavioral motivation of healthy food selection must also not be neglected." }, { "pmid": "30272860", "title": "Dietary patterns among Japanese adults: findings from the National Health and Nutrition Survey, 2012.", "abstract": "BACKGROUND AND OBJECTIVES\nRecent studies have analyzed dietary patterns to assess overall dietary habits, but there have been no studies of dietary patterns among the contemporary Japanese population nationwide. The objective of this study was to identify dietary patterns based on consumption of food items among Japanese adults, and to examine whether these dietary patterns were associated with nutrient intake, demographic characteristics, and lifestyle factors.\n\n\nMETHODS AND STUDY DESIGN\nThe study population included 25,754 Japanese adults aged 20 years and older registered in the nationwide National Health and Nutrition Survey database in 2012. Dietary patterns were analyzed by factor analysis of 29 food items from the dietary intake survey and household-based semiweighed dietary records.\n\n\nRESULTS\nFive dietary patterns were identified: high-bread and low-rice, high-meat and low-fish, vegetable, wheat-based food, and noodle and alcohol patterns. The lowest quartile of factor scores for high-meat and low-fish, wheat-based food, and noodle and alcohol patterns had higher nutrient intakes, and the highest quartile of factor scores for the vegetable pattern had a higher nutrient intake overall (all p<0.01). Dietary pattern scores were associated with demographic and lifestyle factors such as sex, age, region, smoking status, and alcohol intake.\n\n\nCONCLUSIONS\nFive major dietary patterns among Japanese adults were identified by factor analysis. Dietary pattern scores were associated with differences in nutrient intakes and demographic and lifestyle factors. These patterns were further used for examining the association between Japanese diets and health outcomes." 
}, { "pmid": "29093304", "title": "Baseline Profile of Participants in the Japan Environment and Children's Study (JECS).", "abstract": "BACKGROUND\nThe Japan Environment and Children's Study (JECS), known as Ecochil-Chosa in Japan, is a nationwide birth cohort study investigating the environmental factors that might affect children's health and development. We report the baseline profiles of the participating mothers, fathers, and their children.\n\n\nMETHODS\nFifteen Regional Centres located throughout Japan were responsible for recruiting women in early pregnancy living in their respective recruitment areas. Self-administered questionnaires and medical records were used to obtain such information as demographic factors, lifestyle, socioeconomic status, environmental exposure, medical history, and delivery information. In the period up to delivery, we collected bio-specimens, including blood, urine, hair, and umbilical cord blood. Fathers were also recruited, when accessible, and asked to fill in a questionnaire and to provide blood samples.\n\n\nRESULTS\nThe total number of pregnancies resulting in delivery was 100,778, of which 51,402 (51.0%) involved program participation by male partners. Discounting pregnancies by the same woman, the study included 95,248 unique mothers and 49,189 unique fathers. The 100,778 pregnancies involved a total of 101,779 fetuses and resulted in 100,148 live births. The coverage of children in 2013 (the number of live births registered in JECS divided by the number of all live births within the study areas) was approximately 45%. Nevertheless, the data on the characteristics of the mothers and children we studied showed marked similarity to those obtained from Japan's 2013 Vital Statistics Survey.\n\n\nCONCLUSIONS\nBetween 2011 and 2014, we established one of the largest birth cohorts in the world." }, { "pmid": "24410977", "title": "Rationale and study design of the Japan environment and children's study (JECS).", "abstract": "BACKGROUND\nThere is global concern over significant threats from a wide variety of environmental hazards to which children face. Large-scale and long-term birth cohort studies are needed for better environmental management based on sound science. The primary objective of the Japan Environment and Children's Study (JECS), a nation-wide birth cohort study that started its recruitment in January 2011, is to elucidate environmental factors that affect children's health and development.\n\n\nMETHODS/DESIGN\nApproximately 100,000 expecting mothers who live in designated study areas will be recruited over a 3-year period from January 2011. Participating children will be followed until they reach 13 years of age. Exposure to environmental factors will be assessed by chemical analyses of bio-specimens (blood, cord blood, urine, breast milk, and hair), household environment measurements, and computational simulations using monitoring data (e.g. ambient air quality monitoring) as well as questionnaires. JECS' priority outcomes include reproduction/pregnancy complications, congenital anomalies, neuropsychiatric disorders, immune system disorders, and metabolic/endocrine system disorders. Genetic factors, socioeconomic status, and lifestyle factors will also be examined as covariates and potential confounders. To maximize representativeness, we adopted provider-mediated community-based recruitment.\n\n\nDISCUSSION\nThrough JECS, chemical substances to which children are exposed during the fetal stage or early childhood will be identified. 
The JECS results will be translated to better risk assessment and management to provide healthy environment for next generations." }, { "pmid": "25912098", "title": "The Japan Environment and Children's Study (JECS): A Preliminary Report on Selected Characteristics of Approximately 10 000 Pregnant Women Recruited During the First Year of the Study.", "abstract": "BACKGROUND\nThe Japan Environment and Children's Study (JECS) is an ongoing nationwide birth cohort study launched in January 2011. In this progress report, we present data collected in the first year to summarize selected maternal and infant characteristics.\n\n\nMETHODS\nIn the 15 Regional Centers located throughout Japan, the expectant mothers were recruited in early pregnancy at obstetric facilities and/or at local government offices issuing pregnancy journals. Self-administered questionnaires were distributed to the women during their first trimester and then again during the second or third trimester to obtain information on demographic factors, physical and mental health, lifestyle, occupation, environmental exposure, dwelling conditions, and socioeconomic status. Information was obtained from medical records in the first trimester and after delivery on medical history, including gravidity and related complications, parity, maternal anthropometry, and infant physical examinations.\n\n\nRESULTS\nWe collected data on a total of 9819 expectant mothers (mean age = 31.0 years) who gave birth during 2011. There were 9635 live births. The selected infant characteristics (singleton births, gestational age at birth, sex, birth weight) in the JECS population were similar to those in national survey data on the Japanese general population.\n\n\nCONCLUSIONS\nOur final birth data will eventually be used to evaluate the national representativeness of the JECS population. We hope the JECS will provide valuable information on the impact of the environment in which our children live on their health and development." }, { "pmid": "27064130", "title": "Validity of Short and Long Self-Administered Food Frequency Questionnaires in Ranking Dietary Intake in Middle-Aged and Elderly Japanese in the Japan Public Health Center-Based Prospective Study for the Next Generation (JPHC-NEXT) Protocol Area.", "abstract": "BACKGROUND\nLongitudinal epidemiological studies require both the periodic update of intake information via repeated dietary survey and the minimization of subject burden in responding to questionnaires. We developed a 66-item Food Frequency Questionnaire (short-FFQ) for the Japan Public Health Center-based prospective Study for the Next Generation (JPHC-NEXT) follow-up survey using major foods from the FFQ developed for the original JPHC Study. For the JPHC-NEXT baseline survey, we used a larger 172-item FFQ (long-FFQ), which was also derived from the JPHC-FFQ. We compared the validity of ranking individuals by levels of dietary consumption by these FFQs among residents of selected JPHC-NEXT study areas.\n\n\nMETHODS\nFrom 2012 to 2013, 240 men and women aged 40-74 years from five areas in the JPHC-NEXT protocol were asked to respond to the long-FFQ and provide 12-day weighed food records (WFR) as reference; 228 also completed the short-FFQ. Spearman's correlation coefficients (CCs) between estimates from the FFQs and WFR were calculated and corrected for intra-individual variation of the WFR.\n\n\nRESULTS\nMedian CC values for energy and 53 nutrients for the short-FFQ for men and women were 0.46 and 0.44, respectively. 
Respective values for the long-FFQ were 0.50 and 0.43. Compared with the long-FFQ, cross-classification into exact plus adjacent quintiles with the short-FFQ ranged from 68% to 91% in men and 58% to 85% in women.\n\n\nCONCLUSIONS\nSimilar to the long-FFQ, the short-FFQ provided reasonably valid measures for ranking middle-aged and elderly Japanese for many nutrients and food groups. The short-FFQ can be used in follow-up surveys in prospective cohort studies aimed at updating diet rank information." }, { "pmid": "12828387", "title": "Long workhours and health.", "abstract": "This paper summarizes the associations between long workhours and health, with special attention for the physiological recovery and behavioral life-style mechanisms that may explain the relationship. The evidence for these mechanisms has not been systematically reviewed earlier. A total of 27 recent empirical studies met the selection criteria. They showed that long workhours are associated with adverse health as measured by several indicators (cardiovascular disease, diabetes, disability retirement, subjectively reported physical health, subjective fatigue). Furthermore, some evidence exists for an association between long workhours and physiological changes (cardiovascular and immunologic parameters) and changes in health-related behavior (reduced sleep hours). Support for the physiological recovery mechanism seems stronger than support for the behavioral life-style mechanism. However, the evidence is inconclusive because many studies did not control for potential confounders. Due to the gaps in the current evidence and the methodological shortcomings of the studies in the review, further research is needed." }, { "pmid": "24988105", "title": "The association between worksite physical environment and employee nutrition, and physical activity behavior and weight status.", "abstract": "OBJECTIVE\nTo explore the relationship between worksite physical environment and employee dietary intake, physical activity behavior, and weight status.\n\n\nMETHODS\nTwo trained research assistants completed audits (Checklist of Health Promotion Environments at Worksites) at each worksite (n = 28). Employees (n = 6261) completed a brief health survey before participation in a weight loss program.\n\n\nRESULTS\nEmployees' access to outdoor areas was directly associated with lower body mass index (BMI), whereas access to workout facilities within a worksite was associated with higher BMI. The presence of a cafeteria and fewer vending machines was directly associated with better eating habits. Better eating habits and meeting physical activity recommendations were both related to lower BMI.\n\n\nCONCLUSIONS\nSelected environmental factors in worksites were significantly associated with employee behaviors and weight status, providing additional intervention targets to change the worksite environment and promote employee weight loss." }, { "pmid": "14972072", "title": "Having lunch at a staff canteen is associated with recommended food habits.", "abstract": "OBJECTIVE\nTo describe the characteristics of employees having lunch at staff canteens and to examine the association between workplace lunch and recommended food habits.\n\n\nDESIGN\nA mailed questionnaire including data on lunch pattern, food habits, sociodemographic background, work-related factors and body weight. 
Logistic regression models including food habits as dependent variables and lunch pattern, sociodemographic factors, work-related factors and body mass index as independent variables.\n\n\nSETTING\nHelsinki Health Study survey data, collected in spring 2001.\n\n\nSUBJECTS\nEmployees from the City of Helsinki reaching 40, 45, 50, 55 and 60 years. The data included 2474 women and 591 men; the response rate was 68%.\n\n\nRESULTS\nAbout half of those with a staff canteen at work had lunch there. Those with higher educational level were more likely to have lunch at the staff canteen, as also were women with pre-school children and normal-weight men. Those having lunch at staff canteens were more likely to follow recommended food habits, compared with other subjects. Having lunch at the staff canteen seemed to increase the consumption frequency of vegetables and fish.\n\n\nCONCLUSIONS\nHaving lunch at staff canteens is associated with the quality of the diet. To serve a cooked meal including vegetables during working time may be an efficient way to improve diet among adult employees. More emphasis should be put on increasing the possibility for employees to have lunch at staff canteens." }, { "pmid": "23850518", "title": "The effectiveness of workplace dietary modification interventions: a systematic review.", "abstract": "OBJECTIVE\nTo evaluate the effectiveness of workplace dietary modification interventions alone or in combination with nutrition education on employees' dietary behaviour, health status, self-efficacy, perceived health, determinants of food choice, nutrition knowledge, co-worker support, job satisfaction, economic cost and food-purchasing patterns.\n\n\nMETHOD\nData sources included PubMed, Medline, Embase, Psych Info., Web of Knowledge and Cochrane Library (November 2011). This review was guided by the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement. Studies were randomised controlled trials and controlled studies. Interventions were implemented for at least three months. Cochrane Collaboration's risk of bias tool measured potential biases. Heterogeneity precluded meta-analysis. Results were presented in a narrative summary.\n\n\nRESULTS\nSix studies conducted in Brazil, the USA, Netherlands and Belgium met the inclusion criteria. Four studies reported small increases in fruit and vegetable consumption (≤half serving/day). These studies involved workplace dietary modifications and three incorporated nutrition education. Other outcomes reported included health status, co-worker support, job satisfaction, perceived health, self-efficacy and food-purchasing patterns. All studies had methodological limitations that weakened confidence in the results.\n\n\nCONCLUSION\nLimited evidence suggests that workplace dietary modification interventions alone and in combination with nutrition education increase fruit and vegetable intakes. These interventions should be developed with recommended guidelines, workplace characteristics, long-term follow-up and objective outcomes for diet, health and cost." } ]
Research and Practice in Technology Enhanced Learning
30595717
PMC6294195
10.1186/s41039-017-0057-5
Model-based analysis of thinking in problem posing as sentence integration focused on violation of the constraints
The advancement of computer and communication technologies has enabled researchers to conduct and analyze the learning process of posing problems. This study investigates what learners think while posing problems as sentence integration in terms of intermediate products as well as the posed problems as the resultant product. Problem posing as sentence integration defines the arithmetic word problem structure, and posing a problem is a task to satisfy all the constraints and requirements to build a valid structure. A previous study shows that, in problem posing as sentence integration for arithmetic word problems, learners try to satisfy a relatively large number of constraints in the posed problems. In contrast, this study focuses on the violation of constraints in the intermediate products while posing problems. The result shows that learners were inclined to avoid as many violated constraints as possible throughout the problem-posing process. Although learners tend to avoid the violated constraints, naturally, they cannot avoid some mistakes. Further analysis shows that learners actually have difficulty in fulfilling particular constraints while posing the problems. Based on this analysis, it is possible to detect the difficulty of learners’ actions from the model perspective. Hence, it is possible to give accurate feedback and appropriately support the learners.
Related work
In recent years, interest in integrating problem posing in mathematical instruction has continuously grown among mathematics education researchers and practitioners (Norman 2011; Ellerton 2013; Singer et al. 2015; Cai and Jiang 2016). Investigations of problems posed by learners and teachers in classrooms have provided insight into the relationships between mathematical knowledge, skills, and processes (Chen et al. 2011; Stickles 2011; Kılıç 2013; Van Harpen and Presmeg 2013). Given the importance of problem-posing activities in school mathematics, some researchers have investigated various aspects of problem-posing processes. One important direction is to examine thinking processes related to problem posing (e.g., Bonotto 2013; Şengül and Katranci 2015). Other studies underline the need to incorporate problem-posing activities into mathematics classrooms to determine prospective teachers' problem-posing skills appropriate to selecting, translating, comprehending, and editing models and the possible difficulties they could encounter during the process in fraction problems (Işık et al. 2011), to explore students' creativity in mathematics by analyzing their problem-posing abilities in geometric scenarios (Van Harpen and Sriraman 2013) and to examine the knowledge influences of learners' abilities in posing combinatorial problems (Melušová and Šunderlík 2014). Furthermore, some studies provide evidence that problem posing has a positive influence on students' abilities in problem solving (e.g., Kar et al. 2010; Şengül and Katranci 2012). Kar et al. (2010) asserted that the positive relation between posing and solving problems is an indicator of the acceptance of problem-posing skills as a phase in the development of problem-solving skills. In the analysis of the posed problems, the participants map the level of their own notions and concepts, understanding, and various interpretations and realize possible misconceptions and erroneous reasoning (Tichá and Hošpesová 2009). Learning to pose problems might also enhance learning to understand mathematical concepts (Pirie 2002). Pirie (2002) said that in asking questions on mathematical concepts, students might come to understand those concepts in a more generalized, less context-dependent way. In addition, Toluk-Uçar (2009) emphasized that problem posing has a positive effect on understanding fractions as well as on learners' views about what it means to know mathematics.
On the other hand, investigations of problem posing from the viewpoint of interactive learning systems promote active engagement in learning through the activities of learners. Chang et al. (2012) developed game-based problem-solving modules in a mathematics problem-posing system and investigated the effects of the problem-posing system on students' abilities to pose and solve problems. Yamamoto et al. (2012) and Abramovich and Cho (2015) demonstrated how the appropriate use of digital technology tools can motivate problem-posing activities and evaluate the learner's performance by assessing the number of posed problems. Hung et al. (2014a) investigated the effects of an integrated mind mapping and problem-posing approach on learners' in-field mobile learning performance in an elementary school natural science course. Moreover, Majumdar and Iyer (2015) presented how an online visual analytic tool can be used to analyze clicker responses during an active learning strategy where the instructor poses a multiple-choice question. 
In this study, an interactive learning system is used to encourage learners in posing arithmetic word problems. The system asks learners to arrange and integrate five or six presented sentence cards into a problem, which consists of three sentence cards. We analyze the learners' tendencies while posing the problems in the system.
Several studies have examined learners' behaviors through a collaborative problem-posing strategy. Beal and Cohen (2012) demonstrated that the mathematics problem-posing skill was improved when the activity was carried out over an online collaborative learning system. Mishra and Iyer (2015) implemented a collaborative problem-posing activity in which two learners collaborated as a team to generate questions. Sung et al. (2016) conducted a group collaborative problem-posing mobile learning activity. They found that such an approach could improve learning achievement and group learning self-efficacy. In this study, we analyze log data of learners' individual activity collected from tablet personal computer-based software for learning by posing arithmetic word problems.
Several problem-posing techniques for interactive learning systems have been proposed. One approach uses the question-posing technique. The systems allow students to generate different types of questions using different media formats with peer-assessment using one type of communication mode (Wilson 2004) and multiple peer-assessment modes (Yu 2011). The studies evaluated students' abilities to pose questions and their processes in an online learning system. Lan and Lin (2011) developed a system integrating a reward mechanism into assessment activities and analyzed students' abilities to pose questions in a web-based learning system. Moreover, Hung et al. (2014b) investigated the effect of promoting questioning ability in problem-based scientific inquiry activities. That research developed a ubiquitous problem-based learning system and examined learners' question-raising performance. This study used agent assessment, which can assess the validity of posed problems and automatically give feedback to the learners according to their mistakes. We investigate the learners' difficulty based on their actions, which are logged in the system.
The second approach is the learning-from-examples technique. Such support systems are developed to facilitate the posing of diverse problems by learners using examples. Leikin (2015) described posing various types of problems associated with geometry investigations using examples from a course with prospective mathematics teachers, while Hsiao et al. (2013) used examples across three homework exercises in which students were required to generate at least one applied problem. The studies showed that integrating worked examples into problem posing has a significant skill development effect on posing more oriented and complex problems. Moreover, Kojima et al. (2015) presented examples that are merely shown to the learners and prompted them to compare the base examples with their posed problems. They investigated the effects of learning from an example on solution composition for posing problems. The system used in this study provides sentence cards and requests learners to create a problem according to the requirements in the task. The learners' activities while arranging the sentence cards are recorded by the system. Then, we examine their thinking processes in posing the problems, focusing on violation of the constraints.
Another approach is learning by problem posing as sentence integration. 
Problem posing as sentence integration requires learners to interpret the sentence cards and integrate them into one problem. In an assignment, the system presents a requirement, which consists of a story type and a numerical expression. The system asks learners to arrange the provided sentence cards based on the requirement. One of the few research studies that has been found in this direction is about analyzing the results of the posed problems. Hirashima et al. (2007) examined whether learners could pose the problems, showing and discussing the number of posed problems and correct problems based on the system log data. Kurayama and Hirashima (2010) analyzed the learning effects by comparing pre- and post-test problem-solving and problem-posing scores. Further analyses have been conducted on this topic by investigating the learners’ thinking processes based on the first selected sentence in assignments (Hasanah et al. 2015a) concerning the completed posed problems (Hasanah et al. 2015b). There is a dearth of research that investigates every action of learners in posing the problems to understand the learning process of problem posing on an interactive learning system. Moreover, no significant research has been found that examines the intermediate products while posing the problems. In this study, we examine every learner’s movements while posing an arithmetic word problem.There has been considerable thorough and fine-grained investigation of the activities of learners in interactive learning systems to reveal their behavior throughout the learning process. Fournier-Viger et al. (2010) developed a virtual learning system for learning how to operate the Canadarm2 robotic arm on the international space station. The study extracted patterns from learners’ solutions to problem-solving exercises for automatically learning a task model that can then be used to aid and guide them during problem-solving activities. Hou (2012) utilized an online discussion activity adopting a role-playing strategy and conducted an empirical analysis to explore and evaluate both the content structure and behavioral patterns in the discussion process. The study adopted a new method of multi-dimensional process analysis that integrates both content and sequential analyses, whereby the dimension of interaction and cognition are analyzed simultaneously. Hsieh et al. (2015) identified higher and lower engagement patterns to represent students’ learning processes in a game-based learning system. The study investigated a possible connection between students’ verbal (asking themselves, expressing frustration, etc.) and nonverbal (smiling, focusing, moving closer to the screen, moving away from the screen, etc.) behaviors. However, the central issue in such research is basically limited to solving problems and does not include posing problems.This study aims to investigate the problem-posing process and reveals the trends of the process. Problem-posing activities could provide us with valuable insight into a learner’s understanding of mathematical concepts and processes. Studies in this area suggest that problem posing has a positive influence on a learner’s ability to solve problems. There is significant improvement in the problem-solving performance of learners. In addition, problem posing could guide learners to achieve understanding of mathematical concepts. Technology-enhanced learning has been developed to realize and actively promote learning by problem posing. 
Several methods for problem-posing activities on interactive learning systems have been proposed, such as posing questions, learning from examples, and learning by problem posing as sentence integration. Additionally, a considerable number of studies have analyzed the results of posed problems and the associated learning effects. Moreover, studies that examine the process of a learner's activities in an interactive learning system to reveal behavior have been conducted, and deep examination of learner behaviors, through the adoption of process analysis, may make beneficial contributions to the educational technology field. This study investigates the problem-posing processes of Japanese elementary students in actual classes by analyzing the log files of the learners' problem-posing activities on a computer-based learning system with sentence integration, which is called Monsakun.
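To make the notion of constraint violation in problem posing as sentence integration concrete, the following deliberately toy Python sketch checks which constraints an intermediate arrangement of sentence cards violates. The card attributes, the requirement encoding, and the three constraints are hypothetical simplifications introduced for illustration; they are not Monsakun's actual data model or rule set.

# Toy illustration only: hypothetical cards and constraints, not Monsakun's
# implementation. It reports the constraints violated by an intermediate
# product, in the spirit of problem posing as sentence integration.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Card:
    value: Optional[int]   # None marks the unknown ("How many ...?") card
    role: str              # "part" or "total" in a combination story

def violated_constraints(cards: List[Card], required_parts=(3, 4)) -> List[str]:
    violations = []
    if len(cards) != 3:
        violations.append("a problem must consist of exactly three sentence cards")
    unknowns = [c for c in cards if c.value is None]
    if len(unknowns) != 1:
        violations.append("exactly one card must state the unknown quantity")
    elif unknowns[0].role != "total":
        violations.append("the unknown must be the total for this requirement")
    known = sorted(c.value for c in cards if c.value is not None)
    if known != sorted(required_parts):
        violations.append("the given quantities must match the required expression")
    return violations

# An intermediate product with only two cards selected so far:
print(violated_constraints([Card(3, "part"), Card(None, "total")]))

Run on the two-card intermediate product above, the checker reports two violations (missing third card and missing second given quantity), which is the kind of intermediate-state information the analysis in this article focuses on.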
[]
[]
Research and Practice in Technology Enhanced Learning
30595720
PMC6294202
10.1186/s41039-017-0054-8
The effectiveness of using in-game cards as reward
The research team has developed a web-based multiplayer trading card game that allows teachers to choose cards as rewards for students who actively participate in discussions and classroom activities as well as perform well in doing assignments and writing exams or quizzes. In order to verify the effectiveness of using in-game cards as rewards, the research team integrated the trading card game into a web-based English vocabulary learning system. Students can receive cards as rewards every time they use the learning system. A 6-week experiment was conducted at an elementary school with 172 fifth-grade students. The results showed that boys have a higher intention of getting the in-game cards as rewards. The research also showed that the use of the in-game cards as educational rewards not only motivates students to use the vocabulary learning system but also improves their learning outcomes. The research results supported the recommended process for teachers to adopt the trading card game in their courses.
Related works
Game-based learning
The digital game industry has become the mainstream of the market (Siwek 2014; Rideout et al. 2010) because fantasy, curiosity, challenge, and control are the features that attract players (Malone and Lepper 1987). Some researchers point out that playing can hold students' attention and make learning more interesting (Virvou, Katsionis, & Manos, 2005; Boyle 1997). In fact, games can produce engagement and delight in learning (Boyle 1997). For these reasons, many studies use commercial games directly or design new educational games, and there is evidence that students can achieve significant improvement in learning.
Games can be classified into different genres, such as simulation, strategy, and role-playing (Apperley 2006). Simulation games simulate the dynamics of a real-world system and help learners acquire practical skills by understanding how decisions lead to results in the simulated environment (Dankbaar et al. 2016). Take the Shakshouka Restaurant game in the My Money website as an example; the game helps students develop skills in finance and math (Barzilai and Blau 2014).
Strategy games are the most popular genre in educational games (Hainey et al. 2016) because the players use a god's-eye view to see the virtual world (Apperley 2006). Take Maguth and colleagues' study for example; they used Age of Empires II: The Age of Kings to teach seventh-grade students in a social studies class (Maguth et al. 2015). In the game, students can understand what happened when two cultures encountered each other in the Middle Ages. The narrative in role-playing games can help students develop disciplinary knowledge while experiencing the story (Cheville 2016). Chang and colleagues integrated the challenge, control, and fantasy game features into a mobile game for museum learning (Chang et al. 2014). Students can role-play as a cinema property handler or an artist in the game and find museum artifacts whose properties, such as dynasty or material, fit the requirements that their roles' clients in the game ask for.
Trading card games have also been used as tools to enhance learning. For teaching the concept of host defense, Steinman and Blastos (2002) designed a board game to teach the principles of host defense in a biomedical lesson. Students have to understand the concepts of immunology and infectious disease in order to use the correct pathogen cards to attack opponents' organs or defend their own organs. Another educational application of TCG is Weatherlings, an online collectible card battle game developed by MIT's Scheller Teacher Education Program to teach weather and climate (Klopfer et al. 2012; Sheldon et al. 2010). In Weatherlings, players have to predict the weather in the match according to the previous climate data of the arena; they have to choose proper cards which fit the climate to deal more damage to their opponents.
Rewards in game-based learning
In digital games, reward systems can keep players interested in playing the games (Hallford and Hallford 2001). Wang and Sun (2011) believe that digital in-game rewards can be effective motivating tools for educational purposes. In traditional learning or e-learning, after students finish a learning activity or write an exam, they usually receive feedback from the teacher or the system. The feedback may be a summary of the learning activity they just finished, for example, a total score presenting their performance, a brief comment suggesting how to do better, and/or some information on what to learn further. A reward is also one form of feedback.
In Chang and colleagues' research, jewels were provided as rewards when a learner accomplished a quest in the mobile game in the museum (Chang et al. 2008). Wu and Elliott (2008) identified three types of rewards, shown in Table 1. They found that different students have different preferences toward the rewards. Gifted students preferred competition rewards, whereas non-gifted students preferred chance rewards (Wu and Elliott 2008). Rewards can be given not only by teachers but also by students. Pedro et al. (2015) designed a badging system in which all participants in the educational web platform, including teachers and students, can design badges as rewards for other participants. For example, a student can give badges to other students in the same group when he or she believes that the other students have good involvement in the activity.
Table 1 Reward types (Type: Description)
Competition reward: The rewards are limited and must be competed for.
Performance reward: Rewards are given according to performance improvement.
Chance reward: Rewards can be obtained by chance, without requiring any effort.
[ "26433730", "12472757", "23393616" ]
[ { "pmid": "26433730", "title": "An experimental study on the effects of a simulation game on students' clinical cognitive skills and motivation.", "abstract": "Simulation games are becoming increasingly popular in education, but more insight in their critical design features is needed. This study investigated the effects of fidelity of open patient cases in adjunct to an instructional e-module on students' cognitive skills and motivation. We set up a three-group randomized post-test-only design: a control group working on an e-module; a cases group, combining the e-module with low-fidelity text-based patient cases, and a game group, combining the e-module with a high-fidelity simulation game with the same cases. Participants completed questionnaires on cognitive load and motivation. After a 4-week study period, blinded assessors rated students' cognitive emergency care skills in two mannequin-based scenarios. In total 61 students participated and were assessed; 16 control group students, 20 cases students and 25 game students. Learning time was 2 h longer for the cases and game groups than for the control group. Acquired cognitive skills did not differ between groups. The game group experienced higher intrinsic and germane cognitive load than the cases group (p = 0.03 and 0.01) and felt more engaged (p < 0.001). Students did not profit from working on open cases (in adjunct to an e-module), which nonetheless challenged them to study longer. The e-module appeared to be very effective, while the high-fidelity game, although engaging, probably distracted students and impeded learning. Medical educators designing motivating and effective skills training for novices should align case complexity and fidelity with students' proficiency level. The relation between case-fidelity, motivation and skills development is an important field for further study." }, { "pmid": "12472757", "title": "A trading-card game teaching about host defence.", "abstract": "OBJECTIVES\nTo heighten the understanding of host-disease interactions by adolescents and young adults, using a trading card game format.\n\n\nDESIGN\nA trading card game was developed in which paired students attack one another with pathogens or parry those attacks with appropriate defences. Twenty-five infectious pathogens or cancers, 30 defence agents and 6 health status modifying conditions were included.\n\n\nSETTING\nA middle school, upper school and medical school in the United States.\n\n\nSUBJECTS\n8th grade, 10th grade and first year medical students.\n\n\nRESULTS\nThe game was tested using pre-test/post-test evaluations in 8th graders, 10th graders and medical students. Factual information, pathogen-organ specificity, and general concepts were tested. There was a significant increase in test scores, from 39% to 58% correct in the 8th graders (P < 0.0001), from 47% to 59% among 10th graders (P = 0.0007), and from 80% to 88% (P = 0.049) among the medical students. Responses to control questions unrelated to the game did not improve.\n\n\nCONCLUSION\nAn interactive trading card format is a useful method for conveying information about host defence." }, { "pmid": "23393616", "title": "How women organize social networks different from men.", "abstract": "Superpositions of social networks, such as communication, friendship, or trade networks, are called multiplex networks, forming the structural backbone of human societies. Novel datasets now allow quantification and exploration of multiplex networks. 
Here we study gender-specific differences of a multiplex network from a complete behavioral dataset of an online-game society of about 300,000 players. On the individual level females perform better economically and are less risk-taking than males. Males reciprocate friendship requests from females faster than vice versa and hesitate to reciprocate hostile actions of females. On the network level females have more communication partners, who are less connected than partners of males. We find a strong homophily effect for females and higher clustering coefficients of females in trade and attack networks. Cooperative links between males are under-represented, reflecting competition for resources among males. These results confirm quantitatively that females and males manage their social networks in substantially different ways." } ]
Research and Practice in Technology Enhanced Learning
30595722
PMC6294211
10.1186/s41039-017-0058-4
Classroom practice for understanding pointers using learning support system for visualizing memory image and target domain world
Pointers are difficult learning targets for novice learners of C programming. For such difficult targets, introducing a system that visualizes program behaviors is generally expected to help learners understand the targets. However, visualization in existing systems often conceals the concrete values of variables such as pointers; the way in which each visualized object is located in memory is not made explicit. In order to address this issue, we focused on a program visualization system called TEDViT. It visualizes simultaneously and synchronously the memory image, that is, the field that presents the concrete values of variables, and the target domain world, that is, the field that logically presents the data structures processed by the program. We consider that observing and comparing program code, memory image, and target domain world with TEDViT could work for understanding pointers. TEDViT visualizes the status of the target domain world according to a visualization policy defined by the teacher, in order to allow teachers to set their instruction content based on the growing variety of learner background knowledge. We also consider that this feature could appropriately support teachers' instruction and class management, and that improving teachers' performance through TEDViT's support would in turn improve learners' understanding. We conducted classroom practice for understanding pointers in connection with a memory model, thus introducing TEDViT to a real class. Analysis of the answer scores in a questionnaire conducted after the practice suggests that our practice using TEDViT provided useful support for participants to understand pointers. It also suggests our practice had a certain effect in reducing uneven levels of understanding among participants. Based on these results, we describe how classroom practice in our framework could support learners in understanding pointers and support teachers in managing the class.
Related works
The concept of supporting learners' understanding by visualizing data structures and their behavior along with the statement execution of a target program has a relatively long history in programming education research. Thus far, several program visualization systems have been developed. The reason we adopted TEDViT rather than these existing systems is that we are dissatisfied with many of them, as follows:
- The visualization provided by existing systems is too abstract for learning targets closely related to hardware, such as pointers.
- If the abstractions of data structures in existing systems differ from the teacher's explanations in the classroom, learners may become confused by being provided with various visualization objects, each of which has a different abstraction.
- With systems that allow teachers to arbitrarily set the visualization fields observed by learners, it takes considerable time to complete the visualization settings.
Many existing systems such as Jeliot 3 (Moreno, Myller, Sutinen, & Ben-Ari, 2004; Ben-Ari et al., 2011), NoobLab (Neve, Hunter, Livingstone, & Orwell, 2012), and LEPA (Yamashita et al., 2016) reproduce the behaviors of programs by visualizing the logical data structures processed by the target program and their changes made by statement execution. However, these visualizations involve certain abstractions of data structures which are established independently in each system. These abstractions often conceal detailed data which teachers might want students to observe in a learning target closely related to hardware, such as pointers. For example, iList (Fossati, Eugenio, Brown, & Ohlsson, 2008) visualizes logical data structures targeting linked lists and supports learners in understanding algorithm behavior and the role of code statements, allowing learners to operate the visualized structure by inputting code fragments. The concrete values of pointer variables are concealed in iList, and hence, the way in which each visualized object is located in memory is not made explicit. Depending on the learning target, teachers are highly likely to explain program behaviors based on memory models. However, almost all of these existing systems do not have a function that allows teachers to alter the abstraction of visualizations.
Moreover, in algorithm visualization systems such as TRAKRA 2 (Malmi et al., 2004), the scope of abstraction is extended to program code. TRAKRA 2 reproduces algorithm behaviors through learners' GUI manipulations on visualized logical data structures, and hence, it is expected to be effective for understanding algorithms. However, the visualization of TRAKRA 2 often cannot be immediately expressed by the lexical items and syntactic fragments provided by a programming language. Even if learners reach an appropriate level of understanding of an algorithm, they still have to write code that combines various syntactic fragments in complicated ways, such as self-referential data structures. In programming education, the gap between an algorithm and its implementation needs to be bridged in some way. For learning targets such as pointers, the system introduced into the class is required to deal with visualizations of not only the status of the target domain world at a certain level of abstraction but also the less abstracted status of the notional machine and the actual program code controlling it (Sorva, 2013).
Another obstacle to introducing these systems into the class is that the visualizations of program behaviors follow fixed policies established beforehand, and independently, by each system's developer team. In introducing these systems into the class, teachers would have to adapt their explanations to the visualization policy established by each system, because teachers want to avoid confusing students by providing various visualization objects, each of which has a different policy. This would be a burden for teachers designing their classes, and hence one of the factors that leads teachers to give up on introducing program visualization systems. As the range of educational opportunities for introducing students to programming has expanded, we consider that systems introduced into the class need a function that enables teachers to adjust the visualization policies. For example, Gries and Gries (2002) proposed a visualization method of the memory model for teaching Java and object orientation to novice learners. The system introduced into the class should provide visualizations reflecting the teacher's intention as much as possible.
We adopted TEDViT as the system introduced into the class because it satisfies these requirements. TEDViT visualizes program code, memory images, and logical data structures, simultaneously and synchronously, and allows teachers to define the visualization policy for logical data structures. Other systems capable of arbitrary visualization definitions include ANIMAL (Rössling & Freisleben, 2002); however, the cost of visualization definitions in ANIMAL tends to be relatively high. By using ANIMAL, teachers can define arbitrary policies with a script language named AnimalScript and can provide arbitrary visualizations to their students. Although the descriptive capability increases significantly by using the script language, the cost associated with learning the language is a matter that cannot be ignored. Moreover, the sizes of the scripts also tend to be relatively large. For example, the sample script for a bubble sort algorithm bundled with ANIMAL consists of 170 lines of code. Comparing this with the size of the source code for bubble sort, it is hard to say that the script size is sufficiently small.
Visualization policies in TEDViT are defined by a set of drawing rules in CSV format, and any definition can be completed in practical time with some experience in rule definition. The time required to complete rule definitions is approximately the same as creating slides with presentation software. However, the teaching material created through the visualization definitions in TEDViT would be more useful because TEDViT can reproduce the program behavior without rule modifications, even if the target data processed by the program change. Furthermore, development of a graphical interface for visualization definition in TEDViT has proceeded, reducing this cost (Tezuka et al., 2016).
[]
[]
Research and Practice in Technology Enhanced Learning
30595724
PMC6294213
10.1186/s41039-017-0056-6
Design-based research on teacher facilitation practices for serious gaming in formal schooling
Serious gaming has been regarded as one of the important student-centric learning approaches for the coming decade. However, there has been a lack of in-depth discussion of the teacher's role in the course of serious gaming when it is adopted in formal schooling. The study discussed in this paper is a piece of two-cycle design-based research, involving three teachers respectively from top, middle and bottom academic banding schools in Hong Kong and their Grade 11 classes in two consecutive school years (197 students in total). In the context of formal curriculum learning and teaching, we (researchers) collaborated with the teachers (practitioners) to investigate (design, enact, analyse and redesign) what they should do, and how, in order to optimise their students' serious gaming process and advance the pedagogic effectiveness of serious gaming in different classroom settings.
Related work
Serious gaming
Unlike traditional drill-and-practice mini-games for the purpose of sugaring the pills (Prensky, 2001), serious games are developed with state-of-the-art digital technology, designed and implemented with dedicated pedagogy, and embedded with specific educative content (Games & Squire, 2011). For example, in Shaffer et al.'s serious games (Bagley & Shaffer, 2015; Nash & Shaffer, 2013; Shaffer, 2009; Shaffer & Graesser, 2010), distributed authentic professionalism is the underpinning pedagogy. They believe that members of a profession should have a specific way of thinking and working, namely, an epistemic frame. Hence, developing a person to be an "insider" of a profession is a matter of empowering him/her with that particular frame. Urban Science is one of the serious games developed by Shaffer's group. In this game, players are required to role-play a staff member of an urban planning company that handles various land use issues in ecological areas. Via ongoing interactions with different game characters, the epistemic frame of ecologists is infused into the players' minds in a spontaneous fashion.
In fact, against the backdrop of the advocacy of constructivist education in the twenty-first century, the proposition of harnessing serious gaming in learning and teaching has been gaining momentum (Johnson et al., 2015). However, evidence of its widespread adoption in school education still appears to be lacking (Chee, 2016). So far, most serious gaming studies and instances have been aimed at supporting informal learning outside school contexts, or at carrying out short-term learning experiments for testing educational hypotheses (Tobias et al. 2011). The developed serious games are not targeted at supporting formal subject-based curriculum learning and teaching in schools (Chee, 2016; Gee, 2013). Therefore, it is hard for school teachers to adopt serious gaming in practice (Jong, 2015; Jong & Shang, 2015).
As mentioned in the introduction of this paper, another major limitation in the scholarship of serious gaming is the neglect of teacher facilitation in students' game-based learning process. In fact, notwithstanding the emphasis on the active, self-directed and learner-centric role for students in various constructivist learning theories, teachers are always regarded as the key persons who scaffold students to attain the educative goals in the course of learning (e.g., Collins et al. 1989; Howland et al. 2012; Lave & Wenger, 1991; Tsai & Chai, 2012; Vygotsky, 1978). Serious gaming should be no exception (Jong, Lee & Shang, 2013).
Design-based research (DBR)
Researchers may not always be able to provide practitioners with desirable solutions to be applied in real-world contexts (Wang & Hannafin, 2005). Design is research; research is design (Cobb et al. 2003). Design-based research (hereinafter referred to as DBR) aims to improve or enhance innovations through a collaborative effort among researchers and practitioners and via recursive research cycles of development and implementation (Design-based Research Collective, 2003). DBR situates applied work in authentic, naturalistic settings (Wang & Hannafin, 2005), with the aim of "more than understanding the happenings of one particular context, but also requires showing the relevance of the findings derived from the context of intervention(s) to other contexts" (Barab & Squire, 2004, p. 5).
In the domain of education, DBR is particularly useful for generating usable knowledge that sheds light on developing or revamping educational practices (Lagemann, 2002; Mckenney & Reeves, 2012). Researchers use this methodological approach to design interventions for tackling real-world problems taking place in education, which are then empirically implemented in authentic educational contexts (Mckenney & Reeves, 2012).
A number of DBR paradigms have been proposed by various DBR researchers, and the Design-based Research Collective's (2003), which has been widely cited in many important DBR references (e.g., Barab & Squire, 2004; Philips et al., 2012; Mckenney & Reeves, 2012), is perhaps the most well-known one. It characterises the course of DBR by iterative research cycles of design, enactment, analysis and redesign (see Fig. 1). The "output(s)" of the previous research cycle will steer the focal investigation of the next research cycle.
Fig. 1 Two-cycle DBR design
[]
[]
Research and Practice in Technology Enhanced Learning
30595738
PMC6294214
10.1186/s41039-018-0075-y
Student placement and skill ranking predictors for programming classes using class attitude, psychological scales, and code metrics
In some situations, it is necessary to measure personal programming skills. For example, students must often be divided according to skill level and motivation to learn, or companies recruiting employees must rank candidates by evaluating programming skills through programming tests, programming contests, etc. This process is burdensome because teachers and recruiters must prepare, implement, and evaluate a placement examination. This paper tries to predict the placement and ranking results of programming contests via machine learning without such an examination. Explanatory variables used for machine learning are classified into three categories: Psychological Scales, Programming Tasks, and Student-answered Questionnaires. The participants are university students enrolled in a Java programming class. The target variables are the placement result based on an examination by the class teacher and the ranking results of the programming contest. Our best classification model with a decision tree has an F-measure of 0.912, while our best ranking model with SVM-rank has an nDCG of 0.962. In both prediction models, the best explanatory variable is from the Programming Task, followed in order by the Psychological Scale and the Student-answered Questionnaire. Our classification model uses 9 explanatory variables, while our ranking model uses 20 explanatory variables. These include all three types of explanatory variables. The source code complexity, which is a source code metric from the Programming Task, shows the best performance when the prediction uses only one explanatory variable. Contribution (1): this method can automate part of the teacher's workload, which may improve educational quality and increase the number of acceptable students in the course. Contribution (2): this paper demonstrates the potential of using difficult-to-formulate information, such as a Psychological Scale, for evaluation. These are the contributions and implications of this paper.
Related work
Methods to support education are mainly divided into student support and teacher support. Many studies have focused on student support in programming education, such as the visualization of program execution status (Ishizue et al. 2017b; 2018) and a method for learning a language based on another language already learned (Li et al. 2017). This study focuses on teacher support.
How are students' programming skills traditionally assessed?
Traditionally, students' programming skills are assessed by whether they can solve Programming Tasks. McCracken et al. (2001) conducted a multi-national, multi-institutional study of assessments of the programming skills of first-year CS students. They defined the general evaluation (GE) criteria and the degree of closeness (DoC) evaluation criteria. The GE criteria objectively assess how accurately students implement their solutions. The DoC criteria subjectively evaluate the results of abstraction and of the transformation of the generated sub-problems into sub-solutions.
The GE criteria consist of:
- Execution: Does the program execute without errors? (30 points)
- Verification: Does the program correctly produce answers to the benchmark data set? (60 points)
- Validation: Does the program represent what is asked for in the exercise specifications? (10 points)
- Style (Optional): Does the style of the program conform to local standards? (10 points)
The total number of points is considered to represent importance.
The DoC criteria consist of:
- Does the program compile and work?
- Is part or all of the method missing?
- Are there meaningful comments, stub code, etc.?
- Does the source code complete little of the program?
- Does the source code show that the student has no idea about how to approach the problem?
The results of programming contests are also used to assess programming skills. Trotman and Handley (2008) indicated that programming contests with automated assessments have become popular activities for training programming skills. Verdú et al. (2012) also indicated that competition is a very important element, since the combination of a contest with an automated assessment provides the educational community with an effective and efficient learning tool in the context of teaching programming.
Can additional variables be used to predict programming skills?
We investigated explanatory variables that can predict general academic skills, not only programming skills. Prior studies indicate that Psychological Scales may serve as explanatory variables.
We use well-known Psychological Scales as explanatory variables in machine learning. The following scales are thought to affect academic performance. Deci and Ryan (1985, 2002) studied intrinsic motivation in human behavior. They defined intrinsic motivation as the life force or energy for an activity and for the development of an internal structure. The degree of self-efficacy affects the efficiency of such behavior. According to Bandura (1977), self-efficacy expectancies determine the initial decision to perform a behavior, the effort expended, and the persistence in the face of adversity. Sherer et al. (1982) developed a self-efficacy scale.
The task value is a scale focusing on the value aspect of motivation. According to Eccles and Wigfield (1985), the task value is divided into three subscales (interest value, attainment value, and utility value). Moreover, Ida (2001) further divided attainment value and utility value into two each, for a total of five subscales. The attainment value is divided into the private attainment value, which is an internal absolute standard that varies by individual, and the public attainment value, which focuses on superiority or inferiority relative to others. The utility value is divided into the institutional utility value, which applies when learning is necessary to pass an examination for employment or admission, and the practical utility value, which applies when learning is useful in occupational practice. Ida (2001) also proposed a task value evaluation scale.
According to Duckworth and Gross (2014), Duckworth et al. (2007), and Duckworth and Quinn (2009), self-control is needed to achieve goals that require long-term effort. Self-control allows one to focus on a goal (consistency of interest) and persevere through difficulties (perseverance of effort). They called this combination Grit and developed an evaluation scale.
Goal orientation is divided into three subscales: mastery orientation, performance approach, and performance avoidance. Elliot and Church (1997) examined their influences and factors. Ota (2010), Ryckman et al. (1990, 1996), and Smither and Houston (1992) developed multi-dimensional competitiveness scales. Multi-dimensional competitiveness is divided into three subscales: instrumental competitiveness, avoidance of competition, and never-give-up attitude. Specific questions based on these scales are shown in Section 3.2.1.
Some studies have investigated these Psychological Scales and learning. For example, Robbins et al. (2004) examined the relationship between psychosocial and study skill factors (PSFs) and college outcomes. They found that the best predictors of grade point average (GPA) are academic self-efficacy and achievement motivation. Shen et al. (2007) investigated the influence of a mastery goal, performance-approach goal, avoidance-approach goal, individual interest, and situational interest on students' learning of physical education. They reported that a mastery goal is a significant predictor of situational interest.
We have used class attitude as an explanatory variable for machine learning. Class attitude is also thought to affect the understanding of class content. For example, Saito et al. (2017) studied the relationship between attitudes and understanding of programming with an emphasis on the differences between text-based and visual-based programming.
How has machine learning previously been used in relevant areas?
In this paper, we use classification and ranking machine learning. Various fields, including education, have used machine learning.
Some studies directly predict students' grades or scores by machine learning. Okubo et al. (2017) studied a method to predict students' final grades using a recurrent neural network (RNN) and a time series of learning activity logs (e.g., attendance, quizzes, and reports) in multiple courses. Yasuda et al. (2016) proposed an automatic scoring method for a conversational English test using automatic speech recognition and machine learning techniques.
Some studies use machine learning to find students who need assistance. Ahadi et al. (2015) and Castro-Wunsch et al. (2017) proposed methods to automatically identify students in need of assistance. They predict such students from students' source code snapshot data using machine learning approaches such as decision trees. Hong et al. (2015) implemented a function in the learning system called SQL-Tutor, which identifies students who are about to abandon the programming task and provides encouragement by displaying motivational messages.
Additional studies have investigated dropouts. Kotsiantis et al. (2003) proposed a prototype web-based support tool using a Naive Bayes algorithm, which can automatically recognize students with a high probability of dropping out. Márquez-Vera et al. (2016) predicted the high school dropout rates of students at different steps in a course to determine the best indicators for dropping out.
It takes time and effort to appropriately categorize students as class size increases. Sohsah et al. (2016) classified educational materials in low-resource languages with machine learning. Machine learning is used not only for teachers but also for school cost problems. Jamison (2017) used machine learning to address the problem of declining enrollment rates among students accepted at a given college or university for academic, economic, and logistical reasons.
This paper uses three different kinds of explanatory variables. Such a dataset is called multi-view or multi-source data, and machine learning dealing with this kind of data is called multi-view learning. According to the latest survey by Zhao et al. (2017), multi-view learning has made great advances in recent years. Multi-view learning is machine learning that considers learning from multiple views to improve overall performance. Although this paper uses traditional methods, applying multi-view learning may further improve the performance of our models in the future.
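As a purely illustrative sketch, not the authors' actual pipeline or dataset, the following Python fragment shows how a placement classifier of the kind discussed above could be trained on the three categories of explanatory variables and evaluated by F-measure with a decision tree. All feature names, the synthetic data, and the label construction are assumptions introduced for illustration only.

# Illustrative sketch only: features, data, and labels are synthetic
# assumptions, not the study's dataset. It mirrors the general setup of
# predicting class placement from Psychological Scale, Programming Task,
# and Student-answered Questionnaire variables with a decision tree.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 120  # hypothetical number of students

psych = rng.normal(size=(n, 3))          # e.g., self-efficacy, grit, task value
task = rng.normal(size=(n, 2))           # e.g., code complexity, completion time
questionnaire = rng.normal(size=(n, 2))  # e.g., self-reported class attitude

X = np.hstack([psych, task, questionnaire])
# Hypothetical placement labels (0 = basic class, 1 = advanced class),
# generated so that the Programming Task features matter most.
y = (X[:, 3] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("mean F-measure:", scores.mean())

A ranking model in the spirit of SVM-rank would instead be fit on pairwise ordering constraints derived from contest results; the classification sketch above is kept deliberately minimal.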
[ "847061", "26855479", "17547490", "19205937", "14979772", "8869578" ]
[ { "pmid": "26855479", "title": "Self-Control and Grit: Related but Separable Determinants of Success.", "abstract": "Other than talent and opportunity, what makes some people more successful than others? One important determinant of success is self-control - the capacity to regulate attention, emotion, and behavior in the presence of temptation. A second important determinant of success is grit - the tenacious pursuit of a dominant superordinate goal despite setbacks. Self-control and grit are strongly correlated, but not perfectly so. This means that some people with high levels of self-control capably handle temptations but do not consistently pursue a dominant goal. Likewise, some exceptional achievers are prodigiously gritty but succumb to temptations in domains other than their chosen life passion. Understanding how goals are hierarchically organized clarifies how self-control and grit are related but distinct: Self-control entails aligning actions with any valued goal despite momentarily more-alluring alternatives; grit, in contrast, entails having and working assiduously toward a single challenging superordinate goal through thick and thin, on a timescale of years or even decades. Although both self-control and grit entail aligning actions with intentions, they operate in different ways and at different time scales. This hierarchical goal framework suggests novel directions for basic and applied research on success." }, { "pmid": "17547490", "title": "Grit: perseverance and passion for long-term goals.", "abstract": "The importance of intellectual talent to achievement in all professional domains is well established, but less is known about other individual differences that predict success. The authors tested the importance of 1 noncognitive trait: grit. Defined as perseverance and passion for long-term goals, grit accounted for an average of 4% of the variance in success outcomes, including educational attainment among 2 samples of adults (N=1,545 and N=690), grade point average among Ivy League undergraduates (N=138), retention in 2 classes of United States Military Academy, West Point, cadets (N=1,218 and N=1,308), and ranking in the National Spelling Bee (N=175). Grit did not relate positively to IQ but was highly correlated with Big Five Conscientiousness. Grit nonetheless demonstrated incremental predictive validity of success measures over and beyond IQ and conscientiousness. Collectively, these findings suggest that the achievement of difficult goals entails not only talent but also the sustained and focused application of talent over time." }, { "pmid": "19205937", "title": "Development and validation of the short grit scale (grit-s).", "abstract": "In this article, we introduce brief self-report and informant-report versions of the Grit Scale, which measures trait-level perseverance and passion for long-term goals. The Short Grit Scale (Grit-S) retains the 2-factor structure of the original Grit Scale (Duckworth, Peterson, Matthews, & Kelly, 2007) with 4 fewer items and improved psychometric properties. We present evidence for the Grit-S's internal consistency, test-retest stability, consensual validity with informant-report versions, and predictive validity. Among adults, the Grit-S was associated with educational attainment and fewer career changes. Among adolescents, the Grit-S longitudinally predicted GPA and, inversely, hours watching television. Among cadets at the United States Military Academy, West Point, the Grit-S predicted retention. 
Among Scripps National Spelling Bee competitors, the Grit-S predicted final round attained, a relationship mediated by lifetime spelling practice." }, { "pmid": "14979772", "title": "Do psychosocial and study skill factors predict college outcomes? A meta-analysis.", "abstract": "This study examines the relationship between psychosocial and study skill factors (PSFs) and college outcomes by meta-analyzing 109 studies. On the basis of educational persistence and motivational theory models, the PSFs were categorized into 9 broad constructs: achievement motivation, academic goals, institutional commitment, perceived social support, social involvement, academic self-efficacy, general self-concept, academic-related skills, and contextual influences. Two college outcomes were targeted: performance (cumulative grade point average; GPA) and persistence (retention). Meta-analyses indicate moderate relationships between retention and academic goals, academic self-efficacy, and academic-related skills (ps =.340,.359, and.366, respectively). The best predictors for GPA were academic self-efficacy and achievement motivation (ps =.496 and.303, respectively). Supplementary regression analyses confirmed the incremental contributions of the PSF over and above those of socioeconomic status, standardized achievement, and high school GPA in predicting college outcomes." }, { "pmid": "8869578", "title": "Construction of a personal development competitive attitude scale.", "abstract": "Theory development and research in the area of psychologically healthy competition has been impeded by the lack of a psychometrically sound instrument. Four studies were conducted as part of a research program designed to remedy this deficiency by constructing an individual difference measure of general personal development competitive attitude with satisfactory psychometric properties. In Studies 1 and 2, a 15-item scale was derived primarily through item-total correlational analysis; it demonstrated satisfactory internal and test-retest reliabilities. Studies 3 and 4 were concerned with establishing the construct validity of the scale. Both Studies 3 and 4 showed the scale's discriminant validity through its lack of association with hypercompetitiveness. In addition, its construct validity was seen in its negative association with neurosis and its positive links with personal and social self-esteem and optimal psychological health. Also, as expected, personal development competitiveness was positively correlated with needs for affiliation, whereas hypercompetitiveness was unrelated to affiliation needs. Although hypercompetitive individuals were more aggressive, dominant, and exhibitionistic, this was not the case for personal development competitors." } ]
Research and Practice in Technology Enhanced Learning
30595747
PMC6294216
10.1186/s41039-018-0087-7
Measuring Behaviors and Identifying Indicators of Self-Regulation in Computer-Assisted Language Learning Courses
The aim of this research is to measure self-regulated behavior and identify significant behavioral indicators in computer-assisted language learning courses. The behavioral measures were based on log data from 2454 freshman university students from Art and Science departments, collected over 1 year. These measures reflected the degree of self-regulation, including anti-procrastination, irregularity of study interval, and pacing. Clustering analysis was conducted to identify typical patterns of learning pace, and hierarchical regression analysis was performed to examine significant behavioral indicators in the online course. The learning pace clustering analysis revealed that the final course point average in different clusters increased with the number of completed quizzes, and that students who showed procrastination behavior were more likely to achieve lower final course points. Furthermore, the number of completed quizzes and study interval irregularity were strong predictors of course performance in the regression model. This clearly indicates the importance of self-regulation skills, in particular the completion of assigned tasks and regular learning.
Related work

SRL in computer-assisted environments
SRL is an active and constructive process through which learners set goals and monitor and control their cognition, motivation, and behavior (Pintrich 2000). It is also characterized as a self-directive process, as self-beliefs enable learners to transform their academic abilities (Zimmerman 2008). Winne and Hadwin (1998) proposed that SRL comprises four phases: defining the task, setting goals and plans, enacting tactics, and adapting metacognition. Learners therefore need to analyze the learning context and define tasks, set appropriate learning goals and make plans, select effective learning strategies, monitor the whole learning process, and evaluate their learning performance.

Previous studies indicated that SRL is a crucial skill for success in computer-assisted environments (Adeyinka and Mutula 2010). However, learners cannot always regulate themselves successfully, for reasons such as a lack of good strategy use, a lack of metacognitive knowledge, failure to control metacognitive processes, or a lack of experience in learning environments with multiple representations.

Thus, how to foster SRL ability has become a central issue in education research and practice. To support learners' acquisition of self-regulation skills in CALL courses, instruments that capture students' self-regulation are critical. Most studies on self-regulated learning have used self-report instruments, which are not only intrusive but also limited in their ability to capture self-regulated behaviors in computer-assisted environments. However, as mentioned earlier, this issue can be addressed through the use of online trace data, as technologically mediated learning environments enable the collection of a comprehensive record of the student learning behaviors that occur (Pardo 2014).

Learning analytics for SRL
As Winne and Baker (2013) noted, "Self-regulated learning is a behavioral expression of metacognitively guided motivation." Consequently, every trace records a motivated choice about how to learn. Analyzing trace data can therefore help researchers understand and discover meaningful behavioral patterns concerning rate of progress, effort spent, or time management.

Numerous studies have reported the benefits of using learning analytics (LA) to examine online course performance (Johnson 2005; Morris et al. 2005; Dietz-Uhler and Hurn 2013). These results imply that active participation is essential to successful online learning. Furthermore, a few studies have focused on the quality of learning rather than the amount of online participation (Asarta and Schmidt 2013; Cheng and Chau 2016). Asarta and Schmidt were particularly interested in the timing dimension of access to 36 online lesson materials. They examined the effect of timing, volume, intensity, and consistency of access on achievement, and clarified that keeping pace with the class schedule, studying the materials in advance of an exam without cramming, and accessing course materials regularly are vital factors for achievement. These findings support the notion that various characteristics of learning behaviors, rather than simply the frequency of access, should be taken into account.

Despite a growing body of research that examines online engagement to support the learning process in online learning environments, little is known about how to measure self-regulated learning and examine its effects on course success.
Yet interpreting and evaluating the qualities of actions, strategies, goals, and, more broadly, regulation is a much more challenging task (Roll et al. 2014). Developing indicators of self-regulated learning is the first step in addressing this challenge. The extraction and aggregation of meaningful indicators should support the understanding of students' learning statuses and the provision of actionable feedback.
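The indicators named in the abstract above (completed quizzes, study-interval irregularity, learning-pace clusters) are not specified in code in this record; as a rough illustration only, the following Python sketch shows one plausible way such indicators could be derived from quiz-completion logs and clustered into pace patterns. The file name, column names, and the use of K-means with four clusters are assumptions made for the example, not the authors' actual procedure.

# Hypothetical sketch: deriving simple self-regulation indicators from quiz logs
# and clustering learning-pace patterns. File name, schema and parameters are assumed.
import pandas as pd
from sklearn.cluster import KMeans

logs = pd.read_csv("quiz_logs.csv", parse_dates=["completed_at"])  # assumed log file and schema

rows = []
for student_id, g in logs.groupby("student_id"):
    times = g["completed_at"].sort_values()
    gaps = times.diff().dt.days.dropna()          # days between consecutive quiz completions
    rows.append({
        "student_id": student_id,
        "n_completed": len(times),                                    # number of completed quizzes
        "irregularity": float(gaps.std()) if len(gaps) > 1 else 0.0,  # spread of study intervals
    })
ind = pd.DataFrame(rows)

# Cluster students into typical learning-pace patterns (four clusters is an arbitrary choice here)
features = ind[["n_completed", "irregularity"]]
ind["pace_cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(ind.groupby("pace_cluster")[["n_completed", "irregularity"]].mean())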
[]
[]
Scientific Reports
30560945
PMC6298992
10.1038/s41598-018-36284-5
An Automatic Classification Method on Chronic Venous Insufficiency Images
Chronic venous insufficiency (CVI) affects a large population and cannot heal without a doctor's intervention. However, many patients do not receive medical advice in time. At the same time, doctors need an assistive tool to classify patients according to the severity of CVI. We propose an automatic classification method, named CVI-classifier, to help doctors and patients. In this approach, first, low-level image features are mapped into middle-level semantic features by a concept classifier, and a multi-scale semantic model is constructed to form an image representation with rich semantics. Second, a scene classifier is trained on an optimized feature subset selected by a high-order dependency based feature selection approach and is used to estimate CVI severity. Finally, classification accuracy, the kappa coefficient, and the F1-score are used to evaluate classification performance. Experiments on CVI images from 217 patients' medical records demonstrated superior performance and efficiency for CVI-classifier, with classification accuracy up to 90.92%, a kappa coefficient of 0.8735 and an F1-score of 0.9006. The method also outperformed the doctors' diagnoses (made solely from the images), improving accuracy, kappa and F1-score by 9.11%, 0.1250 and 0.0955, respectively.
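The evaluation measures named in this abstract (classification accuracy, Cohen's kappa, and F1-score) are standard; the short sketch below shows how they could be computed from predicted and true severity labels with scikit-learn. It is an illustration only, with made-up labels and macro-averaged F1 as one possible averaging choice; it does not reproduce the paper's own evaluation code.

# Illustrative only: computing the metrics named in the abstract for predicted severity levels.
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 2, 1]   # hypothetical ground-truth CVI severity levels
y_pred = [0, 1, 2, 1, 1, 0, 2, 1]   # hypothetical classifier output

print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa   :", cohen_kappa_score(y_true, y_pred))
print("F1 (macro-averaged over severity classes):", f1_score(y_true, y_pred, average="macro"))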
Related Work

With the rapid development of modern medical imaging technology, automatic medical image classification has become increasingly important for disease diagnosis, medical reference and surgical planning [10–12]. Medical image classification approaches have already been used for cancer detection [13], stroke identification [14], Alzheimer's disease [15], etc.; however, no classification approach for CVI images exists at present. Moreover, traditional medical image classification methods are mostly based on low-level image features such as color, texture and shape [16]. These low-level features cannot reflect certain hidden information in the medical images, creating the "semantic gap" between low-level features and high-level information, which is one of the biggest challenges for medical image classification [17].

To reduce the semantic gap, the Bag of Visual Words (BoVW) model [18,19] was introduced to form middle-level features that describe high-level semantics. A typical BoVW procedure is as follows (a minimal code sketch of this generic pipeline is given at the end of this section): (1) an image is sampled efficiently with local interest point detectors [20] or dense regions [21] and described by local descriptors [22]; (2) a codebook consisting of several codewords is learned with clustering techniques, such as K-means and spectral clustering, to quantize the local features into discrete values; (3) the visual histogram obtained by counting the occurrence frequency of each visual word is used to represent the medical image.

BoVW has been used in medical image classification tasks and has achieved encouraging performance. However, several design choices remain at different steps of the BoVW model, including the choice of local feature descriptor, dictionary learning and middle-level semantic image representation.

The careful design of local features depicting different aspects of visual appearance is the basis for the success of BoVW models in medical image classification. Owing to its invariance to translation, illumination and scale, SIFT and its improved versions, such as SURF, have become the most popular local descriptors [23,24]. Because the dimensionality of SIFT-based local features is high, affine moment invariants that produce compact feature vectors with minimal information redundancy have been proposed [25]. The above descriptors are computed from the local region around key points; however, most medical images have few meaningful key points and structures in the lesions, so patch-based BoVW models have been proposed for medical image classification tasks [26]. In these approaches, medical images are partitioned into multiple blocks and local descriptors are calculated from block intensities. All these types of features have proven to be powerful descriptors for detecting and describing local features in images. However, a single local descriptor may perform poorly when the image contains a complex background, because a portion of the extracted features may come from the noisy background. In practice, a combination of multiple local features [27] often yields a better image representation.

Dictionary learning is another critical component of BoVW and can be divided into unsupervised and supervised approaches [28–30]. Unsupervised clustering techniques, such as K-means, K-median clustering, mean-shift clustering, hierarchical K-means and agglomerative clustering [31–34], are usually used for constructing the visual dictionary. In these approaches, the feature vectors are clustered and the centroids of the clusters are used to form the codebook. A common limitation of these unsupervised methods is that they only optimize an objective function that fits the data while ignoring class information, which reduces the discriminative power of the resulting visual dictionaries. To create more discriminative visual words, supervised dictionary learning techniques that optimize the dictionary for a specific classification problem have been proposed and shown to outperform the corresponding unsupervised methods [35]. In recent work, Saghafi [36] proposed a concept space to capture the semantic relations between visual codewords, applying generative models such as latent semantic analysis (LSA) and probabilistic latent semantic analysis (pLSA) to discover the latent semantic relations between the initial codewords. Passalis [37] generalized and formulated the BoVW model as a neural network composed of a radial basis function (RBF) layer and an accumulation layer; the model can be trained in a supervised fashion when followed by a multilayer perceptron (MLP) classification layer. Fernando [38] introduced a Gaussian mixture model for codebook generation that not only generalizes the K-means algorithm by allowing soft assignments, but also exploits supervised information to improve the discriminative power of the clusters. Ji [39] proposed a task-dependent codebook compression framework to reduce the dimensionality of BoVW, based on the supervision labels coming from the classification task. Although supervised learning has been introduced into codebook generation, local labels are still ignored, which impedes overall performance.

After learning the visual words, all the encoded local features are pooled to form an image-level feature vector. However, a global histogram reflects only the holistic distribution of codewords; the information about the spatial layout, which can provide important cues for image classification, is lost. To take advantage of spatial information, Lazebnik et al. [40] proposed a spatial pyramid matching (SPM) framework that partitions the image into increasingly fine sub-regions and computes histograms of local features inside each sub-region, which has become a widely used strategy for incorporating spatial information. To further improve SPM, Zhou et al. [41] incorporated a multiresolution representation into the traditional BoVW model by constructing multiple-resolution images and representing each resolution image by features extracted from sub-regions of horizontal and vertical partitions. Although the above methods fully consider the spatial characteristics of the image, they do not include local semantics that can effectively eliminate the ambiguity of local features. To further exploit local concepts in images, Tanaka [42] proposed a multi-level resolution semantic modeling method for automatic scene recognition, which constructs a global image representation from the concept probabilities of local regions at each resolution level and combines the semantic representations of multiple resolution levels for scene recognition.

Although these variants of the original BoVW model describe the spatial layout and semantic information of images well, they are designed mainly for natural scene classification and have seldom been applied to medical image classification. To this end, considering that local lesions in CVI images carry meaningful semantics and occur at several sizes, this paper proposes a framework for discovering multi-level local semantics to improve automatic CVI image classification. The differences from traditional BoVW models are: 1) various features of each local region are combined to better exploit discriminative appearances; 2) a supervised learning approach based directly on local-region labels is used to generate the visual vocabulary; and 3) the final global image representation is modeled by combining middle-level semantics computed by counting the frequencies of local concepts at each resolution level.
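As a concrete illustration of the generic three-step BoVW pipeline summarised above, the sketch below extracts SIFT descriptors with OpenCV, clusters them into a codebook with K-means, encodes each image as a visual-word histogram, and trains a linear SVM on the histograms. The image paths, codebook size and classifier are assumptions made for the example; this is not the CVI-classifier described in the paper.

# A minimal, generic bag-of-visual-words pipeline (illustration only, not the CVI-classifier).
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def sift_descriptors(path):
    """Step 1: detect key points and compute SIFT descriptors for one image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.SIFT_create().detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), dtype=np.float32)

train_paths, train_labels = ["img1.png", "img2.png"], [0, 1]      # hypothetical training data
per_image = [sift_descriptors(p) for p in train_paths]

# Step 2: learn a codebook of K visual words by clustering all local descriptors
# (K must not exceed the total number of descriptors available).
K = 100
codebook = KMeans(n_clusters=K, n_init=10, random_state=0).fit(np.vstack(per_image))

def bovw_histogram(desc):
    """Step 3: encode an image as a normalised histogram of visual-word occurrences."""
    if len(desc) == 0:
        return np.zeros(K)
    hist = np.bincount(codebook.predict(desc), minlength=K).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(d) for d in per_image])
clf = LinearSVC().fit(X, train_labels)             # image-level classifier on BoVW histograms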
[ "21962926", "26992626", "10842165", "26890880", "25055385", "28117445", "21914569", "28327449", "27688597", "21118769", "29567484", "22128004", "23289130", "23627889", "19320818", "15622385", "28042512", "22006275", "24658240", "20858575" ]
[ { "pmid": "21962926", "title": "Validation of Venous Clinical Severity Score (VCSS) with other venous severity assessment tools from the American Venous Forum, National Venous Screening Program.", "abstract": "BACKGROUND\nSeveral standard venous assessment tools have been used as independent determinants of venous disease severity, but correlation between these instruments as a global venous screening tool has not been tested. The scope of this study is to assess the validity of Venous Clinical Severity Scoring (VCSS) and its integration with other venous assessment tools as a global venous screening instrument.\n\n\nMETHODS\nThe American Venous Forum (AVF), National Venous Screening Program (NVSP) data registry from 2007 to 2009 was queried for participants with complete datasets, including CEAP clinical staging, VCSS, modified Chronic Venous Insufficiency Quality of Life (CIVIQ) assessment, and venous ultrasound results. Statistical correlation trends were analyzed using Spearman's rank coefficient as related to VCSS.\n\n\nRESULTS\nFive thousand eight hundred fourteen limbs in 2,907 participants were screened and included CEAP clinical stage C0: 26%; C1: 33%; C2: 24%; C3: 9%; C4: 7%; C5: 0.5%; C6: 0.2% (mean, 1.41 ± 1.22). VCSS mean score distribution (range, 0-3) for the entire cohort included: pain 1.01 ± 0.80, varicose veins 0.61 ± 0.84, edema 0.61 ± 0.81, pigmentation 0.15 ± 0.47, inflammation 0.07 ± 0.33, induration 0.04 ± 0.27, ulcer number 0.004 ± 0.081, ulcer size 0.007 ± 0.112, ulcer duration 0.007 ± 0.134, and compression 0.30 ± 0.81. Overall correlation between CEAP and VCSS was moderately strong (r(s) = 0.49; P < .0001), with highest correlation for attributes reflecting more advanced disease, including varicose vein (r(s) = 0.51; P < .0001), pigmentation (r(s) = 0.39; P < .0001), inflammation (r(s) = 0.28; P < .0001), induration (r(s) = 0.22; P < .0001), and edema (r(s) = 0.21; P < .0001). Based on the modified CIVIQ assessment, overall mean score for each general category included: Quality of Life (QoL)-Pain 6.04 ± 3.12 (range, 3-15), QoL-Functional 9.90 ± 5.32 (range, 5-25), and QoL-Social 5.41 ± 3.09 (range, 3-15). Overall correlation between CIVIQ and VCSS was moderately strong (r(s) = 0.43; P < .0001), with the highest correlation noted for pain (r(s) = 0.55; P < .0001) and edema (r(s) = 0.30; P < .0001). Based on screening venous ultrasound results, 38.1% of limbs had reflux and 1.5% obstruction in the femoral, saphenous, or popliteal vein segments. Correlation between overall venous ultrasound findings (reflux + obstruction) and VCSS was slightly positive (r(s) = 0.23; P < .0001) but was highest for varicose vein (r(s) = 0.32; P < .0001) and showed no correlation to swelling (r(s) = 0.06; P < .0001) and pain (r(s) = 0.003; P = .7947).\n\n\nCONCLUSIONS\nWhile there is correlation between VCSS, CEAP, modified CIVIQ, and venous ultrasound findings, subgroup analysis indicates that this correlation is driven by different components of VCSS compared with the other venous assessment tools. This observation may reflect that VCSS has more global application in determining overall severity of venous disease, while at the same time highlighting the strengths of the other venous assessment tools." 
}, { "pmid": "26992626", "title": "Use of the Clinical, Etiologic, Anatomic, and Pathophysiologic classification and Venous Clinical Severity Score to establish a treatment plan for chronic venous disorders.", "abstract": "To be useful in clinical practice and in the evaluation of clinical therapies for chronic venous disorders, a measurement instrument should be objective, inclusive of all severities of venous disease, and rapidly performed by clinicians. The Clinical, Etiologic, Anatomic, and Pathophysiologic classification helps us identify the etiology, whether it is congenital, nonthrombotic, or post-thrombotic; anatomic segments involved, whether deep, superficial, or perforators; and pathophysiologic data, such as reflux or obstruction. The Venous Clinical Severity Score can be used to observe patients longitudinally, especially after interventions, although the total score is biased with regard to advanced disease, such as C4 through C6. To be able to predict progression of disease, more patient-validated instruments are needed. Physician-reported outcomes (the Venous Clinical Severity Score and the Clinical, Etiologic, Anatomic, and Pathophysiologic classification) in association with a patient-reported outcome may be the solution for the development of an ideal treatment plan." }, { "pmid": "10842165", "title": "Venous severity scoring: An adjunct to venous outcome assessment.", "abstract": "Some measure of disease severity is needed to properly compare the outcomes of the various approaches to the treatment of chronic venous insufficiency. Comparing the outcomes of two or more different treatments in a clinical trial, or the same treatment in two or more reports from the literature cannot be done with confidence unless the relative severity of the venous disease in each treatment group is known. The CEAP (Clinical-Etiology-Anatomic-Pathophysiologic) system is an excellent classification scheme, but it cannot serve the purpose of venous severity scoring because many of its components are relatively static and others use detailed alphabetical designations. A disease severity scoring scheme needs to be quantifiable, with gradable elements that can change in response to treatment. However, an American Venous Forum committee on venous outcomes assessment has developed a venous severity scoring system based on the best usable elements of the CEAP system. Two scores are proposed. The first is a Venous Clinical Severity Score: nine clinical characteristics of chronic venous disease are graded from 0 to 3 (absent, mild, moderate, severe) with specific criteria to avoid overlap or arbitrary scoring. Zero to three points are added for differences in background conservative therapy (compression and elevation) to produce a 30 point-maximum flat scale. The second is a Venous Segmental Disease Score, which combines the Anatomic and Pathophysiologic components of CEAP. Major venous segments are graded according to presence of reflux and/or obstruction. It is entirely based on venous imaging, primarily duplex scan but also phlebographic findings. This scoring scheme weights 11 venous segments for their relative importance when involved with reflux and/or obstruction, with a maximum score of 10. A third score is simply a modification of the existing CEAP disability score that eliminates reference to work and an 8-hour working day, substituting instead the patient's prior normal activities. These new scoring schemes are intended to complement the current CEAP system." 
}, { "pmid": "26890880", "title": "Adapting content-based image retrieval techniques for the semantic annotation of medical images.", "abstract": "The automatic annotation of medical images is a prerequisite for building comprehensive semantic archives that can be used to enhance evidence-based diagnosis, physician education, and biomedical research. Annotation also has important applications in the automatic generation of structured radiology reports. Much of the prior research work has focused on annotating images with properties such as the modality of the image, or the biological system or body region being imaged. However, many challenges remain for the annotation of high-level semantic content in medical images (e.g., presence of calcification, vessel obstruction, etc.) due to the difficulty in discovering relationships and associations between low-level image features and high-level semantic concepts. This difficulty is further compounded by the lack of labelled training data. In this paper, we present a method for the automatic semantic annotation of medical images that leverages techniques from content-based image retrieval (CBIR). CBIR is a well-established image search technology that uses quantifiable low-level image features to represent the high-level semantic content depicted in those images. Our method extends CBIR techniques to identify or retrieve a collection of labelled images that have similar low-level features and then uses this collection to determine the best high-level semantic annotations. We demonstrate our annotation method using retrieval via weighted nearest-neighbour retrieval and multi-class classification to show that our approach is viable regardless of the underlying retrieval strategy. We experimentally compared our method with several well-established baseline techniques (classification and regression) and showed that our method achieved the highest accuracy in the annotation of liver computed tomography (CT) images." }, { "pmid": "25055385", "title": "Computer-Aided Prostate Cancer Diagnosis From Digitized Histopathology: A Review on Texture-Based Systems.", "abstract": "Prostate cancer (PCa) is currently diagnosed by microscopic evaluation of biopsy samples. Since tissue assessment heavily relies on the pathologists level of expertise and interpretation criteria, it is still a subjective process with high intra- and interobserver variabilities. Computer-aided diagnosis (CAD) may have a major impact on detection and grading of PCa by reducing the pathologists reading time, and increasing the accuracy and reproducibility of diagnosis outcomes. However, the complexity of the prostatic tissue and the large volumes of data generated by biopsy procedures make the development of CAD systems for PCa a challenging task. The problem of automated diagnosis of prostatic carcinoma from histopathology has received a lot of attention. As a result, a number of CAD systems, have been proposed for quantitative image analysis and classification. This review aims at providing a detailed description of selected literature in the field of CAD of PCa, emphasizing the role of texture analysis methods in tissue description. It includes a review of image analysis tools for image preprocessing, feature extraction, classification, and validation techniques used in PCa detection and grading, as well as future directions in pursuit of better texture-based CAD systems." 
}, { "pmid": "28117445", "title": "Dermatologist-level classification of skin cancer with deep neural networks.", "abstract": "Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images-two orders of magnitude larger than previous datasets-consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 (ref. 13) and can therefore potentially provide low-cost universal access to vital diagnostic care." }, { "pmid": "21914569", "title": "NMF-SVM based CAD tool applied to functional brain images for the diagnosis of Alzheimer's disease.", "abstract": "This paper presents a novel computer-aided diagnosis (CAD) technique for the early diagnosis of the Alzheimer's disease (AD) based on nonnegative matrix factorization (NMF) and support vector machines (SVM) with bounds of confidence. The CAD tool is designed for the study and classification of functional brain images. For this purpose, two different brain image databases are selected: a single photon emission computed tomography (SPECT) database and positron emission tomography (PET) images, both of them containing data for both Alzheimer's disease (AD) patients and healthy controls as a reference. These databases are analyzed by applying the Fisher discriminant ratio (FDR) and nonnegative matrix factorization (NMF) for feature selection and extraction of the most relevant features. The resulting NMF-transformed sets of data, which contain a reduced number of features, are classified by means of a SVM-based classifier with bounds of confidence for decision. The proposed NMF-SVM method yields up to 91% classification accuracy with high sensitivity and specificity rates (upper than 90%). This NMF-SVM CAD tool becomes an accurate method for SPECT and PET AD image classification." }, { "pmid": "28327449", "title": "Integrated local binary pattern texture features for classification of breast tissue imaged by optical coherence microscopy.", "abstract": "This paper proposes a texture analysis technique that can effectively classify different types of human breast tissue imaged by Optical Coherence Microscopy (OCM). 
OCM is an emerging imaging modality for rapid tissue screening and has the potential to provide high resolution microscopic images that approach those of histology. OCM images, acquired without tissue staining, however, pose unique challenges to image analysis and pattern classification. We examined multiple types of texture features and found Local Binary Pattern (LBP) features to perform better in classifying tissues imaged by OCM. In order to improve classification accuracy, we propose novel variants of LBP features, namely average LBP (ALBP) and block based LBP (BLBP). Compared with the classic LBP feature, ALBP and BLBP features provide an enhanced encoding of the texture structure in a local neighborhood by looking at intensity differences among neighboring pixels and among certain blocks of pixels in the neighborhood. Fourty-six freshly excised human breast tissue samples, including 27 benign (e.g. fibroadenoma, fibrocystic disease and usual ductal hyperplasia) and 19 breast carcinoma (e.g. invasive ductal carcinoma, ductal carcinoma in situ and lobular carcinoma in situ) were imaged with large field OCM with an imaging area of 10 × 10 mm2 (10, 000 × 10, 000 pixels) for each sample. Corresponding H&E histology was obtained for each sample and used to provide ground truth diagnosis. 4310 small OCM image blocks (500 × 500 pixels) each paired with corresponding H&E histology was extracted from large-field OCM images and labeled with one of the five different classes: adipose tissue (n = 347), fibrous stroma (n = 2,065), breast lobules (n = 199), carcinomas (pooled from all sub-types, n = 1,127), and background (regions outside of the specimens, n = 572). Our experiments show that by integrating a selected set of LBP and the two new variant (ALBP and BLBP) features at multiple scales, the classification accuracy increased from 81.7% (using LBP features alone) to 93.8% using a neural network classifier. The integrated feature was also used to classify large-field OCM images for tumor detection. A receiver operating characteristic (ROC) curve was obtained with an area under the curve value of 0.959. A sensitivity level of 100% and specificity level of 85.2% was achieved to differentiate benign from malignant samples. Several other experiments also demonstrate the complementary nature of LBP and the two variants (ALBP and BLBP features) and the significance of integrating these texture features for classification. Using features from multiple scales and performing feature selection are also effective mechanisms to improve accuracy while maintaining computational efficiency." }, { "pmid": "27688597", "title": "Dictionary Pruning with Visual Word Significance for Medical Image Retrieval.", "abstract": "Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. 
The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency." }, { "pmid": "21118769", "title": "X-ray categorization and retrieval on the organ and pathology level, using patch-based visual words.", "abstract": "In this study we present an efficient image categorization and retrieval system applied to medical image databases, in particular large radiograph archives. The methodology is based on local patch representation of the image content, using a \"bag of visual words\" approach. We explore the effects of various parameters on system performance, and show best results using dense sampling of simple features with spatial content, and a nonlinear kernel-based support vector machine (SVM) classifier. In a recent international competition the system was ranked first in discriminating orientation and body regions in X-ray images. In addition to organ-level discrimination, we show an application to pathology-level categorization of chest X-ray data, the most popular examination in radiology. The system discriminates between healthy and pathological cases, and is also shown to successfully identify specific pathologies in a set of chest radiographs taken from a routine hospital examination. This is a first step towards similarity-based categorization, which has a major clinical implications for computer-assisted diagnostics." }, { "pmid": "29567484", "title": "Biomedical image classification based on a cascade of an SVM with a reject option and subspace analysis.", "abstract": "Automated biomedical image classification could confront the challenges of high level noise, image blur, illumination variation and complicated geometric correspondence among various categorical biomedical patterns in practice. To handle these challenges, we propose a cascade method consisting of two stages for biomedical image classification. At stage 1, we propose a confidence score based classification rule with a reject option for a preliminary decision using the support vector machine (SVM). The testing images going through stage 1 are separated into two groups based on their confidence scores. Those testing images with sufficiently high confidence scores are classified at stage 1 while the others with low confidence scores are rejected and fed to stage 2. At stage 2, the rejected images from stage 1 are first processed by a subspace analysis technique called eigenfeature regularization and extraction (ERE), and then classified by another SVM trained in the transformed subspace learned by ERE. At both stages, images are represented based on two types of local features, i.e., SIFT and SURF, respectively. They are encoded using various bag-of-words (BoW) models to handle biomedical patterns with and without geometric correspondence, respectively. 
Extensive experiments are implemented to evaluate the proposed method on three benchmark real-world biomedical image datasets. The proposed method significantly outperforms several competing state-of-the-art methods in terms of classification accuracy." }, { "pmid": "22128004", "title": "Task-dependent visual-codebook compression.", "abstract": "A visual codebook serves as a fundamental component in many state-of-the-art computer vision systems. Most existing codebooks are built based on quantizing local feature descriptors extracted from training images. Subsequently, each image is represented as a high-dimensional bag-of-words histogram. Such highly redundant image description lacks efficiency in both storage and retrieval, in which only a few bins are nonzero and distributed sparsely. Furthermore, most existing codebooks are built based solely on the visual statistics of local descriptors, without considering the supervise labels coming from the subsequent recognition or classification tasks. In this paper, we propose a task-dependent codebook compression framework to handle the above two problems. First, we propose to learn a compression function to map an originally high-dimensional codebook into a compact codebook while maintaining its visual discriminability. This is achieved by a codeword sparse coding scheme with Lasso regression, which minimizes the descriptor distortions of training images after codebook compression. Second, we propose to adapt our codebook compression to the subsequent recognition or classification tasks. This is achieved by introducing a label constraint kernel (LCK) into our compression loss function. In particular, our LCK can model heterogeneous kinds of supervision, i.e., (partial) category labels, correlative semantic annotations, and image query logs. We validated our codebook compression in three computer vision tasks: 1) object recognition in PASCAL Visual Object Class 07; 2) near-duplicate image retrieval in UKBench; and 3) web image search in a collection of 0.5 million Flickr photographs. Our compressed codebook has shown superior performances over several state-of-the-art supervised and unsupervised codebooks." }, { "pmid": "23289130", "title": "Proximity-based frameworks for generating embeddings from multi-output data.", "abstract": "This paper is about supervised and semi-supervised dimensionality reduction (DR) by generating spectral embeddings from multi-output data based on the pairwise proximity information. Two flexible and generic frameworks are proposed to achieve supervised DR (SDR) for multilabel classification. One is able to extend any existing single-label SDR to multilabel via sample duplication, referred to as MESD. The other is a multilabel design framework that tackles the SDR problem by computing weight (proximity) matrices based on simultaneous feature and label information, referred to as MOPE, as a generalization of many current techniques. A diverse set of different schemes for label-based proximity calculation, as well as a mechanism for combining label-based and feature-based weight information by considering information importance and prioritization, are proposed for MOPE. Additionally, we summarize many current spectral methods for unsupervised DR (UDR), single/multilabel SDR, and semi-supervised DR (SSDR) and express them under a common template representation as a general guide to researchers in the field. 
We also propose a general framework for achieving SSDR by combining existing SDR and UDR models, and also a procedure of reducing the computational cost via learning with a target set of relation features. The effectiveness of our proposed methodologies is demonstrated with experiments with document collections for multilabel text categorization from the natural language processing domain." }, { "pmid": "23627889", "title": "A comparison of Cohen's Kappa and Gwet's AC1 when calculating inter-rater reliability coefficients: a study conducted with personality disorder samples.", "abstract": "BACKGROUND\nRater agreement is important in clinical research, and Cohen's Kappa is a widely used method for assessing inter-rater reliability; however, there are well documented statistical problems associated with the measure. In order to assess its utility, we evaluated it against Gwet's AC1 and compared the results.\n\n\nMETHODS\nThis study was carried out across 67 patients (56% males) aged 18 to 67, with a mean SD of 44.13 ± 12.68 years. Nine raters (7 psychiatrists, a psychiatry resident and a social worker) participated as interviewers, either for the first or the second interviews, which were held 4 to 6 weeks apart. The interviews were held in order to establish a personality disorder (PD) diagnosis using DSM-IV criteria. Cohen's Kappa and Gwet's AC1 were used and the level of agreement between raters was assessed in terms of a simple categorical diagnosis (i.e., the presence or absence of a disorder). Data were also compared with a previous analysis in order to evaluate the effects of trait prevalence.\n\n\nRESULTS\nGwet's AC1 was shown to have higher inter-rater reliability coefficients for all the PD criteria, ranging from .752 to 1.000, whereas Cohen's Kappa ranged from 0 to 1.00. Cohen's Kappa values were high and close to the percentage of agreement when the prevalence was high, whereas Gwet's AC1 values appeared not to change much with a change in prevalence, but remained close to the percentage of agreement. For example a Schizoid sample revealed a mean Cohen's Kappa of .726 and a Gwet's AC1of .853 , which fell within the different level of agreement according to criteria developed by Landis and Koch, and Altman and Fleiss.\n\n\nCONCLUSIONS\nBased on the different formulae used to calculate the level of chance-corrected agreement, Gwet's AC1 was shown to provide a more stable inter-rater reliability coefficient than Cohen's Kappa. It was also found to be less affected by prevalence and marginal probability than that of Cohen's Kappa, and therefore should be considered for use with inter-rater reliability analysis." }, { "pmid": "19320818", "title": "Measurement properties of the Villalta scale to define and classify the severity of the post-thrombotic syndrome.", "abstract": "The post-thrombotic syndrome (PTS) is a frequent and important complication of deep venous thrombosis (DVT). The diagnosis of PTS is based primarily on the presence of typical symptoms and clinical signs. In the 1990s, a clinical scale known as the Villalta scale was proposed as a measure that could be used to diagnose and classify the severity of PTS. The objective of the present paper was to review the published evidence on the measurement properties of the Villalta scale. Results of the review demonstrate that the Villalta scale is a reliable and valid measure of PTS in patients with previous, objectively confirmed DVT. 
The scale is acceptable to research subjects and research personnel, and shows responsiveness to clinical change in PTS. Aspects of the Villalta scale that merit further evaluation include test-retest reliability, more detailed assessment of ulcer severity and assessment of responsiveness across the full range of PTS severity. Research aimed at improving the measurement of PTS will also help to improve the overall validity of findings generated by clinical studies of PTS." }, { "pmid": "15622385", "title": "Revision of the CEAP classification for chronic venous disorders: consensus statement.", "abstract": "The CEAP classification for chronic venous disorders (CVD) was developed in 1994 by an international ad hoc committee of the American Venous Forum, endorsed by the Society for Vascular Surgery, and incorporated into \"Reporting Standards in Venous Disease\" in 1995. Today most published clinical papers on CVD use all or portions of CEAP. Rather than have it stand as a static classification system, an ad hoc committee of the American Venous Forum, working with an international liaison committee, has recommended a number of practical changes, detailed in this consensus report. These include refinement of several definitions used in describing CVD; refinement of the C classes of CEAP; addition of the descriptor n (no venous abnormality identified); elaboration of the date of classification and level of investigation; and as a simpler alternative to the full (advanced) CEAP classification, introduction of a basic CEAP version. It is important to stress that CEAP is a descriptive classification, whereas venous severity scoring and quality of life scores are instruments for longitudinal research to assess outcomes." }, { "pmid": "28042512", "title": "THE MEASUREMENT OF BONE QUALITY USING GRAY LEVEL CO-OCCURRENCE MATRIX TEXTURAL FEATURES.", "abstract": "In this paper, statistical methods for the estimation of bone quality to predict the risk of fracture are reported. Bone mineral density and bone architecture properties are the main contributors of bone quality. Dual-energy X-ray Absorptiometry (DXA) is the traditional clinical measurement technique for bone mineral density, but does not include architectural information to enhance the prediction of bone fragility. Other modalities are not practical due to cost and access considerations. This study investigates statistical parameters based on the Gray Level Co-occurrence Matrix (GLCM) extracted from two-dimensional projection images and explores links with architectural properties and bone mechanics. Data analysis was conducted on Micro-CT images of 13 trabecular bones (with an in-plane spatial resolution of about 50μm). Ground truth data for bone volume fraction (BV/TV), bone strength and modulus were available based on complex 3D analysis and mechanical tests. Correlation between the statistical parameters and biomechanical test results was studied using regression analysis. The results showed Cluster-Shade was strongly correlated with the microarchitecture of the trabecular bone and related to mechanical properties. Once the principle thesis of utilizing second-order statistics is established, it can be extended to other modalities, providing cost and convenience advantages for patients and doctors." 
}, { "pmid": "22006275", "title": "Automatic detection of pectoral muscle using average gradient and shape based feature.", "abstract": "In medio-lateral oblique view of mammogram, pectoral muscle may sometimes affect the detection of breast cancer due to their similar characteristics with abnormal tissues. As a result pectoral muscle should be handled separately while detecting the breast cancer. In this paper, a novel approach for the detection of pectoral muscle using average gradient- and shape-based feature is proposed. The process first approximates the pectoral muscle boundary as a straight line using average gradient-, position-, and shape-based features of the pectoral muscle. Straight line is then tuned to a smooth curve which represents the pectoral margin more accurately. Finally, an enclosed region is generated which represents the pectoral muscle as a segmentation mask. The main advantage of the method is its' simplicity as well as accuracy. The method is applied on 200 mammographic images consisting 80 randomly selected scanned film images from Mammographic Image Analysis Society (mini-MIAS) database, 80 direct radiography (DR) images, and 40 computed radiography (CR) images from local database. The performance is evaluated based upon the false positive (FP), false negative (FN) pixel percentage, and mean distance closest point (MDCP). Taking all the images into consideration, the average FP and FN pixel percentages are 4.22%, 3.93%, 18.81%, and 6.71%, 6.28%, 5.12% for mini-MIAS, DR, and CR images, respectively. Obtained MDCP values for the same set of database are 3.34, 3.33, and 10.41 respectively. The method is also compared with two well-known pectoral muscle detection techniques and in most of the cases, it outperforms the other two approaches." }, { "pmid": "24658240", "title": "Lung nodule classification with multilevel patch-based context analysis.", "abstract": "In this paper, we propose a novel classification method for the four types of lung nodules, i.e., well-circumscribed, vascularized, juxta-pleural, and pleural-tail, in low dose computed tomography scans. The proposed method is based on contextual analysis by combining the lung nodule and surrounding anatomical structures, and has three main stages: an adaptive patch-based division is used to construct concentric multilevel partition; then, a new feature set is designed to incorporate intensity, texture, and gradient information for image patch feature description, and then a contextual latent semantic analysis-based classifier is designed to calculate the probabilistic estimations for the relevant images. Our proposed method was evaluated on a publicly available dataset and clearly demonstrated promising classification performance." }, { "pmid": "20858575", "title": "Driver drowsiness classification using fuzzy wavelet-packet-based feature-extraction algorithm.", "abstract": "Driver drowsiness and loss of vigilance are a major cause of road accidents. Monitoring physiological signals while driving provides the possibility of detecting and warning of drowsiness and fatigue. The aim of this paper is to maximize the amount of drowsiness-related information extracted from a set of electroencephalogram (EEG), electrooculogram (EOG), and electrocardiogram (ECG) signals during a simulation driving test. 
Specifically, we develop an efficient fuzzy mutual-information (MI)- based wavelet packet transform (FMIWPT) feature-extraction method for classifying the driver drowsiness state into one of predefined drowsiness levels. The proposed method estimates the required MI using a novel approach based on fuzzy memberships providing an accurate-information content-estimation measure. The quality of the extracted features was assessed on datasets collected from 31 drivers on a simulation test. The experimental results proved the significance of FMIWPT in extracting features that highly correlate with the different drowsiness levels achieving a classification accuracy of 95%-- 97% on an average across all subjects." } ]
BMC Medical Informatics and Decision Making
30563507
PMC6299608
10.1186/s12911-018-0721-8
Promoting exercise training and physical activity in daily life: a feasibility study of a virtual group intervention for behaviour change in COPD
Background: Physical inactivity is associated with poor health outcomes in chronic obstructive pulmonary disease (COPD). It is therefore crucial for patients to have a physically active lifestyle. The aims of this feasibility study were to assess a tablet-based physical activity behavioural intervention in virtual groups for COPD regarding 1) patients' acceptance, 2) technology usability, 3) patients' exercise programme adherence, and 4) changes in patients' physical activity level. Methods: We used an application with functionality for a virtual peer group, a digital exercise diary, a follow-along exercise video, and visual rewards on the home screen wallpaper. The exercise programme combined scheduled virtual group exercising (outdoor ground walking, indoor resistance and strength training) with self-chosen individual exercises. Ten participants with COPD were enrolled into two exercise training groups. Patients' acceptance was assessed by semi-structured interviews, technology usability was assessed by the System Usability Scale, and exercise programme adherence and level of physical activity were assessed by self-reporting. The interviews were also used for the latter three aspects. Results: The virtual peer group was experienced as motivating, helping participants to get started and be physically active. They updated their own activity status and kept track of the others' status. Having a time schedule for the virtual group exercises helped them to avoid postponing the exercise training. All participants recorded individual exercises in the diary, the exercise video was well received and used, and most participants paid attention to the visual rewards. All participants found the technology easy both to learn and to use. Exercise programme adherence was good, with, on average, 77% attendance at the virtual group exercises, and all participants performed additional individual exercises. The average number of physical activity sessions per week doubled from 2.9 (range 0–10, median 2) at baseline to 5.9 (range 3.3–10.33, median 4.8) during the intervention period. Conclusion: The results indicate that the tablet-based intervention may be feasible in COPD, and that it was acceptable, encouraged a sense of peer support and fellowship in the group, and motivated participants to engage in physical activity and exercise training in daily life. Further assessment of patient outcomes is needed.
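The System Usability Scale mentioned in the Methods is scored with a fixed arithmetic rule: each of the ten items is rated 1-5, odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is multiplied by 2.5 to give a 0-100 score. The small sketch below applies this standard formula to made-up ratings; it illustrates the scale's scoring only and is not the study's analysis code.

# Standard SUS scoring (0-100) for one respondent's ten item ratings (each 1-5).
def sus_score(ratings):
    assert len(ratings) == 10
    odd = sum(r - 1 for r in ratings[0::2])    # items 1,3,5,7,9: positively worded statements
    even = sum(5 - r for r in ratings[1::2])   # items 2,4,6,8,10: negatively worded statements
    return 2.5 * (odd + even)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))   # hypothetical ratings -> 85.0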
Related work
In COPD, there are several studies on technology interventions for physical activity behaviour change in individual patients [34, 46–52]. Many of these include the use of an exercise diary or questionnaire for self-monitoring of behaviour [34, 47, 48, 50], and/or the use of activity sensors and step counters [46, 50, 51]. Single-player exergames have also been used for exercising at home [53], and virtual reality has been used in remotely supervised exercising for COPD [54]. We are not aware of any studies in COPD of physical activity interventions with visual rewards; however, visual rewards are used in such interventions in other populations [55]. Online pulmonary rehabilitation programmes have been delivered to individual COPD patients at home [56] and to online groups of patients [21–27]. However, surprisingly few of the physical activity behaviour change interventions studied for COPD include virtual groups, apart from videoconferencing exercise groups and an intervention combining an online forum with a pedometer and other functionality [29]. Further, findings by others support the use of multifaceted interventions with opportunities for individual adaptation when targeting physical activity behaviour change in COPD [57].
[ "15665324", "21426563", "19010994", "25359358", "16760357", "17494825", "25221907", "21596892", "21843834", "22884186", "24947760", "23079176", "28392983", "18487712", "26651831", "27992099", "27502583", "25419125", "25142484", "20727209", "18508824", "24647863", "23512568", "18203128", "19036552", "22193935", "25246781", "25886014", "26911326", "28137918", "23742208", "23235321", "28716786", "16707399", "847061" ]
[ { "pmid": "15665324", "title": "Characteristics of physical activities in daily life in chronic obstructive pulmonary disease.", "abstract": "Quantification of physical activities in daily life in patients with chronic obstructive pulmonary disease has increasing clinical interest. However, detailed comparison with healthy subjects is not available. Furthermore, it is unknown whether time spent actively during daily life is related to lung function, muscle force, or maximal and functional exercise capacity. We assessed physical activities and movement intensity with the DynaPort activity monitor in 50 patients (age 64 +/- 7 years; FEV1 43 +/- 18% predicted) and 25 healthy elderly individuals (age 66 +/- 5 years). Patients showed lower walking time (44 +/- 26 vs. 81 +/- 26 minutes/day), standing time (191 +/- 99 vs. 295 +/- 109 minutes/day), and movement intensity during walking (1.8 +/- 0.3 vs. 2.4 +/- 0.5 m/second2; p < 0.0001 for all), as well as higher sitting time (374 +/- 139 vs. 306 +/- 108 minutes/day; p = 0.04) and lying time (87 +/- 97 vs. 29 +/- 33 minutes/day; p = 0.004). Walking time was highly correlated with the 6-minute walking test (r = 0.76, p < 0.0001) and more modestly to maximal exercise capacity, lung function, and muscle force (0.28 < r < 0.64, p < 0.05). Patients with chronic obstructive pulmonary disease are markedly inactive in daily life. Functional exercise capacity is the strongest correlate of physical activities in daily life." }, { "pmid": "21426563", "title": "Level of daily physical activity in individuals with COPD compared with healthy controls.", "abstract": "BACKGROUND\nPersons with chronic obstructive pulmonary disease (COPD), performing some level of regular physical activity, have a lower risk of both COPD-related hospital admissions and mortality. COPD patients of all stages seem to benefit from exercise training programs, thereby improving with respect to both exercise tolerance and symptoms of dyspnea and fatigue. Physical inactivity, which becomes more severe with increasing age, is a point of concern in healthy older adults. COPD might worsen this scenario, but it is unclear to what degree. This literature review aims to present the extent of the impact of COPD on objectively-measured daily physical activity (DPA). The focus is on the extent of the impact that COPD has on duration, intensity, and counts of DPA, as well as whether the severity of the disease has an additional influence on DPA.\n\n\nRESULTS\nA literature review was performed in the databases PubMed [MEDLINE], Picarta, PEDRO, ISI Web of Knowledge and Google scholar. After screening, 11 studies were identified as being relevant for comparison between COPD patients and healthy controls with respect to duration, intensity, and counts of DPA. Four more studies were found to be relevant to address the subject of the influence the severity of the disease may have on DPA. The average percentage of DPA of COPD patients vs. healthy control subjects for duration was 57%, for intensity 75%, and for activity counts 56%. Correlations of DPA and severity of the disease were low and/or not significant.\n\n\nCONCLUSIONS\nFrom the results of this review, it appears that patients with COPD have a significantly reduced duration, intensity, and counts of DPA when compared to healthy control subjects. The intensity of DPA seems to be less affected by COPD than duration and counts. Judging from the results, it seems that severity of COPD is not strongly correlated with level of DPA. 
Future research should focus in more detail on the relation between COPD and duration, intensity, and counts of DPA, as well as the effect of disease severity on DPA, so that these relations become more understandable." }, { "pmid": "19010994", "title": "Physical activity in patients with COPD.", "abstract": "The present study aimed to measure physical activity in patients with chronic obstructive pulmonary disease (COPD) to: 1) identify the disease stage at which physical activity becomes limited; 2) investigate the relationship of clinical characteristics with physical activity; 3) evaluate the predictive power of clinical characteristics identifying very inactive patients; and 4) analyse the reliability of physical activity measurements. In total, 163 patients with COPD (Global Initiative for Chronic Obstructive Lung Disease (GOLD) stage I-IV; BODE (body mass index, airway obstruction, dyspnoea, exercise capacity) index score 0-10) and 29 patients with chronic bronchitis (normal spirometry; former GOLD stage 0) wore activity monitors that recorded steps per day, minutes of at least moderate activity, and physical activity levels for 5 days (3 weekdays plus Saturday and Sunday). Compared with patients with chronic bronchitis, steps per day, minutes of at least moderate activity and physical activity levels were reduced from GOLD stage II/BODE score 1, GOLD stage III/BODE score 3/4 and from GOLD stage III/BODE score 1, respectively. Reliability of physical activity measurements improved with the number of measured days and with higher GOLD stages. Moderate relationships were observed between clinical characteristics and physical activity. GOLD stages III and IV best predicted very inactive patients. Physical activity is reduced in patients with chronic obstructive pulmonary disease from Global Initiative for Chronic Obstructive Lung Disease stage II/ body mass index, airway obstruction, dyspnoea, exercise capacity score 1. Clinical characteristics of patients with chronic obstructive pulmonary disease only incompletely reflect their physical activity." }, { "pmid": "25359358", "title": "An official European Respiratory Society statement on physical activity in COPD.", "abstract": "This European Respiratory Society (ERS) statement provides a comprehensive overview on physical activity in patients with chronic obstructive pulmonary disease (COPD). A multidisciplinary Task Force of experts representing the ERS Scientific Group 01.02 \"Rehabilitation and Chronic Care\" determined the overall scope of this statement through consensus. Focused literature reviews were conducted in key topic areas and the final content of this Statement was agreed upon by all members. The current knowledge regarding physical activity in COPD is presented, including the definition of physical activity, the consequences of physical inactivity on lung function decline and COPD incidence, physical activity assessment, prevalence of physical inactivity in COPD, clinical correlates of physical activity, effects of physical inactivity on hospitalisations and mortality, and treatment strategies to improve physical activity in patients with COPD. This Task Force identified multiple major areas of research that need to be addressed further in the coming years. These include, but are not limited to, the disease-modifying potential of increased physical activity, and to further understand how improvements in exercise capacity, dyspnoea and self-efficacy following interventions may translate into increased physical activity. 
The Task Force recommends that this ERS statement should be reviewed periodically (e.g. every 5-8 years)." }, { "pmid": "17494825", "title": "Pulmonary Rehabilitation: Joint ACCP/AACVPR Evidence-Based Clinical Practice Guidelines.", "abstract": "BACKGROUND\nPulmonary rehabilitation has become a standard of care for patients with chronic lung diseases. This document provides a systematic, evidence-based review of the pulmonary rehabilitation literature that updates the 1997 guidelines published by the American College of Chest Physicians (ACCP) and the American Association of Cardiovascular and Pulmonary Rehabilitation.\n\n\nMETHODS\nThe guideline panel reviewed evidence tables, which were prepared by the ACCP Clinical Research Analyst, that were based on a systematic review of published literature from 1996 to 2004. This guideline updates the previous recommendations and also examines new areas of research relevant to pulmonary rehabilitation. Recommendations were developed by consensus and rated according to the ACCP guideline grading system.\n\n\nRESULTS\nThe new evidence strengthens the previous recommendations supporting the benefits of lower and upper extremity exercise training and improvements in dyspnea and health-related quality-of-life outcomes of pulmonary rehabilitation. Additional evidence supports improvements in health-care utilization and psychosocial outcomes. There are few additional data about survival. Some new evidence indicates that longer term rehabilitation, maintenance strategies following rehabilitation, and the incorporation of education and strength training in pulmonary rehabilitation are beneficial. Current evidence does not support the routine use of inspiratory muscle training, anabolic drugs, or nutritional supplementation in pulmonary rehabilitation. Evidence does support the use of supplemental oxygen therapy for patients with severe hypoxemia at rest or with exercise. Noninvasive ventilation may be helpful for selected patients with advanced COPD. Finally, pulmonary rehabilitation appears to benefit patients with chronic lung diseases other than COPD.\n\n\nCONCLUSIONS\nThere is substantial new evidence that pulmonary rehabilitation is beneficial for patients with COPD and other chronic lung diseases. Several areas of research provide opportunities for future research that can advance the field and make rehabilitative treatment available to many more eligible patients in need." }, { "pmid": "25221907", "title": "Interventions to Increase Physical Activity in Patients with COPD: A Comprehensive Review.", "abstract": "It is unknown how interventions aimed at increasing physical activity (PA), other than traditional pulmonary rehabilitation, are structured and whether they are effective in increasing PA in chronic obstructive pulmonary disease (COPD). The primary aim of this review was to outline the typical components of PA interventions in patients with COPD. This review followed the PRISMA guidelines. A structured literature search of relevant electronic databases from inception to April 2014 was undertaken to outline typical components and examine outcome variables of PA interventions in patients with COPD. Over 12000 articles were screened and 20 relevant studies involving 31 PA interventions were included. Data extracted included patient demographics, components of the PA intervention, PA outcome measures and effects of the intervention. Quality was assessed using the PEDro and CASP scales. 
There were 13 randomised controlled trials and three randomised trials (PEDro score 5-7/10) and four cohort studies (CASP score 5/10). Interventions varied in duration, number of participant/researcher contacts and mode of delivery. The most common behaviour change techniques included information on when and where (n = 26/31) and how (n = 22/31) to perform PA behaviour and self-monitoring (n = 18/31). Significant between-group differences post-intervention in favour of the PA intervention, compared to a control group or to other PA interventions, in one or more PA assessments were found in 7/16 studies. All seven studies used walking as the main type of PA/exercise. In conclusion, although the components of PA interventions were variable, there is some evidence that PA interventions have the potential to increase PA in patients with COPD." }, { "pmid": "21596892", "title": "What prevents people with chronic obstructive pulmonary disease from attending pulmonary rehabilitation? A systematic review.", "abstract": "Pulmonary rehabilitation is an essential component of care for people with chronic obstructive pulmonary disease (COPD) and is supported by strong scientific evidence. Despite this, many people with COPD do not complete their program or choose not to attend at all. The aim of this study was to determine the factors associated with uptake and completion of pulmonary rehabilitation for people with COPD. Seven electronic databases were searched for qualitative or quantitative studies that documented factors associated with uptake and completion of pulmonary rehabilitation in people with COPD. Two reviewers independently extracted data, which was synthesized to provide overall themes. Travel and transport were consistently identified as barriers to both uptake and completion. A lack of perceived benefit of pulmonary rehabilitation also influenced both uptake and completion. The only demographic features that consistently predicted non-completion were being a current smoker (pooled odds ratio 0.17, 95% confidence interval 0.10 to 0.32) and depression. The limited data available regarding barriers to uptake indicated that disruption to usual routine, influence of the referring doctor and program timing were important. In conclusion poor access to transport and lack of perceived benefit affect uptake of pulmonary rehabilitation. Current smokers and patients who are depressed are at increased risk of non-completion. Enhancing attendance in pulmonary rehabilitation will require more attention to transportation, support for those at risk of non-completion and greater involvement of patients in informed decisions about their care." }, { "pmid": "21843834", "title": "Lack of perceived benefit and inadequate transport influence uptake and completion of pulmonary rehabilitation in people with chronic obstructive pulmonary disease: a qualitative study.", "abstract": "QUESTION\nWhat prevents people with chronic obstructive pulmonary disease (COPD) from attending and completing pulmonary rehabilitation programs?\n\n\nDESIGN\nQualitative design using semi-structured interviews.\n\n\nPARTICIPANTS\n19 adults with COPD who had declined to participate and 18 adults with COPD who had not completed a pulmonary rehabilitation program at a metropolitan teaching hospital.\n\n\nRESULTS\nA lack of perceived benefit from pulmonary rehabilitation was a significant theme for those who chose not to participate in pulmonary rehabilitation. 
Participants expressed perceptions that exercise was not a worthwhile treatment, or that they were already doing enough exercise at home. Difficulty getting to the program related to poor mobility, lack of transport, and cost of travel was a significant theme, expressed both by those who chose not to participate and those who did not complete. Another major theme associated with both uptake and completion involved being unwell, with participants indicating that the burden of COPD and other comorbidities impacted on attendance. Minor themes involved competing demands on time, older age, fatigue, program timing, and lack of social support.\n\n\nCONCLUSION\nMany people with COPD who elect not to take up a referral to pulmonary rehabilitation perceive that they would not experience any health benefits from attendance. Difficulties with travel to the program and being unwell are barriers to both uptake and completion. Improving attendance at pulmonary rehabilitation requires consideration of how information regarding the proven benefits of pulmonary rehabilitation can be conveyed to eligible patients, along with flexible program models that facilitate access and accommodate co-morbid disease." }, { "pmid": "22884186", "title": "People with COPD perceive ongoing, structured and socially supportive exercise opportunities to be important for maintaining an active lifestyle following pulmonary rehabilitation: a qualitative study.", "abstract": "QUESTION\nWhat are the views and perceptions of people with chronic obstructive pulmonary disease (COPD) regarding maintaining an active lifestyle following a course of pulmonary rehabilitation?\n\n\nDESIGN\nQualitative study of two focus groups using a grounded theory approach.\n\n\nPARTICIPANTS\nSixteen people with COPD who had completed a course of pulmonary rehabilitation.\n\n\nRESULTS\nData from focus groups concurred and five main themes emerged: value of pulmonary rehabilitation, ongoing exercise, professional support, peer social support, and health status. Pulmonary rehabilitation was seen as facilitating greater participation in everyday activity by improving physical ability and confidence to manage breathlessness, and reducing fear about exertional activity. An exercise routine following rehabilitation was perceived as essential for maintaining activity, with participants voicing a need for ongoing, structured and supervised sessions to maintain new found abilities. The exercise facility presented a possible barrier to attendance due to its potential to provoke feelings of embarrassment or intimidation. Professional and peer support were identified as key elements; participants expressed a desire to exercise within a peer group combined with an opportunity for social interaction. Health status relating to COPD symptoms was also identified as negatively impacting on physical activity participation. Confidence or self-efficacy for physical activity emerged as a prominent factor within main themes.\n\n\nCONCLUSION\nThe opportunity for structured, ongoing exercise with peer and professional support, in a suitable venue, is perceived as important to people with COPD in facilitating a physically active lifestyle following pulmonary rehabilitation. This desire for such opportunities may be related to individuals' self-efficacy towards physical activity." 
}, { "pmid": "24947760", "title": "Maintenance of a physically active lifestyle after pulmonary rehabilitation in patients with COPD: a qualitative study toward motivational factors.", "abstract": "OBJECTIVES\nTo explore determinants of behavior change maintenance of a physically active lifestyle in patients with chronic obstructive pulmonary disease (COPD) 8-11 months after completion of a 4-month outpatient pulmonary rehabilitation program.\n\n\nDESIGN\nA qualitative descriptive study of semistructured interviews.\n\n\nSETTING\nPulmonary rehabilitation assessment center.\n\n\nPARTICIPANTS\nPatients with COPD.\n\n\nMEASUREMENTS\nSemistructured interviews until data saturation, coded by 2 independent researchers. Patients were classified as responder (maintenance or improvement) or nonresponder (relapse or decrease), based on 3 quantitative variables reflecting exercise capacity (Constant Work Rate Test), health-related quality of life (Short-Form health survey [SF-36]), and self-management abilities (Self-Management Ability Scale [SMAS-30/Version 2]).\n\n\nRESULTS\nMean (SD) forced expiratory volume in the first second (FEV1) among interviewees was 52.5% (14.4%) predicted and the mean age was 63.5 years (range: 45-78). The group consisted of 15 responders and 7 nonresponders. Physical limitations reduced competence to engage in an active lifestyle and responders appeared to experience higher levels of perceived competence. Social support was found important and the experienced understanding from fellow patients made exercising together enjoyable. Particularly, responders expressed autonomous motivation and said they exercised because of the benefits they gain from it. Unexpectedly, only responders also experienced controlled motivation.\n\n\nCONCLUSION\nPerceived competence and autonomous motivation are important determinants for maintenance of an active lifestyle in patients with COPD. In contrast to common theoretical assumptions, a certain threshold level of controlled motivation may remain important in maintaining a physically active lifestyle after a pulmonary rehabilitation program." }, { "pmid": "23079176", "title": "A social media-based physical activity intervention: a randomized controlled trial.", "abstract": "BACKGROUND\nOnline social networks, such as Facebook™, have extensive reach, and they use technology that could enhance social support, an established determinant of physical activity. This combination of reach and functionality makes online social networks a promising intervention platform for increasing physical activity.\n\n\nPURPOSE\nTo test the efficacy of a physical activity intervention that combined education, physical activity monitoring, and online social networking to increase social support for physical activity compared to an education-only control.\n\n\nDESIGN\nRCT. Students (n=134) were randomized to two groups: education-only controls receiving access to a physical activity-focused website (n=67) and intervention participants receiving access to the same website with physical activity self-monitoring and enrollment in a Facebook group (n=67). 
Recruitment and data collection occurred in 2010 and 2011; data analyses were performed in 2011.\n\n\nSETTING/PARTICIPANTS\nFemale undergraduate students at a large southeastern public university.\n\n\nINTERVENTION\nIntervention participants were encouraged through e-mails, website instructions, and moderator communications to solicit and provide social support related to increasing physical activity through a physical activity-themed Facebook group. Participants received access to a dedicated website with educational materials and a physical activity self-monitoring tool.\n\n\nMAIN OUTCOME MEASURES\nThe primary outcome was perceived social support for physical activity; secondary outcomes included self-reported physical activity.\n\n\nRESULTS\nParticipants experienced increases in social support and physical activity over time but there were no differences in perceived social support or physical activity between groups over time. Facebook participants posted 259 times to the group. Two thirds (66%) of intervention participants completing a post-study survey indicated that they would recommend the program to friends.\n\n\nCONCLUSIONS\nUse of an online social networking group plus self-monitoring did not produce greater perceptions of social support or physical activity as compared to education-only controls. Given their promising features and potential reach, efforts to further understand how online social networks can be used in health promotion should be pursued.\n\n\nTRIAL REGISTRATION\nThis study is registered at clinicaltrials.govNCT01421758." }, { "pmid": "28392983", "title": "Effects of online group exercises for older adults on physical, psychological and social wellbeing: a randomized pilot trial.", "abstract": "BACKGROUND\nIntervention programs to promote physical activity in older adults, either in group or home settings, have shown equivalent health outcomes but different results when considering adherence. Group-based interventions seem to achieve higher participation in the long-term. However, there are many factors that can make of group exercises a challenging setting for older adults. A major one, due to the heterogeneity of this particular population, is the difference in the level of skills. In this paper we report on the physical, psychological and social wellbeing outcomes of a technology-based intervention that enable online group exercises in older adults with different levels of skills.\n\n\nMETHODS\nA total of 37 older adults between 65 and 87 years old followed a personalized exercise program based on the OTAGO program for fall prevention, for a period of eight weeks. Participants could join online group exercises using a tablet-based application. Participants were assigned either to the Control group, representing the traditional individual home-based training program, or the Social group, representing the online group exercising. Pre- and post- measurements were taken to analyze the physical, psychological and social wellbeing outcomes.\n\n\nRESULTS\nAfter the eight-weeks training program there were improvements in both the Social and Control groups in terms of physical outcomes, given the high level of adherence of both groups. Considering the baseline measures, however, the results suggest that while in the Control group fitter individuals tended to adhere more to the training, this was not the case for the Social group, where the initial level had no effect on adherence. 
For psychological outcomes there were improvements on both groups, regardless of the application used. There was no significant difference between groups in social wellbeing outcomes, both groups seeing a decrease in loneliness despite the presence of social features in the Social group. However, online social interactions have shown to be correlated to the decrease in loneliness in the Social group.\n\n\nCONCLUSION\nThe results indicate that technology-supported online group-exercising which conceals individual differences in physical skills is effective in motivating and enabling individuals who are less fit to train as much as fitter individuals. This not only indicates the feasibility of training together despite differences in physical skills but also suggests that online exercise might reduce the effect of skills on adherence in a social context. However, results from this pilot are limited to a small sample size and therefore are not conclusive. Longer term interventions with more participants are instead recommended to assess impacts on wellbeing and behavior change." }, { "pmid": "18487712", "title": "An easy to use and affordable home-based personal eHealth system for chronic disease management based on free open source software.", "abstract": "This paper describes an easy to use home-based eHealth system for chronic disease management. We present the design and implementation of a prototype for home based education, exercises, treatment and following-up, with the TV and a remote control as user interface. We also briefly describe field trials of the system for patients with COPD and diabetes, and their experience with the technology." }, { "pmid": "26651831", "title": "Comprehensive pulmonary rehabilitation in home-based online groups: a mixed method pilot study in COPD.", "abstract": "BACKGROUND\nComprehensive multidisciplinary pulmonary rehabilitation is vital in the management of chronic obstructive pulmonary disease (COPD) and is considered for any stage of the disease. Rehabilitation programmes are often centre-based and organised in groups. However, the distance from the patient's home to the centre and lack of transportation may hinder participation. Rehabilitation at home can improve access to care for patients regardless of disease severity. We had previously studied the technology usability and acceptability of a comprehensive home rehabilitation programme designed for patients with very severe COPD receiving long-term oxygen therapy. The acceptability of such comprehensive home programmes for those with less severe COPD, who may be less homebound, is not known. The aims of this feasibility study were to assess patient acceptability of the delivery mode and components of a comprehensive pulmonary rehabilitation programme for any stage of COPD, as well as the technology usability, patient outcomes and economic aspects.\n\n\nMETHODS\nTen participants with COPD in the Global Initiative for Chronic Obstructive Lung Disease (GOLD) grade I-IV were enrolled in a 9-week home programme and divided into two rehabilitation groups, with five patients in each group. The programme included exercise training and self-management education in online groups of patients, and individual online consultations. The patients also kept a digital health diary. To assess the acceptability of the programme, the patients were interviewed after the intervention using a semi-structured interview guide. In addition the number of sessions attended was observed. 
The usability of the technology was assessed using interviews and the System Usability Scale questionnaire. The St George's Respiratory Questionnaire (SGRQ) was used to measure health-related quality of life.\n\n\nRESULTS\nThe mode of delivery and the components of the programme were well accepted by the patients. The programme provided an environment for learning from both healthcare professionals and peers, for asking questions and discussing disease-related issues and for group exercising. The patients considered that it facilitated health-enhancing behaviours and social interactions with a social group formed among the participants. Even participants who were potentially less homebound appreciated the home group and social aspects of the programme. The participants found the technology easy to learn and use. The acceptability and usability results were consistent with those in our previous study of patients with very severe COPD. Only the mean change in the SGRQ total score of -6.53 (CI 95 % -0.38 to -12.68, p = 0.04) indicates a probable clinically significant effect. Economic calculations indicated that the cost of the programme was feasible.\n\n\nCONCLUSIONS\nThe results of this study indicate that comprehensive pulmonary rehabilitation delivered in home-based online groups may be feasible in COPD. The mode of delivery and components of the programme appeared to be acceptable across patients with different disease severity. The results in terms of patient outcomes are inconclusive, and further assessment is needed." }, { "pmid": "27992099", "title": "Home-based telerehabilitation via real-time videoconferencing improves endurance exercise capacity in patients with COPD: The randomized controlled TeleR Study.", "abstract": "BACKGROUND AND OBJECTIVE\nTelerehabilitation has the potential to increase access to pulmonary rehabilitation (PR) for patients with COPD who have difficulty accessing centre-based PR due to poor mobility, lack of transport and cost of travel. We aimed to determine the effect of supervised, home-based, real-time videoconferencing telerehabilitation on exercise capacity, self-efficacy, health-related quality of life (HRQoL) and physical activity in patients with COPD compared with usual care without exercise training.\n\n\nMETHODS\nPatients with COPD were randomized to either a supervised home-based telerehabilitation group (TG) that received exercise training three times a week for 8 weeks or a control group (CG) that received usual care without exercise training. Outcomes were measured at baseline and following the intervention.\n\n\nRESULTS\nThirty-six out of 37 participants (mean ± SD age = 74 ± 8 years, forced expiratory volume in 1 s (FEV1 ) = 64 ± 21% predicted) completed the study. Compared with the CG, the TG showed a statistically significant increase in endurance shuttle walk test time (mean difference = 340 s (95% CI: 153-526, P < 0.001)), an increase in self-efficacy (mean difference = 8 points (95% CI: 2-14, P < 0.007)), a trend towards a statistically significant increase in the Chronic Respiratory Disease Questionnaire total score (mean difference = 8 points (95% CI: -1 to 16, P = 0.07)) and no difference in physical activity (mean difference = 475 steps per day (95% CI: -200 to 1151, P = 0.16)).\n\n\nCONCLUSION\nThis study showed that telerehabilitation improved endurance exercise capacity and self-efficacy in patients with COPD when compared with usual care." 
}, { "pmid": "27502583", "title": "Long-Term Effects of an Internet-Mediated Pedometer-Based Walking Program for Chronic Obstructive Pulmonary Disease: Randomized Controlled Trial.", "abstract": "BACKGROUND\nRegular physical activity (PA) is recommended for persons with chronic obstructive pulmonary disease (COPD). Interventions that promote PA and sustain long-term adherence to PA are needed.\n\n\nOBJECTIVE\nWe examined the effects of an Internet-mediated, pedometer-based walking intervention, called Taking Healthy Steps, at 12 months.\n\n\nMETHODS\nVeterans with COPD (N=239) were randomized in a 2:1 ratio to the intervention or wait-list control. During the first 4 months, participants in the intervention group were instructed to wear the pedometer every day, upload daily step counts at least once a week, and were provided access to a website with four key components: individualized goal setting, iterative feedback, educational and motivational content, and an online community forum. The subsequent 8-month maintenance phase was the same except that participants no longer received new educational content. Participants randomized to the wait-list control group were instructed to wear the pedometer, but they did not receive step-count goals or instructions to increase PA. The primary outcome was health-related quality of life (HRQL) assessed by the St George's Respiratory Questionnaire Total Score (SGRQ-TS); the secondary outcome was daily step count. Linear mixed-effect models assessed the effect of intervention over time. One participant was excluded from the analysis because he was an outlier. Within the intervention group, we assessed pedometer adherence and website engagement by examining percent of days with valid step-count data, number of log-ins to the website each month, use of the online community forum, and responses to a structured survey.\n\n\nRESULTS\nParticipants were 93.7% male (223/238) with a mean age of 67 (SD 9) years. At 12 months, there were no significant between-group differences in SGRQ-TS or daily step count. Between-group difference in daily step count was maximal and statistically significant at month 4 (P<.001), but approached zero in months 8-12. Within the intervention group, mean 76.7% (SD 29.5) of 366 days had valid step-count data, which decreased over the months of study (P<.001). Mean number of log-ins to the website each month also significantly decreased over the months of study (P<.001). The online community forum was used at least once during the study by 83.8% (129/154) of participants. Responses to questions assessing participants' goal commitment and intervention engagement were not significantly different at 12 months compared to 4 months.\n\n\nCONCLUSIONS\nAn Internet-mediated, pedometer-based PA intervention, although efficacious at 4 months, does not maintain improvements in HRQL and daily step counts at 12 months. Waning pedometer adherence and website engagement by the intervention group were observed. Future efforts should focus on improving features of PA interventions to promote long-term behavior change and sustain engagement in PA.\n\n\nCLINICALTRIAL\nClinicaltrials.gov NCT01102777; https://clinicaltrials.gov/ct2/show/NCT01102777 (Archived by WebCite at http://www.webcitation.org/6iyNP9KUC)." 
}, { "pmid": "25419125", "title": "Time to adapt exercise training regimens in pulmonary rehabilitation--a review of the literature.", "abstract": "Exercise intolerance, exertional dyspnea, reduced health-related quality of life, and acute exacerbations are features characteristic of chronic obstructive pulmonary disease (COPD). Patients with a primary diagnosis of COPD often report comorbidities and other secondary manifestations, which diversifies the clinical presentation. Pulmonary rehabilitation that includes whole body exercise training is a critical part of management, and core programs involve endurance and resistance training for the upper and lower limbs. Improvement in maximal and submaximal exercise capacity, dyspnea, fatigue, health-related quality of life, and psychological symptoms are outcomes associated with exercise training in pulmonary rehabilitation, irrespective of the clinical state in which it is commenced. There may be benefits for the health care system as well as the individual patient, with fewer exacerbations and subsequent hospitalization reported with exercise training. The varying clinical profile of COPD may direct the need for modification to traditional training strategies for some patients. Interval training, one-legged cycling (partitioning) and non-linear periodized training appear to be equally or more effective than continuous training. Inspiratory muscle training may have a role as an adjunct to whole body training in selected patients. The benefits of balance training are also emerging. Strategies to ensure that health enhancing behaviors are adopted and maintained are essential. These may include training for an extended duration, alternative environments to undertake the initial program, maintenance programs following initial exercise training, program repetition, and incorporation of approaches to address behavioral change. This may be complemented by methods designed to maximize uptake and completion of a pulmonary rehabilitation program." }, { "pmid": "25142484", "title": "Ground-based walking training improves quality of life and exercise capacity in COPD.", "abstract": "This study was designed to determine the effect of ground-based walking training on health-related quality of life and exercise capacity in people with chronic obstructive pulmonary disease (COPD). People with COPD were randomised to either a walking group that received supervised, ground-based walking training two to three times a week for 8-10 weeks, or a control group that received usual medical care and did not participate in exercise training. 130 out of 143 participants (mean±sd age 69±8 years, forced expiratory volume in 1 s 43±15% predicted) completed the study. Compared to the control group, the walking group demonstrated greater improvements in the St George's Respiratory Questionnaire total score (mean difference -6 points (95% CI -10- -2), p<0.003), Chronic Respiratory Disease Questionnaire total score (mean difference 7 points (95% CI 2-11), p<0.01) and endurance shuttle walk test time (mean difference 208 s (95% CI 104-313), p<0.001). This study shows that ground-based walking training is an effective training modality that improves quality of life and endurance exercise capacity in people with COPD." }, { "pmid": "20727209", "title": "Nordic walking improves daily physical activities in COPD: a randomised controlled trial.", "abstract": "BACKGROUND\nIn patients with COPD progressive dyspnoea leads to a sedentary lifestyle. 
To date, no studies exist investigating the effects of Nordic Walking in patients with COPD. Therefore, the aim was to determine the feasibility of Nordic Walking in COPD patients at different disease stages. Furthermore we aimed to determine the short- and long-term effects of Nordic Walking on COPD patients' daily physical activity pattern as well as on patients exercise capacity.\n\n\nMETHODS\nSixty COPD patients were randomised to either Nordic Walking or to a control group. Patients of the Nordic Walking group (n = 30; age: 62 +/- 9 years; FEV1: 48 +/- 19% predicted) underwent a three-month outdoor Nordic Walking exercise program consisting of one hour walking at 75% of their initial maximum heart rate three times per week, whereas controls had no exercise intervention. Primary endpoint: daily physical activities (measured by a validated tri-axial accelerometer); secondary endpoint: functional exercise capacity (measured by the six-minute walking distance; 6MWD). Assessment time points in both groups: baseline, after three, six and nine months.\n\n\nRESULTS\nAfter three month training period, in the Nordic Walking group time spent walking and standing as well as intensity of walking increased (Delta walking time: +14.9 +/- 1.9 min/day; Delta standing time: +129 +/- 26 min/day; Delta movement intensity: +0.40 +/- 0.14 m/s2) while time spent sitting decreased (Delta sitting time: -128 +/- 15 min/day) compared to baseline (all: p < 0.01) as well as compared to controls (all: p < 0.01). Furthermore, 6MWD significantly increased compared to baseline (Delta 6MWD: +79 +/- 28 meters) as well as compared to controls (both: p < 0.01). These significant improvements were sustained six and nine months after baseline. In contrast, controls showed unchanged daily physical activities and 6MWD compared to baseline for all time points.\n\n\nCONCLUSIONS\nNordic Walking is a feasible, simple and effective physical training modality in COPD. In addition, Nordic Walking has proven to positively impact the daily physical activity pattern of COPD patients under short- and long-term observation." }, { "pmid": "18508824", "title": "Efficacy of a cell phone-based exercise programme for COPD.", "abstract": "The application of a supervised endurance exercise training programme in a home setting offering convenience and prolonged effects is a challenge. In total, 48 patients were initially assessed by the incremental shuttle walk test (ISWT), spirometry and the Short Form-12 (SF-12) quality-of-life questionnaire, and then every 4 weeks for 3 months thereafter and again after 1 yr. During the first 3 months, 24 patients in the cell phone group were asked to perform daily endurance walking at 80% of their maximal capacity by following the tempo of music from a program installed on a cell phone. The level of endurance walking at home was readjusted monthly according to the result of ISWT. In the control group, 24 patients received the same protocol and were verbally asked to take daily walking exercise at home. Patients in the cell phone group significantly improved their ISWT distance and duration of endurance walking after 8 weeks. The improvements in ISWT distance, inspiratory capacity and SF-12 scoring at 12 weeks persisted until the end of the study, with less acute exacerbations and hospitalisations. 
In the present pilot study, the cell phone-based system provides an efficient, home endurance exercise training programme with good compliance and clinical outcomes in patients with moderate-to-severe chronic obstructive pulmonary disease." }, { "pmid": "24647863", "title": "The effects of elastic tubing-based resistance training compared with conventional resistance training in patients with moderate chronic obstructive pulmonary disease: a randomized clinical trial.", "abstract": "OBJECTIVE\nTo investigate the effects of elastic tubing training compared with conventional resistance training on the improvement of functional exercise capacity, muscle strength, fat-free mass, and systemic inflammation in patients with chronic obstructive pulmonary disease.\n\n\nDESIGN\nA prospective, randomized, eight-week clinical trial.\n\n\nSETTING\nThe study was conducted in a university-based, outpatient, physical therapy clinic.\n\n\nSUBJECTS\nA total of 49 patients with moderate chronic obstructive pulmonary disease.\n\n\nINTERVENTIONS\nParticipants were randomly assigned to perform elastic tubing training or conventional resistance training three times per week for eight weeks.\n\n\nMAIN MEASURES\nThe primary outcome measure was functional exercise capacity. The secondary outcome measures were peripheral muscle strength, health-related quality of life assessed by the Chronic Respiratory Disease Questionnaire (CRDQ), fat-free mass, and cytokine profile.\n\n\nRESULTS\nAfter eight weeks, the mean distance covered during six minutes increased by 73 meters (±69) in the elastic tubing group and by 42 meters (±59) in the conventional group (p < 0.05). The muscle strength and quality of life improved in both groups (P < 0.05), with no significant differences between the groups. There was a trend toward an improved fat-free mass in both groups (P = 0.05). After the first and last sessions, there was an increase in interleukin 1β (IL-1β) and interleukin 10 (IL-10) in both groups, while tumour necrosis factor alpha (TNF-α) was stimulated only in the conventional training group.\n\n\nCONCLUSION\nElastic tubing training had a greater effect on functional exercise capacity than conventional resistance training. Both interventions were equally effective in improving muscle strength and quality of life." }, { "pmid": "23512568", "title": "The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behavior change interventions.", "abstract": "BACKGROUND\nCONSORT guidelines call for precise reporting of behavior change interventions: we need rigorous methods of characterizing active content of interventions with precision and specificity.\n\n\nOBJECTIVES\nThe objective of this study is to develop an extensive, consensually agreed hierarchically structured taxonomy of techniques [behavior change techniques (BCTs)] used in behavior change interventions.\n\n\nMETHODS\nIn a Delphi-type exercise, 14 experts rated labels and definitions of 124 BCTs from six published classification systems. Another 18 experts grouped BCTs according to similarity of active ingredients in an open-sort task. Inter-rater agreement amongst six researchers coding 85 intervention descriptions by BCTs was assessed.\n\n\nRESULTS\nThis resulted in 93 BCTs clustered into 16 groups. 
Of the 26 BCTs occurring at least five times, 23 had adjusted kappas of 0.60 or above.\n\n\nCONCLUSIONS\n\"BCT taxonomy v1,\" an extensive taxonomy of 93 consensually agreed, distinct BCTs, offers a step change as a method for specifying interventions, but we anticipate further development and evaluation based on international, interdisciplinary consensus." }, { "pmid": "18203128", "title": "Is there a bias against telephone interviews in qualitative research?", "abstract": "Telephone interviews are largely neglected in the qualitative research literature and, when discussed, they are often depicted as a less attractive alternative to face-to-face interviewing. The absence of visual cues via telephone is thought to result in loss of contextual and nonverbal data and to compromise rapport, probing, and interpretation of responses. Yet, telephones may allow respondents to feel relaxed and able to disclose sensitive information, and evidence is lacking that they produce lower quality data. This apparent bias against telephone interviews contrasts with a growing interest in electronic qualitative interviews. Research is needed comparing these modalities, and examining their impact on data quality and their use for studying varying topics and populations. Such studies could contribute evidence-based guidelines for optimizing interview data." }, { "pmid": "19036552", "title": "Enhancement of daily physical activity increases physical fitness of outclinic COPD patients: results of an exercise counseling program.", "abstract": "OBJECTIVE\nTo investigate whether a 12-week pedometer-based exercise counseling strategy is feasible and effectively enhances daily physical activity in outclinic Chronic Obstructive Pulmonary Disease (COPD) patients who do not participate in a rehabilitation program in a controlled way.\n\n\nMETHODS\n35 outclinic COPD patients (21 males, mean age 62 years, GOLD I-III, mean FEV(1)% predicted 64.7) were randomized for a 12-week individual pedometer-based exercise counseling program promoting daily physical activities or usual care. Daily physical activity (DigiWalker SW-200), physical fitness, health-related quality of life, self-efficacy, fatigue, depression and motivation to be physically active were assessed before and after the intervention.\n\n\nRESULTS\nAfter the intervention, COPD patients in the exercise counseling group showed a significant increase in their mean number of steps/day (from 7087 to 7872), whereas the usual care group showed a decrease (from 7539 to 6172). Significant differences favoring the exercise counseling group were demonstrated in arm strength, leg strength, health-related quality of life and intrinsic motivation to be physically active.\n\n\nCONCLUSION\nOur study shows that a 12-week pedometer-based exercise counseling strategy is feasible and effectively enhances daily physical activity, physical fitness, health-related quality of life and intrinsic motivation in outclinic COPD patients who do not participate in a rehabilitation program.\n\n\nPRACTICE IMPLICATIONS\nThe feasibility of our exercise counseling strategy is good and patients were motivated to participate." }, { "pmid": "22193935", "title": "A simple method for home exercise training in patients with chronic obstructive pulmonary disease: one-year study.", "abstract": "PURPOSE\nThe success of long-term exercise training (ExT) programs resides in the integration between exercise prescription and patient compliance with home training. 
One of the crucial issues for the patients is the understanding of appropriate exercise intensity. We compared 2 methods of home ExT, based on walking.\n\n\nMETHODS\nForty-seven patients with chronic obstructive pulmonary disease were recruited and underwent respiratory function, exercise capacity evaluation with a 6-minute walk test, and treadmill tests. Physical activity was monitored by a multisensor Armband (SenseWear, Body Media, Pittsburgh, PA). Patients were randomly assigned to 2 different home training methods and assessed again after 6 and 12 months; group A₁: speed walking paced by a metronome, and group A₂: walking a known distance in a fixed time.\n\n\nRESULTS\nThirty-six patients completed the study. All subjects showed a significant improvement in the 6-minute walk test after 1 year but the improvement was higher in A₁ than in A₂ (P < .05). Physical activity levels were significantly higher at T12 versus baseline only in group A₁ (P < .05).\n\n\nCONCLUSIONS\nThe use of a metronome to maintain the rate of walking during home ExT seems to be beneficial, allowing patients to achieve and sustain the optimal exercise intensity, and resulting in greater improvement compared to simply using a fixed time interval exercise." }, { "pmid": "25246781", "title": "A telehealth program for self-management of COPD exacerbations and promotion of an active lifestyle: a pilot randomized controlled trial.", "abstract": "The objective of this pilot study was to investigate the use of and satisfaction with a chronic obstructive pulmonary disease (COPD) telehealth program applied in both primary and secondary care. The program consisted of four modules: 1) activity coach for ambulant activity monitoring and real-time coaching of daily activity behavior, 2) web-based exercise program for home exercising, 3) self-management of COPD exacerbations via a triage diary on the web portal, including self-treatment of exacerbations, and 4) teleconsultation. Twenty-nine COPD patients were randomly assigned to either the intervention group (telehealth program for 9 months) or the control group (usual care). Page hits on the web portal showed the use of the program, and the Client Satisfaction Questionnaire showed satisfaction with received care. The telehealth program with decision support showed good satisfaction (mean 26.4, maximum score 32). The program was accessed on 86% of the treatment days, especially the diary. Patient adherence with the exercise scheme was low (21%). Health care providers seem to play an important role in patients' adherence to telehealth in usual care. Future research should focus on full-scale implementation in daily care and investigating technological advances, like gaming, to increase adherence." }, { "pmid": "25886014", "title": "Early telemedicine training and counselling after hospitalization in patients with severe chronic obstructive pulmonary disease: a feasibility study.", "abstract": "BACKGROUND\nAn essential element in the treatment of patients with chronic obstructive pulmonary disease (COPD) is rehabilitation, of which supervised training is an important part. However, not all individuals with severe COPD can participate in the rehabilitation provided by hospitals and municipal training centres due to distance to the training venues and transportation difficulties. 
The aim of the study was to assess the feasibility of an individualized home-based training and counselling programme via video conference to patients with severe COPD after hospitalization including assessment of safety, clinical outcomes, patients' perceptions, organisational aspects and economic aspects.\n\n\nMETHODS\nThe design was a pre- and post-test intervention study. Fifty patients with severe COPD were included. The telemedicine training and counselling included three weekly supervised exercise sessions by a physiotherapist and up to two supervised counselling and training sessions in energy conservation techniques by an occupational therapist. The telemedicine videoconferencing equipment was a computer containing a screen, a microphone, an on/off switch and a volume control.\n\n\nRESULTS\nThirty seven (74%) participants completed the programme, with improvements in health status assessed by the Clinical COPD Questionnaire and physical performance assessed by a sit-to-stand test and a timed-up-and-go test. There were no cases of patient fall or emergency contact with a general practitioner during the telemedicine training sessions. The study participants believed the telemedicine training and counselling was essential for getting started with being physically active in a secure manner. The business case showed that under the current financing system, the reimbursement to the hospital was slightly higher than the hospital expenditures. Thus, the business case for the hospital was positive. The organizational analysis indicated that the perceptions of the staff were that the telemedicine service had improved the continuity of the rehabilitation programme for the patients and enabled the patients' everyday lives to be included in the treatment.\n\n\nCONCLUSIONS\nThis study showed that home-based supervised training and counselling via video conference is safe and feasible and that telemedicine can help to ensure more equitable access to supervised training in patients with severe COPD.\n\n\nTRIAL REGISTRATION\nClinical Trials NCT02085187 (Date of registration 10.03.2014)." }, { "pmid": "26911326", "title": "Adherence and factors affecting satisfaction in long-term telerehabilitation for patients with chronic obstructive pulmonary disease: a mixed methods study.", "abstract": "BACKGROUND\nTelemedicine may increase accessibility to pulmonary rehabilitation in chronic obstructive pulmonary disease (COPD), thus enhancing long-term exercise maintenance. We aimed to explore COPD patients' adherence and experiences in long-term telerehabilitation to understand factors affecting satisfaction and potential for service improvements.\n\n\nMETHODS\nA two-year pilot study with 10 patients with COPD was conducted. The intervention included treadmill exercise training at home and a webpage for telemonitoring and self-management combined with weekly videoconferencing sessions with a physiotherapist. We conducted four separate series of data collection. Adherence was measured in terms of frequency of registrations on the webpage. Factors affecting satisfaction and adherence, together with potential for service improvements, were explored through two semi-structured focus groups and an individual open-ended questionnaire. Qualitative data were analysed by systematic text condensation. User friendliness was measured by the means of a usability questionnaire.\n\n\nRESULTS\nOn average, participants registered 3.0 symptom reports/week in a web-based diary and 1.7 training sessions/week. 
Adherence rate decreased during the second year. Four major themes regarding factors affecting satisfaction, adherence and potential improvements of the intervention emerged: (i) experienced health benefits; (ii) increased self-efficacy and independence; and (iii) emotional safety due to regular meetings and access to special competence; (iv) maintenance of motivation. Participants were generally highly satisfied with the technical components of the telerehabilitation intervention.\n\n\nCONCLUSIONS\nLong-term adherence to telerehabilitation in COPD was maintained for a two-year period. Satisfaction was supported by experienced health benefits, self-efficacy, and emotional safety. Maintenance of motivation was a challenge and might have affected long-term adherence. Four key factors of potential improvements in long-term telerehabilitation were identified: (i) adherence to different components of the telerehabilitation intervention is dependent on the level of focus provided by the health personnel involved; (ii) the potential for regularity that lies within the technology should be exploited to avoid relapses after vacation; (iii) motivation might be increased by tailoring individual consultations to support experiences of good health and meet individual goals and motivational strategies; (iv) interactive functionalities or gaming tools might provide peer-support, peer-modelling and enhance motivation." }, { "pmid": "28137918", "title": "Physical activity is increased by a 12-week semiautomated telecoaching programme in patients with COPD: a multicentre randomised controlled trial.", "abstract": "RATIONALE\nReduced physical activity (PA) in patients with COPD is associated with a poor prognosis. Increasing PA is a key therapeutic target, but thus far few strategies have been found effective in this patient group.\n\n\nOBJECTIVES\nTo investigate the effectiveness of a 12-week semiautomated telecoaching intervention on PA in patients with COPD in a multicentre European randomised controlled trial.\n\n\nMETHODS\n343 patients from six centres, encompassing a wide spectrum of disease severity, were randomly allocated to either a usual care group (UCG) or a telecoaching intervention group (IG) between June and December 2014. This 12-week intervention included an exercise booklet and a step counter providing feedback both directly and via a dedicated smartphone application. The latter provided an individualised daily activity goal (steps) revised weekly and text messages as well as allowing occasional telephone contacts with investigators. PA was measured using accelerometry during 1 week preceding randomisation and during week 12. Secondary outcomes included exercise capacity and health status. Analyses were based on modified intention to treat.\n\n\nMAIN RESULTS\nBoth groups were comparable at baseline in terms of factors influencing PA. At 12 weeks, the intervention yielded a between-group difference of mean, 95% CI (lower limit - upper limit; ll-ul) +1469, 95% CI (971 to 1965) steps/day and +10.4, 95% CI (6.1 to 14.7) min/day moderate PA; favouring the IG (all p≤0.001). The change in 6-min walk distance was significantly different (13.4, 95% CI (3.40 to 23.5) m, p<0.01), favouring the IG. In IG patients, an improvement could be observed in the functional state domain of the clinical COPD questionnaire (p=0.03) compared with UCG. 
Other health status outcomes did not differ.\n\n\nCONCLUSIONS\nThe amount and intensity of PA can be significantly increased in patients with COPD using a 12-week semiautomated telecoaching intervention including a step counter and an application installed on a smartphone.\n\n\nTRIAL REGISTRATION NUMBER\nNCT02158065." }, { "pmid": "23742208", "title": "A randomised controlled trial testing a web-based, computer-tailored self-management intervention for people with or at risk for chronic obstructive pulmonary disease: a study protocol.", "abstract": "BACKGROUND\nChronic Obstructive Pulmonary Disease (COPD) is a major cause of morbidity and mortality. Effective self-management support interventions are needed to improve the health and functional status of people with COPD or at risk for COPD. Computer-tailored technology could be an effective way to provide this support.\n\n\nMETHODS/DESIGN\nThis paper presents the protocol of a randomised controlled trial testing the effectiveness of a web-based, computer-tailored self-management intervention to change health behaviours of people with or at risk for COPD. An intervention group will be compared to a usual care control group, in which the intervention group will receive a web-based, computer-tailored self-management intervention. Participants will be recruited from an online panel and through general practices. Outcomes will be measured at baseline and at 6 months. The primary outcomes will be smoking behaviour, measuring the 7-day point prevalence abstinence and physical activity, measured in minutes. Secondary outcomes will include dyspnoea score, quality of life, stages of change, intention to change behaviour and alternative smoking behaviour measures, including current smoking behaviour, 24-hour point prevalence abstinence, prolonged abstinence, continued abstinence and number of quit attempts.\n\n\nDISCUSSION\nTo the best of our knowledge, this will be the first randomised controlled trial to test the effectiveness of a web-based, computer-tailored self-management intervention for people with or at risk for COPD. The results will be important to explore the possible benefits of computer-tailored interventions for the self-management of people with or at risk for COPD and potentially other chronic health conditions.\n\n\nDUTCH TRIAL REGISTER\nNTR3421." }, { "pmid": "23235321", "title": "The use of a home exercise program based on a computer system in patients with chronic obstructive pulmonary disease.", "abstract": "PURPOSE\nTo test the effectiveness of a home exercise program based on a user-friendly, computer system, the Nintendo Wii Fit.\n\n\nMETHODS\nIn this longitudinal study, 25 clinically stable patients with chronic obstructive pulmonary disease began a 6-week nonintervention (baseline) period followed by 12 weeks of Wii exercise training at home. Patients were instructed to exercise 5 or more days per week. Exercise capacity, health status, and dyspnea were evaluated after home exercise training.\n\n\nRESULTS\nEvaluable data were available in 20 patients after home exercise training; their force expiratory volume in 1 second was 45 ± 16%. Following 12 weeks of Wii exercise training, the Endurance Shuttle Walk Test increased by 131 ± 183 seconds over the baseline determination (P = .005). Significant improvements were also noted in arm-lift and sit-to-stand repetitions, the total score, and the emotion dimension of the Chronic Respiratory Questionnaire. 
Men had significantly greater increases in the Endurance Shuttle Walk Test than women, although their self-reported exercise durations were similar. There were no significant adverse outcomes.\n\n\nCONCLUSION\nThis study suggests that 12 weeks of regular, home exercise based on an interactive entertainment computer system can lead to positive short-term outcomes." }, { "pmid": "28716786", "title": "Online versus face-to-face pulmonary rehabilitation for patients with chronic obstructive pulmonary disease: randomised controlled trial.", "abstract": "OBJECTIVE\nTo obtain evidence whether the online pulmonary rehabilitation(PR) programme 'my-PR' is non-inferior to a conventional face-to-face PR in improving physical performance and symptom scores in patients with COPD.\n\n\nDESIGN\nA two-arm parallel single-blind, randomised controlled trial.\n\n\nSETTING\nThe online arm carried out pulmonary rehabilitation in their own homes and the face to face arm in a local rehabilitation facility.\n\n\nPARTICIPANTS\n90 patients with a diagnosis of chronic obstructive pulmonary disease (COPD), modified Medical Research Council score of 2 or greater referred for pulmonary rehabilitation (PR), randomised in a 2:1 ratio to online (n=64) or face-to-face PR (n=26). Participants unable to use an internet-enabled device at home were excluded.\n\n\nMAIN OUTCOME MEASURES\nCoprimary outcomes were 6 min walk distance test and the COPD assessment test (CAT) score at completion of the programme.\n\n\nINTERVENTIONS\nA 6-week PR programme organised either as group sessions in a local rehabilitation facility, or online PR via log in and access to 'myPR'.\n\n\nRESULTS\nThe adjusted mean difference for the 6 min walk test (6MWT) between groups for the intention-to-treat (ITT) population was 23.8 m with the lower 95% CI well above the non-inferiority threshold of -40.5 m at -4.5 m with an upper 95% CI of +52.2 m. This result was consistent in the per-protocol (PP) population with a mean adjusted difference of 15 m (-13.7 to 43.8). The CAT score difference in the ITT was -1.0 in favour of the online intervention with the upper 95% CI well below the non-inferiority threshold of 1.8 at 0.86 and the lower 95% CI of -2.9. The PP analysis was consistent with the ITT.\n\n\nCONCLUSION\nPR is an evidenced-based and guideline-mandated intervention for patients with COPD with functional limitation. A 6-week programme of online-supported PR was non-inferior to a conventional model delivered in face-to-face sessions in terms of effects on 6MWT distance, and symptom scores and was safe and well tolerated." }, { "pmid": "16707399", "title": "Quantifying physical activity in daily life with questionnaires and motion sensors in COPD.", "abstract": "Accurate assessment of the amount and intensity of physical activity in daily life is considered very important due to the close relationship between physical activity level, health, disability and mortality. For this reason, assessment of physical activity in daily life has gained interest in recent years, especially in sedentary populations, such as patients with chronic obstructive pulmonary disease (COPD). The present article aims to compare and discuss the two kinds of instruments more commonly used to quantify the amount of physical activity performed by COPD patients in daily life: subjective methods (questionnaires, diaries) and motion sensors (electronic or mechanical methods). 
Their characteristics are summarised and evidence of their validity, reliability and sensitivity is discussed, when available. Subjective methods have practical value mainly in providing the patients' view on their performance in activities of daily living and functional status. However, care must be taken when using subjective methods to accurately quantify the amount of daily physical activity performed. More accurate information is likely to be available with motion sensors rather than questionnaires. The selection of which motion sensor to use for quantification of physical activity in daily life should depend mainly on the purpose of its use." } ]
Research and Practice in Technology Enhanced Learning
30613262
PMC6302055
10.1186/s41039-018-0093-9
Maintaining reading experience continuity across e-book revisions
An e-book reader lets users create digital learning footprints in many forms, such as highlighting sentences or taking memos. Nowadays, it also allows instructors to update their e-books within the e-book reader. However, users often struggle to find the learning footprints they made once a new version of an e-book is released. Continuity of the reading experience across e-book revisions is therefore hard to maintain and has become a shortcoming of e-book systems. In this paper, in order to maintain users' reading experience continuity, we address the transfer of learning footprints such as markers, memos, and bookmarks across e-book revisions on an e-book reader in a coursework scenario. We first review related work on this problem and on page similarity comparison. We then compare three page similarity comparison methods that use similarity computing models to compute page pairwise similarity at the image level, the text level, and the combined image & text level. For each level, we analyze the performance of transferring learning footprints across e-book revisions as well as the optimal threshold for determining similar pages. We then report the analysis results showing the performance of the three methods and present an error analysis that specifies the error types occurring in the results. Based on these results and the error analysis, we propose page image & text similarity comparison as the optimal method for automatically transferring learning footprints across e-book revisions. Finally, we close with a discussion and conclusions.
Related works

Learning footprint transferring

When the underlying document changes, an annotation may need to adapt in response. By adapting the annotation, it retains its meaning and value (Sutherland et al. 2016). Several researchers have addressed the annotation repositioning problem by anchoring annotations at a finer granularity than a whole page in a modified version of a web-based document. Some articles used a line bounding box (Priest and Plimmer 2006; Chen and Plimmer 2007); one used bounding boxes based on HTML elements (Plimmer et al. 2010); and two used word-level bounding boxes (Bargeron and Moscovich 2003; Golovchinsky and Denoue 2002). They noted that annotations become orphans when an online document changes, since the annotations lose the link to their proper location within the document. These works aimed to find a proper location at which to anchor and reposition annotations in the modified new version of the document; in other words, they were also trying to transfer digital footprints from an old version of a web-based document to a new version. Among these articles on the digital annotation repositioning problem, none tried to reposition annotations by comparing page similarity. In this paper, we focus on learning footprint transferring by page similarity comparison in the context of e-books, so we are concerned primarily with how accurately these methods can transfer learning footprints between pages in different revisions of slide-based learning material, rather than with the exact location of learning footprints on a specific page. However, we do count an incorrect location of a learning footprint as a type of error when testing the performance of the methods.

Page similarity comparison

Similarity comparison has been widely used in many research domains, such as object classification and document clustering. The process of similarity comparison and ranking makes the similarity measure more robust, acting as a filter and eliminating the noise contained in the values of the quantitative properties (Dinu and Ionescu 2012).

Content similarity at both the book level and the page level has been compared, and the relationships between books have been clustered and classified (Spasojevic and Poncin 2011). In that work, text contents in a corpus are preprocessed with a word N-gram model and page pairwise similarity is computed with the Jaccard similarity measure. The first difference between our research and theirs is the research purpose: they compared book-to-book and page-to-page similarity for relationship classification, whereas our purpose is to compare page-to-page pairwise similarity across e-book revisions for learning footprint transferring. The second difference is the similarity measure: they proposed Locality-Sensitive Hashing (LSH), which approximates set similarity over a corpus, also known as Jaccard similarity. In this paper, we compare two similarity measures, cosine similarity and Jaccard similarity, as representations of page similarity, and we propose cosine similarity as the better measure since it can be computed faster than Jaccard similarity. In addition, page similarity has been compared for the detection of web phishing (Sanglerdsinlapachai and Rungsawang 2010). In that work, a term frequency matrix is built for pages and a cosine similarity model is used to represent the similarity between pages.
Their experiments focused on comparing machine learning techniques and thresholds built on page similarity comparison, with performance reported as F-measure. In our research, instead of comparing web-based pages, we compare slide-based pages for learning footprint transferring across e-book revisions.

As shown above, many researchers have worked on the learning footprint transferring problem and on page similarity comparison at the book level, page level, or web-page level, and have applied the resulting similarities to other research fields. Nevertheless, none of them focused on transferring learning footprints across e-book revisions based on page similarity comparison. In this paper, we compare three methods based on page similarity comparison and propose the optimal one as a solution to the learning footprint transferring problem in an e-book reader.

Research questions

In order to transfer learning footprints from an old version of an e-book to a new version, we compare page similarities across e-book revisions in an e-book reader. It is not easy to decide that one page in the new version is similar to a source page in the old version while the remaining pages are not; therefore, it is important to find an optimal threshold for determining similar pages across e-book revisions. Thus, in this paper, we pose two research questions:

1. How accurately can learning footprints be transferred across e-book revisions when comparing page similarity by image, text, and image & text?
2. What is the optimal method to automatically transfer learning footprints across e-book revisions?
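As a rough illustration of the text-level matching described here (a minimal sketch under our own assumptions, not the authors' implementation; the threshold value 0.7 and all function names are made up for this example), page texts can be turned into term-frequency vectors, compared with cosine similarity, and matched only when the best score exceeds the threshold:

```python
import math
import re
from collections import Counter

def term_vector(page_text: str) -> Counter:
    """Bag-of-words term-frequency vector for one slide page."""
    return Counter(re.findall(r"\w+", page_text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def match_pages(old_pages, new_pages, threshold=0.7):
    """For each old page, pick the most similar new page above the threshold.

    Returns {old_index: new_index or None}; footprints on unmatched
    old pages would be treated as orphans.
    """
    new_vecs = [term_vector(p) for p in new_pages]
    mapping = {}
    for i, old_text in enumerate(old_pages):
        old_vec = term_vector(old_text)
        scores = [cosine_similarity(old_vec, v) for v in new_vecs]
        best = max(range(len(scores)), key=scores.__getitem__) if scores else None
        mapping[i] = best if best is not None and scores[best] >= threshold else None
    return mapping
```

An image-level or combined image & text variant could be built along the same lines, for example by mixing this score with an image similarity measure such as SSIM.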
[ "15376593" ]
[ { "pmid": "15376593", "title": "Image quality assessment: from error visibility to structural similarity.", "abstract": "Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000." } ]
Research and Practice in Technology Enhanced Learning
30613221
PMC6302834
10.1186/s41039-015-0008-y
XML-based e-assessment system for Office skills in open learning environments
Learning and teaching systems have undergone fast transformations and are increasingly applied in emerging formal and informal education contexts. The shift to open learning environments, where the number of students is extremely high, is particularly remarkable. To allow a large number of learners to gain new knowledge and skills in an open education framework, e-assessment systems able to cover this strong demand and its challenges are indispensable. Since Office skills are among those most frequently needed in education and business settings, in this paper we address the design of a novel system for automated assessment of Office skills in an authentic context. The approach exploits the potential of the Extensible Markup Language (XML) format and related technologies by transforming the model of both the students' documents and answers into an XML format and extracting the required skills from the teacher's correct document as patterns. To assign a mark, we measure similarities between the patterns of the students' and the teacher's documents. We conducted an experimental study to validate our approach for Word processing skills assessment and developed a system that was evaluated in a real exam scenario. The results demonstrate the accuracy and suitability of this research direction.
Related work

The history of e-assessment goes back to the 1960s (Mohammad and Gutl 2008). This history, elaborated by many authors, including, as recent examples, Jordan (2013) as well as Lahtonen and Isomöttönen (2012), shows that many assessment systems have been developed for many knowledge domains, such as mathematics, programming languages, and free text. However, only a few attempts have led to adequate grading systems for automated assessment of IT skills, especially for Office skills (Kovacic and Green 2012), despite the significant need identified. In general, there are two strategies for automated assessment of Office skills (Zhu and Shen 2013; Tang and Chen 2008). The first strategy applies the trace analysis technique: it records the operating steps of the users, for example by using the macro recording function of Microsoft Office. Another example in this strategy is the construction of a simulation system, which is considered a large project and has as drawbacks the difficulty of updating the software, the need to account for environmental constraints, and poor adaptability. Dowsing (2000) proposed a system that compares the stream of actions performed by the candidate with the correct stream of actions extracted from the teacher's model event stream. The second and more natural strategy consists of directly analyzing the document produced by the student. This strategy has been implemented more frequently than the first one (Zhu and Shen 2013; Tang and Chen 2008). Evidently, this direction is less problematic for development, maintenance, and extension, since it is not constrained by the operating environment and generally reflects the students' proficiency in Office applications more realistically.

Following this line, we concentrate in this study only on related work that follows this latter strategy. In general, there are many practical implementations that mark a student's outcomes by analyzing the student's produced document. As illustrated in Fig. 1, we have classified these systems according to the techniques and technologies they apply: (1) Visual Basic for Applications (VBA), (2) the Component Object Model (COM), (3) the XML technology, and (4) Artificial Intelligence (AI) techniques and other techniques, such as using the "Combine and Compare" feature of Microsoft Office Word.

Fig. 1 Techniques and technologies applied in Office skills assessment systems

In the first category, assessment systems use the VBA technology, which is a version of the Visual Basic language included in Microsoft Office. VBA enables the user to design routines that run in the background in order to respond to events such as opening a form or clicking a command button (Herbadji 2012). In addition, the VBA technology makes it easy to communicate with Office programs. According to the literature review, most of the developed assessment systems for Office applications rely on the VBA technology (Koike et al. 2005; Tuparova and Tuparov 2010; De-cai 2010; Wang and Jing-zhu 2009; Ren et al. 2010). For example, Koike et al. (2005) developed marking systems for Microsoft Office Word and Microsoft Office Excel. The systems mark student files according to a set of grading criteria given by the instructors. The system for Word files checks page settings, paragraphs, indents, figures, tables, fonts, colors, texts, and so on.
The system can be applied to check whether students correctly understand how to use each feature of Microsoft Office Word. Similarly, for Excel files, the system can be applied to check whether students understand how to use each feature of Microsoft Office Excel. Both systems highlight errors, and students get feedback on their work through messages. Systems in this category are not flexible when it comes to modifying or adding new questions, and they require VBA programming abilities, as confirmed by Tuparova and Tuparov (2010), who advanced such systems by proposing a real-life performance-based assessment tool using VBA that allows for effective interaction with the teacher.

Concerning the second category in Fig. 1, assessment systems adopt the COM technology, a binary-interface standard for software components introduced by Microsoft (https://www.microsoft.com/com). It enables software components to communicate with each other and serves as a basis for many systems (Zhu and Shen 2013; Hunt et al. 2002; Tang and Chen 2008). For example, Zhu and Shen (2013) present a framework implementation of automated assessment for Office applications. The system stores the required IT skills in a database table and invokes a series of methods through the COM interface provided. These methods extract the attributes of the students' documents, match them with information from the database, and grade according to the scoring criteria. Zhu and Shen (2013) argue that using the COM technology entails the problem of completely analyzing the Office object library. Viewed this way, this technology shows limitations and does not support assessment of all skills.

Recently, Roshan et al. (2011) and Lahtonen and Isomöttönen (2012) presented work addressing a different format than the Office one. Lahtonen and Isomöttönen (2012) developed the Parsi Tool by exploiting the XML technology. Their main goal is to automatically assess the stylistic and technical correctness of Office documents as well as some basic IT skills, such as e-mail netiquette and e-mail list usage. The Parsi Tool takes as input the student's document in one of a set of specific formats (e.g., docx, pptx, xls), together with an XML configuration file that represents the requirements of the assignments. This file contains the required style information for Word processing and presentations and further checkable items, such as Bold and Page numbering skills. As output, the Parsi Tool returns a grade from 1 to 5. The teacher may intervene when the tool cannot check properly, by updating and testing the checking functions.

According to Fig. 1, some other systems apply AI techniques (Dowsing 1996; Dowsing 1998; Long et al. 2003). Dowsing and Long set a milestone for later work on the assessment of Word processing skills, being pioneers in this domain since 1996. They exploited AI techniques to assess the produced Word documents. The software developed to assess Word processing skills is part of a project to assess other IT skills by computer. It is based on the assessment of Rich Text Format (RTF) output from any standard Word processor. A comparison is performed between the examinees' outputs and the teacher's correct solution, followed by a categorization and an error report by type. Also, Long et al. (2003) describe a set of rule-based methods and knowledge-based automated assessment systems for IT skills.

Related to the last category "Others" in Fig. 1, Hill (2011) developed several tools for automated grading of Microsoft Office work, particularly an automated assessment system for Microsoft Office Excel (Microsoft Excel Automated Grader—MEAGER) and one for Microsoft Office Access (Microsoft Access Database Automated Grading System—MADBAGS) (Hill 2004). More recently, his focus has been on the Microsoft Office Word and Microsoft Office PowerPoint programs. These grading systems exploit the Microsoft Word "Compare and Combine" function, a feature of Microsoft Office Word that merges documents to identify differences between them. Thus, the correct version of the assignment given by the instructor is merged with the document produced by the student, and the differences are recorded in a Microsoft Office Access table. The Word Grader counts the obtained errors and embeds a grade report in the marked-up document. Even though Hill's developments are powerful, they are limited to Microsoft Office programs only, and, as the author stated, assessing new skills (e.g., manipulating text boxes) that are not compared by the Microsoft Word "Compare and Combine" function will be more difficult.

Although the reviewed systems provide an authentic assessment, most of them lack flexibility in allowing teachers to customize exams by modifying questions or adding new ones and defining their own grading criteria. After a thorough investigation of the state of the art in Office assessment systems, we argue that most of the examined systems implement the VBA technology, obviously due to its inherent capability to build on and communicate with Microsoft Office applications. However, it provides limited options for adapting the assessment process, such as modifying questions, which requires VBA programming skills from the teacher. A further drawback of the observed systems is the missing option for human intervention in ambiguous and difficult assessment cases. Against this background, we focus on the XML format and its related technologies in the following.
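As a rough illustration of the XML-based direction (a sketch under our own simplifying assumptions, not the authors' implementation; the file names, the notion of a "skill pattern", and the Jaccard-based mark are all made up for this example), a .docx file can be opened as a ZIP archive, its word/document.xml parsed, simple per-run formatting patterns extracted, and a student document scored against the teacher's patterns:

```python
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace used inside word/document.xml
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def extract_patterns(docx_path):
    """Very simplified 'skill patterns': (run text, bold?, italic?) triples.

    Simplification: the mere presence of <w:b>/<w:i> is treated as
    bold/italic (w:val overrides and other run properties are ignored).
    """
    with zipfile.ZipFile(docx_path) as z:
        root = ET.fromstring(z.read("word/document.xml"))
    patterns = set()
    for run in root.iter(W + "r"):
        text = "".join(t.text or "" for t in run.iter(W + "t")).strip()
        if not text:
            continue
        props = run.find(W + "rPr")
        bold = props is not None and props.find(W + "b") is not None
        italic = props is not None and props.find(W + "i") is not None
        patterns.add((text, bold, italic))
    return patterns

def score(student_docx, teacher_docx):
    """Jaccard similarity between student and teacher pattern sets (0..1)."""
    s, t = extract_patterns(student_docx), extract_patterns(teacher_docx)
    return len(s & t) / len(s | t) if (s | t) else 1.0

# Hypothetical usage:
# mark = round(score("student.docx", "teacher.docx") * 20)
```

A real system would of course need much richer patterns (page settings, styles, tables, numbering) and a teacher-editable grading scheme; the point here is only that the OOXML container exposes formatting as plain XML that can be matched against required skills.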
[]
[]
Research and Practice in Technology Enhanced Learning
30613224
PMC6302837
10.1186/s41039-015-0014-0
Effect of active learning using program visualization in technology-constrained college classrooms
Multiple studies report that Computer Science (CS) instructors face problems in integrating visualizations into their teaching. This problem is compounded for instructors in the technology-constrained classrooms that are common in developing countries. In these classrooms, students are not able to interact with a visualization directly; instead, their interaction is mediated by the instructor, who alone may have access to the visualization. In the current study, we contrasted learning outcomes from integrating program visualization at two different engagement levels in an instructor-mediated classroom setting. The two levels were "Responding" (prediction activity with visualization) and "Viewing" (watching visualization with instructor commentary), as per Naps' taxonomy. The study was conducted for a programming topic of medium complexity. We found that the strategy of prediction with visualization ("Responding") led to statistically significantly higher active behavioral engagement and a higher perception of learning among students than the strategy of watching the visualization with instructor commentary ("Viewing"). We also found statistically significantly higher cognitive achievement in terms of the rate of problem solving for the "Responding" group, provided the students had prior training in active learning. This study can serve as a reference guide for designing effective integration of visualizations in instructor-mediated classrooms.
Theoretical background and related work

In this section, we focus on existing work that tests learning outcomes from program and algorithm visualizations in response to differing engagement levels with visualization, operationalized by different instructional strategies with visualization. We describe key theories on the effect of engagement level with visualization on learning outcomes, followed by a literature survey of positive and negative empirical studies on CS topics. We also report studies on students' behavioral engagement while viewing visualizations and conclude with studies reporting moderating variables that affect learning from visualization.

Theoretical background

From their meta-analysis of learning effectiveness studies for visualization in CS, Hundhausen et al. (2002) postulated that how students interact with a visualization has a significant impact on their learning from it. Based on this, Naps et al. (2002) proposed a taxonomy of six engagement levels for algorithm visualizations (No Viewing, Viewing, Responding, Changing, Constructing, and Presenting), hypothesizing that learning will increase as the engagement level with visualization proceeds from "No Viewing" to "Presenting." Thus, the "Responding" level was hypothesized to lead to a better learning outcome with visualization than the "Viewing" level. In the "No Viewing" level, no visualization is involved. In the "Viewing" level, students simply watch the visualization. In the "Responding" level, students not only watch but also interact with the visualization by responding to the visual cues presented, for example by answering exercise or prediction questions. In the "Changing" level, students interact with the visualization by changing variable parameters. In the "Constructing" level, students create their own visualization, whereas in the "Presenting" level, they present their own visualizations to their peers. Myller et al. (2009) added four additional levels and termed the result the "Extended Engagement Taxonomy" (EET). Thus, the ten levels became "No Viewing," "Viewing," "Controlled Viewing," "Entering Input," "Changing," "Modifying," "Constructing," "Presenting," and "Reviewing," where "Controlled Viewing" means that students have control over navigation through the visualization. EET hypothesized that, along with learning, collaboration among students will also increase with increasing levels of engagement. Sorva et al. (2013) proposed the 2DET engagement taxonomy, consisting of two dimensions: direct engagement with visualization and content ownership (cognitive engagement). The 2DET hypothesizes that learning from program and algorithm visualizations increases along both the direct engagement and content ownership axes. Among all the engagement taxonomies for program and algorithm visualizations, Naps' engagement levels have historically been among the most explored conditions when measuring learning from visualizations. Naps' hypotheses have been tested by multiple studies, but the results are mixed.

Empirical studies testing Naps' hypothesis

Numerous studies have tested these hypotheses by contrasting learning at multiple levels of student engagement with program and algorithm visualizations. Among the studies confirming Naps' hypothesis, Grissom et al. (2003) found that learning gain increased with increasing student engagement for simple sorting algorithms (insertion and bubble sort) across "No viewing," "Viewing," and "Responding" through an online quiz.
A similar result was reported by Hansen et al. (2000), where the instructional strategy used for the "Responding" level was interactive prediction and question-answering. Byrne et al. (1999) conducted a controlled experiment with CS majors who had algorithm analysis skills but no prior knowledge of the topic, binomial heap. These students did better in procedural understanding in the post-test at the "Responding" level (viewing with oral prediction) or the "Viewing" level compared to the "No viewing-No prediction" group. However, the effects of visualization and prediction could not be isolated. Ben-Bassat Levy et al. (2003) conducted a field study at school level on programming topics such as if and while statements, with the post-test containing questions on predicting the output of program code using Jeliot. They found a significant learning gain for all students irrespective of their achievement level, with average students gaining the most. Laakso et al. (2009) found a learning gain in conceptual understanding at both the "Viewing" and "Changing" levels, with a statistically significant gain at the "Changing" level, on the topic Binary heap. However, this result was obtained only after correcting for the behavioral engagement of student pairs, since not all students performed at the expected level of engagement with the visualization. Myller et al. (2009) tested their EET hypothesis and found a strong correlation between behavioral engagement among students, in terms of the collaborative activity of pair programming, and the engagement levels with visualization in EET.

In contrast to the above studies, there are studies that did not find a difference in learning outcome at different engagement levels with visualization. Stasko et al. (1993) did not find any significant difference in procedural understanding between the "No viewing" group and the group that could run the visualization on their own input data sets ("Changing" level) on the topic of Pairing heap. A possible reason cited was that the visualization design was not suited to novice learners. Jarc et al. (2000) found no difference in learning outcome (conceptual and procedural understanding) between "Responding" and "Viewing", where the "Responding" level was operationalized through automated prediction questions for a set of 11 algorithms. A probable reason given was that students in the "Responding" group adopted a trial-and-error approach to the prediction activity instead of focusing on learning. Hundhausen and Douglas (2000) compared learning at the "Constructing" and "Viewing" levels for procedural understanding but did not find any significant difference for the topic of Quick sort.

From the analysis of the above studies, the instructional strategies that have been reported to be successful with program and algorithm visualizations are prediction worksheets with visualization (Ben-Bassat Levy et al. 2003), exercise sheets (Laakso et al. 2009), integrated prediction activities (Hansen et al. 2000), and online quizzes (Hansen et al. 2000).

Factors influencing learning from visualization

Closer analysis of the results of empirical studies, similar to those reported above, revealed that other factors, such as topic complexity and learner characteristics, influence the learning outcome from visualizations in addition to the engagement level with visualization.

Topic complexity

Jarc et al. (2000) found that the "Responding" group performed better on difficult topics (graph search, Heap sort), though not significantly. Ben-Bassat Levy et al. (2003) found no effect of visualization on simple topics.
Urquiza-Fuentes and Velázquez-Iturbide (2013) found no difference in learning outcome between the three groups at the "No viewing," "Viewing," and "Constructing" levels when the topic was simple, like infix operators. For topics of medium complexity, like user-defined data types, visualization does show an effect when contrasted with "No Viewing," though no significant difference occurred between the "Viewing" and "Constructing" levels. However, a significant difference was obtained when the topic was of high complexity, like recursive data types, but in favor of the "Viewing" level rather than the "Constructing" level on analysis- and synthesis-level questions.

Learner characteristics

The effect of different learner characteristics on learning outcome from program and algorithm visualizations has been studied. Byrne et al. (1999) varied the algorithm analysis skill of learners but did not find any significant effect of this skill on the learning outcome from visualizations. Another often studied learner characteristic is achievement level. Jarc et al. (2000) found that interactive prediction with visualization in a lab setting helped the better students but not the poorer ones. A possible reason given was that the poorer students treated the prediction activity as a video game, focusing on being entertained rather than on learning. Ben-Bassat Levy et al. (2003) found that mediocre students in a tenth-grade class gained significantly more from using PVs in lecture and lab classes than high- and low-level students, though the latter also showed some gain, albeit not significant. They also reported a learning gain only from the fifth assignment onwards, citing the time required by students to get accustomed to working with the PV tool. Isohanni and Knobelsdorf (2011) conducted a qualitative study of student interaction with the PV tool VIP. They found that students were able to adopt productive ways of using VIP for learning programming concepts only when they were sufficiently familiar with the tool. Besides, students take a long time to adapt to active learning strategies (Niemi 2002). They require training on how to carry out active learning, such as collaborating with an unfamiliar classmate (Seidel and Tanner 2013) or reflecting on their solutions (Niemi 2002).
[ "24297286" ]
[ { "pmid": "24297286", "title": "\"What if students revolt?\"--considering student resistance: origins, options, and opportunities for investigation.", "abstract": "Instructors attempting new teaching methods may have concerns that students will resist nontraditional teaching methods. The authors provide an overview of research characterizing the nature of student resistance and exploring its origins. Additionally, they provide potential strategies for avoiding or addressing resistance and pose questions about resistance that may be ripe for research study." } ]
Research and Practice in Technology Enhanced Learning
30613236
PMC6302840
10.1186/s41039-016-0027-3
Code-reading support environment visualizing three fields and educational practice to understand nested loops
In this paper, we describe a code-reading support environment and practical classroom applications using this environment to understand nested loops. Previously, we developed a code-reading support system based on visualization of the relationships among the program code, target domain world, and operations. We implemented the proposed system in exercises with nested loops. The evaluation results suggested that students could frequently fulfill learning objectives using the proposed system. However, we also discovered that some students experienced a learning impasse in the classroom. We attempted to address these students with two supporting approaches: bridging the gap between the generalization structures in the program code and their corresponding operations and enabling learners to predict the behavior of the nested loops. In this paper, we extend our previous system with new functions based on our two supporting approaches. Further, we implement the system in another classroom for nested loops. We describe a correlation between the proposed system and an understanding of nested loops using pre-/post-test comparisons. We discuss how code reading using the proposed system allows learners to cultivate a superior understanding of the program code.
Related works

In general, programming students learn algorithms, code reading, and coding in turn. They attend a lecture and receive algorithm instruction from their teacher. Then, they reproduce the behavior of the algorithm using certain input data and produce a sequence of concrete operations that represents the behavior of the algorithm. Subsequently, they abstract sequences of operations, grasp the relationship between the abstracted operations and the program code, and consequently understand the entire program code. Finally, they perform a coding exercise to confirm their understanding.

Thus far, several intelligent tutoring systems have been developed to support programming learners. These include RoboProf (Daly & Horgan, 2004), JITS (Sykes & Franek, 2003), J-LATTE (Holland et al. 2003), and BITS (Butz et al. 2006). Moreover, several learning support systems based on visualizing algorithms have received attention, including TRAKLA2 (Malmi et al. 2004), Jeliot 3 (Moreno et al. 2004; Čisar et al. 2011), and ViLLE (Rajala et al. 2008). These systems can be classified according to which of the tasks required for understanding an algorithm or program code they address, and they tend to support one or the other. We believe that an attractive learning target can be found in the gap between these two tasks. These systems, however, do not offer a suitable means for bridging that gap.

Learners who have a proper understanding of an algorithm can reproduce its behavior with concrete data. A sequence of operations in LEPA is a sequence of natural-language descriptions representing the algorithm's behavior. Hence, the operation sequence can be regarded as an externalization of the learner's understanding of the algorithm. Other existing systems visualize the relationship between the program code and its target domain world. LEPA provides this as well and furthermore visualizes the relationship between a sequence of operations and its target domain. It also visualizes the correspondence of a code fragment to its operation. We expect that the visualization of these three fields and the relationships among them contributes to bridging the gap between the two tasks.

When a program-comprehension task is assigned to a programmer, the procedure for reading the code is normally a dual process: first, recognizing the function of groups of statements and then piecing together these chunks to form ever-larger chunks (Shneiderman & Mayer, 1979). Programmers proceed through these steps hierarchically until the entire program is understood. LEPA offers a function that supports learners in their endeavor to envisage behavior similar to that in the operation field. With this function, learners can work through the entire code-reading process using LEPA.

Recently, learning support systems with an integrated development environment have been developed, aiming to support the entire programming exercise (Đanić et al. 2011; Gerdes et al. 2012; Neve et al. 2012). These systems do not focus on the code-reading stage. Consequently, learners externalize their understanding of the algorithm and program code only by coding, and learners and teachers cannot determine the cause of a learner's lack of understanding: lack of algorithm understanding, lack of program code understanding, or lack of coding skills. Further, there are systems based on externalization with methods other than coding (Cooper et al. 2012). However, they are intended to support younger learners and cannot directly support users in reading and writing program code.
More discussion is required before these systems can be introduced into current programming education at the university level.

In contrast, LEPA encourages the learner to abstract groups of statements in the program code and to externalize the abstractions in the form of tags. Learners trace the program code, observe the changes in the target domain field, pack the operations in the operation field according to their observations, and tag the packages according to their function. The learner's package structure is the product of externalization through this series of activities. LEPA does not include correct package structure solutions; hence it never provides a solution and never leads the learner to the correct solution. Thus, teachers can consider the package structures as the learner's understanding externalized by the proposed system. As will be shown in the next section, we consider that these functions improve the quality of programming education.

There are systems available that target nested loops, including AlgoTutor (Yoo et al. 2012) and the tutoring system developed by Dancik and Kumar (2003). However, it is not clear how they lead learners to understand or acquire the fundamental concepts described in the "Introduction" section. If a chunk corresponds to an iteration such as a loop, the learner's recognition often involves generalizing with variables. LEPA2 bridges the gap between the hierarchical structure of chunks in a sequence of operations and that of descriptions in the program code, and it aims to improve learning support for nested loops by elaborating the strategy of a hierarchical procedure for reading code.
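To make this kind of reading target concrete, the small nested-loop fragment below (our own illustrative example, not code taken from LEPA or LEPA2) is annotated with the sort of operation-level packages and tags a learner might attach while tracing it:

```python
# A small nested-loop fragment of the kind learners read and abstract.
# The comments mark how concrete operations could be packed and tagged.

rows = 4
for i in range(1, rows + 1):    # package: "repeat for each row i = 1..rows"
    line = ""
    for j in range(i):          # package: "build one row made of i stars"
        line += "*"             # operation: append one star to the row
    print(line)                 # operation: output the finished row
# Tagging the outer package as "print a left-aligned triangle of height rows"
# generalizes the concrete operation sequence with the variables i and j.
```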
[]
[]
Research and Practice in Technology Enhanced Learning
30613216
PMC6302843
10.1007/s41039-015-0006-0
An exploration of problem posing-based activities as an assessment tool and as an instructional strategy
Background
Problem posing, the generation of questions by learners, has been shown to be an effective instructional strategy for the teaching and learning of complex materials in domains such as mathematics. In this paper, we demonstrate the potential of problem posing in two dimensions. We first present how problem posing can result in the unfolding of knowledge and hence how it can be used as an instructional strategy. We then present another problem posing-based activity as an assessment tool in an Introductory Programming course (CS1).

Method
To explore the potential of problem posing as an instructional strategy, we conducted field studies in two CS application courses (Data Structures (DS) and Artificial Intelligence (AI)), in which we provided a semi-structured problem posing situation to students. We performed inductive qualitative research, analyzing the questions generated by students with a grounded theory-based qualitative data analysis technique. To explore the potential of problem posing as an assessment tool, we conducted a field study in CS1, wherein we employed another problem posing (PP)-based activity in a large class to assess the learning of computational thinking concepts in an introductory programming course, and analysed how performance in a traditional assessment tool (quiz score) is related to performance in our non-traditional assessment tool (the quality of problems posed during a problem posing activity).

Results
From the studies in the DS and AI courses, we found that students pose questions and unfold knowledge using seven strategies: Apply, Organize, Probe, Compare, Connect, Vary, and Implement. From the field study performed in the CS1 course, we found that the quality (difficulty level) of the problems posed was mostly aligned with the traditional assessment results in the case of novice learners but not in the case of advanced learners.
Related work

PP has been explored by researchers in a number of domains and dimensions. In Table 1, we present the range of research work we found during our literature survey.

Table 1 Related research on problem posing

[Ref] | Domain (course) | Mode (classroom/lab/online) | Intervention/procedure | Sample/target subject (background and number) | Findings
Gubareva (1992) | Biochemistry | Classroom lecture | Students were given guidelines of what type of problems to pose before performing PP | Unavailable | Quality of problems improves gradually with more and more PP practice
Graesser and Person (1994) | Research Methodology (RM) and Algebra | Tutorial | PP between tutor and students in a tutoring session | Undergraduates—RM N = 27; seventh graders—Algebra N = 13 | Evidence—students were able to self-regulate their learning by asking questions when they spot knowledge deficits
Silver et al. (1996) | Mathematics education | Lab experiment | Interleaved PP-problem solving-PP three-level activity on a given context | 53 middle school teachers and 28 prospective secondary school teachers | Subjects showed some skills of PP. Subjects posed more problems before problem solving than during or after problem solving. PS influenced the focus in the second PP activity
Silver (1997) | Mathematics education | NA | NA | NA | Discussed that inquiry-oriented mathematics instruction which includes PS and PP tasks and activities can assist students to develop more creative approaches to mathematics
English (1998) | Generic | Experiment | 16 sessions (8 weeks) of a PP program for improvement of PP skills | Six classes of 8-year-old students (N = 154) | Experimental group showed significant improvement in PP skills—the ability to generate their own problems
Cai and Hwang (2002) | Quantitative aptitude | Lab experiment | Three pairs of problem solving (PS) and PP tasks were used in this study | 98 US and 155 Chinese 6th-grade students | There was a much stronger link between PS and PP for the Chinese sample than for the US sample
Mestre (2002) | Physics | Lab experiment | Students were asked to do PP based on the given situation and their prior knowledge | 4 undergraduates | PP is a powerful assessment tool for probing students' understanding of physics concepts, as well as their ability to transfer their knowledge to novel contexts
Lavy and Bershadsky (2003) | Mathematics education | Lab experiment | 2 workshops with PP activities based on a given problem were performed using the "what-if-not?" strategy | 28 pre-service teachers (second/third year) | Contribution: categorization of the different kinds of posed problems using the "what-if-not?" strategy
McComas and Abraham (2004) | General | Classroom | NA | NA | Compiled a taxonomy of question types. Proposed to teachers a 3-step technique to ask effective questions, and 8 factors for asking effective questions
Profetto-McGrath et al. (2004) | Nursing education | Context-based learning tutorial/seminars | Thirty 90-min seminars were audio taped and analyzed using a Questioning Framework designed for this study | 30 nurse educators and their 314 students | Majority of questions posed by tutors and students were framed at the low cognitive level. Recommendations: students and tutors should be trained on how to question
Akay and Boz (2009) | Mathematics education | Classroom | The experimental group was demonstrated with 28 different PP activities | 41 prospective science teachers | It reaffirmed that PP (by teachers) should be used in mathematics classes
Toluk-Uçar (2009) | Mathematics education | Classroom | Classroom PP exercise: subjects posed problems on given symbolic situations | 95 pre-service primary school teachers | PP had a positive impact on pre-service teachers' understanding of fractions as well as on their views about what it means to know mathematics
Kar et al. (2010) | Mathematics education | Lab experiment | Prospective teachers (PT) took PP-PS tests; each item in the PS test included patterns in the PP tests | 76 PTs | There was a significant relation between PP and PS
Lavy and Shriki (2010) | Mathematics education | Computer-based environment | Subjects were given guidelines using the "what-if-not?" strategy | 25 PTs | PTs perceived that engaging in the inquiry-based activity enhanced both their mathematical and meta-mathematical knowledge
Cankoy and Darbaz (2010) | Mathematics education | Classroom with PP as an instructional strategy | The experimental group followed PP-based PS instruction for 10 weeks, whereas the control group followed traditional PS instruction | 53 third-grade students from an urban elementary school | The experimental group was better than the control group in terms of understanding the problem, even after a 3-month gap between posttest and intervention
Çildir and Sezen (2011) | Physics education | Lab experiment | Study sheets which consisted of 8 PP questions | 9 prospective physics teachers (sophomores) | High scorers have higher PP skills than those with medium or lower scores; however, no significant difference was observed between those with medium or lower scores in terms of their PS skills
Beal and Cohen (2012) | Mathematics and Science | Online collaborative learning environment (Teach Ourselves) | Pose problems over a web-based content-authoring and sharing system | Middle school students, N = 224 | Evidence—students were able to generate problems on the online platform
Sengül and Katranci (2012) | Mathematics education | Lab experiment | PP related to the "Sets" topic and then qualitative study of the activity | 56 sophomore prospective primary mathematics teachers | Subjects had the most difficulty in adjusting the level of the problem posed to the level of primary education
Arikan et al. (2012) | General | Lab experiment | 15 PP-based questions and then qualitative study | 8 eleventh graders | The PP activity can also be utilized by teachers as an alternative method of assessment
Pintér (2012) | Mathematics education | Classroom | Initial question, and demo of the "what-if" methods of PP were presented | Small sample of self-selected students in a PS course | Improvement in posing problems of the "what-if" type
Cai et al. (2013) | Mathematics education | Classroom activity | Combination of PS and PP tasks given to students | 390 eleventh graders | Confirmed the validity of PP as a measure of curriculum effect on student learning. Contributed qualitative analysis rubrics for the questions

The literature survey shows that problem posing has been used as an instructional strategy mostly in the domains of mathematics and prose comprehension. Research in other domains is limited, particularly to physics education, nursing education, and biochemistry.
To the best of our knowledge, there is a dearth of research that explores PP as an instructional strategy for the teaching–learning of computer science, or of the engineering domain as a whole. Moreover, no significant research has been found that treats students' PP skill as an object of instruction. One of the few lines of research in this direction concerns training pre-service teachers on effective question posing. Graesser and Person (1994), Akay and Boz (2009), Lavy and Shriki (2010), and Lavy and Bershadsky (2003) show how some instruction on PP can improve PP skill for specific types of problems. McComas and Abraham (2004) and Profetto-McGrath et al. (2004) specifically establish the need for effective teaching–learning strategies for developing PP skills. Gubareva (1992) discussed how PP could be used to build PP skills in the biochemistry domain. English (1998) and Lavy and Bershadsky (2003) likewise show how some instruction on PP can improve PP skill for specific types of problems. Beal and Cohen (2012) demonstrated that mathematics PP skill improved when the activity was carried out in an online collaborative learning environment. Mestre (2002), Cai et al. (2013), and Arikan et al. (2012) employ PP as an assessment tool. Toluk-Uçar (2009), Lavy and Shriki (2010), Silver (1997), Cankoy and Darbaz (2010), Gubareva (1992), English (1998), and Pintér (2012) demonstrate how PP can be used as an instructional strategy. Çildir and Sezen (2011) and Silver et al. (1996) discuss the relation between problem posing and problem solving. As far as our exploration of PP as an instructional strategy is concerned, the notion of PP we are interested in involves the generation of new questions around a given situation, wherein students use the PP activity as a way to unfold new knowledge around conceptually related seed knowledge in any given domain. We want the PP situation not to restrict the posed questions to a specific problem-solving task, as in Dillon (1982); rather, it should enable the generation of questions within the scope of a course and/or a domain. Such a PP situation is described as a semi-structured PP situation, as opposed to free and ill-structured PP situations (Stoyanova and Ellerton, 1996). The semi-structured PP situation enables divergent thinking and is driven by students' intrinsic motivation, and it therefore positively affects problem posing (Lee and Cho 2007). To the best of our knowledge, no existing research explores PP as an instructional strategy with this notion in computer science education research.
[ "2450716", "15245859" ]
[ { "pmid": "15245859", "title": "The questioning skills of tutors and students in a context based baccalaureate nursing program.", "abstract": "This paper explores, describes and compares the types and levels of questions asked by 30 randomly selected tutors (nurse educators) and their 314 students in context-based learning tutorial seminars in a Canadian baccalaureate nursing program. Thirty 90-min seminars were audio taped, transcribed and coded using a Questioning Framework designed for this study. The framework includes types and levels of questions, related wording and examples. The results of this study indicate that the majority of questions asked by tutors and students in the first three years of the program were framed at the low level (knowledge, comprehension, and application) and were aimed at seeking yes/no responses and factual information more so than probing. Although these questions are important to facilitate the teaching/learning process, educators and students need to increase the number of questions requiring analysis, synthesis, and evaluation as well as questions that involve probing, exploration, and explanation - questions believed to activate and facilitate critical thinking skills. Recommendations include the need for students and tutors to be taught how to question, the creation of a supportive environment for questioning and the use of appropriate strategies to teach the use of higher order questions. Future research using a cross sectional longitudinal design and qualitative approaches are also recommended. This study has direct implications for enhancing student learning and the development of nurse educators." } ]
Research and Practice in Technology Enhanced Learning
30613223
PMC6302857
10.1186/s41039-015-0012-2
A multi-layer map-oriented resource organization system for web-based self-directed learning combined with community-based learning
The main issue addressed in this paper is how to improve self-directed learning in searching for and organizing resources from the web. We first propose a multi-layer map model that visualizes the basic learning behaviors involved in using the web to locate and organize learning resources. It provides learners with the structures of the found resources, tools for their semantic management, and a simplified method to share the resources via the map representation. A system based on the proposed model has also been developed that enables individual learners to easily locate suitable learning resources from the web by referring to resource maps and to organize them as personal topic maps. For community-based learning, by referring to a community topic map that merges all the personal topic maps created by individual self-directed learners, learners can share their own resources and select those of other learners into their learning topics. As a result, learners re-organize their personal topic maps by taking resources from the community topic map and at the same time contribute to the community topic map through their personal topic maps. A case study conducted to evaluate the effectiveness of the system showed several positive results, which validated our proposal.
Related work

As web-based self-directed learning has become more and more prominent, it is drawing the attention of many researchers. Aware that it is difficult to provide adaptive learning resources to self-directed learners, Pythagoras and Demetrios (2005) introduced a methodology which generates all possible learning paths that match the learning goals, enabling learners to select the desired resources from the proposed paths; similarly, Kashihara et al. (2002) proposed an approach that provides learners with an adaptive preview of a sequence of web pages as a potential navigation path. Dragan and Marek (2006) adopted a different method, mapping ontologies to improve resource searching on the semantic web. For resource management, there have been tools for constructing local indexes of learning resources found on the web (Hasegawa et al. 2003), in which a framework was designed for reorganizing existing web-based learning resources with indexes representing their characteristics, consisting of "How To Learn" indexes and "What To Learn" indexes, in order to build a learning resource database. As for community-based learning, the learning opportunities offered by social bookmarking services have also been discussed (Liu and Chang 2008).

Although these studies on web-based learning have greatly enhanced learning on the web from various points of view, they either targeted an enclosed learning environment or certain educational hypermedia involving not only the learner but also the instructor. Meanwhile, the basic learning behaviors of web-based self-directed learning usually occur in succession, yet these studies focused on only one or two learning situations and did not take into consideration the seamless combination of learning activities such as resource finding and organization.

Concept maps (Novak and Gowin 1984) and knowledge maps (O'Donnell et al. 2002) are diagrams that represent ideas as node-link assemblies and have been studied extensively. Back in the late 90s, Dansereau and Newbern (1997) pointed out that semantic displays, such as knowledge maps, were becoming more prevalent in educational settings, and an experiment conducted by Chmielewski and Dansereau (1998) indicated that training participants on the construction and use of knowledge maps made them recall more macro- and micro-level ideas from text passages than participants without such training. In learning contexts as well as educational settings, studies have shown the concept/knowledge map to be more effective for attaining knowledge retention and transfer than reading text-based learning contents (McCagg and Dansereau 1991; John and Olusola 2006), and more beneficial as a navigational aid than a contents list (McDonald and Stevenson 1998). There is also research indicating that the use of concept maps can facilitate meaningful learning and be of value as a knowledge acquisition and sharing tool (Coffey et al. 2003). From the perspective of community-based learning, Fischer et al. (2002) found that, when provided with a content-specific visualization tool, both the process and the outcome of the cooperative effort improved. Furthermore, collaborative concept mapping in a digital learning environment has also proved to be effective for overall learning gains and knowledge retention (Lin et al. 2012).
As a result, concept/knowledge mapping, as a visualization tool, has proved effective in both self-directed and community-based learning. For these reasons, in order to help those who constantly use the web for resource finding and organization, this research sets off from visualizing the learners' basic learning behaviors, such as searching for suitable information, organizing the learning information they find, and gaining easier access to well-organized, community-based learning resources, through superimposed map representations. We target open-ended learning resources on the web, with the purpose of providing learners with a user-friendly interface that integrates self-directed learning with community-based learning.
[]
[]
Research and Practice in Technology Enhanced Learning
30613259
PMC6302858
10.1186/s41039-017-0046-8
Designing Reciprocative Dynamic Linking to improve learners’ Representational Competence in interactive learning environments
Learning from interactive learning environments enriched with multiple external representations (MERs) is often beneficial. The learning benefits of MERs rely heavily on the development of Representational Competence, which refers to the ability to translate between and see relations between MERs. Research findings have consistently reported learners' difficulty in relating and translating between MERs due to insufficient development of Representational Competence. Although dynamic linking is one of the strategies recommended to address this issue, it offers mixed results. This paper reports the design of a new interaction feature that overcomes some of the limitations of traditional dynamically linked representations. We designed an additional interaction in dynamically linked MERs to support learners' cognitive demands; we refer to this as Reciprocative Dynamic Linking. The goal of this additional affordance was to strengthen learners' cross-representation cognitive linkage by promoting Representational Competence. The paper reports a study conducted to investigate the effects of Reciprocative Dynamic Linking on students' Representational Competence. The study was conducted in a course on Signals and Systems in an Electrical Engineering program (N = 24). The subjects were assigned to two conditions: a Simulation and a Simulation with Reciprocative Dynamic Linking. Representational Competence was assessed with an instrument for measuring Representational Competence within the Signals and Systems domain. The effect of Reciprocative Dynamic Linking on learners' cognitive load was also investigated. The results confirmed that Reciprocative Dynamic Linking could lead to improvement in Representational Competence and thus higher learning for "Apply and Analyze Procedural knowledge" categories of tasks. Reciprocative Dynamic Linking also promoted learners' germane cognitive load, as it could offer the cognitive support required to improve learners' Representational Competence. The findings from semi-structured interviews and screen capture analysis corroborated these results. This paper provides details of how to design Reciprocative Dynamic Linking in interactive learning environments and its effect on learners' Representational Competence. Apart from establishing the learning effectiveness of Reciprocative Dynamic Linking, the study further contributes by confirming the role of learners' cognitive processing while learning from interactive learning environments. The findings suggest designing strategies not just for creating highly interactive learning environments but for equipping a given learning environment with conducive interaction features that foster learning.
Related work
The main strength of MERs in interactive learning environments (ILEs) lies in the different types of (dynamic) representations that can be included and in the ability to combine different representations in one interface. MERs offer several learning benefits. Each representation in MERs can show specific aspects of the domain to be learnt. Different types of representations may be useful for different purposes, as they differ in their representational and computational efficiency (Larkin and Simon 1987). Teaching and learning with more representations facilitates and strengthens the learning process by providing several mutually referring sources of information (Kozma and Russell 1997; Grouws 1992) and leads to deeper learning. It has been reported that "the cognitive linking of representations creates a whole that is more than the sum of its parts … It enables us to 'see' complex ideas in a new way and apply them more effectively" (Kaput 1989, 1992). Research studies report that students generally benefit from being exposed to a wide range of representations and perspectives on a problem, including underlying mathematical and scientific laws, engineering design strategies and objects, as well as the social context (Nathan et al. 2011, 2013; Walkington et al. 2011). This research further highlights that coordinating different representations in a cohesive manner and explicitly identifying their relations supports student understanding. Learning from multiple representations is characterized by three key functions: (i) providing complementary information and processes, (ii) constraining interpretations, and (iii) constructing a deeper domain understanding (Ainsworth 1999). The major demands that MERs place on learners are to understand the semantics of each representation, to understand which parts of the domain are represented, to relate the representations to each other, and to translate between the representations. In general, learners' ability to translate between, see relations between, and connect across MERs plays an important role in determining learning effectiveness (Ainsworth 1999, 2006; Kozma and Russell 1997). Learners' Representational Competence subsumes their abilities to comprehend how two representations are related and how they can be used together; thus, Representational Competence influences learning from MERs. When learning with separate representations, learners are required to relate separate sources of information, which may generate a heavy cognitive demand and leave fewer resources for actual learning, especially in dynamically changing MERs. Thus, learning from MERs places demands on working memory and creates challenges for learners (van Someren et al. 1998), especially those with low prior knowledge (Kozma and Russell 1997; Yerushalmy 1989). These challenges can cause students to interact with simulations randomly instead of systematically (de Jong and van Joolingen 1998). Such limitations affect learners' understanding and result in a discourse constrained by the surface features of individual representations. Thus, the unique cognitive demand while learning from MERs is understanding how to translate between representations in dynamic learning environments. Researchers have recommended supporting learners in this translation through appropriate design features and design guidelines (Tabachneck et al. 1994; Kozma 2003).
A variety of approaches in the form of guidelines, such as implicit cues, integrated representations, static linking, dynamic linking, and explicit instruction, have been suggested to address such student difficulties (Ainsworth 2006; van der Meij and de Jong 2004). While Kozma (2003) suggested design principles to increase connections between representations and support students' domain understanding, the DeFT (Design, Functions, Tasks) principles were implemented in the DEMIST learning environment (Ainsworth and VanLabeke 2004). These principles recommend dynamic linking (dyna-linking) of MERs when multiple external representations are used to support complementary roles and information and to constrain interpretation. Dyna-linking of MERs has been a popular strategy for enabling translations between representations (Ainsworth 1999). With dynamically linked representations, actions performed on one representation are automatically shown in all other representations. It is expected that dyna-linking helps learners establish relationships between representations (Kaput 1989; Scaife and Rogers 1996) and accomplish the important task of translating between them. Two learning requirements are considered while designing dyna-linked MERs: the need to learn content from complementary representations and the need to reduce the cognitive load of making mental connections between representations (Wu and Puntambekar 2012). The learning benefits of dyna-linked MERs are attributed to the cognitive support extended to the learner: because the translations between MERs are taken care of by the technology in the learning environment, learners are freed to concentrate on interpreting the representations and their consequences. The Cognitive Theory of Multimedia Learning (Mayer 2001) and the dual-channel assumption of Dual Coding Theory (Paivio 1986) support the use of dynamic linking of MERs to reduce the cognitive load on learners. An environment using multiple dynamically linked representations can facilitate novices' learning (Kozma and Russell 1997). While simultaneously changing representations in dynamic linking have been conceived as a useful feature, the approach has also received criticism. Ainsworth (1999) cautioned that dynamic linking might leave a learner too passive in the learning process; it may discourage reflection on the nature of the translations, leading to a failure to construct the required understanding. Another problem is that with multiple dynamically changing representations, learners need to attend to changes that occur simultaneously in different regions of various representations, leading to cognitive overload (Lowe 2003). It must be noted that, on the one hand, dyna-linking is reported to offer cognitive support while learning from MERs; on the other hand, it is also considered to induce additional cognitive load because of the need to attend to changes that occur simultaneously in different regions of various representations. Empirical studies, such as one with the SIMQUEST environment (van der Meij and de Jong 2006), found that simply linking representations dynamically did not improve learning compared with non-linked representations; improvement was observed only with spatially integrated linked representations.
Such an integration of representations is not always possible, due to the nature of the learning materials or specific learning goals. Thus, the results of dyna-linking in MERs are mixed, and hence there is a need for further research to design dynamic linking with apt interactivity that offers the necessary cognitive support to learners while translating among representations. This also highlights the relevance of cognitive load and cognitive load theory when learning with dyna-linked interactive learning environments. The basic idea of cognitive load theory is that cognitive capacity in working memory is limited, so that if a learning task requires too much capacity, learning will be hampered. The recommended remedy is to design instructional systems and features that optimize the use of working memory capacity and avoid cognitive overload. DeLeeuw and Mayer (2008) theorize that there are three types of cognitive processing (essential, extraneous, and generative) and place them in the triarchic model of cognitive load. Mayer proposed this model as an organizing framework for the cognitive theory of multimedia learning and stated that a major goal of multimedia learning and instruction is to "manage essential processing, reduce extraneous processing and foster generative processing" (Mayer 2009). Intrinsic cognitive load arises from the interaction between the nature of the material being learnt and the expertise of the learner. The second type, extraneous cognitive load, is caused by factors that are not central to the material to be learnt, such as presentation methods or activities that split attention between multiple sources of information, and should be minimized as much as possible. The third type, germane cognitive load, enhances learning and results in task resources being devoted to schema acquisition and automation. Intrinsic cognitive load cannot be manipulated, but extraneous and germane cognitive loads can. As germane cognitive load relates to learners' engagement in cognitive processing, such as mentally organizing the material and relating it to prior knowledge, it is important to design instructional strategies that increase germane cognitive load. Thus, we consider designing interaction features that increase germane cognitive load as one strategy to offer the necessary cognitive support to learners and to optimize cognitive resources while translating among representations. Against this backdrop, aptly designed interactive features in interactive learning environments can offer the necessary cognitive support to learners. Such features can ensure effective learning from dynamically linked MERs in technology-enhanced learning (TEL) environments. We designed "Reciprocative Dynamic Linking," an additional interaction feature that offers the required cognitive support to learners while learning from MERs. The following section explains the design of Reciprocative Dynamic Linking.
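As an illustration of the plain dyna-linking mechanism discussed above (not the paper's Reciprocative Dynamic Linking feature itself), the following minimal sketch, with hypothetical class names, shows how an action performed on one representation can be propagated automatically to all other linked representations:

```python
from typing import Dict, List

# Hypothetical sketch of dyna-linking: an update applied through the link on one
# representation is broadcast to every other linked representation.
class Representation:
    def __init__(self, name: str):
        self.name = name
        self.state: Dict[str, float] = {}

    def render(self) -> str:
        return f"{self.name}: {self.state}"

class DynaLink:
    def __init__(self, representations: List[Representation]):
        self.representations = representations

    def update(self, source: Representation, key: str, value: float) -> None:
        source.state[key] = value  # action performed on the source representation
        for rep in self.representations:
            if rep is not source:
                rep.state[key] = value  # automatically mirrored in the others

plot = Representation("time-domain plot")
formula = Representation("symbolic expression")
link = DynaLink([plot, formula])
link.update(plot, "frequency_hz", 50.0)
print(formula.render())  # the linked representation reflects the change
```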
[ "7808878" ]
[ { "pmid": "7808878", "title": "Measurement of cognitive load in instructional research.", "abstract": "The results of two of our recent empirical studies were considered to assess the usefulness of subjective ratings and cardiovascular measures of mental effort in instructional research. Based on its reliability and sensitivity, the subjective rating-scale technique met the requirements to be useful in instructional research whereas the cardiovascular technique did not. It was concluded that the usefulness of both measurement techniques in instructional research needs to be investigated further." } ]
Research and Practice in Technology Enhanced Learning
30613248
PMC6302863
10.1186/s41039-016-0041-5
Practices of algorithm education based on discovery learning using a program visualization system
In this paper, we describe three practical exercises relating to algorithm education. The exercises are based on a learning support system that visualizes program behavior. Systems that visualize program behavior are effective in promoting the understanding of how algorithms behave, and introducing such systems into an algorithm course is expected to help learners cultivate a more thorough understanding. However, almost all existing systems cannot incorporate the teacher's instructional intent, which may be necessary to accommodate learners of different abilities through different instructional approaches. Based on these considerations, we conducted classroom practice sessions as part of an algorithm course, incorporating the visualization system we developed in our previous work. Our system visualizes the target domain world according to a visualization policy defined by the teacher. Our aim in the practical classes is to enable learners to understand the properties of algorithms, such as the number of comparisons and data exchanges. The contents of the course are structured so that the properties of an algorithm can be understood through discovery learning in the practical work. In this paper, we provide an overview of our educational practices and the learners' responses, and show that the framework used in our practices can be established in algorithm classes. Furthermore, we summarize the requirements for incorporating discovery learning into algorithm classes, as the knowledge obtained from our practices.
Related works
ANIMAL (Rössling and Freisleben, 2002) is a system with functions similar to our system. In the context of learning with ANIMAL, users of a system that visualizes program behavior are regarded as taking on one of the four roles defined in Price et al. (1998): the user/viewer, an agent observing the target domain world visualized by the system; the visualizer/animator, an agent defining the visualization of the target domain world with the system; the software developer, an agent developing the visualization system; and the programmer, an agent designing the algorithm or program code that is the target of visualization. ANIMAL aims to achieve an overall improvement in learning support by providing sophisticated support for each of the four roles and bridging the differences among them. In the context of learning with our system, learners take on the user/viewer role and the teacher takes on the visualizer/animator and programmer roles; the software developer role is taken on by us. The visualizer/animator in ANIMAL needs to use a scripting language named AnimalScript to define the visualization of the target domain world. Although the scripting language significantly increases descriptive capability, the cost of learning the language cannot be ignored. Moreover, the amount of script required to define a visualization in ANIMAL is generally larger than the number of visualization rules used in our system. For example, the sample script for a bubble sort algorithm bundled with ANIMAL consists of 170 lines of code, whereas our configuration file for bubble sort consists of 56 lines of rules. Rössling and Ackermann (2007) attempted to reduce these authoring costs by bundling ready-to-use sample scripts together with a GUI front-end for arranging them. Likewise, our system is being extended to enable the visualizer/animator to define the visualization rules through a front-end GUI.
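To illustrate the kind of algorithm properties the practical classes target (the number of comparisons and data exchanges), here is a minimal sketch, not the actual system or its rule format, of a bubble sort instrumented with a visualization hook; all names are hypothetical:

```python
from typing import Callable, List, Optional, Tuple

def bubble_sort(data: List[int],
                on_step: Optional[Callable[[str, int, int, List[int]], None]] = None
                ) -> Tuple[List[int], int, int]:
    """Bubble sort instrumented with a hook so every comparison and swap can be visualized."""
    a = list(data)
    comparisons = swaps = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if on_step:
                on_step("compare", j, j + 1, a)
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
                if on_step:
                    on_step("swap", j, j + 1, a)
    return a, comparisons, swaps

# A trivial textual "visualization rule": print every event as it happens.
result, n_cmp, n_swap = bubble_sort(
    [5, 1, 4, 2, 8],
    on_step=lambda event, i, j, state: print(f"{event}({i},{j}) -> {state}"),
)
print(f"comparisons={n_cmp}, data exchanges={n_swap}")
```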
[]
[]
Research and Practice in Technology Enhanced Learning
30613239
PMC6302875
10.1186/s41039-016-0030-8
A model-driven PBL application to support the authoring, delivery, and execution of PBL processes
As problem-based learning (PBL) is becoming more and more popular, there is also a growing interest in developing and using technologies in the implementation of PBL. However, teachers may have difficulty designing and delivering a pedagogically well-designed and technically smoothly executable online or blended PBL process on their own, because they lack appropriate expertise in learning theories and design methods as well as a deeper understanding of the potential affordances of the available technologies. From this premise, we are committed to developing and testing methods and tools that support the design and delivery of online or hybrid PBL processes with high flexibility and a low threshold of usage requirements. This paper presents a technical approach to developing a web-based PBL application that supports both authoring and run-time usage. In comparison with other tools and technical approaches, we conclude that the combined use of a model-driven approach and semi-structured data management is a promising way to effectively and efficiently support the authoring, delivery, and execution of PBL processes at design time and run time.
Related work
Currently, two kinds of implementations can flexibly support PBL design: IMS-LD authoring tools and the LAMS (Dalziel 2003). Among IMS-LD authoring tools, there are Reload (Reload 2005), MOT+ (Paquette et al. 2006), ASK-LDT (Karampiperis and Sampson 2005), CopperAuthor (CopperAuthor 2005), and CoSMoS (Miao 2005). These tools are flexible enough to represent and support the design of different learning process models, including PBL models, and they are general learning design tools. As for the LAMS, a study has shown that it can also be used for PBL design (Richards and Cameron 2008). However, all of these tools have shortcomings in supporting the specificities of PBL design and implementation. From the perspective of supporting visual learning design, IMS-LD authoring tools and the LAMS lack the capability to facilitate teachers in developing a sound PBL process, since they are too general and not PBL domain specific. Using IMS-LD authoring tools or the LAMS, users have to represent PBL features explicitly with more abstract building blocks and data types. For example, they neither have building blocks such as PBL-specific activities or artifacts nor provide types such as problem engagement or learning-issue identification, which are emphasized in PBL pedagogy. From the perspective of utilizing web technologies, some of these tools were built on traditional software development concepts, and most of them are desktop applications. As we know, desktop applications have high maintenance costs, are not available everywhere without prior installation, and so forth. From the perspective of the data management approach, existing applications store learning design artifacts either in a traditional relational database or directly as XML files. Although relational databases are good at data storage and querying, they cannot manage this kind of semi-structured data well or flexibly enough, since they are relational and not schema-free. XML files are an ideal medium for storing this kind of semi-structured data; one drawback is that they are not well suited to data manipulation such as partial updates, search, and sub-document sharing. Although there are combined solutions for managing XML documents through relational databases, XML queries remain inefficient (Shanmugasundaram et al. 2008). To support PBL implementation, one could use IMS-LD authoring tools to design PBL processes and produce units of learning (UoLs). The UoLs can then be interpreted by IMS-LD run-time players, such as the CopperCore player (Martens and Vogten 2005) and SLED (McAndrew et al. 2005), to support the PBL implementation. Our application aims to generate UoLs under the IMS-LD specification, so that we can make use of these existing run-time tools.
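As a rough illustration of why semi-structured, schema-free storage suits learning designs, the following sketch, with hypothetical field names rather than the application's actual data model, represents a PBL process as a nested document and performs a partial update of one phase:

```python
import copy

# Hypothetical semi-structured representation of a PBL process; the field names
# are illustrative and not the application's actual data model.
pbl_process = {
    "title": "Water quality case",
    "phases": [
        {"name": "problem engagement", "activities": ["read scenario", "form groups"]},
        {"name": "identify learning issues", "activities": ["brainstorm", "list issues"]},
        {"name": "self-directed study", "artifacts": ["issue report"]},
    ],
}

def add_activity(process: dict, phase_name: str, activity: str) -> dict:
    """Partial update: append an activity to one phase without touching the rest
    of the document, something a schema-free document store handles naturally."""
    updated = copy.deepcopy(process)
    for phase in updated["phases"]:
        if phase["name"] == phase_name:
            phase.setdefault("activities", []).append(activity)
    return updated

updated = add_activity(pbl_process, "self-directed study", "share findings")
print(updated["phases"][2])
```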
[ "17399857" ]
[ { "pmid": "17399857", "title": "Peeling back the layers of learning: a classroom model for problem-based learning.", "abstract": "This paper aims to provide an informative discussion with underpinning rationales about the use of a problem-based learning (PBL) classroom model, supported by a structured process for undertaking PBL. PBL was implemented as a main teaching and learning strategy for a diploma in nursing programme as advised by the Department of Health [Department of Health., 1999. Making a difference: Strengthening the Nursing, Midwifery and Health Visiting Contribution to Health and Health Care. Department of Health, London.] and the United Kingdom Central Council for nurses, midwifes and health visitors [United Kingdom Central Council for Nursing, Midwifery and Health Visiting, 1999. Fitness for Practice. UKCC, London.]. The implementation and change to the PBL approach is not without challenges, and so it was considered important to facilitate this change effectively. Through ongoing reflection, peer discussions and continuous review of the literature following studies at Masters Level, it was identified that the design of a model may guide students and facilitators who were new to the PBL process to help students identify relevant learning needs and thus enable them to achieve the learning outcomes of a dynamic curriculum [Darvill, A., 2000. Developing Problem-based Learning in the Nursing Education Curriculum: A Case Study. Unpublished MSc Dissertation, University of Huddersfield, Huddersfield; McLoughlin, M., 2002. An Exploration of the Role of the Problem-based Learning Facilitator: An Ethnographic Study of Role Transition in a Higher Education Institution 'Paradigm Shift or New Ways of Working'. Unpublished MSc Dissertation. University of Huddersfield, Huddersfield.]. In this paper the key components of the model will be described." } ]
JMIR Mental Health
30530455
PMC6303678
10.2196/10129
An eHealth Platform for the Support of a Brazilian Regional Network of Mental Health Care (eHealth-Interop): Development of an Interoperability Platform for Mental Care Integration
Background: The electronic exchange of health-related data can support different professionals and services to act in a more coordinated and transparent manner and make the management of health service networks more efficient. Although mental health care is one of the areas that can benefit from a secure health information exchange (HIE), as it usually involves long-term and multiprofessional care, there are few published studies on this topic, particularly in low- and middle-income countries.
Objective: The aim of this study was to design, implement, and evaluate an electronic health (eHealth) platform that allows the technical and informational support of a Brazilian regional network of mental health care. This solution was to enable HIE, improve data quality, and identify and monitor patients over time and across different services.
Methods: The proposed platform is based on a client-server architecture to be deployed on the Web, following a Web services communication model. The interoperability information model was based on international and Brazilian health standards. To test platform usage, we used the case of the mental health care network of the XIII Regional Health Department of the São Paulo state, Brazil. Data were extracted from 5 different sources, involving 26 municipalities, and included national demographic data, data from primary health care, data from requests for psychiatric hospitalizations performed by community services, and data obtained from 2 psychiatric hospitals about hospitalizations. Data quality metrics such as accuracy and completeness were evaluated to test the proposed solution.
Results: The eHealth-Interop integration platform was designed, developed, and tested. It contains a built-in terminology server and a record linkage module to support patient identification and deduplication. The proposed interoperability environment was able to integrate information in the mental health care network case with the support of 5 international and national terminologies. In total, 27,353 records containing demographic and clinical data were integrated into eHealth-Interop. Of these records, 34.91% (9548/27,353) were identified as patients who were present in more than 1 data source, with different levels of accuracy and completeness. The data quality analysis was performed on 26 demographic attributes for each integrable patient record, totaling 248,248 comparisons. In general, it was possible to achieve an improvement of 18.40% (45,678/248,248) in completeness and 1.10% (2731/248,248) in syntactic accuracy over the test dataset after integration and deduplication.
Conclusions: The proposed platform established an eHealth solution to fill the gap in the availability and quality of information within a network of health services, in order to improve the continuity of care and health services management. It has been successfully applied in the context of mental health care and is flexible enough to be tested in other areas of care.
Related Work
Several reviews have analyzed the usage, barriers, facilitators, impact, and cost of HIE [41,49-51]. Low data quality is one of the challenges described, and a possible cause is a poor patient matching process [41]. This issue becomes critical in certain contexts, as in the case of homeless patients [52]. In the technical domain, several papers have proposed platforms for the exchange of health information [53-55]. Yuksel et al [53] developed the SALUS platform (Scalable, standard-based Interoperability Framework for Sustainable Proactive Post Market Safety Studies), an ontology-based interoperability framework designed to conduct observational studies from data extracted from different data sources. Moraes et al [54] proposed a methodology for the exchange of information through multi-agent systems based on OpenEHR, used for cardiac surgery planning. Rac-Albu et al [55] proposed a method, based on HL7 v2, for exchanging medical documents, seeking the interoperability of health data in Romania. There are few HIE studies in the context of mental health. Cifuentes et al [56] analyzed strategies for care integration between mental health and primary care; one barrier to achieving this integration is the lack of interoperability between information systems. Shank et al [57] evaluated, through a statewide survey, behavioral health providers' beliefs about HIE and concluded that most providers support the use of HIE, although they also worry about the safety and cost of deploying these solutions. Although there are several studies and proposals for establishing an HIE environment, this remains an open problem, and HIE use is still limited [58]. In this paper, we described the whole process of developing and using an HIE tool in a challenging medical context, that of mental health. In conjunction with key stakeholders, relevant processes were mapped and interoperability data models were constructed. We described a multilayer conceptual architecture that supports data exchange. This proposal covers two important aspects: the problem of dealing with different health terminologies and the use of a detailed patient identification process that encompasses patients without unique identifiers. We tested the solution in a real-world environment.
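To make the completeness and deduplication ideas concrete, here is a minimal sketch, with hypothetical field names and a deliberately naive merging rule rather than the platform's actual record linkage logic, of how completeness can be measured before and after merging duplicate patient records:

```python
from typing import Dict, List

# Illustrative subset of demographic attributes; not the platform's actual 26 fields.
DEMOGRAPHIC_FIELDS: List[str] = ["name", "birth_date", "mother_name", "city"]

def completeness(record: Dict[str, str], fields: List[str]) -> float:
    """Share of demographic attributes that are filled in a record."""
    filled = sum(1 for f in fields if record.get(f))
    return filled / len(fields)

def merge_records(a: Dict[str, str], b: Dict[str, str]) -> Dict[str, str]:
    """Naive deduplication step: fill missing attributes of one record from its duplicate."""
    merged = dict(a)
    for key, value in b.items():
        if not merged.get(key) and value:
            merged[key] = value
    return merged

primary_care = {"name": "Maria S.", "birth_date": "1980-03-02", "mother_name": "", "city": "Ribeirao Preto"}
hospital = {"name": "Maria S.", "birth_date": "1980-03-02", "mother_name": "Ana S.", "city": ""}

merged = merge_records(primary_care, hospital)
print(completeness(primary_care, DEMOGRAPHIC_FIELDS), "->", completeness(merged, DEMOGRAPHIC_FIELDS))
```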
[ "22785067", "23737736", "14630762", "22855384", "18322289", "18971401", "21561655", "28182778", "23429814", "28355335", "28053659", "26801498", "20027489", "23786808", "24183568", "11825268", "26878761", "26678413", "28851681", "25599991", "23616891", "27123451", "27480749", "26359473", "22184253", "29881743" ]
[ { "pmid": "22785067", "title": "The balanced care model for global mental health.", "abstract": "BACKGROUND\nFor too long there have been heated debates between those who believe that mental health care should be largely or solely provided from hospitals and those who adhere to the view that community care should fully replace hospitals. The aim of this study was to propose a conceptual model relevant for mental health service development in low-, medium- and high-resource settings worldwide. Method We conducted a review of the relevant peer-reviewed evidence and a series of surveys including more than 170 individual experts with direct experience of mental health system change worldwide. We integrated data from these multiple sources to develop the balanced care model (BCM), framed in three sequential steps relevant to different resource settings.\n\n\nRESULTS\nLow-resource settings need to focus on improving the recognition and treatment of people with mental illnesses in primary care. Medium-resource settings in addition can develop 'general adult mental health services', namely (i) out-patient clinics, (ii) community mental health teams (CMHTs), (iii) acute in-patient services, (iv) community residential care and (v) work/occupation. High-resource settings, in addition to primary care and general adult mental health services, can also provide specialized services in these same five categories.\n\n\nCONCLUSIONS\nThe BCM refers both to a balance between hospital and community care and to a balance between all of the service components (e.g. clinical teams) that are present in any system, whether this is in low-, medium- or high-resource settings. The BCM therefore indicates that a comprehensive mental health system includes both community- and hospital-based components of care." }, { "pmid": "22855384", "title": "Quality of communication between primary health care and mental health care: an examination of referral and discharge letters.", "abstract": "In managing treatment for persons with mental illness, the primary care physician (PCP) needs to communicate with mental health (MH) professionals in various settings over time to provide appropriate management and continuity of care. However, effective communication between PCPs and MH specialists is often poor. The present study reviewed evidence on the quality of information transfer between PCPs and specialist MH providers for referral requests and after inpatient discharge. Twenty-three audit studies were identified that assessed the quality of content and nine that assessed strategies to improve quality. Results indicated that rates of item reporting were variable. Within the limited evidence on interventions to improve quality, use of structured forms showed positive results. Follow-up work can identify a minimum set of items to include in information transfers, along with item definitions and structures for holding this information. Then, methodologies for measuring data quality, including electronically generated performance metrics, can be developed." }, { "pmid": "18971401", "title": "Transforming mental health and substance abuse data systems in the United States.", "abstract": "State efforts to improve mental health and substance abuse service systems cannot overlook the fragmented data systems that reinforce the historical separateness of systems of care. These separate systems have discrete approaches to treatment, and there are distinct funding streams for state mental health, substance abuse, and Medicaid agencies. 
Transforming mental health and substance abuse services in the United States depends on resolving issues that underlie separate treatment systems--access barriers, uneven quality, disjointed coordination, and information silos across agencies and providers. This article discusses one aspect of transformation--the need for interoperable information systems. It describes current federal and state initiatives for improving data interoperability and the special issue of confidentiality associated with mental health and substance abuse treatment data. Some achievable steps for states to consider in reforming their behavioral health data systems are outlined. The steps include collecting encounter-level data; using coding that is compliant with the Health Insurance Portability and Accountability Act, including national provider identifiers; forging linkages with other state data systems and developing unique client identifiers among systems; investing in flexible and adaptable data systems and business processes; and finding innovative solutions to the difficult confidentiality restrictions on use of behavioral health data. Changing data systems will not in itself transform the delivery of care; however, it will enable agencies to exchange information about shared clients, to understand coordination problems better, and to track successes and failures of policy decisions." }, { "pmid": "21561655", "title": "The Brazilian health system: history, advances, and challenges.", "abstract": "Brazil is a country of continental dimensions with widespread regional and social inequalities. In this report, we examine the historical development and components of the Brazilian health system, focusing on the reform process during the past 40 years, including the creation of the Unified Health System. A defining characteristic of the contemporary health sector reform in Brazil is that it was driven by civil society rather than by governments, political parties, or international organisations. The advent of the Unified Health System increased access to health care for a substantial proportion of the Brazilian population, at a time when the system was becoming increasingly privatised. Much is still to be done if universal health care is to be achieved. Over the past 20 years, there have been other advances, including investments in human resources, science and technology, and primary care, and a substantial decentralisation process, widespread social participation, and growing public awareness of a right to health care. If the Brazilian health system is to overcome the challenges with which it is presently faced, strengthened political support is needed so that financing can be restructured and the roles of both the public and private sector can be redefined." }, { "pmid": "28182778", "title": "Epidemiology of multimorbidity within the Brazilian adult general population: Evidence from the 2013 National Health Survey (PNS 2013).", "abstract": "Middle-income countries are facing a growing challenge of adequate health care provision for people with multimorbidity. The objectives of this study were to explore the distribution of multimorbidity and to identify patterns of multimorbidity in the Brazilian general adult population. Data from 60202 adults, aged ≥18 years that completed the individual questionnaire of the National Health Survey 2013 (Portuguese: \"Pesquisa Nacional de Saúde\"-\"PNS\") was used. 
We defined multimorbidity as the presence of two or more chronic conditions, including self-reported diagnoses and responses to the 9-item Patient Health Questionnaire for depression. Multivariate Poisson regression analyses were used to explore relationship between multimorbidity and demographic factors. Exploratory tetrachoric factor analysis was performed to identify multimorbidity patterns. 24.2% (95% CI 23.5-24.9) of the study population were multimorbid, with prevalence rate ratios being significantly higher in women, older people and those with lowest educational level. Multimorbidity occurred earlier in women than in men, with half of the women and men aged 55-59 years and 65-69 years, respectively, were multimorbid. The absolute number of people with multimorbidity was approximately 2.5-fold higher in people younger than 65 years than older counterparts (9920 vs 3945). Prevalence rate ratios of any mental health disorder significantly increased with the number of physical conditions. 46.7% of the persons were assigned to at least one of three identified patterns of multimorbidity, including: \"cardio-metabolic\", \"musculoskeletal-mental\" and \"respiratory\" disorders. Multimorbidity in Brazil is as common as in more affluent countries. Women in Brazil develop diseases at younger ages than men. Our findings can inform a national action plan to prevent multimorbidity, reduce its burden and align health-care services more closely with patients' needs." }, { "pmid": "23429814", "title": "Setting priorities for mental health research in Brazil.", "abstract": "BACKGROUND\nThe main aim of this study is to review the agenda for research priorities of mental health in Brazil.\n\n\nMETHODOLOGY\nThe first step was to gather 28 experts (22 researchers, five policy makers, and the coordinator) representing all mental health fields from different geographical areas of the country. Participants were asked to list what they considered to be the most relevant mental health research questions for the country to address in the next 10 years. Seventeen participants answered this question; after redundancies were excluded, a total of 110 responses were collected. As the second step, participants were asked to rank which questions were the 35 most significant. The final step was to score 15 items for each of the 35 selected questions to determine whether it would be a) answerable, b) effective, c) deliverable, d) equitable, and e) effective at reducing the burden of mental health. The ten highest ranked questions were then selected.\n\n\nRESULTS\nThere were four questions addressing primary care with respect to a) the effectiveness of interventions, b) \"matrix support\", c) comparisons of different models of stepped care, and d) interventions to enhance identification and treatment of common mental disorders at the Family Health Program. 
The other questions were related to the evaluation of mental health services for adults and children/adolescents to clarify barriers to treatment in primary care, drug addiction, and severe mental disorders; to investigate the cost-benefit relationship of anti-psychotics; to design interventions to decrease alcohol consumption; and to apply new technologies (telemedicine) for education and supervision of non-specialists.\n\n\nCONCLUSION\nThis priority-setting research exercise highlighted a need for implementing investments at the primary-care level, particularly in the family health program; the urgent need to evaluate services; and policies to improve equity by increasing accessibility to services and testing interventions to reduce barriers for seeking mental health treatment." }, { "pmid": "28355335", "title": "The mental health care model in Brazil: analyses of the funding, governance processes, and mechanisms of assessment.", "abstract": "OBJECTIVE\nThis study aims to analyze the current status of the mental health care model of the Brazilian Unified Health System, according to its funding, governance processes, and mechanisms of assessment.\n\n\nMETHODS\nWe have carried out a documentary analysis of the ordinances, technical reports, conference reports, normative resolutions, and decrees from 2009 to 2014.\n\n\nRESULTS\nThis is a time of consolidation of the psychosocial model, with expansion of the health care network and inversion of the funding for community services with a strong emphasis on the area of crack cocaine and other drugs. Mental health is an underfunded area within the chronically underfunded Brazilian Unified Health System. The governance model constrains the progress of essential services, which creates the need for the incorporation of a process of regionalization of the management. The mechanisms of assessment are not incorporated into the health policy in the bureaucratic field.\n\n\nCONCLUSIONS\nThere is a need to expand the global funding of the area of health, specifically mental health, which has been shown to be a successful policy. The current focus of the policy seems to be archaic in relation to the precepts of the psychosocial model. Mechanisms of assessment need to be expanded.\n\n\nOBJETIVO\nAnalisar o estágio atual do modelo de atenção à saúde mental do Sistema Único de Saúde, segundo seu financiamento, processos de governança e mecanismos de avaliação.\n\n\nMÉTODOS\nFoi realizada uma análise documental de portarias, informes técnicos, relatórios de conferência, resoluções e decretos de 2009 a 2014.\n\n\nRESULTADOS\nTrata-se de um momento de consolidação do modelo psicossocial, com ampliação da rede assistencial, inversão de financiamento para serviços comunitários com forte ênfase na área de crack e outras drogas. A saúde mental é uma área subfinanciada dentro do subfinanciamento crônico do Sistema Único de Saúde. O modelo de governança constrange o avanço de serviços essenciais, havendo a necessidade da incorporação de um processo de regionalização da gestão. Os mecanismos avaliativos no campo burocrático se mostram pouco incorporados à política de saúde.\n\n\nCONCLUSÕES\nÉ necessário ampliar o financiamento global da saúde e específico da saúde mental, que vem se constituindo como uma política exitosa. O foco atual da política se mostra anacrônico aos preceitos do modelo psicossocial. Aponta-se a necessidade de ampliação de mecanismos avaliativos." 
}, { "pmid": "28053659", "title": "A web-based information system for a regional public mental healthcare service network in Brazil.", "abstract": "BACKGROUND\nRegional networking between services that provide mental health care in Brazil's decentralized public health system is challenging, partly due to the simultaneous existence of services managed by municipal and state authorities and a lack of efficient and transparent mechanisms for continuous and updated communication between them. Since 2011, the Ribeirao Preto Medical School and the XIII Regional Health Department of the Sao Paulo state, Brazil, have been developing and implementing a web-based information system to facilitate an integrated care throughout a public regional mental health care network.\n\n\nCASE PRESENTATION\nAfter a profound on-site analysis, the structure of the network was identified and a web-based information system for psychiatric admissions and discharges was developed and implemented using a socio-technical approach. An information technology team liaised with mental health professionals, health-service managers, municipal and state health secretariats and judicial authorities. Primary care, specialized community services, general emergency and psychiatric wards services, that comprise the regional mental healthcare network, were identified and the system flow was delineated. The web-based system overcame the fragmentation of the healthcare system and addressed service specific needs, enabling: detailed patient information sharing; active coordination of the processes of psychiatric admissions and discharges; real-time monitoring; the patients' status reports; the evaluation of the performance of each service and the whole network. During a 2-year period of operation, it registered 137 services, 480 health care professionals and 4271 patients, with a mean number of 2835 accesses per month. To date the system is successfully operating and further expanding.\n\n\nCONCLUSION\nWe have successfully developed and implemented an acceptable, useful and transparent web-based information system for a regional mental healthcare service network in a medium-income country with a decentralized public health system. Systematic collaboration between an information technology team and a wide range of stakeholders is essential for the system development and implementation." }, { "pmid": "26801498", "title": "Impact of length of stay for first psychiatric admissions on the ratio of readmissions in subsequent years in a large Brazilian catchment area.", "abstract": "PURPOSE\nThis study aims to verify the impact that the length of stay has on the rates of readmission for patients who were first admitted to various inpatient psychiatric units in a large catchment area in a middle-income country.\n\n\nMETHODS\nThe study included all patients who were first admitted to the 108 acute psychiatric beds available in the catchment area of Ribeirão Preto, Brazil, for a period of 8 years. Demographic features, inpatient unit of discharge, diagnosis and length of stay were assessed by bivariate analysis. An analysis of the time span between first admission and readmission was also conducted using survival curves estimated by the Kaplan-Meier formula. For the analyses of the risk of readmissions, a logistic regression analysis was conducted.\n\n\nRESULTS\nFrom a total of 6261 patients admitted in the period of the survey, approximately one-third (2006) had at least one other readmission during the follow-up period. 
The rates per year of early readmission (within 90 days after discharge) varied from 16.1 to 20.9 %. The risk of readmission was higher immediately after discharge. The survival analysis showed that ultrashort length of stay (1-2 days) was associated with reduced odds of readmission, but multivariate logistic analysis showed no association between length of stay and the odds of readmissions. The predictors of early readmission included the diagnosis of depressive, bipolar, psychotic, and non-alcohol-related disorders, younger ages and unemployment.\n\n\nCONCLUSIONS\nDuration of the first psychiatric admission was not associated with a higher risk of readmissions. Predictors for early readmissions of first-time-admitted psychiatric patients seem to be more related to the severity of the psychiatric diagnosis and demographic characteristics." }, { "pmid": "20027489", "title": "Short admission in an emergency psychiatry unit can prevent prolonged lengths of stay in a psychiatric institution.", "abstract": "OBJECTIVE\nCharacterize and compare acute psychiatric admissions to the psychiatric wards of a general hospital (22 beds), a psychiatric hospital (80) and of an emergency psychiatry unit (6).\n\n\nMETHOD\nSurvey of the ratios and shares of the demographic, diagnostic and hospitalization variables involved in all acute admissions registered in a catchment area in Brazil between 1998 and 2004.\n\n\nRESULTS\nFrom the 11,208 admissions, 47.8% of the patients were admitted to a psychiatric hospital and 14.1% to a general hospital. The emergency psychiatry unit accounted for 38.1% of all admissions during the period, with a higher variability in occupancy rate and bed turnover during the years. Around 80% of the hospital stays lasted less than 20 days and in almost half of these cases, patients were discharged in 2 days. Although the total number of admissions remained stable during the years, in 2004, a 30% increase was seen compared to 2003. In 2004, bed turnover and occupancy rate at the emergency psychiatry unit increased.\n\n\nCONCLUSION\nThe increase in the number of psychiatric admissions in 2004 could be attributed to a lack of new community-based services available in the area beginning in 1998. Changes in the health care network did affect the emergency psychiatric service and the limitations of the community-based network could influence the rate of psychiatric admissions." }, { "pmid": "23786808", "title": "What do we actually mean by 'sociotechnical'? On values, boundaries and the problems of language.", "abstract": "The term 'sociotechnical' was first coined in the context of industrial democracy. In comparing two projects on shipping in Esso to help define the concept, the essential categories were found to be where systems boundaries were set, and what factors were considered to be relevant 'human' characteristics. This is often discussed in terms of values. During the nineteen-sixties and seventies sociotechnical theory related to the shop-floor work system, and contingency theory to the organisation as a whole, the two levels being distinct. With the coming of information technology, this distinction became blurred; the term 'socio-structural' is proposed to describe the whole system. IT sometimes is the operating technology, it sometimes supports the operating technology, or it may sometimes be mistaken for the operating technology. This is discussed with reference to recent air accidents." 
}, { "pmid": "11825268", "title": "SNOMED clinical terms: overview of the development process and project status.", "abstract": "Two large health care reference terminologies, SNOMED RT and Clinical Terms Version 3 , are in the process of being merged to form a comprehensive new work referred to as SNOMED Clinical Terms. The College of American Pathologists and the United Kingdom s National Health Service have entered into a collaborative agreement to develop this new work. Both organizations have extensive terminology development and maintenance experience. This paper discusses the process and status of SNOMED CT development and how the resources and expertise of both organizations are being used to develop this new terminological resource. The preliminary results of the merger process, including mapping, the merger of upper levels of each hierarchy, and attribute harmonization are also discussed." }, { "pmid": "26878761", "title": "Barriers and facilitators to exchanging health information: a systematic review.", "abstract": "OBJECTIVES\nWe conducted a systematic review of studies assessing facilitators and barriers to use of health information exchange (HIE).\n\n\nMETHODS\nWe searched MEDLINE, PsycINFO, CINAHL, and the Cochrane Library databases between January 1990 and February 2015 using terms related to HIE. English-language studies that identified barriers and facilitators of actual HIE were included. Data on study design, risk of bias, setting, geographic location, characteristics of the HIE, perceived barriers and facilitators to use were extracted and confirmed.\n\n\nRESULTS\nTen cross-sectional, seven multiple-site case studies, and two before-after studies that included data from several sources (surveys, interviews, focus groups, and observations of users) evaluated perceived barriers and facilitators to HIE use. The most commonly cited barriers to HIE use were incomplete information, inefficient workflow, and reports that the exchanged information that did not meet the needs of users. The review identified several facilitators to use.\n\n\nDISCUSSION\nIncomplete patient information was consistently mentioned in the studies conducted in the US but not mentioned in the few studies conducted outside of the US that take a collective approach toward healthcare. Individual patients and practices in the US may exercise the right to participate (or not) in HIE which effects the completeness of patient information available to be exchanged. Workflow structure and user roles are key but understudied.\n\n\nCONCLUSIONS\nWe identified several facilitators in the studies that showed promise in promoting electronic health data exchange: obtaining more complete patient information; thoughtful workflow that folds in HIE; and inclusion of users early in implementation." }, { "pmid": "26678413", "title": "Outcomes From Health Information Exchange: Systematic Review and Future Research Needs.", "abstract": "BACKGROUND\nHealth information exchange (HIE), the electronic sharing of clinical information across the boundaries of health care organizations, has been promoted to improve the efficiency, cost-effectiveness, quality, and safety of health care delivery.\n\n\nOBJECTIVE\nTo systematically review the available research on HIE outcomes and analyze future research needs.\n\n\nMETHODS\nData sources included citations from selected databases from January 1990 to February 2015. We included English-language studies of HIE in clinical or public health settings in any country. 
Data were extracted using dual review with adjudication of disagreements.\n\n\nRESULTS\nWe identified 34 studies on outcomes of HIE. No studies reported on clinical outcomes (eg, mortality and morbidity) or identified harms. Low-quality evidence generally finds that HIE reduces duplicative laboratory and radiology testing, emergency department costs, hospital admissions (less so for readmissions), and improves public health reporting, ambulatory quality of care, and disability claims processing. Most clinicians attributed positive changes in care coordination, communication, and knowledge about patients to HIE.\n\n\nCONCLUSIONS\nAlthough the evidence supports benefits of HIE in reducing the use of specific resources and improving the quality of care, the full impact of HIE on clinical outcomes and potential harms are inadequately studied. Future studies must address comprehensive questions, use more rigorous designs, and employ a standard for describing types of HIE.\n\n\nTRIAL REGISTRATION\nPROSPERO Registry No CRD42014013285; http://www.crd.york.ac.uk/PROSPERO/ display_record.asp?ID=CRD42014013285 (Archived by WebCite at http://www.webcitation.org/6dZhqDM8t)." }, { "pmid": "28851681", "title": "Is There Evidence of Cost Benefits of Electronic Medical Records, Standards, or Interoperability in Hospital Information Systems? Overview of Systematic Reviews.", "abstract": "BACKGROUND\nElectronic health (eHealth) interventions may improve the quality of care by providing timely, accessible information about one patient or an entire population. Electronic patient care information forms the nucleus of computerized health information systems. However, interoperability among systems depends on the adoption of information standards. Additionally, investing in technology systems requires cost-effectiveness studies to ensure the sustainability of processes for stakeholders.\n\n\nOBJECTIVE\nThe objective of this study was to assess cost-effectiveness of the use of electronically available inpatient data systems, health information exchange, or standards to support interoperability among systems.\n\n\nMETHODS\nAn overview of systematic reviews was conducted, assessing the MEDLINE, Cochrane Library, LILACS, and IEEE Library databases to identify relevant studies published through February 2016. The search was supplemented by citations from the selected papers. The primary outcome sought the cost-effectiveness, and the secondary outcome was the impact on quality of care. Independent reviewers selected studies, and disagreement was resolved by consensus. The quality of the included studies was evaluated using a measurement tool to assess systematic reviews (AMSTAR).\n\n\nRESULTS\nThe primary search identified 286 papers, and two papers were manually included. A total of 211 were systematic reviews. From the 20 studies that were selected after screening the title and abstract, 14 were deemed ineligible, and six met the inclusion criteria. The interventions did not show a measurable effect on cost-effectiveness. Despite the limited number of studies, the heterogeneity of electronic systems reported, and the types of intervention in hospital routines, it was possible to identify some preliminary benefits in quality of care. 
Hospital information systems, along with information sharing, had the potential to improve clinical practice by reducing staff errors or incidents, improving automated harm detection, monitoring infections more effectively, and enhancing the continuity of care during physician handoffs.\n\n\nCONCLUSIONS\nThis review identified some benefits in the quality of care but did not provide evidence that the implementation of eHealth interventions had a measurable impact on cost-effectiveness in hospital settings. However, further evidence is needed to infer the impact of standards adoption or interoperability in cost benefits of health care; this in turn requires further research." }, { "pmid": "25599991", "title": "Health information exchange implementation: lessons learned and critical success factors from a case study.", "abstract": "BACKGROUND\nMuch attention has been given to the proposition that the exchange of health information as an act, and health information exchange (HIE), as an entity, are critical components of a framework for health care change, yet little has been studied to understand the value proposition of implementing HIE with a statewide HIE. Such an organization facilitates the exchange of health information across disparate systems, thus following patients as they move across different care settings and encounters, whether or not they share an organizational affiliation. A sociotechnical systems approach and an interorganizational systems framework were used to examine implementation of a health system electronic medical record (EMR) system onto a statewide HIE, under a cooperative agreement with the Office of the National Coordinator for Health Information Technology, and its collaborating organizations.\n\n\nOBJECTIVE\nThe objective of the study was to focus on the implementation of a health system onto a statewide HIE; provide insight into the technical, organizational, and governance aspects of a large private health system and the Virginia statewide HIE (organizations with the shared goal of exchanging health information); and to understand the organizational motivations and value propositions apparent during HIE implementation.\n\n\nMETHODS\nWe used a formative evaluation methodology to investigate the first implementation of a health system onto the statewide HIE. Qualitative methods (direct observation, 36 hours), informal information gathering, semistructured interviews (N=12), and document analysis were used to gather data between August 12, 2012 and June 24, 2013. Derived from sociotechnical concepts, a Blended Value Collaboration Enactment Framework guided the data gathering and analysis to understand organizational stakeholders' perspectives across technical, organizational, and governance dimensions.\n\n\nRESULTS\nSeveral challenges, successes, and lessons learned during the implementation of a health system to the statewide HIE were found. The most significant perceived success was accomplishing the implementation, although many interviewees also underscored the value of a project champion with decision-making power. In terms of lessons learned, social reasons were found to be very significant motivators for early implementation, frequently outweighing economic motivations. It was clear that understanding the guides early in the project would have mitigated some of the challenges that emerged, and early communication with the electronic health record vendor so that they have a solid understanding of the undertaking was critical. 
An HIE implementations evaluation framework was found to be useful for assessing challenges, motivations, value propositions for participating, and success factors to consider for future implementations.\n\n\nCONCLUSIONS\nThis case study illuminates five critical success factors for implementation of a health system onto a statewide HIE. This study also reveals that organizations have varied motivations and value proposition perceptions for engaging in the exchange of health information, few of which, at the early stages, are economically driven." }, { "pmid": "23616891", "title": "The impact of health information exchange on health outcomes.", "abstract": "BACKGROUND AND OBJECTIVE\nHealthcare professionals, industry and policy makers have identified Health Information Exchange (HIE) as a solution to improve patient safety and overall quality of care. The potential benefits of HIE on healthcare have fostered its implementation and adoption in the United States. However,there is a dearth of publications that demonstrate HIE effectiveness. The purpose of this review was to identify and describe evidence of HIE impact on healthcare outcomes.\n\n\nMETHODS\nA database search was conducted. The inclusion criteria included original investigations in English that focused on a HIE outcome evaluation. Two independent investigators reviewed the articles. A qualitative coding approach was used to analyze the data.\n\n\nRESULTS\nOut of 207 abstracts retrieved, five articles met the inclusion criteria. Of these, 3 were randomized controlled trials, 1 involved retrospective review of data, and 1 was a prospective study. We found that HIE benefits on healthcare outcomes are still sparsely evaluated, and that among the measurements used to evaluate HIE healthcare utilization is the most widely used.\n\n\nCONCLUSIONS\nOutcomes evaluation is required to give healthcare providers and policy-makers evidence to incorporate in decision-making processes. This review showed a dearth of HIE outcomes data in the published peer reviewed literature so more research in this area is needed. Future HIE evaluations with different levels of interoperability should incorporate a framework that allows a detailed examination of HIE outcomes that are likely to positively affect care." }, { "pmid": "27123451", "title": "An Interoperability Platform Enabling Reuse of Electronic Health Records for Signal Verification Studies.", "abstract": "Depending mostly on voluntarily sent spontaneous reports, pharmacovigilance studies are hampered by low quantity and quality of patient data. Our objective is to improve postmarket safety studies by enabling safety analysts to seamlessly access a wide range of EHR sources for collecting deidentified medical data sets of selected patient populations and tracing the reported incidents back to original EHRs. We have developed an ontological framework where EHR sources and target clinical research systems can continue using their own local data models, interfaces, and terminology systems, while structural interoperability and Semantic Interoperability are handled through rule-based reasoning on formal representations of different models and terminology systems maintained in the SALUS Semantic Resource Set. SALUS Common Information Model at the core of this set acts as the common mediator. 
We demonstrate the capabilities of our framework through one of the SALUS safety analysis tools, namely, the Case Series Characterization Tool, which have been deployed on top of regional EHR Data Warehouse of the Lombardy Region containing about 1 billion records from 16 million patients and validated by several pharmacovigilance researchers with real-life cases. The results confirm significant improvements in signal detection and evaluation compared to traditional methods with the missing background information." }, { "pmid": "27480749", "title": "A methodology based on openEHR archetypes and software agents for developing e-health applications reusing legacy systems.", "abstract": "BACKGROUND AND OBJECTIVE\nIn Pervasive Healthcare, novel information and communication technologies are applied to support the provision of health services anywhere, at anytime and to anyone. Since health systems may offer their health records in different electronic formats, the openEHR Foundation prescribes the use of archetypes for describing clinical knowledge in order to achieve semantic interoperability between these systems. Software agents have been applied to simulate human skills in some healthcare procedures. This paper presents a methodology, based on the use of openEHR archetypes and agent technology, which aims to overcome the weaknesses typically found in legacy healthcare systems, thereby adding value to the systems.\n\n\nMETHODS\nThis methodology was applied in the design of an agent-based system, which was used in a realistic healthcare scenario in which a medical staff meeting to prepare a cardiac surgery has been supported. We conducted experiments with this system in a distributed environment composed by three cardiology clinics and a center of cardiac surgery, all located in the city of Marília (São Paulo, Brazil). We evaluated this system according to the Technology Acceptance Model.\n\n\nRESULTS\nThe case study confirmed the acceptance of our agent-based system by healthcare professionals and patients, who reacted positively with respect to the usefulness of this system in particular, and with respect to task delegation to software agents in general. The case study also showed that a software agent-based interface and a tools-based alternative must be provided to the end users, which should allow them to perform the tasks themselves or to delegate these tasks to other people.\n\n\nCONCLUSIONS\nA Pervasive Healthcare model requires efficient and secure information exchange between healthcare providers. The proposed methodology allows designers to build communication systems for the message exchange among heterogeneous healthcare systems, and to shift from systems that rely on informal communication of actors to a more automated and less error-prone agent-based system. Our methodology preserves significant investment of many years in the legacy systems and allows developers to extend them adding new features to these systems, by providing proactive assistance to the end-users and increasing the user mobility with an appropriate support." 
}, { "pmid": "26359473", "title": "Electronic Health Record Challenges, Workarounds, and Solutions Observed in Practices Integrating Behavioral Health and Primary Care.", "abstract": "PURPOSE\nThis article describes the electronic health record (EHR)-related experiences of practices striving to integrate behavioral health and primary care using tailored, evidenced-based strategies from 2012 to 2014; and the challenges, workarounds and initial health information technology (HIT) solutions that emerged during implementation.\n\n\nMETHODS\nThis was an observational, cross-case comparative study of 11 diverse practices, including 8 primary care clinics and 3 community mental health centers focused on the implementation of integrated care. Practice characteristics (eg, practice ownership, federal designation, geographic area, provider composition, EHR system, and patient panel characteristics) were collected using a practice information survey and analyzed to report descriptive information. A multidisciplinary team used a grounded theory approach to analyze program documents, field notes from practice observation visits, online diaries, and semistructured interviews.\n\n\nRESULTS\nEight primary care practices used a single EHR and 3 practices used 2 different EHRs, 1 to document behavioral health and 1 to document primary care information. Practices experienced common challenges with their EHRs' capabilities to 1) document and track relevant behavioral health and physical health information, 2) support communication and coordination of care among integrated teams, and 3) exchange information with tablet devices and other EHRs. Practices developed workarounds in response to these challenges: double documentation and duplicate data entry, scanning and transporting documents, reliance on patient or clinician recall for inaccessible EHR information, and use of freestanding tracking systems. As practices gained experience with integration, they began to move beyond workarounds to more permanent HIT solutions ranging in complexity from customized EHR templates, EHR upgrades, and unified EHRs.\n\n\nCONCLUSION\nIntegrating behavioral health and primary care further burdens EHRs. Vendors, in cooperation with clinicians, should intentionally design EHR products that support integrated care delivery functions, such as data documentation and reporting to support tracking patients with emotional and behavioral problems over time and settings, integrated teams working from shared care plans, template-driven documentation for common behavioral health conditions such as depression, and improved registry functionality and interoperability. This work will require financial support and cooperative efforts among clinicians, EHR vendors, practice assistance organizations, regulators, standards setters, and workforce educators." 
}, { "pmid": "22184253", "title": "Behavioral health providers' beliefs about health information exchange: a statewide survey.", "abstract": "OBJECTIVE\nTo assess behavioral health providers' beliefs about the benefits and barriers of health information exchange (HIE).\n\n\nMETHODS\nSurvey of a total of 2010 behavioral health providers in a Midwestern state (33% response rate), with questions based on previously reported open-ended beliefs elicitation interviews.\n\n\nRESULTS\nFactor analysis resulted in four groupings: beliefs that HIE would improve care and communication, add cost and time burdens, present access and vulnerability concerns, and impact workflow and control (positively and negatively). A regression model including all four factors parsimoniously predicted attitudes toward HIE. Providers clustered into two groups based on their beliefs: a majority (67%) were positive about the impact of HIE, and the remainder (33%) were negative. There were some professional/demographic differences between the two clusters of providers.\n\n\nDISCUSSION\nMost behavioral health providers are supportive of HIE; however, their adoption and use of it may continue to lag behind that of medical providers due to perceived cost and time burdens and concerns about access to and vulnerability of information." }, { "pmid": "29881743", "title": "Health Information Exchange Use (1990-2015): A Systematic Review.", "abstract": "BACKGROUND\nIn June 2014, the Office of the National Coordinator for Health Information Technology published a 10-year roadmap for the United States to achieve interoperability of electronic health records (EHR) by 2024. A key component of this strategy is the promotion of nationwide health information exchange (HIE). The 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act provided significant investments to achieve HIE.\n\n\nOBJECTIVE\nWe conducted a systematic literature review to describe the use of HIE through 2015.\n\n\nMETHODS\nWe searched MEDLINE, PsycINFO, CINAHL, and Cochrane databases (1990 - 2015); reference lists; and tables of contents of journals not indexed in the databases searched. We extracted data describing study design, setting, geographic location, characteristics of HIE implementation, analysis, follow-up, and results. Study quality was dual-rated using pre-specified criteria and discrepancies resolved through consensus.\n\n\nRESULTS\nWe identified 58 studies describing either level of use or primary uses of HIE. These were a mix of surveys, retrospective database analyses, descriptions of audit logs, and focus groups. Settings ranged from community-wide to multinational. Results suggest that HIE use has risen substantially over time, with 82% of non-federal hospitals exchanging information (2015), 38% of physician practices (2013), and 17-23% of long-term care facilities (2013). Statewide efforts, originally funded by HITECH, varied widely, with a small number of states providing the bulk of the data. Characteristics of greater use include the presence of an EHR, larger practice size, and larger market share of the health-system.\n\n\nCONCLUSIONS\nUse of HIE in the United States is growing but is still limited. Opportunities remain for expansion. Characteristics of successful implementations may provide a path forward." } ]
Frontiers in Neurorobotics
30618707
PMC6304372
10.3389/fnbot.2018.00086
DeepDynamicHand: A Deep Neural Architecture for Labeling Hand Manipulation Strategies in Video Sources Exploiting Temporal Information
Humans are capable of complex manipulation interactions with the environment, relying on the intrinsic adaptability and compliance of their hands. Recently, soft robotic manipulation has attempted to reproduce such extraordinary behavior through the design of deformable yet robust end-effectors. To this end, the investigation of human behavior has become crucial to correctly inform technological developments of robotic hands that can successfully exploit environmental constraints as humans actually do. Among the different tools robotics can leverage to achieve this objective, deep learning has emerged as a promising approach for the study, and subsequent implementation, of neuro-scientific observations on the artificial side. However, current approaches tend to neglect the dynamic nature of hand pose recognition problems, limiting the effectiveness of these techniques in identifying sequences of manipulation primitives underpinning action generation, e.g., during purposeful interaction with the environment. In this work, we propose a vision-based supervised Hand Pose Recognition method which, for the first time, takes into account temporal information to identify meaningful sequences of actions in grasping and manipulation tasks. More specifically, we apply Deep Neural Networks to automatically learn features from hand posture images that consist of frames extracted from grasping and manipulation task videos with objects and external environmental constraints. For training purposes, videos are divided into intervals, each associated with a specific action by a human supervisor. The proposed algorithm combines a Convolutional Neural Network to detect the hand within each video frame and a Recurrent Neural Network to predict the hand action in the current frame, while taking into consideration the history of actions performed in the previous frames. Experimental validation has been performed on two datasets of dynamic hand-centric strategies, in which subjects regularly interact with objects and the environment. The proposed architecture achieved very good classification accuracy on both datasets, reaching performance of up to 94% and outperforming state-of-the-art techniques. The outcomes of this study can be successfully applied to robotics, e.g., for planning and control of soft anthropomorphic manipulators.
2. Related WorkIn the past few years, deep learning has proven to be an effective tool for image classification (Simonyan and Zisserman, 2014), object detection (Girshick et al., 2014), as well as face and hand recognition tasks (Bambach et al., 2015; Parkhi et al., 2015). CNNs are currently the most popular building block for machine vision applications, thanks to their ability to process raw data effectively, automating a feature extraction process that previously required heavily hand-engineered procedures. When it comes to CNN applications in robotic manipulation, Yang et al. (2015) proposed a system to learn manipulation actions by processing unconstrained videos. The system consists of two CNN-based recognition modules, one for classifying the hand grasp type and the other for object recognition. The manipulation action is then learned and generated through a grammar parser module, which combines the objects with the left- and right-hand grasping types. In that work, dynamic information on the temporal correlation between video frames is not considered. In Bambach et al. (2015), a hand-based activity recognition method was developed to classify whole labeled video frames into one of four activity types (playing cards, chess, Jenga, and solving a puzzle). To incorporate temporal dependencies, each frame is classified using a fixed-size temporal window centered on the frame. Results show that temporal information significantly increases the accuracy of activity recognition, even though a simple voting-based approach is used. Furthermore, the classification is performed on the whole video sequence, so the dynamic action components underpinning the complete task execution cannot be identified. To enable successful exploitation and learning of robust spatio-temporal features, CNNs are often combined with RNNs (Karpathy et al., 2014; Ng et al., 2015; Nguyen et al., 2017), typically exploiting LSTM recurrent cells, which can store a compressed representation of medium-range temporal relationships in the input data. For instance, in the context of video captioning applications, Nguyen et al. (2017) proposed a method that combines a CNN to extract visual features from the video frames with two LSTM layers to generate a network prediction defined as a list of words describing hand grasping and object types. The visual features are frame-oriented: this means that there is no explicit information concerning hand poses, nor any temporal sequence characterizing the dynamics of the hand movement and pose evolution. Such static descriptions are interesting for the analysis of contact configurations between the hand and objects, but the dynamics of the action are not taken into account by this model. In Wang et al. (2018), the authors used a combination of autoencoders and support vector machine classifiers to automatically recognize everyday human activities (such as driving, walking, etc.). However, the dynamic segmentation of action primitives, which we target in this paper for grasping, was outside the scope of that work. In Garcia-Hernando et al. (2017), a model for hand-pose estimation was proposed that employed depth information in addition to the RGB data that we also use in our work. In Sudhakaran and Lanz (2017), a combined use of a CNN and an RNN, similar to the one we present in this manuscript, was proposed for the recognition of first-person perspective interactions.
The dataset they used to validate their architecture consists of first-person interaction videos: each video contains the execution of a single activity from among several activity types. Very recently, in Zhang Y. et al. (2018), a new benchmark dataset was released to evaluate state-of-the-art deep neural networks for activity recognition from egocentric videos. The focus was on recognizing daily actions rather than manipulation primitives, and the presence of depth information, together with luminance and color, was assumed. This considerably restricts the range of processable videos with respect to our general approach (intended to analyze videos from non-specialist sources such as YouTube). To the authors' best knowledge, our work is the first to apply machine learning techniques to extract dynamic, time-related information on human hand poses from videos involving complex interactions of the hand with the environment. More specifically, the visual features are hand-centric and the videos we use are labeled in a dynamic fashion, taking into account both pre-grasp and grasping actions.
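To make the CNN-RNN combination discussed above concrete, the sketch below pairs a CNN backbone used as a per-frame feature extractor with an LSTM that labels every frame while conditioning on the history of preceding frames. It is only an illustrative outline under assumed choices (a ResNet-18 backbone from a recent torchvision release, 8 action classes, hand-centric crops as input) and is not the implementation of DeepDynamicHand or of any of the cited works.

```python
# Minimal sketch, assuming torch and a recent torchvision are available.
import torch
import torch.nn as nn
import torchvision.models as models

class FrameActionLabeler(nn.Module):
    def __init__(self, num_actions=8, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # per-frame CNN feature extractor (assumed backbone)
        feat_dim = backbone.fc.in_features         # 512 for ResNet-18
        backbone.fc = nn.Identity()                # keep pooled features, drop the ImageNet classifier
        self.backbone = backbone
        self.lstm = nn.LSTM(feat_dim, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_actions)

    def forward(self, clip):                       # clip: (batch, time, 3, H, W) hand-centric crops
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1))  # (batch*time, feat_dim)
        feats = feats.view(b, t, -1)
        hidden, _ = self.lstm(feats)               # hidden state carries the history of previous frames
        return self.classifier(hidden)             # per-frame action logits: (batch, time, num_actions)

if __name__ == "__main__":
    clip = torch.randn(2, 16, 3, 224, 224)         # 2 toy clips of 16 frames each
    print(FrameActionLabeler()(clip).shape)        # torch.Size([2, 16, 8])
```

A per-frame cross-entropy loss over supervisor-labeled intervals would train such a model end to end.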
[ "27271621", "28900393", "3978150", "9377276", "26959679", "27992265", "26017442", "26923030" ]
[ { "pmid": "27271621", "title": "A Synergy-Based Optimally Designed Sensing Glove for Functional Grasp Recognition.", "abstract": "Achieving accurate and reliable kinematic hand pose reconstructions represents a challenging task. The main reason for this is the complexity of hand biomechanics, where several degrees of freedom are distributed along a continuous deformable structure. Wearable sensing can represent a viable solution to tackle this issue, since it enables a more natural kinematic monitoring. However, the intrinsic accuracy (as well as the number of sensing elements) of wearable hand pose reconstruction (HPR) systems can be severely limited by ergonomics and cost considerations. In this paper, we combined the theoretical foundations of the optimal design of HPR devices based on hand synergy information, i.e., the inter-joint covariation patterns, with textile goniometers based on knitted piezoresistive fabrics (KPF) technology, to develop, for the first time, an optimally-designed under-sensed glove for measuring hand kinematics. We used only five sensors optimally placed on the hand and completed hand pose reconstruction (described according to a kinematic model with 19 degrees of freedom) leveraging upon synergistic information. The reconstructions we obtained from five different subjects were used to implement an unsupervised method for the recognition of eight functional grasps, showing a high degree of accuracy and robustness." }, { "pmid": "28900393", "title": "Postural Hand Synergies during Environmental Constraint Exploitation.", "abstract": "Humans are able to intuitively exploit the shape of an object and environmental constraints to achieve stable grasps and perform dexterous manipulations. In doing that, a vast range of kinematic strategies can be observed. However, in this work we formulate the hypothesis that such ability can be described in terms of a synergistic behavior in the generation of hand postures, i.e., using a reduced set of commonly used kinematic patterns. This is in analogy with previous studies showing the presence of such behavior in different tasks, such as grasping. We investigated this hypothesis in experiments performed by six subjects, who were asked to grasp objects from a flat surface. We quantitatively characterized hand posture behavior from a kinematic perspective, i.e., the hand joint angles, in both pre-shaping and during the interaction with the environment. To determine the role of tactile feedback, we repeated the same experiments but with subjects wearing a rigid shell on the fingertips to reduce cutaneous afferent inputs. Results show the persistence of at least two postural synergies in all the considered experimental conditions and phases. Tactile impairment does not alter significantly the first two synergies, and contact with the environment generates a change only for higher order Principal Components. A good match also arises between the first synergy found in our analysis and the first synergy of grasping as quantified by previous work. The present study is motivated by the interest of learning from the human example, extracting lessons that can be applied in robot design and control. Thus, we conclude with a discussion on implications for robotics of our findings." 
}, { "pmid": "3978150", "title": "A theoretical model of phase transitions in human hand movements.", "abstract": "Earlier experimental studies by one of us (Kelso, 1981a, 1984) have shown that abrupt phase transitions occur in human hand movements under the influence of scalar changes in cycling frequency. Beyond a critical frequency the originally prepared out-of-phase, antisymmetric mode is replaced by a symmetrical, in-phase mode involving simultaneous activation of homologous muscle groups. Qualitatively, these phase transitions are analogous to gait shifts in animal locomotion as well as phenomena common to other physical and biological systems in which new \"modes\" or spatiotemporal patterns arise when the system is parametrically scaled beyond its equilibrium state (Haken, 1983). In this paper a theoretical model, using concepts central to the interdisciplinary field of synergetics and nonlinear oscillator theory, is developed, which reproduces (among other features) the dramatic change in coordinative pattern observed between the hands." }, { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." }, { "pmid": "26959679", "title": "What Makes for Effective Detection Proposals?", "abstract": "Current top performing object detectors employ detection proposals to guide the search for objects, thereby avoiding exhaustive sliding window search across images. Despite the popularity and widespread use of detection proposals, it is unclear which trade-offs are made when using them during object detection. We provide an in-depth analysis of twelve proposal methods along with four baselines regarding proposal repeatability, ground truth annotation recall on PASCAL, ImageNet, and MS COCO, and their impact on DPM, R-CNN, and Fast R-CNN detection performance. Our analysis shows that for object detection improving proposal localisation accuracy is as important as improving recall. We introduce a novel metric, the average recall (AR), which rewards both high recall and good localisation and correlates surprisingly well with detection performance. Our findings show common strengths and weaknesses of existing methods, and provide insights and metrics for selecting and tuning proposal methods." 
}, { "pmid": "27992265", "title": "Recent Data Sets on Object Manipulation: A Survey.", "abstract": "Data sets is crucial not only for model learning and evaluation but also to advance knowledge on human behavior, thus fostering mutual inspiration between neuroscience and robotics. However, choosing the right data set to use or creating a new data set is not an easy task, because of the variety of data that can be found in the related literature. The first step to tackle this issue is to collect and organize those that are available. In this work, we take a significant step forward by reviewing data sets that were published in the past 10 years and that are directly related to object manipulation and grasping. We report on modalities, activities, and annotations for each individual data set and we discuss our view on its use for object manipulation. We also compare the data sets and summarize them. Finally, we conclude the survey by providing suggestions and discussing the best practices for the creation of new data sets." }, { "pmid": "26017442", "title": "Deep learning.", "abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech." }, { "pmid": "26923030", "title": "Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands.", "abstract": "The term 'synergy' - from the Greek synergia - means 'working together'. The concept of multiple elements working together towards a common goal has been extensively used in neuroscience to develop theoretical frameworks, experimental approaches, and analytical techniques to understand neural control of movement, and for applications for neuro-rehabilitation. In the past decade, roboticists have successfully applied the framework of synergies to create novel design and control concepts for artificial hands, i.e., robotic hands and prostheses. At the same time, robotic research on the sensorimotor integration underlying the control and sensing of artificial hands has inspired new research approaches in neuroscience, and has provided useful instruments for novel experiments. The ambitious goal of integrating expertise and research approaches in robotics and neuroscience to study the properties and applications of the concept of synergies is generating a number of multidisciplinary cooperative projects, among which the recently finished 4-year European project \"The Hand Embodied\" (THE). This paper reviews the main insights provided by this framework. 
Specifically, we provide an overview of neuroscientific bases of hand synergies and introduce how robotics has leveraged the insights from neuroscience for innovative design in hardware and controllers for biomedical engineering applications, including myoelectric hand prostheses, devices for haptics research, and wearable sensing of human hand kinematics. The review also emphasizes how this multidisciplinary collaboration has generated new ways to conceptualize a synergy-based approach for robotics, and provides guidelines and principles for analyzing human behavior and synthesizing artificial robotic systems based on a theory of synergies." } ]
PLoS Computational Biology
30589834
PMC6307714
10.1371/journal.pcbi.1006578
Trajectory-based training enables protein simulations with accurate folding and Boltzmann ensembles in cpu-hours
An ongoing challenge in protein chemistry is to identify the underlying interaction energies that capture protein dynamics. The traditional trade-off in biomolecular simulation between accuracy and computational efficiency is predicated on the assumption that detailed force fields are typically well-parameterized, obtaining a significant fraction of possible accuracy. We re-examine this trade-off in the more realistic regime in which parameterization is a greater source of error than the level of detail in the force field. To address parameterization of coarse-grained force fields, we use the contrastive divergence technique from machine learning to train from simulations of 450 proteins. In our procedure, the computational efficiency of the model enables high accuracy through the precise tuning of the Boltzmann ensemble. This method is applied to our recently developed Upside model, where the free energy for side chains is rapidly calculated at every time-step, allowing for a smooth energy landscape without steric rattling of the side chains. After this contrastive divergence training, the model is able to de novo fold proteins up to 100 residues on a single core in days. This improved Upside model provides a starting point both for investigation of folding dynamics and as an inexpensive Bayesian prior for protein physics that can be integrated with additional experimental or bioinformatic data.
Related workContrastive divergence optimization has been applied to Gō-like protein potentials sampled with crankshaft Monte Carlo moves [22, 23]. These works optimized only tens of parameters, and the resulting model was used to fold protein G and 16-residue peptides. Other studies have trained protein energy functions using libraries of decoys [24]. Such efforts are challenging because atomic energy functions have rugged energy landscapes where even small structural differences can produce large energy differences. This ruggedness implies that scoring decoys by energy without first relaxing them is problematic for the sharply defined force fields necessary to describe protein physics, a problem that contrastive divergence avoids. A distinction between contrastive divergence and traditional training methods, such as Z-score optimization [25], relates to the goal and the source of the decoys. In contrastive divergence, the critical task is to produce a high population of low-RMSD structures with the model. Z-score training attempts to make the energy of the native state much lower than the average energy of a pre-constructed decoy library. This is problematic because the decoys may not have structures that exhibit the pathologies of a poorly trained model. Additionally, we believe optimization should concentrate on the lowest energies that have significant Boltzmann probability, not the average energy, which is dominated by highly unlikely structures. Furthermore, it is difficult to evaluate reliable energies for decoys without first relaxing them. Methods based on simulation ensembles (such as maximum likelihood and contrastive divergence) are well-defined and do not need pre-constructed decoy libraries. Podtelezhnikov et al. [26] apply contrastive divergence to few-parameter protein models to optimize the parameters of hydrogen bond geometry. Their work is similar to this paper but narrower in scope. The maximum likelihood method requires the computation of the derivative of the free energy, which involves a summation over an equilibrium ensemble. Such a requirement necessitates a very long simulation to update parameters. Still, this approach can be viable when used with very small proteins on which the simulations converge quickly. A variant of maximum likelihood is given in Ref. [27], where decoys are generated and a maximum likelihood model is fit to adjust the parameters to distinguish between near-native and far-from-native conformations. The potential is trained on a single protein, the tryptophan cage, and then the resulting potential is applied to a number of α-helical proteins with some success.
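As a schematic illustration of the contrastive-divergence update discussed above, the sketch below assumes an energy that is linear in its parameters, E(x; θ) = θ · f(x), so that the likelihood gradient reduces to a difference of feature averages. The feature map and the short_simulation sampler are hypothetical placeholders, not the Upside model's actual energy terms or dynamics.

```python
# Minimal sketch under the stated linear-energy assumption; not the authors' training code.
import numpy as np

def cd_update(theta, native_confs, features, short_simulation, lr=1e-3, n_steps=100):
    """One contrastive-divergence step for an energy linear in its parameters,
    E(x; theta) = theta . f(x), with Boltzmann weight exp(-E)."""
    f_data = np.mean([features(x) for x in native_confs], axis=0)
    # Short trajectories are initialized at the experimental structures, so only a
    # brief simulation is needed instead of a fully equilibrated ensemble.
    perturbed = [short_simulation(x, theta, n_steps) for x in native_confs]
    f_model = np.mean([features(x) for x in perturbed], axis=0)
    # Approximate log-likelihood ascent: lower the energy of native-like features
    # relative to those produced by the model's own dynamics.
    return theta + lr * (f_model - f_data)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    natives = [rng.normal(size=30) for _ in range(5)]                   # toy "conformations"
    feats = lambda x: np.array([x.mean(), x.std(), np.abs(x).mean()])   # toy feature map f(x)
    sim = lambda x, theta, n: x + 0.1 * rng.normal(size=x.shape)        # toy short dynamics
    print(cd_update(np.zeros(3), natives, feats, sim))
```

Because each short trajectory starts from an experimental structure, a brief simulation per update suffices, which is what makes contrastive divergence far cheaper than evaluating the full maximum-likelihood gradient over an equilibrium ensemble.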
[ "23889448", "19514729", "20442867", "22034434", "25255057", "27847872", "27378298", "10869041", "15314214", "15066438", "24683370", "26263302", "19791846" ]
[ { "pmid": "23889448", "title": "Simplified protein models: predicting folding pathways and structure using amino acid sequences.", "abstract": "We demonstrate the ability of simultaneously determining a protein's folding pathway and structure using a properly formulated model without prior knowledge of the native structure. Our model employs a natural coordinate system for describing proteins and a search strategy inspired by the observation that real proteins fold in a sequential fashion by incrementally stabilizing nativelike substructures or \"foldons.\" Comparable folding pathways and structures are obtained for the twelve proteins recently studied using atomistic molecular dynamics simulations [K. Lindorff-Larsen, S. Piana, R. O. Dror, D. E. Shaw, Science 334, 517 (2011)], with our calculations running several orders of magnitude faster. We find that nativelike propensities in the unfolded state do not necessarily determine the order of structure formation, a departure from a major conclusion of the molecular dynamics study. Instead, our results support a more expansive view wherein intrinsic local structural propensities may be enhanced or overridden in the folding process by environmental context. The success of our search strategy validates it as an expedient mechanism for folding both in silico and in vivo." }, { "pmid": "19514729", "title": "Optimized molecular dynamics force fields applied to the helix-coil transition of polypeptides.", "abstract": "Obtaining the correct balance of secondary structure propensities is a central priority in protein force-field development. Given that current force fields differ significantly in their alpha-helical propensities, a correction to match experimental results would be highly desirable. We have determined simple backbone energy corrections for two force fields to reproduce the fraction of helix measured in short peptides at 300 K. As validation, we show that the optimized force fields produce results in excellent agreement with nuclear magnetic resonance experiments for folded proteins and short peptides not used in the optimization. However, despite the agreement at ambient conditions, the dependence of the helix content on temperature is too weak, a problem shared with other force fields. A fit of the Lifson-Roig helix-coil theory shows that both the enthalpy and entropy of helix formation are too small: the helix extension parameter w agrees well with experiment, but its entropic and enthalpic components are both only about half the respective experimental estimates. Our structural and thermodynamic analyses point toward the physical origins of these shortcomings in current force fields, and suggest ways to address them in future force-field development." }, { "pmid": "20442867", "title": "Neighbor-dependent Ramachandran probability distributions of amino acids developed from a hierarchical Dirichlet process model.", "abstract": "Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. 
These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp." }, { "pmid": "22034434", "title": "How fast-folding proteins fold.", "abstract": "An outstanding challenge in the field of molecular biology has been to understand the process by which proteins fold into their characteristic three-dimensional structures. Here, we report the results of atomic-level molecular dynamics simulations, over periods ranging between 100 μs and 1 ms, that reveal a set of common principles underlying the folding of 12 structurally diverse proteins. In simulations conducted with a single physics-based energy function, the proteins, representing all three major structural classes, spontaneously and repeatedly fold to their experimentally determined native structures. Early in the folding process, the protein backbone adopts a nativelike topology while certain secondary structure elements and a small number of nonlocal contacts form. In most cases, folding follows a single dominant route in which elements of the native structure appear in an order highly correlated with their propensity to form in the unfolded state." }, { "pmid": "25255057", "title": "Folding simulations for proteins with diverse topologies are accessible in days with a physics-based force field and implicit solvent.", "abstract": "The millisecond time scale needed for molecular dynamics simulations to approach the quantitative study of protein folding is not yet routine. One approach to extend the simulation time scale is to perform long simulations on specialized and expensive supercomputers such as Anton. Ideally, however, folding simulations would be more economical while retaining reasonable accuracy, and provide feedback on structure, stability and function rapidly enough if partnered directly with experiment. 
Approaches to this problem typically involve varied compromises between accuracy, precision, and cost; the goal here is to address whether simple implicit solvent models have become sufficiently accurate for their weaknesses to be offset by their ability to rapidly provide much more precise conformational data as compared to explicit solvent. We demonstrate that our recently developed physics-based model performs well on this challenge, enabling accurate all-atom simulated folding for 16 of 17 proteins with a variety of sizes, secondary structure, and topologies. The simulations were carried out using the Amber software on inexpensive GPUs, providing ∼1 μs/day per GPU, and >2.5 ms data presented here. We also show that native conformations are preferred over misfolded structures for 14 of the 17 proteins. For the other 3, misfolded structures are thermodynamically preferred, suggesting opportunities for further improvement." }, { "pmid": "27847872", "title": "Blind protein structure prediction using accelerated free-energy simulations.", "abstract": "We report a key proof of principle of a new acceleration method [Modeling Employing Limited Data (MELD)] for predicting protein structures by molecular dynamics simulation. It shows that such Boltzmann-satisfying techniques are now sufficiently fast and accurate to predict native protein structures in a limited test within the Critical Assessment of Structure Prediction (CASP) community-wide blind competition." }, { "pmid": "27378298", "title": "Performance of protein-structure predictions with the physics-based UNRES force field in CASP11.", "abstract": "Participating as the Cornell-Gdansk group, we have used our physics-based coarse-grained UNited RESidue (UNRES) force field to predict protein structure in the 11th Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction (CASP11). Our methodology involved extensive multiplexed replica exchange simulations of the target proteins with a recently improved UNRES force field to provide better reproductions of the local structures of polypeptide chains. All simulations were started from fully extended polypeptide chains, and no external information was included in the simulation process except for weak restraints on secondary structure to enable us to finish each prediction within the allowed 3-week time window. Because of simplified UNRES representation of polypeptide chains, use of enhanced sampling methods, code optimization and parallelization and sufficient computational resources, we were able to treat, for the first time, all 55 human prediction targets with sizes from 44 to 595 amino acid residues, the average size being 251 residues. Complete structures of six single-domain proteins were predicted accurately, with the highest accuracy being attained for the T0769, for which the CαRMSD was 3.8 Å for 97 residues of the experimental structure. Correct structures were also predicted for 13 domains of multi-domain proteins with accuracy comparable to that of the best template-based modeling methods. With further improvements of the UNRES force field that are now underway, our physics-based coarse-grained approach to protein-structure prediction will eventually reach global prediction capacity and, consequently, reliability in simulating protein structure and dynamics that are important in biochemical processes.\n\n\nAVAILABILITY AND IMPLEMENTATION\nFreely available on the web at http://www.unres.pl/ CONTACT: [email protected]." 
}, { "pmid": "10869041", "title": "The PSIPRED protein structure prediction server.", "abstract": "SUMMARY\nThe PSIPRED protein structure prediction server allows users to submit a protein sequence, perform a prediction of their choice and receive the results of the prediction both textually via e-mail and graphically via the web. The user may select one of three prediction methods to apply to their sequence: PSIPRED, a highly accurate secondary structure prediction method; MEMSAT 2, a new version of a widely used transmembrane topology prediction method; or GenTHREADER, a sequence profile based fold recognition method.\n\n\nAVAILABILITY\nFreely available to non-commercial users at http://globin.bio.warwick.ac.uk/psipred/" }, { "pmid": "15314214", "title": "Random-coil behavior and the dimensions of chemically unfolded proteins.", "abstract": "Spectroscopic studies have identified a number of proteins that appear to retain significant residual structure under even strongly denaturing conditions. Intrinsic viscosity, hydrodynamic radii, and small-angle x-ray scattering studies, in contrast, indicate that the dimensions of most chemically denatured proteins scale with polypeptide length by means of the power-law relationship expected for random-coil behavior. Here we further explore this discrepancy by expanding the length range of characterized denatured-state radii of gyration (R(G)) and by reexamining proteins that reportedly do not fit the expected dimensional scaling. We find that only 2 of 28 crosslink-free, prosthetic-group-free, chemically denatured polypeptides deviate significantly from a power-law relationship with polymer length. The R(G) of the remaining 26 polypeptides, which range from 16 to 549 residues, are well fitted (r(2) = 0.988) by a power-law relationship with a best-fit exponent, 0.598 +/- 0.028, coinciding closely with the 0.588 predicted for an excluded volume random coil. Therefore, it appears that the mean dimensions of the large majority of chemically denatured proteins are effectively indistinguishable from the mean dimensions of a random-coil ensemble." }, { "pmid": "15066438", "title": "Early collapse is not an obligate step in protein folding.", "abstract": "The dimensions and secondary structure content of two proteins which fold in a two-state manner are measured within milliseconds of denaturant dilution using synchrotron-based, stopped-flow small-angle X-ray scattering and far-UV circular dichroism spectroscopy. Even upon a jump to strongly native conditions, neither ubiquitin nor common-type acylphosphatase contract prior to the major folding event. Circular dichroism and fluorescence indicate that negligible amounts of secondary and tertiary structures form in the burst phase. Thus, for these two denatured states, collapse and secondary structure formation are not energetically downhill processes even under aqueous, low-denaturant conditions. In addition, water appears to be as good a solvent as that with high concentrations of denaturant, when considering the over-all dimensions of the denatured state. However, the removal of denaturant does subtly alter the distribution of backbone dihedral phi,psi angles, most likely resulting in a shift from the polyproline II region to the helical region of the Ramachandran map. We consider the thermodynamic origins of these behaviors along with implications for folding mechanisms and computer simulations thereof." 
}, { "pmid": "24683370", "title": "Efficient Parameter Estimation of Generalizable Coarse-Grained Protein Force Fields Using Contrastive Divergence: A Maximum Likelihood Approach.", "abstract": "Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They maximize the likelihood of some experimentally observed data, with respect to the model parameters iteratively, following the gradient of the logarithm of the likelihood. Here, we employ a ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Go̅-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperforms the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields with different energy functions. The software is available at https://sites.google.com/site/crankite/." }, { "pmid": "26263302", "title": "A Maximum-Likelihood Approach to Force-Field Calibration.", "abstract": "A new approach to the calibration of the force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. ( J. Phys. Chem. B 2012 , 116 , 6898 - 6907 ), were used. 
Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the training set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES. The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions." }, { "pmid": "19791846", "title": "Progress and challenges in the automated construction of Markov state models for full protein systems.", "abstract": "Markov state models (MSMs) are a powerful tool for modeling both the thermodynamics and kinetics of molecular systems. In addition, they provide a rigorous means to combine information from multiple sources into a single model and to direct future simulations/experiments to minimize uncertainties in the model. However, constructing MSMs is challenging because doing so requires decomposing the extremely high dimensional and rugged free energy landscape of a molecular system into long-lived states, also called metastable states. Thus, their application has generally required significant chemical intuition and hand-tuning. To address this limitation we have developed a toolkit for automating the construction of MSMs called MSMBUILDER (available at https://simtk.org/home/msmbuilder). In this work we demonstrate the application of MSMBUILDER to the villin headpiece (HP-35 NleNle), one of the smallest and fastest folding proteins. We show that the resulting MSM captures both the thermodynamics and kinetics of the original molecular dynamics of the system. As a first step toward experimental validation of our methodology we show that our model provides accurate structure prediction and that the longest timescale events correspond to folding." } ]
JMIR mHealth and uHealth
30552085
PMC6315235
10.2196/mhealth.9623
Technology Adoption, Motivational Aspects, and Privacy Concerns of Wearables in the German Running Community: Field Study
Background: Despite the availability of a great variety of consumer-oriented wearable devices, perceived usefulness, user satisfaction, and privacy concerns have not been fully investigated in the field of wearable applications. It is not clear why healthy, active citizens equip themselves with wearable technology for running activities, and what privacy and data sharing features might influence their individual decisions. Objective: The primary aim of the study was to shed light on motivational and privacy aspects of wearable technology used by healthy, active citizens. A secondary aim was to reevaluate smart technology adoption within the running community in Germany in 2017 and to compare it with the results of other studies and our own study from 2016. Methods: A questionnaire was designed to assess what wearable technology is used by runners of different ages and sex. Data on motivational factors were also collected. The survey was conducted at a regional road race event in May 2017, paperless via a self-implemented app. The demographic parameters of the sample cohort were compared with the event’s official starter list. In addition, the validation included comparison with demographic parameters of the largest German running events in Berlin, Hamburg, and Frankfurt/Main. Binary logistic regression analysis was used to investigate whether age, sex, or course distance were associated with device use. The same method was applied to analyze whether a runner’s age was predictive of privacy concerns, openness to voluntary data sharing, and level of trust in one’s own body for runners not using wearables (ie, technological assistance considered unnecessary in this group). Results: A total of 845 questionnaires were collected. Use of technology for activity monitoring during events or training was prevalent (73.0%, 617/845) in this group. Male long-distance runners and runners in younger age groups (30-39 years: odds ratio [OR] 2.357, 95% CI 1.378-4.115; 40-49 years: OR 1.485, 95% CI 0.920-2.403) were more likely to use tracking devices, with ages 16 to 29 years as the reference group (OR 1). Where wearable technology was used, 42.0% (259/617) stated that they were not concerned if data might be shared by a device vendor without their consent. By contrast, 35.0% (216/617) of the participants would not accept this. In the case of voluntary sharing, runners preferred to exchange tracked data with friends (51.7%, 319/617), family members (43.4%, 268/617), or a physician (32.3%, 199/617). A large proportion (68.0%, 155/228) of runners not using technology stated that they preferred to trust what their own body was telling them rather than trust a device or an app (50-59 years: P<.001; 60-69 years: P=.008). Conclusions: A total of 136 distinct devices by 23 vendors or manufacturers and 17 running apps were identified. Three out of 4 runners (76.8%, 474/617) always trusted the data tracked by their personal device. Data privacy concerns do, however, exist in the German running community, especially for older age groups (30-39 years: OR 1.041, 95% CI 0.371-0.905; 40-49 years: OR 1.421, 95% CI 0.813-2.506; 50-59 years: OR 2.076, 95% CI 1.813-3.686; 60-69 years: OR 2.394, 95% CI 0.957-6.183).
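The abstract above reports odds ratios (ORs) with 95% CIs from a binary logistic regression of device use on age group, sex, and course distance. The sketch below shows, on synthetic data only (not the study's survey responses), how such ORs and CIs can be obtained by exponentiating fitted logit coefficients; the column names and the statsmodels-based workflow are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Minimal sketch: binary logistic regression with odds ratios, on synthetic data.
# Hypothetical column names; this is not the study's dataset or code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 845  # same order of magnitude as the survey, but entirely simulated
df = pd.DataFrame({
    "uses_device": rng.integers(0, 2, size=n),  # 1 = uses a tracker
    "age_group": rng.choice(["16-29", "30-39", "40-49", "50-59", "60-69"], size=n),
    "sex": rng.choice(["female", "male"], size=n),
    "distance": rng.choice(["10k", "half_marathon"], size=n),
})

# Fit the logistic regression; "16-29" is the reference age group (OR 1),
# mirroring the reference category described in the abstract.
model = smf.logit(
    "uses_device ~ C(age_group, Treatment(reference='16-29')) + C(sex) + C(distance)",
    data=df,
).fit(disp=False)

# Exponentiating coefficients and their confidence bounds yields ORs with 95% CIs.
ci = model.conf_int()
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI 2.5%": np.exp(ci[0]),
    "CI 97.5%": np.exp(ci[1]),
})
print(odds_ratios.round(3))
```

Because the simulated outcome is independent of the predictors, the estimated ORs will hover around 1; the sketch only illustrates the mechanics of deriving ORs from logit coefficients.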
Related Work
A number of studies have been performed in a variety of settings. These include several studies that have investigated the accuracy of commercially available wearable devices, mostly in laboratory settings, for example, treadmill experiments [12-14]. These studies mainly focused on the technical features and capabilities of the devices, and reported results from small samples and homogeneous cohorts, that is, younger and active males. A study conducted by Kaewkannate and Kim compared “the accuracy of four wearable devices in conjunction with user friendliness and satisfaction” [15] in a small cohort (n=7) of graduate students, 6 of whom were healthy males.

By contrast, Mercer et al focused on older adults living with chronic illnesses [16]. They applied a mixed-methods approach to study the usability and usefulness of wearable activity trackers. The authors found that “wearable activity trackers are perceived as useful and acceptable” for adults aged over 50 years. A different study examined the “Feasibility of Fitness Tracking with Urban Youth” with a body mass index of 23 or higher [17]. The findings indicate that “wearable devices alone are not sufficient to support significant changes in existing physical activity practices” for users (n=24) in younger age groups. Nevertheless, feasibility studies indicate that “monitor comfort and design and feedback features [are] important factors to children and adolescents” [18]. Another study assessed the “acceptance and usage of wearable activity trackers in Canadian community-dwelling older adults” in a crossover design [19]. Twenty adults aged 55 years and older (mean 64 years) were given 2 wearable devices and rated different aspects of the devices and their use after 21 days of use. The authors report that “privacy was less of concern for older adults, but it may have stemmed from a lack of understanding of the privacy risks and implications.”

In other research, however, privacy appears to be an important consideration for users of wearable devices or apps [20]. Other researchers have found that “individuals' decisions to adopt healthcare wearable devices are determined by their risk-benefit analyses” [21]. The authors concluded that “individuals' perceived privacy risk is formed by health information sensitivity, personal innovativeness, legislative protection, and perceived prestige.” Their findings suggest that consumers’ motivations and buying decisions are “determined by [an individual] risk-benefit assessment.” A review paper on ethical implications of user perceptions concluded that “wearable device users are highly concerned regarding privacy issues and consider informed consent as ‘very important’ when sharing information with third parties” [22]. An explorative study including 82 participants investigated “privacy concerns and sensitivity regarding data gathered with wearables” [23]. The authors reported “that the participants would prefer to keep said data to themselves. Furthermore, user factors such as age, gender, and privacy behavior could not be identified as having an effect on sharing said data.” Yet, it remains an open question whether these findings apply to a broad and heterogeneous population, for example, a running community at a road running event.

Alley et al determined “people's current use, interest and preferences for advanced [pedometer] trackers” via a cross-sectional Australia-wide telephone survey [24]. The authors found that 31% of the participants “considered counting steps the most important function and 30% regarded accuracy as the most important characteristic.” About half of the participants were hesitant about using current activity tracking devices or expressed skepticism. According to this survey [24], the main reasons “for not wanting to use a tracker were, ‘I don't think it would help me’ (39%), and ‘I don't want to increase my activity’ (47%).” It is not clear whether these findings can be confirmed in similar study settings in other countries.
[ "29572200", "28246070", "26684758", "29084709", "28270382", "25668268", "26464801", "27220855", "26818775", "27881359", "29141837", "26404673", "26878757", "28155094", "26836780", "28732074" ]
[ { "pmid": "29572200", "title": "Evaluating the Impact of Physical Activity Apps and Wearables: Interdisciplinary Review.", "abstract": "BACKGROUND\nAlthough many smartphone apps and wearables have been designed to improve physical activity, their rapidly evolving nature and complexity present challenges for evaluating their impact. Traditional methodologies, such as randomized controlled trials (RCTs), can be slow. To keep pace with rapid technological development, evaluations of mobile health technologies must be efficient. Rapid alternative research designs have been proposed, and efficient in-app data collection methods, including in-device sensors and device-generated logs, are available. Along with effectiveness, it is important to measure engagement (ie, users' interaction and usage behavior) and acceptability (ie, users' subjective perceptions and experiences) to help explain how and why apps and wearables work.\n\n\nOBJECTIVES\nThis study aimed to (1) explore the extent to which evaluations of physical activity apps and wearables: employ rapid research designs; assess engagement, acceptability, as well as effectiveness; use efficient data collection methods; and (2) describe which dimensions of engagement and acceptability are assessed.\n\n\nMETHOD\nAn interdisciplinary scoping review using 8 databases from health and computing sciences. Included studies measured physical activity, and evaluated physical activity apps or wearables that provided sensor-based feedback. Results were analyzed using descriptive numerical summaries, chi-square testing, and qualitative thematic analysis.\n\n\nRESULTS\nA total of 1829 abstracts were screened, and 858 articles read in full. Of 111 included studies, 61 (55.0%) were published between 2015 and 2017. Most (55.0%, 61/111) were RCTs, and only 2 studies (1.8%) used rapid research designs: 1 single-case design and 1 multiphase optimization strategy. Other research designs included 23 (22.5%) repeated measures designs, 11 (9.9%) nonrandomized group designs, 10 (9.0%) case studies, and 4 (3.6%) observational studies. Less than one-third of the studies (32.0%, 35/111) investigated effectiveness, engagement, and acceptability together. To measure physical activity, most studies (90.1%, 101/111) employed sensors (either in-device [67.6%, 75/111] or external [23.4%, 26/111]). RCTs were more likely to employ external sensors (accelerometers: P=.005). Studies that assessed engagement (52.3%, 58/111) mostly used device-generated logs (91%, 53/58) to measure the frequency, depth, and length of engagement. Studies that assessed acceptability (57.7%, 64/111) most often used questionnaires (64%, 42/64) and/or qualitative methods (53%, 34/64) to explore appreciation, perceived effectiveness and usefulness, satisfaction, intention to continue use, and social acceptability. Some studies (14.4%, 16/111) assessed dimensions more closely related to usability (ie, burden of sensor wear and use, interface complexity, and perceived technical performance).\n\n\nCONCLUSIONS\nThe rapid increase of research into the impact of physical activity apps and wearables means that evaluation guidelines are urgently needed to promote efficiency through the use of rapid research designs, in-device sensors and user-logs to assess effectiveness, engagement, and acceptability. Screening articles was time-consuming because reporting across health and computing sciences lacked standardization. 
Reporting guidelines are therefore needed to facilitate the synthesis of evidence across disciplines." }, { "pmid": "28246070", "title": "Accuracy and Adoption of Wearable Technology Used by Active Citizens: A Marathon Event Field Study.", "abstract": "BACKGROUND\nToday, runners use wearable technology such as global positioning system (GPS)-enabled sport watches to track and optimize their training activities, for example, when participating in a road race event. For this purpose, an increasing amount of low-priced, consumer-oriented wearable devices are available. However, the variety of such devices is overwhelming. It is unclear which devices are used by active, healthy citizens and whether they can provide accurate tracking results in a diverse study population. No published literature has yet assessed the dissemination of wearable technology in such a cohort and related influencing factors.\n\n\nOBJECTIVE\nThe aim of this study was 2-fold: (1) to determine the adoption of wearable technology by runners, especially \"smart\" devices and (2) to investigate on the accuracy of tracked distances as recorded by such devices.\n\n\nMETHODS\nA pre-race survey was applied to assess which wearable technology was predominantly used by runners of different age, sex, and fitness level. A post-race survey was conducted to determine the accuracy of the devices that tracked the running course. Logistic regression analysis was used to investigate whether age, sex, fitness level, or track distance were influencing factors. Recorded distances of different device categories were tested with a 2-sample t test against each other.\n\n\nRESULTS\nA total of 898 pre-race and 262 post-race surveys were completed. Most of the participants (approximately 75%) used wearable technology for training optimization and distance recording. Females (P=.02) and runners in higher age groups (50-59 years: P=.03; 60-69 years: P<.001; 70-79 year: P=.004) were less likely to use wearables. The mean of the track distances recorded by mobile phones with combined app (mean absolute error, MAE=0.35 km) and GPS-enabled sport watches (MAE=0.12 km) was significantly different (P=.002) for the half-marathon event.\n\n\nCONCLUSIONS\nA great variety of vendors (n=36) and devices (n=156) were identified. Under real-world conditions, GPS-enabled devices, especially sport watches and mobile phones, were found to be accurate in terms of recorded course distances." }, { "pmid": "26684758", "title": "Systematic review of the validity and reliability of consumer-wearable activity trackers.", "abstract": "BACKGROUND\nConsumer-wearable activity trackers are electronic devices used for monitoring fitness- and other health-related metrics. The purpose of this systematic review was to summarize the evidence for validity and reliability of popular consumer-wearable activity trackers (Fitbit and Jawbone) and their ability to estimate steps, distance, physical activity, energy expenditure, and sleep.\n\n\nMETHODS\nSearches included only full-length English language studies published in PubMed, Embase, SPORTDiscus, and Google Scholar through July 31, 2015. Two people reviewed and abstracted each included study.\n\n\nRESULTS\nIn total, 22 studies were included in the review (20 on adults, 2 on youth). For laboratory-based studies using step counting or accelerometer steps, the correlation with tracker-assessed steps was high for both Fitbit and Jawbone (Pearson or intraclass correlation coefficients (CC) > =0.80). 
Only one study assessed distance for the Fitbit, finding an over-estimate at slower speeds and under-estimate at faster speeds. Two field-based studies compared accelerometry-assessed physical activity to the trackers, with one study finding higher correlation (Spearman CC 0.86, Fitbit) while another study found a wide range in correlation (intraclass CC 0.36-0.70, Fitbit and Jawbone). Using several different comparison measures (indirect and direct calorimetry, accelerometry, self-report), energy expenditure was more often under-estimated by either tracker. Total sleep time and sleep efficiency were over-estimated and wake after sleep onset was under-estimated comparing metrics from polysomnography to either tracker using a normal mode setting. No studies of intradevice reliability were found. Interdevice reliability was reported on seven studies using the Fitbit, but none for the Jawbone. Walking- and running-based Fitbit trials indicated consistently high interdevice reliability for steps (Pearson and intraclass CC 0.76-1.00), distance (intraclass CC 0.90-0.99), and energy expenditure (Pearson and intraclass CC 0.71-0.97). When wearing two Fitbits while sleeping, consistency between the devices was high.\n\n\nCONCLUSION\nThis systematic review indicated higher validity of steps, few studies on distance and physical activity, and lower validity for energy expenditure and sleep. The evidence reviewed indicated high interdevice reliability for steps, distance, energy expenditure, and sleep for certain Fitbit models. As new activity trackers and features are introduced to the market, documentation of the measurement properties can guide their use in research settings." }, { "pmid": "29084709", "title": "Determinants for Sustained Use of an Activity Tracker: Observational Study.", "abstract": "BACKGROUND\nA lack of physical activity is considered to cause 6% of deaths globally. Feedback from wearables such as activity trackers has the potential to encourage daily physical activity. To date, little research is available on the natural development of adherence to activity trackers or on potential factors that predict which users manage to keep using their activity tracker during the first year (and thereby increasing the chance of healthy behavior change) and which users discontinue using their trackers after a short time.\n\n\nOBJECTIVE\nThe aim of this study was to identify the determinants for sustained use in the first year after purchase. Specifically, we look at the relative importance of demographic and socioeconomic, psychological, health-related, goal-related, technological, user experience-related, and social predictors of feedback device use. Furthermore, this study tests the effect of these predictors on physical activity.\n\n\nMETHODS\nA total of 711 participants from four urban areas in France received an activity tracker (Fitbit Zip) and gave permission to use their logged data. Participants filled out three Web-based questionnaires: at start, after 98 days, and after 232 days to measure the aforementioned determinants. Furthermore, for each participant, we collected activity data tracked by their Fitbit tracker for 320 days. We determined the relative importance of all included predictors by using Random Forest, a machine learning analysis technique.\n\n\nRESULTS\nThe data showed a slow exponential decay in Fitbit use, with 73.9% (526/711) of participants still tracking after 100 days and 16.0% (114/711) of participants tracking after 320 days. 
On average, participants used the tracker for 129 days. Most important reasons to quit tracking were technical issues such as empty batteries and broken trackers or lost trackers (21.5% of all Q3 respondents, 130/601). Random Forest analysis of predictors revealed that the most influential determinants were age, user experience-related factors, mobile phone type, household type, perceived effect of the Fitbit tracker, and goal-related factors. We explore the role of those predictors that show meaningful differences in the number of days the tracker was worn.\n\n\nCONCLUSIONS\nThis study offers an overview of the natural development of the use of an activity tracker, as well as the relative importance of a range of determinants from literature. Decay is exponential but slower than may be expected from existing literature. Many factors have a small contribution to sustained use. The most important determinants are technical condition, age, user experience, and goal-related factors. This finding suggests that activity tracking is potentially beneficial for a broad range of target groups, but more attention should be paid to technical and user experience-related aspects of activity trackers." }, { "pmid": "28270382", "title": "Evaluating the Consistency of Current Mainstream Wearable Devices in Health Monitoring: A Comparison Under Free-Living Conditions.", "abstract": "BACKGROUND\nWearable devices are gaining increasing market attention; however, the monitoring accuracy and consistency of the devices remains unknown.\n\n\nOBJECTIVE\nThe purpose of this study was to assess the consistency of the monitoring measurements of the latest wearable devices in the state of normal activities to provide advice to the industry and support to consumers in making purchasing choices.\n\n\nMETHODS\nTen pieces of representative wearable devices (2 smart watches, 4 smart bracelets of Chinese brands or foreign brands, and 4 mobile phone apps) were selected, and 5 subjects were employed to simultaneously use all the devices and the apps. From these devices, intact health monitoring data were acquired for 5 consecutive days and analyzed on the degree of differences and the relationships of the monitoring measurements ​​by the different devices.\n\n\nRESULTS\nThe daily measurements by the different devices fluctuated greatly, and the coefficient of variation (CV) fluctuated in the range of 2-38% for the number of steps, 5-30% for distance, 19-112% for activity duration, .1-17% for total energy expenditure (EE), 22-100% for activity EE, 2-44% for sleep duration, and 35-117% for deep sleep duration. After integrating the measurement data of 25 days among the devices, the measurements of the number of steps (intraclass correlation coefficient, ICC=.89) and distance (ICC=.84) displayed excellent consistencies, followed by those of activity duration (ICC=.59) and the total EE (ICC=.59) and activity EE (ICC=.57). However, the measurements for sleep duration (ICC=.30) and deep sleep duration (ICC=.27) were poor. For most devices, there was a strong correlation between the number of steps and distance measurements (R2>.95), and for some devices, there was a strong correlation between activity duration measurements and EE measurements (R2>.7). 
A strong correlation was observed in the measurements of steps, distance and EE from smart watches and mobile phones of the same brand, Apple or Samsung (r>.88).\n\n\nCONCLUSIONS\nAlthough wearable devices are developing rapidly, the current mainstream devices are only reliable in measuring the number of steps and distance, which can be used as health assessment indicators. However, the measurement consistencies of activity duration, EE, sleep quality, and so on, are still inadequate, which require further investigation and improved algorithms." }, { "pmid": "26464801", "title": "Reliability and validity of ten consumer activity trackers.", "abstract": "BACKGROUND\nActivity trackers can potentially stimulate users to increase their physical activity behavior. The aim of this study was to examine the reliability and validity of ten consumer activity trackers for measuring step count in both laboratory and free-living conditions.\n\n\nMETHOD\nHealthy adult volunteers (n = 33) walked twice on a treadmill (4.8 km/h) for 30 min while wearing ten different activity trackers (i.e. Lumoback, Fitbit Flex, Jawbone Up, Nike+ Fuelband SE, Misfit Shine, Withings Pulse, Fitbit Zip, Omron HJ-203, Yamax Digiwalker SW-200 and Moves mobile application). In free-living conditions, 56 volunteers wore the same activity trackers for one working day. Test-retest reliability was analyzed with the Intraclass Correlation Coefficient (ICC). Validity was evaluated by comparing each tracker with the gold standard (Optogait system for laboratory and ActivPAL for free-living conditions), using paired samples t-tests, mean absolute percentage errors, correlations and Bland-Altman plots.\n\n\nRESULTS\nTest-retest analysis revealed high reliability for most trackers except for the Omron (ICC .14), Moves app (ICC .37) and Nike+ Fuelband (ICC .53). The mean absolute percentage errors of the trackers in laboratory and free-living conditions respectively, were: Lumoback (-0.2, -0.4), Fibit Flex (-5.7, 3.7), Jawbone Up (-1.0, 1.4), Nike+ Fuelband (-18, -24), Misfit Shine (0.2, 1.1), Withings Pulse (-0.5, -7.9), Fitbit Zip (-0.3, 1.2), Omron (2.5, -0.4), Digiwalker (-1.2, -5.9), and Moves app (9.6, -37.6). Bland-Altman plots demonstrated that the limits of agreement varied from 46 steps (Fitbit Zip) to 2422 steps (Nike+ Fuelband) in the laboratory condition, and 866 steps (Fitbit Zip) to 5150 steps (Moves app) in the free-living condition.\n\n\nCONCLUSION\nThe reliability and validity of most trackers for measuring step count is good. The Fitbit Zip is the most valid whereas the reliability and validity of the Nike+ Fuelband is low." }, { "pmid": "27220855", "title": "A comparison of wearable fitness devices.", "abstract": "BACKGROUND\nWearable trackers can help motivate you during workouts and provide information about your daily routine or fitness in combination with your smartphone without requiring potentially disruptive manual calculations or records. This paper summarizes and compares wearable fitness devices, also called \"fitness trackers\" or \"activity trackers.\" These devices are becoming increasingly popular in personal healthcare, motivating people to exercise more throughout the day without the need for lifestyle changes. The various choices in the market for wearable devices are also increasing, with customers searching for products that best suit their personal needs. Further, using a wearable device or fitness tracker can help people reach a fitness goal or finish line. 
Generally, companies display advertising for these kinds of products and depict them as beneficial, user friendly, and accurate. However, there are no objective research results to prove the veracity of their words. This research features subjective and objective experimental results, which reveal that some devices perform better than others.\n\n\nMETHODS\nThe four most popular wristband style wearable devices currently on the market (Withings Pulse, Misfit Shine, Jawbone Up24, and Fitbit Flex) are selected and compared. The accuracy of fitness tracking is one of the key components for fitness tracking, and some devices perform better than others. This research shows subjective and objective experimental results that are used to compare the accuracy of four wearable devices in conjunction with user friendliness and satisfaction of 7 real users. In addition, this research matches the opinions between reviewers on an Internet site and those of subjects when using the device.\n\n\nRESULTS\nWithings Pulse is the most friendly and satisfactory from the users' viewpoint. It is the most accurate and repeatable for step and distance tracking, which is the most important measurement of fitness tracking, followed by Fitbit Flex, Jawbone Up24, and Misfit Shine. In contrast, Misfit Shine has the highest score for design and hardware, which is also appreciated by users.\n\n\nCONCLUSIONS\nFrom the results of experiments on four wearable devices, it is determined that the most acceptable in terms of price and satisfaction levels is the Withings Pulse, followed by the Fitbit Flex, Jawbone Up24, and Misfit Shine." }, { "pmid": "26818775", "title": "Acceptance of Commercially Available Wearable Activity Trackers Among Adults Aged Over 50 and With Chronic Illness: A Mixed-Methods Evaluation.", "abstract": "BACKGROUND\nPhysical inactivity and sedentary behavior increase the risk of chronic illness and death. The newest generation of \"wearable\" activity trackers offers potential as a multifaceted intervention to help people become more active.\n\n\nOBJECTIVE\nTo examine the usability and usefulness of wearable activity trackers for older adults living with chronic illness.\n\n\nMETHODS\nWe recruited a purposive sample of 32 participants over the age of 50, who had been previously diagnosed with a chronic illness, including vascular disease, diabetes, arthritis, and osteoporosis. Participants were between 52 and 84 years of age (mean 64); among the study participants, 23 (72%) were women and the mean body mass index was 31 kg/m(2). Participants tested 5 trackers, including a simple pedometer (Sportline or Mio) followed by 4 wearable activity trackers (Fitbit Zip, Misfit Shine, Jawbone Up 24, and Withings Pulse) in random order. Selected devices represented the range of wearable products and features available on the Canadian market in 2014. Participants wore each device for at least 3 days and evaluated it using a questionnaire developed from the Technology Acceptance Model. We used focus groups to explore participant experiences and a thematic analysis approach to data collection and analysis.\n\n\nRESULTS\nOur study resulted in 4 themes: (1) adoption within a comfort zone; (2) self-awareness and goal setting; (3) purposes of data tracking; and (4) future of wearable activity trackers as health care devices. Prior to enrolling, few participants were aware of wearable activity trackers. Most also had been asked by a physician to exercise more and cited this as a motivation for testing the devices. 
None of the participants planned to purchase the simple pedometer after the study, citing poor accuracy and data loss, whereas 73% (N=32) planned to purchase a wearable activity tracker. Preferences varied but 50% felt they would buy a Fitbit and 42% felt they would buy a Misfit, Jawbone, or Withings. The simple pedometer had a mean acceptance score of 56/95 compared with 63 for the Withings, 65 for the Misfit and Jawbone, and 68 for the Fitbit. To improve usability, older users may benefit from devices that have better compatibility with personal computers or less-expensive Android mobile phones and tablets, and have comprehensive paper-based user manuals and apps that interpret user data.\n\n\nCONCLUSIONS\nFor older adults living with chronic illness, wearable activity trackers are perceived as useful and acceptable. New users may need support to both set up the device and learn how to interpret their data." }, { "pmid": "27881359", "title": "Feasibility and Effectiveness of Using Wearable Activity Trackers in Youth: A Systematic Review.", "abstract": "BACKGROUND\nThe proliferation and popularity of wearable activity trackers (eg, Fitbit, Jawbone, Misfit) may present an opportunity to integrate such technology into physical activity interventions. While several systematic reviews have reported intervention effects of using wearable activity trackers on adults' physical activity levels, none to date have focused specifically on children and adolescents.\n\n\nOBJECTIVE\nThe aim of this review was to examine the effectiveness of wearable activity trackers as a tool for increasing children's and adolescents' physical activity levels. We also examined the feasibility of using such technology in younger populations (age range 5-19 years).\n\n\nMETHODS\nWe conducted a systematic search of 5 electronic databases, reference lists, and personal archives to identify articles published up until August 2016 that met the inclusion criteria. Articles were included if they (1) specifically examined the use of a wearable device within an intervention or a feasibility study; (2) included participants aged 5-19 years old; (3) had a measure of physical activity as an outcome variable for intervention studies; (4) reported process data concerning the feasibility of the device in feasibility studies; and (5) were published in English. Data were analyzed in August 2016.\n\n\nRESULTS\nIn total, we identified and analyzed 5 studies (3 intervention, 2 feasibility). Intervention delivery ranged from 19 days to 3 months, with only 1 study using a randomized controlled trial design. Wearable activity trackers were typically combined with other intervention approaches such as goal setting and researcher feedback. While intervention effects were generally positive, the reported differences were largely nonsignificant. The feasibility studies indicated that monitor comfort and design and feedback features were important factors to children and adolescents.\n\n\nCONCLUSIONS\nThere is a paucity of research concerning the effectiveness and feasibility of wearable activity trackers as a tool for increasing children's and adolescents' physical activity levels. While there are some preliminary data to suggest these devices may have the potential to increase activity levels through self-monitoring and goal setting in the short term, more research is needed to establish longer-term effects on behavior." 
}, { "pmid": "29141837", "title": "User Acceptance of Wrist-Worn Activity Trackers Among Community-Dwelling Older Adults: Mixed Method Study.", "abstract": "BACKGROUND\nWearable activity trackers are newly emerging technologies with the anticipation for successfully supporting aging-in-place. Consumer-grade wearable activity trackers are increasingly ubiquitous in the market, but the attitudes toward, as well as acceptance and voluntary use of, these trackers in older population are poorly understood.\n\n\nOBJECTIVE\nThe aim of this study was to assess acceptance and usage of wearable activity trackers in Canadian community-dwelling older adults, using the potentially influential factors as identified in literature and technology acceptance model.\n\n\nMETHODS\nA mixed methods design was used. A total of 20 older adults aged 55 years and older were recruited from Southwestern Ontario. Participants used 2 different wearable activity trackers (Xiaomi Mi Band and Microsoft Band) separately for each segment in the crossover design study for 21 days (ie, 42 days total). A questionnaire was developed to capture acceptance and experience at the end of each segment, representing 2 different devices. Semistructured interviews were conducted with 4 participants, and a content analysis was performed.\n\n\nRESULTS\nParticipants ranged in age from 55 years to 84 years (mean age: 64 years). The Mi Band gained higher levels of acceptance (16/20, 80%) compared with the Microsoft Band (10/20, 50%). The equipment characteristics dimension scored significantly higher for the Mi Band (P<.05). The amount a participant was willing to pay for the device was highly associated with technology acceptance (P<.05). Multivariate logistic regression with 3 covariates resulted in an area under the curve of 0.79. Content analysis resulted in the formation of the following main themes: (1) smartphones as facilitators of wearable activity trackers; (2) privacy is less of a concern for wearable activity trackers, (3) value proposition: self-awareness and motivation; (4) subjective norm, social support, and sense of independence; and (5) equipment characteristics matter: display, battery, comfort, and aesthetics.\n\n\nCONCLUSIONS\nOlder adults were mostly accepting of wearable activity trackers, and they had a clear understanding of its value for their lives. Wearable activity trackers were uniquely considered more personal than other types of technologies, thereby the equipment characteristics including comfort, aesthetics, and price had a significant impact on the acceptance. Results indicated that privacy was less of concern for older adults, but it may have stemmed from a lack of understanding of the privacy risks and implications. These findings add to emerging research that investigates acceptance and factors that may influence acceptance of wearable activity trackers among older adults." }, { "pmid": "26404673", "title": "Unaddressed privacy risks in accredited health and wellness apps: a cross-sectional systematic assessment.", "abstract": "BACKGROUND\nPoor information privacy practices have been identified in health apps. Medical app accreditation programs offer a mechanism for assuring the quality of apps; however, little is known about their ability to control information privacy risks. 
We aimed to assess the extent to which already-certified apps complied with data protection principles mandated by the largest national accreditation program.\n\n\nMETHODS\nCross-sectional, systematic, 6-month assessment of 79 apps certified as clinically safe and trustworthy by the UK NHS Health Apps Library. Protocol-based testing was used to characterize personal information collection, local-device storage and information transmission. Observed information handling practices were compared against privacy policy commitments.\n\n\nRESULTS\nThe study revealed that 89% (n = 70/79) of apps transmitted information to online services. No app encrypted personal information stored locally. Furthermore, 66% (23/35) of apps sending identifying information over the Internet did not use encryption and 20% (7/35) did not have a privacy policy. Overall, 67% (53/79) of apps had some form of privacy policy. No app collected or transmitted information that a policy explicitly stated it would not; however, 78% (38/49) of information-transmitting apps with a policy did not describe the nature of personal information included in transmissions. Four apps sent both identifying and health information without encryption. Although the study was not designed to examine data handling after transmission to online services, security problems appeared to place users at risk of data theft in two cases.\n\n\nCONCLUSIONS\nSystematic gaps in compliance with data protection principles in accredited health apps question whether certification programs relying substantially on developer disclosures can provide a trusted resource for patients and clinicians. Accreditation programs should, as a minimum, provide consistent and reliable warnings about possible threats and, ideally, require publishers to rectify vulnerabilities before apps are released." }, { "pmid": "26878757", "title": "Examining individuals' adoption of healthcare wearable devices: An empirical study from privacy calculus perspective.", "abstract": "BACKGROUND\nWearable technology has shown the potential of improving healthcare efficiency and reducing healthcare cost. Different from pioneering studies on healthcare wearable devices from technical perspective, this paper explores the predictors of individuals' adoption of healthcare wearable devices. Considering the importance of individuals' privacy perceptions in healthcare wearable devices adoption, this study proposes a model based on the privacy calculus theory to investigate how individuals adopt healthcare wearable devices.\n\n\nMETHOD\nThe proposed conceptual model was empirically tested by using data collected from a survey. The sample covers 333 actual users of healthcare wearable devices. Structural equation modeling (SEM) method was employed to estimate the significance of the path coefficients.\n\n\nRESULTS\nThis study reveals several main findings: (1) individuals' decisions to adopt healthcare wearable devices are determined by their risk-benefit analyses (refer to privacy calculus). In short, if an individual's perceived benefit is higher than perceived privacy risk, s/he is more likely to adopt the device. Otherwise, the device would not be adopted; (2) individuals' perceived privacy risk is formed by health information sensitivity, personal innovativeness, legislative protection, and perceived prestige; and (3) individuals' perceived benefit is determined by perceived informativeness and functional congruence. 
The theoretical and practical implications, limitations, and future research directions are then discussed." }, { "pmid": "28155094", "title": "Ethical Implications of User Perceptions of Wearable Devices.", "abstract": "Health Wearable Devices enhance the quality of life, promote positive lifestyle changes and save time and money in medical appointments. However, Wearable Devices store large amounts of personal information that is accessed by third parties without user consent. This creates ethical issues regarding privacy, security and informed consent. This paper aims to demonstrate users' ethical perceptions of the use of Wearable Devices in the health sector. The impact of ethics is determined by an online survey which was conducted from patients and users with random female and male division. Results from this survey demonstrate that Wearable Device users are highly concerned regarding privacy issues and consider informed consent as \"very important\" when sharing information with third parties. However, users do not appear to relate privacy issues with informed consent. Additionally, users expressed the need for having shorter privacy policies that are easier to read, a more understandable informed consent form that involves regulatory authorities and there should be legal consequences the violation or misuse of health information provided to Wearable Devices. The survey results present an ethical framework that will enhance the ethical development of Wearable Technology." }, { "pmid": "28732074", "title": "Who uses running apps and sports watches? Determinants and consumer profiles of event runners' usage of running-related smartphone applications and sports watches.", "abstract": "Individual and unorganized sports with a health-related focus, such as recreational running, have grown extensively in the last decade. Consistent with this development, there has been an exponential increase in the availability and use of electronic monitoring devices such as smartphone applications (apps) and sports watches. These electronic devices could provide support and monitoring for unorganized runners, who have no access to professional trainers and coaches. The purpose of this paper is to gain insight into the characteristics of event runners who use running-related apps and sports watches. This knowledge is useful from research, design, and marketing perspectives to adequately address unorganized runners' needs, and to support them in healthy and sustainable running through personalized technology. Data used in this study are drawn from the standardized online Eindhoven Running Survey 2014 (ERS14). In total, 2,172 participants in the Half Marathon Eindhoven 2014 completed the questionnaire (a response rate of 40.0%). Binary logistic regressions were used to analyze the impact of socio-demographic variables, running-related variables, and psychographic characteristics on the use of running-related apps and sports watches. Next, consumer profiles were identified. The results indicate that the use of monitoring devices is affected by socio-demographics as well as sports-related and psychographic variables, and this relationship depends on the type of monitoring device. Therefore, distinctive consumer profiles have been developed to provide a tool for designers and manufacturers of electronic running-related devices to better target (unorganized) runners' needs through personalized and differentiated approaches. Apps are more likely to be used by younger, less experienced and involved runners. 
Hence, apps have the potential to target this group of novice, less trained, and unorganized runners. In contrast, sports watches are more likely to be used by a different group of runners, older and more experienced runners with higher involvement. Although apps and sports watches may potentially promote and stimulate sports participation, these electronic devices do require a more differentiated approach to target specific needs of runners. Considerable efforts in terms of personalization and tailoring have to be made to develop the full potential of these electronic devices as drivers for healthy and sustainable sports participation." } ]
JMIR Medical Informatics
30578220
PMC6320437
10.2196/medinform.9979
Identifying Principles for the Construction of an Ontology-Based Knowledge Base: A Case Study Approach
Background: Ontologies are key enabling technologies for the Semantic Web. The Web Ontology Language (OWL) is a semantic markup language for publishing and sharing ontologies. Objective: The supply of customizable, computable, and formally represented molecular genetics information and health information, via electronic health record (EHR) interfaces, can play a critical role in achieving precision medicine. In this study, we used cystic fibrosis as an example to build an Ontology-based Knowledge Base prototype on Cystic Fibrosis (OntoKBCF) to supply such information via an EHR prototype. In addition, we elaborate on the construction and representation principles, approaches, applications, and representation challenges that we faced in the construction of OntoKBCF. The principles and approaches can be referenced and applied in constructing other ontology-based domain knowledge bases. Methods: First, we defined the scope of OntoKBCF according to possible clinical information needs about cystic fibrosis on both a molecular level and a clinical phenotype level. We then selected the knowledge sources to be represented in OntoKBCF. We utilized top-to-bottom content analysis and bottom-up construction to build OntoKBCF. Protégé-OWL was used to construct OntoKBCF. The construction principles were (1) to use existing basic terms as much as possible; (2) to use intersection and combination in representations; (3) to represent as many different types of facts as possible; and (4) to provide 2-5 examples for each type. HermiT 1.3.8.413 within Protégé-5.1.0 was used to check the consistency of OntoKBCF. Results: OntoKBCF was constructed successfully, with the inclusion of 408 classes, 35 properties, and 113 equivalent classes. OntoKBCF includes both atomic concepts (such as amino acid) and complex concepts (such as “adolescent female cystic fibrosis patient”) and their descriptions. We demonstrated that OntoKBCF could make customizable molecular and health information available automatically and usable via an EHR prototype. The main challenges include the provision of a more comprehensive account of different patient groups, as well as the representation of uncertain knowledge, ambiguous concepts, negative statements, and more complicated and detailed molecular mechanisms or pathway information about cystic fibrosis. Conclusions: Although cystic fibrosis is just one example, based on the current structure of OntoKBCF, it should be relatively straightforward to extend the prototype to cover different topics. Moreover, the principles underpinning its development could be reused for building knowledge bases for other human monogenetic diseases.
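The abstract describes building complex concepts such as “adolescent female cystic fibrosis patient” through intersection and combination, with equivalent classes and HermiT consistency checking. The authors worked in Protégé-OWL; the sketch below instead uses the owlready2 Python library with hypothetical class, property, and value names (including an assumed 13-18 year age range for “adolescent”) to illustrate the same kind of equivalent-class definition. It is an assumption-laden illustration, not the actual OntoKBCF model.

```python
# Minimal sketch of an intersection-based (equivalent) class definition in OWL,
# using owlready2 with hypothetical names; not the actual OntoKBCF ontology.
from owlready2 import (get_ontology, Thing, FunctionalProperty,
                       ConstrainedDatatype, sync_reasoner)

onto = get_ontology("http://example.org/ontokbcf-demo.owl")

with onto:
    class Patient(Thing): pass
    class Disease(Thing): pass
    class CysticFibrosis(Disease): pass

    class hasDiagnosis(Patient >> Disease): pass                   # object property
    class hasSex(Patient >> str, FunctionalProperty): pass         # data property
    class hasAgeInYears(Patient >> int, FunctionalProperty): pass  # data property

    # Complex concept built by intersection, analogous to
    # "adolescent female cystic fibrosis patient" (age bounds are an assumption).
    class AdolescentFemaleCFPatient(Patient):
        equivalent_to = [
            Patient
            & hasDiagnosis.some(CysticFibrosis)
            & hasSex.value("female")
            & hasAgeInYears.some(ConstrainedDatatype(int, min_inclusive=13,
                                                     max_inclusive=18))
        ]

# Consistency checking: owlready2 bundles HermiT, but running it needs a Java runtime.
# sync_reasoner(infer_property_values=True)
onto.save(file="ontokbcf_demo.owl", format="rdfxml")
```

Defining the group as an equivalent class (rather than a plain subclass) is what allows a reasoner such as HermiT to automatically classify individual patient records under it.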
Other Related Work
Our work was originally conceived in 2004 and mostly completed by 2009. At the time, no available resource met all of the following criteria: consistent molecular genetic information in a clinically actionable, machine-processable format that was shareable, reusable, and customizable. We therefore had to create our own resource to meet all the requirements. Biobanks might be a good source of such information; however, the UK Biobank [38] started to recruit participants only in 2006, and the USA Biobank, as a part of the Precision Medicine Initiative, was founded only in 2016 [39].

We successfully used an ontology-based knowledge base to provide customizable molecular genetics and health information in EHR settings. The ontology-based knowledge base offers potential for reuse and sharing, as well as consistency of information. The creation of knowledge resources with fine granularity (not only disease names) has been recognized repeatedly [40,41] as the main challenge in bringing new information (eg, molecular genetics information) to an EHR. Although there are many existing databases for both genotypes and phenotypes of human beings, not all of them are organized in a machine-processable format. Therefore, the detailed construction principles and approaches for OntoKBCF reported in this paper can provide a reference for peers in the field.

One recent example of a knowledge base was reported by Samwald et al [42,43], who built a Resource Description Framework (RDF) or OWL knowledge base to support clinical pharmacogenetics. Their work shares some similarities with OntoKBCF. For example, it provides consistent genomics information in clinical settings in a machine-processable format, and both projects use reasoners to maintain consistency. The two projects, however, focus on different application domains; Samwald et al’s work focuses on pharmacogenomics information, whereas OntoKBCF uses cystic fibrosis to demonstrate possible broader applications. In addition, Samwald et al’s work includes a query-and-answer component that goes beyond what OntoKBCF offers. Meanwhile, OntoKBCF demonstrates its usage via an EHR prototype; such a demonstration was not included in Samwald et al’s [42,43] publications. There is no detailed description of the knowledge base construction in Samwald et al’s project; thus, it is difficult to compare the representation and construction principles and approaches of the two projects in depth.

In recent years, the broad importance of Semantic Web technologies has been recognized, and many more studies have used ontology-based knowledge bases to assist clinical tasks within an EHR. For example, Robles-Bykbaev et al [40] reported the use of a formal knowledge base model to support decision making and generate recommendations for communication disorders. They stated that their knowledge base supports data analysis and inference processes; however, the paper does not include details of the organization and structure of the knowledge base. The Clinical Narrative Temporal Relation Ontology (CNTRO) [33,44] is a temporal ontology expressed in OWL. CNTRO is used for inference purposes in processing clinical narratives. The use cases [45] show promising results in processing short and simple adverse event narratives. The OWL query application [46] for CNTRO and the harmonization of CNTRO with other existing time ontologies may further improve CNTRO [44] and support broader applications in analyzing clinical narratives. Another example is the research of Wang et al [41] and Hu et al [47], who utilized an ontology-based clinical pathways knowledge base to generate personalized clinical pathways for clinicians by incorporating patients’ data. The clinical pathways knowledge base is independent of an EHR and can be shared by other systems. All these examples used ontology-based knowledge bases and integrated them with an EHR. Other ontologies have been utilized in settings other than the EHR, for example, on mobile devices [48] or to facilitate social media data mining [49] for health purposes. While these recent studies demonstrate the general application of semantic technologies in health care, they do not consider the important clinical usage of relevant molecular genetic information.
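Several of the systems surveyed above (eg, Samwald et al’s query-and-answer component and the OWL query application for CNTRO) expose their knowledge bases through queries. As a loose illustration only, the sketch below runs a SPARQL query over a tiny rdflib graph with invented terms; it does not reproduce any of the cited systems or their vocabularies.

```python
# Minimal sketch: querying an ontology-backed knowledge base with SPARQL via rdflib.
# All IRIs, classes, and properties here are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/kb#")

g = Graph()
g.bind("ex", EX)

# A few hypothetical triples standing in for knowledge-base content.
g.add((EX.patient001, RDF.type, EX.CysticFibrosisPatient))
g.add((EX.patient001, EX.hasMutation, EX.F508del))
g.add((EX.F508del, RDFS.label, Literal("p.Phe508del (deltaF508)")))

# Retrieve each patient together with the label of every recorded mutation.
query = """
PREFIX ex: <http://example.org/kb#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?patient ?mutationLabel WHERE {
    ?patient a ex:CysticFibrosisPatient ;
             ex:hasMutation ?mutation .
    ?mutation rdfs:label ?mutationLabel .
}
"""
for row in g.query(query):
    print(row.patient, row.mutationLabel)
```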
[ "17911824", "21946299", "24997857", "27239556", "8412823", "10802651", "17584211", "10773783", "8825494", "10612815", "26931183", "21672956", "27733503", "26262008", "23466439", "25880555", "23920613", "22211182", "25540680", "23076712", "21445676", "23811542", "28739560" ]
[ { "pmid": "17911824", "title": "Ontology-based knowledge base model construction-OntoKBCF.", "abstract": "Semantic web technologies are used in the construction of a bio-health knowledge base model, which, when coupled with an Electronic Health Record (EHR), is to be used by clinicians. Specifically, this ontology provides the basis for a domain knowledge resource that attempts to bridge biological and clinical information. The prototype is focused on a Cystic Fibrosis exemplar, and the content of the model includes: Cochrane reviews; a time-oriented description; gene therapy; and the most common cystic fibrosis gene mutations. The facts within the model range from nucleo-base mutation and amino acid change to clinical phenotype. The knowledge is represented by layers from the micro level to the macro level. Here, emphasis is placed upon the details between levels (i.e., the vertical axis) and these are made available to bridge the knowledge from different levels. The description of gender, age, mutation and clinical manifestations are clues for matching points within an EHR system. OWL is the ontology representation language used and the output from Protégé-OWL is a XML-based file format, which facilitates further application and communication." }, { "pmid": "21946299", "title": "Incorporating personalized gene sequence variants, molecular genetics knowledge, and health knowledge into an EHR prototype based on the Continuity of Care Record standard.", "abstract": "OBJECTIVES\nThe current volume and complexity of genetic tests, and the molecular genetics knowledge and health knowledge related to interpretation of the results of those tests, are rapidly outstripping the ability of individual clinicians to recall, understand and convey to their patients information relevant to their care. The tailoring of molecular genetics knowledge and health knowledge in clinical settings is important both for the provision of personalized medicine and to reduce clinician information overload. In this paper we describe the incorporation, customization and demonstration of molecular genetic data (mainly sequence variants), molecular genetics knowledge and health knowledge into a standards-based electronic health record (EHR) prototype developed specifically for this study.\n\n\nMETHODS\nWe extended the CCR (Continuity of Care Record), an existing EHR standard for representing clinical data, to include molecular genetic data. An EHR prototype was built based on the extended CCR and designed to display relevant molecular genetics knowledge and health knowledge from an existing knowledge base for cystic fibrosis (OntoKBCF). We reconstructed test records from published case reports and represented them in the CCR schema. We then used the EHR to dynamically filter molecular genetics knowledge and health knowledge from OntoKBCF using molecular genetic data and clinical data from the test cases.\n\n\nRESULTS\nThe molecular genetic data were successfully incorporated in the CCR by creating a category of laboratory results called \"Molecular Genetics\" and specifying a particular class of test (\"Gene Mutation Test\") in this category. Unlike other laboratory tests reported in the CCR, results of tests in this class required additional attributes (\"Molecular Structure\" and \"Molecular Position\") to support interpretation by clinicians. 
These results, along with clinical data (age, sex, ethnicity, diagnostic procedures, and therapies) were used by the EHR to filter and present molecular genetics knowledge and health knowledge from OntoKBCF.\n\n\nCONCLUSIONS\nThis research shows a feasible model for delivering patient sequence variants and presenting tailored molecular genetics knowledge and health knowledge via a standards-based EHR system prototype. EHR standards can be extended to include the necessary patient data (as we have demonstrated in the case of the CCR), while knowledge can be obtained from external knowledge bases that are created and maintained independently from the EHR. This approach can form the basis for a personalized medicine framework, a more comprehensive standards-based EHR system and a potential platform for advancing translational research by both disseminating results and providing opportunities for new insights into phenotype-genotype relationships." }, { "pmid": "24997857", "title": "Integration of an OWL-DL knowledge base with an EHR prototype and providing customized information.", "abstract": "When clinicians use electronic health record (EHR) systems, their ability to obtain general knowledge is often an important contribution to their ability to make more informed decisions. In this paper we describe a method by which an external, formal representation of clinical and molecular genetic knowledge can be integrated into an EHR such that customized knowledge can be delivered to clinicians in a context-appropriate manner.Web Ontology Language-Description Logic (OWL-DL) is a formal knowledge representation language that is widely used for creating, organizing and managing biomedical knowledge through the use of explicit definitions, consistent structure and a computer-processable format, particularly in biomedical fields. In this paper we describe: 1) integration of an OWL-DL knowledge base with a standards-based EHR prototype, 2) presentation of customized information from the knowledge base via the EHR interface, and 3) lessons learned via the process. The integration was achieved through a combination of manual and automatic methods. Our method has advantages for scaling up to and maintaining knowledge bases of any size, with the goal of assisting clinicians and other EHR users in making better informed health care decisions." }, { "pmid": "8412823", "title": "The Unified Medical Language System.", "abstract": "In 1986, the National Library of Medicine began a long-term research and development project to build the Unified Medical Language System (UMLS). The purpose of the UMLS is to improve the ability of computer programs to \"understand\" the biomedical meaning in user inquiries and to use this understanding to retrieve and integrate relevant machine-readable information for users. Underlying the UMLS effort is the assumption that timely access to accurate and up-to-date information will improve decision making and ultimately the quality of patient care and research. The development of the UMLS is a distributed national experiment with a strong element of international collaboration. The general strategy is to develop UMLS components through a series of successive approximations of the capabilities ultimately desired. Three experimental Knowledge Sources, the Metathesaurus, the Semantic Network, and the Information Sources Map have been developed and are distributed annually to interested researchers, many of whom have tested and evaluated them in a range of applications. 
The UMLS project and current developments in high-speed, high-capacity international networks are converging in ways that have great potential for enhancing access to biomedical information." }, { "pmid": "17584211", "title": "The information-seeking behaviour of doctors: a review of the evidence.", "abstract": "This paper provides a narrative review of the available literature from the past 10 years (1996-2006) that focus on the information seeking behaviour of doctors. The review considers the literature in three sub-themes: Theme 1, the Information Needs of Doctors includes information need, frequency of doctors' questions and types of information needs; Theme 2, Information Seeking by Doctors embraces pattern of information resource use, time spent searching, barriers to information searching and information searching skills; Theme 3, Information Sources Utilized by Doctors comprises the number of sources utilized, comparison of information sources consulted, computer usage, ranking of information resources, printed resource use, personal digital assistant (PDA) use, electronic database use and the Internet. The review is wide ranging. It would seem that the traditional methods of face-to-face communication and use of hard-copy evidence still prevail amongst qualified medical staff in the clinical setting. The use of new technologies embracing the new digital age in information provision may influence this in the future. However, for now, it would seem that there is still research to be undertaken to uncover the most effective methods of encouraging clinicians to use the best evidence in everyday practice." }, { "pmid": "10773783", "title": "Genotype and phenotype in cystic fibrosis.", "abstract": "Cystic fibrosis (CF) is caused by mutations in the CF transmembrane conductance regulator (CFTR) gene which encodes a protein expressed in the apical membrane of exocrine epithelial cells. CFTR functions principally as a cAMP-induced chloride channel and appears capable of regulating other ion channels. Besides the most common mutation, DeltaF508, accounting for about 70% of CF chromosomes worldwide, more than 850 mutant alleles have been reported to the CF Genetic Analysis Consortium. These mutations affect CFTR through a variety of molecular mechanisms which can produce little or no functional CFTR at the apical membrane. This genotypic variation provides a rationale for phenotypic effects of the specific mutations. The extent to which various CFTR alleles contribute to clinical variation in CF is evaluated by genotype-phenotype studies. These demonstrated that the degree of correlation between CFTR genotype and CF phenotype varies between its clinical components and is highest for the pancreatic status and lowest for pulmonary disease. The poor correlation between CFTR genotype and severity of lung disease strongly suggests an influence of environmental and secondary genetic factors (CF modifiers). Several candidate genes related to innate and adaptive immune response have been implicated as pulmonary CF modifiers. In addition, the presence of a genetic CF modifier for meconium ileus has been demonstrated on human chromosome 19q13.2. The phenotypic spectrum associated with mutations in the CFTR gene extends beyond the classically defined CF. 
Besides patients with atypical CF, there are large numbers of so-called monosymptomatic diseases such as various forms of obstructive azoospermia, idiopathic pancreatitis or disseminated bronchiectasis associated with CFTR mutations uncharacteristic for CF. The composition, frequency and type of CFTR mutations/variants parallel the spectrum of CFTR-associated phenotypes, from classic CF to mild monosymptomatic presentations. Expansion of the spectrum of disease associated with the CFTR mutant genes creates a need for revision of the diagnostic criteria for CF and a dilemma for setting nosologic boundaries between CF and other diseases with CFTR etiology." }, { "pmid": "8825494", "title": "Cystic fibrosis: genotypic and phenotypic variations.", "abstract": "Cystic fibrosis (CF) is a common genetic disorder in the Caucasian population. The gene was identified in 1989 on the basis of its map location on chromosome 7. The encoded gene product, named cystic fibrosis transmembrane conductance regulator (CFTR), corresponds to a cAMP-regulated chloride channel found almost exclusively in the secretory epithelial cells. Although the major mutation that results in a single amino acid deletion (F508) accounts for 70% of the disease alleles, more than 550 additional mutant alleles of different forms have been detected. Many of these mutations can be divided into five general classes in terms of their demonstrated or presumed molecular consequences. In addition, a good correlation has been found between CFTR genotype and one of the clinical variables--pancreatic function status. An unexpected finding, however, is the documentation of CFTR mutations in patients with atypical CF disease presentations, including congenital absence of vas deferens and several pulmonary diseases. Thus, the implication of CFTR mutation is more profound than CF alone." }, { "pmid": "10612815", "title": "Mutation nomenclature extensions and suggestions to describe complex mutations: a discussion.", "abstract": "Consistent gene mutation nomenclature is essential for efficient and accurate reporting, testing, and curation of the growing number of disease mutations and useful polymorphisms being discovered in the human genome. While a codified mutation nomenclature system for simple DNA lesions has now been adopted broadly by the medical genetics community, it is inherently difficult to represent complex mutations in a unified manner. In this article, suggestions are presented for reporting just such complex mutations." }, { "pmid": "26931183", "title": "HGVS Recommendations for the Description of Sequence Variants: 2016 Update.", "abstract": "The consistent and unambiguous description of sequence variants is essential to report and exchange information on the analysis of a genome. In particular, DNA diagnostics critically depends on accurate and standardized description and sharing of the variants detected. The sequence variant nomenclature system proposed in 2000 by the Human Genome Variation Society has been widely adopted and has developed into an internationally accepted standard. The recommendations are currently commissioned through a Sequence Variant Description Working Group (SVD-WG) operating under the auspices of three international organizations: the Human Genome Variation Society (HGVS), the Human Variome Project (HVP), and the Human Genome Organization (HUGO). Requests for modifications and extensions go through the SVD-WG following a standard procedure including a community consultation step. 
Version numbers are assigned to the nomenclature system to allow users to specify the version used in their variant descriptions. Here, we present the current recommendations, HGVS version 15.11, and briefly summarize the changes that were made since the 2000 publication. Most focus has been on removing inconsistencies and tightening definitions allowing automatic data processing. An extensive version of the recommendations is available online, at http://www.HGVS.org/varnomen." }, { "pmid": "21672956", "title": "BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications.", "abstract": "The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection." }, { "pmid": "27733503", "title": "Ontobee: A linked ontology data server to support ontology term dereferencing, linkage, query and integration.", "abstract": "Linked Data (LD) aims to achieve interconnected data by representing entities using Unified Resource Identifiers (URIs), and sharing information using Resource Description Frameworks (RDFs) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using the eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI, and then output RDF/eXtensible Markup Language (XML) for computer processing or display the HTML information on a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies." 
}, { "pmid": "26262008", "title": "An Ecosystem of Intelligent ICT Tools for Speech-Language Therapy Based on a Formal Knowledge Model.", "abstract": "The language and communication constitute the development mainstays of several intellectual and cognitive skills in humans. However, there are millions of people around the world who suffer from several disabilities and disorders related with language and communication, while most of the countries present a lack of corresponding services related with health care and rehabilitation. On these grounds, we are working to develop an ecosystem of intelligent ICT tools to support speech and language pathologists, doctors, students, patients and their relatives. This ecosystem has several layers and components, integrating Electronic Health Records management, standardized vocabularies, a knowledge database, an ontology of concepts from the speech-language domain, and an expert system. We discuss the advantages of such an approach through experiments carried out in several institutions assisting children with a wide spectrum of disabilities." }, { "pmid": "23466439", "title": "Creating personalised clinical pathways by semantic interoperability with electronic health records.", "abstract": "OBJECTIVE\nThere is a growing realisation that clinical pathways (CPs) are vital for improving the treatment quality of healthcare organisations. However, treatment personalisation is one of the main challenges when implementing CPs, and the inadequate dynamic adaptability restricts the practicality of CPs. The purpose of this study is to improve the practicality of CPs using semantic interoperability between knowledge-based CPs and semantic electronic health records (EHRs).\n\n\nMETHODS\nSimple protocol and resource description framework query language is used to gather patient information from semantic EHRs. The gathered patient information is entered into the CP ontology represented by web ontology language. Then, after reasoning over rules described by semantic web rule language in the Jena semantic framework, we adjust the standardised CPs to meet different patients' practical needs.\n\n\nRESULTS\nA CP for acute appendicitis is used as an example to illustrate how to achieve CP customisation based on the semantic interoperability between knowledge-based CPs and semantic EHRs. A personalised care plan is generated by comprehensively analysing the patient's personal allergy history and past medical history, which are stored in semantic EHRs. Additionally, by monitoring the patient's clinical information, an exception is recorded and handled during CP execution. According to execution results of the actual example, the solutions we present are shown to be technically feasible.\n\n\nCONCLUSION\nThis study contributes towards improving the clinical personalised practicality of standardised CPs. In addition, this study establishes the foundation for future work on the research and development of an independent CP system." }, { "pmid": "25880555", "title": "Pharmacogenomic knowledge representation, reasoning and genome-based clinical decision support based on OWL 2 DL ontologies.", "abstract": "BACKGROUND\nEvery year, hundreds of thousands of patients experience treatment failure or adverse drug reactions (ADRs), many of which could be prevented by pharmacogenomic testing. However, the primary knowledge needed for clinical pharmacogenomics is currently dispersed over disparate data structures and captured in unstructured or semi-structured formalizations. 
This is a source of potential ambiguity and complexity, making it difficult to create reliable information technology systems for enabling clinical pharmacogenomics.\n\n\nMETHODS\nWe developed Web Ontology Language (OWL) ontologies and automated reasoning methodologies to meet the following goals: 1) provide a simple and concise formalism for representing pharmacogenomic knowledge, 2) find errors and insufficient definitions in pharmacogenomic knowledge bases, 3) automatically assign alleles and phenotypes to patients, 4) match patients to clinically appropriate pharmacogenomic guidelines and clinical decision support messages and 5) facilitate the detection of inconsistencies and overlaps between pharmacogenomic treatment guidelines from different sources. We evaluated different reasoning systems and tested our approach with a large collection of publicly available genetic profiles.\n\n\nRESULTS\nOur methodology proved to be a novel and useful choice for representing, analyzing and using pharmacogenomic data. The Genomic Clinical Decision Support (Genomic CDS) ontology represents 336 SNPs with 707 variants; 665 haplotypes related to 43 genes; 22 rules related to drug-response phenotypes; and 308 clinical decision support rules. OWL reasoning identified CDS rules with overlapping target populations but differing treatment recommendations. Only a modest number of clinical decision support rules were triggered for a collection of 943 public genetic profiles. We found significant performance differences across available OWL reasoners.\n\n\nCONCLUSIONS\nThe ontology-based framework we developed can be used to represent, organize and reason over the growing wealth of pharmacogenomic knowledge, as well as to identify errors, inconsistencies and insufficient definitions in source data sets or individual patient data. Our study highlights both advantages and potential practical issues with such an ontology-based approach." }, { "pmid": "23920613", "title": "An RDF/OWL knowledge base for query answering and decision support in clinical pharmacogenetics.", "abstract": "Genetic testing for personalizing pharmacotherapy is bound to become an important part of clinical routine. To address associated issues with data management and quality, we are creating a semantic knowledge base for clinical pharmacogenetics. The knowledge base is made up of three components: an expressive ontology formalized in the Web Ontology Language (OWL 2 DL), a Resource Description Framework (RDF) model for capturing detailed results of manual annotation of pharmacogenomic information in drug product labels, and an RDF conversion of relevant biomedical datasets. Our work goes beyond the state of the art in that it makes both automated reasoning as well as query answering as simple as possible, and the reasoning capabilities go beyond the capabilities of previously described ontologies." }, { "pmid": "22211182", "title": "CNTRO 2.0: A Harmonized Semantic Web Ontology for Temporal Relation Inferencing in Clinical Narratives.", "abstract": "The Clinical Narrative Temporal Relation Ontology (CNTRO) has been developed for the purpose of allowing temporal information of clinical data to be semantically annotated and queried, and using inference to expose new temporal features and relations based on the semantic assertions and definitions of the temporal aspects in the ontology. 
While CNTRO provides a formal semantic foundation to leverage the semantic-web techniques, it is still necessary to arrive at a shared set of semantics and operational rules with commonly used ontologies for the time domain. This paper introduces CNTRO 2.0, which tries to harmonize CNTRO 1.0 and a list of existing time ontologies or top-level ontologies into a unified model: an OWL-based ontology of temporal relations for clinical research." }, { "pmid": "25540680", "title": "A use case study on late stent thrombosis for ontology-based temporal reasoning and analysis.", "abstract": "In this paper, we show how we have applied the Clinical Narrative Temporal Relation Ontology (CNTRO) and its associated temporal reasoning system (the CNTRO Timeline Library) to trend temporal information within medical device adverse event report narratives. 238 narratives documenting occurrences of late stent thrombosis adverse events from the Food and Drug Administration's (FDA) Manufacturing and User Facility Device Experience (MAUDE) database were annotated and evaluated using the CNTRO Timeline Library to identify, order, and calculate the duration of temporal events. The CNTRO Timeline Library had a 95% accuracy in correctly ordering events within the 238 narratives. 41 narratives included an event in which the duration was documented, and the CNTRO Timeline Library had an 80% accuracy in correctly determining these durations. 77 narratives included documentation of a duration between events, and the CNTRO Timeline Library had a 76% accuracy in determining these durations. This paper also includes an example of how this temporal output from the CNTRO ontology can be used to verify recommendations for length of drug administration, and proposes that these same tools could be applied to other medical device adverse event narratives in order to identify currently unknown temporal trends." }, { "pmid": "23076712", "title": "Time-related patient data retrieval for the case studies from the pharmacogenomics research network.", "abstract": "There are lots of question-based data elements from the pharmacogenomics research network (PGRN) studies. Many data elements contain temporal information. To semantically represent these elements so that they can be machine-processable is a challenging problem for the following reasons: (1) the designers of these studies usually do not have the knowledge of any computer modeling and query languages, so that the original data elements usually are represented in spreadsheets in human languages; and (2) the time aspects in these data elements can be too complex to be represented faithfully in a machine-understandable way. In this paper, we introduce our efforts on representing these data elements using semantic web technologies. We have developed an ontology, CNTRO, for representing clinical events and their temporal relations in the web ontology language (OWL). Here we use CNTRO to represent the time aspects in the data elements. We have evaluated 720 time-related data elements from PGRN studies. We adapted and extended the knowledge representation requirements for EliXR-TIME to categorize our data elements. A CNTRO-based SPARQL query builder has been developed to customize users' own SPARQL queries for each knowledge representation requirement. The SPARQL query builder has been evaluated with a simulated EHR triple store to ensure its functionalities." 
}, { "pmid": "21445676", "title": "Ontology-based clinical pathways with semantic rules.", "abstract": "Clinical Pathways (CP) enhance the quality of patient care, and are thus important in health management. However, there is a need to address the challenge of adaptation of treatment procedures in CP-that is, the treatment schemes must be re-modified once the clinical status and other care conditions of patients in the healthcare setting change, which happen frequently. In addition, the widespread and frequent use of Electronic Medical Records (EMR) implies an increasing need to combine CP with other healthcare information systems, especially EMR, in order to greatly improve healthcare quality and efficiency. This study proposed an ontology-based method to model CP: ontology was used to model CP domain terms; Semantic Web Rule language was used to model domain rules. In this way, the CP could reason over the rules, knowledge, and information collected, and provides automated error checking for the next steps of the treatment in runtime, which is adaptive to treatment procedures. To evaluate our method, we built a Lobectomia Pulmonalis CP and realized it based on an EMR system." }, { "pmid": "23811542", "title": "Development of an obesity management ontology based on the nursing process for the mobile-device domain.", "abstract": "BACKGROUND\nLifestyle modification is the most important factor in the management of obesity. It is therefore essential to enhance client participation in voluntary and continuous weight control.\n\n\nOBJECTIVE\nThe aim of this study was to develop an obesity management ontology for application in the mobile-device domain. We considered the concepts of client participation in behavioral modification for obesity management and focused on minimizing the amount of information exchange between the application and the database when providing tailored interventions.\n\n\nMETHODS\nAn obesity management ontology was developed in seven phases: (1) defining the scope of obesity management, (2) selecting a foundational ontology, (3) extracting the concepts, (4) assigning relationships between these concepts, (5) evaluating representative layers of ontology content, (6) representing the ontology formally with Protégé, and (7) developing a prototype application for obesity management.\n\n\nRESULTS\nBehavioral interventions, dietary advice, and physical activity were proposed as obesity management strategies. The nursing process was selected as a foundation of ontology, representing the obesity management process. We extracted 127 concepts, which included assessment data (eg, sex, body mass index, and waist circumference), inferred data to represent nursing diagnoses and evaluations (eg, degree of and reason for obesity, and success or failure of lifestyle modifications), and implementation (eg, education and advice). The relationship linking concepts were \"part of\", \"instance of\", \"derives of\", \"derives into\", \"has plan\", \"followed by\", and \"has intention\". The concepts and relationships were formally represented using Protégé. The evaluation score of the obesity management ontology was 4.5 out of 5. An Android-based obesity management application comprising both agent and client parts was developed.\n\n\nCONCLUSIONS\nWe have developed an ontology for representing obesity management with the nursing process as a foundation of ontology." 
}, { "pmid": "28739560", "title": "Ontology-Based Approach to Social Data Sentiment Analysis: Detection of Adolescent Depression Signals.", "abstract": "BACKGROUND\nSocial networking services (SNSs) contain abundant information about the feelings, thoughts, interests, and patterns of behavior of adolescents that can be obtained by analyzing SNS postings. An ontology that expresses the shared concepts and their relationships in a specific field could be used as a semantic framework for social media data analytics.\n\n\nOBJECTIVE\nThe aim of this study was to refine an adolescent depression ontology and terminology as a framework for analyzing social media data and to evaluate description logics between classes and the applicability of this ontology to sentiment analysis.\n\n\nMETHODS\nThe domain and scope of the ontology were defined using competency questions. The concepts constituting the ontology and terminology were collected from clinical practice guidelines, the literature, and social media postings on adolescent depression. Class concepts, their hierarchy, and the relationships among class concepts were defined. An internal structure of the ontology was designed using the entity-attribute-value (EAV) triplet data model, and superclasses of the ontology were aligned with the upper ontology. Description logics between classes were evaluated by mapping concepts extracted from the answers to frequently asked questions (FAQs) onto the ontology concepts derived from description logic queries. The applicability of the ontology was validated by examining the representability of 1358 sentiment phrases using the ontology EAV model and conducting sentiment analyses of social media data using ontology class concepts.\n\n\nRESULTS\nWe developed an adolescent depression ontology that comprised 443 classes and 60 relationships among the classes; the terminology comprised 1682 synonyms of the 443 classes. In the description logics test, no error in relationships between classes was found, and about 89% (55/62) of the concepts cited in the answers to FAQs mapped onto the ontology class. Regarding applicability, the EAV triplet models of the ontology class represented about 91.4% of the sentiment phrases included in the sentiment dictionary. In the sentiment analyses, \"academic stresses\" and \"suicide\" contributed negatively to the sentiment of adolescent depression.\n\n\nCONCLUSIONS\nThe ontology and terminology developed in this study provide a semantic foundation for analyzing social media data on adolescent depression. To be useful in social media data analysis, the ontology, especially the terminology, needs to be updated constantly to reflect rapidly changing terms used by adolescents in social media postings. In addition, more attributes and value sets reflecting depression-related sentiments should be added to the ontology." } ]
JMIR Mental Health
30287415
PMC6324647
10.2196/mental.9235
Interaction and Engagement with an Anxiety Management App: Analysis Using Large-Scale Behavioral Data
Background: SAM (Self-help for Anxiety Management) is a mobile phone app that provides self-help for anxiety management. Launched in 2013, the app has achieved over one million downloads on the iOS and Android platform app stores. Key features of the app are anxiety monitoring, self-help techniques, and social support via a mobile forum (“the Social Cloud”). This paper presents unique insights into eMental health app usage patterns and explores user behaviors and usage of self-help techniques. Objective: The objective of our study was to investigate behavioral engagement and to establish discernible usage patterns of the app linked to the features of anxiety monitoring, ratings of self-help techniques, and social participation. Methods: We used data mining techniques on aggregate data obtained from 105,380 registered users of the app’s cloud services. Results: Engagement generally conformed to common mobile participation patterns, with an inverted pyramid or “funnel” of engagement of increasing intensity. We further identified 4 distinct groups of behavioral engagement differentiated by levels of activity in anxiety monitoring and social feature usage. Anxiety levels among all monitoring users were markedly reduced in the first few days of usage, with some bounce-back effect thereafter. A small group of users demonstrated long-term anxiety reduction (using a robust measure); these users typically monitored for 12-110 days, made 10-30 discrete updates, and showed low levels of social participation. Conclusions: The data supported our expectation of different usage patterns, given flexible user journeys and varying commitment in an unstructured mobile phone usage setting. We nevertheless show an aggregate trend of reduction in self-reported anxiety across all minimally engaged users, while noting that, due to the anonymized dataset, we did not have information on users also enrolled in therapy or other interventions while using the app. We find several commonalities between these app-based behavioral patterns and traditional therapy engagement.
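As a concrete illustration of the kind of aggregate analysis described above, the sketch below derives per-user monitoring and social-activity counts and a first-to-last change in self-reported anxiety, then assigns a coarse engagement group. The event schema, column names, thresholds, and grouping rule are assumptions made for this example; they are not the paper's actual analysis pipeline.

```python
# Illustrative sketch only: per-user engagement summaries and first-to-last change in
# self-reported anxiety, computed from an assumed event log.
import pandas as pd

# One row per logged action: an anxiety-monitoring update (with a rating) or a social post.
events = pd.DataFrame({
    "user_id":   [1, 1, 1, 2, 2, 3],
    "kind":      ["monitor", "monitor", "social", "monitor", "monitor", "social"],
    "anxiety":   [7.0, 5.0, None, 6.0, 6.5, None],  # rating present only for monitoring events
    "timestamp": pd.to_datetime(["2017-01-01", "2017-01-10", "2017-01-11",
                                 "2017-02-01", "2017-02-03", "2017-03-01"]),
})

def summarise(group: pd.DataFrame) -> pd.Series:
    monitoring = group[group["kind"] == "monitor"].sort_values("timestamp")
    change = (monitoring["anxiety"].iloc[-1] - monitoring["anxiety"].iloc[0]
              if len(monitoring) >= 2 else float("nan"))  # negative = reported anxiety fell
    return pd.Series({
        "n_monitoring": len(monitoring),
        "n_social": int((group["kind"] == "social").sum()),
        "span_days": (group["timestamp"].max() - group["timestamp"].min()).days,
        "anxiety_change": change,
    })

summary = events.groupby("user_id").apply(summarise)

def label(row: pd.Series) -> str:
    # Coarse groups, loosely in the spirit of the four behavioral groups in the paper.
    if row["n_monitoring"] >= 2 and row["n_social"] > 0:
        return "monitoring + social"
    if row["n_monitoring"] >= 2:
        return "monitoring only"
    if row["n_social"] > 0:
        return "social only"
    return "minimal"

summary["group"] = summary.apply(label, axis=1)
print(summary)
```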
Related Work

We know of no previous work that has looked at user engagement specifically with eMental health tools. Previous similar work focusing on behavioral aspects of engagement with other kinds of service has looked at recognizable subgroups of users, engagement periods, and correlates of engagement in app user populations. In their data mining investigation of over 12 million users of a weight loss app, Serrano et al [11] identified the following 3 main subgroups based on the number of times participants weighed in and the number of food days logged: occasional users, basic users, and power users. Power users (11%; 35,649/324,649 of the sample) showed successful weight loss in 72% of cases (25,916/35,649), compared with only 5% (12,796/262,813) for occasional users (80%; 262,813/324,649). On average, power users were slightly older, more likely to have friends also using the app, and more likely to take advantage of customization features. This indicates that more engaged users are more likely to achieve positive outcomes, something that we investigate in this study.

Goyal et al [12] investigated the uptake of an app for heart disease prevention. They found that, from their population of users, just 10% (5259/52,431) showed “high engagement” as measured by the number of completed in-app challenges, with 85% (44,537/52,431) classed as low or very low engagers.

In terms of engagement periods, a study of usage of an app for drug adherence showed that 27% (3209/11,688) used the app for at least 84 days [13]. At 165 days, 15% (82/565) of users aged above 50 years were still using the app, compared with 9% (46/530) of those aged below 50 years. After a year, only 1% (6/530) of users were still engaged.

The primary focus of the previous studies presented here was to characterize longitudinal engagement and use it to understand the user groups, alongside measuring usage of different features as an attribute of engagement. These studies serve as useful reference points in validating the metrics that we aimed to employ in our analysis.
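The studies above summarize engagement in two recurring ways: activity-count subgroups (for example, Serrano et al's occasional, basic, and power users) and retention, the proportion of users still active after a given number of days. The sketch below shows both computations under assumed thresholds and field names; it is illustrative only and does not reproduce any cited study's method.

```python
# Illustrative only: activity-count subgroups (cf. occasional/basic/power users) and a
# simple retention measure at day N. Thresholds and fields are assumptions for the example.

# Per-user aggregates: number of logged activities and days between first and last use.
users = {
    "u1": {"n_logs": 2,   "days_active": 3},
    "u2": {"n_logs": 40,  "days_active": 90},
    "u3": {"n_logs": 300, "days_active": 200},
}

def subgroup(n_logs: int) -> str:
    if n_logs >= 100:
        return "power"
    if n_logs >= 10:
        return "basic"
    return "occasional"

def retention(day: int) -> float:
    """Fraction of users whose recorded usage span reaches at least `day` days."""
    return sum(u["days_active"] >= day for u in users.values()) / len(users)

print({uid: subgroup(u["n_logs"]) for uid, u in users.items()})
print({d: round(retention(d), 2) for d in (7, 84, 165, 365)})
```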
[ "27458547", "23884863", "26162113", "20123251", "24486521", "22057287", "22215865", "28663164", "27301853", "27012937", "24194946", "5474283", "27856407", "21130939", "28768607", "23285271", "26165748", "25189522", "8370864", "26407772", "22903885", "8370863", "26235730", "3516036", "22275845", "27428034" ]
[ { "pmid": "27458547", "title": "A systematic review of reviews on the prevalence of anxiety disorders in adult populations.", "abstract": "BACKGROUND\nA fragmented research field exists on the prevalence of anxiety disorders. Here, we present the results of a systematic review of reviews on this topic. We included the highest quality studies to inform practice and policy on this issue.\n\n\nMETHOD\nUsing PRISMA methodology, extensive electronic and manual citation searches were performed to identify relevant reviews. Screening, data extraction, and quality assessment were undertaken by two reviewers. Inclusion criteria consisted of systematic reviews or meta-analyses on the prevalence of anxiety disorders that fulfilled at least half of the AMSTAR quality criteria.\n\n\nRESULTS\nWe identified a total of 48 reviews and described the prevalence of anxiety across population subgroups and settings, as reported by these studies. Despite the high heterogeneity of prevalence estimates across primary studies, there was emerging and compelling evidence of substantial prevalence of anxiety disorders generally (3.8-25%), and particularly in women (5.2-8.7%); young adults (2.5-9.1%); people with chronic diseases (1.4-70%); and individuals from Euro/Anglo cultures (3.8-10.4%) versus individuals from Indo/Asian (2.8%), African (4.4%), Central/Eastern European (3.2%), North African/Middle Eastern (4.9%), and Ibero/Latin cultures (6.2%).\n\n\nCONCLUSIONS\nThe prevalence of anxiety disorders is high in population subgroups across the globe. Recent research has expanded its focus to Asian countries, an increasingly greater number of physical and psychiatric conditions, and traumatic events associated with anxiety. Further research on illness trajectories and anxiety levels pre- and post-treatment is needed. Few studies have been conducted in developing and under-developed parts of the world and have little representation in the global literature." }, { "pmid": "23884863", "title": "The size, burden and cost of disorders of the brain in the UK.", "abstract": "AIM\nThe aim of this paper is to increase awareness of the prevalence and cost of psychiatric and neurological disorders (brain disorders) in the UK.\n\n\nMETHOD\nUK data for 18 brain disorders were extracted from a systematic review of European epidemiological data and prevalence rates and the costs of each disorder were summarized (2010 values).\n\n\nRESULTS\nThere were approximately 45 million cases of brain disorders in the UK, with a cost of €134 billion per annum. The most prevalent were headache, anxiety disorders, sleep disorders, mood disorders and somatoform disorders. However, the five most costly disorders (€ million) were: dementia: €22,164; psychotic disorders: €16,717; mood disorders: €19,238; addiction: €11,719; anxiety disorders: €11,687. Apart from psychosis, these five disorders ranked amongst those with the lowest direct medical expenditure per subject (<€3000). The approximate breakdown of costs was: 50% indirect costs, 25% direct non-medical and 25% direct healthcare costs.\n\n\nDISCUSSION\nThe prevalence and cost of UK brain disorders is likely to increase given the ageing population. Translational neurosciences research has the potential to develop more effective treatments but is underfunded. Addressing the clinical and economic challenges posed by brain disorders requires a coordinated effort at an EU and national level to transform the current scientific, healthcare and educational agenda." 
}, { "pmid": "26162113", "title": "Rebooting Psychotherapy Research and Practice to Reduce the Burden of Mental Illness.", "abstract": "Psychological interventions to treat mental health issues have developed remarkably in the past few decades. Yet this progress often neglects a central goal-namely, to reduce the burden of mental illness and related conditions. The need for psychological services is enormous, and only a small proportion of individuals in need actually receive treatment. Individual psychotherapy, the dominant model of treatment delivery, is not likely to be able to meet this need. Despite advances, mental health professionals are not likely to reduce the prevalence, incidence, and burden of mental illness without a major shift in intervention research and clinical practice. A portfolio of models of delivery will be needed. We illustrate various models of delivery to convey opportunities provided by technology, special settings and nontraditional service providers, self-help interventions, and the media. Decreasing the burden of mental illness also will depend on integrating prevention and treatment, developing assessment and a national database for monitoring mental illness and its burdens, considering contextual issues that influence delivery of treatment, and addressing potential tensions within the mental health professions. Finally, opportunities for multidisciplinary collaborations are discussed as key considerations for reducing the burden of mental illness." }, { "pmid": "20123251", "title": "Mental health problems and help-seeking behavior among college students.", "abstract": "Mental disorders are as prevalent among college students as same-aged non-students, and these disorders appear to be increasing in number and severity. The purpose of this report is to review the research literature on college student mental health, while also drawing comparisons to the parallel literature on the broader adolescent and young adult populations." }, { "pmid": "24486521", "title": "Changes in attitudes toward seeking mental health services: a 40-year cross-temporal meta-analysis.", "abstract": "Although rates of treatment seeking for mental health problems are increasing, this increase is driven primarily by antidepressant medication use, and a majority of individuals with mental health problems remain untreated. Helpseeking attitudes are thought to be a key barrier to mental health service use, although little is known about whether such attitudes have changed over time. Research on this topic is mixed with respect to whether helpseeking attitudes have become more or less positive. The aim of the current study was to help clarify this issue using a cross-temporal meta-analysis of scores on Fischer and Turner's (1970) helpseeking attitude measure among university students (N=6796) from 1968 to 2008. Results indicated that attitudes have become increasingly negative over time, r(44)=-0.53, p<0.01, with even stronger negative results when the data are weighted (w) for sample size and study variance, r(44)=-0.63, p<.001. This disconcerting finding may reflect the greater emphasis of Fischer and Turner's scale toward helpseeking for psychotherapy. Such attitudes may be increasingly negative as a result of the unintended negative effects of efforts in recent decades to reduce stigma and market biological therapies by medicalizing mental health problems." 
}, { "pmid": "22057287", "title": "Anxiety online: a virtual clinic: preliminary outcomes following completion of five fully automated treatment programs for anxiety disorders and symptoms.", "abstract": "BACKGROUND\nThe development of e-mental health interventions to treat or prevent mental illness and to enhance wellbeing has risen rapidly over the past decade. This development assists the public in sidestepping some of the obstacles that are often encountered when trying to access traditional face-to-face mental health care services.\n\n\nOBJECTIVE\nThe objective of our study was to investigate the posttreatment effectiveness of five fully automated self-help cognitive behavior e-therapy programs for generalized anxiety disorder (GAD), panic disorder with or without agoraphobia (PD/A), obsessive-compulsive disorder (OCD), posttraumatic stress disorder (PTSD), and social anxiety disorder (SAD) offered to the international public via Anxiety Online, an open-access full-service virtual psychology clinic for anxiety disorders.\n\n\nMETHODS\nWe used a naturalistic participant choice, quasi-experimental design to evaluate each of the five Anxiety Online fully automated self-help e-therapy programs. Participants were required to have at least subclinical levels of one of the anxiety disorders to be offered the associated disorder-specific fully automated self-help e-therapy program. These programs are offered free of charge via Anxiety Online.\n\n\nRESULTS\nA total of 225 people self-selected one of the five e-therapy programs (GAD, n = 88; SAD, n = 50; PD/A, n = 40; PTSD, n = 30; OCD, n = 17) and completed their 12-week posttreatment assessment. Significant improvements were found on 21/25 measures across the five fully automated self-help programs. At postassessment we observed significant reductions on all five anxiety disorder clinical disorder severity ratings (Cohen d range 0.72-1.22), increased confidence in managing one's own mental health care (Cohen d range 0.70-1.17), and decreases in the total number of clinical diagnoses (except for the PD/A program, where a positive trend was found) (Cohen d range 0.45-1.08). In addition, we found significant improvements in quality of life for the GAD, OCD, PTSD, and SAD e-therapy programs (Cohen d range 0.11-0.96) and significant reductions relating to general psychological distress levels for the GAD, PD/A, and PTSD e-therapy programs (Cohen d range 0.23-1.16). Overall, treatment satisfaction was good across all five e-therapy programs, and posttreatment assessment completers reported using their e-therapy program an average of 395.60 (SD 272.2) minutes over the 12-week treatment period.\n\n\nCONCLUSIONS\nOverall, all five fully automated self-help e-therapy programs appear to be delivering promising high-quality outcomes; however, the results require replication." }, { "pmid": "22215865", "title": "Efficacy, cost-effectiveness and acceptability of self-help interventions for anxiety disorders: systematic review.", "abstract": "BACKGROUND\nSelf-help interventions for psychiatric disorders represent an increasingly popular alternative to therapist-administered psychological therapies, offering the potential of increased access to cost-effective treatment.\n\n\nAIMS\nTo determine the efficacy, cost-effectiveness and acceptability of self-help interventions for anxiety disorders.\n\n\nMETHOD\nRandomised controlled trials (RCTs) of self-help interventions for anxiety disorders were identified by searching nine online databases. 
Studies were grouped according to disorder and meta-analyses were conducted where sufficient data were available. Overall meta-analyses of self-help v. waiting list and therapist-administered treatment were also undertaken. Methodological quality was assessed independently by two researchers according to criteria set out by the Cochrane Collaboration.\n\n\nRESULTS\nThirty-one RCTs met inclusion criteria for the review. Results of the overall meta-analysis comparing self-help with waiting list gave a significant effect size of 0.84 in favour of self-help. Comparison of self-help with therapist-administered treatments revealed a significant difference in favour of the latter with an effect size of 0.34. The addition of guidance and the presentation of multimedia or web-based self-help materials improved treatment outcome.\n\n\nCONCLUSIONS\nSelf-help interventions appear to be an effective way of treating individuals diagnosed with social phobia and panic disorder. Further research is required to evaluate the cost-effectiveness and acceptability of these interventions." }, { "pmid": "28663164", "title": "Assessing User Engagement of an mHealth Intervention: Development and Implementation of the Growing Healthy App Engagement Index.", "abstract": "BACKGROUND\nChildhood obesity is an ongoing problem in developed countries that needs targeted prevention in the youngest age groups. Children in socioeconomically disadvantaged families are most at risk. Mobile health (mHealth) interventions offer a potential route to target these families because of its relatively low cost and high reach. The Growing healthy program was developed to provide evidence-based information on infant feeding from birth to 9 months via app or website. Understanding user engagement with these media is vital to developing successful interventions. Engagement is a complex, multifactorial concept that needs to move beyond simple metrics.\n\n\nOBJECTIVE\nThe aim of our study was to describe the development of an engagement index (EI) to monitor participant interaction with the Growing healthy app. The index included a number of subindices and cut-points to categorize engagement.\n\n\nMETHODS\nThe Growing program was a feasibility study in which 300 mother-infant dyads were provided with an app which included 3 push notifications that was sent each week. Growing healthy participants completed surveys at 3 time points: baseline (T1) (infant age ≤3 months), infant aged 6 months (T2), and infant aged 9 months (T3). In addition, app usage data were captured from the app. The EI was adapted from the Web Analytics Demystified visitor EI. Our EI included 5 subindices: (1) click depth, (2) loyalty, (3) interaction, (4) recency, and (5) feedback. The overall EI summarized the subindices from date of registration through to 39 weeks (9 months) from the infant's date of birth. Basic descriptive data analysis was performed on the metrics and components of the EI as well as the final EI score. Group comparisons used t tests, analysis of variance (ANOVA), Mann-Whitney, Kruskal-Wallis, and Spearman correlation tests as appropriate. Consideration of independent variables associated with the EI score were modeled using linear regression models.\n\n\nRESULTS\nThe overall EI mean score was 30.0% (SD 11.5%) with a range of 1.8% - 57.6%. The cut-points used for high engagement were scores greater than 37.1% and for poor engagement were scores less than 21.1%. 
Significant explanatory variables of the EI score included: parity (P=.005), system type including \"app only\" users or \"both\" app and email users (P<.001), recruitment method (P=.02), and baby age at recruitment (P=.005).\n\n\nCONCLUSIONS\nThe EI provided a comprehensive understanding of participant behavior with the app over the 9-month period of the Growing healthy program. The use of the EI in this study demonstrates that rich and useful data can be collected and used to inform assessments of the strengths and weaknesses of the app and in turn inform future interventions." }, { "pmid": "27301853", "title": "Mining Health App Data to Find More and Less Successful Weight Loss Subgroups.", "abstract": "BACKGROUND\nMore than half of all smartphone app downloads involve weight, diet, and exercise. If successful, these lifestyle apps may have far-reaching effects for disease prevention and health cost-savings, but few researchers have analyzed data from these apps.\n\n\nOBJECTIVE\nThe purposes of this study were to analyze data from a commercial health app (Lose It!) in order to identify successful weight loss subgroups via exploratory analyses and to verify the stability of the results.\n\n\nMETHODS\nCross-sectional, de-identified data from Lose It! were analyzed. This dataset (n=12,427,196) was randomly split into 24 subsamples, and this study used 3 subsamples (combined n=972,687). Classification and regression tree methods were used to explore groupings of weight loss with one subsample, with descriptive analyses to examine other group characteristics. Data mining validation methods were conducted with 2 additional subsamples.\n\n\nRESULTS\nIn subsample 1, 14.96% of users lost 5% or more of their starting body weight. Classification and regression tree analysis identified 3 distinct subgroups: \"the occasional users\" had the lowest proportion (4.87%) of individuals who successfully lost weight; \"the basic users\" had 37.61% weight loss success; and \"the power users\" achieved the highest percentage of weight loss success at 72.70%. Behavioral factors delineated the subgroups, though app-related behavioral characteristics further distinguished them. Results were replicated in further analyses with separate subsamples.\n\n\nCONCLUSIONS\nThis study demonstrates that distinct subgroups can be identified in \"messy\" commercial app data and the identified subgroups can be replicated in independent samples. Behavioral factors and use of custom app features characterized the subgroups. Targeting and tailoring information to particular subgroups could enhance weight loss success. Future studies should replicate data mining analyses to increase methodology rigor." }, { "pmid": "27012937", "title": "Uptake of a Consumer-Focused mHealth Application for the Assessment and Prevention of Heart Disease: The <30 Days Study.", "abstract": "BACKGROUND\nLifestyle behavior modification can reduce the risk of cardiovascular disease, one of the leading causes of death worldwide, by up to 80%. We hypothesized that a dynamic risk assessment and behavior change tool delivered as a mobile app, hosted by a reputable nonprofit organization, would promote uptake among community members. We also predicted that the uptake would be influenced by incentives offered for downloading the mobile app.\n\n\nOBJECTIVE\nThe primary objective of our study was to evaluate the engagement levels of participants using the novel risk management app. 
The secondary aim was to assess the effect of incentives on the overall uptake and usage behaviors.\n\n\nMETHODS\nWe publicly launched the app through the iTunes App Store and collected usage data over 5 months. Aggregate information included population-level data on download rates, use, risk factors, and user demographics. We used descriptive statistics to identify usage patterns, t tests, and analysis of variance to compare group means. Correlation and regression analyses determined the relationship between usage and demographic variables.\n\n\nRESULTS\nWe captured detailed mobile usage data from 69,952 users over a 5-month period, of whom 23,727 (33.92%) were registered during a 1-month AIR MILES promotion. Of those who completed the risk assessment, 73.92% (42,380/57,330) were female, and 59.38% (34,042/57,330) were <30 years old. While the older demographic had significantly lower uptake than the younger demographic, with only 8.97% of users aged ≥51 years old downloading the app, the older demographic completed more challenges than their younger counterparts (F8, 52,422 = 55.10, P<.001). In terms of engagement levels, 84.94% (44,537/52,431) of users completed 1-14 challenges over a 30-day period, and 10.03% (5,259/52,431) of users completed >22 challenges. On average, users in the incentives group completed slightly more challenges during the first 30 days of the intervention (mean 7.9, SD 0.13) than those in the nonincentives group (mean 6.1, SD 0.06, t28870=-12.293, P<.001, d=0.12, 95% CI -2.02 to -1.47). The regression analysis suggested that sex, age group, ethnicity, having 5 of the risk factors (all but alcohol), incentives, and the number of family histories were predictors of the number of challenges completed by a user (F14, 56,538 = 86.644, P<.001, adjusted R(2) = .021).\n\n\nCONCLUSION\nWhile the younger population downloaded the app the most, the older population demonstrated greater sustained engagement. Behavior change apps have the potential to reach a targeted population previously thought to be uninterested in or unable to use mobile apps. The development of such apps should assume that older adults will in fact engage if the behavior change elements are suitably designed, integrated into daily routines, and tailored. Incentives may be the stepping-stone that is needed to guide the general population toward preventative tools and promote sustained behavior change." }, { "pmid": "24194946", "title": "User profiles of a smartphone application to support drug adherence--experiences from the iNephro project.", "abstract": "PURPOSE\nOne of the key problems in the drug therapy of patients with chronic conditions is drug adherence. In 2010 the initiative iNephro was launched (www.inephro.de). A software to support regular and correct drug intake was developed for a smartphone platform (iOS). The study investigated whether and how smartphone users deployed such an application.\n\n\nMETHODS\nTogether with cooperating partners the mobile application \"Medikamentenplan\" (\"Medication Plan\") was developed. Users are able to keep and alter a list of their regular medication. A memory function supports regular intake. The application can be downloaded free of charge from the App Store™ by Apple™. After individual consent of users from December 2010 to April 2012 2042338 actions were recorded and analysed from the downloaded applications. Demographic data were collected from 2279 users with a questionnaire.\n\n\nRESULTS\nOverall the application was used by 11688 smartphone users. 
29% (3406/11688) used it at least once a week for at least four weeks. 27% (3209/11688) used the application for at least 84 days. 68% (1554/2279) of users surveyed were male, the stated age of all users was between 6-87 years (mean 44). 74% of individuals (1697) declared to be suffering from cardiovascular disease, 13% (292) had a previous history of transplantation, 9% (205) were suffering from cancer, 7% (168) reported an impaired renal function and 7% (161) suffered from diabetes mellitus. 69% (1568) of users were on <6 different medications, 9% (201) on 6 - 10 and 1% (26) on more than 10.\n\n\nCONCLUSION\nA new smartphone application, which supports drug adherence, was used regularly by chronically ill users with a wide range of diseases over a longer period of time. The majority of users so far were middle-aged and male." }, { "pmid": "27856407", "title": "Self-Monitoring Utilization Patterns Among Individuals in an Incentivized Program for Healthy Behaviors.", "abstract": "BACKGROUND\nThe advent of digital technology has enabled individuals to track meaningful biometric data about themselves. This novel capability has spurred nontraditional health care organizations to develop systems that aid users in managing their health. One of the most prolific systems is Walgreens Balance Rewards for healthy choices (BRhc) program, an incentivized, Web-based self-monitoring program.\n\n\nOBJECTIVE\nThis study was performed to evaluate health data self-tracking characteristics of individuals enrolled in the Walgreens' BRhc program, including the impact of manual versus automatic data entries through a supported device or apps.\n\n\nMETHODS\nWe obtained activity tracking data from a total of 455,341 BRhc users during 2014. Upon identifying users with sufficient follow-up data, we explored temporal trends in user participation.\n\n\nRESULTS\nThirty-four percent of users quit participating after a single entry of an activity. Among users who tracked at least two activities on different dates, the median length of participating was 8 weeks, with an average of 5.8 activities entered per week. Furthermore, users who participated for at least twenty weeks (28.3% of users; 33,078/116,621) consistently entered 8 to 9 activities per week. The majority of users (77%; 243,774/315,744) recorded activities through manual data entry alone. However, individuals who entered activities automatically through supported devices or apps participated roughly four times longer than their manual activity-entering counterparts (average 20 and 5 weeks, respectively; P<.001).\n\n\nCONCLUSIONS\nThis study provides insights into the utilization patterns of individuals participating in an incentivized, Web-based self-monitoring program. Our results suggest automated health tracking could significantly improve long-term health engagement." }, { "pmid": "21130939", "title": "A review of technology-assisted self-help and minimal contact therapies for anxiety and depression: is human contact necessary for therapeutic efficacy?", "abstract": "Technology-based self-help and minimal contact therapies have been proposed as effective and low-cost interventions for anxiety and mood disorders. The present article reviews the literature published before 2010 on these treatments for anxiety and depression using self-help and decreased therapist-contact interventions. 
Treatment studies are examined by disorder as well as amount of therapist contact, ranging from self-administered therapy and predominantly self-help interventions to minimal contact therapy where the therapist is actively involved in treatment but to a lesser degree than traditional therapy and predominantly therapist-administered treatments involving regular contact with a therapist for a typical number of sessions. In the treatment of anxiety disorders, it is concluded that self-administered and predominantly self-help interventions are most effective for motivated clients. Conversely, minimal-contact therapies have demonstrated efficacy for the greatest variety of anxiety diagnoses when accounting for both attrition and compliance. Additionally, predominantly self-help computer-based cognitive and behavioral interventions are efficacious in the treatment of subthreshold mood disorders. However, therapist-assisted treatments remain optimal in the treatment of clinical levels of depression. Although the most efficacious amount of therapist contact varies by disorder, computerized treatments have been shown to be a less-intensive, cost-effective way to deliver empirically validated treatments for a variety of psychological problems." }, { "pmid": "28768607", "title": "Peer Communication in Online Mental Health Forums for Young People: Directional and Nondirectional Support.", "abstract": "BACKGROUND\nThe Internet has the potential to help young people by reducing the stigma associated with mental health and enabling young people to access services and professionals which they may not otherwise access. Online support can empower young people, help them develop new online friendships, share personal experiences, communicate with others who understand, provide information and emotional support, and most importantly help them feel less alone and normalize their experiences in the world.\n\n\nOBJECTIVE\nThe aim of the research was to gain an understanding of how young people use an online forum for emotional and mental health issues. Specifically, the project examined what young people discuss and how they seek support on the forum (objective 1). Furthermore, it looked at how the young service users responded to posts to gain an understanding of how young people provided each other with peer-to-peer support (objective 2).\n\n\nMETHODS\nKooth is an online counseling service for young people aged 11-25 years and experiencing emotional and mental health problems. It is based in the United Kingdom and provides support that is anonymous, confidential, and free at the point of delivery. Kooth provided the researchers with all the online forum posts between a 2-year period, which resulted in a dataset of 622 initial posts and 3657 initial posts with responses. Thematic analysis was employed to elicit key themes from the dataset.\n\n\nRESULTS\nThe findings support the literature that online forums provide young people with both informational and emotional support around a wide array of topics. The findings from this large dataset also reveal that this informational or emotional support can be viewed as directive or nondirective. The nondirective approach refers to when young people provide others with support by sharing their own experiences. These posts do not include explicit advice to act in a particular way, but the sharing process is hoped to be of use to the poster. 
The directive approach, in contrast, involves individuals making an explicit suggestion of what they believe the poster should do.\n\n\nCONCLUSIONS\nThis study adds to the research exploring what young people discuss within online forums and provides insights into how these communications take place. Furthermore, it highlights the challenge that organizations may encounter in mediating support that is multidimensional in nature (informational-emotional, directive-nondirective)." }, { "pmid": "23285271", "title": "The effectiveness of an online support group for members of the community with depression: a randomised controlled trial.", "abstract": "BACKGROUND\nInternet support groups (ISGs) are popular, particularly among people with depression, but there is little high quality evidence concerning their effectiveness.\n\n\nAIM\nThe study aimed to evaluate the efficacy of an ISG for reducing depressive symptoms among community members when used alone and in combination with an automated Internet-based psychotherapy training program.\n\n\nMETHOD\nVolunteers with elevated psychological distress were identified using a community-based screening postal survey. Participants were randomised to one of four 12-week conditions: depression Internet Support Group (ISG), automated depression Internet Training Program (ITP), combination of the two (ITP+ISG), or a control website with delayed access to e-couch at 6 months. Assessments were conducted at baseline, post-intervention, 6 and 12 months.\n\n\nRESULTS\nThere was no change in depressive symptoms relative to control after 3 months of exposure to the ISG. However, both the ISG alone and the combined ISG+ITP group showed significantly greater reduction in depressive symptoms at 6 and 12 months follow-up than the control group. The ITP program was effective relative to control at post-intervention but not at 6 months.\n\n\nCONCLUSIONS\nISGs for depression are promising and warrant further empirical investigation.\n\n\nTRIAL REGISTRATION\nControlled-Trials.com ISRCTN65657330." }, { "pmid": "26165748", "title": "Minimal clinically important difference on the Beck Depression Inventory--II according to the patient's perspective.", "abstract": "BACKGROUND\nThe Beck Depression Inventory, 2nd edition (BDI-II) is widely used in research on depression. However, the minimal clinically important difference (MCID) is unknown. MCID can be estimated in several ways. Here we take a patient-centred approach, anchoring the change on the BDI-II to the patient's global report of improvement.\n\n\nMETHOD\nWe used data collected (n = 1039) from three randomized controlled trials for the management of depression. Improvement on a 'global rating of change' question was compared with changes in BDI-II scores using general linear modelling to explore baseline dependency, assessing whether MCID is best measured in absolute terms (i.e. difference) or as percent reduction in scores from baseline (i.e. ratio), and receiver operator characteristics (ROC) to estimate MCID according to the optimal threshold above which individuals report feeling 'better'.\n\n\nRESULTS\nImprovement in BDI-II scores associated with reporting feeling 'better' depended on initial depression severity, and statistical modelling indicated that MCID is best measured on a ratio scale as a percentage reduction of score. We estimated a MCID of a 17.5% reduction in scores from baseline from ROC analyses. 
The corresponding estimate for individuals with longer duration depression who had not responded to antidepressants was higher at 32%.\n\n\nCONCLUSIONS\nMCID on the BDI-II is dependent on baseline severity, is best measured on a ratio scale, and the MCID for treatment-resistant depression is larger than that for more typical depression. This has important implications for clinical trials and practice." }, { "pmid": "25189522", "title": "Client preferences affect treatment satisfaction, completion, and clinical outcome: a meta-analysis.", "abstract": "We conducted a meta-analysis on the effects of client preferences on treatment satisfaction, completion, and clinical outcome. Our search of the literature resulted in 34 empirical articles describing 32 unique clinical trials that either randomized some clients to an active choice condition (shared decision making condition or choice of treatment) or assessed client preferences. Clients who were involved in shared decision making, chose a treatment condition, or otherwise received their preferred treatment evidenced higher treatment satisfaction (ESd=.34; p<.001), increased completion rates (ESOR=1.37; ESd=.17; p<.001), and superior clinical outcome (ESd=.15; p<.0001), compared to clients who were not involved in shared decision making, did not choose a treatment condition, or otherwise did not receive their preferred treatment. Although the effect sizes are modest in magnitude, they were generally consistent across several potential moderating variables including study design (preference versus active choice), psychoeducation (informed versus uninformed), setting (inpatient versus outpatient), client diagnosis (mental health versus other), and unit of randomization (client versus provider). Our findings highlight the clinical benefit of assessing client preferences, providing treatment choices when two or more efficacious options are available, and involving clients in treatment-related decisions when treatment options are not available." }, { "pmid": "8370864", "title": "A phase model of psychotherapy outcome: causal mediation of change.", "abstract": "A 3-phase model of psychotherapy outcome is proposed that entails progressive improvement of subjectively experienced well-being, reduction in symptomatology, and enhancement of life functioning. The model also predicts that movement into a later phase of treatment depends on whether progress has been made in an earlier phase. Thus, clinical improvement in subjective well-being potentiates symptomatic improvement, and clinical reduction in symptomatic distress potentiates life-functioning improvement. A large sample of psychotherapy patients provided self-reports of subjective well-being, symptomatic distress, and life functioning before beginning individual psychotherapy and after Sessions 2, 4, and 17 when possible. Changes in well-being, symptomatic distress, and life functioning means over this period were consistent with the 3-phase model. Measures of patient status on these 3 variables were converted into dichotomous improvement-nonimprovement scores between intake and each of Sessions 2, 4, and 17. An analysis of 2 x 2 cross-classification tables generated from these dichotomous measures suggested that improvement in well-being precedes and is a probabilistically necessary condition for reduction in symptomatic distress and that symptomatic improvement precedes and is a probabilistically necessary condition for improvement in life functioning." 
}, { "pmid": "26407772", "title": "How important are the common factors in psychotherapy? An update.", "abstract": "The common factors have a long history in the field of psychotherapy theory, research and practice. To understand the evidence supporting them as important therapeutic elements, the contextual model of psychotherapy is outlined. Then the evidence, primarily from meta-analyses, is presented for particular common factors, including alliance, empathy, expectations, cultural adaptation, and therapist differences. Then the evidence for four factors related to specificity, including treatment differences, specific ingredients, adherence, and competence, is presented. The evidence supports the conclusion that the common factors are important for producing the benefits of psychotherapy." }, { "pmid": "22903885", "title": "An interview study investigating experiences of psychological change without psychotherapy.", "abstract": "OBJECTIVES\nGiven that most people who experience psychological distress resolve this distress without the assistance of psychotherapy, the study sought to increase our understanding of naturally occurring change including the facilitators of this change.\n\n\nDESIGN\nThe study sought to replicate and extend earlier work in this area. The design involved recruiting participants who had experienced some form of psychological distress and had resolved this distress without accessing psychotherapy services.\n\n\nMETHODS\nQualitative methods were used for this study because the lived experience of the participants was of interest. Semi-structured interviews were used following a pro forma developed in earlier work. Interpretive Phenomenological Analysis was the analytical method adopted for this study to identify themes and patterns in the transcripts of the interviews of the participants.\n\n\nRESULTS\nData analysis identified the themes of identity, connection, threshold, desire to change, change as a sudden and gradual process, and thinking process. An unexpected finding was the subjectivity associated with deciding whether or not a problem had actually resolved.\n\n\nCONCLUSIONS\nThe results are discussed in terms of their implications for clinical practice including the apparent importance of people reaching an emotional threshold prior to change. A sense of identity also appears to be important in change experiences." }, { "pmid": "8370863", "title": "The shape of change in psychotherapy: longitudinal assessment of personal problems.", "abstract": "We propose 4 parameters that describe the course of change in the subjective intensity of personal problems during psychotherapy: (a) the problem's initial severity; (b) its rate of change (deterioration or improvement); (c) its instability (day-to-day variability in intensity); and (d) its curve (change in the rate of change during treatment). We constructed indexes of these parameters for 10 individualized personal problems rated 3 times per week by each of 40 clients (most were diagnosed as depressed) over the course of their 16-session treatment and associated assessment periods. Initial severity predicted problems' reported salience to clients. The rate of change parameter was correlated (across clients) with traditional pretreatment to posttreatment outcome measures. Instability was high, and problems dealing with tension symptoms and mood were more unstable than were problems dealing with relationships or self-esteem. 
Cutting across problem content were large individual differences among clients in the patterns of change." }, { "pmid": "26235730", "title": "Trajectories of Change in Psychotherapy.", "abstract": "OBJECTIVE\nThe current study used multilevel growth mixture modeling to ascertain groups of patients who had similar trajectories in their psychological functioning over the course of short-term treatment.\n\n\nMETHOD\nA total of 10,854 clients completed a measure of psychological functioning before each session. Psychological functioning was measured by the Behavioral Health Measure, which is an index of well-being, symptoms, and life-functioning. Clients who attended 5 to 25 sessions at 46 different university/college counseling centers and one community mental health center were included in this study. Client diagnoses and the specific treatment approaches were not known.\n\n\nRESULTS\nA 3-class solution was a good fit to the data. Clients in classes 1 and 3 had moderate severity in their initial psychological functioning scores, and clients in class 2 had more distressed psychological functioning scores. The trajectory for clients in class 1 was typified by early initial change, followed by a plateau, and then another gain in psychological functioning later in treatment. The trajectory for clients in class 2 demonstrated an initial decrease in functioning, followed by a rapid increase, and then a plateau. Last, the clients in class 3 had a steady increase of psychological functioning, in a more linear manner.\n\n\nCONCLUSION\nThe trajectories of change for clients are diverse, and they can ebb and flow more than traditional dose-effect and good-enough level models may suggest." }, { "pmid": "22275845", "title": "A lifespan view of anxiety disorders.", "abstract": "Neurodevelopmental changes over the lifespan, from childhood through adulthood into old age, have important implications for the onset, presentation, course, and treatment of anxiety disorders. This article presents data on anxiety disorders as they appear in older adults, as compared with earlier in life. In this article, we focus on aging-related changes in the epidemiology, presentation, and treatment of anxiety disorders. Also, this article describes some of the gaps and limitations in our understanding and suggests research directions that may elucidate the mechanisms of anxiety disorder development later in life. Finally we describe optimal management of anxiety disorders across the lifespan, in \"eight simple steps\" for practitioners." }, { "pmid": "27428034", "title": "Smartphone Applications for Mental Health.", "abstract": "Many adolescents and adults do not seek treatment for mental health symptoms. Smartphone applications (apps) may assist individuals with mental health concerns in alleviating symptoms or increasing understanding. This study seeks to characterize apps readily available to smartphone users seeking mental health information and/or support. Ten key terms were searched in the Apple iTunes and Google Play stores: mental health, depression, anxiety, schizophrenia, bipolar, trauma, trauma in schools, post traumatic stress disorder (PTSD), child trauma, and bullying. A content analysis of the first 20 application descriptions retrieved per category was conducted. Out of 300 nonduplicate applications, 208 (70%) were relevant to search topic, mental health or stress. The most common purported purpose for the apps was symptom relief (41%; n = 85) and general mental health education (18%; n = 37). 
The most frequently mentioned approaches to improving mental health were those that may benefit only milder symptoms such as relaxation (21%; n = 43). Most app descriptions did not include information to substantiate stated effectiveness of the application (59%; n = 123) and had no mention of privacy or security (89%; n = 185). Due to uncertainty of the helpfulness of readily available mental health applications, clinicians working with mental health patients should inquire about and provide guidance on application use, and patients should have access to ways to assess the potential utility of these applications. Strategic policy and research developments are likely needed to equip patients with applications for mental health, which are patient centered and evidence based." } ]
BMC Medical Informatics and Decision Making
30626381
PMC6325718
10.1186/s12911-018-0730-7
Utilizing dynamic treatment information for MACE prediction of acute coronary syndrome
Background: Main adverse cardiac events (MACE) are composite endpoints for assessing the safety and efficacy of treatment processes for acute coronary syndrome (ACS) patients. Timely prediction of MACE is highly valuable for improving the effects of ACS treatments. Most existing tools predict MACE mainly from static patient features and neglect dynamic treatment information during learning. Methods: We address this challenge by developing a deep learning-based approach that utilizes a large volume of heterogeneous electronic health record (EHR) data to predict MACE after ACS. Specifically, we obtain a deep representation of dynamic treatment features from EHR data using a bidirectional recurrent neural network; the extracted latent representation of treatment features is then used to predict whether a patient will experience MACE during his or her hospitalization. Results: We validate the effectiveness of our approach on a clinical dataset containing 2930 ACS patient samples with 232 static feature types and 2194 dynamic feature types. Our best model for predicting MACE after ACS remains robust, reaching 0.713 AUC and 0.764 Accuracy, with AUC (Accuracy) gains of over 11.9% (1.2%) relative to logistic regression and 1.9% (7.5%) relative to a boosted resampling model presented in our previous work. The results are statistically significant. Conclusions: We hypothesize that our proposed model, adapted to leverage dynamic treatment information in EHR data, boosts the performance of MACE prediction for ACS and can readily meet the demands of clinical prediction for other diseases from a large volume of EHR data in an open-ended fashion.
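A minimal sketch of the kind of bidirectional recurrent architecture described above is given below; this is an assumed configuration rather than the authors' implementation: the sequence length, GRU width, dense-layer size, and variable names are placeholders, and only the feature counts (232 static, 2194 dynamic) come from the abstract.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_STEPS = 50      # hypothetical number of treatment time steps per admission
N_DYNAMIC = 2194    # dynamic feature types reported above
N_STATIC = 232      # static feature types reported above

# Dynamic treatment events enter as a zero-padded sequence of feature vectors.
dyn_in = layers.Input(shape=(MAX_STEPS, N_DYNAMIC), name="dynamic_treatments")
stat_in = layers.Input(shape=(N_STATIC,), name="static_features")

x = layers.Masking(mask_value=0.0)(dyn_in)          # ignore padded time steps
x = layers.Bidirectional(layers.GRU(64))(x)         # latent treatment representation

h = layers.Concatenate()([x, stat_in])              # fuse with the static patient profile
h = layers.Dense(64, activation="relu")(h)
out = layers.Dense(1, activation="sigmoid", name="mace_risk")(h)

model = Model([dyn_in, stat_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Smoke test with random placeholder data; real inputs would come from the EHR pipeline.
X_dyn = np.random.rand(8, MAX_STEPS, N_DYNAMIC).astype("float32")
X_stat = np.random.rand(8, N_STATIC).astype("float32")
y = np.random.randint(0, 2, size=(8, 1))
model.fit([X_dyn, X_stat], y, epochs=1, verbose=0)
```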
Related work
From a technical perspective, work on MACE prediction can be categorized into cohort-based studies and data-driven studies.

As a traditional approach in medical research, cohort-based studies have been widely adopted to investigate specific clinical hypotheses, e.g., the relationship between potential risk factors and MACE [23, 24]. In general, a hypothesis is first proposed by clinical researchers, and a group of subjects is then recruited into the cohort and observed over a period of time to collect data relevant to the hypothesis. Prediction models can subsequently be developed from the collected cohort data via univariate or multivariate logistic regression, Cox proportional hazards regression, and similar methods. The best-known cohort-based models for MACE prediction include the Global Registry of Acute Coronary Events (GRACE) [2], the Thrombolysis in Myocardial Infarction (TIMI) [3], and the Platelet Glycoprotein IIb/IIIa in Unstable Angina: Receptor Suppression Using Integrilin Therapy (PURSUIT) [5] models.

Although useful, cohort-based studies have a serious flaw: they usually select a small set of patient variables to simplify the model and facilitate its use in clinical practice [25]. However, including fewer risk factors in model learning may degrade the model's predictive performance. Conversely, additional potential risk factors (e.g., cystatin C and homocysteine for MACE prediction) have recently been identified in the literature [26] but are not included in existing cohort-based models, which ultimately limits the value of these models.

Recently, with the wide adoption of EHR systems in healthcare facilities, thousands of data-driven models have been developed to exploit the potential of EHR data in various clinical applications, e.g., screening, diagnosis, treatment, prognosis and monitoring [27]. Compared with traditional cohort-based studies, EHR data-driven models can address the limitations described above [25].

Early work on data-driven prediction was based on conventional machine learning and data mining methods. For example, Hu et al. proposed a hybrid model combining random forest and support vector machine to predict the risk of MACE [11]. Bandyopadhyay et al. proposed a Bayesian network to predict cardiovascular risk [28]. In [29], a vector spline multinomial logistic regression model was presented to predict risks for patients with ovarian tumors. These works show the usefulness of medical data for clinical risk prediction. More recently, deep learning models, e.g., the Stacked Denoising Auto-encoder (SDAE) and the Convolutional Neural Network (CNN), have been adopted for prediction and detection tasks in the medical domain and have achieved promising performance. For example, Raghavendra et al. proposed a CNN-based model to diagnose glaucoma from digital fundus images [15], and subsequently applied CNNs to detect myocardial infarction and ventricular arrhythmias in ECG signals [16, 17]. Huang et al. proposed a regularized SDAE to predict the risk of ACS patients [30]. Li et al. developed a deep belief network-based model to predict risk factors of bone disease progression [31].

Although successful, these models have not explored the full potential of EHR data. To the best of our knowledge, most existing data-driven models are trained on static patient features and lack the ability to model time-dependent covariates in the observation window; as a result, an individual's disease progression, as mediated by dynamic treatment information, cannot be reliably captured, which limits the performance of predictive models.
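As a point of reference for the static-feature models discussed above, a minimal sketch of such a baseline (logistic regression on per-patient features, scored by AUC) is shown below; the data are random placeholders, and only the sample and feature counts are taken from the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_static = 2930, 232             # sizes quoted in the abstract
X = rng.normal(size=(n_patients, n_static))  # placeholder static features
y = rng.integers(0, 2, size=n_patients)      # 1 = MACE during hospitalization

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("baseline AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```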
[ "25249585", "19619694", "22077192", "25520374", "23803294", "16621883", "24222018", "10840005", "28065840", "9377276", "17715409", "15602020", "27206819", "20647183", "25579635", "28742027", "24979059", "26772608", "27521897" ]
[ { "pmid": "19619694", "title": "The expanded Global Registry of Acute Coronary Events: baseline characteristics, management practices, and hospital outcomes of patients with acute coronary syndromes.", "abstract": "BACKGROUND\nThe Global Registry of Acute Coronary Events (GRACE)-a prospective, multinational study of patients hospitalized with acute coronary syndromes (ACSs)-was designed to improve the quality of care for patients with an ACS. Expanded GRACE aims to test the feasibility of a simplified data collection tool and provision of quarterly feedback to index individual hospital management practices to an international reference cohort.\n\n\nMETHODS\nWe describe the objectives; study design; study and data management; and the characteristics, management, and hospital outcomes of patients > or =18 years old enrolled with a presumptive diagnosis of ACS.\n\n\nRESULTS\nFrom 2001 to 2007, 31,982 patients were enrolled at 184 hospitals in 25 countries; 30% were diagnosed with ST-segment elevation myocardial infarction, 31% with non-ST-segment myocardial infarction, 26% with unstable angina, and 12% with another cardiac/noncardiac final diagnosis. The median age was 65 (interquartile range 55-75) years; 24% were >75 years old, and 33% were women. In general, increases were observed over time across the spectrum of ACS (1) in the use in the first 24 hours and at discharge of aspirin, clopidogrel, beta-blockers, and angiotensin-converting enzyme inhibitors/receptor blockers; (2) in the use at discharge of statins; (3) in the early use of glycoprotein IIb/IIIa inhibitors and low-molecular-weight heparin; and (4) in the use of cardiac catheterization and percutaneous coronary intervention. An increase in the use of primary percutaneous coronary intervention and a similar decrease in the use of fibrinolysis in ST-segment elevation myocardial infarction were also seen.\n\n\nCONCLUSIONS\nOver the course of 7 years, general increases in the use of evidence-based therapies for ACS patients were observed in the expanded GRACE." }, { "pmid": "22077192", "title": "Rivaroxaban in patients with a recent acute coronary syndrome.", "abstract": "BACKGROUND\nAcute coronary syndromes arise from coronary atherosclerosis with superimposed thrombosis. Since factor Xa plays a central role in thrombosis, the inhibition of factor Xa with low-dose rivaroxaban might improve cardiovascular outcomes in patients with a recent acute coronary syndrome.\n\n\nMETHODS\nIn this double-blind, placebo-controlled trial, we randomly assigned 15,526 patients with a recent acute coronary syndrome to receive twice-daily doses of either 2.5 mg or 5 mg of rivaroxaban or placebo for a mean of 13 months and up to 31 months. The primary efficacy end point was a composite of death from cardiovascular causes, myocardial infarction, or stroke.\n\n\nRESULTS\nRivaroxaban significantly reduced the primary efficacy end point, as compared with placebo, with respective rates of 8.9% and 10.7% (hazard ratio in the rivaroxaban group, 0.84; 95% confidence interval [CI], 0.74 to 0.96; P=0.008), with significant improvement for both the twice-daily 2.5-mg dose (9.1% vs. 10.7%, P=0.02) and the twice-daily 5-mg dose (8.8% vs. 10.7%, P=0.03). The twice-daily 2.5-mg dose of rivaroxaban reduced the rates of death from cardiovascular causes (2.7% vs. 4.1%, P=0.002) and from any cause (2.9% vs. 4.5%, P=0.002), a survival benefit that was not seen with the twice-daily 5-mg dose. 
As compared with placebo, rivaroxaban increased the rates of major bleeding not related to coronary-artery bypass grafting (2.1% vs. 0.6%, P<0.001) and intracranial hemorrhage (0.6% vs. 0.2%, P=0.009), without a significant increase in fatal bleeding (0.3% vs. 0.2%, P=0.66) or other adverse events. The twice-daily 2.5-mg dose resulted in fewer fatal bleeding events than the twice-daily 5-mg dose (0.1% vs. 0.4%, P=0.04).\n\n\nCONCLUSIONS\nIn patients with a recent acute coronary syndrome, rivaroxaban reduced the risk of the composite end point of death from cardiovascular causes, myocardial infarction, or stroke. Rivaroxaban increased the risk of major bleeding and intracranial hemorrhage but not the risk of fatal bleeding. (Funded by Johnson & Johnson and Bayer Healthcare; ATLAS ACS 2-TIMI 51 ClinicalTrials.gov number, NCT00809965.)." }, { "pmid": "23803294", "title": "Cardiovascular disease epidemiology in Asia: an overview.", "abstract": "Cardiovascular disease (CVD) is the leading cause of death in the world and half of the cases of CVD are estimated to occur in Asia. Compared with Western countries, most Asian countries, except for Japan, South Korea, Singapore and Thailand, have higher age-adjusted mortality from CVD. In Japan, the mortality from CVD, especially stroke, has declined continuously from the 1960s to the 2000s, which has contributed to making Japan into the top-ranking country for longevity in the world. Hypertension and smoking are the most notable risk factors for stroke and coronary artery disease, whereas dyslipidemia and diabetes mellitus are risk factors for ischemic heart disease and ischemic stroke. The nationwide approach to hypertension prevention and control has contributed to a substantial decline in stroke mortality in Japan. Recent antismoking campaigns have contributed to a decline in the smoking rate among men. Conversely, the prevalence of dyslipidemia and diabetes mellitus increased from the 1980s to the 2000s and, therefore, the population-attributable risks of CVD for dyslipidemia and diabetes mellitus have increased moderately. To prevent future CVD in Asia, the intensive prevention programs for hypertension and smoking should be continued and that for emerging metabolic risk factors should be intensified in Japan. The successful intervention programs in Japan can be applied to other Asian countries." }, { "pmid": "16621883", "title": "Accuracy and impact of risk assessment in the primary prevention of cardiovascular disease: a systematic review.", "abstract": "OBJECTIVE\nTo determine the accuracy of assessing cardiovascular disease (CVD) risk in the primary prevention of CVD and its impact on clinical outcomes.\n\n\nDESIGN\nSystematic review.\n\n\nDATA SOURCES\nPublished studies retrieved from Medline and other databases. Reference lists of identified articles were inspected for further relevant articles.\n\n\nSELECTION OF STUDIES\nAny study that compared the predicted risk of coronary heart disease (CHD) or CVD, with observed 10-year risk based on the widely recommended Framingham methods (review A). 
Randomised controlled trials examining the effect on clinical outcomes of a healthcare professional assigning a cardiovascular risk score to people predominantly without CVD (review B).\n\n\nREVIEW METHODS\nData were extracted on the ratio of the predicted to the observed 10-year risk of CVD and CHD (review A), and on cardiovascular or coronary fatal or non-fatal events, risk factor levels, absolute cardiovascular or coronary risk, prescription of risk-reducing drugs and changes in health-related behaviour (review B).\n\n\nRESULTS\n27 studies with data from 71,727 participants on predicted and observed risk for either CHD or CVD were identified. For CHD, the predicted to observed ratios ranged from an underprediction of 0.43 (95% CI 0.27 to 0.67) in a high-risk population to an overprediction of 2.87 (95% CI 1.91 to 4.31) in a lower-risk population. In review B, four randomised controlled trials confined to people with hypertension or diabetes found no strong evidence that a cardiovascular risk assessment performed by a clinician improves health outcomes.\n\n\nCONCLUSION\nThe performance of the Framingham risk scores varies considerably between populations and evidence supporting the use of cardiovascular risk scores for primary prevention is scarce." }, { "pmid": "10840005", "title": "Predictors of outcome in patients with acute coronary syndromes without persistent ST-segment elevation. Results from an international trial of 9461 patients. The PURSUIT Investigators.", "abstract": "BACKGROUND\nAppropriate treatment policies should include an accurate estimate of a patient's baseline risk. Risk modeling to date has been underutilized in patients with acute coronary syndromes without persistent ST-segment elevation.\n\n\nMETHODS AND RESULTS\nWe analyzed the relation between baseline characteristics and the 30-day incidence of death and the composite of death or myocardial (re)infarction in 9461 patients with acute coronary syndromes without persistent ST-segment elevation enrolled in the PURSUIT trial [Platelet glycoprotein IIb/IIIa in Unstable angina: Receptor Suppression Using Integrilin (eptifibatide) Therapy]. Variables examined included demographics, history, hemodynamic condition, and symptom duration. Risk models were created with multivariable logistic regression and validated by bootstrapping techniques. There was a 3.6% mortality rate and 11.4% infarction rate by 30 days. More than 20 significant predictors for mortality and for the composite end point were identified. The most important baseline determinants of death were age (adjusted chi(2)=95), heart rate (chi(2)=32), systolic blood pressure (chi(2)=20), ST-segment depression (chi(2)=20), signs of heart failure (chi(2)=18), and cardiac enzymes (chi(2)=15). Determinants of mortality were generally also predictive of death or myocardial (re)infarction. Differences were observed, however, in the relative prognostic importance of predictive variables for mortality alone or the composite end point; for example, sex was a more important determinant of the composite end point (chi(2)=21) than of death alone (chi(2)=10). The accuracy of the prediction of the composite end point was less than that of mortality (C-index 0.67 versus 0.81).\n\n\nCONCLUSIONS\nThe occurrence of adverse events after presentation with acute coronary syndromes is affected by multiple factors. These factors should be considered in the clinical decision-making process." 
}, { "pmid": "28065840", "title": "MACE prediction of acute coronary syndrome via boosted resampling classification using electronic medical records.", "abstract": "OBJECTIVES\nMajor adverse cardiac events (MACE) of acute coronary syndrome (ACS) often occur suddenly resulting in high mortality and morbidity. Recently, the rapid development of electronic medical records (EMR) provides the opportunity to utilize the potential of EMR to improve the performance of MACE prediction. In this study, we present a novel data-mining based approach specialized for MACE prediction from a large volume of EMR data.\n\n\nMETHODS\nThe proposed approach presents a new classification algorithm by applying both over-sampling and under-sampling on minority-class and majority-class samples, respectively, and integrating the resampling strategy into a boosting framework so that it can effectively handle imbalance of MACE of ACS patients analogous to domain practice. The method learns a new and stronger MACE prediction model each iteration from a more difficult subset of EMR data with wrongly predicted MACEs of ACS patients by a previous weak model.\n\n\nRESULTS\nWe verify the effectiveness of the proposed approach on a clinical dataset containing 2930 ACS patient samples with 268 feature types. While the imbalanced ratio does not seem extreme (25.7%), MACE prediction targets pose great challenge to traditional methods. As these methods degenerate dramatically with increasing imbalanced ratios, the performance of our approach for predicting MACE remains robust and reaches 0.672 in terms of AUC. On average, the proposed approach improves the performance of MACE prediction by 4.8%, 4.5%, 8.6% and 4.8% over the standard SVM, Adaboost, SMOTE, and the conventional GRACE risk scoring system for MACE prediction, respectively.\n\n\nCONCLUSIONS\nWe consider that the proposed iterative boosting approach has demonstrated great potential to meet the challenge of MACE prediction for ACS patients using a large volume of EMR." }, { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." 
}, { "pmid": "17715409", "title": "Long-term mortality after gastric bypass surgery.", "abstract": "BACKGROUND\nAlthough gastric bypass surgery accounts for 80% of bariatric surgery in the United States, only limited long-term data are available on mortality among patients who have undergone this procedure as compared with severely obese persons from a general population.\n\n\nMETHODS\nIn this retrospective cohort study, we determined the long-term mortality (from 1984 to 2002) among 9949 patients who had undergone gastric bypass surgery and 9628 severely obese persons who applied for driver's licenses. From these subjects, 7925 surgical patients and 7925 severely obese control subjects were matched for age, sex, and body-mass index. We determined the rates of death from any cause and from specific causes with the use of the National Death Index.\n\n\nRESULTS\nDuring a mean follow-up of 7.1 years, adjusted long-term mortality from any cause in the surgery group decreased by 40%, as compared with that in the control group (37.6 vs. 57.1 deaths per 10,000 person-years, P<0.001); cause-specific mortality in the surgery group decreased by 56% for coronary artery disease (2.6 vs. 5.9 per 10,000 person-years, P=0.006), by 92% for diabetes (0.4 vs. 3.4 per 10,000 person-years, P=0.005), and by 60% for cancer (5.5 vs. 13.3 per 10,000 person-years, P<0.001). However, rates of death not caused by disease, such as accidents and suicide, were 58% higher in the surgery group than in the control group (11.1 vs. 6.4 per 10,000 person-years, P=0.04).\n\n\nCONCLUSIONS\nLong-term total mortality after gastric bypass surgery was significantly reduced, particularly deaths from diabetes, heart disease, and cancer. However, the rate of death from causes other than disease was higher in the surgery group than in the control group." }, { "pmid": "15602020", "title": "Inflammatory markers and the risk of coronary heart disease in men and women.", "abstract": "BACKGROUND\nFew studies have simultaneously investigated the role of soluble tumor necrosis factor alpha (TNF-alpha) receptors types 1 and 2 (sTNF-R1 and sTNF-R2), C-reactive protein, and interleukin-6 as predictors of cardiovascular events. The value of these inflammatory markers as independent predictors remains controversial.\n\n\nMETHODS\nWe examined plasma levels of sTNF-R1, sTNF-R2, interleukin-6, and C-reactive protein as markers of risk for coronary heart disease among women participating in the Nurses' Health Study and men participating in the Health Professionals Follow-up Study in nested case-control analyses. Among participants who provided a blood sample and who were free of cardiovascular disease at baseline, 239 women and 265 men had a nonfatal myocardial infarction or fatal coronary heart disease during eight years and six years of follow-up, respectively. Using risk-set sampling, we selected controls in a 2:1 ratio with matching for age, smoking status, and date of blood sampling.\n\n\nRESULTS\nAfter adjustment for matching factors, high levels of interleukin-6 and C-reactive protein were significantly related to an increased risk of coronary heart disease in both sexes, whereas high levels of soluble TNF-alpha receptors were significant only among women. Further adjustment for lipid and nonlipid factors attenuated all associations; only C-reactive protein levels remained significant. 
The relative risk among all participants was 1.79 for those with C-reactive protein levels of at least 3.0 mg per liter, as compared with those with levels of less than 1.0 mg per liter (95 percent confidence interval, 1.27 to 2.51; P for trend <0.001). Additional adjustment for the presence or absence of diabetes and hypertension moderately attenuated the relative risk to 1.68 (95 percent confidence interval, 1.18 to 2.38; P for trend = 0.008).\n\n\nCONCLUSIONS\nElevated levels of inflammatory markers, particularly C-reactive protein, indicate an increased risk of coronary heart disease. Although plasma lipid levels were more strongly associated with an increased risk than were inflammatory markers, the level of C-reactive protein remained a significant contributor to the prediction of coronary heart disease." }, { "pmid": "25579635", "title": "A spline-based tool to assess and visualize the calibration of multiclass risk predictions.", "abstract": "When validating risk models (or probabilistic classifiers), calibration is often overlooked. Calibration refers to the reliability of the predicted risks, i.e. whether the predicted risks correspond to observed probabilities. In medical applications this is important because treatment decisions often rely on the estimated risk of disease. The aim of this paper is to present generic tools to assess the calibration of multiclass risk models. We describe a calibration framework based on a vector spline multinomial logistic regression model. This framework can be used to generate calibration plots and calculate the estimated calibration index (ECI) to quantify lack of calibration. We illustrate these tools in relation to risk models used to characterize ovarian tumors. The outcome of the study is the surgical stage of the tumor when relevant and the final histological outcome, which is divided into five classes: benign, borderline malignant, stage I, stage II-IV, and secondary metastatic cancer. The 5909 patients included in the study are randomly split into equally large training and test sets. We developed and tested models using the following algorithms: logistic regression, support vector machines, k nearest neighbors, random forest, naive Bayes and nearest shrunken centroids. Multiclass calibration plots are interesting as an approach to visualizing the reliability of predicted risks. The ECI is a convenient tool for comparing models, but is less informative and interpretable than calibration plots. In our case study, logistic regression and random forest showed the highest degree of calibration, and the naive Bayes the lowest." }, { "pmid": "28742027", "title": "A Regularized Deep Learning Approach for Clinical Risk Prediction of Acute Coronary Syndrome Using Electronic Health Records.", "abstract": "OBJECTIVE\nAcute coronary syndrome (ACS), as a common and severe cardiovascular disease, is a leading cause of death and the principal cause of serious long-term disability globally. Clinical risk prediction of ACS is important for early intervention and treatment. Existing ACS risk scoring models are based mainly on a small set of hand-picked risk factors and often dichotomize predictive variables to simplify the score calculation.\n\n\nMETHODS\nThis study develops a regularized stacked denoising autoencoder (SDAE) model to stratify clinical risks of ACS patients from a large volume of electronic health records (EHR). 
To capture characteristics of patients at similar risk levels, and preserve the discriminating information across different risk levels, two constraints are added on SDAE to make the reconstructed feature representations contain more risk information of patients, which contribute to a better clinical risk prediction result.\n\n\nRESULTS\nWe validate our approach on a real clinical dataset consisting of 3464 ACS patient samples. The performance of our approach for predicting ACS risk remains robust and reaches 0.868 and 0.73 in terms of both AUC and accuracy, respectively.\n\n\nCONCLUSIONS\nThe obtained results show that the proposed approach achieves a competitive performance compared to state-of-the-art models in dealing with the clinical risk prediction problem. In addition, our approach can extract informative risk factors of ACS via a reconstructive learning strategy. Some of these extracted risk factors are not only consistent with existing medical domain knowledge, but also contain suggestive hypotheses that could be validated by further investigations in the medical domain." }, { "pmid": "24979059", "title": "Identifying informative risk factors and predicting bone disease progression via deep belief networks.", "abstract": "Osteoporosis is a common disease which frequently causes death, permanent disability, and loss of quality of life in the geriatric population. Identifying risk factors for the disease progression and capturing the disease characteristics have received increasing attentions in the health informatics research. In data mining area, risk factors are features of the data and diagnostic results can be regarded as the labels to train a model for a regression or classification task. We develop a general framework based on the heterogeneous electronic health records (EHRs) for the risk factor (RF) analysis that can be used for informative RF selection and the prediction of osteoporosis. The RF selection is a task designed for ranking and explaining the semantics of informative RFs for preventing the disease and improving the understanding of the disease. Predicting the risk of osteoporosis in a prospective and population-based study is a task for monitoring the bone disease progression. We apply a variety of well-trained deep belief network (DBN) models which inherit the following good properties: (1) pinpointing the underlying causes of the disease in order to assess the risk of a patient in developing a target disease, and (2) discriminating between patients suffering from the disease and without the disease for the purpose of selecting RFs of the disease. A variety of DBN models can capture characteristics for different patient groups via a training procedure with the use of different samples. The case study shows that the proposed method can be efficiently used to select the informative RFs. Most of the selected RFs are validated by the medical literature and some new RFs will attract interests across the medical research. Moreover, the experimental analysis on a real bone disease data set shows that the proposed framework can successfully predict the progression of osteoporosis. The stable and promising performance on the evaluation metrics confirms the effectiveness of our model." }, { "pmid": "26772608", "title": "A calibration hierarchy for risk models was defined: from utopia to empirical data.", "abstract": "OBJECTIVE\nCalibrated risk models are vital for valid decision support. 
We define four levels of calibration and describe implications for model development and external validation of predictions.\n\n\nSTUDY DESIGN AND SETTING\nWe present results based on simulated data sets.\n\n\nRESULTS\nA common definition of calibration is \"having an event rate of R% among patients with a predicted risk of R%,\" which we refer to as \"moderate calibration.\" Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. \"Strong calibration\" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects should be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic.\n\n\nCONCLUSION\nStrong calibration is desirable for individualized decision support but unrealistic and counter productive by stimulating the development of overly complex models. Model development and external validation should focus on moderate calibration." }, { "pmid": "27521897", "title": "Using recurrent neural network models for early detection of heart failure onset.", "abstract": "Objective\nWe explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality.\n\n\nMaterials and Methods\nData were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (eg, disease diagnosis, medication orders, procedure orders, etc.) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches.\n\n\nResults\nUsing a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP).\n\n\nConclusion\nDeep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months." } ]
Scientific Reports
30631101
PMC6328572
10.1038/s41598-018-37168-4
Automatic Coronary Wall and Atherosclerotic Plaque Segmentation from 3D Coronary CT Angiography
Coronary plaque burden measured by coronary computerized tomography angiography (CCTA), independent of stenosis, is a significant independent predictor of coronary heart disease (CHD) events and mortality. Hence, it is essential to develop comprehensive CCTA plaque quantification beyond existing subjective plaque volume or stenosis scoring methods. The purpose of this study is to develop a framework for automated 3D segmentation of the CCTA vessel wall and quantification of atherosclerotic plaque, independent of the amount of stenosis, while overcoming challenges caused by poor contrast, motion artifacts, severe stenosis, and degradation of image quality. Vesselness filtering, region growing, and two sequential level sets are employed to segment the inner and outer wall and to prevent artifact-induced segmentation defects. Lumen and vessel boundaries are joined to create the coronary wall. Curved multiplanar reformation is used to straighten the segmented lumen and wall along the lumen centerline. In-vivo evaluation included stenotic and non-stenotic plaques from CCTA of 41 asymptomatic subjects, 122 plaques of different characteristics in total, compared against individual expert readers and their consensus. The results demonstrate that the framework performs robust segmentation and provides a reliable working platform for accelerated, objective, and reproducible atherosclerotic plaque characterization beyond subjective assessment of stenosis, and is potentially applicable for monitoring response to therapy.
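A rough, self-contained sketch of the first two stages mentioned above (multi-scale vesselness filtering followed by seeded region growing) is given below using scikit-image on a synthetic tube; the level-set refinement, wall construction, and curved multiplanar reformation are omitted, and the volume, seed point, and tolerance are illustrative assumptions rather than values from the study.

```python
import numpy as np
from skimage.filters import frangi
from skimage.segmentation import flood

# Synthetic CCTA-like volume: a bright tubular "lumen" along z on a darker background.
vol = np.zeros((64, 64, 64), dtype=np.float32)
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
vol[(yy - 32) ** 2 + (xx - 32) ** 2 < 9] = 1.0
vol += 0.05 * np.random.default_rng(0).normal(size=vol.shape).astype(np.float32)

# Multi-scale Hessian vesselness; black_ridges=False enhances bright tubular structures.
vesselness = frangi(vol, sigmas=(1, 2, 3), black_ridges=False)

# Seeded region growing inside the enhanced lumen; the tolerance is chosen empirically.
seed = (32, 32, 32)
lumen_mask = flood(vesselness, seed, tolerance=0.5 * vesselness[seed])
print("lumen voxels:", int(lumen_mask.sum()))
```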
Background and Related Work
Many paradigms have been introduced for coronary artery lumen segmentation and stenosis detection from CCTA images. Optical flow techniques use visual cues (such as local changes in the lumen inner diameter) to guide segmentation, whereas machine learning approaches attempt to learn features that may not be recognizable or perceivable to human experts. Examples of optical flow approaches include the use of a Corkscrew tracking-based vessel extraction technique10,11 for lumen segmentation and centerline calculation. Marquering et al.12 employed a fast marching level set to estimate the initial lumen contour and a model-guided minimum cost approach13 to find the final contour. Wang et al.14 started with an initial centerline and iteratively applied level set and distance transformations to estimate the final centerline and vessel border. Schaap et al.15 employed the intensities along a given centerline to guide a graph cut algorithm for lumen segmentation. These techniques were limited to specific plaque types, e.g., calcified plaques, and were demonstrated in only a limited number of cases. Their performance was generally sensitive to the accuracy of the initial centerline, the length of the stenosis, and the shape of the artery cross-section.

Machine learning-based techniques have been used either to detect stenosis directly or to segment the lumen first and then calculate the stenosis. Zuluaga et al.16,17 assumed that a lesion is a local outlier compared with normal artery regions and employed intensity-based features with Support Vector Machines (SVM) to detect such outliers. The proposed metric was calculated in planes orthogonal to a given centerline, so the final accuracy is also influenced by the initial centerline. Further details on related coronary artery segmentation and stenosis detection techniques can be found in Kirisli et al.18.

Other approaches detect stenosis by first detecting plaques. Kitamura et al.19 proposed a multi-label graph cut technique based on higher-order potentials and Hessian analysis to detect stenosis exceeding 20%. Kang et al.20 proposed a two-stage technique: in the first stage, two independent stenosis detectors were applied, an SVM and a formula-based analytic method; in the second stage, an SVM-based decision fusion algorithm used the output of the two detectors to provide more accurate detection of lesions with stenosis greater than 25%. Sivalingam et al.21 proposed a hybrid technique to segment the vessel wall using active contour models and random forest regression, with the segmentation evaluated on five arteries containing calcified plaques, mixed plaques, or both.

Beyond their focus on lumen stenosis, the majority of current techniques only detect stenosis that exceeds 20%, which provides little or no information about performance on small to moderate soft plaques, an important contributor to CHD and predictor of future events, as noted earlier.

In this work, we propose the first framework for 3D coronary CTA wall and plaque segmentation regardless of the degree of stenosis, with particular interest in soft plaques of all sizes that cause mild or insignificant lumen stenosis. We compare its performance in a cohort of asymptomatic CHD subjects against individual and consensus delineations of the inner and outer lumen wall by three expert readers across different plaque size categories.
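The stenosis thresholds quoted above (20% and 25%) refer to a simple quantity once lumen cross-sections are available; a small illustrative computation, with made-up cross-sectional areas and reference value, is sketched below.

```python
import numpy as np

def percent_area_stenosis(lumen_areas_mm2, reference_area_mm2):
    """Percent area stenosis per cross-section relative to a healthy reference area."""
    areas = np.asarray(lumen_areas_mm2, dtype=float)
    return (1.0 - areas / reference_area_mm2) * 100.0

# Hypothetical lumen areas (mm^2) sampled along a straightened lesion segment.
areas = [9.8, 8.5, 6.9, 5.1, 6.8, 9.6]
stenosis = percent_area_stenosis(areas, reference_area_mm2=10.0)
print(f"max stenosis: {stenosis.max():.1f}%")                 # 49.0%
print("exceeds 20% threshold:", bool((stenosis > 20).any()))  # True
```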
[ "25530442", "7634481", "8641024", "9426030", "3574413", "20395349", "22179539", "17868808", "21835321", "16741661", "15915942", "18215820", "20549375", "23837963", "26158081", "10920061", "23966421", "22551651", "19933621", "2407762", "19272853", "19098205", "25169177", "19632885", "19818675", "10628954" ]
[ { "pmid": "25530442", "title": "Global, regional, and national age-sex specific all-cause and cause-specific mortality for 240 causes of death, 1990-2013: a systematic analysis for the Global Burden of Disease Study 2013.", "abstract": "BACKGROUND\nUp-to-date evidence on levels and trends for age-sex-specific all-cause and cause-specific mortality is essential for the formation of global, regional, and national health policies. In the Global Burden of Disease Study 2013 (GBD 2013) we estimated yearly deaths for 188 countries between 1990, and 2013. We used the results to assess whether there is epidemiological convergence across countries.\n\n\nMETHODS\nWe estimated age-sex-specific all-cause mortality using the GBD 2010 methods with some refinements to improve accuracy applied to an updated database of vital registration, survey, and census data. We generally estimated cause of death as in the GBD 2010. Key improvements included the addition of more recent vital registration data for 72 countries, an updated verbal autopsy literature review, two new and detailed data systems for China, and more detail for Mexico, UK, Turkey, and Russia. We improved statistical models for garbage code redistribution. We used six different modelling strategies across the 240 causes; cause of death ensemble modelling (CODEm) was the dominant strategy for causes with sufficient information. Trends for Alzheimer's disease and other dementias were informed by meta-regression of prevalence studies. For pathogen-specific causes of diarrhoea and lower respiratory infections we used a counterfactual approach. We computed two measures of convergence (inequality) across countries: the average relative difference across all pairs of countries (Gini coefficient) and the average absolute difference across countries. To summarise broad findings, we used multiple decrement life-tables to decompose probabilities of death from birth to exact age 15 years, from exact age 15 years to exact age 50 years, and from exact age 50 years to exact age 75 years, and life expectancy at birth into major causes. For all quantities reported, we computed 95% uncertainty intervals (UIs). We constrained cause-specific fractions within each age-sex-country-year group to sum to all-cause mortality based on draws from the uncertainty distributions.\n\n\nFINDINGS\nGlobal life expectancy for both sexes increased from 65.3 years (UI 65.0-65.6) in 1990, to 71.5 years (UI 71.0-71.9) in 2013, while the number of deaths increased from 47.5 million (UI 46.8-48.2) to 54.9 million (UI 53.6-56.3) over the same interval. Global progress masked variation by age and sex: for children, average absolute differences between countries decreased but relative differences increased. For women aged 25-39 years and older than 75 years and for men aged 20-49 years and 65 years and older, both absolute and relative differences increased. Decomposition of global and regional life expectancy showed the prominent role of reductions in age-standardised death rates for cardiovascular diseases and cancers in high-income regions, and reductions in child deaths from diarrhoea, lower respiratory infections, and neonatal causes in low-income regions. HIV/AIDS reduced life expectancy in southern sub-Saharan Africa. For most communicable causes of death both numbers of deaths and age-standardised death rates fell whereas for most non-communicable causes, demographic shifts have increased numbers of deaths but decreased age-standardised death rates. 
Global deaths from injury increased by 10.7%, from 4.3 million deaths in 1990 to 4.8 million in 2013; but age-standardised rates declined over the same period by 21%. For some causes of more than 100,000 deaths per year in 2013, age-standardised death rates increased between 1990 and 2013, including HIV/AIDS, pancreatic cancer, atrial fibrillation and flutter, drug use disorders, diabetes, chronic kidney disease, and sickle-cell anaemias. Diarrhoeal diseases, lower respiratory infections, neonatal causes, and malaria are still in the top five causes of death in children younger than 5 years. The most important pathogens are rotavirus for diarrhoea and pneumococcus for lower respiratory infections. Country-specific probabilities of death over three phases of life were substantially varied between and within regions.\n\n\nINTERPRETATION\nFor most countries, the general pattern of reductions in age-sex specific mortality has been associated with a progressive shift towards a larger share of the remaining deaths caused by non-communicable disease and injuries. Assessing epidemiological convergence across countries depends on whether an absolute or relative measure of inequality is used. Nevertheless, age-standardised death rates for seven substantial causes are increasing, suggesting the potential for reversals in some countries. Important gaps exist in the empirical data for cause of death estimates for some countries; for example, no national data for India are available for the past decade.\n\n\nFUNDING\nBill & Melinda Gates Foundation." }, { "pmid": "8641024", "title": "Coronary plaque erosion without rupture into a lipid core. A frequent cause of coronary thrombosis in sudden coronary death.", "abstract": "BACKGROUND\nCoronary thrombosis has been reported to occur most frequently in lipid-rich plaques with rupture of a thin fibrous cap and contact of the thrombus with a pool of extracellular lipid. However, the frequency of coronary artery thrombosis with or without fibrous cap rupture in sudden coronary death is unknown. In this study, we compared the incidence and morphological characteristics of coronary thrombosis associated with plaque rupture versus thrombosis in eroded plaques without rupture.\n\n\nMETHODS AND RESULTS\nFifty consecutive cases of sudden death due to coronary artery thrombosis were studied by histology and immunohistochemistry. Plaque rupture of a fibrous cap with communication of the thrombus with a lipid pool was identified in 28 cases. Thrombi without rupture were present in 22 cases, all of which had superficial erosion of a proteoglycan-rich plaque. The mean age at death was 53 +/- 10 years in plaque rupture cases versus 44 +/- 7 years in eroded plaques without rupture (P < .02). In the plaque-rupture group, 5 of 28 (18%) were women versus 11 of 22 (50%) with eroded plaques (P = .03). The mean percent luminal area stenosis was 78 +/- 12% in plaque rupture and 70 +/- 11% in superficial erosion (P < .03). Plaque calcification was present in 69% of ruptures versus 23% of erosions (P < .002). In plaque ruptures, the fibrous cap was infiltrated by macrophages in 100% and T cells in 75% of cases compared with 50% (P < .0001) and 32% (P < .004), respectively, in superficial erosions. Clusters of smooth muscle cells adjacent to the thrombi were present in 95% of erosions versus 33% of ruptures (P < .0001). 
HLA-DR expression was more often seen in macrophages and T cells in ruptures (25 of 28 cases) compared with expression in macrophages in superficial erosion arteries (8 of 22 cases, P = .0002).\n\n\nCONCLUSIONS\nErosion of proteoglycan-rich and smooth muscle cell-rich plaques lacking a superficial lipid core or plaque rupture is a frequent finding in sudden death due to coronary thrombosis, comprising 44% of cases in the present study. These lesions are more often seen in younger individuals and women, have less luminal narrowing and less calcification, and less often have foci of macrophages and T cells compared with plaque ruptures." }, { "pmid": "9426030", "title": "Arterial calcification and not lumen stenosis is highly correlated with atherosclerotic plaque burden in humans: a histologic study of 723 coronary artery segments using nondecalcifying methodology.", "abstract": "OBJECTIVES\nThis study was designed to evaluate whether calcium deposition in the coronary arteries is related to atherosclerotic plaque burden and narrowing of the arterial lumen.\n\n\nBACKGROUND\nMany studies have recently documented the feasibility of electron beam computed tomography to detect and quantify coronary artery calcification in patients. Although these studies suggest a general relation between calcification and severity of coronary artery disease, the value of coronary calcium in defining atherosclerotic plaque and coronary lumen narrowing is unclear. Previous pathologic comparisons have failed to detail such a relation in identical histologic sections. This finding may be due to atherosclerotic remodeling.\n\n\nMETHODS\nA total of 37 nondecalcified coronary arteries were processed, sectioned at 3-mm intervals (723 sections) and evaluated by computer planimetry and densitometry.\n\n\nRESULTS\nA significant relation between calcium area and plaque area was found on a per-heart basis (n = 13, r = 0.87, p < 0.0001), per-artery basis (left anterior descending coronary artery [LAD]: n = 13, r = 0.89, p < 0.0001; left circumflex coronary artery [LCx]: n = 11, r = 0.7, p < 0.001; right coronary artery [RCA]: n = 13, r = 0.89, p < 0.0001) and per-segment basis (n = 723, r = 0.52, p < 0.0001). In contrast, a poor relation existed between residual histologic lumen area and calcium area for individual hearts (r = 0.48, p = NS), individual coronary arteries (LAD: r = 0.59, p = NS; LCx: r = 0.10, p = NS; RCA: r = 0.59, p = NS) and coronary segments (r = 0.07, p = NS). Longitudinal changes in external elastic lamina areas were highly correlated with changes in plaque area values (r = 0.60, p < 0.0001), whereas lumen area did not correlate with plaque size change (r = 0.01, p = NS).\n\n\nCONCLUSIONS\nCoronary calcium quantification is an excellent method of assessing atherosclerotic plaque presence at individual artery sites. Moreover, the amount of calcium correlates with the overall magnitude of atherosclerotic plaque burden. This study suggests that the remodeling phenomenon is the likely explanation for the lack of a good predictive value between lumen narrowing and quantification of mural calcification." }, { "pmid": "3574413", "title": "Compensatory enlargement of human atherosclerotic coronary arteries.", "abstract": "Whether human coronary arteries undergo compensatory enlargement in the presence of coronary disease has not been clarified. 
We studied histologic sections of the left main coronary artery in 136 hearts obtained at autopsy to determine whether atherosclerotic human coronary arteries enlarge in relation to plaque (lesion) area and to assess whether such enlargement preserves the cross-sectional area of the lumen. The area circumscribed by the internal elastic lamina (internal elastic lamina area) was taken as a measure of the area of the arterial lumen if no plaque had been present. The internal elastic lamina area correlated directly with the area of the lesion (r = 0.44, P less than 0.001), suggesting that coronary arteries enlarge as lesion area increases. Regression analysis yielded the following equation: Internal elastic lamina area = 9.26 + 0.88 (lesion area) + 0.026 (age) + 0.005 (heart weight). The correlation coefficient for the lesion area was significant (P less than 0.001), whereas the correlation coefficients for age and heart weight were not. The lumen area did not decrease in relation to the percentage of stenosis (lesion area/internal elastic lamina area X 100) for values between zero and 40 percent but did diminish markedly and in close relation to the percentage of stenosis for values above 40 percent (r = -0.73, P less than 0.001). We conclude that human coronary arteries enlarge in relation to plaque area and that functionally important lumen stenosis may be delayed until the lesion occupies 40 percent of the internal elastic lamina area. The preservation of a nearly normal lumen cross-sectional area despite the presence of a large plaque should be taken into account in evaluating atherosclerotic disease with use of coronary angiography." }, { "pmid": "20395349", "title": "The vascular biology of atherosclerosis and imaging targets.", "abstract": "The growing worldwide health challenge of atherosclerosis, together with advances in imaging technologies, have stimulated considerable interest in novel approaches to gauging this disease. The last several decades have witnessed a burgeoning in understanding of the molecular pathways involved in atherogenesis, lesion progression, and the mechanisms underlying the complications of human atherosclerotic plaques. The imaging of atherosclerosis is reaching beyond anatomy to encompass assessment of aspects of plaque biology related to the pathogenesis and complication of the disease. The harnessing of these biologic insights promises to provide a plethora of new targets for molecular imaging of atherosclerosis. The goals for the years to come must include translation of the experimental work to visualization of these appealing biologic targets in humans." }, { "pmid": "17868808", "title": "Prognostic value of multidetector coronary computed tomographic angiography for prediction of all-cause mortality.", "abstract": "OBJECTIVES\nThe purpose of this study was to examine the association of all-cause death with the coronary computed tomographic angiography (CCTA)-defined extent and severity of coronary artery disease (CAD).\n\n\nBACKGROUND\nThe prognostic value of identifying CAD by CCTA remains undefined.\n\n\nMETHODS\nWe examined a single-center consecutive cohort of 1,127 patients > or =45 years old with chest symptoms. Stenosis by CCTA was scored as minimal (<30%), mild (30% to 49%), moderate (50% to 69%), or severe (> or =70%) for each coronary artery. Plaque was assessed in 3 ways: 1) moderate or obstructive plaque; 2) CCTA score modified from Duke coronary artery score; and 3) simple clinical scores grading plaque extent and distribution. 
A 15.3 +/- 3.9-month follow-up of all-cause death was assessed using Cox proportional hazards models adjusted for pretest CAD likelihood and risk factors. Deaths were verified by the Social Security Death Index.\n\n\nRESULTS\nThe CCTA predictors of death included proximal left anterior descending artery stenosis and number of vessels with > or =50% and > or =70% stenosis (all p < 0.0001). A modified Duke CAD index, an angiographic score integrating proximal CAD, plaque extent, and left main (LM) disease, improved risk stratification (p < 0.0001). Patients with <50% stenosis had the highest survival at 99.7%. Survival worsened with higher-risk Duke scores, ranging from 96% survival for 1 stenosis > or =70% or 2 stenoses > or =50% (p = 0.013) to 85% survival for > or =50% LM artery stenosis (p < 0.0001). Clinical scores measuring plaque burden and distribution predicted 5% to 6% higher absolute death rate (6.6% vs. 1.6% and 8.4% vs. 2.5%; p = 0.05 for both).\n\n\nCONCLUSIONS\nIn patients with chest pain, CCTA identifies increased risk for all-cause death. Importantly, a negative CCTA portends an extremely low risk for death." }, { "pmid": "21835321", "title": "Age- and sex-related differences in all-cause mortality risk based on coronary computed tomography angiography findings results from the International Multicenter CONFIRM (Coronary CT Angiography Evaluation for Clinical Outcomes: An International Multicenter Registry) of 23,854 patients without known coronary artery disease.", "abstract": "OBJECTIVES\nWe examined mortality in relation to coronary artery disease (CAD) as assessed by ≥64-detector row coronary computed tomography angiography (CCTA).\n\n\nBACKGROUND\nAlthough CCTA has demonstrated high diagnostic performance for detection and exclusion of obstructive CAD, the prognostic findings of CAD by CCTA have not, to date, been examined for age- and sex-specific outcomes.\n\n\nMETHODS\nWe evaluated a consecutive cohort of 24,775 patients undergoing ≥64-detector row CCTA between 2005 and 2009 without known CAD who met inclusion criteria. In these patients, CAD by CCTA was defined as none (0% stenosis), mild (1% to 49% stenosis), moderate (50% to 69% stenosis), or severe (≥70% stenosis). CAD severity was judged on a per-patient, per-vessel, and per-segment basis. Time to mortality was estimated using multivariable Cox proportional hazards models.\n\n\nRESULTS\nAt a 2.3 ± 1.1-year follow-up, 404 deaths had occurred. In risk-adjusted analysis, both per-patient obstructive (hazard ratio [HR]: 2.60; 95% confidence interval [CI]: 1.94 to 3.49; p < 0.0001) and nonobstructive (HR: 1.60; 95% CI: 1.18 to 2.16; p = 0.002) CAD conferred increased risk of mortality compared with patients without evident CAD. Incident mortality was associated with a dose-response relationship to the number of coronary vessels exhibiting obstructive CAD, with increasing risk observed for nonobstructive (HR: 1.62; 95% CI: 1.20 to 2.19; p = 0.002), obstructive 1-vessel (HR: 2.00; 95% CI: 1.43 to 2.82; p < 0.0001), 2-vessel (HR: 2.92; 95% CI: 2.00 to 4.25; p < 0.0001), or 3-vessel or left main (HR: 3.70; 95% CI: 2.58 to 5.29; p < 0.0001) CAD. Importantly, the absence of CAD by CCTA was associated with a low rate of incident death (annualized death rate: 0.28%). When stratified by age <65 years versus ≥65 years, younger patients experienced higher hazards for death for 2-vessel (HR: 4.00; 95% CI: 2.16 to 7.40; p < 0.0001 vs. 
HR: 2.46; 95% CI: 1.51 to 4.02; p = 0.0003) and 3-vessel (HR: 6.19; 95% CI: 3.43 to 11.2; p < 0.0001 vs. HR: 3.10; 95% CI: 1.95 to 4.92; p < 0.0001) CAD. The relative hazard for 3-vessel CAD (HR: 4.21; 95% CI: 2.47 to 7.18; p < 0.0001 vs. HR: 3.27; 95% CI: 1.96 to 5.45; p < 0.0001) was higher for women as compared with men.\n\n\nCONCLUSIONS\nAmong individuals without known CAD, nonobstructive and obstructive CAD by CCTA are associated with higher rates of mortality, with risk profiles differing for age and sex. Importantly, absence of CAD is associated with a very favorable prognosis." }, { "pmid": "16741661", "title": "Localizing calcifications in cardiac CT data sets using a new vessel segmentation approach.", "abstract": "The new generation of multislice computed tomography (CT) scanners allows for the acquisition of high-resolution images of the heart. Based on that image data, the heart can be analyzed in a noninvasive way-improving the diagnosis of cardiovascular malfunctions on one hand, and the planning of an eventually necessary intervention on the other. One important parameter for the evaluation of the severity of a coronary artery disease is the number and localization of calcifications (hard plaques). This work presents a method for localizing these calcifications by employing a newly developed vessel segmentation approach. This extraction technique has been developed for, and tested with, contrast-enhanced CT data sets of the heart. The algorithm provides enough information to compute the vessel diameter along the extracted segment. An approach for automatically detecting calcified regions that combines diameter information and gray value analysis is presented. In addition, specially adapted methods for the visualization of these analysis results are described." }, { "pmid": "15915942", "title": "Towards quantitative analysis of coronary CTA.", "abstract": "The current high spatial and temporal resolution, multi-slice imaging capability, and ECG-gated reconstruction of multi-slice computed tomography (MSCT) allows the non-invasive 3D imaging of opacified coronary arteries. MSCT coronary angiography studies are currently carried out by the visual inspection of the degree of stenosis and it has been shown that the assessment with sensitivities and specificities of 90% and higher can be achieved. To increase the reproducibility of the analysis, we present a method that performs the quantitative analysis of coronary artery diseases with limited user interaction: only the positioning of one or two seed points is required. The method allows the segmentation of the entire left or right coronary tree by the positioning of a single seed point, and an extensive evaluation of a particular vessel segment by placing a proximal and distal seed point. The presented method consists of: (1) the segmentation of the coronary vessels, (2) the extraction of the vessel centerline, (3) the reformatting of the image volume, (4) a combination of longitudinal and transversal contour detection, and (5) the quantification of vessel morphological parameters. The method is illustrated in this paper by the segmentation of the left and right coronary trees and by the analysis of a coronary artery segment. The sensitivity of the positioning of the seed points is studied by varying the position of the proximal and distal seed points with a standard deviation of 6 and 8 mm (along the vessel's course) respectively. 
It is shown that only close to the individual seed points the vessel centerlines deviate and that for more than 80% of the centerlines the paths coincide. Since the quantification depends on the determination of the centerline, no user variability is expected as long as the seed points are positioned reasonably far away from the vessel lesion. The major bottleneck of MSCT imaging of the coronary arteries is the potential lack of image quality due to limitations in the spatial and temporal resolution, irregular or high heart beat, respiratory effects, and variations of the distribution of the contrast agent: the number of rejected vessel segments in diagnostic studies is currently still too high for implementation in routine clinical practice. Also for the automated quantitative analysis of the coronary arteries high image quality is required. However, based upon the trend in technological development of MSCT scanners, there is no doubt that the quantitative analysis of MSCT coronary angiography will benefit from these technological advances in the near future." }, { "pmid": "18215820", "title": "Robust simultaneous detection of coronary borders in complex images.", "abstract": "Visual estimation of coronary obstruction severity from angiograms suffers from poor inter- and intraobserver reproducibility and is often inaccurate. In spite of the widely recognized limitations of visual analysis, automated methods have not found widespread clinical use, in part because they too frequently fail to accurately identify vessel borders. The authors have developed a robust method for simultaneous detection of left and right coronary borders that is suitable for analysis of complex images with poor contrast, nearby or overlapping structures, or branching vessels. The reliability of the simultaneous border detection method and that of the authors' previously reported conventional border detection method were tested in 130 complex images, selected because conventional automated border detection might be expected to fail. Conventional analysis failed to yield acceptable borders in 65/130 or 50% of images. Simultaneous border detection was much more robust (p<.001) and failed in only 15/130 or 12% of complex images. Simultaneous border detection identified stenosis diameters that correlated significantly better with observer-derived stenosis diameters than did diameters obtained with conventional border detection (p<0.001), Simultaneous detection of left and right coronary borders is highly robust and has substantial promise for enhancing the utility of quantitative coronary angiography in the clinical setting." }, { "pmid": "20549375", "title": "Automatic detection of abnormal vascular cross-sections based on density level detection and support vector machines.", "abstract": "PURPOSE\nThe goal is to automatically detect anomalous vascular cross-sections to attract the radiologist's attention to possible lesions and thus reduce the time spent to analyze the image volume.\n\n\nMATERIALS AND METHODS\nWe assume that both lesions and calcifications can be considered as local outliers compared to a normal cross-section. Our approach uses an intensity metric within a machine learning scheme to differentiate normal and abnormal cross-sections. It is formulated as a Density Level Detection problem and solved using a Support Vector Machine (DLD-SVM). 
The method has been evaluated on 42 synthetic phantoms and on 9 coronary CT data sets annotated by 2 experts.\n\n\nRESULTS\nThe specificity of the method was 97.57% on synthetic data, and 86.01% on real data, while its sensitivity was 82.19 and 81.23%, respectively. The agreement with the observers, measured by the kappa coefficient, was substantial (κ = 0.72). After the learning stage, which is performed off-line, the average processing time was within 10 s per artery.\n\n\nCONCLUSIONS\nTo our knowledge, this is the first attempt to use the DLD-SVM approach to detect vascular abnormalities. Good specificity, sensitivity and agreement with experts, as well as a short processing time, show that our method can facilitate medical diagnosis and reduce evaluation time by attracting the reader's attention to suspect regions." }, { "pmid": "23837963", "title": "Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography.", "abstract": "Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged, and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms devised to detect and quantify the coronary artery stenoses, and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with expert's manual annotation. A database consisting of 48 multicenter multivendor cardiac CTA datasets with corresponding reference standards are described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second-reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/." }, { "pmid": "26158081", "title": "Structured learning algorithm for detection of nonobstructive and obstructive coronary plaque lesions from computed tomography angiography.", "abstract": "Visual identification of coronary arterial lesion from three-dimensional coronary computed tomography angiography (CTA) remains challenging. We aimed to develop a robust automated algorithm for computer detection of coronary artery lesions by machine learning techniques. A structured learning technique is proposed to detect all coronary arterial lesions with stenosis [Formula: see text]. Our algorithm consists of two stages: (1) two independent base decisions indicating the existence of lesions in each arterial segment and (b) the final decision made by combining the base decisions. 
One of the base decisions is the support vector machine (SVM) based learning algorithm, which divides each artery into small volume patches and integrates several quantitative geometric and shape features for arterial lesions in each small volume patch by SVM algorithm. The other base decision is the formula-based analytic method. The final decision in the first stage applies SVM-based decision fusion to combine the two base decisions in the second stage. The proposed algorithm was applied to 42 CTA patient datasets, acquired with dual-source CT, where 21 datasets had 45 lesions with stenosis [Formula: see text]. Visual identification of lesions with stenosis [Formula: see text] by three expert readers, using consensus reading, was considered as a reference standard. Our method performed with high sensitivity (93%), specificity (95%), and accuracy (94%), with receiver operator characteristic area under the curve of 0.94. The proposed algorithm shows promising results in the automated detection of obstructive and nonobstructive lesions from CTA." }, { "pmid": "10920061", "title": "Noninvasive in vivo human coronary artery lumen and wall imaging using black-blood magnetic resonance imaging.", "abstract": "BACKGROUND\nHigh-resolution MRI has the potential to noninvasively image the human coronary artery wall and define the degree and nature of coronary artery disease. Coronary artery imaging by MR has been limited by artifacts related to blood flow and motion and by low spatial resolution.\n\n\nMETHODS AND RESULTS\nWe used a noninvasive black-blood (BB) MRI (BB-MR) method, free of motion and blood-flow artifacts, for high-resolution (down to 0.46 mm in-plane resolution and 3-mm slice thickness) imaging of the coronary artery lumen and wall. In vivo BB-MR of both normal and atherosclerotic human coronary arteries was performed in 13 subjects: 8 normal subjects and 5 patients with coronary artery disease. The average coronary wall thickness for each cross-sectional image was 0.75+/-0.17 mm (range, 0.55 to 1.0 mm) in the normal subjects. MR images of coronary arteries in patients with >/=40% stenosis as assessed by x-ray angiography showed localized wall thickness of 4.38+/-0.71 mm (range, 3.30 to 5.73 mm). The difference in maximum wall thickness between the normal subjects and patients was statistically significant (P<0.0001).\n\n\nCONCLUSIONS\nIn vivo high-spatial-resolution BB-MR provides a unique new method to noninvasively image and assess the morphological features of human coronary arteries. This may allow the identification of atherosclerotic disease before it is symptomatic. Further studies are necessary to identify the different plaque components and to assess lesions in asymptomatic patients and their outcomes." }, { "pmid": "23966421", "title": "Does coronary CT angiography improve risk stratification over coronary calcium scoring in symptomatic patients with suspected coronary artery disease? Results from the prospective multicenter international CONFIRM registry.", "abstract": "AIMS\nThe prognostic value of coronary artery calcium (CAC) scoring is well established and has been suggested for use to exclude significant coronary artery disease (CAD) for symptomatic individuals with CAD. Contrast-enhanced coronary computed tomographic angiography (CCTA) is an alternative modality that enables direct visualization of coronary stenosis severity, extent, and distribution. 
Whether CCTA findings of CAD add an incremental prognostic value over CAC in symptomatic individuals has not been extensively studied.\n\n\nMETHODS AND RESULTS\nWe prospectively identified symptomatic patients with suspected but without known CAD who underwent both CAC and CCTA. Symptoms were defined by the presence of chest pain or dyspnoea, and pre-test likelihood of obstructive CAD was assessed by the method of Diamond and Forrester (D-F). CAC was measured by the method of Agatston. CCTAs were graded for obstructive CAD (>70% stenosis); and CAD plaque burden, distribution, and location. Plaque burden was determined by a segment stenosis score (SSS), which reflects the number of coronary segments with plaque, weighted for stenosis severity. Plaque distribution was established by a segment-involvement score (SIS), which reflects the number of segments with plaque irrespective of stenosis severity. Finally, a modified Duke prognostic index-accounting for stenosis severity, plaque distribution, and plaque location-was calculated. Nested Cox proportional hazard models for a composite endpoint of all-cause mortality and non-fatal myocardial infarction (D/MI) were employed to assess the incremental prognostic value of CCTA over CAC. A total of 8627 symptomatic patients (50% men, age 56 ± 12 years) followed for 25 months (interquartile range 17-40 months) comprised the study cohort. By CAC, 4860 (56%) and 713 (8.3%) patients had no evident calcium or a score of >400, respectively. By CCTA, 4294 (49.8%) and 749 (8.7%) had normal coronary arteries or obstructive CAD, respectively. At follow-up, 150 patients experienced D/MI. CAC improved discrimination beyond D-F and clinical variables (area under the receiver-operator characteristic curve 0.781 vs. 0.788, P = 0.004). When added sequentially to D-F, clinical variables, and CAC, all CCTA measures of CAD improved discrimination of patients at risk for D/MI: obstructive CAD (0.82, P < 0.001), SSS (0.81, P < 0.001), SIS (0.81, P = 0.003), and Duke CAD prognostic index (0.82, P < 0.0001).\n\n\nCONCLUSION\nIn symptomatic patients with suspected CAD, CCTA adds incremental discriminatory power over CAC for discrimination of individuals at risk of death or MI." }, { "pmid": "22551651", "title": "The feasibility of 350 μm spatial resolution coronary magnetic resonance angiography at 3 T in humans.", "abstract": "PURPOSE\nThe purposes of this study were to (1) develop a high-resolution 3-T magnetic resonance angiography (MRA) technique with an in-plane resolution approximate to that of multidetector coronary computed tomography (MDCT) and a voxel size of 0.35 × 0.35 × 1.5 mm³ and to (2) investigate the image quality of this technique in healthy participants and preliminarily in patients with known coronary artery disease (CAD).\n\n\nMATERIALS AND METHODS\nA 3-T coronary MRA technique optimized for an image acquisition voxel as small as 0.35 × 0.35 × 1.5 mm³ (high-resolution coronary MRA [HRC]) was implemented and the coronary arteries of 22 participants were imaged. These included 11 healthy participants (average age, 28.5 years; 5 men) and 11 participants with CAD (average age, 52.9 years; 5 women) as identified on MDCT. In addition, the 11 healthy participants were imaged using a method with a more common spatial resolution of 0.7 × 1 × 3 mm³ (regular-resolution coronary MRA [RRC]). 
Qualitative and quantitative comparisons were made between the 2 MRA techniques.\n\n\nRESULTS\nNormal vessels and CAD lesions were successfully depicted at 350 × 350 μm² in-plane resolution with adequate signal-to-noise ratio (SNR) and contrast-to-noise ratio. The CAD findings were consistent among MDCT and HRC. The HRC showed a 47% improvement in sharpness despite a reduction in SNR (by 72%) and in contrast-to-noise ratio (by 86%) compared with the regular-resolution coronary MRA.\n\n\nCONCLUSION\nThis study, as a first step toward substantial improvement in the resolution of coronary MRA, demonstrates the feasibility of obtaining at 3 T a spatial resolution that approximates that of MDCT. The acquisition in-plane pixel dimensions are as small as 350 × 350 μm² with a 1.5-mm slice thickness. Although SNR is lower, the images have improved sharpness, resulting in image quality that allows qualitative identification of disease sites on MRA consistent with MDCT." }, { "pmid": "19933621", "title": "Coronary abnormalities in hyper-IgE recurrent infection syndrome: depiction at coronary MDCT angiography.", "abstract": "OBJECTIVE\nHyper-IgE recurrent infection syndrome (HIES or Job's syndrome) is a rare disorder affecting the immune system and connective tissues. The purpose of this study is to describe the coronary abnormalities in genetically confirmed HIES patients as depicted by coronary MDCT angiography (MDCTA).\n\n\nCONCLUSION\nCoronary MDCTA has provided an opportunity for noninvasive evaluation of the coronary arteries in patients with HIES. These coronary abnormalities vary from tortuosity to ectatic dilation and focal aneurysms of the coronary arteries. Such an evaluation has potential value in identifying new aspects of this disease and thereby providing better understanding of the pathophysiology of the disorder." }, { "pmid": "2407762", "title": "Quantification of coronary artery calcium using ultrafast computed tomography.", "abstract": "Ultrafast computed tomography was used to detect and quantify coronary artery calcium levels in 584 subjects (mean age 48 +/- 10 years) with (n = 109) and without (n = 475) clinical coronary artery disease. Fifty patients who underwent fluoroscopy and ultrafast computed tomography were also evaluated. Twenty contiguous 3 mm slices were obtained of the proximal coronary arteries. Total calcium scores were calculated based on the number, areas and peak Hounsfield computed tomographic numbers of the calcific lesions detected. In 88 subjects scored by two readers independently, interobserver agreement was excellent with identical total scores obtained in 70. Ultrafast computed tomography was more sensitive than fluoroscopy, detecting coronary calcium in 90% versus 52% of patients. There were significant differences (p less than 0.0001) in mean total calcium scores for those with versus those without clinical coronary artery disease by decade: 5 versus 132, age 30 to 39 years; 27 versus 291, age 40 to 49 years; 83 versus 462, age 50 to 59 years; and 187 versus 786, age 60 to 69 years. Sensitivity, specificity and predictive values for clinical coronary artery disease were calculated for several total calcium scores in each decade. For age groups 40 to 49 and 50 to 59 years, a total score of 50 resulted in a sensitivity of 71% and 74% and a specificity of 91% and 70%, respectively. For age group 60 to 69 years, a total score of 300 gave a sensitivity of 74% and a specificity of 81%. 
The negative predictive value of a 0 score was 98%, 94% and 100% for age groups 40 to 49, 50 to 59 and 60 to 69 years, respectively. Ultrafast computed tomography is an excellent tool for detecting and quantifying coronary artery calcium." }, { "pmid": "19098205", "title": "Traditional clinical risk assessment tools do not accurately predict coronary atherosclerotic plaque burden: a CT angiography study.", "abstract": "OBJECTIVE\nThe objective of our study was to determine the degree to which Framingham risk estimates and the National Cholesterol Education Program (NCEP) Adult Treatment Panel III core risk categories correlate with total coronary atherosclerotic plaque burden (calcified and noncalcified) as estimated on coronary CT angiograms.\n\n\nMATERIALS AND METHODS\nCoronary CT angiography was performed in 1,653 patients (1,089 men, 564 women) without a history of coronary heart disease (mean age+/-SD: men, 51.6+/-9.7 years; women, 56.9+/-10.5 years). The most common reasons for the examination were hypercholesterolemia, family history, hypertension, smoking, and atypical chest pain. The coronary tree was divided into 16 segments; four different methods were used to quantify the amount of atherosclerotic plaque or the degree of stenosis in each segment, and segment scores were combined to give total scores. Framingham risk estimates and NCEP risk categories were calculated for each patient.\n\n\nRESULTS\nCorrelation of plaque scores with the Framingham 10-year risk estimates were modest: Spearman's rho was 0.49-0.55. For all comparisons of NCEP risk categories to plaque score categories, the proportion of raw agreement, p(0), was less than 0.50. Cohen's kappa ranged from 0.18 to 0.20. Overall, 21% of the patients would have their perceived need for statins changed by using the coronary CTA plaque estimates in place of the NCEP core risk categories; 26% of the patients on statins had no detectable plaque.\n\n\nCONCLUSION\nCoronary risk stratification using a risk factor only-based scheme is a weak discriminator of the overall atherosclerotic plaque burden in individual patients. Patients with little or no plaque might be subjected to lifelong drug therapy, whereas many others with substantial plaque might be undertreated or not treated at all." }, { "pmid": "25169177", "title": "Accuracy of statin assignment using the 2013 AHA/ACC Cholesterol Guideline versus the 2001 NCEP ATP III guideline: correlation with atherosclerotic plaque imaging.", "abstract": "BACKGROUND\nAccurate assignment of statin therapy is a major public health issue.\n\n\nOBJECTIVES\nThe American Heart Association and the American College of Cardiology released a new guideline on the assessment of cardiovascular risk (GACR) to replace the 2001 National Cholesterol Education Program (NCEP) Adult Treatment Panel III recommendations. The aim of this study was to determine which method more accurately assigns statins to patients with features of coronary imaging known to have predictive value for cardiovascular events and whether more patients would be assigned to statins under the new method.\n\n\nMETHODS\nThe burden of coronary atherosclerosis on computed tomography angiography was measured in several ways on the basis of a 16-segment model. Whether to assign a given patient to statin therapy was compared between the NCEP and GACR guidelines.\n\n\nRESULTS\nA total of 3,076 subjects were studied (65.3% men, mean age 55.4 ± 10.3 years, mean age of women 58.9 ± 10.3 years). 
The probability of prescribing statins rose sharply with increasing plaque burden under the GACR compared with the NCEP guideline. Under the NCEP guideline, 59% of patients with ≥50% stenosis of the left main coronary artery and 40% of patients with ≥50% stenosis of other branches would not have been treated. The comparable results for the GACR were 19% and 10%. The use of low-density lipoprotein targets seriously degraded the accuracy of the NCEP guideline for statin assignment. The proportion of patients assigned to statin therapy was 15% higher under the GACR.\n\n\nCONCLUSIONS\nThe new American Heart Association/American College of Cardiology guideline matches statin assignment to total plaque burden better than the older guidelines, with only a modest increase in the number of patients who were assigned statins." }, { "pmid": "19632885", "title": "Standardized evaluation methodology and reference database for evaluating coronary artery centerline extraction algorithms.", "abstract": "Efficiently obtaining a reliable coronary artery centerline from computed tomography angiography data is relevant in clinical practice. Whereas numerous methods have been presented for this purpose, up to now no standardized evaluation methodology has been published to reliably evaluate and compare the performance of the existing or newly developed coronary artery centerline extraction algorithms. This paper describes a standardized evaluation methodology and reference database for the quantitative evaluation of coronary artery centerline extraction algorithms. The contribution of this work is fourfold: (1) a method is described to create a consensus centerline with multiple observers, (2) well-defined measures are presented for the evaluation of coronary artery centerline extraction algorithms, (3) a database containing 32 cardiac CTA datasets with corresponding reference standard is described and made available, and (4) 13 coronary artery centerline extraction algorithms, implemented by different research groups, are quantitatively evaluated and compared. The presented evaluation framework is made available to the medical imaging community for benchmarking existing or newly developed coronary centerline extraction algorithms." }, { "pmid": "19818675", "title": "A review of 3D vessel lumen segmentation techniques: models, features and extraction schemes.", "abstract": "Vascular diseases are among the most important public health problems in developed countries. Given the size and complexity of modern angiographic acquisitions, segmentation is a key step toward the accurate visualization, diagnosis and quantification of vascular pathologies. Despite the tremendous amount of past and on-going dedicated research, vascular segmentation remains a challenging task. In this paper, we review state-of-the-art literature on vascular segmentation, with a particular focus on 3D contrast-enhanced imaging modalities (MRA and CTA). We structure our analysis along three axes: models, features and extraction schemes. We first detail model-based assumptions on the vessel appearance and geometry which can embedded in a segmentation approach. We then review the image features that can be extracted to evaluate these models. Finally, we discuss how existing extraction schemes combine model and feature information to perform the segmentation task. Each component (model, feature and extraction scheme) plays a crucial role toward the efficient, robust and accurate segmentation of vessels of interest. 
Along each axis of study, we discuss the theoretical and practical properties of recent approaches and highlight the most advanced and promising ones." }, { "pmid": "10628954", "title": "Model-based quantitation of 3-D magnetic resonance angiographic images.", "abstract": "Quantification of the degree of stenosis or vessel dimensions are important for diagnosis of vascular diseases and planning vascular interventions. Although diagnosis from three-dimensional (3-D) magnetic resonance angiograms (MRA's) is mainly performed on two-dimensional (2-D) maximum intensity projections, automated quantification of vascular segments directly from the 3-D dataset is desirable to provide accurate and objective measurements of the 3-D anatomy. A model-based method for quantitative 3-D MRA is proposed. Linear vessel segments are modeled with a central vessel axis curve coupled to a vessel wall surface. A novel image feature to guide the deformation of the central vessel axis is introduced. Subsequently, concepts of deformable models are combined with knowledge of the physics of the acquisition technique to accurately segment the vessel wall and compute the vessel diameter and other geometrical properties. The method is illustrated and validated on a carotid bifurcation phantom, with ground truth and medical experts as comparisons. Also, results on 3-D time-of-flight (TOF) MRA images of the carotids are shown. The approach is a promising technique to assess several geometrical vascular parameters directly on the source 3-D images, providing an objective mechanism for stenosis grading." } ]
IEEE Journal of Translational Engineering in Health and Medicine
30680252
PMC6331197
10.1109/JTEHM.2018.2886021
Laryngeal Pressure Estimation With a Recurrent Neural Network
Quantifying the physical parameters of voice production is essential for understanding the process of phonation and can aid in voice research and diagnosis. As an alternative to invasive measurements, these parameters can be estimated by formulating an inverse problem based on a numerical forward model. However, high-fidelity numerical models are often computationally too expensive for this. This paper presents a novel approach that trains a long short-term memory network to estimate the subglottal pressure in the larynx at massively reduced computational cost, using solely synthetic training data. We train the network on synthetic data from a numerical two-mass model and validate it on experimental data from 288 high-speed ex vivo video recordings of porcine vocal folds from a previous study. The training requires significantly fewer model evaluations than the previous optimization approach. On the test set, the network achieves a mean absolute percentage error of 21.2% in estimating the subglottal pressure, comparable to the 17.7% of the previous approach, and evaluating a single sample requires a negligible amount of computation time. The presented approach thus maintains estimation accuracy for the subglottal pressure at significantly reduced computational cost. The methodology is likely transferable to the estimation of other parameters and to training with other numerical models. This improvement should allow the adoption of more sophisticated, high-fidelity numerical models of the larynx. The vast speedup is a critical step toward future clinical application, and knowledge of parameters such as the subglottal pressure will aid in diagnosis and treatment selection.
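To make the approach described in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of an LSTM regressor trained purely on synthetic trajectories. The two-mass model is replaced here by a toy damped-oscillator generator, and all names and hyperparameters (`SEQ_LEN`, `HIDDEN`, `synthetic_batch`) are illustrative assumptions; PyTorch is chosen only for brevity.

```python
import torch
import torch.nn as nn

SEQ_LEN, HIDDEN = 200, 64          # illustrative choices, not taken from the paper

def synthetic_batch(n):
    """Toy stand-in for the two-mass model: damped oscillations whose
    amplitude scales with a normalised 'subglottal pressure' p."""
    p = torch.rand(n, 1)                               # target parameter in [0, 1]
    t = torch.linspace(0, 8 * torch.pi, SEQ_LEN)       # time axis
    traj = p * torch.sin(t) * torch.exp(-0.05 * t)     # (n, SEQ_LEN) trajectories
    traj = traj + 0.01 * torch.randn_like(traj)        # measurement noise
    return traj.unsqueeze(-1), p                       # (n, SEQ_LEN, 1), (n, 1)

class PressureLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, seq, hidden)
        return self.head(out[:, -1])   # regress pressure from the last time step

model = PressureLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                  # absolute error, in the spirit of the MAPE metric

for step in range(200):                # train on freshly generated synthetic data
    x, y = synthetic_batch(64)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

In the paper's setting, `synthetic_batch` would be replaced by trajectories produced by the two-mass model with the (scaled) subglottal pressure as the regression target, and validation would use trajectories extracted from the ex vivo high-speed recordings.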
II. Related Work
Inverse problems for estimating vocal fold parameters have been a topic of voice research for the last fifteen years, and numerous studies have employed such approaches [12]–[15]. Only limited research has been conducted on the applicability of deep learning to these problems, although there has recently been great interest in applying deep learning in medicine. The suggested approach builds on work from both areas: research on vocal fold parameter estimation and recent deep learning approaches.
A. Background
During normal phonation the human vocal folds oscillate at up to 350 Hz. This necessitates sophisticated measurement techniques such as videostroboscopy [16], [17] or high-speed video endoscopy [18]–[20]. Measurements are further impaired by the limiting anatomy of the human larynx. Therefore, complex experimental setups and numerical models are necessary to further the understanding of human phonation. Recent research pursues several avenues, ranging from sophisticated experimental work [21], [22] to numerical simulation [23], [24]. One aim of this research is to gain insight into physical properties such as the mass or stiffness of the vocal folds, as they are critical for understanding the physical process [15], [25]–[27]. Knowledge of these properties is paramount, especially as parameters such as the subglottal pressure have been linked to dysphonia, the abnormal or impaired voice [3]–[6].
B. Modeling and Inverse Problems in Voice Research
Modeling plays an essential role in understanding the human phonatory process by providing insight and information. It is also used to estimate vocal fold parameters through an inverse problem: the parameters of a numerical model are optimized to minimize the difference between the model behavior and a recording of real vocal fold dynamics. If the numerical model is physically sound and the optimization successful, it is possible to infer the parameters of the real vocal folds. First suggested by Döllinger et al. [12], this approach has been used successfully to, e.g., estimate the subglottal pressure [10] or study disorders such as unilateral vocal fold paralysis [14]. The most commonly used numerical model in these endeavors is the two-mass model by Ishizaka and Flanagan [28]. Higher-fidelity models based on, e.g., a lattice Boltzmann approach [29] or a Navier–Stokes airflow model [30]–[32] are available, but they are usually computationally too expensive to employ in an inverse problem. Detailed reviews of available models are given by Erath et al. [33] and Alipour et al. [23].
C. Deep and Transfer Learning in Inverse Problems
Recent years have seen great advances in deep learning, many of them in the domain of computer vision. In this field it has become common to use pre-trained convolutional neural networks (CNNs), which reduces the computational burden of training and allows CNNs to be employed on problem sets of limited size. Many previous studies have investigated the generalization abilities and transferability of features in CNNs [34]–[36]. The ability of neural networks to transfer to similar problem domains has also been researched intensively for other network architectures and in applied settings [11], [37]. CNNs have likewise been employed in image-related inverse problems such as denoising or image reconstruction [38], [39]. McCann et al. [38] also describe the difficulties arising from the limited amount of real training data available in a biomedical context and ways to generate data. However, as shown, e.g., by Jaderberg et al. [40], it is possible to train with synthetically created data if insufficient real data are available and thereby utilize the generalization potential of neural networks. Finally, the application domains of deep learning are still being explored, and recent results highlight the potential of employing neural networks as surrogate models in numerical simulations. Ling et al. [41], e.g., demonstrated a successful application approximating a high-fidelity model in fluid mechanics, and Paganini et al. [42] approximated simulation results in particle physics.
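The inverse-problem formulation reviewed in Section B can be summarized as a least-squares fit. The notation below is ours and is only a sketch, not the exact objective of [12] or [10]: the model parameters \(\theta\), e.g. the masses \(m_i\), stiffnesses \(k_i\), damping constants \(r_i\), and the subglottal pressure \(P_{\mathrm{sub}}\) of the two-mass model, are chosen to minimize the discrepancy between the trajectories extracted from the recordings and those produced by the model,

\[
\hat{\theta} \;=\; \arg\min_{\theta}\; \sum_{t=1}^{T} \bigl\| x^{\mathrm{exp}}(t) - x^{\mathrm{model}}(t;\theta) \bigr\|^{2},
\qquad
\theta \supseteq \{\, m_i,\; k_i,\; r_i,\; P_{\mathrm{sub}} \,\}.
\]

The trained network of this work then serves as a fast approximation of this inverse map for the subglottal pressure, avoiding the repeated forward evaluations of the numerical model that the optimization loop would otherwise require at inference time.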
[ "24782443", "9377276", "10823481", "24224398", "139503", "23809566", "29230589", "12148815", "19272979", "16761837", "17518275", "1528610", "9001636", "25480074", "24771562", "18023324", "28372097", "29121085", "9682857", "24771563", "20329853", "18646995", "17902863", "24204083", "29437460", "27250162", "7699169", "11144591", "16504473", "12002868", "9104025", "21303014", "25480072", "11768701", "22965771", "28464670", "18537405", "3170944", "8914317" ]
[ { "pmid": "24782443", "title": "The prevalence of voice problems among adults in the United States.", "abstract": "OBJECTIVES/HYPOTHESIS\nDetermine the prevalence of voice problems and types of voice disorders among adults in the United States.\n\n\nSTUDY DESIGN\nCross-sectional analysis of a national health survey.\n\n\nMETHODS\nThe 2012 National Health Interview Survey was analyzed, identifying adult cases reporting a voice problem in the preceding 12 months. In addition to demographic data, specific data regarding visits to healthcare professionals for voice problems, diagnoses given, and severity of the voice problem were analyzed. The relationship between voice problems and lost workdays was investigated.\n\n\nRESULTS\nAn estimated 17.9 ± 0.5 million adults (mean age, 49.1 years; 62.9% ± 1.2% female) reported a voice problem (7.6% ± 0.2%). Overall, 10.0% ± 0.1% saw a healthcare professional for their voice problem, and 40.3% ± 1.8% were given a diagnosis. Females were more likely than males to report a voice problem (9.3% ± 0.3% vs. 5.9% ± 0.3%, P < .001). Overall, 22% and 11% reported their voice problem to be a moderate or a big/very big problem, respectively. Infectious laryngitis was the most common diagnosis mentioned (685,000 ± 86,000 cases, 17.8% ± 2.0%). Gastroesophageal reflux disease was mentioned in 308,000 ± 54,000 cases (8.0% ± 1.4%). The mean number of days affected with the voice problem in the past year was 56.2 ± 2.6 days. Respondents with a voice problem reported 7.4 ± 0.9 lost workdays in the past year versus 3.4 ± 0.1 lost workdays for those without (contrast, +4.0 lost workdays; P < .001).\n\n\nCONCLUSIONS\nVoice problems affect one in 13 adults annually. A relative minority seek healthcare for their voice problem, even though the self-reported subjective impact of the voice problem is significant." }, { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." 
}, { "pmid": "10823481", "title": "Estimated subglottic pressure in normal and dysphonic subjects.", "abstract": "The objective of this report was to study the clinical interest of estimated subglottic pressure (ESGP) with measurements of intraoral pressure according to the \"airway interrupted method.\" Twenty healthy female subjects and 27 dysphonic female patients were included and asked to produce sounds under different conditions of pitch and intensity. The ESGP increased with intensity and slightly with pitch in both patients and controls. A comparison between patients and controls showed significantly higher values of ESGP in patients under all conditions of intensity and pitch. For normal intensity and usual pitch, ESGP has been found to be 6.1 hectopascals (hPa) in control subjects and 8.25 hPa in patients (p = .002). Discriminant analysis of all the measured data showed that data recorded for low intensity (lowest possible intensity without whispering) and high pitch (9 semitones above the usual pitch) were the most discriminant. The authors concluded that ESGP allows good discrimination between dysphonic patients and control subjects and might be included in the basic clinical set of objective parameters." }, { "pmid": "24224398", "title": "Measurement of phonation threshold power in normal and disordered voice production.", "abstract": "OBJECTIVES\nPhonation threshold pressure (PTP) and phonation threshold flow (PTF) are useful aerodynamic parameters, but each is sensitive to different disorders. A single comprehensive aerodynamic parameter sensitive to a variety of disorders might be beneficial in quantitative voice assessment. We performed the first study of phonation threshold power (PTW) in human subjects.\n\n\nMETHODS\nPTP and PTF were measured in 100 normal subjects, 19 subjects with vocal fold immobility, and 94 subjects with a benign mass lesion. PTW was calculated from these two parameters. In 41 subjects with a polyp, measurements were obtained before and after excision. Receiver operating characteristic (ROC) analysis was used to determine the ability of the three parameters to distinguish between controls and disordered groups.\n\n\nRESULTS\nThe PTW (p < 0.001), PTP (p < 0.001), and PTF (p < 0.001) were different among the three groups. All parameters decreased after polyp excision. PTW had the highest area under the ROC curve for all analyses.\n\n\nCONCLUSIONS\nPTW is sensitive to the presence of mass lesions and vocal fold mobility disorders. Additionally, changes in PTW can be observed after excision of mass lesions. PTW could be a useful parameter to describe the aerodynamic inputs to voice production." }, { "pmid": "139503", "title": "Measurement of airflow in speech.", "abstract": "It has been shown previously that a mask-type wire screen pneumotachograph can be constructed with a time resolution of 1/2 msec. In this paper it is shown that with careful design a resolution of about 1/4 msec can be achieved. The various factors involved in optimizing such a mask are described. A major practical limitation in the design has been the need for a fast-responding differential pressure transducer to measure mask pressure; however, a simple electrical compensating network is described that allows the use of a nondifferential transducer. Also determined are the conditions under which breath moisture condensing on the wire screen affects the performance of the mask. 
Frontiers in Computational Neuroscience
30687053
PMC6333865
10.3389/fncom.2018.00100
Bio-inspired Analysis of Deep Learning on Not-So-Big Data Using Data-Prototypes
Deep artificial neural networks are feed-forward architectures capable of very impressive performance in diverse domains. Stacking multiple layers allows a hierarchical composition of local functions, providing efficient and compact mappings. Compared to the brain, however, such architectures are closer to a single pipeline and require huge amounts of data, while concrete cases for either human or machine learning systems are often restricted to not-so-big data sets. Furthermore, interpretability of the obtained results is a key issue: since deep learning applications are increasingly present in society, it is important that the underlying processes be accessible and understandable to everyone. To address these challenges, this contribution analyzes how considering prototypes in a rather generalized sense (with respect to the state of the art) makes it possible to work reasonably well with small data sets while providing an interpretable view of the obtained results. A mathematical interpretation of this proposal is also discussed. Sensitivity to hyperparameters, a key issue for reproducible deep learning results, is carefully considered in the methodology. The performance and limitations of the proposed setup are explored in detail under different hyperparameter sets, in a way analogous to how biological experiments are conducted. The result is a rather simple architecture that is easy to explain and, combined with a standard method, targets both performance and interpretability.
2. Related Works

2.1. Prototypes in Literature

The term prototypes appears in the literature under different meanings: a priori information, representation in clusters, quantification of space, as we discuss now.

Jetley and Torr (2015) propose a prototypical priors layer, with a priori chosen prototype images of road signs, encoded using a HoG (histogram of oriented gradients) descriptor and added as fixed units of the penultimate layer. Here, the network is ultimately trained to match the HoG representation of the prototypes, but a standard representative image per class is assumed to exist in the input space to be encoded as a prototype unit.

In prototypical networks (Snell et al., 2017), prototypes are defined as class centroids in the feature space spanned by the embedding CNN. In this simple approach, the nearest prototype is used to classify a given sample, which proved effective on the considered datasets. However, it does not address the metric learning scenario raised in Song et al. (2017). Also derived from this work, Gaussian prototypical networks (Fort, 2017) predict a covariance radius for each prototype, yielding some insight into the discriminating power of each of them. Self-organizing maps, as well as dictionary-based sparse representation methods (Rubinstein et al., 2010), perform a prototypical sampling of the data space (Hecht and Gepperth, 2016) and make it possible to represent what is known about a data distribution, using methods cited in the prototypical networks literature and known to optimize statistical criteria (Banerjee et al., 2005). Bio-inspired models also consider the notion of prototypes, as in, e.g., Serre et al. (2007), which introduces a general framework for the recognition of complex visual scenes that follows the organization of the visual cortex, building an increasingly complex and invariant feature representation and considering a redundant dictionary of features for object categorization. Furthermore, Viéville and Crahay (2004) related the notion of prototypes to an SVM model of the inferior temporal object recognition brain area.

2.2. Few-Shot Learning and Learning to Learn

The importance of learning new concepts from small data sets has motivated the study of few-shot learning tasks. Few-shot learning refers to the ability to discriminate between N unseen classes given k examples of each, with k usually below 5 (Triantafillou et al., 2017); this task is also referred to as k-shot N-way learning. One-shot learning is a special case of this setting, where the system must generalize from a single example of each class. Slightly different is zero-shot learning, where no example of a given class is available in the target domain (e.g., images), whereas information on the categories originates from other domains (e.g., textual descriptions) (Goodfellow et al., 2016d).

Learning a deep discriminative model for a large number of classes has high data requirements and is prone to overfitting if applied directly in a few-shot framework. For this reason, some form of transfer learning is usually used to tackle this problem: a model is trained on other classes before attacking the N new ones. Since the model learned in this pre-training phase has to be adapted to the target classes, few-shot classification can also be seen as a form of "learning how to learn," a very sensible framework when the goal is to learn whatever new categories may come.
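To make the connection between the prototype view of Section 2.1 and the k-shot N-way setting concrete, the following minimal numpy sketch classifies query samples by the nearest class centroid computed from a small support set, in the spirit of prototypical networks (Snell et al., 2017). It is only an illustration, not the implementation of any cited work; the `embed` callable is a stand-in for whatever feature extractor is assumed (a trivial identity map in the toy usage).

```python
import numpy as np

def nearest_prototype_classify(embed, support_x, support_y, query_x):
    """Classify query points by the nearest class centroid (prototype)
    computed in an embedding space.

    embed     : callable mapping an array of inputs to feature vectors
    support_x : (k*N, ...) labelled examples, k per class
    support_y : (k*N,) integer class labels
    query_x   : (Q, ...) unlabelled examples to classify
    """
    z_support = embed(support_x)               # (k*N, d) embedded support set
    z_query = embed(query_x)                   # (Q, d) embedded queries
    classes = np.unique(support_y)
    # one prototype per class: the mean of its embedded support examples
    prototypes = np.stack([z_support[support_y == c].mean(axis=0)
                           for c in classes])  # (N, d)
    # squared Euclidean distance from each query to each prototype
    dists = ((z_query[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[np.argmin(dists, axis=1)]   # predicted class per query

# toy usage with an identity "embedding" on 2-D points (purely illustrative)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    support_x = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(1, 0.1, (5, 2))])
    support_y = np.array([0] * 5 + [1] * 5)
    query_x = np.array([[0.05, -0.02], [0.9, 1.1]])
    print(nearest_prototype_classify(lambda x: x, support_x, support_y, query_x))
```

In a deep learning setting, `embed` would be the trained CNN, and the support set would be the kN labelled examples available for the new classes.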
In this learning-to-learn spirit, pre-training is a meta-training phase, where we learn a meta-model that can learn any set of N classes given to it. Using the model on new k-shot N-way tasks corresponds to a meta-testing phase, where for each task some inner training may take place. There are different ways of implementing this meta-learning setting, but any of them should have a meta-training and a meta-testing phase, with their respective class-wise disjoint datasets DMtrain and DMtest. Details of each phase vary between propositions, but globally meta-training works like a form of pre-training, where the model is trained on some discriminative task over the classes in DMtrain while constructing a representation that should generalize to new classes in DMtest. In this sense, this methodology can still be seen as a form of transfer learning. The meta-testing phase then consists of actually performing the k-shot N-way task multiple times, for different subsets of N classes drawn from DMtest. Each subset itself has to be divided into training and test splits, so as to have kN reference samples for the new classes on one side and to evaluate performance on the other.

To give a clearer example, let us present a common organization of meta-learning: episodic training. First proposed by Vinyals et al. (2016), it has continued to be adopted in other recent works (Santoro et al., 2016; Fort, 2017; Ravi and Larochelle, 2017; Ren et al., 2017; Snell et al., 2017). Arguing that making meta-training conditions closer to meta-testing conditions could enhance learning, they proposed that, during each step of the meta-training phase, the model be trained on a different k-shot N-way task, with N new classes drawn from DMtrain. Each such task is called an episode and demands not only kN examples as a training (or support) set but also some examples of the N classes to serve as a test or query set. During this phase, the error over the query set can be used to adjust the model, while in meta-testing only the support set is used as a reference to classify the query samples.

In line with the above discussion, many recent works on this problem seek to learn an embedding space over the meta-training set that will hopefully generalize to unseen classes in meta-testing. This common feature space allows test examples to be compared with training examples to decide on their class, usually with some kind of nearest-neighbors algorithm, but sometimes resorting to more complicated models. Siamese nets (Koch et al., 2015) are an early example of such models, in which two identical networks map a pair of examples into a learned metric space so that they can be compared via a distance function; the whole network is trained to predict whether an input pair belongs to the same class, repeated over many pairs. Matching networks (Vinyals et al., 2016) work in a one-shot setting, also relying on a learned network that maps support examples of each class to an embedding space, to be later matched to the query example. The embedding is composed of CNNs providing inputs to LSTMs, yielding a context-aware embedding that models dependence between the CNN feature vectors of the support points, and also between support points and the query point. Prototypical nets (Snell et al., 2017) also learn a CNN-based embedding, with the matching between support and query samples performed by a nearest class mean classifier.
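As an illustration of how episodes are typically organized, the sketch below samples one k-shot N-way episode from a meta-training set. It is a generic outline under the assumptions stated in the comments (in particular, the hypothetical `data_by_class` structure mapping each class to an array of its examples); it is not taken from any of the cited papers.

```python
import numpy as np

def sample_episode(data_by_class, n_way, k_shot, q_query, rng):
    """Sample one k-shot N-way episode from a meta-training set.

    data_by_class : dict mapping class label -> numpy array of examples
    Returns support/query inputs and their episode-local labels 0..N-1.
    """
    classes = rng.choice(list(data_by_class.keys()), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for episode_label, c in enumerate(classes):
        examples = data_by_class[c]
        # draw k support examples and q query examples without replacement
        idx = rng.choice(len(examples), size=k_shot + q_query, replace=False)
        support_x.append(examples[idx[:k_shot]])
        query_x.append(examples[idx[k_shot:]])
        support_y += [episode_label] * k_shot
        query_y += [episode_label] * q_query
    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))
```

During meta-training, the query labels are available and drive the loss; during meta-testing, only the support labels are known and the query set is what the model must actually classify.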
In prototypical nets, the CNN is adjusted during meta-training with a cross-entropy loss over the points in the query set, repeated for many episodes.

This line of work is closely related to metric learning, which aims to adapt a metric function over feature vectors for a given dataset (Bellet et al., 2013). If a meaningful metric is learned, distance-based classification becomes a relevant alternative, as exemplified by Mensink et al. (2013). They consider metric learning methods for two distance-based classifiers, the k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers, and propose new methods for NCM allowing more complex class distributions to be modeled with multiple centers per class. This possible complexity in class distributions cannot be captured by local metric learning methods, as is the case with current deep metric learning. As discussed by Song et al. (2017), such methods are incapable of identifying scattered classes with multiple clusters in the space; in response, they propose to learn an embedding function that directly maximizes a clustering metric (normalized mutual information).

Memory-augmented networks have also been explored in the context of few-shot learning (Santoro et al., 2016). The idea is to build on top of a Neural Turing Machine (Graves et al., 2014), an implementation of content-based access memory for neural networks, adapting it to the one-shot learning task. This is yet another meta-learning approach, where the recurrent network is not trained to predict a specific set of classes directly, but tries to predict the right classification at each time step based on the sample-class associations it could learn and keep in memory during the previous time steps. It still needs to see a large number (more than 10 k) of episodes to make good predictions, incrementally improving as it sees up to 10 examples of the same class. Compared to human memory, its functioning could be seen as a working memory that can match new samples to recently seen examples, but is limited to a small number of distinct categories.

2.3. Interpretability

Interest in the field of interpretable machine learning has risen in recent years, partly inspired by the ever-growing impact of machine learning systems on society. Nevertheless, interpretability is a broad term still lacking a precise definition, with open discussions on what it is and how to quantify it (Lipton, 2016; Doshi-Velez and Kim, 2017). One common understanding is to equate it with explainability: the ability to provide explanations for a model's predictions or, even better, for the reasoning process behind its predictions.

Understanding the reasoning behind complex models such as deep neural networks is a difficult task. In visual recognition applications, one common effort is to visualize what types of features have been learned by certain units or layers of a convolutional network. This is usually achieved by optimizing the network input to maximize the activations of the layer of interest. Since the first visualizations produced with DeconvNet (Zeiler and Fergus, 2014), there have been many propositions to improve the quality of the input reconstruction, including the use of regularization priors that enforce more "natural-looking" images (see Olah et al., 2017, for a review). Another way of producing such images is to search the training set for image patches that maximize the activations.
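As a rough illustration of this activation-maximization recipe, the following PyTorch sketch performs gradient ascent on an input image to maximize the mean activation of one channel of a chosen layer, with a simple weight-decay term standing in for the more elaborate "natural-looking" image priors discussed above. The `model` and `layer` handles are assumptions (any convolutional network in eval mode whose chosen layer outputs a 4-D feature map); the sketch does not reproduce the procedure of any specific cited work.

```python
import torch

def visualize_activation(model, layer, channel, steps=200, lr=0.05, weight_decay=1e-4):
    """Gradient ascent on the input image to maximize the mean activation
    of one channel in a chosen layer (a basic feature-visualization recipe)."""
    activation = {}

    def hook(_module, _inputs, output):
        # store the layer output produced during the forward pass
        activation["value"] = output

    handle = layer.register_forward_hook(hook)
    # start from small random noise; a 224x224 RGB input is assumed here
    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    # weight_decay acts as a crude L2 prior pulling the image toward zero
    optimizer = torch.optim.Adam([img], lr=lr, weight_decay=weight_decay)
    for _ in range(steps):
        optimizer.zero_grad()
        model(img)
        # negative mean activation of the chosen channel:
        # minimizing it performs gradient ascent on the activation
        loss = -activation["value"][0, channel].mean()
        loss.backward()
        optimizer.step()
    handle.remove()
    return img.detach()
```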
The patch-search approach has the advantage of producing real examples, although it does not necessarily specify which features in the image led it to be put in a particular category. In between these feature visualization strategies lies the problem of attribution, where the goal is to identify which regions of the input image were responsible for maximizing a chosen activation (Bach et al., 2015; Sundararajan et al., 2017). An example of an attribution procedure is LIME, a method that locally approximates the model around the output prediction and goes back to the input image, highlighting the superpixels most responsible for its predicted class (Ribeiro et al., 2016). Other works are interested in generating salience map information over the input image (Bach et al., 2015; Shrikumar et al., 2017). While feature visualization is still abstract and far from verbal, human-level explanation, attribution and salience maps are grounded in real images and can provide some level of justification.

Even though these methods were developed with deep CNNs in mind, some of them can be applied more generally to explain other classes of models. The explanation procedure itself can be seen as an explanation model, trained to provide justifications given the inputs and outputs of the black-box prediction model. Lundberg and Lee (2017) define additive feature attribution methods, a class of local explanation models unifying diverse approaches from the literature (including Bach et al., 2015; Ribeiro et al., 2016; Shrikumar et al., 2017). Another way to use an accessory model as an explanation is to distill the network into a class of allegedly interpretable models, such as decision trees, training the explanation model to mimic the network's predictions (Frosst and Hinton, 2017).
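To illustrate the distillation idea in its simplest form, the sketch below fits a shallow decision tree to mimic the hard predictions of a black-box classifier on a probe set and reports how faithfully it reproduces them. This is a generic surrogate-model sketch using scikit-learn, not the soft decision tree distillation of Frosst and Hinton (2017); `black_box_predict` is a hypothetical callable wrapping the trained network.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def distill_to_tree(black_box_predict, X_probe, max_depth=4):
    """Fit a shallow decision tree that mimics a black-box classifier,
    as a surrogate explanation of its behavior on a region of the data.

    black_box_predict : callable returning hard class predictions
    X_probe           : array of inputs used to probe the black box
    """
    y_teacher = black_box_predict(X_probe)        # teacher labels
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(X_probe, y_teacher)                  # student mimics teacher
    # fidelity: fraction of probe points on which the tree agrees with the teacher
    fidelity = float(np.mean(tree.predict(X_probe) == y_teacher))
    return tree, fidelity

# the learned rules can then be inspected, e.g. print(export_text(tree))
```

The fidelity score is worth reporting alongside such a surrogate: a tree that poorly matches the network explains little about it.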
[ "26161953", "11690606", "29994192", "28117445", "9507973", "26017442", "19191599", "15982753", "24051724", "22879517", "17224612", "27244717", "26886976", "15483393" ]
[ { "pmid": "26161953", "title": "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.", "abstract": "Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package." }, { "pmid": "11690606", "title": "Integrated model of visual processing.", "abstract": "Cortical processing of visual information requires that information be exchanged between neurons coding for distant regions in the visual field. It is argued that feedback connections are the best candidates for such rapid long-distance interconnections. In the integrated model, information arriving in the cortex from the magnocellular layers of the lateral geniculate nucleus is first sent and processed in the parietal cortex that is very rapidly activated by a visual stimulus. Results from this first-pass computation are then sent back by feedback connections to areas V1 and V2 that act as 'active black-boards' for the rest of the visual cortical areas: information retroinjected from the parietal cortex is used to guide further processing of parvocellular and koniocellular information in the inferotemporal cortex." }, { "pmid": "29994192", "title": "Few-Example Object Detection with Model Communication.", "abstract": "In this paper, we study object detection using a large pool of unlabeled images and only a few labeled images per category, named \"few-example object detection\". The key challenge consists in generating trustworthy training samples as many as possible from the pool. Using few training examples as seeds, our method iterates between model training and high-confidence sample selection. In training, easy samples are generated first and, then the poorly initialized model undergoes improvement. As the model becomes more discriminative, challenging but reliable samples are selected. After that, another round of model improvement takes place. To further improve the precision and recall of the generated training samples, we embed multiple detection models in our framework, which has proven to outperform the single model baseline and the model ensemble method. 
Experiments on PASCAL VOC'07, MS COCO'14, and ILSVRC'13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results when compared to the state-of-the-art weakly-supervised approaches using a large number of image-level labels." }, { "pmid": "28117445", "title": "Dermatologist-level classification of skin cancer with deep neural networks.", "abstract": "Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images-two orders of magnitude larger than previous datasets-consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 (ref. 13) and can therefore potentially provide low-cost universal access to vital diagnostic care." }, { "pmid": "9507973", "title": "Rapid categorization of natural images by rhesus monkeys.", "abstract": "Two rhesus macaques were tested on a categorization task in which they had to classify previously unseen photographs flashed for only 80 ms. One monkey was trained to respond to the presence of an animal, the second to the presence of food. Although the monkeys were not quite as accurate as humans tested on the same material, they nevertheless performed this very challenging visual task remarkably well. Furthermore, their reaction times were considerably shorter than even the fastest human subject. Such data, combined with the detailed knowledge of the monkey's visual system, provide a severe challenge to current theories of visual processing. They also argue that this form of rapid visual categorization is fundamentally similar in both monkeys and humans." }, { "pmid": "26017442", "title": "Deep learning.", "abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. 
Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech." }, { "pmid": "19191599", "title": "Nonlinear extraction of independent components of natural images using radial gaussianization.", "abstract": "We consider the problem of efficiently encoding a signal by transforming it to a new representation whose components are statistically independent. A widely studied linear solution, known as independent component analysis (ICA), exists for the case when the signal is generated as a linear transformation of independent nongaussian sources. Here, we examine a complementary case, in which the source is nongaussian and elliptically symmetric. In this case, no invertible linear transform suffices to decompose the signal into independent components, but we show that a simple nonlinear transformation, which we call radial gaussianization (RG), is able to remove all dependencies. We then examine this methodology in the context of natural image statistics. We first show that distributions of spatially proximal bandpass filter responses are better described as elliptical than as linearly transformed independent sources. Consistent with this, we demonstrate that the reduction in dependency achieved by applying RG to either nearby pairs or blocks of bandpass filter responses is significantly greater than that achieved by ICA. Finally, we show that the RG transformation may be closely approximated by divisive normalization, which has been used to model the nonlinear response properties of visual neurons." }, { "pmid": "15982753", "title": "Subcortical loops through the basal ganglia.", "abstract": "Parallel, largely segregated, closed-loop projections are an important component of cortical-basal ganglia-cortical connectional architecture. Here, we present the hypothesis that such loops involving the neocortex are neither novel nor the first evolutionary example of closed-loop architecture involving the basal ganglia. Specifically, we propose that a phylogenetically older, closed-loop series of subcortical connections exists between the basal ganglia and brainstem sensorimotor structures, a good example of which is the midbrain superior colliculus. Insofar as this organization represents a general feature of brain architecture, cortical and subcortical inputs to the basal ganglia might act independently, co-operatively or competitively to influence the mechanisms of action selection." }, { "pmid": "24051724", "title": "Distance-based image classification: generalizing to new classes at near-zero cost.", "abstract": "We study large-scale image classification methods that can incorporate new classes and training images continuously over time at negligible cost. To this end, we consider two distance-based classifiers, the k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers, and introduce a new metric learning approach for the latter. We also introduce an extension of the NCM classifier to allow for richer class representations. 
Experiments on the ImageNet 2010 challenge dataset, which contains over 10(6) training images of 1,000 classes, show that, surprisingly, the NCM classifier compares favorably to the more flexible k-NN classifier. Moreover, the NCM performance is comparable to that of linear SVMs which obtain current state-of-the-art performance. Experimentally, we study the generalization performance to classes that were not used to learn the metrics. Using a metric learned on 1,000 classes, we show results for the ImageNet-10K dataset which contains 10,000 classes, and obtain performance that is competitive with the current state-of-the-art while being orders of magnitude faster. Furthermore, we show how a zero-shot class prior based on the ImageNet hierarchy can improve performance when few training images are available." }, { "pmid": "22879517", "title": "The pulvinar regulates information transmission between cortical areas based on attention demands.", "abstract": "Selective attention mechanisms route behaviorally relevant information through large-scale cortical networks. Although evidence suggests that populations of cortical neurons synchronize their activity to preferentially transmit information about attentional priorities, it is unclear how cortical synchrony across a network is accomplished. Based on its anatomical connectivity with the cortex, we hypothesized that the pulvinar, a thalamic nucleus, regulates cortical synchrony. We mapped pulvino-cortical networks within the visual system, using diffusion tensor imaging, and simultaneously recorded spikes and field potentials from these interconnected network sites in monkeys performing a visuospatial attention task. The pulvinar synchronized activity between interconnected cortical areas according to attentional allocation, suggesting a critical role for the thalamus not only in attentional selection but more generally in regulating information transmission across the visual cortex." }, { "pmid": "17224612", "title": "Robust object recognition with cortex-like mechanisms.", "abstract": "We introduce a new general framework for the recognition of complex visual scenes, which is motivated by biology: We describe a hierarchical system that closely follows the organization of visual cortex and builds an increasingly complex and invariant feature representation by alternating between a template matching and a maximum pooling operation. We demonstrate the strength of the approach on a range of recognition tasks: From invariant single object recognition in clutter to multiclass categorization problems and complex scene understanding tasks that rely on the recognition of both shape-based as well as texture-based objects. Given the biological constraints that the system had to satisfy, the approach performs surprisingly well: It has the capability of learning from only a few training examples and competes with state-of-the-art systems. We also discuss the existence of a universal, redundant dictionary of features that could handle the recognition of most object categories. In addition to its relevance for computer vision, the success of this approach suggests a plausibility proof for a class of feedforward models of object recognition in cortex." }, { "pmid": "27244717", "title": "Fully Convolutional Networks for Semantic Segmentation.", "abstract": "Convolutional networks are powerful visual models that yield hierarchies of features. 
We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional networks achieve improved segmentation of PASCAL VOC (30% relative improvement to 67.2% mean IU on 2012), NYUDv2, SIFT Flow, and PASCAL-Context, while inference takes one tenth of a second for a typical image." }, { "pmid": "26886976", "title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.", "abstract": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks." 
}, { "pmid": "15483393", "title": "Using an Hebbian learning rule for multi-class SVM classifiers.", "abstract": "Regarding biological visual classification, recent series of experiments have enlighten the fact that data classification can be realized in the human visual cortex with latencies of about 100-150 ms, which, considering the visual pathways latencies, is only compatible with a very specific processing architecture, described by models from Thorpe et al. Surprisingly enough, this experimental evidence is in coherence with algorithms derived from the statistical learning theory. More precisely, there is a double link: on one hand, the so-called Vapnik theory offers tools to evaluate and analyze the biological model performances and on the other hand, this model is an interesting front-end for algorithms derived from the Vapnik theory. The present contribution develops this idea, introducing a model derived from the statistical learning theory and using the biological model of Thorpe et al. We experiment its performances using a restrained sign language recognition experiment. This paper intends to be read by biologist as well as statistician, as a consequence basic material in both fields have been reviewed." } ]
Frontiers in Neurorobotics
30687057
PMC6336031
10.3389/fnbot.2018.00081
Acceptability Study of A3-K3 Robotic Architecture for a Neurorobotics Painting
In this paper, the authors present a novel architecture for controlling an industrial robot via a Brain Computer Interface. The robot used is a Series 2000 KR 210-2. The robotic arm was fitted with DI drawing devices that clamp, hold, and manipulate various artistic media such as brushes, pencils, and pens. The user selected a high-level task, for instance a shape or a movement, through a human machine interface, and the translation into robot movement was delegated entirely to the Robot Control Architecture, which defined a plan to accomplish the user's task. The architecture comprised a Human Machine Interface based on a P300 Brain Computer Interface and a robotic architecture composed of a deliberative layer and a reactive layer that translate the user's high-level command into a stream of movements for the robot's joints. To create a real-case scenario, the architecture was presented at the Ars Electronica Festival, where the A3-K3 architecture was used for painting. Visitors completed a survey addressing four self-assessed dimensions related to human-robot interaction: technology knowledge, personal attitude, innovativeness, and satisfaction. The results have led to further exploration of the boundaries of human-robot interaction, highlighting the possibilities for human expression in the process of interacting with a machine to create art.
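The control flow described above, in which a P300 selection is handled by a deliberative layer that plans the task and a reactive layer that streams joint movements, can be pictured with a purely illustrative sketch; every class, method, and waypoint below is a hypothetical placeholder rather than the actual A3-K3 implementation.

    import math
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class JointCommand:
        angles: List[float]  # one target angle per robot joint, in radians

    class DeliberativeLayer:
        """Turns a user-selected high-level task (e.g., a shape) into Cartesian waypoints."""
        def plan(self, task: str) -> List[Tuple[float, float, float]]:
            if task == "circle":
                return [(0.5 + 0.1 * math.cos(a), 0.3 + 0.1 * math.sin(a), 0.0)
                        for a in (i * math.pi / 18 for i in range(36))]
            raise ValueError(f"unknown task: {task}")

    class ReactiveLayer:
        """Converts each waypoint into an incremental joint command (IK is stubbed out)."""
        def stream(self, waypoints: List[Tuple[float, float, float]]) -> List[JointCommand]:
            return [JointCommand(angles=self._inverse_kinematics(p)) for p in waypoints]

        def _inverse_kinematics(self, point: Tuple[float, float, float]) -> List[float]:
            # Placeholder: a real controller would solve IK for the six-axis arm.
            return list(point) + [0.0, 0.0, 0.0]

    def on_p300_selection(task: str) -> List[JointCommand]:
        """Entry point invoked when the P300 interface reports the user's chosen task."""
        waypoints = DeliberativeLayer().plan(task)
        return ReactiveLayer().stream(waypoints)

The split mirrors the deliberative/reactive decomposition in the abstract: planning happens once per user selection, while the reactive layer produces the continuous stream of joint movements.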
1.2. State of Art and Related Work in the Field

Lucchiari et al. (2016) studied the connection between cerebral rhythms and the creative process. Many studies have used Brain Computer Interfaces to create neuroprosthetic control systems that stimulate the organism to reanimate paralysed limbs (Moritz et al., 2008; Ethier et al., 2012; Knudsen et al., 2014). The main limitation of these approaches is the use of invasive Brain Computer Interfaces to achieve device control. Nijholt and Nam (2015) addressed challenges in designing BCI applications related to the experience of art. Andujar et al. (2015) proposed a definition of artistic brain-computer interfaces (artistic BCI) from a passive BCI perspective in four fields: human-computer interaction, neurophysiology, art, and computing. Wadeson et al. (2015) reviewed the literature on artistic BCIs by classifying four types of user control: selective control, passive control, direct control, and collaborative control. Botrel et al. (2014) described Brain Painting, an application for painting based on event-related potentials.
[ "10576401", "15079079", "22522928", "25411486", "18923392", "21151375", "1464675", "28727554", "28298888" ]
[ { "pmid": "10576401", "title": "Comparison of P300 from passive and active tasks for auditory and visual stimuli.", "abstract": "The P300 event-related brain potential (ERP) was elicited with a stimulus-sequence paradigm for auditory and visual stimuli in separate active and passive task response conditions. Auditory stimuli in the passive task yielded P300 waveforms similar to those obtained from the active task condition. Visual stimuli in the passive task yielded much smaller P300 waveforms that were not morphologically consistent with those from the active task. The results suggest that auditory stimuli produce more robust P300 components than visual stimuli in passive task situations." }, { "pmid": "15079079", "title": "Activation of the prefrontal cortex in the human visual aesthetic perception.", "abstract": "Visual aesthetic perception (\"aesthetics\") or the capacity to visually perceive a particular attribute added to other features of objects, such as form, color, and movement, was fixed during human evolutionary lineage as a trait not shared with any great ape. Although prefrontal brain expansion is mentioned as responsible for the appearance of such human trait, no current knowledge exists on the role of prefrontal areas in the aesthetic perception. The visual brain consists of \"several parallel multistage processing systems, each specialized in a given task such as, color or motion\" [Bartels, A. & Zeki, S. (1999) Proc. R. Soc. London Ser. B 265, 2327-2332]. Here we report the results of an experiment carried out with magnetoencephalography which shows that the prefrontal area is selectively activated in humans during the perception of objects qualified as \"beautiful\" by the participants. Therefore, aesthetics can be hypothetically considered as an attribute perceived by means of a particular brain processing system, in which the prefrontal cortex seems to play a key role." }, { "pmid": "22522928", "title": "Restoration of grasp following paralysis through brain-controlled stimulation of muscles.", "abstract": "Patients with spinal cord injury lack the connections between brain and spinal cord circuits that are essential for voluntary movement. Clinical systems that achieve muscle contraction through functional electrical stimulation (FES) have proven to be effective in allowing patients with tetraplegia to regain control of hand movements and to achieve a greater measure of independence in daily activities. In existing clinical systems, the patient uses residual proximal limb movements to trigger pre-programmed stimulation that causes the paralysed muscles to contract, allowing use of one or two basic grasps. Instead, we have developed an FES system in primates that is controlled by recordings made from microelectrodes permanently implanted in the brain. We simulated some of the effects of the paralysis caused by C5 or C6 spinal cord injury by injecting rhesus monkeys with a local anaesthetic to block the median and ulnar nerves at the elbow. Then, using recordings from approximately 100 neurons in the motor cortex, we predicted the intended activity of several of the paralysed muscles, and used these predictions to control the intensity of stimulation of the same muscles. This process essentially bypassed the spinal cord, restoring to the monkeys voluntary control of their paralysed muscles. This achievement is a major advance towards similar restoration of hand function in human patients through brain-controlled FES. 
We anticipate that in human patients, this neuroprosthesis would allow much more flexible and dexterous use of the hand than is possible with existing FES systems." }, { "pmid": "25411486", "title": "Dissociating movement from movement timing in the rat primary motor cortex.", "abstract": "Neural encoding of the passage of time to produce temporally precise movements remains an open question. Neurons in several brain regions across different experimental contexts encode estimates of temporal intervals by scaling their activity in proportion to the interval duration. In motor cortex the degree to which this scaled activity relies upon afferent feedback and is guided by motor output remains unclear. Using a neural reward paradigm to dissociate neural activity from motor output before and after complete spinal transection, we show that temporally scaled activity occurs in the rat hindlimb motor cortex in the absence of motor output and after transection. Context-dependent changes in the encoding are plastic, reversible, and re-established following injury. Therefore, in the absence of motor output and despite a loss of afferent feedback, thought necessary for timed movements, the rat motor cortex displays scaled activity during a broad range of temporally demanding tasks similar to that identified in other brain regions." }, { "pmid": "18923392", "title": "Direct control of paralysed muscles by cortical neurons.", "abstract": "A potential treatment for paralysis resulting from spinal cord injury is to route control signals from the brain around the injury by artificial connections. Such signals could then control electrical stimulation of muscles, thereby restoring volitional movement to paralysed limbs. In previously separate experiments, activity of motor cortex neurons related to actual or imagined movements has been used to control computer cursors and robotic arms, and paralysed muscles have been activated by functional electrical stimulation. Here we show that Macaca nemestrina monkeys can directly control stimulation of muscles using the activity of neurons in the motor cortex, thereby restoring goal-directed movements to a transiently paralysed arm. Moreover, neurons could control functional stimulation equally well regardless of any previous association to movement, a finding that considerably expands the source of control signals for brain-machine interfaces. Monkeys learned to use these artificial connections from cortical cells to muscles to generate bidirectional wrist torques, and controlled multiple neuron-muscle pairs simultaneously. Such direct transforms from cortical activity to muscle stimulation could be implemented by autonomous electronic circuitry, creating a relatively natural neuroprosthesis. These results are the first demonstration that direct artificial connections between cortical cells and muscles can compensate for interrupted physiological pathways and restore volitional control of movement to paralysed limbs." }, { "pmid": "21151375", "title": "Brain Painting: First Evaluation of a New Brain-Computer Interface Application with ALS-Patients and Healthy Volunteers.", "abstract": "Brain-computer interfaces (BCIs) enable paralyzed patients to communicate; however, up to date, no creative expression was possible. The current study investigated the accuracy and user-friendliness of P300-Brain Painting, a new BCI application developed to paint pictures using brain activity only. 
Two different versions of the P300-Brain Painting application were tested: A colored matrix tested by a group of ALS-patients (n = 3) and healthy participants (n = 10), and a black and white matrix tested by healthy participants (n = 10). The three ALS-patients achieved high accuracies; two of them reaching above 89% accuracy. In healthy subjects, a comparison between the P300-Brain Painting application (colored matrix) and the P300-Spelling application revealed significantly lower accuracy and P300 amplitudes for the P300-Brain Painting application. This drop in accuracy and P300 amplitudes was not found when comparing the P300-Spelling application to an adapted, black and white matrix of the P300-Brain Painting application. By employing a black and white matrix, the accuracy of the P300-Brain Painting application was significantly enhanced and reached the accuracy of the P300-Spelling application. ALS-patients greatly enjoyed P300-Brain Painting and were able to use the application with the same accuracy as healthy subjects. P300-Brain Painting enables paralyzed patients to express themselves creatively and to participate in the prolific society through exhibitions." }, { "pmid": "1464675", "title": "The P300 wave of the human event-related potential.", "abstract": "The P300 wave is a positive deflection in the human event-related potential. It is most commonly elicited in an \"oddball\" paradigm when a subject detects an occasional \"target\" stimulus in a regular train of standard stimuli. The P300 wave only occurs if the subject is actively engaged in the task of detecting the targets. Its amplitude varies with the improbability of the targets. Its latency varies with the difficulty of discriminating the target stimulus from the standard stimuli. A typical peak latency when a young adult subject makes a simple discrimination is 300 ms. In patients with decreased cognitive ability, the P300 is smaller and later than in age-matched normal subjects. The intracerebral origin of the P300 wave is not known and its role in cognition not clearly understood. The P300 may have multiple intracerebral generators, with the hippocampus and various association areas of the neocortex all contributing to the scalp-recorded potential. The P300 wave may represent the transfer of information to consciousness, a process that involves many different regions of the brain." }, { "pmid": "28727554", "title": "A Human-Humanoid Interaction Through the Use of BCI for Locked-In ALS Patients Using Neuro-Biological Feedback Fusion.", "abstract": "This paper illustrates a new architecture for a human-humanoid interaction based on EEG-brain computer interface (EEG-BCI) for patients affected by locked-in syndrome caused by Amyotrophic Lateral Sclerosis (ALS). The proposed architecture is able to recognise users' mental state accordingly to the biofeedback factor , based on users' attention, intention, and focus, that is used to elicit a robot to perform customised behaviours. Experiments have been conducted with a population of eight subjects: four ALS patients in a near locked-in status with normal ocular movement and four healthy control subjects enrolled for age, education, and computer expertise. 
The results showed as three ALS patients have completed the task with 96.67% success; the healthy controls with 100% success; the fourth ALS has been excluded from the results for his low general attention during the task; the analysis of factor highlights as ALS subjects have shown stronger (81.20%) than healthy controls (76.77%). Finally, a post-hoc analysis is provided to show how robotic feedback helps in maintaining focus on expected task. These preliminary data suggest that ALS patients could successfully control a humanoid robot through a BCI architecture, potentially enabling them to conduct some everyday tasks and extend their presence in the environment." }, { "pmid": "28298888", "title": "Reaching and Grasping a Glass of Water by Locked-In ALS Patients through a BCI-Controlled Humanoid Robot.", "abstract": "Locked-in Amyotrophic Lateral Sclerosis (ALS) patients are fully dependent on caregivers for any daily need. At this stage, basic communication and environmental control may not be possible even with commonly used augmentative and alternative communication devices. Brain Computer Interface (BCI) technology allows users to modulate brain activity for communication and control of machines and devices, without requiring a motor control. In the last several years, numerous articles have described how persons with ALS could effectively use BCIs for different goals, usually spelling. In the present study, locked-in ALS patients used a BCI system to directly control the humanoid robot NAO (Aldebaran Robotics, France) with the aim of reaching and grasping a glass of water. Four ALS patients and four healthy controls were recruited and trained to operate this humanoid robot through a P300-based BCI. A few minutes training was sufficient to efficiently operate the system in different environments. Three out of the four ALS patients and all controls successfully performed the task with a high level of accuracy. These results suggest that BCI-operated robots can be used by locked-in ALS patients as an artificial alter-ego, the machine being able to move, speak and act in his/her place." } ]
eLife
30652683
PMC6342523
10.7554/eLife.38173
CaImAn an open source tool for scalable calcium imaging data analysis
Advances in fluorescence microscopy enable monitoring larger brain areas in-vivo with finer time resolution. The resulting data rates require reproducible analysis pipelines that are reliable, fully automated, and scalable to datasets generated over the course of months. We present CaImAn, an open-source library for calcium imaging data analysis. CaImAn provides automatic and scalable methods to address problems common to pre-processing, including motion correction, neural activity identification, and registration across different sessions of data collection. It does this while requiring minimal user intervention, with good scalability on computers ranging from laptops to high-performance computing clusters. CaImAn is suitable for two-photon and one-photon imaging, and also enables real-time analysis on streaming data. To benchmark the performance of CaImAn we collected and combined a corpus of manual annotations from multiple labelers on nine mouse two-photon datasets. We demonstrate that CaImAn achieves near-human performance in detecting locations of active neurons.
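The near-human detection performance mentioned above is assessed by matching the locations of automatically found components against manual annotations. A minimal sketch of such a benchmark, assuming centroid coordinates and a simple greedy nearest-neighbour matching rule (the distance threshold and the matching scheme are illustrative assumptions, not CaImAn's exact evaluation code), could look like this:

    import numpy as np

    def match_and_score(detected, manual, max_dist=10.0):
        """Greedy centroid matching between detected and manually annotated neurons.

        detected, manual: arrays of shape (N, 2) and (M, 2) holding (x, y) centroids.
        Returns precision, recall, and F1 under a distance threshold given in pixels.
        """
        detected = np.asarray(detected, dtype=float)
        manual = np.asarray(manual, dtype=float)
        unmatched = list(range(len(manual)))
        true_pos = 0
        for d in detected:
            if not unmatched:
                break
            dists = np.linalg.norm(manual[unmatched] - d, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= max_dist:
                true_pos += 1
                unmatched.pop(j)
        false_pos = len(detected) - true_pos
        false_neg = len(manual) - true_pos
        precision = true_pos / max(true_pos + false_pos, 1)
        recall = true_pos / max(true_pos + false_neg, 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-12)
        return precision, recall, f1

Precision, recall, and their harmonic mean F1 then summarize the agreement between the automated output and each human labeler.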
Related work

Source extraction

Some source extraction methods attempt the detection of neurons in static images using supervised or unsupervised learning methods. Examples of unsupervised methods on summary images include graph-cut approaches applied to the correlation image (Kaifosh et al., 2014; Spaen et al., 2017) and dictionary learning (Pachitariu et al., 2013). Supervised learning methods based on boosting (Valmianski et al., 2010) or, more recently, deep neural networks have also been applied to the problem of neuron detection (Apthorpe et al., 2016; Klibisz et al., 2017). While these methods can be efficient in detecting the locations of neurons, they cannot infer the underlying activity, nor do they readily offer ways to deal with the spatial overlap of different components.

To extract temporal traces jointly with the spatial footprints of the components, one can use methods that directly represent the full spatio-temporal data using matrix factorization approaches, for example, independent component analysis (ICA) (Mukamel et al., 2009), constrained nonnegative matrix factorization (CNMF) (Pnevmatikakis et al., 2016) (and its adaptation to one-photon data (Zhou et al., 2018)), clustering based approaches (Pachitariu et al., 2017), dictionary learning (Petersen et al., 2017), or active contour models (Reynolds et al., 2017). Such spatio-temporal methods are unsupervised and focus on detecting active neurons by considering the spatio-temporal activity of a component as a contiguous set of pixels within the FOV that are correlated in time. While such methods tend to offer a direct decomposition of the data into a set of sources with activity traces in an unsupervised way, in principle they require processing of the full dataset and are thus quickly rendered intractable. Possible approaches to deal with the data size include distributed processing in High Performance Computing (HPC) clusters (Freeman et al., 2014), spatio-temporal decimation (Friedrich et al., 2017a), and dimensionality reduction (Pachitariu et al., 2017). Recently, Giovannucci et al. (2017) prototyped an online algorithm (OnACID) by adapting matrix factorization setups (Pnevmatikakis et al., 2016; Mairal et al., 2010) to operate on streaming calcium imaging data and thus natively deal with large data rates. For a full review see Pnevmatikakis (2018).

Deconvolution

For the problem of predicting spikes from fluorescence traces, both supervised and unsupervised methods have been explored. Supervised methods rely on the use of labeled data to train or fit biophysical or neural network models (Theis et al., 2016), although semi-supervised methods that jointly learn a generative model for fluorescence traces have also been proposed (Speiser et al., 2017). Unsupervised methods can be either deterministic, such as sparse non-negative deconvolution (Vogelstein et al., 2010; Pnevmatikakis et al., 2016), which gives a single estimate of the deconvolved neural activity, or probabilistic, aiming to also characterize the uncertainty around these estimates (e.g., Pnevmatikakis et al., 2013; Deneux et al., 2016). A recent community benchmarking effort (Berens et al., 2017) characterizes the similarities and differences of the various available methods.
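To illustrate the matrix factorization idea behind CNMF-style source extraction and the AR(1) generative model targeted by the deconvolution methods above, the following sketch uses plain non-negative matrix factorization from scikit-learn plus a crude clipped-difference inversion of the calcium trace. It is a toy stand-in under those assumptions, not the constrained, spatially regularized solvers used in CaImAn; the function names and parameters are illustrative.

    import numpy as np
    from sklearn.decomposition import NMF

    def extract_sources(movie, n_components=10):
        """Factorize a calcium movie into temporal traces and spatial footprints.

        movie: non-negative array of shape (T, H, W).
        Returns C of shape (T, K) with one temporal trace per component and
        A of shape (K, H*W) with the matching spatial footprints, so that
        movie.reshape(T, -1) is approximately C @ A.
        """
        T = movie.shape[0]
        Y = movie.reshape(T, -1)
        model = NMF(n_components=n_components, init="nndsvda", max_iter=300)
        C = model.fit_transform(Y)   # temporal activity of each component
        A = model.components_        # spatial footprint of each component
        return C, A

    def naive_deconvolve(trace, gamma=0.95):
        """Crude inversion of an AR(1) calcium model c_t = gamma * c_{t-1} + s_t.

        This is only the unconstrained difference clipped at zero, not the sparse
        non-negative deconvolution solved by the methods cited above."""
        trace = np.asarray(trace, dtype=float)
        s = np.maximum(trace[1:] - gamma * trace[:-1], 0.0)
        return np.concatenate([[0.0], s])

In the constrained formulation, sparsity of the deconvolved activity and localized spatial footprints are imposed explicitly, which is what allows overlapping components to be demixed reliably.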
[ "23524393", "25663846", "27251287", "28301770", "23868258", "27432255", "18836457", "25068736", "28771570", "28291787", "21212780", "25295002", "27881303", "19778505", "25532138", "29483642", "26774160", "28782629", "30529147", "24836920", "29085906", "29069591", "20711183", "27300105", "27151639", "20610792", "25024921", "20554834", "26818514", "29469809" ]
[ { "pmid": "23524393", "title": "Whole-brain functional imaging at cellular resolution using light-sheet microscopy.", "abstract": "Brain function relies on communication between large populations of neurons across multiple brain areas, a full understanding of which would require knowledge of the time-varying activity of all neurons in the central nervous system. Here we use light-sheet microscopy to record activity, reported through the genetically encoded calcium indicator GCaMP5G, from the entire volume of the brain of the larval zebrafish in vivo at 0.8 Hz, capturing more than 80% of all neurons at single-cell resolution. Demonstrating how this technique can be used to reveal functionally defined circuits across the brain, we identify two populations of neurons with correlated activity patterns. One circuit consists of hindbrain neurons functionally coupled to spinal cord neuropil. The other consists of an anatomically symmetric population in the anterior hindbrain, with activity in the left and right halves oscillating in antiphase, on a timescale of 20 s, and coupled to equally slow oscillations in the inferior olive." }, { "pmid": "25663846", "title": "Swept confocally-aligned planar excitation (SCAPE) microscopy for high speed volumetric imaging of behaving organisms.", "abstract": "We report a new 3D microscopy technique that allows volumetric imaging of living samples at ultra-high speeds: Swept, confocally-aligned planar excitation (SCAPE) microscopy. While confocal and two-photon microscopy have revolutionized biomedical research, current implementations are costly, complex and limited in their ability to image 3D volumes at high speeds. Light-sheet microscopy techniques using two-objective, orthogonal illumination and detection require a highly constrained sample geometry, and either physical sample translation or complex synchronization of illumination and detection planes. In contrast, SCAPE microscopy acquires images using an angled, swept light-sheet in a single-objective, en-face geometry. Unique confocal descanning and image rotation optics map this moving plane onto a stationary high-speed camera, permitting completely translationless 3D imaging of intact samples at rates exceeding 20 volumes per second. We demonstrate SCAPE microscopy by imaging spontaneous neuronal firing in the intact brain of awake behaving mice, as well as freely moving transgenic Drosophila larvae." }, { "pmid": "27251287", "title": "A shared neural ensemble links distinct contextual memories encoded close in time.", "abstract": "Recent studies suggest that a shared neural ensemble may link distinct memories encoded close in time. According to the memory allocation hypothesis, learning triggers a temporary increase in neuronal excitability that biases the representation of a subsequent memory to the neuronal ensemble encoding the first memory, such that recall of one memory increases the likelihood of recalling the other memory. Here we show in mice that the overlap between the hippocampal CA1 ensembles activated by two distinct contexts acquired within a day is higher than when they are separated by a week. Several findings indicate that this overlap of neuronal ensembles links two contextual memories. First, fear paired with one context is transferred to a neutral context when the two contexts are acquired within a day but not across a week. Second, the first memory strengthens the second memory within a day but not across a week. 
Older mice, known to have lower CA1 excitability, do not show the overlap between ensembles, the transfer of fear between contexts, or the strengthening of the second memory. Finally, in aged mice, increasing cellular excitability and activating a common ensemble of CA1 neurons during two distinct context exposures rescued the deficit in linking memories. Taken together, these findings demonstrate that contextual memories encoded close in time are linked by directing storage into overlapping ensembles. Alteration of these processes by ageing could affect the temporal structure of memories, thus impairing efficient recall of related information." }, { "pmid": "28301770", "title": "Imaging and Optically Manipulating Neuronal Ensembles.", "abstract": "The neural code that relates the firing of neurons to the generation of behavior and mental states must be implemented by spatiotemporal patterns of activity across neuronal populations. These patterns engage selective groups of neurons, called neuronal ensembles, which are emergent building blocks of neural circuits. We review optical and computational methods, based on two-photon calcium imaging and two-photon optogenetics, to detect, characterize, and manipulate neuronal ensembles in three dimensions. We review data using these methods in the mammalian cortex that demonstrate the existence of neuronal ensembles in the spontaneous and evoked cortical activity in vitro and in vivo. Moreover, two-photon optogenetics enable the possibility of artificially imprinting neuronal ensembles into awake, behaving animals and of later recalling those ensembles selectively by stimulating individual cells. These methods could enable deciphering the neural code and also be used to understand the pathophysiology of and design novel therapies for neurological and mental diseases." }, { "pmid": "23868258", "title": "Ultrasensitive fluorescent proteins for imaging neuronal activity.", "abstract": "Fluorescent calcium sensors are widely used to image neural activity. Using structure-based mutagenesis and neuron-based screening, we developed a family of ultrasensitive protein calcium sensors (GCaMP6) that outperformed other sensors in cultured neurons and in zebrafish, flies and mice in vivo. In layer 2/3 pyramidal neurons of the mouse visual cortex, GCaMP6 reliably detected single action potentials in neuronal somata and orientation-tuned synaptic calcium transients in individual dendritic spines. The orientation tuning of structurally persistent spines was largely stable over timescales of weeks. Orientation tuning averaged across spine populations predicted the tuning of their parent cell. Although the somata of GABAergic neurons showed little orientation tuning, their dendrites included highly tuned dendritic segments (5-40-µm long). GCaMP6 sensors thus provide new windows into the organization and dynamics of neural circuits over multiple spatial and temporal scales." }, { "pmid": "27432255", "title": "Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo.", "abstract": "Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. 
It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo." }, { "pmid": "18836457", "title": "High-speed, miniaturized fluorescence microscopy in freely moving mice.", "abstract": "A central goal in biomedicine is to explain organismic behavior in terms of causal cellular processes. However, concurrent observation of mammalian behavior and underlying cellular dynamics has been a longstanding challenge. We describe a miniaturized (1.1 g mass) epifluorescence microscope for cellular-level brain imaging in freely moving mice, and its application to imaging microcirculation and neuronal Ca(2+) dynamics." }, { "pmid": "25068736", "title": "Mapping brain activity at scale with cluster computing.", "abstract": "Understanding brain function requires monitoring and interpreting the activity of large networks of neurons during behavior. Advances in recording technology are greatly increasing the size and complexity of neural data. Analyzing such data will pose a fundamental bottleneck for neuroscience. We present a library of analytical tools called Thunder built on the open-source Apache Spark platform for large-scale distributed computing. The library implements a variety of univariate and multivariate analyses with a modular, extendable structure well-suited to interactive exploration and analysis development. We demonstrate how these analyses find structure in large-scale neural data, including whole-brain light-sheet imaging data from fictively behaving larval zebrafish, and two-photon imaging data from behaving mouse. The analyses relate neuronal responses to sensory input and behavior, run in minutes or less and can be used on a private cluster or in the cloud. Our open-source framework thus holds promise for turning brain activity mapping efforts into biological insights." }, { "pmid": "28771570", "title": "Multi-scale approaches for high-speed imaging and analysis of large neural populations.", "abstract": "Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. 
First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to \"zoom out\" by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution." }, { "pmid": "28291787", "title": "Fast online deconvolution of calcium imaging data.", "abstract": "Fluorescent calcium indicators are a popular means for observing the spiking activity of large neuronal populations, but extracting the activity of each neuron from raw fluorescence calcium imaging data is a nontrivial problem. We present a fast online active set method to solve this sparse non-negative deconvolution problem. Importantly, the algorithm progresses through each time series sequentially from beginning to end, thus enabling real-time online estimation of neural activity during the imaging session. Our algorithm is a generalization of the pool adjacent violators algorithm (PAVA) for isotonic regression and inherits its linear-time computational complexity. We gain remarkable increases in processing speed: more than one order of magnitude compared to currently employed state of the art convex solvers relying on interior point methods. Unlike these approaches, our method can exploit warm starts; therefore optimizing model hyperparameters only requires a handful of passes through the data. A minor modification can further improve the quality of activity inference by imposing a constraint on the minimum spike size. The algorithm enables real-time simultaneous deconvolution of O(10^5) traces of whole-brain larval zebrafish imaging data on a laptop." }, { "pmid": "21212780", "title": "In vivo two-photon imaging of sensory-evoked dendritic calcium signals in cortical neurons.", "abstract": "Neurons in cortical sensory regions receive modality-specific information through synapses that are located on their dendrites. Recently, the use of two-photon microscopy combined with whole-cell recordings has helped to identify visually evoked dendritic calcium signals in mouse visual cortical neurons in vivo. The calcium signals are restricted to small dendritic domains ('hotspots') and they represent visual synaptic inputs that are highly tuned for orientation and direction. This protocol describes the experimental procedures for the recording and the analysis of these visually evoked dendritic calcium signals. The key points of this method include delivery of fluorescent calcium indicators through the recording patch pipette, selection of an appropriate optical plane with many dendrites, hyperpolarization of the membrane potential and two-photon imaging. The whole protocol can be completed in 5-6 h, including 1-2 h of two-photon calcium imaging in combination with stable whole-cell recordings."
}, { "pmid": "25295002", "title": "SIMA: Python software for analysis of dynamic fluorescence imaging data.", "abstract": "Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/." }, { "pmid": "27881303", "title": "The Serotonergic System Tracks the Outcomes of Actions to Mediate Short-Term Motor Learning.", "abstract": "To execute accurate movements, animals must continuously adapt their behavior to changes in their bodies and environments. Animals can learn changes in the relationship between their locomotor commands and the resulting distance moved, then adjust command strength to achieve a desired travel distance. It is largely unknown which circuits implement this form of motor learning, or how. Using whole-brain neuronal imaging and circuit manipulations in larval zebrafish, we discovered that the serotonergic dorsal raphe nucleus (DRN) mediates short-term locomotor learning. Serotonergic DRN neurons respond phasically to swim-induced visual motion, but little to motion that is not self-generated. During prolonged exposure to a given motosensory gain, persistent DRN activity emerges that stores the learned efficacy of motor commands and adapts future locomotor drive for tens of seconds. The DRN's ability to track the effectiveness of motor intent may constitute a computational building block for the broader functions of the serotonergic system. VIDEO ABSTRACT." }, { "pmid": "19778505", "title": "Automated analysis of cellular signals from large-scale calcium imaging data.", "abstract": "Recent advances in fluorescence imaging permit studies of Ca(2+) dynamics in large numbers of cells, in anesthetized and awake behaving animals. However, unlike for electrophysiological signals, standardized algorithms for assigning optically recorded signals to individual cells have not yet emerged. Here, we describe an automated sorting procedure that combines independent component analysis and image segmentation for extracting cells' locations and their dynamics with minimal human supervision. In validation studies using simulated data, automated sorting significantly improved estimation of cellular signals compared to conventional analysis based on image regions of interest. We used automated procedures to analyze data recorded by two-photon Ca(2+) imaging in the cerebellar vermis of awake behaving mice. Our analysis yielded simultaneous Ca(2+) activity traces for up to >100 Purkinje cells and Bergmann glia from single recordings. 
Using this approach, we found microzones of Purkinje cells that were stable across behavioral states and in which synchronous Ca(2+) spiking rose significantly during locomotion." }, { "pmid": "25532138", "title": "Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo.", "abstract": "We describe an all-optical strategy for simultaneously manipulating and recording the activity of multiple neurons with cellular resolution in vivo. We performed simultaneous two-photon optogenetic activation and calcium imaging by coexpression of a red-shifted opsin and a genetically encoded calcium indicator. A spatial light modulator allows tens of user-selected neurons to be targeted for spatiotemporally precise concurrent optogenetic activation, while simultaneous fast calcium imaging provides high-resolution network-wide readout of the manipulation with negligible optical cross-talk. Proof-of-principle experiments in mouse barrel cortex demonstrate interrogation of the same neuronal population during different behavioral states and targeting of neuronal ensembles based on their functional signature. This approach extends the optogenetic toolkit beyond the specificity obtained with genetic or viral approaches, enabling high-throughput, flexible and long-term optical interrogation of functionally defined neural circuits with single-cell and single-spike resolution in the mouse brain in vivo." }, { "pmid": "29483642", "title": "A robotic multidimensional directed evolution approach applied to fluorescent voltage reporters.", "abstract": "We developed a new way to engineer complex proteins toward multidimensional specifications using a simple, yet scalable, directed evolution strategy. By robotically picking mammalian cells that were identified, under a microscope, as expressing proteins that simultaneously exhibit several specific properties, we can screen hundreds of thousands of proteins in a library in just a few hours, evaluating each along multiple performance axes. To demonstrate the power of this approach, we created a genetically encoded fluorescent voltage indicator, simultaneously optimizing its brightness and membrane localization using our microscopy-guided cell-picking strategy. We produced the high-performance opsin-based fluorescent voltage reporter Archon1 and demonstrated its utility by imaging spiking and millivolt-scale subthreshold and synaptic activity in acute mouse brain slices and in larval zebrafish in vivo. We also measured postsynaptic responses downstream of optogenetically controlled neurons in C. elegans." }, { "pmid": "26774160", "title": "Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data.", "abstract": "We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. 
This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data." }, { "pmid": "28782629", "title": "NoRMCorre: An online algorithm for piecewise rigid motion correction of calcium imaging data.", "abstract": "BACKGROUND\nMotion correction is a challenging pre-processing problem that arises early in the analysis pipeline of calcium imaging data sequences. The motion artifacts in two-photon microscopy recordings can be non-rigid, arising from the finite time of raster scanning and non-uniform deformations of the brain medium.\n\n\nNEW METHOD\nWe introduce an algorithm for fast Non-Rigid Motion Correction (NoRMCorre) based on template matching. NoRMCorre operates by splitting the field of view (FOV) into overlapping spatial patches along all directions. The patches are registered at a sub-pixel resolution for rigid translation against a regularly updated template. The estimated alignments are subsequently up-sampled to create a smooth motion field for each frame that can efficiently approximate non-rigid artifacts in a piecewise-rigid manner.\n\n\nEXISTING METHODS\nExisting approaches either do not scale well in terms of computational performance or are targeted to non-rigid artifacts arising just from the finite speed of raster scanning, and thus cannot correct for non-rigid motion observable in datasets from a large FOV.\n\n\nRESULTS\nNoRMCorre can be run in an online mode resulting in comparable to or even faster than real time motion registration of streaming data. We evaluate its performance with simple yet intuitive metrics and compare against other non-rigid registration methods on simulated data and in vivo two-photon calcium imaging datasets. Open source Matlab and Python code is also made available.\n\n\nCONCLUSIONS\nThe proposed method and accompanying code can be useful for solving large scale image registration problems in calcium imaging, especially in the presence of non-rigid deformations." }, { "pmid": "30529147", "title": "Analysis pipelines for calcium imaging data.", "abstract": "Calcium imaging is a popular tool among neuroscientists because of its capability to monitor in vivo large neural populations across weeks with single neuron and single spike resolution. Before any downstream analysis, the data needs to be pre-processed to extract the location and activity of the neurons and processes in the observed field of view. The ever increasing size of calcium imaging datasets necessitates scalable analysis pipelines that are reproducible and fully automated. This review focuses on recent methods for addressing the pre-processing problems that arise in calcium imaging data analysis, and available software tools for high throughput analysis pipelines." }, { "pmid": "24836920", "title": "Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy.", "abstract": "High-speed, large-scale three-dimensional (3D) imaging of neuronal activity poses a major challenge in neuroscience. Here we demonstrate simultaneous functional imaging of neuronal activity at single-neuron resolution in an entire Caenorhabditis elegans and in larval zebrafish brain. 
Our technique captures the dynamics of spiking neurons in volumes of ∼700 μm × 700 μm × 200 μm at 20 Hz. Its simplicity makes it an attractive tool for high-speed volumetric calcium imaging." }, { "pmid": "29085906", "title": "ABLE: An Activity-Based Level Set Segmentation Algorithm for Two-Photon Calcium Imaging Data.", "abstract": "We present an algorithm for detecting the location of cells from two-photon calcium imaging data. In our framework, multiple coupled active contours evolve, guided by a model-based cost function, to identify cell boundaries. An active contour seeks to partition a local region into two subregions, a cell interior and exterior, in which all pixels have maximally \"similar\" time courses. This simple, local model allows contours to be evolved predominantly independently. When contours are sufficiently close, their evolution is coupled, in a manner that permits overlap. We illustrate the ability of the proposed method to demix overlapping cells on real data. The proposed framework is flexible, incorporating no prior information regarding a cell's morphology or stereotypical temporal activity, which enables the detection of cells with diverse properties. We demonstrate algorithm performance on a challenging mouse in vitro dataset, containing synchronously spiking cells, and a manually labelled mouse in vivo dataset, on which ABLE (the proposed method) achieves a 67.5% success rate." }, { "pmid": "29069591", "title": "Tracking the Same Neurons across Multiple Days in Ca2+ Imaging Data.", "abstract": "Ca2+ imaging techniques permit time-lapse recordings of neuronal activity from large populations over weeks. However, without identifying the same neurons across imaging sessions (cell registration), longitudinal analysis of the neural code is restricted to population-level statistics. Accurate cell registration becomes challenging with increased numbers of cells, sessions, and inter-session intervals. Current cell registration practices, whether manual or automatic, do not quantitatively evaluate registration accuracy, possibly leading to data misinterpretation. We developed a probabilistic method that automatically registers cells across multiple sessions and estimates the registration confidence for each registered cell. Using large-scale Ca2+ imaging data recorded over weeks from the hippocampus and cortex of freely behaving mice, we show that our method performs more accurate registration than previously used routines, yielding estimated error rates <5%, and that the registration is scalable for many sessions. Thus, our method allows reliable longitudinal analysis of the same neurons over long time periods." }, { "pmid": "20711183", "title": "Parallel processing of visual space by neighboring neurons in mouse visual cortex.", "abstract": "Visual cortex shows smooth retinotopic organization on the macroscopic scale, but it is unknown how receptive fields are organized at the level of neighboring neurons. This information is crucial for discriminating among models of visual cortex. We used in vivo two-photon calcium imaging to independently map ON and OFF receptive field subregions of local populations of layer 2/3 neurons in mouse visual cortex. Receptive field subregions were often precisely shared among neighboring neurons. Furthermore, large subregions seem to be assembled from multiple smaller, non-overlapping subregions of other neurons in the same local population. 
These experiments provide, to our knowledge, the first characterization of the diversity of receptive fields in a dense local network of visual cortex and reveal elementary units of receptive field organization. Our results suggest that a limited pool of afferent receptive fields is available to a local population of neurons and reveal new organizational principles for the neural circuitry of the mouse visual cortex." }, { "pmid": "27300105", "title": "A large field of view two-photon mesoscope with subcellular resolution for in vivo imaging.", "abstract": "Imaging is used to map activity across populations of neurons. Microscopes with cellular resolution have small (." }, { "pmid": "27151639", "title": "Benchmarking Spike Rate Inference in Population Calcium Imaging.", "abstract": "A fundamental challenge in calcium imaging has been to infer spike rates of neurons from the measured noisy fluorescence traces. We systematically evaluate different spike inference algorithms on a large benchmark dataset (>100,000 spikes) recorded from varying neural tissue (V1 and retina) using different calcium indicators (OGB-1 and GCaMP6). In addition, we introduce a new algorithm based on supervised learning in flexible probabilistic models and find that it performs better than other published techniques. Importantly, it outperforms other algorithms even when applied to entirely new datasets for which no simultaneously recorded data is available. Future data acquired in new experimental conditions can be used to further improve the spike prediction accuracy and generalization performance of the model. Finally, we show that comparing algorithms on artificial data is not informative about performance on real data, suggesting that benchmarking different methods with real-world datasets may greatly facilitate future algorithmic developments in neuroscience." }, { "pmid": "20610792", "title": "Automatic identification of fluorescently labeled brain cells for rapid functional imaging.", "abstract": "The on-line identification of labeled cells and vessels is a rate-limiting step in scanning microscopy. We use supervised learning to formulate an algorithm that rapidly and automatically tags fluorescently labeled somata in full-field images of cortex and constructs an optimized scan path through these cells. A single classifier works across multiple subjects, regions of the cortex of similar depth, and different magnification and contrast levels without the need to retrain the algorithm. Retraining only has to be performed when the morphological properties of the cells change significantly. In conjunction with two-photon laser scanning microscopy and bulk-labeling of cells in layers 2/3 of rat parietal cortex with a calcium indicator, we can automatically identify ∼ 50 cells within 1 min and sample them at ∼ 100 Hz with a signal-to-noise ratio of ∼ 10." }, { "pmid": "25024921", "title": "scikit-image: image processing in Python.", "abstract": "scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. 
More information can be found on the project homepage, http://scikit-image.org." }, { "pmid": "20554834", "title": "Fast nonnegative deconvolution for spike train inference from population calcium imaging.", "abstract": "Fluorescent calcium indicators are becoming increasingly popular as a means for observing the spiking activity of large neuronal populations. Unfortunately, extracting the spike train of each neuron from a raw fluorescence movie is a nontrivial problem. This work presents a fast nonnegative deconvolution filter to infer the approximately most likely spike train of each neuron, given the fluorescence observations. This algorithm outperforms optimal linear deconvolution (Wiener filtering) on both simulated and biological data. The performance gains come from restricting the inferred spike trains to be positive (using an interior-point method), unlike the Wiener filter. The algorithm runs in linear time, and is fast enough that even when simultaneously imaging >100 neurons, inference can be performed on the set of all observed traces faster than real time. Performing optimal spatial filtering on the images further refines the inferred spike train estimates. Importantly, all the parameters required to perform the inference can be estimated using only the fluorescence data, obviating the need to perform joint electrophysiological and imaging calibration experiments." }, { "pmid": "26818514", "title": "Resolution of High-Frequency Mesoscale Intracortical Maps Using the Genetically Encoded Glutamate Sensor iGluSnFR.", "abstract": "Wide-field-of-view mesoscopic cortical imaging with genetically encoded sensors enables decoding of regional activity and connectivity in anesthetized and behaving mice; however, the kinetics of most genetically encoded sensors can be suboptimal for in vivo characterization of frequency bands higher than 1-3 Hz. Furthermore, existing sensors, in particular those that measure calcium (genetically encoded calcium indicators; GECIs), largely monitor suprathreshold activity. Using a genetically encoded sensor of extracellular glutamate and in vivo mesoscopic imaging, we demonstrate rapid kinetics of virally transduced or transgenically expressed glutamate-sensing fluorescent reporter iGluSnFR. In both awake and anesthetized mice, we imaged an 8 × 8 mm field of view through an intact transparent skull preparation. iGluSnFR revealed cortical representation of sensory stimuli with rapid kinetics that were also reflected in correlation maps of spontaneous cortical activities at frequencies up to the alpha band (8-12 Hz). iGluSnFR resolved temporal features of sensory processing such as an intracortical reverberation during the processing of visual stimuli. The kinetics of iGluSnFR for reporting regional cortical signals were more rapid than those for Emx-GCaMP3 and GCaMP6s and comparable to the temporal responses seen with RH1692 voltage sensitive dye (VSD), with similar signal amplitude. Regional cortical connectivity detected by iGluSnFR in spontaneous brain activity identified functional circuits consistent with maps generated from GCaMP3 mice, GCaMP6s mice, or VSD sensors. 
Viral and transgenic iGluSnFR tools have potential utility in normal physiology, as well as neurologic and psychiatric pathologies in which abnormalities in glutamatergic signaling are implicated.\n\n\nSIGNIFICANCE STATEMENT\nWe have characterized the usage of virally transduced or transgenically expressed extracellular glutamate sensor iGluSnFR to perform wide-field-of-view mesoscopic imaging of cortex in both anesthetized and awake mice. Probes for neurotransmitter concentration enable monitoring of brain activity and provide a more direct measure of regional functional activity that is less dependent on nonlinearities associated with voltage-gated ion channels. We demonstrate functional maps of extracellular glutamate concentration and that this sensor has rapid kinetics that enable reporting high-frequency signaling. This imaging strategy has utility in normal physiology and pathologies in which altered glutamatergic signaling is observed. Moreover, we provide comparisons between iGluSnFR and genetically encoded calcium indicators and voltage-sensitive dyes." }, { "pmid": "29469809", "title": "Efficient and accurate extraction of in vivo calcium signals from microendoscopic video data.", "abstract": "In vivo calcium imaging through microendoscopic lenses enables imaging of previously inaccessible neuronal populations deep within the brains of freely moving animals. However, it is computationally challenging to extract single-neuronal activity from microendoscopic data, because of the very large background fluctuations and high spatial overlaps intrinsic to this recording modality. Here, we describe a new constrained matrix factorization approach to accurately separate the background and then demix and denoise the neuronal signals of interest. We compared the proposed method against previous independent components analysis and constrained nonnegative matrix factorization approaches. On both simulated and experimental data recorded from mice, our method substantially improved the quality of extracted cellular signals and detected more well-isolated neural signals, especially in noisy data regimes. These advances can in turn significantly enhance the statistical power of downstream analyses, and ultimately improve scientific conclusions derived from microendoscopic data." } ]
Heliyon
30705989
PMC6348230
10.1016/j.heliyon.2019.e01141
Associating learning technology to sustain the environment through green mobile applications
Today, there is a great urge to "Go Green" in many facets of our lives, such as reducing energy consumption and creating more eco-friendly products, as a way to mitigate the crises we might face in the future. Computing devices are spreading quickly across the world, and the number of devices harming the environment through their energy consumption and toxic waste urges us to seek a solution that preserves both the environment and our well-being. With "quality education" a major goal of the UN 2030 Sustainable Development Agenda, it is a must that we link "quality education" with education itself. It is imperative that we engage the youth in attaining this objective: it is not enough to empower them with tools; we must also encourage them to look at current issues outside the box, from a worldwide view in which the focus is on the environment, be it reducing fuel usage, decreasing air and water pollution, or recycling; in brief, being environment friendly. In this paper, we highlight the increasing use of mobile devices and show the importance of green applications for saving the environment and preserving our health. The paper proposes a model with four metrics (energy, an economic metric, performance, and an energy/performance metric) that aims to provide a design for mobile applications that addresses some of the concerns related to the environment. The proposed model was implemented in a mobile application, and the results were compared to a regular application that does not take the "green" environment into consideration. The results show that the application following the defined metrics preserves the environment and performs better than the regular application design. Thus, it presents a stepping stone towards linking learning technology while at the same time sustaining the environment.
2. Related work
Green is a term of utmost concern these days; it was originally associated mainly with environment-friendly metrics and has since expanded to computing. Chaudhary et al. [7] discussed the migration from paper to databases over the last decade and showed that the large storage devices used to host these databases pose a threat to the environment, as a lot of power is needed to keep the data centers up and running. Wang and Khan [8] provided a study of the metrics that can help in building a green data center. Blackburn [9] tackled green data centers and presented metrics related to data center efficiency. Belady [10] discussed data center power efficiency metrics; in that paper, metrics for measuring power consumption were presented to calculate, specifically, energy consumption and data center efficiency. Kipp et al. [11] targeted the development of methodologies and models to reduce the environmental impact of IT systems; the work focused on evaluating IT infrastructure based on the energy efficiency of the hardware environment. Talebi and Thomas [12] presented specific techniques that enable green computing to be integrated into classrooms and research laboratories. These include turning off equipment between classes, using the computers' power-saving modes, eliminating phantom loads, upgrading to extend the computer lifecycle, and many other techniques. On a similar note, Mahmoud and Ahmad [13] also discussed techniques and models for building a green application, focusing on energy and resource consumption.
The GREENSOFT model is a well-known model in green software engineering. Naumann et al. [14, 15] attempted to classify green software engineering; the authors explored the software life cycle and the effects it produces throughout its different phases. Kocak et al. [16] evaluated the software product and its environmental characteristics using a framework based on ANP, a widely used approach to evaluate and prioritize environmental criteria. The main feature of that paper is the developed framework, which can help in analyzing and determining the need for each criterion when developing the software model and defining each requirement of the software. Hindle [17] tackled the impact of software evolution on power consumption by providing metrics to measure power consumption and by studying application behavior as it is updated.
Archiving requires significant storage and processing capabilities. Bigazzi and Bertini [18] discussed the environmental impact of data archiving and how to add a green measure to it; they framed sustainability measures in systems in three areas: environmental, economic, and social. Atrey et al. [19] discussed the negative impact that cloud computing has on the environment: cloud computing provides "unlimited" capabilities, but such performance and storage come at the cost of pollution due to high power consumption and CO2 emissions. Their work focused on describing metrics for analyzing power consumption. Rivoire [20] presented a green computing model, green metrics, and statistics about the adaptability of green computing models. Siso et al. [21] described the negative effects of power and heat consumption in data centers and provided metrics to calculate them; the metrics are classified into categories and ordered by importance.
In our paper, and in contrast with the papers discussed above, we introduce green metrics for mobile applications that account for all of the device's components in order to reach an optimal green model that is both environment and mobile friendly. In particular, we present metrics that take into consideration the CPU, the device's memory, the battery, and power consumption, whereas previous works focused on a single aspect of a green model; most studies tackle the energy consumption metric and neglect the other components of the device that can harm the environment when overlooked during the software design and development process.
[ "22142669" ]
[ { "pmid": "22142669", "title": "Mobile phone use for contacting emergency services in life-threatening circumstances.", "abstract": "BACKGROUND\nThe potential health benefits of mobile phone use have not been widely studied, except for telemedicine-type applications.\n\n\nSTUDY OBJECTIVES\nThis study seeks to determine whether initial contact with emergency services via a mobile phone in life-threatening situations is associated with potential health benefits when compared to contact via a landline.\n\n\nMETHODS\nA record-linkage study was carried out in which data from all emergency dispatches for immediately life-threatening events from a United Kingdom county ambulance service were linked to the Patient Admission System at two major local hospitals. Mortality (at the scene, at the emergency department [ED], and during hospitalization); transfer to the ED; admission (inpatient care, and intensive care unit); and length of stay were analyzed for calls classified as Code Red (immediately life-threatening) by initial exposure (mobile phone vs. landline), while controlling for potential confounding variables.\n\n\nRESULTS\nOf 354,199 ambulances dispatched to attend emergency incidents, 66% transported patients to the hospital while 2% stood down due to death at the scene. Mobile phone compared to landline reporting of emergencies resulted in significant reductions in the risk of death at the scene (odds ratio [OR] 0.77), but not for death in the ED or during inpatient admission. The risk of being transferred to the ED and subsequent inpatient admission were significantly lower with reporting from mobile phones compared to landline (OR 0.93 and OR 0.82, respectively).\n\n\nCONCLUSIONS\nIn this study, evidence of statistical association was demonstrated between the use of mobile phones to alert ambulance services in life-threatening situations and improved outcomes for patients." } ]
Frontiers in Genetics
30728826
PMC6351489
10.3389/fgene.2018.00685
Multiple Partial Regularized Nonnegative Matrix Factorization for Predicting Ontological Functions of lncRNAs
Long non-coding RNAs (lncRNAs) are critical regulators of biological processes and are highly related to complex diseases. Even though next-generation sequencing technology facilitates the discovery of a great number of lncRNAs, knowledge about the functions of lncRNAs is limited. Thus, it is promising to predict the functions of lncRNAs, which sheds light on the mechanisms of complex diseases. Current algorithms predict the functions of lncRNAs by using the features of protein-coding genes. Generally speaking, these algorithms fuse heterogeneous genomic data to construct lncRNA-gene associations via a linear combination, which cannot fully characterize the function-lncRNA relations. To overcome this issue, we present a nonnegative matrix factorization algorithm with multiple partial regularization (MPrNMF) to predict the functions of lncRNAs without fusing the heterogeneous genomic data. In detail, for each type of genomic data, we construct the lncRNA-gene associations, resulting in multiple association matrices. The proposed method integrates them separately via a regularization strategy, rather than fusing them into a single type of association. The results demonstrate that the proposed algorithm outperforms state-of-the-art methods based on network analysis. The model and algorithm provide an effective way to explore the functions of lncRNAs.
2. Related Works
In this section, we first introduce the mathematical notations that are widely used in the forthcoming sections. Then, we review state-of-the-art methods for the prediction of lncRNA functions.
2.1. Notations
The notations are summarized in Table 1. Let n be the number of entities in the networks. Generally speaking, let $n_o$ be the number of ontological functions in Gene Ontology (GO), $n_g$ be the number of proteins (genes) in the PPI network, and $n_l$ be the number of lncRNAs in the co-expression network. Let $G_g$, $G_l$ be the PPI and lncRNA co-expression networks, respectively. The adjacency matrix of $G_g$, denoted by $W_g$, is an $n_g \times n_g$ matrix whose element $w_{ij}^{[g]}$ is the weight on edge $(v_i, v_j)$ in $G_g$. The degree of vertex $v_i$ in $G_g$ is the sum of weights on edges connecting $v_i$, i.e., $d_i^{[g]} = \sum_j w_{ij}^{[g]}$. The degree matrix $D_g$ is the diagonal matrix with the degree sequence of $G_g$, i.e., $D_g = \mathrm{diag}(d_1^{[g]}, d_2^{[g]}, \ldots, d_{n_g}^{[g]})$. The Laplacian matrix of $G_g$ is defined as $L_g = I - D_g^{-1/2} W_g D_g^{-1/2}$. Analogously, the adjacency matrix of $G_l$ is denoted by $W_l$, and $L_l$ is the Laplacian matrix of $G_l$. The associations between heterogeneous entities are denoted by matrices. Specifically, let X be the known lncRNA-ontology associations, Y be the known gene-lncRNA associations, and $Y_1$ ($Y_2$) be the known lncRNA-disease (gene-disease) associations, respectively.
Table 1. Notations and descriptions.
Symbol | Definition and description
$n_o$, $n_g$, $n_l$ | Number of ontological functions, genes and lncRNAs
$G$ | Graph with vertex set V and edge set E
$X$ | Known lncRNA-ontology associations
$Y_1$, $Y_2$ | Known lncRNA-gene associations
$G_g$ | Protein-protein interaction (PPI) network
$G_l$ | LncRNA co-expression network
$\bar{W}_g$ | Normalized adjacency matrix of the PPI network, $\bar{W}_g = D^{-1/2} W_g D^{-1/2}$
$\bar{W}_l$ | Normalized adjacency matrix of the lncRNA co-expression network, $\bar{W}_l = D^{-1/2} W_l D^{-1/2}$
$L_g$ | Normalized Laplacian matrix of $G_g$, i.e., $L_g = I - \bar{W}_g$
$L_l$ | Normalized Laplacian matrix of $G_l$, i.e., $L_l = I - \bar{W}_l$
2.2. Related Algorithms
The label propagation algorithm has been successfully applied to predict phenotype-gene associations in various settings (Li and Patra, 2010; Vanunu et al., 2010); the principle of label propagation is illustrated in Figure 1A. In detail, label propagation assumes that well-connected lncRNAs in $G_l$ are very likely to share the same label, which leads to the following objective function:
(1) $J_{LP} = \theta\,\mathrm{tr}(\hat{X} L_l \hat{X}') + (1-\theta)\,\|\hat{X} - X\|^2$,
where $\hat{X}$ is the predicted lncRNA-ontology association matrix, $\theta \in (0, 1)$ is a parameter controlling the contributions of the two terms in Equation (1), $\mathrm{tr}(A)$ is the trace of matrix A, i.e., $\mathrm{tr}(A) = \sum_i a_{ii}$, and $\|A\|$ is the $\ell_2$ norm of matrix A. In Equation (1), the first term characterizes how consistent the predicted lncRNA-ontology associations $\hat{X}$ are with the lncRNA co-expression network, while the second one measures how well the predicted associations fit the initial labeling.
Figure 1. The flowchart of the current algorithms based on network analysis: (A) label propagation based on the lncRNA co-expression network, (B) label propagation based on the bi-colored network.
However, the number of predicted associations is largely determined by the sparsity of the known associations in X. When X is very sparse, the number of predicted associations is limited. In practice, X is very sparse, since the GO functions of the vast majority of lncRNAs are unknown. Fortunately, the GO functions of most proteins are known. Thus, the available algorithms overcome this limitation of the label propagation algorithm by integrating the proteins and lncRNAs, as shown in Figure 1B. Specifically, given the known protein-GO associations X, the PPI network $G_g$, the lncRNA co-expression network $G_l$, and the lncRNA-gene associations Y, the ultimate goal is to predict the lncRNA-ontology associations via integrative analysis of heterogeneous data. The lnc-GFP algorithm (Guo et al., 2013) follows the label propagation approach by using the bi-colored network, which is defined as
(2) $C = \begin{bmatrix} W_l & Y \\ Y' & W_g \end{bmatrix}$.
Thus, the objective function in Equation (1) is transformed into
(3) $J_{LP} = \theta\,\mathrm{tr}(\hat{X} L_C \hat{X}') + (1-\theta)\,\|\hat{X} - X\|^2$,
where $L_C$ is the Laplacian matrix of the bi-colored network C. The KATZLGO method (Zhang et al., 2017) predicts the GO functions of lncRNAs by using the KATZ score of the bi-colored network, which counts the paths of various lengths in the bi-colored network.
The bi-colored-network-based methods make use of lncRNA-gene associations to predict the functions of lncRNAs. To exploit the knowledge in $G_l$ and $G_g$, Petergrosso et al. (2017) proposed dual label propagation (DLP) to predict phenome-genome associations. Specifically, the objective function in Equation (1) under the DLP model can be re-written as
(4) $J_{DLP} = \|\hat{X} - X\|^2 + \beta\,\mathrm{tr}(\hat{X} L_g \hat{X}') + \gamma\,\mathrm{tr}(\hat{X} L_l \hat{X}')$,
where $\beta \geq 0$, $\gamma \geq 0$ are tuning parameters. The first term measures the consistency between the predicted and the known associations, and the last two measure the smoothness over the PPI and lncRNA networks.
Most of the available algorithms for the prediction of lncRNA functions are based on the bi-colored network model. In this study, we investigate the possibility of predicting the functions of lncRNAs by integrating multiple networks, where each type of genomic data is used to construct the lncRNA-gene associations.
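To make the formulations above concrete, the following is a minimal NumPy sketch (not the authors' implementation) of label propagation over the bi-colored network of Equation (2), minimizing the objective of Equations (1)/(3) by fixed-point iteration. All function names, the toy dimensions, and the random data are illustrative assumptions, and the orientation of $\hat{X}$ (entities in rows, GO terms in columns) may differ from the paper's notation.

```python
import numpy as np

def normalize_adjacency(W):
    """Symmetric normalization W_bar = D^{-1/2} W D^{-1/2}; zero-degree nodes stay zero."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def label_propagation(W, X0, theta=0.8, n_iter=200, tol=1e-6):
    """Iterate X <- theta * W_bar @ X + (1 - theta) * X0, which converges to the
    minimizer of Eq. (1)/(3): X_hat = (1 - theta) * (I - theta * W_bar)^{-1} @ X0."""
    W_bar = normalize_adjacency(W)
    X = X0.copy()
    for _ in range(n_iter):
        X_new = theta * (W_bar @ X) + (1.0 - theta) * X0
        if np.abs(X_new - X).max() < tol:
            return X_new
        X = X_new
    return X

def bicolored_network(W_l, W_g, Y):
    """Assemble Eq. (2): C = [[W_l, Y], [Y^T, W_g]], with the lncRNA block first."""
    return np.block([[W_l, Y], [Y.T, W_g]])

# Toy example: sizes and values are made up purely for illustration.
rng = np.random.default_rng(0)
n_l, n_g, n_o = 5, 8, 3                                   # lncRNAs, genes, GO terms
W_l = rng.random((n_l, n_l)); W_l = (W_l + W_l.T) / 2     # symmetric co-expression weights
W_g = rng.random((n_g, n_g)); W_g = (W_g + W_g.T) / 2     # symmetric PPI weights
np.fill_diagonal(W_l, 0.0); np.fill_diagonal(W_g, 0.0)
Y = (rng.random((n_l, n_g)) > 0.7).astype(float)          # lncRNA-gene associations
X_gene = (rng.random((n_g, n_o)) > 0.6).astype(float)     # known gene-GO annotations
X0 = np.vstack([np.zeros((n_l, n_o)), X_gene])            # lncRNA rows start unlabeled

C = bicolored_network(W_l, W_g, Y)
X_hat = label_propagation(C, X0, theta=0.8)
lncRNA_scores = X_hat[:n_l]   # predicted lncRNA-GO association scores, to be ranked or thresholded
print(np.round(lncRNA_scores, 3))
```

A closed-form solution of Equations (1)/(3) also exists, $\hat{X} = (1-\theta)(I - \theta \bar{W})^{-1} X$, but the iterative form above avoids the explicit matrix inverse, which matters for large bi-colored networks.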
[ "29140524", "26041786", "23132350", "19182780", "28241135", "25599403", "25707511", "10548103", "15173114", "26748401", "20215462", "21247874", "30349036", "28501295", "19188922", "23463315", "24776770", "24463510", "24608367", "25392420", "24840979", "20090828", "30250064", "27071099", "29701681", "29293953", "19144990", "24824789", "28534780", "30224759", "26134276" ]
[ { "pmid": "29140524", "title": "NONCODEV5: a comprehensive annotation database for long non-coding RNAs.", "abstract": "NONCODE (http://www.bioinfo.org/noncode/) is a systematic database that is dedicated to presenting the most complete collection and annotation of non-coding RNAs (ncRNAs), especially long non-coding RNAs (lncRNAs). Since NONCODE 2016 was released two years ago, the amount of novel identified ncRNAs has been enlarged by the reduced cost of next-generation sequencing, which has produced an explosion of newly identified data. The third-generation sequencing revolution has also offered longer and more accurate annotations. Moreover, accumulating evidence confirmed by biological experiments has provided more comprehensive knowledge of lncRNA functions. The ncRNA data set was expanded by collecting newly identified ncRNAs from literature published over the past two years and integration of the latest versions of RefSeq and Ensembl. Additionally, pig was included in the database for the first time, bringing the total number of species to 17. The number of lncRNAs in NONCODEv5 increased from 527 336 to 548 640. NONCODEv5 also introduced three important new features: (i) human lncRNA-disease relationships and single nucleotide polymorphism-lncRNA-disease relationships were constructed; (ii) human exosome lncRNA expression profiles were displayed; (iii) the RNA secondary structures of NONCODE human transcripts were predicted. NONCODEv5 is also accessible through http://www.noncode.org/." }, { "pmid": "26041786", "title": "Revealing protein-lncRNA interaction.", "abstract": "Long non-coding RNAs (lncRNAs) are associated to a plethora of cellular functions, most of which require the interaction with one or more RNA-binding proteins (RBPs); similarly, RBPs are often able to bind a large number of different RNAs. The currently available knowledge is already drawing an intricate network of interactions, whose deregulation is frequently associated to pathological states. Several different techniques were developed in the past years to obtain protein-RNA binding data in a high-throughput fashion. In parallel, in silico inference methods were developed for the accurate computational prediction of the interaction of RBP-lncRNA pairs. The field is growing rapidly, and it is foreseeable that in the near future, the protein-lncRNA interaction network will rise, offering essential clues for a better understanding of lncRNA cellular mechanisms and their disease-associated perturbations." }, { "pmid": "23132350", "title": "Long non-coding RNAs function annotation: a global prediction method based on bi-colored networks.", "abstract": "More and more evidences demonstrate that the long non-coding RNAs (lncRNAs) play many key roles in diverse biological processes. There is a critical need to annotate the functions of increasing available lncRNAs. In this article, we try to apply a global network-based strategy to tackle this issue for the first time. We develop a bi-colored network based global function predictor, long non-coding RNA global function predictor ('lnc-GFP'), to predict probable functions for lncRNAs at large scale by integrating gene expression data and protein interaction data. The performance of lnc-GFP is evaluated on protein-coding and lncRNA genes. Cross-validation tests on protein-coding genes with known function annotations indicate that our method can achieve a precision up to 95%, with a suitable parameter setting. 
Among the 1713 lncRNAs in the bi-colored network, the 1625 (94.9%) lncRNAs in the maximum connected component are all functionally characterized. For the lncRNAs expressed in mouse embryo stem cells and neuronal cells, the inferred putative functions by our method highly match those in the known literature." }, { "pmid": "19182780", "title": "Chromatin signature reveals over a thousand highly conserved large non-coding RNAs in mammals.", "abstract": "There is growing recognition that mammalian cells produce many thousands of large intergenic transcripts. However, the functional significance of these transcripts has been particularly controversial. Although there are some well-characterized examples, most (>95%) show little evidence of evolutionary conservation and have been suggested to represent transcriptional noise. Here we report a new approach to identifying large non-coding RNAs using chromatin-state maps to discover discrete transcriptional units intervening known protein-coding loci. Our approach identified approximately 1,600 large multi-exonic RNAs across four mouse cell types. In sharp contrast to previous collections, these large intervening non-coding RNAs (lincRNAs) show strong purifying selection in their genomic loci, exonic sequences and promoter regions, with greater than 95% showing clear evolutionary conservation. We also developed a functional genomics approach that assigns putative functions to each lincRNA, demonstrating a diverse range of roles for lincRNAs in processes from embryonic stem cell pluripotency to cell proliferation. We obtained independent functional validation for the predictions for over 100 lincRNAs, using cell-based assays. In particular, we demonstrate that specific lincRNAs are transcriptionally regulated by key transcription factors in these processes such as p53, NFkappaB, Sox2, Oct4 (also known as Pou5f1) and Nanog. Together, these results define a unique collection of functional lincRNAs that are highly conserved and implicated in diverse biological processes." }, { "pmid": "28241135", "title": "An atlas of human long non-coding RNAs with accurate 5' ends.", "abstract": "Long non-coding RNAs (lncRNAs) are largely heterogeneous and functionally uncharacterized. Here, using FANTOM5 cap analysis of gene expression (CAGE) data, we integrate multiple transcript collections to generate a comprehensive atlas of 27,919 human lncRNA genes with high-confidence 5' ends and expression profiles across 1,829 samples from the major human primary cell types and tissues. Genomic and epigenomic classification of these lncRNAs reveals that most intergenic lncRNAs originate from enhancers rather than from promoters. Incorporating genetic and expression data, we show that lncRNAs overlapping trait-associated single nucleotide polymorphisms are specifically expressed in cell types relevant to the traits, implicating these lncRNAs in multiple diseases. We further demonstrate that lncRNAs overlapping expression quantitative trait loci (eQTL)-associated single nucleotide polymorphisms of messenger RNAs are co-expressed with the corresponding messenger RNAs, suggesting their potential roles in transcriptional regulation. Combining these findings with conservation data, we identify 19,175 potentially functional lncRNAs in the human genome." 
}, { "pmid": "25599403", "title": "The landscape of long noncoding RNAs in the human transcriptome.", "abstract": "Long noncoding RNAs (lncRNAs) are emerging as important regulators of tissue physiology and disease processes including cancer. To delineate genome-wide lncRNA expression, we curated 7,256 RNA sequencing (RNA-seq) libraries from tumors, normal tissues and cell lines comprising over 43 Tb of sequence from 25 independent studies. We applied ab initio assembly methodology to this data set, yielding a consensus human transcriptome of 91,013 expressed genes. Over 68% (58,648) of genes were classified as lncRNAs, of which 79% were previously unannotated. About 1% (597) of the lncRNAs harbored ultraconserved elements, and 7% (3,900) overlapped disease-associated SNPs. To prioritize lineage-specific, disease-associated lncRNA expression, we employed non-parametric differential expression testing and nominated 7,942 lineage- or cancer-associated lncRNA genes. The lncRNA landscape characterized here may shed light on normal biology and cancer pathogenesis and may be valuable for future biomarker development." }, { "pmid": "25707511", "title": "LncRNA2Function: a comprehensive resource for functional investigation of human lncRNAs based on RNA-seq data.", "abstract": "BACKGROUND\nThe GENCODE project has collected over 10,000 human long non-coding RNA (lncRNA) genes. However, the vast majority of them remain to be functionally characterized. Computational investigation of potential functions of human lncRNA genes is helpful to guide further experimental studies on lncRNAs.\n\n\nRESULTS\nIn this study, based on expression correlation between lncRNAs and protein-coding genes across 19 human normal tissues, we used the hypergeometric test to functionally annotate a single lncRNA or a set of lncRNAs with significantly enriched functional terms among the protein-coding genes that are significantly co-expressed with the lncRNA(s). The functional terms include all nodes in the Gene Ontology (GO) and 4,380 human biological pathways collected from 12 pathway databases. We successfully mapped 9,625 human lncRNA genes to GO terms and biological pathways, and then developed the first ontology-driven user-friendly web interface named lncRNA2Function, which enables researchers to browse the lncRNAs associated with a specific functional term, the functional terms associated with a specific lncRNA, or to assign functional terms to a set of human lncRNA genes, such as a cluster of co-expressed lncRNAs. The lncRNA2Function is freely available at http://mlg.hit.edu.cn/lncrna2function.\n\n\nCONCLUSIONS\nThe LncRNA2Function is an important resource for further investigating the functions of a single human lncRNA, or functionally annotating a set of human lncRNAs of interest." }, { "pmid": "10548103", "title": "Learning the parts of objects by non-negative matrix factorization.", "abstract": "Is perception of the whole based on perception of its parts? There is psychological and physiological evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. 
Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign." }, { "pmid": "15173114", "title": "Coexpression analysis of human genes across many microarray data sets.", "abstract": "We present a large-scale analysis of mRNA coexpression based on 60 large human data sets containing a total of 3924 microarrays. We sought pairs of genes that were reliably coexpressed (based on the correlation of their expression profiles) in multiple data sets, establishing a high-confidence network of 8805 genes connected by 220,649 \"coexpression links\" that are observed in at least three data sets. Confirmed positive correlations between genes were much more common than confirmed negative correlations. We show that confirmation of coexpression in multiple data sets is correlated with functional relatedness, and show how cluster analysis of the network can reveal functionally coherent groups of genes. Our findings demonstrate how the large body of accumulated microarray data can be exploited to increase the reliability of inferences about gene function." }, { "pmid": "26748401", "title": "LncRNA TUG1 acts as a tumor suppressor in human glioma by promoting cell apoptosis.", "abstract": "Previous studies have revealed multiple functional roles of long non-coding RNA taurine upregulated gene 1 in different types of malignant tumors, except for human glioma. Here, it was designed to study the potential function of taurine upregulated gene 1 in glioma pathogenesis focusing on its regulation on cell apoptosis. The expression of taurine upregulated gene 1 in glioma tissues was detected by quantitative RT-PCR and compared with that in adjacent normal tissues. Further correlation analysis was conducted to show the relationship between taurine upregulated gene 1 expression and different clinicopathologic parameters. Functional studies were performed to investigate the influence of taurine upregulated gene 1 on apoptosis and cell proliferation by using Annexin V/PI staining and cell counting kit-8 assays, respectively. And, caspase activation and Bcl-2 expression were analyzed to explore taurine upregulated gene 1-induced mechanism. taurine upregulated gene 1 expression was significantly inhibited in glioma and showed significant correlation with WHO Grade, tumor size and overall survival. Further experiments revealed that the dysregulation of taurine upregulated gene 1 affected the apoptosis and cell proliferation of glioma cells. Moreover, taurine upregulated gene 1 could induce the activation of caspase-3 and-9, with inhibited expression of Bcl-2, implying the mechanism in taurine upregulated gene 1-induced apoptosis. taurine upregulated gene 1 promoted cell apoptosis of glioma cells by activating caspase-3 and -9-mediated intrinsic pathways and inhibiting Bcl-2-mediated anti-apoptotic pathways, acting as a tumor suppressor in human glioma. This study provided new insights for the function of taurine upregulated gene 1 in cancer biology, and suggested a potent application of taurine upregulated gene 1 overexpression for glioma therapy." 
}, { "pmid": "20215462", "title": "Genome-wide inferring gene-phenotype relationship by walking on the heterogeneous network.", "abstract": "MOTIVATION\nClinical diseases are characterized by distinct phenotypes. To identify disease genes is to elucidate the gene-phenotype relationships. Mutations in functionally related genes may result in similar phenotypes. It is reasonable to predict disease-causing genes by integrating phenotypic data and genomic data. Some genetic diseases are genetically or phenotypically similar. They may share the common pathogenetic mechanisms. Identifying the relationship between diseases will facilitate better understanding of the pathogenetic mechanism of diseases.\n\n\nRESULTS\nIn this article, we constructed a heterogeneous network by connecting the gene network and phenotype network using the phenotype-gene relationship information from the OMIM database. We extended the random walk with restart algorithm to the heterogeneous network. The algorithm prioritizes the genes and phenotypes simultaneously. We use leave-one-out cross-validation to evaluate the ability of finding the gene-phenotype relationship. Results showed improved performance than previous works. We also used the algorithm to disclose hidden disease associations that cannot be found by gene network or phenotype network alone. We identified 18 hidden disease associations, most of which were supported by literature evidence.\n\n\nAVAILABILITY\nThe MATLAB code of the program is available at http://www3.ntu.edu.sg/home/aspatra/research/Yongjin_BI2010.zip." }, { "pmid": "21247874", "title": "Large-scale prediction of long non-coding RNA functions in a coding-non-coding gene co-expression network.", "abstract": "Although accumulating evidence has provided insight into the various functions of long-non-coding RNAs (lncRNAs), the exact functions of the majority of such transcripts are still unknown. Here, we report the first computational annotation of lncRNA functions based on public microarray expression profiles. A coding-non-coding gene co-expression (CNC) network was constructed from re-annotated Affymetrix Mouse Genome Array data. Probable functions for altogether 340 lncRNAs were predicted based on topological or other network characteristics, such as module sharing, association with network hubs and combinations of co-expression and genomic adjacency. The functions annotated to the lncRNAs mainly involve organ or tissue development (e.g. neuron, eye and muscle development), cellular transport (e.g. neuronal transport and sodium ion, acid or lipid transport) or metabolic processes (e.g. involving macromolecules, phosphocreatine and tyrosine)." }, { "pmid": "30349036", "title": "Long non-coding RNA-dependent mechanism to regulate heme biosynthesis and erythrocyte development.", "abstract": "In addition to serving as a prosthetic group for enzymes and a hemoglobin structural component, heme is a crucial homeostatic regulator of erythroid cell development and function. While lncRNAs modulate diverse physiological and pathological cellular processes, their involvement in heme-dependent mechanisms is largely unexplored. In this study, we elucidated a lncRNA (UCA1)-mediated mechanism that regulates heme metabolism in human erythroid cells. We discovered that UCA1 expression is dynamically regulated during human erythroid maturation, with a maximal expression in proerythroblasts. UCA1 depletion predominantly impairs heme biosynthesis and arrests erythroid differentiation at the proerythroblast stage. 
Mechanistic analysis revealed that UCA1 physically interacts with the RNA-binding protein PTBP1, and UCA1 functions as an RNA scaffold to recruit PTBP1 to ALAS2 mRNA, which stabilizes ALAS2 mRNA. These results define a lncRNA-mediated posttranscriptional mechanism that provides a new dimension into how the fundamental heme biosynthetic process is regulated as a determinant of erythrocyte development." }, { "pmid": "28501295", "title": "Discovering DNA methylation patterns for long non-coding RNAs associated with cancer subtypes.", "abstract": "Despite growing evidence demonstrates that the long non-coding ribonucleic acids (lncRNAs) are critical modulators for cancers, the knowledge about the DNA methylation patterns of lncRNAs is quite limited. We develop a systematic analysis pipeline to discover DNA methylation patterns for lncRNAs across multiple cancer subtypes from probe, gene and network levels. By using The Cancer Genome Atlas (TCGA) breast cancer methylation data, the pipeline discovers various DNA methylation patterns for lncRNAs across four major subtypes such as luminal A, luminal B, her2-enriched as well as basal-like. On the probe and gene level, we find that both differentially methylated probes and lncRNAs are subtype specific, while the lncRNAs are not as specific as probes. On the network level, the pipeline constructs differential co-methylation lncRNA network for each subtype. Then, it identifies both subtype specific and common lncRNA modules by simultaneously analyzing multiple networks. We show that the lncRNAs in subtype specific and common modules differ greatly in terms of topological structure, sequence conservation as well as expression. Furthermore, the subtype specific lncRNA modules serve as biomarkers to improve significantly the accuracy of breast cancer subtypes prediction. Finally, the common lncRNA modules associate with survival time of patients, which is critical for cancer therapy." }, { "pmid": "19188922", "title": "Long non-coding RNAs: insights into functions.", "abstract": "In mammals and other eukaryotes most of the genome is transcribed in a developmentally regulated manner to produce large numbers of long non-coding RNAs (ncRNAs). Here we review the rapidly advancing field of long ncRNAs, describing their conservation, their organization in the genome and their roles in gene regulation. We also consider the medical implications, and the emerging recognition that any transcript, regardless of coding potential, can have an intrinsic function as an RNA." }, { "pmid": "23463315", "title": "Structure and function of long noncoding RNAs in epigenetic regulation.", "abstract": "Genomes of complex organisms encode an abundance and diversity of long noncoding RNAs (lncRNAs) that are expressed throughout the cell and fulfill a wide variety of regulatory roles at almost every stage of gene expression. These roles, which encompass sensory, guiding, scaffolding and allosteric capacities, derive from folded modular domains in lncRNAs. In this diverse functional repertoire, we focus on the well-characterized ability for lncRNAs to function as epigenetic modulators. Many lncRNAs bind to chromatin-modifying proteins and recruit their catalytic activity to specific sites in the genome, thereby modulating chromatin states and impacting gene expression. Considering this regulatory potential in combination with the abundance of lncRNAs suggests that lncRNAs may be part of a broad epigenetic regulatory network." 
}, { "pmid": "24776770", "title": "The rise of regulatory RNA.", "abstract": "Discoveries over the past decade portend a paradigm shift in molecular biology. Evidence suggests that RNA is not only functional as a messenger between DNA and protein but also involved in the regulation of genome organization and gene expression, which is increasingly elaborate in complex organisms. Regulatory RNA seems to operate at many levels; in particular, it plays an important part in the epigenetic processes that control differentiation and development. These discoveries suggest a central role for RNA in human evolution and ontogeny. Here, we review the emergence of the previously unsuspected world of regulatory RNA from a historical perspective." }, { "pmid": "24463510", "title": "The evolution of lncRNA repertoires and expression patterns in tetrapods.", "abstract": "Only a very small fraction of long noncoding RNAs (lncRNAs) are well characterized. The evolutionary history of lncRNAs can provide insights into their functionality, but the absence of lncRNA annotations in non-model organisms has precluded comparative analyses. Here we present a large-scale evolutionary study of lncRNA repertoires and expression patterns, in 11 tetrapod species. We identify approximately 11,000 primate-specific lncRNAs and 2,500 highly conserved lncRNAs, including approximately 400 genes that are likely to have originated more than 300 million years ago. We find that lncRNAs, in particular ancient ones, are in general actively regulated and may function predominantly in embryonic development. Most lncRNAs evolve rapidly in terms of sequence and expression levels, but tissue specificities are often conserved. We compared expression patterns of homologous lncRNA and protein-coding families across tetrapods to reconstruct an evolutionarily conserved co-expression network. This network suggests potential functions for lncRNAs in fundamental processes such as spermatogenesis and synaptic transmission, but also in more specific mechanisms such as placenta development through microRNA production." }, { "pmid": "24608367", "title": "Ty3 reverse transcriptase complexed with an RNA-DNA hybrid shows structural and functional asymmetry.", "abstract": "Retrotransposons are a class of mobile genetic elements that replicate by converting their single-stranded RNA intermediate to double-stranded DNA through the combined DNA polymerase and ribonuclease H (RNase H) activities of the element-encoded reverse transcriptase (RT). Although a wealth of structural information is available for lentiviral and gammaretroviral RTs, equivalent studies on counterpart enzymes of long terminal repeat (LTR)-containing retrotransposons, from which they are evolutionarily derived, is lacking. In this study, we report the first crystal structure of a complex of RT from the Saccharomyces cerevisiae LTR retrotransposon Ty3 in the presence of its polypurine tract-containing RNA-DNA hybrid. In contrast to its retroviral counterparts, Ty3 RT adopts an asymmetric homodimeric architecture whose assembly is substrate dependent. Moreover, our structure and biochemical data suggest that the RNase H and DNA polymerase activities are contributed by individual subunits of the homodimer." 
}, { "pmid": "25392420", "title": "COXPRESdb in 2015: coexpression database for animal species by DNA-microarray and RNAseq-based expression data with multiple quality assessment systems.", "abstract": "The COXPRESdb (http://coxpresdb.jp) provides gene coexpression relationships for animal species. Here, we report the updates of the database, mainly focusing on the following two points. For the first point, we added RNAseq-based gene coexpression data for three species (human, mouse and fly), and largely increased the number of microarray experiments to nine species. The increase of the number of expression data with multiple platforms could enhance the reliability of coexpression data. For the second point, we refined the data assessment procedures, for each coexpressed gene list and for the total performance of a platform. The assessment of coexpressed gene list now uses more reasonable P-values derived from platform-specific null distribution. These developments greatly reduced pseudo-predictions for directly associated genes, thus expanding the reliability of coexpression data to design new experiments and to discuss experimental results." }, { "pmid": "24840979", "title": "Noncoding RNA and its associated proteins as regulatory elements of the immune system.", "abstract": "The rapid changes in gene expression that accompany developmental transitions, stress responses and proliferation are controlled by signal-mediated coordination of transcriptional and post-transcriptional mechanisms. In recent years, understanding of the mechanics of these processes and the contexts in which they are employed during hematopoiesis and immune challenge has increased. An important aspect of this progress is recognition of the importance of RNA-binding proteins and noncoding RNAs. These have roles in the development and function of the immune system and in pathogen life cycles, and they represent an important aspect of intracellular immunity." }, { "pmid": "20090828", "title": "Associating genes and protein complexes with disease via network propagation.", "abstract": "A fundamental challenge in human health is the identification of disease-causing genes. Recently, several studies have tackled this challenge via a network-based approach, motivated by the observation that genes causing the same or similar diseases tend to lie close to one another in a network of protein-protein or functional interactions. However, most of these approaches use only local network information in the inference process and are restricted to inferring single gene associations. Here, we provide a global, network-based method for prioritizing disease genes and inferring protein complex associations, which we call PRINCE. The method is based on formulating constraints on the prioritization function that relate to its smoothness over the network and usage of prior information. We exploit this function to predict not only genes but also protein complex associations with a disease of interest. We test our method on gene-disease association data, evaluating both the prioritization achieved and the protein complexes inferred. We show that our method outperforms extant approaches in both tasks. Using data on 1,369 diseases from the OMIM knowledgebase, our method is able (in a cross validation setting) to rank the true causal gene first for 34% of the diseases, and infer 139 disease-related complexes that are highly coherent in terms of the function, expression and conservation of their member proteins. 
Importantly, we apply our method to study three multi-factorial diseases for which some causal genes have been found already: prostate cancer, alzheimer and type 2 diabetes mellitus. PRINCE's predictions for these diseases highly match the known literature, suggesting several novel causal genes and protein complexes for further investigation." }, { "pmid": "30250064", "title": "Genome-wide screening of NEAT1 regulators reveals cross-regulation between paraspeckles and mitochondria.", "abstract": "The long noncoding RNA NEAT1 (nuclear enriched abundant transcript 1) nucleates the formation of paraspeckles, which constitute a type of nuclear body with multiple roles in gene expression. Here we identify NEAT1 regulators using an endogenous NEAT1 promoter-driven enhanced green fluorescent protein reporter in human cells coupled with genome-wide RNAi screens. The screens unexpectedly yield gene candidates involved in mitochondrial functions as essential regulators of NEAT1 expression and paraspeckle formation. Depletion of mitochondrial proteins and treatment of mitochondrial stressors both lead to aberrant NEAT1 expression via ATF2 as well as altered morphology and numbers of paraspeckles. These changes result in enhanced retention of mRNAs of nuclear-encoded mitochondrial proteins (mito-mRNAs) in paraspeckles. Correspondingly, NEAT1 depletion has profound effects on mitochondrial dynamics and function by altering the sequestration of mito-mRNAs in paraspeckles. Overall, our data provide a rich resource for understanding NEAT1 and paraspeckle regulation, and reveal a cross-regulation between paraspeckles and mitochondria." }, { "pmid": "27071099", "title": "Stability-driven nonnegative matrix factorization to interpret spatial gene expression and build local gene networks.", "abstract": "Spatial gene expression patterns enable the detection of local covariability and are extremely useful for identifying local gene interactions during normal development. The abundance of spatial expression data in recent years has led to the modeling and analysis of regulatory networks. The inherent complexity of such data makes it a challenge to extract biological information. We developed staNMF, a method that combines a scalable implementation of nonnegative matrix factorization (NMF) with a new stability-driven model selection criterion. When applied to a set ofDrosophilaearly embryonic spatial gene expression images, one of the largest datasets of its kind, staNMF identified 21 principal patterns (PP). Providing a compact yet biologically interpretable representation ofDrosophilaexpression patterns, PP are comparable to a fate map generated experimentally by laser ablation and show exceptional promise as a data-driven alternative to manual annotations. Our analysis mapped genes to cell-fate programs and assigned putative biological roles to uncharacterized genes. Finally, we used the PP to generate local transcription factor regulatory networks. Spatially local correlation networks were constructed for six PP that span along the embryonic anterior-posterior axis. Using a two-tail 5% cutoff on correlation, we reproduced 10 of the 11 links in the well-studied gap gene network. The performance of PP with theDrosophiladata suggests that staNMF provides informative decompositions and constitutes a useful computational lens through which to extract biological insight from complex and often noisy gene expression data." 
}, { "pmid": "29701681", "title": "Regularized Multi-View Subspace Clustering for Common Modules Across Cancer Stages.", "abstract": "Discovering the common modules that are co-expressed across various stages can lead to an improved understanding of the underlying molecular mechanisms of cancers. There is a shortage of efficient tools for integrative analysis of gene expression and protein interaction networks for discovering common modules associated with cancer progression. To address this issue, we propose a novel regularized multi-view subspace clustering (rMV-spc) algorithm to obtain a representation matrix for each stage and a joint representation matrix that balances the agreement across various stages. To avoid the heterogeneity of data, the protein interaction network is incorporated into the objective of rMV-spc via regularization. Based on the interior point algorithm, we solve the optimization problem to obtain the common modules. By using artificial networks, we demonstrate that the proposed algorithm outperforms state-of-the-art methods in terms of accuracy. Furthermore, the rMV-spc discovers common modules in breast cancer networks based on the breast data, and these modules serve as biomarkers to predict stages of breast cancer. The proposed model and algorithm effectively integrate heterogeneous data for dynamic modules." }, { "pmid": "29293953", "title": "Ontological function annotation of long non-coding RNAs through hierarchical multi-label classification.", "abstract": "Motivation\nLong non-coding RNAs (lncRNAs) are an enormous collection of functional non-coding RNAs. Over the past decades, a large number of novel lncRNA genes have been identified. However, most of the lncRNAs remain function uncharacterized at present. Computational approaches provide a new insight to understand the potential functional implications of lncRNAs.\n\n\nResults\nConsidering that each lncRNA may have multiple functions and a function may be further specialized into sub-functions, here we describe NeuraNetL2GO, a computational ontological function prediction approach for lncRNAs using hierarchical multi-label classification strategy based on multiple neural networks. The neural networks are incrementally trained level by level, each performing the prediction of gene ontology (GO) terms belonging to a given level. In NeuraNetL2GO, we use topological features of the lncRNA similarity network as the input of the neural networks and employ the output results to annotate the lncRNAs. We show that NeuraNetL2GO achieves the best performance and the overall advantage in maximum F-measure and coverage on the manually annotated lncRNA2GO-55 dataset compared to other state-of-the-art methods.\n\n\nAvailability and implementation\nThe source code and data are available at http://denglab.org/NeuraNetL2GO/.\n\n\nContact\[email protected].\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online." }, { "pmid": "19144990", "title": "A myelopoiesis-associated regulatory intergenic noncoding RNA transcript within the human HOXA cluster.", "abstract": "We have identified an intergenic transcriptional activity that is located between the human HOXA1 and HOXA2 genes, shows myeloid-specific expression, and is up-regulated during granulocytic differentiation. The novel gene, termed HOTAIRM1 (HOX antisense intergenic RNA myeloid 1), is transcribed antisense to the HOXA genes and originates from the same CpG island that embeds the start site of HOXA1. 
The transcript appears to be a noncoding RNA containing no long open-reading frame; sucrose gradient analysis shows no association with polyribosomal fractions. HOTAIRM1 is the most prominent intergenic transcript expressed and up-regulated during induced granulocytic differentiation of NB4 promyelocytic leukemia and normal human hematopoietic cells; its expression is specific to the myeloid lineage. Its induction during retinoic acid (RA)-driven granulocytic differentiation is through RA receptor and may depend on the expression of myeloid cell development factors targeted by RA signaling. Knockdown of HOTAIRM1 quantitatively blunted RA-induced expression of HOXA1 and HOXA4 during the myeloid differentiation of NB4 cells, and selectively attenuated induction of transcripts for the myeloid differentiation genes CD11b and CD18, but did not noticeably impact the more distal HOXA genes. These findings suggest that HOTAIRM1 plays a role in the myelopoiesis through modulation of gene expression in the HOXA cluster." }, { "pmid": "24824789", "title": "Long intergenic non-coding RNA HOTAIRM1 regulates cell cycle progression during myeloid maturation in NB4 human promyelocytic leukemia cells.", "abstract": "HOTAIRM1 is a long intergenic non-coding RNA encoded in the human HOXA gene cluster, with gene expression highly specific for maturing myeloid cells. Knockdown of HOTAIRM1 in the NB4 acute promyelocytic leukemia cell line retarded all-trans retinoid acid (ATRA)-induced granulocytic differentiation, resulting in a significantly larger population of immature and proliferating cells that maintained cell cycle progression from G1 to S phases. Correspondingly, HOTAIRM1 knockdown resulted in retained expression of many otherwise ATRA-suppressed cell cycle and DNA replication genes, and abated ATRA induction of cell surface leukocyte activation, defense response, and other maturation-related genes. Resistance to ATRA-induced cell cycle arrest at the G1/S phase transition in knockdown cells was accompanied by retained expression of ITGA4 (CD49d) and decreased induction of ITGAX (CD11c). The coupling of cell cycle progression with temporal dynamics in the expression patterns of these integrin genes suggests a regulated switch to control the transit from the proliferative phase to granulocytic maturation. Furthermore, ITGAX was among a small number of genes showing perturbation in transcript levels upon HOTAIRM1 knockdown even without ATRA treatment, suggesting a direct pathway of regulation. These results indicate that HOTAIRM1 provides a regulatory link in myeloid maturation by modulating integrin-controlled cell cycle progression at the gene expression level." }, { "pmid": "28534780", "title": "KATZLGO: Large-Scale Prediction of LncRNA Functions by Using the KATZ Measure Based on Multiple Networks.", "abstract": "Aggregating evidences have shown that long non-coding RNAs (lncRNAs) generally play key roles in cellular biological processes such as epigenetic regulation, gene expression regulation at transcriptional and post-transcriptional levels, cell differentiation, and others. However, most lncRNAs have not been functionally characterized. There is an urgent need to develop computational approaches for function annotation of increasing available lncRNAs. In this article, we propose a global network-based method, KATZLGO, to predict the functions of human lncRNAs at large scale. 
A global network is constructed by integrating three heterogeneous networks: lncRNA-lncRNA similarity network, lncRNA-protein association network, and protein-protein interaction network. The KATZ measure is then employed to calculate similarities between lncRNAs and proteins in the global network. We annotate lncRNAs with Gene Ontology (GO) terms of their neighboring protein-coding genes based on the KATZ similarity scores. The performance of KATZLGO is evaluated on a manually annotated lncRNA benchmark and a protein-coding gene benchmark with known function annotations. KATZLGO significantly outperforms state-of-the-art computational method both in maximum F-measure and coverage. Furthermore, we apply KATZLGO to predict functions of human lncRNAs and successfully map 12,318 human lncRNA genes to GO terms." }, { "pmid": "30224759", "title": "LncGata6 maintains stemness of intestinal stem cells and promotes intestinal tumorigenesis.", "abstract": "The intestinal epithelium harbours remarkable self-renewal capacity that is driven by Lgr5+ intestinal stem cells (ISCs) at the crypt base. However, the molecular mechanism controlling Lgr5+ ISC stemness is incompletely understood. We show that a Gata6 long noncoding RNA (lncGata6) is highly expressed in ISCs. LncGata6 knockout or conditional knockout in ISCs impairs the stemness of ISCs and epithelial regeneration. Mechanistically, lncGata6 recruits the NURF complex onto the Ehf promoter to induce its transcription, which promotes the expression of Lgr4/5 to enhance Wnt signalling activation. Moreover, the human orthologue lncGATA6 is highly expressed in the cancer stem cells of colorectal cancer and promotes tumour initiation and progression. Antisense oligonucleotides against lncGATA6 exhibit strong therapeutic efficacy on colorectal cancer. Thus, targeting lncGATA6 will have potential clinical applications in colorectal cancer treatment as an ideal therapeutic target." }, { "pmid": "26134276", "title": "Similarity computation strategies in the microRNA-disease network: a survey.", "abstract": "Various microRNAs have been demonstrated to play roles in a number of human diseases. Several microRNA-disease network reconstruction methods have been used to describe the association from a systems biology perspective. The key problem for the network is the similarity computation model. In this article, we reviewed the main similarity computation methods and discussed these methods and future works. This survey may prompt and guide systems biology and bioinformatics researchers to build more perfect microRNA-disease associations and may make the network relationship clear for medical researchers." } ]
Scientific Reports
30696866
PMC6351532
10.1038/s41598-018-37257-4
Epithelium segmentation using deep learning in H&E-stained prostate specimens with immunohistochemistry as reference standard
Given the importance of gland morphology in grading prostate cancer (PCa), automatically differentiating between epithelium and other tissues is an important prerequisite for the development of automated methods for detecting PCa. We propose a new deep learning method to segment epithelial tissue in digitised hematoxylin and eosin (H&E) stained prostatectomy slides, using immunohistochemistry (IHC) as the reference standard. Compared with manual outlining on H&E slides, IHC provides a more precise and objective ground truth, especially in areas with high-grade PCa. In total, 102 tissue sections were stained with H&E and subsequently restained with P63 and CK8/18 IHC markers to highlight epithelial structures; afterwards, each H&E/IHC pair was co-registered. First, we trained a U-Net to segment epithelial structures in IHC using a subset of the IHC slides that were preprocessed with color deconvolution. Second, this network was applied to the remaining slides to create the reference standard used to train a second U-Net on H&E. Our system accurately segmented both intact glands and individual tumour epithelial cells. The generalisation capacity of our system is shown using an independent external dataset from a different centre. We envision this segmentation as the first part of a fully automated prostate cancer grading pipeline.
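As an aside on the colour-deconvolution preprocessing mentioned in the abstract above, the following is a minimal sketch, assuming scikit-image, of how an RGB tile can be separated into haematoxylin/eosin/DAB stain channels before being fed to a segmentation network. It is an illustration only, not the authors' pipeline; the tile path is a hypothetical placeholder.

```python
# Minimal colour-deconvolution sketch (illustrative, not the paper's code).
import numpy as np
from skimage import io
from skimage.color import rgb2hed

rgb = io.imread("ihc_tile.png")[..., :3]   # RGB tile cropped from a whole-slide image (hypothetical path)
hed = rgb2hed(rgb)                         # stain space: channel 0 = haematoxylin, 1 = eosin, 2 = DAB

dab = hed[..., 2]                          # DAB channel highlights IHC-positive epithelium
dab_norm = (dab - dab.min()) / (np.ptp(dab) + 1e-8)   # rescale to [0, 1] for use as a network input

print(rgb.shape, float(dab_norm.min()), float(dab_norm.max()))
```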
Related Work
Existing research on segmenting epithelial tissue has shown promise in PCa specimens. Gertych et al. [8] used a support vector machine to distinguish between stroma and epithelial glands and applied it to a dataset of 20 patients containing specimens of Gleason grade 3 and 4. Hand-crafted features, based on the intensity and spatial relationship of pixels, were derived from H&E specimens that had been preprocessed using color deconvolution. Naik et al. [9] employed Bayesian classifiers to segment glands, relying on the presence of lumina in the glands. Their segmentation was applied to Gleason grade 3 and 4 and benign tissue samples, but not to the less common yet more aggressive pattern 5. Gleason grade 5 can present as single-cell strands or nests, or as solid sheets (with or without central necrosis) of malignant cells with no or minimal lumen formation, which could obviously hinder a segmentation method that relies on the presence of lumina. Singh et al. [10] employed a multi-step approach based on logistic regression to segment epithelium, distinguishing between glands, lumen, peri-acinar retraction clefting and stroma. Both Gertych et al. [8] and Naik et al. [9] used the segmentation results as a first step towards automated Gleason grading.
Advances in deep learning have resulted in new methods for performing segmentation. Deep learning methods generally outperform hand-crafted features on segmentation tasks in digital pathology, for example on H&E- and IHC-stained breast and colon tissue specimens [11]. On the dataset from Gertych et al. [8], Li et al. [12] showed a clear performance increase when using deep learning models to segment PCa compared with classical machine learning methods. Deep learning methods also show good performance on gland segmentation, for example in colorectal tissue [13].
Previously, we performed a pilot study on epithelium segmentation comparing U-Net with regular fully convolutional networks, using 30 radical prostatectomy slides and a small, manually annotated test set [14]. We achieved the best segmentation performance with a 4-layer-deep U-Net, but found that the performance of our network was capped by errors in the reference standard. Moreover, the low number of samples, in particular the few high-grade PCa specimens, limited the applicability to daily practice.
Most of the existing studies on epithelium segmentation in prostate tissue suffer from small datasets or focus on a subset of the occurring grades. In this paper we did not exclude any Gleason grades or gland morphologies.
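To make the U-Net-style architectures discussed above concrete, here is a compact PyTorch encoder–decoder with skip connections. It is a toy sketch under arbitrary assumptions (channel widths, depth and input size), not the 4-layer network trained in the paper.

```python
# Toy U-Net-style model for binary epithelium segmentation (illustrative only).
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=1):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2, self.dec2 = nn.ConvTranspose2d(128, 64, 2, stride=2), block(128, 64)
        self.up1, self.dec1 = nn.ConvTranspose2d(64, 32, 2, stride=2), block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                    # full-resolution features
        e2 = self.enc2(self.pool(e1))                        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # upsample + skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                                 # logits; sigmoid + threshold gives a mask

x = torch.randn(1, 3, 256, 256)           # one RGB patch
print(TinyUNet()(x).shape)                # torch.Size([1, 1, 256, 256])
```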
[ "22421083", "20006878", "11531144", "26362074", "28653016", "29854182", "30081241", "26166626", "26625400", "17492115" ]
[ { "pmid": "22421083", "title": "A contemporary update on pathology reporting for prostate cancer: biopsy and radical prostatectomy specimens.", "abstract": "CONTEXT\nThe diagnosis of and reporting parameters for prostate cancer (PCa) have evolved over time, yet they remain key components in predicting clinical outcomes.\n\n\nOBJECTIVE\nUpdate pathology reporting standards for PCa.\n\n\nEVIDENCE ACQUISITION\nA thorough literature review was performed for articles discussing PCa handling, grading, staging, and reporting published as of September 15, 2011. Electronic articles published ahead of print were also considered. Proceedings of recent international conferences addressing these areas were extensively reviewed.\n\n\nEVIDENCE SYNTHESIS\nTwo main areas of reporting were examined: (1) prostatic needle biopsy, including handling, contemporary Gleason grading, extent of involvement, and high-risk lesions/precursors and (2) radical prostatectomy (RP), including sectioning, multifocality, Gleason grading, staging of organ-confined and extraprostatic disease, lymph node involvement, tumor volume, and lymphovascular invasion. For each category, consensus views, controversial areas, and clinical import were reviewed.\n\n\nCONCLUSIONS\nModern prostate needle biopsy and RP reports are extremely detailed so as to maximize clinical utility. Accurate diagnosis of cancer-specific features requires up-to-date knowledge of grading, quantitation, and staging criteria. While some areas remain controversial, efforts to codify existing knowledge have had a significant impact on pathology practice." }, { "pmid": "20006878", "title": "An update of the Gleason grading system.", "abstract": "PURPOSE\nAn update is provided of the Gleason grading system, which has evolved significantly since its initial description.\n\n\nMATERIALS AND METHODS\nA search was performed using the MEDLINE(R) database and referenced lists of relevant studies to obtain articles concerning changes to the Gleason grading system.\n\n\nRESULTS\nSince the introduction of the Gleason grading system more than 40 years ago many aspects of prostate cancer have changed, including prostate specific antigen testing, transrectal ultrasound guided prostate needle biopsy with greater sampling, immunohistochemistry for basal cells that changed the classification of prostate cancer and new prostate cancer variants. The system was updated at a 2005 consensus conference of international experts in urological pathology, under the auspices of the International Society of Urological Pathology. Gleason score 2-4 should rarely if ever be diagnosed on needle biopsy, certain patterns (ie poorly formed glands) originally considered Gleason pattern 3 are now considered Gleason pattern 4 and all cribriform cancer should be graded pattern 4. The grading of variants and subtypes of acinar adenocarcinoma of the prostate, including cancer with vacuoles, foamy gland carcinoma, ductal adenocarcinoma, pseudohyperplastic carcinoma and small cell carcinoma have also been modified. Other recent issues include reporting secondary patterns of lower and higher grades when present to a limited extent, and commenting on tertiary grade patterns which differ depending on whether the specimen is from needle biopsy or radical prostatectomy. Whereas there is little debate on the definition of tertiary pattern on needle biopsy, this issue is controversial in radical prostatectomy specimens. 
Although tertiary Gleason patterns are typically added to pathology reports, they are routinely omitted in practice since there is no simple way to incorporate them in predictive nomograms/tables, research studies and patient counseling. Thus, a modified radical prostatectomy Gleason scoring system was recently proposed to incorporate tertiary Gleason patterns in an intuitive fashion. For needle biopsy with different cores showing different grades, the current recommendation is to report the grades of each core separately, whereby the highest grade tumor is selected as the grade of the entire case to determine treatment, regardless of the percent involvement. After the 2005 consensus conference several studies confirmed the superiority of the modified Gleason system as well as its impact on urological practice.\n\n\nCONCLUSIONS\nIt is remarkable that nearly 40 years after its inception the Gleason grading system remains one of the most powerful prognostic factors for prostate cancer. This system has remained timely because of gradual adaptations by urological pathologists to accommodate the changing practice of medicine." }, { "pmid": "11531144", "title": "Quantification of histochemical staining by color deconvolution.", "abstract": "OBJECTIVE\nTo develop a flexible method of separation and quantification of immunohistochemical staining by means of color image analysis.\n\n\nSTUDY DESIGN\nAn algorithm was developed to deconvolve the color information acquired with red-green-blue (RGB) cameras and to calculate the contribution of each of the applied stains based on stain-specific RGB absorption. The algorithm was tested using different combinations of diaminobenzidine, hematoxylin and eosin at different staining levels.\n\n\nRESULTS\nQuantification of the different stains was not significantly influenced by the combination of multiple stains in a single sample. The color deconvolution algorithm resulted in comparable quantification independent of the stain combinations as long as the histochemical procedures did not influence the amount of stain in the sample due to bleaching because of stain solubility and saturation of staining was prevented.\n\n\nCONCLUSION\nThis image analysis algorithm provides a robust and flexible method for objective immunohistochemical analysis of samples stained with up to three different stains using a laboratory microscope, standard RGB camera setup and the public domain program NIH Image." }, { "pmid": "26362074", "title": "Machine learning approaches to analyze histological images of tissues from radical prostatectomies.", "abstract": "Computerized evaluation of histological preparations of prostate tissues involves identification of tissue components such as stroma (ST), benign/normal epithelium (BN) and prostate cancer (PCa). Image classification approaches have been developed to identify and classify glandular regions in digital images of prostate tissues; however their success has been limited by difficulties in cellular segmentation and tissue heterogeneity. We hypothesized that utilizing image pixels to generate intensity histograms of hematoxylin (H) and eosin (E) stains deconvoluted from H&E images numerically captures the architectural difference between glands and stroma. In addition, we postulated that joint histograms of local binary patterns and local variance (LBPxVAR) can be used as sensitive textural features to differentiate benign/normal tissue from cancer. 
Here we utilized a machine learning approach comprising of a support vector machine (SVM) followed by a random forest (RF) classifier to digitally stratify prostate tissue into ST, BN and PCa areas. Two pathologists manually annotated 210 images of low- and high-grade tumors from slides that were selected from 20 radical prostatectomies and digitized at high-resolution. The 210 images were split into the training (n=19) and test (n=191) sets. Local intensity histograms of H and E were used to train a SVM classifier to separate ST from epithelium (BN+PCa). The performance of SVM prediction was evaluated by measuring the accuracy of delineating epithelial areas. The Jaccard J=59.5 ± 14.6 and Rand Ri=62.0 ± 7.5 indices reported a significantly better prediction when compared to a reference method (Chen et al., Clinical Proteomics 2013, 10:18) based on the averaged values from the test set. To distinguish BN from PCa we trained a RF classifier with LBPxVAR and local intensity histograms and obtained separate performance values for BN and PCa: JBN=35.2 ± 24.9, OBN=49.6 ± 32, JPCa=49.5 ± 18.5, OPCa=72.7 ± 14.8 and Ri=60.6 ± 7.6 in the test set. Our pixel-based classification does not rely on the detection of lumens, which is prone to errors and has limitations in high-grade cancers and has the potential to aid in clinical studies in which the quantification of tumor content is necessary to prognosticate the course of the disease. The image data set with ground truth annotation is available for public use to stimulate further research in this area." }, { "pmid": "28653016", "title": "Gland segmentation in prostate histopathological images.", "abstract": "Glandular structural features are important for the tumor pathologist in the assessment of cancer malignancy of prostate tissue slides. The varying shapes and sizes of glands combined with the tedious manual observation task can result in inaccurate assessment. There are also discrepancies and low-level agreement among pathologists, especially in cases of Gleason pattern 3 and pattern 4 prostate adenocarcinoma. An automated gland segmentation system can highlight various glandular shapes and structures for further analysis by the pathologist. These objective highlighted patterns can help reduce the assessment variability. We propose an automated gland segmentation system. Forty-three hematoxylin and eosin-stained images were acquired from prostate cancer tissue slides and were manually annotated for gland, lumen, periacinar retraction clefting, and stroma regions. Our automated gland segmentation system was trained using these manual annotations. It identifies these regions using a combination of pixel and object-level classifiers by incorporating local and spatial information for consolidating pixel-level classification results into object-level segmentation. Experimental results show that our method outperforms various texture and gland structure-based gland segmentation algorithms in the literature. Our method has good performance and can be a promising tool to help decrease interobserver variability among pathologists." }, { "pmid": "29854182", "title": "A Multi-scale U-Net for Semantic Segmentation of Histological Images from Radical Prostatectomies.", "abstract": "Gleason grading of histological images is important in risk assessment and treatment planning for prostate cancer patients. Much research has been done in classifying small homogeneous cancer regions within histological images. 
However, semi-supervised methods published to date depend on pre-selected regions and cannot be easily extended to an image of heterogeneous tissue composition. In this paper, we propose a multi-scale U-Net model to classify images at the pixel-level using 224 histological image tiles from radical prostatectomies of 20 patients. Our model was evaluated by a patient-based 10-fold cross validation, and achieved a mean Jaccard index of 65.8% across 4 classes (stroma, Gleason 3, Gleason 4 and benign glands), and 75.5% for 3 classes (stroma, benign glands, prostate cancer), outperforming other methods." }, { "pmid": "30081241", "title": "Segmentation of glandular epithelium in colorectal tumours to automatically compartmentalise IHC biomarker quantification: A deep learning approach.", "abstract": "In this paper, we propose a method for automatically annotating slide images from colorectal tissue samples. Our objective is to segment glandular epithelium in histological images from tissue slides submitted to different staining techniques, including usual haematoxylin-eosin (H&E) as well as immunohistochemistry (IHC). The proposed method makes use of Deep Learning and is based on a new convolutional network architecture. Our method achieves better performances than the state of the art on the H&E images of the GlaS challenge contest, whereas it uses only the haematoxylin colour channel extracted by colour deconvolution from the RGB images in order to extend its applicability to IHC. The network only needs to be fine-tuned on a small number of additional examples to be accurate on a new IHC dataset. Our approach also includes a new method of data augmentation to achieve good generalisation when working with different experimental conditions and different IHC markers. We show that our methodology enables to automate the compartmentalisation of the IHC biomarker analysis, results concurring highly with manual annotations." }, { "pmid": "26166626", "title": "A Contemporary Prostate Cancer Grading System: A Validated Alternative to the Gleason Score.", "abstract": "BACKGROUND\nDespite revisions in 2005 and 2014, the Gleason prostate cancer (PCa) grading system still has major deficiencies. Combining of Gleason scores into a three-tiered grouping (6, 7, 8-10) is used most frequently for prognostic and therapeutic purposes. The lowest score, assigned 6, may be misunderstood as a cancer in the middle of the grading scale, and 3+4=7 and 4+3=7 are often considered the same prognostic group.\n\n\nOBJECTIVE\nTo verify that a new grading system accurately produces a smaller number of grades with the most significant prognostic differences, using multi-institutional and multimodal therapy data.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nBetween 2005 and 2014, 20,845 consecutive men were treated by radical prostatectomy at five academic institutions; 5501 men were treated with radiotherapy at two academic institutions.\n\n\nOUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS\nOutcome was based on biochemical recurrence (BCR). The log-rank test assessed univariable differences in BCR by Gleason score. Separate univariable and multivariable Cox proportional hazards used four possible categorizations of Gleason scores.\n\n\nRESULTS AND LIMITATIONS\nIn the surgery cohort, we found large differences in recurrence rates between both Gleason 3+4 versus 4+3 and Gleason 8 versus 9. The hazard ratios relative to Gleason score 6 were 1.9, 5.1, 8.0, and 11.7 for Gleason scores 3+4, 4+3, 8, and 9-10, respectively. 
These differences were attenuated in the radiotherapy cohort as a whole due to increased adjuvant or neoadjuvant hormones for patients with high-grade disease but were clearly seen in patients undergoing radiotherapy only. A five-grade group system had the highest prognostic discrimination for all cohorts on both univariable and multivariable analysis. The major limitation was the unavoidable use of prostate-specific antigen BCR as an end point as opposed to cancer-related death.\n\n\nCONCLUSIONS\nThe new PCa grading system has these benefits: more accurate grade stratification than current systems, simplified grading system of five grades, and lowest grade is 1, as opposed to 6, with the potential to reduce overtreatment of PCa.\n\n\nPATIENT SUMMARY\nWe looked at outcomes for prostate cancer (PCa) treated with radical prostatectomy or radiation therapy and validated a new grading system with more accurate grade stratification than current systems, including a simplified grading system of five grades and a lowest grade is 1, as opposed to 6, with the potential to reduce overtreatment of PCa." }, { "pmid": "26625400", "title": "Patch-Based Nonlinear Image Registration for Gigapixel Whole Slide Images.", "abstract": "OBJECTIVE\nImage registration of whole slide histology images allows the fusion of fine-grained information-like different immunohistochemical stains-from neighboring tissue slides. Traditionally, pathologists fuse this information by looking subsequently at one slide at a time. If the slides are digitized and accurately aligned at cell level, automatic analysis can be used to ease the pathologist's work. However, the size of those images exceeds the memory capacity of regular computers.\n\n\nMETHODS\nWe address the challenge to combine a global motion model that takes the physical cutting process of the tissue into account with image data that is not simultaneously globally available. Typical approaches either reduce the amount of data to be processed or partition the data into smaller chunks to be processed separately. Our novel method first registers the complete images on a low resolution with a nonlinear deformation model and later refines this result on patches by using a second nonlinear registration on each patch. Finally, the deformations computed on all patches are combined by interpolation to form one globally smooth nonlinear deformation. The NGF distance measure is used to handle multistain images.\n\n\nRESULTS\nThe method is applied to ten whole slide image pairs of human lung cancer data. The alignment of 85 corresponding structures is measured by comparing manual segmentations from neighboring slides. Their offset improves significantly, by at least 15%, compared to the low-resolution nonlinear registration.\n\n\nCONCLUSION/SIGNIFICANCE\nThe proposed method significantly improves the accuracy of multistain registration which allows us to compare different antibodies at cell level." }, { "pmid": "17492115", "title": "Intensity gradient based registration and fusion of multi-modal images.", "abstract": "OBJECTIVES\nA particular problem in image registration arises for multi-modal images taken from different imaging devices and/or modalities. Starting in 1995, mutual information has shown to be a very successful distance measure for multi-modal image registration. Therefore, mutual information is considered to be the state-of-the-art approach to multi-modal image registration. However, mutual information has also a number of well-known drawbacks. 
Its main disadvantage is that it is known to be highly non-convex and has typically many local maxima.\n\n\nMETHODS\nThis observation motivates us to seek a different image similarity measure which is better suited for optimization but as well capable to handle multi-modal images.\n\n\nRESULTS\nIn this work, we investigate an alternative distance measure which is based on normalized gradients.\n\n\nCONCLUSIONS\nAs we show, the alternative approach is deterministic, much simpler, easier to interpret, fast and straightforward to implement, faster to compute, and also much more suitable to numerical optimization." } ]
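The registration references above motivate the H&E/IHC co-registration step of this record. As a purely illustrative sketch of the normalized-gradient-field idea described in the intensity-gradient-based registration reference (comparing gradient directions rather than raw intensities), the NumPy snippet below is an assumption-laden toy, not the cited authors' implementation; the edge parameter `eps` and the toy images are arbitrary.

```python
# Toy normalized-gradient-field (NGF) dissimilarity (illustrative only).
import numpy as np

def ngf_dissimilarity(reference: np.ndarray, moving: np.ndarray, eps: float = 1e-2) -> float:
    """Sum over pixels of 1 - cos^2(angle between the two regularised gradient fields)."""
    g_ref = np.stack(np.gradient(reference.astype(float)))    # (2, H, W) gradient field
    g_mov = np.stack(np.gradient(moving.astype(float)))
    norm_ref = np.sqrt((g_ref ** 2).sum(axis=0) + eps ** 2)   # regularised gradient magnitudes
    norm_mov = np.sqrt((g_mov ** 2).sum(axis=0) + eps ** 2)
    cos = (g_ref * g_mov).sum(axis=0) / (norm_ref * norm_mov)
    return float((1.0 - cos ** 2).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(ngf_dissimilarity(img, img))                  # small for perfectly aligned images
print(ngf_dissimilarity(img, np.roll(img, 3, 0)))   # larger for a misaligned copy
```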
BMC Medical Informatics and Decision Making
30700291
PMC6354330
10.1186/s12911-019-0733-z
Early prediction of acute kidney injury following ICU admission using a multivariate panel of physiological measurements
Background: The development of acute kidney injury (AKI) during an intensive care unit (ICU) admission is associated with increased morbidity and mortality.
Methods: Our objective was to develop and validate a data-driven multivariable clinical predictive model for early detection of AKI among a large cohort of adult critical care patients. We utilized data from the Medical Information Mart for Intensive Care III (MIMIC-III) for all patients who had a creatinine measured for 3 days following ICU admission, and excluded patients with pre-existing Chronic Kidney Disease or with Acute Kidney Injury on admission. Extracted data included patient age, gender, ethnicity, creatinine, other vital signs and lab values during the first day of ICU admission, whether the patient was mechanically ventilated during the first day of ICU admission, and the hourly rate of urine output during the first day of ICU admission.
Results: Utilizing the demographics, clinical data and laboratory test measurements from Day 1 of ICU admission, we accurately predicted the maximum serum creatinine level during Day 2 and Day 3 with a root mean square error of 0.224 mg/dL. We demonstrated that machine learning models (multivariate logistic regression, random forest and artificial neural networks) using demographics and physiologic features can predict AKI onset as defined by the current clinical guideline with a competitive AUC (mean AUC 0.783 for our all-feature logistic regression model), whereas previous models were aimed at more specific patient cohorts.
Conclusions: Experimental results suggest that our model has the potential to assist clinicians in identifying patients at greater risk of new-onset AKI in the critical care setting. Prospective trials with independent model training and external validation cohorts are needed to further evaluate the clinical utility of this approach and the potential for instituting interventions to decrease the likelihood of developing AKI.
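As a rough illustration of the modelling described above (not the study's actual code), the sketch below fits two of the compared model families, logistic regression and random forest, on day-1 tabular features and scores them with ROC AUC. The CSV file, feature columns and label name are assumptions made for the example.

```python
# Illustrative day-1 AKI prediction baseline (hypothetical data extract, not the study's pipeline).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("mimic_day1_features.csv")       # hypothetical extract of day-1 features
X = df.drop(columns=["aki_day2_3"])               # vitals, labs, urine output, ventilation flag, demographics
y = df["aki_day2_3"]                              # 1 if AKI developed on Day 2 or 3, else 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```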
Related work
Numerous previously published studies describe AKI prediction models using EHR data [14–24]. Most models had modest performance, with an area under the receiver operating characteristic curve (AUC) of approximately 0.75. However, many studies focus on specific patient groups such as cardiac surgery patients, septic shock patients, and elderly patients, or focus on the validation of novel biomarkers. Less work has been performed for general intensive care populations, even though ICU patients are also at high risk of AKI. Many previous studies also have small patient populations because of their narrow focus. In addition, there is still a gap between existing studies and the need to identify high-risk AKI patients as early as possible. This study differs from previously published reports in that it addresses these gaps by utilizing a large clinical database and building a predictive model that enables early AKI detection.
Many prior AKI prediction models, while nonetheless clinically useful in many settings, i) rely on various static scoring algorithms, often including a limited set of features in part to facilitate human (offline) computation; ii) incorporate non-routine biomarkers (e.g., NGAL) in predictions; and/or iii) do not model the temporal progression of clinical, laboratory and other predictive information, which has been shown to be effective for clinical predictive modeling [25]. As a result, many previously developed models are not optimally suited for clinical decision making that forecasts AKI in a general patient population. For example, a predictive model that incorporates a limited set of predictors and, in particular, a limited array of clinical interventions as predictors, cannot identify the impact that changes in clinical care might have on AKI risk. Likewise, models that rely heavily on biomarkers that are not routinely tested would be unable to accurately screen for AKI risk in a general patient population. Our approach, in contrast, involves careful modeling of a wide array of predictor data, including clinical treatments, and the temporal aggregation of predictor data. Including a wide array of predictors may permit the models to provide predictions that are more patient-specific and suitable for clinical scenario testing. In addition, our approach focuses on the early prediction of AKI in patients who do not meet AKI criteria on admission to the ICU, thus targeting a population that could benefit from early preventive strategies to avert the development of AKI or minimize its clinical impact. This is important, given that prior studies utilizing automated AKI detection (as opposed to prediction) show limited effectiveness of therapeutic interventions in patients already meeting AKI criteria [26]. We expect that the types of models we develop and validate in the context of this study will have wide-ranging clinical applications.
Our study builds on previous studies by integrating previously identified risk factors for AKI in ICU patients described in the literature, including hemodynamic instability, hypoxemia, anemia, inflammation, coagulopathy, liver failure, acidosis, renal/metabolic derangement, and demographics/admission characteristics. In the current study, we investigated the incidence of AKI and the risk factors associated with its development in an ICU population.
Our objective was to develop a prediction model capable of discriminating adult patients at high risk of developing new AKI early in their admission to the ICU.
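For readers unfamiliar with the outcome definition, the helper below shows, under stated assumptions, how a creatinine-based AKI label of the kind referenced in the cited guidelines could be derived: an absolute rise of at least 0.3 mg/dL or a rise to at least 1.5 times baseline. It is not code from the study, and it deliberately ignores the urine-output criterion and the time windows attached to each threshold.

```python
# Simplified creatinine-based AKI label (assumed thresholds; time windows and urine output omitted).
def aki_label(baseline_scr_mg_dl: float, later_scr_values: list) -> bool:
    """Return True if any later serum creatinine meets the simplified creatinine criteria."""
    for scr in later_scr_values:
        absolute_rise = scr - baseline_scr_mg_dl
        relative_rise = scr / baseline_scr_mg_dl if baseline_scr_mg_dl > 0 else float("inf")
        if absolute_rise >= 0.3 or relative_rise >= 1.5:
            return True
    return False

# Example: baseline 1.0 mg/dL on Day 1, maximum 1.4 mg/dL over Days 2-3 -> positive label.
print(aki_label(1.0, [1.1, 1.4]))   # True (absolute rise of 0.4 mg/dL)
```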
[ "17314324", "23394211", "23394215", "19602973", "22473149", "22833807", "21734090", "21771324", "17331245", "22617274", "16177006", "27025458", "25943719", "24855282", "24050856", "23222415", "23222145", "26705194", "27124567", "22067631", "27219127", "19414839", "16760447", "27329638", "12512033", "3928249" ]
[ { "pmid": "17314324", "title": "Incidence and outcomes in acute kidney injury: a comprehensive population-based study.", "abstract": "Epidemiological studies of acute kidney injury (AKI) and acute-on-chronic renal failure (ACRF) are surprisingly sparse and confounded by differences in definition. Reported incidences vary, with few studies being population-based. Given this and our aging population, the incidence of AKI may be much higher than currently thought. We tested the hypothesis that the incidence is higher by including all patients with AKI (in a geographical population base of 523,390) regardless of whether they required renal replacement therapy irrespective of the hospital setting in which they were treated. We also tested the hypothesis that the Risk, Injury, Failure, Loss, and End-Stage Kidney (RIFLE) classification predicts outcomes. We identified all patients with serum creatinine concentrations > or =150 micromol/L (male) or > or =130 micromol/L (female) over a 6-mo period in 2003. Clinical outcomes were obtained from each patient's case records. The incidences of AKI and ACRF were 1811 and 336 per million population, respectively. Median age was 76 yr for AKI and 80.5 yr for ACRF. Sepsis was a precipitating factor in 47% of patients. The RIFLE classification was useful for predicting full recovery of renal function (P < 0.001), renal replacement therapy requirement (P < 0.001), length of hospital stay [excluding those who died during admission (P < 0.001)], and in-hospital mortality (P = 0.035). RIFLE did not predict mortality at 90 d or 6 mo. Thus the incidence of AKI is much higher than previously thought, with implications for service planning and providing information to colleagues about methods to prevent deterioration of renal function. The RIFLE classification is useful for identifying patients at greatest risk of adverse short-term outcomes." }, { "pmid": "23394211", "title": "Diagnosis, evaluation, and management of acute kidney injury: a KDIGO summary (Part 1).", "abstract": "Acute kidney injury (AKI) is a common and serious problem affecting millions and causing death and disability for many. In 2012, Kidney Disease: Improving Global Outcomes completed the first ever, international, multidisciplinary, clinical practice guideline for AKI. The guideline is based on evidence review and appraisal, and covers AKI definition, risk assessment, evaluation, prevention, and treatment. In this review we summarize key aspects of the guideline including definition and staging of AKI, as well as evaluation and nondialytic management. Contrast-induced AKI and management of renal replacement therapy will be addressed in a separate review. Treatment recommendations are based on systematic reviews of relevant trials. Appraisal of the quality of the evidence and the strength of recommendations followed the Grading of Recommendations Assessment, Development and Evaluation approach. Limitations of the evidence are discussed and a detailed rationale for each recommendation is provided." }, { "pmid": "23394215", "title": "Contrast-induced acute kidney injury and renal support for acute kidney injury: a KDIGO summary (Part 2).", "abstract": "Acute kidney injury (AKI) is a common and serious problem affecting millions and causing death and disability for many. In 2012, Kidney Disease: Improving Global Outcomes completed the first ever international multidisciplinary clinical practice guideline for AKI. 
The guideline is based on evidence review and appraisal, and covers AKI definition, risk assessment, evaluation, prevention, and treatment. Two topics, contrast-induced AKI and management of renal replacement therapy, deserve special attention because of the frequency in which they are encountered and the availability of evidence. Recommendations are based on systematic reviews of relevant trials. Appraisal of the quality of the evidence and the strength of recommendations followed the Grading of Recommendations Assessment, Development and Evaluation approach. Limitations of the evidence are discussed and a detailed rationale for each recommendation is provided. This review is an abridged version of the guideline and provides additional rationale and commentary for those recommendation statements that most directly impact the practice of critical care." }, { "pmid": "19602973", "title": "Incidence and outcomes of acute kidney injury in intensive care units: a Veterans Administration study.", "abstract": "OBJECTIVES\n: To examine the effect of severity of acute kidney injury or renal recovery on risk-adjusted mortality across different intensive care unit settings. Acute kidney injury in intensive care unit patients is associated with significant mortality.\n\n\nDESIGN\n: Retrospective observational study.\n\n\nSETTING\n: There were 325,395 of 617,927 consecutive admissions to all 191 Veterans Affairs ICUs across the country.\n\n\nPATIENTS\n: Large national cohort of patients admitted to Veterans Affairs ICUs and who developed acute kidney injury during their intensive care unit stay.\n\n\nMEASUREMENTS AND MAIN RESULTS\n: Outcome measures were hospital mortality, and length of stay. Acute kidney injury was defined as a 0.3-mg/dL increase in creatinine relative to intensive care unit admission and categorized into Stage I (0.3 mg/dL to <2 times increase), Stage II (> or =2 and <3 times increase), and Stage III (> or =3 times increase or dialysis requirement). Association of mortality and length of stay with acute kidney injury stages and renal recovery was examined. Overall, 22% (n = 71,486) of patients developed acute kidney injury (Stage I: 17.5%; Stage II: 2.4%; Stage III: 2%); 16.3% patients met acute kidney injury criteria within 48 hrs, with an additional 5.7% after 48 hrs of intensive care unit admission. Acute kidney injury frequency varied between 9% and 30% across intensive care unit admission diagnoses. After adjusting for severity of illness in a model that included urea and creatinine on admission, odds of death increased with increasing severity of acute kidney injury. Stage I odds ratio = 2.2 (95% confidence interval, 2.17-2.30); Stage II odds ratio = 6.1 (95% confidence interval, 5.74, 6.44); and Stage III odds ratio = 8.6 (95% confidence interval, 8.07-9.15). Acute kidney injury patients with sustained elevation of creatinine experienced higher mortality risk than those who recovered.\n\n\nINTERVENTIONS\n: None.\n\n\nCONCLUSIONS\n: Admission diagnosis and severity of illness influence frequency and severity of acute kidney injury. Small elevations in creatinine in the intensive care unit are associated with increased risk-adjusted mortality across all intensive care unit settings, whereas renal recovery was associated with a protective effect. Strategies to prevent even mild acute kidney injury or promote renal recovery may improve survival." 
}, { "pmid": "22473149", "title": "Acute kidney injury and mortality in hospitalized patients.", "abstract": "BACKGROUND\nThe objective of this study was to determine the incidence of acute kidney injury (AKI) and its relation with mortality among hospitalized patients.\n\n\nMETHODS\nAnalysis of hospital discharge and laboratory data from an urban academic medical center over a 1-year period. We included hospitalized adult patients receiving two or more serum creatinine (sCr) measurements. We excluded prisoners, psychiatry, labor and delivery, and transferred patients, 'bedded outpatients' as well as individuals with a history of kidney transplant or chronic dialysis. We defined AKI as (a) an increase in sCr of ≥0.3 mg/dl; (b) an increase in sCr to ≥150% of baseline, or (c) the initiation of dialysis in a patient with no known history of prior dialysis. We identified factors associated with AKI as well as the relationships between AKI and in-hospital mortality.\n\n\nRESULTS\nAmong the 19,249 hospitalizations included in the analysis, the incidence of AKI was 22.7%. Older persons, Blacks, and patients with reduced baseline kidney function were more likely to develop AKI (all p < 0.001). Among AKI cases, the most common primary admitting diagnosis groups were circulatory diseases (25.4%) and infection (16.4%). After adjustment for age, sex, race, admitting sCr concentration, and the severity of illness index, AKI was independently associated with in-hospital mortality (adjusted odds ratio 4.43, 95% confidence interval 3.68-5.35).\n\n\nCONCLUSIONS\nAKI occurred in over 1 of 5 hospitalizations and was associated with a more than fourfold increased likelihood of death. These observations highlight the importance of AKI recognition as well as the association of AKI with mortality in hospitalized patients." }, { "pmid": "22833807", "title": "Biomarkers for the prediction of acute kidney injury: a narrative review on current status and future challenges.", "abstract": "Acute kidney injury (AKI) is strongly associated with increased morbidity and mortality in critically ill patients. Efforts to change its clinical course have failed because clinically available therapeutic measures are currently lacking, and early detection is impossible with serum creatinine (SCr). The demand for earlier markers has prompted the discovery of several candidates to serve this purpose. In this paper, we review available biomarker studies on the early predictive performance in developing AKI in adult critically ill patients. We make an effort to present the results from the perspective of possible clinical utility." }, { "pmid": "21734090", "title": "Predictors of acute kidney injury in septic shock patients: an observational cohort study.", "abstract": "BACKGROUND AND OBJECTIVES\nAcute kidney injury (AKI) is a frequent complication in critically ill patients and sepsis is the most common contributing factor. We aimed to determine the risk factors associated with AKI development in patients with septic shock.\n\n\nDESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS\nObservational cohort study consisted of consecutive adults with septic shock admitted to a medical intensive care unit (ICU) of a tertiary care academic hospital from July 2005 to September 2007. AKI was defined according to RIFLE criteria (urine output and creatinine criteria). Demographic, clinical, and treatment variables were reviewed. 
Main outcomes measured were AKI occurrence, all-cause hospital mortality, and hospital and ICU length of stay.\n\n\nRESULTS\nThree hundred ninety patients met inclusion criteria, of which 237 (61%) developed AKI. AKI development was independently associated with delay to initiation of adequate antibiotics, intra-abdominal sepsis, blood product transfusion, use of angiotensin-converting enzyme inhibitor/angiotensin-receptor blocker, and body mass index (kg/m²). Higher baseline GFR and successful early goal directed resuscitation were associated with a decreased risk of AKI. Hospital mortality was significantly greater in patients who developed AKI (49 versus 34%).\n\n\nCONCLUSIONS\nIn a contemporary cohort of patients with septic shock, both patient and health care delivery risk factors seemed to be important for AKI development." }, { "pmid": "21771324", "title": "Oliguria as predictive biomarker of acute kidney injury in critically ill patients.", "abstract": "INTRODUCTION\nDuring critical illness, oliguria is often used as a biomarker of acute kidney injury (AKI). However, its relationship with the subsequent development of AKI has not been prospectively evaluated.\n\n\nMETHODS\nWe documented urine output and daily serum creatinine concentration in patients admitted for more than 24 hours in seven intensive care units (ICUs) from six countries over a period of two to four weeks. Oliguria was defined by a urine output < 0.5 ml/kg/hr. Data were collected until the occurrence of creatinine-defined AKI (AKI-Cr), designated by RIFLE-Injury class or greater using creatinine criteria (RIFLE-I[Cr]), or until ICU discharge. Episodes of oliguria were classified by longest duration of consecutive oliguria during each day were correlated with new AKI-Cr the next day, examining cut-offs for oliguria of greater than 1,2,3,4,5,6, or 12 hr duration,\n\n\nRESULTS\nWe studied 239 patients during 723 days. Overall, 32 patients had AKI on ICU admission, while in 23, AKI-Cr developed in ICU. Oliguria of greater than one hour was significantly associated with AKI-Cr the next day. On receiver-operator characteristic area under the curve (ROCAUC) analysis, oliguria showed fair predictive ability for AKI-Cr (ROCAUC = 0.75; CI:0.64-0.85). The presence of 4 hrs or more oliguria provided the best discrimination (sensitivity 52% (0.31-0.73%), specificity 86% (0.84-0.89%), positive likelihood ratio 3.8 (2.2-5.6), P < 0.0001) with negative predictive value of 98% (0.97-0.99). Oliguria preceding AKI-Cr was more likely to be associated with lower blood pressure, higher heart rate and use of vasopressors or inotropes and was more likely to prompt clinical intervention. However, only 30 of 487 individual episodes of oliguria preceded the new occurrence of AKI-Cr the next day.\n\n\nCONCLUSIONS\nOliguria was significantly associated with the occurrence of new AKI-Cr, however oliguria occurred frequently compared to the small number of patients (~10%) developing AKI-Cr in the ICU, so that most episodes of oliguria were not followed by renal injury. Consequently, the occurrence of short periods (1-6 hr) of oliguria lacked utility in discriminating patients with incipient AKI-Cr (positive likelihood ratios of 2-4, with > 10 considered indicative of a useful screening test). However, oliguria accompanied by hemodynamic compromise or increasing vasopressor dose may represent a clinically useful trigger for other early biomarkers of renal injury." 
}, { "pmid": "17331245", "title": "Acute Kidney Injury Network: report of an initiative to improve outcomes in acute kidney injury.", "abstract": "INTRODUCTION\nAcute kidney injury (AKI) is a complex disorder for which currently there is no accepted definition. Having a uniform standard for diagnosing and classifying AKI would enhance our ability to manage these patients. Future clinical and translational research in AKI will require collaborative networks of investigators drawn from various disciplines, dissemination of information via multidisciplinary joint conferences and publications, and improved translation of knowledge from pre-clinical research. We describe an initiative to develop uniform standards for defining and classifying AKI and to establish a forum for multidisciplinary interaction to improve care for patients with or at risk for AKI.\n\n\nMETHODS\nMembers representing key societies in critical care and nephrology along with additional experts in adult and pediatric AKI participated in a two day conference in Amsterdam, The Netherlands, in September 2005 and were assigned to one of three workgroups. Each group's discussions formed the basis for draft recommendations that were later refined and improved during discussion with the larger group. Dissenting opinions were also noted. The final draft recommendations were circulated to all participants and subsequently agreed upon as the consensus recommendations for this report. Participating societies endorsed the recommendations and agreed to help disseminate the results.\n\n\nRESULTS\nThe term AKI is proposed to represent the entire spectrum of acute renal failure. Diagnostic criteria for AKI are proposed based on acute alterations in serum creatinine or urine output. A staging system for AKI which reflects quantitative changes in serum creatinine and urine output has been developed.\n\n\nCONCLUSION\nWe describe the formation of a multidisciplinary collaborative network focused on AKI. We have proposed uniform standards for diagnosing and classifying AKI which will need to be validated in future studies. The Acute Kidney Injury Network offers a mechanism for proceeding with efforts to improve patient outcomes." }, { "pmid": "22617274", "title": "Acute kidney injury.", "abstract": "Acute kidney injury (formerly known as acute renal failure) is a syndrome characterised by the rapid loss of the kidney's excretory function and is typically diagnosed by the accumulation of end products of nitrogen metabolism (urea and creatinine) or decreased urine output, or both. It is the clinical manifestation of several disorders that affect the kidney acutely. Acute kidney injury is common in hospital patients and very common in critically ill patients. In these patients, it is most often secondary to extrarenal events. How such events cause acute kidney injury is controversial. No specific therapies have emerged that can attenuate acute kidney injury or expedite recovery; thus, treatment is supportive. New diagnostic techniques (eg, renal biomarkers) might help with early diagnosis. Patients are given renal replacement therapy if acute kidney injury is severe and biochemical or volume-related, or if uraemic-toxaemia-related complications are of concern. If patients survive their illness and do not have premorbid chronic kidney disease, they typically recover to dialysis independence. However, evidence suggests that patients who have had acute kidney injury are at increased risk of subsequent chronic kidney disease." 
}, { "pmid": "16177006", "title": "Acute kidney injury, mortality, length of stay, and costs in hospitalized patients.", "abstract": "The marginal effects of acute kidney injury on in-hospital mortality, length of stay (LOS), and costs have not been well described. A consecutive sample of 19,982 adults who were admitted to an urban academic medical center, including 9210 who had two or more serum creatinine (SCr) determinations, was evaluated. The presence and degree of acute kidney injury were assessed using absolute and relative increases from baseline to peak SCr concentration during hospitalization. Large increases in SCr concentration were relatively rare (e.g., >or=2.0 mg/dl in 105 [1%] patients), whereas more modest increases in SCr were common (e.g., >or=0.5 mg/dl in 1237 [13%] patients). Modest changes in SCr were significantly associated with mortality, LOS, and costs, even after adjustment for age, gender, admission International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis, severity of illness (diagnosis-related group weight), and chronic kidney disease. For example, an increase in SCr >or=0.5 mg/dl was associated with a 6.5-fold (95% confidence interval 5.0 to 8.5) increase in the odds of death, a 3.5-d increase in LOS, and nearly 7500 dollars in excess hospital costs. Acute kidney injury is associated with significantly increased mortality, LOS, and costs across a broad spectrum of conditions. Moreover, outcomes are related directly to the severity of acute kidney injury, whether characterized by nominal or percentage changes in serum creatinine." }, { "pmid": "27025458", "title": "Prediction and detection models for acute kidney injury in hospitalized older adults.", "abstract": "BACKGROUND\nAcute Kidney Injury (AKI) occurs in at least 5 % of hospitalized patients and can result in 40-70 % morbidity and mortality. Even following recovery, many subjects may experience progressive deterioration of renal function. The heterogeneous etiology and pathophysiology of AKI complicates its diagnosis and medical management and can add to poor patient outcomes and incur substantial hospital costs. AKI is predictable and may be avoidable if early risk factors are identified and utilized in the clinical setting. Timely detection of undiagnosed AKI in hospitalized patients can also lead to better disease management.\n\n\nMETHODS\nData from 25,521 hospital stays in one calendar year of patients 60 years and older was collected from a large health care system. Four machine learning models (logistic regression, support vector machines, decision trees and naïve Bayes) along with their ensemble were tested for AKI prediction and detection tasks. Patient demographics, laboratory tests, medications and comorbid conditions were used as the predictor variables. The models were compared using the area under ROC curve (AUC) evaluation metric.\n\n\nRESULTS\nLogistic regression performed the best for AKI detection (AUC 0.743) and was a close second to the ensemble for AKI prediction (AUC ensemble: 0.664, AUC logistic regression: 0.660). History of prior AKI, use of combination drugs such as ACE inhibitors, NSAIDS and diuretics, and presence of comorbid conditions such as respiratory failure were found significant for both AKI detection and risk prediction.\n\n\nCONCLUSIONS\nThe machine learning models performed fairly well on both predicting AKI and detecting undiagnosed AKI. 
To the best of our knowledge, this is the first study examining the difference between prediction and detection of AKI. The distinction has clinical relevance, and can help providers either identify at risk subjects and implement preventative strategies or manage their treatment depending on whether AKI is predicted or detected." }, { "pmid": "25943719", "title": "The urine sediment as a biomarker of kidney disease.", "abstract": "The modern era of medicine has ushered in new diagnostic technologies to assist the clinician in evaluating patients with kidney disease. The birth of automated urine analysis technology and centralized laboratory testing has unfortunately made examination of urine sediment by physicians a rare event. At the same time, identifying novel urine biomarkers for kidney disease has become a research priority in nephrology, and the search for the \"renal troponin\" has progressed at a fast pace. Despite this, urine sediment examination remains a time-honored test that provides a wealth of information about the patient's kidney condition and performs favorably as a urinary biomarker. It alerts the clinician to the presence of kidney disease and provides diagnostic information that often identifies the compartment of kidney injury. In addition, sediment findings may guide therapy and assist in prognostication. As such, it is premature to abandon urine sediment examination. It may be more appropriate to combine urine sediment examination with new candidate biomarkers that enter clinical practice to create a \"diagnostic panel\" that provides clinicians with a useful battery of diagnostic tests. To accomplish this, we as nephrologists must encourage continued training and maintenance of competency in urine sediment examination." }, { "pmid": "24855282", "title": "Developing risk prediction models for kidney injury and assessing incremental value for novel biomarkers.", "abstract": "The field of nephrology is actively involved in developing biomarkers and improving models for predicting patients' risks of AKI and CKD and their outcomes. However, some important aspects of evaluating biomarkers and risk models are not widely appreciated, and statistical methods are still evolving. This review describes some of the most important statistical concepts for this area of research and identifies common pitfalls. Particular attention is paid to metrics proposed within the last 5 years for quantifying the incremental predictive value of a new biomarker." }, { "pmid": "24050856", "title": "Simplified clinical risk score to predict acute kidney injury after aortic surgery.", "abstract": "OBJECTIVE\nThe authors identified risk factors for acute kidney injury (AKI) defined by risk, injury, failure, loss, end-stage (RIFLE) criteria after aortic surgery with cardiopulmonary bypass and constructed a simplified risk score for the prediction of AKI.\n\n\nDESIGN\nRetrospective and observational.\n\n\nSETTING\nSingle large university hospital.\n\n\nPARTICIPANTS\nPatients (737) who underwent aortic surgery with cardiopulmonary bypass between 1997 and 2010.\n\n\nMAIN RESULTS\nMultivariate logistic regression analysis was used to evaluate risk factors. A scoring model was developed in a randomly selected derivation cohort (n = 417), and was validated on the remaining patients. The scoring model was developed with a score based on regression β-coefficient, and was compared with previous indices as measured by the area under the receiver operating characteristic curve (AUC). 
The incidence of AKI was 29.0%, and 5.8% required renal replacement therapy. Independent risk factors for AKI were age older than 60 years, preoperative glomerular filtration rate <60 mL/min/1.73 m(2), left ventricular ejection fraction <55%, operation time >7 hours, intraoperative urine output <0.5 mL/kg/h, and intraoperative furosemide use. The authors made a score by weighting them at 1 point each. The risk score was valid in predicting AKI, and the AUC was 0.74 [95% confidence interval (CI): 0.69 to 0.79], which was similar to that in the validation cohort: 0.74 (95% CI: 0.69 to 0.80; p = 0.97). The risk-scoring model showed a better performance compared with previously reported indices.\n\n\nCONCLUSIONS\nThe model would provide a simplified clinical score stratifying the risk of postoperative AKI in patients undergoing aortic surgery." }, { "pmid": "23222415", "title": "Comparison and clinical suitability of eight prediction models for cardiac surgery-related acute kidney injury.", "abstract": "BACKGROUND\nCardiac surgery-related acute kidney injury (CS-AKI) results in increased morbidity and mortality. Different models have been developed to identify patients at risk of CS-AKI. While models that predict dialysis and CS-AKI defined by the RIFLE criteria are available, their predictive power and clinical applicability have not been compared head to head.\n\n\nMETHODS\nOf 1388 consecutive adult cardiac surgery patients operated with cardiopulmonary bypass, risk scores of eight prediction models were calculated. Four models were only applicable to a subgroup of patients. The area under the receiver operating curve (AUROC) was calculated for all levels of CS-AKI and for need for dialysis (AKI-D) for each risk model and compared for the models applicable to the largest subgroup (n = 1243).\n\n\nRESULTS\nThe incidence of AKI-D was 1.9% and for CS-AKI 9.3%. The models of Rahmanian, Palomba and Aronson could not be used for preoperative risk assessment as postoperative data are necessary. The three best AUROCs for AKI-D were of the model of Thakar: 0.93 [95% confidence interval (CI) 0.91-0.94], Fortescue: 0.88 (95% CI 0.87-0.90) and Wijeysundera: 0.87 (95% CI 0.85-0.89). The three best AUROCs for CS-AKI-risk were 0.75 (95% CI 0.73-0.78), 0.74 (95% CI 0.71-0.76) and 0.70 (95% CI 0.73-0.78), for Thakar, Mehta and both Fortescue and Wijeysundera, respectively. The model of Thakar performed significantly better compared with the models of Mehta, Rahmanian, Fortescue and Wijeysundera (all P-values <0.01) at different levels of severity of CS-AKI.\n\n\nCONCLUSIONS\nThe Thakar model offers the best discriminative value to predict CS-AKI and is applicable in a preoperative setting and for all patients undergoing cardiac surgery." }, { "pmid": "23222145", "title": "Informed consent and routinisation.", "abstract": "This article introduces the notion of 'routinisation' into discussions of informed consent. It is argued that the routinisation of informed consent poses a threat to the protection of the personal autonomy of a patient through the negotiation of informed consent. On the basis of a large survey, we provide evidence of the routinisation of informed consent in various types of interaction on the internet; among these, the routinisation of consent to the exchange of health related information. We also provide evidence that the extent of the routinisation of informed consent is dependent on the character of the information exchanged, and we uncover a range of causes of routinisation. 
Finally, the article discusses possible ways of countering the problem of routinisation of informed consent." }, { "pmid": "26705194", "title": "Risk prediction models for acute kidney injury following major noncardiac surgery: systematic review.", "abstract": "BACKGROUND\nAcute kidney injury (AKI) is a serious complication of major noncardiac surgery. Risk prediction models for AKI following noncardiac surgery may be useful for identifying high-risk patients to target with prevention strategies.\n\n\nMETHODS\nWe conducted a systematic review of risk prediction models for AKI following major noncardiac surgery. MEDLINE, EMBASE, BIOSIS Previews and Web of Science were searched for articles that (i) developed or validated a prediction model for AKI following major noncardiac surgery or (ii) assessed the impact of a model for predicting AKI following major noncardiac surgery that has been implemented in a clinical setting.\n\n\nRESULTS\nWe identified seven models from six articles that described a risk prediction model for AKI following major noncardiac surgeries. Three studies developed prediction models for AKI requiring renal replacement therapy following liver transplantation, three derived prediction models for AKI based on the Risk, Injury, Failure, Loss of kidney function, End-stage kidney disease (RIFLE) criteria following liver resection and one study developed a prediction model for AKI following major noncardiac surgical procedures. The final models included between 4 and 11 independent variables, and c-statistics ranged from 0.79 to 0.90. None of the models were externally validated.\n\n\nCONCLUSIONS\nRisk prediction models for AKI after major noncardiac surgery are available; however, these models lack validation, studies of clinical implementation and impact analyses. Further research is needed to develop, validate and study the clinical impact of such models before broad clinical uptake." }, { "pmid": "27124567", "title": "Development of a Prediction Model of Early Acute Kidney Injury in Critically Ill Children Using Electronic Health Record Data.", "abstract": "OBJECTIVE\nAcute kidney injury is independently associated with poor outcomes in critically ill children. However, the main biomarker of acute kidney injury, serum creatinine, is a late marker of injury and can cause a delay in diagnosis. Our goal was to develop and validate a data-driven multivariable clinical prediction model of acute kidney injury in a general PICU using electronic health record data.\n\n\nDESIGN\nDerivation and validation of a prediction model using retrospective data.\n\n\nPATIENTS\nAll patients 1 month to 21 years old admitted between May 2003 and March 2015 without acute kidney injury at admission and alive and in the ICU for at least 24 hours.\n\n\nSETTING\nA multidisciplinary, tertiary PICU.\n\n\nINTERVENTION\nThe primary outcome was early acute kidney injury, which was defined as new acute kidney injury developed in the ICU within 72 hours of admission. Multivariable logistic regression was performed to derive the Pediatric Early AKI Risk Score using electronic health record data from the first 12 hours of ICU stay.\n\n\nMEASUREMENTS AND MAIN RESULTS\nA total of 9,396 patients were included in the analysis, of whom 4% had early acute kidney injury, and these had significantly higher mortality than those without early acute kidney injury (26% vs 3.3%; p < 0.001). Thirty-three candidate variables were tested. 
The final model had seven predictors and had good discrimination (area under the curve 0.84) and appropriate calibration. The model was validated in two validation sets and maintained good discrimination (area under the curves, 0.81 and 0.86).\n\n\nCONCLUSION\nWe developed and validated the Pediatric Early AKI Risk Score, a data-driven acute kidney injury clinical prediction model that has good discrimination and calibration in a general PICU population using only electronic health record data that is objective, available in real time during the first 12 hours of ICU care and generalizable across PICUs. This prediction model was designed to be implemented in the form of an automated clinical decision support system and could be used to guide preventive, therapeutic, and research strategies." }, { "pmid": "22067631", "title": "Impact of real-time electronic alerting of acute kidney injury on therapeutic intervention and progression of RIFLE class.", "abstract": "OBJECTIVE\nTo evaluate whether a real-time electronic alert system or \"AKI sniffer,\" which is based on the RIFLE classification criteria (Risk, Injury and Failure), would have an impact on therapeutic interventions and acute kidney injury progression.\n\n\nDESIGN\nProspective intervention study.\n\n\nSETTING\nSurgical and medical intensive care unit in a tertiary care hospital.\n\n\nPATIENTS\nA total of 951 patients having in total 1,079 admission episodes were admitted during the study period (prealert control group: 227, alert group: 616, and postalert control group: 236).\n\n\nINTERVENTIONS\nThree study phases were compared: A 1.5-month prealert control phase in which physicians were blinded for the acute kidney injury sniffer and a 3-month intervention phase with real-time alerting of worsening RIFLE class through the Digital Enhanced Cordless Technology telephone system followed by a second 1.5-month postalert control phase.\n\n\nMEASUREMENTS AND MAIN RESULTS\nA total of 2593 acute kidney injury alerts were recorded with a balanced distribution over all study phases. Most acute kidney injury alerts were RIFLE class risk (59.8%) followed by RIFLE class injury (34.1%) and failure (6.1%). A higher percentage of patients in the alert group received therapeutic intervention within 60 mins after the acute kidney injury alert (28.7% in alert group vs. 7.9% and 10.4% in the pre- and postalert control groups, respectively, p < .001). In the alert group, more patients received fluid therapy (23.0% vs. 4.9% and 9.2%, p < .01), diuretics (4.2% vs. 2.6% and 0.8%, p < .001), or vasopressors (3.9% vs. 1.1% and 0.8%, p < .001). Furthermore, these patients had a shorter time to intervention (p < .001). A higher proportion of patients in the alert group showed return to a baseline kidney function within 8 hrs after an acute kidney injury alert \"from normal to risk\" compared with patients in the control group (p = .048).\n\n\nCONCLUSIONS\nThe real-time alerting of every worsening RIFLE class by the acute kidney injury sniffer increased the number and timeliness of early therapeutic interventions. The borderline significant improvement of short-term renal outcome in the RIFLE class risk patients needs to be confirmed in a large multicenter trial."
}, { "pmid": "27219127", "title": "MIMIC-III, a freely accessible critical care database.", "abstract": "MIMIC-III ('Medical Information Mart for Intensive Care') is a large, single-center database comprising information relating to patients admitted to critical care units at a large tertiary care hospital. Data includes vital signs, medications, laboratory measurements, observations and notes charted by care providers, fluid balance, procedure codes, diagnostic codes, imaging reports, hospital length of stay, survival data, and more. The database supports applications including academic and industrial research, quality improvement initiatives, and higher education coursework." }, { "pmid": "19414839", "title": "A new equation to estimate glomerular filtration rate.", "abstract": "BACKGROUND\nEquations to estimate glomerular filtration rate (GFR) are routinely used to assess kidney function. Current equations have limited precision and systematically underestimate measured GFR at higher values.\n\n\nOBJECTIVE\nTo develop a new estimating equation for GFR: the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation.\n\n\nDESIGN\nCross-sectional analysis with separate pooled data sets for equation development and validation and a representative sample of the U.S. population for prevalence estimates.\n\n\nSETTING\nResearch studies and clinical populations (\"studies\") with measured GFR and NHANES (National Health and Nutrition Examination Survey), 1999 to 2006.\n\n\nPARTICIPANTS\n8254 participants in 10 studies (equation development data set) and 3896 participants in 16 studies (validation data set). Prevalence estimates were based on 16,032 participants in NHANES.\n\n\nMEASUREMENTS\nGFR, measured as the clearance of exogenous filtration markers (iothalamate in the development data set; iothalamate and other markers in the validation data set), and linear regression to estimate the logarithm of measured GFR from standardized creatinine levels, sex, race, and age.\n\n\nRESULTS\nIn the validation data set, the CKD-EPI equation performed better than the Modification of Diet in Renal Disease Study equation, especially at higher GFR (P < 0.001 for all subsequent comparisons), with less bias (median difference between measured and estimated GFR, 2.5 vs. 5.5 mL/min per 1.73 m(2)), improved precision (interquartile range [IQR] of the differences, 16.6 vs. 18.3 mL/min per 1.73 m(2)), and greater accuracy (percentage of estimated GFR within 30% of measured GFR, 84.1% vs. 80.6%). In NHANES, the median estimated GFR was 94.5 mL/min per 1.73 m(2) (IQR, 79.7 to 108.1) vs. 85.0 (IQR, 72.9 to 98.5) mL/min per 1.73 m(2), and the prevalence of chronic kidney disease was 11.5% (95% CI, 10.6% to 12.4%) versus 13.1% (CI, 12.1% to 14.0%).\n\n\nLIMITATION\nThe sample contained a limited number of elderly people and racial and ethnic minorities with measured GFR.\n\n\nCONCLUSION\nThe CKD-EPI creatinine equation is more accurate than the Modification of Diet in Renal Disease Study equation and could replace it for routine clinical use.\n\n\nPRIMARY FUNDING SOURCE\nNational Institute of Diabetes and Digestive and Kidney Diseases." }, { "pmid": "27329638", "title": "Using Machine Learning to Predict Laboratory Test Results.", "abstract": "OBJECTIVES\nWhile clinical laboratories report most test results as individual numbers, findings, or observations, clinical diagnosis usually relies on the results of multiple tests. 
Clinical decision support that integrates multiple elements of laboratory data could be highly useful in enhancing laboratory diagnosis.\n\n\nMETHODS\nUsing the analyte ferritin in a proof of concept, we extracted clinical laboratory data from patient testing and applied a variety of machine-learning algorithms to predict ferritin test results using the results from other tests. We compared predicted with measured results and reviewed selected cases to assess the clinical value of predicted ferritin.\n\n\nRESULTS\nWe show that patient demographics and results of other laboratory tests can discriminate normal from abnormal ferritin results with a high degree of accuracy (area under the curve as high as 0.97, held-out test data). Case review indicated that predicted ferritin results may sometimes better reflect underlying iron status than measured ferritin.\n\n\nCONCLUSIONS\nThese findings highlight the substantial informational redundancy present in patient test results and offer a potential foundation for a novel type of clinical decision support aimed at integrating, interpreting, and enhancing the diagnostic value of multianalyte sets of clinical laboratory test results." }, { "pmid": "12512033", "title": "Model for end-stage liver disease (MELD) and allocation of donor livers.", "abstract": "BACKGROUND & AIMS\nA consensus has been reached that liver donor allocation should be based primarily on liver disease severity and that waiting time should not be a major determining factor. Our aim was to assess the capability of the Model for End-Stage Liver Disease (MELD) score to correctly rank potential liver recipients according to their severity of liver disease and mortality risk on the OPTN liver waiting list.\n\n\nMETHODS\nThe MELD model predicts liver disease severity based on serum creatinine, serum total bilirubin, and INR and has been shown to be useful in predicting mortality in patients with compensated and decompensated cirrhosis. In this study, we prospectively applied the MELD score to estimate 3-month mortality to 3437 adult liver transplant candidates with chronic liver disease who were added to the OPTN waiting list at 2A or 2B status between November, 1999, and December, 2001.\n\n\nRESULTS\nIn this study cohort with chronic liver disease, 412 (12%) died during the 3-month follow-up period. Waiting list mortality increased directly in proportion to the listing MELD score. Patients having a MELD score <9 experienced a 1.9% mortality, whereas patients having a MELD score > or =40 had a mortality rate of 71.3%. Using the c-statistic with 3-month mortality as the end point, the area under the receiver operating characteristic (ROC) curve for the MELD score was 0.83 compared with 0.76 for the Child-Turcotte-Pugh (CTP) score (P < 0.001).\n\n\nCONCLUSIONS\nThese data suggest that the MELD score is able to accurately predict 3-month mortality among patients with chronic liver disease on the liver waiting list and can be applied for allocation of donor livers." }, { "pmid": "3928249", "title": "APACHE II: a severity of disease classification system.", "abstract": "This paper presents the form and validation results of APACHE II, a severity of disease classification system. APACHE II uses a point score based upon initial values of 12 routine physiologic measurements, age, and previous health status to provide a general measure of severity of disease. 
An increasing score (range 0 to 71) was closely correlated with the subsequent risk of hospital death for 5815 intensive care admissions from 13 hospitals. This relationship was also found for many common diseases. When APACHE II scores are combined with an accurate description of disease, they can prognostically stratify acutely ill patients and assist investigators comparing the success of new or differing forms of therapy. This scoring index can be used to evaluate the use of hospital resources and compare the efficacy of intensive care in different hospitals or over time." } ]
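Several of the reference abstracts above describe simple additive risk scores and laboratory-based equations, notably the six-factor score for AKI after aortic surgery (one point per risk factor) and the CKD-EPI estimate of glomerular filtration rate. The sketch below is a minimal illustration of how such scores could be computed; it is not the authors' validated implementation. The six-factor score follows the abstract above, the CKD-EPI coefficients are taken from the published 2009 creatinine equation and should be verified against the original source, and all function and variable names are hypothetical.

```python
# Minimal sketch (not a validated clinical tool) of two scoring schemes referenced above.

def ckd_epi_egfr(scr_mg_dl: float, age: float, female: bool, black: bool) -> float:
    """CKD-EPI 2009 creatinine equation (coefficients as published; verify against the source)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr  # mL/min/1.73 m^2


def aortic_surgery_aki_score(age: float, egfr: float, lvef_pct: float,
                             op_hours: float, urine_ml_kg_h: float,
                             furosemide_used: bool) -> int:
    """Six-factor score from the aortic-surgery abstract above: 1 point per present risk factor (0-6)."""
    factors = [
        age > 60,
        egfr < 60,            # preoperative GFR, mL/min/1.73 m^2
        lvef_pct < 55,        # left ventricular ejection fraction
        op_hours > 7,         # operation time
        urine_ml_kg_h < 0.5,  # intraoperative urine output
        furosemide_used,      # intraoperative furosemide use
    ]
    return sum(factors)


if __name__ == "__main__":
    egfr = ckd_epi_egfr(scr_mg_dl=1.4, age=68, female=False, black=False)
    print(round(egfr, 1), aortic_surgery_aki_score(68, egfr, 50, 8.0, 0.4, True))
```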
Frontiers in Psychology
30740078
PMC6355693
10.3389/fpsyg.2019.00058
CROCUFID: A Cross-Cultural Food Image Database for Research on Food Elicited Affective Responses
We present CROCUFID: a CROss-CUltural Food Image Database that currently contains 840 images, including 479 food images with detailed metadata and 165 images of non-food items. The database includes images of sweet, savory, natural, and processed food from Western and Asian cuisines. To create sufficient variability in valence and arousal we included images of food with different degrees of appetitiveness (fresh, unfamiliar, molded or rotten, spoiled, and partly consumed). We used a standardized photographing protocol, resulting in high resolution images depicting all food items on a standard background (a white plate), seen from a fixed viewing (45°) angle. CROCUFID is freely available under the CC-By Attribution 4.0 International license and hosted on the OSF repository. The advantages of the CROCUFID database over other databases are its (1) free availability, (2) full coverage of the valence – arousal space, (3) use of standardized recording methods, (4) inclusion of multiple cuisines and unfamiliar foods, (5) availability of normative and demographic data, (6) high image quality and (7) capability to support future (e.g., virtual and augmented reality) applications. Individuals from the United Kingdom (N = 266), North-America (N = 275), and Japan (N = 264) provided normative ratings of valence, arousal, perceived healthiness, and desire-to-eat using visual analog scales (VAS). In addition, for each image we computed 17 characteristics that are known to influence affective observer responses (e.g., texture, regularity, complexity, and colorfulness). Significant differences between groups and significant correlations between image characteristics and normative ratings were in accordance with previous research, indicating the validity of CROCUFID. We expect that CROCUFID will facilitate comparability across studies and advance experimental research on the determinants of food-elicited emotions. We plan to extend CROCUFID in the future with images of food from a wide range of different cuisines and with non-food images (for applications in for instance neuro-physiological studies). We invite researchers from all parts of the world to contribute to this effort by creating similar image sets that can be linked to this collection, so that CROCUFID will grow into a truly multicultural food database.
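The abstract above states that 17 image characteristics (e.g., texture, regularity, complexity, and colorfulness) were computed for each image, but the exact definitions are not given here. As a rough illustration only, the sketch below computes one widely used colorfulness metric (Hasler and Süsstrunk, 2003) for an RGB image; it is an assumption that CROCUFID used this particular definition, and the function name, file name, and use of NumPy/Pillow are illustrative.

```python
# Illustrative only: one common colorfulness metric (Hasler & Suesstrunk, 2003);
# CROCUFID's exact 17 image characteristics are not specified in the text above.
import numpy as np
from PIL import Image

def colorfulness(path: str) -> float:
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g                  # red-green opponent channel
    yb = 0.5 * (r + g) - b      # yellow-blue opponent channel
    std = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return std + 0.3 * mean

# Example call (hypothetical file name):
# print(colorfulness("crocufid_image_001.jpg"))
```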
Related Work

In this section we first describe the characteristics of some currently and publicly available food image databases that have been designed to support neuroscientific and behavioral research on human eating behavior and preferences. Then we review the databases that have been constructed to develop and train automatic food recognition and ingredient or recipe retrieval algorithms. To the best of our knowledge, these databases are currently the only ones publicly available that contain images of a wide range of different cuisines. However, they generally appear to be unsuitable for systematic research on human food-related behavior since they typically contain real-life images with largely varying backgrounds, taken from different points of view, with different scales and rotation angles and under varying lighting conditions. Next, we describe the value of CROCUFID for studies on the effects of environmental characteristics and background context on human food experience. Finally, we discuss how CROCUFID can be used to perform cross-cultural food studies.

Food Image Databases for Human Observer Studies

Table 1 provides an overview of publicly available food image databases for human observer studies.

Table 1. Overview of food image databases for human observer studies (for each database: coverage of affective space; recording methods; cuisines; availability of normative and demographic data; remarks).
- FRIDa (Foroni et al., 2013): mainly positive valence; not standardized; mainly Western; Valence, Arousal, Familiarity (Italian); low resolution (530 pixels × 530 pixels), collected from the Internet, includes non-food images.
- Food-Pics (Blechert et al., 2014b): mainly positive valence; not standardized; mainly Western; Valence, Arousal, Familiarity, Recognizability, Complexity, Palatability (German and North American); low resolution (600 pixels × 450 pixels), collected from the Internet, no fixed background.
- OLAF (Miccoli et al., 2014, 2016): mainly positive valence; not standardized; mainly Western; Valence, Arousal, Dominance, Food Craving (Spanish); high resolution (4000 pixels × 3000 pixels), includes some low-valence images from the IAPS, includes non-food images.
- F4H (Charbonnier et al., 2016): mainly positive valence; standardized; mainly Western; Liking, Healthiness, Recognizability, Perceived Calories (Greek, Dutch, Scottish, German, Hungarian, and Swedish); resolution 3872 pixels × 2592 pixels, all images registered by the authors, includes non-food images.

The Foodcast Research Image Database (FRIDa: Foroni et al., 2013) contains images of predominantly Western natural, transformed, and rotten food, natural and artificial non-food objects, animals, flowers and scenes, along with a description of several physical product properties (e.g., size, brightness and spatial frequency content) and normative ratings (by Italian participants) on several dimensions (including valence, arousal, and familiarity). The items were collected from the internet, pasted on a white background and have a low resolution (530 pixels × 530 pixels).

The Food-pics database (Blechert et al., 2014b) contains images of predominantly Western food types, together with normative ratings (by participants from German-speaking countries and North America) on familiarity, recognizability, complexity, valence, arousal, palatability, and desire to eat. The items were collected from the internet, pasted on a white background and have a low resolution (600 pixels × 450 pixels). Food-pics has been designed to support experimental research on food perception and eating behavior in general.

The Open Library of Affective Foods (OLAF: Miccoli et al., 2014, 2016) is a database of food pictures representing four different types of Western food (vegetables, fruit, sweet and salty high-fat foods), along with normative ratings (by Spanish students) of valence, arousal, dominance and food craving. The images have a high resolution (up to 4000 pixels × 3000 pixels) and include food served in restaurants and homemade meals, and display non-food items in the background to increase their ecological value and to resemble the appearance of images from the International Affective Picture System (IAPS: Lang et al., 2005). The four selected food categories focus on the extremes of the low-calorie/high-calorie food axis. Although OLAF was specifically compiled to be used in studies on the affective and appetitive effects of food, it contains no food images with negative valence. To remedy the lack of negative valence images and to provide affective anchors, OLAF was extended with 36 non-food images from the IAPS (12 from each of the three valence categories pleasant, neutral, unpleasant) that cover the full valence-arousal space.

The Full4Health Image Collection (F4H) contains 228 images of Western food types of different caloric content, together with normative ratings (by adults from Greece, the Netherlands and Scotland, and by children from Germany, Hungary, and Sweden) on recognizability, liking, healthiness and perceived number of calories. In addition, F4H also includes images of 73 non-food items. The images have a high resolution (3872 pixels × 2592 pixels) and were registered according to a standardized photographing protocol (Charbonnier et al., 2016).
F4H has been designed for health-related studies in which (perceived) caloric content is of interest and contains no food pictures with negative valence.

Food Image Databases for Automatic Recognition Studies

Table 2 provides an overview of publicly available food image databases for automatic image recognition studies.

Table 2. Overview of food image databases for automatic recognition studies (for each database: coverage of affective space; recording methods; cuisines; availability of normative and demographic data; remarks).
- PFID (Chen et al., 2009): a small part of the valence and arousal space; not standardized; mainly Western; normative data not available; high resolution (2592 × 1944 pixels), collected by the authors, no fixed background.
- NU FOOD (Takahashi et al., 2017): mainly positive valence; standardized; some Asian, some Western; not available; only 10 different cuisines (six Asian and four Western), no resolution specified.
- ChineseFoodNet (Chen et al., 2017): mainly positive valence; not standardized; only Chinese; not available; variable resolution, collected from the Internet (185,628 images), no fixed background.
- UNICT Food Dataset 889 (Farinella et al., 2015): mainly positive valence; not standardized; Italian, English, Thai, Indian, Japanese, etc.; not available; variable resolution, collected with smartphones (3,583 images), no fixed background.
- UEC-Food 100 (Matsuda et al., 2012) and UEC-Food 256 (Kawano and Yanai, 2015): mainly positive valence; not standardized; France, Italy, United States, China, Thailand, Vietnam, Japan, Indonesia, etc.; not available; variable resolution, collected from the Internet, no fixed background.
- UPMCFOOD-101 (Wang et al., 2015) and ETHZFOOD-101 (Bossard et al., 2014): mainly positive valence; not standardized; more than 101 international food categories; not available; variable resolution, collected from the Internet, no fixed background.
- VIREO-172 (Chen and Ngo, 2016): mainly positive valence; not standardized; only Chinese; not available; variable resolution, collected from the Internet (110,241 images), no fixed background.

The Pittsburgh Fast-Food Image Dataset (PFID: Chen et al., 2009) contains still images, stereo pairs, 360-degree videos and videos of Western fast-food and eating events, acquired in both restaurant environments and laboratory settings. The dataset represents 101 different foods with information on caloric content and is primarily intended for research on automated visual food recognition for dietary assessment. Although the images are registered at a high resolution (2592 pixels × 1944 pixels), the products that are shown occupy only a small region of the image, while the luminance, structure and shadowing of the background vary largely across the image set (due to undulations in the gray cloth in the background). Since the database only contains images of fast-food, it covers only a small part of the valence-arousal space and is therefore not suitable for systematically studying the emotional impact of food. Also, the database is significantly (Western) culture-specific, implying the previously mentioned cross-cultural restrictions for this dataset as well.

The NU FOOD 360×10 database (Takahashi et al., 2017) is a small food image database containing images of 10 different types of food, each shot at three elevation angles (30, 60, and 90 degrees) from 12 different angles (30-degree spacing). Six of the 10 foods are typically Asian (Sashimi, Curry and rice, Eel rice-bowl, Tempura rice-bowl, Fried pork rice-bowl, and Tuna rice-bowl), while the remaining four represent Western food (Beef stew, Hamburger steak, Cheese burger, and Fish burger). The food categories were selected considering the variation of the appearance in both color and shape. However, for reasons of convenience and reproducibility, plastic food samples were used instead of real ones. This may degrade the perceived naturalness of the images.

ChineseFoodNet (Chen et al., 2017) contains over 185,628 images of 208 Chinese food categories. The images in this database are collected through the internet and taken in the real world under unconstrained conditions. The database is intended for the development and training of automatic food recognition algorithms.

The UNICT Food Dataset 889 (UNICT-FD889, Farinella et al., 2015) contains 3,583 images of 889 distinct real meals of different nationalities (e.g., Italian, English, Thai, Indian, Japanese, etc.). The images are acquired with smartphones both with and without flash. Although it is an extended, cross-cultural database, the images are neither standardized nor provided with emotional scores of, e.g., valence and arousal. Additionally, the technical quality of the presented images fluctuates. This is most likely by design, as the database is intended for the development of automatic image retrieval algorithms.

UEC-Food100 (Matsuda et al., 2012) and UEC-Food256 (Kawano and Yanai, 2015) are both Japanese food image datasets, containing 100 and 256 food categories, respectively, from various countries such as France, Italy, United States, China, Thailand, Vietnam, Japan, and Indonesia. The datasets were compiled to develop algorithms that automatically retrieve food images from the internet. The images have widely varying backgrounds (e.g., different compositions and lighting of plates), implying that they have limited value for human neurophysiological food-related studies.

The UPMCFOOD-101 (Wang et al., 2015) and ETHZFOOD-101 (Bossard et al., 2014) datasets are twin datasets with the same 101 international food categories but different real-world images, all collected through the internet. The images of UPMCFOOD-101 are annotated with recipe information, and the images of ETHZFOOD-101 are selfies. The datasets were compiled to develop automatic systems for recipe recognition, an exercise that requires substantially different pictorial features than applications that intend to evoke discriminative, though well-defined, emotional responses.

VIREO-172 (Chen and Ngo, 2016) is a dataset containing 110,241 images of popular Chinese dishes from 172 categories, annotated with 353 ingredient labels. The images are retrieved from the internet and have widely varying backgrounds, implying the associated diversity in technical quality. Like some previously mentioned databases, this database is intended to develop automatic cooking recipe retrieval algorithms with ingredient recognition.
[ "23055170", "13513951", "15159167", "22521912", "24361542", "25009514", "24204353", "18708099", "28784478", "12631504", "26167916", "26344127", "25447013", "27336469", "25521352", "17945385", "16836047", "24530691", "10702749", "23459781", "27841327", "21241285", "22245725", "20602749", "14992636", "24312382", "24318125", "28694958", "25336280", "19186916", "26072251", "27842263", "30599977", "12948696", "27884762", "27330520", "8497555", "22664394", "29403412", "28382006", "23977295", "2802594", "27513636", "25490404", "25728885", "26882325", "26168108", "24709484", "24462488", "10097030", "7178247", "18217832", "18839484", "15703257", "21264748", "26432045", "14978648", "18996158", "15325695", "22590605", "23145257", "21111829", "29441030", "24411766", "24341317", "26244107", "24141714", "24904121", "21440593", "21855585" ]
[ { "pmid": "23055170", "title": "Seriousness checks are useful to improve data validity in online research.", "abstract": "Nonserious answering behavior increases noise and reduces experimental power; it is therefore one of the most important threats to the validity of online research. A simple way to address the problem is to ask respondents about the seriousness of their participation and to exclude self-declared nonserious participants from analysis. To validate this approach, a survey was conducted in the week prior to the German 2009 federal election to the Bundestag. Serious participants answered a number of attitudinal and behavioral questions in a more consistent and predictively valid manner than did nonserious participants. We therefore recommend routinely employing seriousness checks in online surveys to improve data validity." }, { "pmid": "15159167", "title": "Motivation concepts in behavioral neuroscience.", "abstract": "Concepts of motivation are vital to progress in behavioral neuroscience. Motivational concepts help us to understand what limbic brain systems are chiefly evolved to do, i.e., to mediate psychological processes that guide real behavior. This article evaluates some major motivation concepts that have historic importance or have influenced the interpretation of behavioral neuroscience research. These concepts include homeostasis, setpoints and settling points, intervening variables, hydraulic drives, drive reduction, appetitive and consummatory behavior, opponent processes, hedonic reactions, incentive motivation, drive centers, dedicated drive neurons (and drive neuropeptides and receptors), neural hierarchies, and new concepts from affective neuroscience such as allostasis, cognitive incentives, and reward 'liking' versus 'wanting'." }, { "pmid": "22521912", "title": "Modulation of taste responsiveness and food preference by obesity and weight loss.", "abstract": "Palatable foods lead to overeating, and it is almost a forgone conclusion that it is also an important contributor to the current obesity epidemic - there is even talk about food addiction. However, the cause-effect relationship between taste and obesity is far from clear. As discussed here, there is substantial evidence for altered taste sensitivity, taste-guided liking and wanting, and neural reward processing in the obese, but it is not clear whether such traits cause obesity or whether obesity secondarily alters these functions. Studies with calorie restriction-induced weight loss and bariatric surgery in humans and animal models suggest that at least some of the obesity-induced alterations are reversible and consequently represent secondary effects of the obese state. Thus, both genetic and non-genetic predisposition and acquired alterations in taste and reward functions appear to work in concert to aggravate palatability-induced hyperphagia. In addition, palatability is typically associated with high energy content, further challenging energy balance regulation. The mechanisms responsible for these alterations induced by the obese state, weight loss, and bariatric surgery, remain largely unexplored. Better understanding would be helpful in designing strategies to promote healthier eating and prevention of obesity and the accompanying chronic disease risks." 
}, { "pmid": "24361542", "title": "Eat your troubles away: electrocortical and experiential correlates of food image processing are related to emotional eating style and emotional state.", "abstract": "Emotional eating, a trait-like style of food intake in response to negative emotion states, represents an important aspect of overeating and eating related psychopathology. The mechanisms of emotional eating both on experiential and neuronal levels are not well delineated. We recorded event related potentials (ERPs) while individuals with high or low emotional eating style (HEE, n=25; LEE, n=20) viewed and rated pictures of high-caloric food during neutral state vs. negative idiosyncratic emotion induction. Craving ratings increased in HEE and decreased in LEE during negative relative to neutral states. ERPs to food pictures showed an enhanced late positive potential (LPP) over parieto-occipital regions for HEE compared to LEE. Emotional state modulated food picture evoked ERPs over right frontal regions in HEE only. This suggests that appetitive food processing is susceptible to both concurrent emotion and habitual eating style which is of relevance for overeating in healthy and abnormal eating." }, { "pmid": "25009514", "title": "Food-pics: an image database for experimental research on eating and appetite.", "abstract": "Our current environment is characterized by the omnipresence of food cues. The sight and smell of real foods, but also graphically depictions of appetizing foods, can guide our eating behavior, for example, by eliciting food craving and influencing food choice. The relevance of visual food cues on human information processing has been demonstrated by a growing body of studies employing food images across the disciplines of psychology, medicine, and neuroscience. However, currently used food image sets vary considerably across laboratories and image characteristics (contrast, brightness, etc.) and food composition (calories, macronutrients, etc.) are often unspecified. These factors might have contributed to some of the inconsistencies of this research. To remedy this, we developed food-pics, a picture database comprising 568 food images and 315 non-food images along with detailed meta-data. A total of N = 1988 individuals with large variance in age and weight from German speaking countries and North America provided normative ratings of valence, arousal, palatability, desire to eat, recognizability and visual complexity. Furthermore, data on macronutrients (g), energy density (kcal), and physical image characteristics (color composition, contrast, brightness, size, complexity) are provided. The food-pics image database is freely available under the creative commons license with the hope that the set will facilitate standardization and comparability across studies and advance experimental research on the determinants of eating behavior." }, { "pmid": "24204353", "title": "Statistical image properties of print advertisements, visual artworks and images of architecture.", "abstract": "Most visual advertisements are designed to attract attention, often by inducing a pleasant impression in human observers. Accordingly, results from brain imaging studies show that advertisements can activate the brain's reward circuitry, which is also involved in the perception of other visually pleasing images, such as artworks. At the image level, large subsets of artworks are characterized by specific statistical image properties, such as a high self-similarity and intermediate complexity. 
Moreover, some image properties are distributed uniformly across orientations in the artworks (low anisotropy). In the present study, we asked whether images of advertisements share these properties. To answer this question, subsets of different types of advertisements (single-product print advertisements, supermarket and department store leaflets, magazine covers and show windows) were analyzed using computer vision algorithms and compared to other types of images (photographs of simple objects, faces, large-vista natural scenes and branches). We show that, on average, images of advertisements and artworks share a similar degree of complexity (fractal dimension) and self-similarity, as well as similarities in the Fourier spectrum. However, images of advertisements are more anisotropic than artworks. Values for single-product advertisements resemble each other, independent of the type of product promoted (cars, cosmetics, fashion or other products). For comparison, we studied images of architecture as another type of visually pleasing stimuli and obtained comparable results. These findings support the general idea that, on average, man-made visually pleasing images are characterized by specific patterns of higher-order (global) image properties that distinguish them from other types of images. Whether these properties are necessary or sufficient to induce aesthetic perception and how they correlate with brain activation upon viewing advertisements remains to be investigated." }, { "pmid": "18708099", "title": "Affective valence, stimulus attributes, and P300: color vs. black/white and normal vs. scrambled images.", "abstract": "Pictures from the International Affective Picture System (IAPS) were selected to manipulate affective valence (unpleasant, neutral, pleasant) while keeping arousal level the same. The pictures were presented in an oddball paradigm, with a visual pattern used as the standard stimulus. Subjects pressed a button whenever a target was detected. Experiment 1 presented normal pictures in color and black/white. Control stimuli were constructed for both the color and black/white conditions by randomly rearranging 1 cm square fragments of each original picture to produce a \"scrambled\" image. Experiment 2 presented the same normal color pictures with large, medium, and small scrambled condition (2, 1, and 0.5 cm squares). The P300 event-related brain potential demonstrated larger amplitudes over frontal areas for positive compared to negative or neutral images for normal color pictures in both experiments. Attenuated and nonsignificant valence effects were obtained for black/white images. Scrambled stimuli in each study yielded no valence effects but demonstrated typical P300 topography that increased from frontal to parietal areas. The findings suggest that P300 amplitude is sensitive to affective picture valence in the absence of stimulus arousal differences, and that stimulus color contributes to ERP valence effects." }, { "pmid": "28784478", "title": "A comparison of five methodological variants of emoji questionnaires for measuring product elicited emotional associations: An application with seafood among Chinese consumers.", "abstract": "Product insights beyond hedonic responses are increasingly sought and include emotional associations. Various word-based questionnaires for direct measurement exist and an emoji variant was recently proposed. Herein, emotion words are replaced with emoji conveying a range of emotions. 
Further assessment of emoji questionnaires is needed to establish their relevance in food-related consumer research. Methodological research contributes hereto and in the present research the effects of question wording and response format are considered. Specifically, a web study was conducted with Chinese consumers (n=750) using four seafood names as stimuli (mussels, lobster, squid and abalone). Emotional associations were elicited using 33 facial emoji. Explicit reference to \"how would you feel?\" in the question wording changed product emoji profiles minimally. Consumers selected only a few emoji per stimulus when using CATA (check-all-that-apply) questions, and layout of the CATA question had only a small impact on responses. A comparison of CATA questions with forced yes/no questions and RATA (rate-all-that-apply) questions revealed an increase in frequency of emoji use for yes/no questions, but not a corresponding improvement in sample discrimination. For the stimuli in this research, which elicited similar emotional associations, RATA was probably the best methodological choice, with 8.5 emoji being used per stimulus, on average, and increased sample discrimination relative to CATA (12% vs. 6-8%). The research provided additional support for the potential of emoji surveys as a method for measurement of emotional associations to foods and beverages and began contributing to development of guidelines for implementation." }, { "pmid": "12631504", "title": "Relationship of gender and eating disorder symptoms to reported cravings for food: construct validation of state and trait craving questionnaires in Spanish.", "abstract": "Using confirmatory factor analysis, we cross-validated the factor structures of the Spanish versions of the State and Trait Food Cravings Questionnaires (FCQ-S and FCQ-T; ) in a sample of 304 Spanish college students. Controlling for eating disorder symptoms and food deprivation, scores on the FCQ-T were higher for women than for men, but no sex differences were observed on the FCQ-S. Eating disorder symptomatology was predictive of trait cravings, whereas food deprivation was predictive state cravings. Trait cravings, but not state cravings, were more strongly associated to symptoms of anorexia and bulimia nervosa than with other psychopathology. We suggest that cravings can be conceptualized as multidimensional motivational states and that our data support the hypothesis that food cravings are strongly associated with symptoms of bulimia nervosa." }, { "pmid": "26167916", "title": "Functional MRI of Challenging Food Choices: Forced Choice between Equally Liked High- and Low-Calorie Foods in the Absence of Hunger.", "abstract": "We are continuously exposed to food and during the day we make many food choices. These choices play an important role in the regulation of food intake and thereby in weight management. Therefore, it is important to obtain more insight into the mechanisms that underlie these choices. While several food choice functional MRI (fMRI) studies have been conducted, the effect of energy content on neural responses during food choice has, to our knowledge, not been investigated before. Our objective was to examine brain responses during food choices between equally liked high- and low-calorie foods in the absence of hunger. During a 10-min fMRI scan 19 normal weight volunteers performed a forced-choice task. Food pairs were matched on individual liking but differed in perceived and actual caloric content (high-low). 
Food choice compared with non-food choice elicited stronger unilateral activation in the left insula, superior temporal sulcus, posterior cingulate gyrus and (pre)cuneus. This suggests that the food stimuli were more salient despite subject's low motivation to eat. The right superior temporal sulcus (STS) was the only region that exhibited greater activation for high versus low calorie food choices between foods matched on liking. Together with previous studies, this suggests that STS activation during food evaluation and choice may reflect the food's biological relevance independent of food preference. This novel finding warrants further research into the effects of hunger state and weight status on STS, which may provide a marker of biological relevance." }, { "pmid": "26344127", "title": "Standardized food images: A photographing protocol and image database.", "abstract": "The regulation of food intake has gained much research interest because of the current obesity epidemic. For research purposes, food images are a good and convenient alternative for real food because many dietary decisions are made based on the sight of foods. Food pictures are assumed to elicit anticipatory responses similar to real foods because of learned associations between visual food characteristics and post-ingestive consequences. In contemporary food science, a wide variety of images are used which introduces between-study variability and hampers comparison and meta-analysis of results. Therefore, we created an easy-to-use photographing protocol which enables researchers to generate high resolution food images appropriate for their study objective and population. In addition, we provide a high quality standardized picture set which was characterized in seven European countries. With the use of this photographing protocol a large number of food images were created. Of these images, 80 were selected based on their recognizability in Scotland, Greece and The Netherlands. We collected image characteristics such as liking, perceived calories and/or perceived healthiness ratings from 449 adults and 191 children. The majority of the foods were recognized and liked at all sites. The differences in liking ratings, perceived calories and perceived healthiness between sites were minimal. Furthermore, perceived caloric content and healthiness ratings correlated strongly (r ≥ 0.8) with actual caloric content in both adults and children. The photographing protocol as well as the images and the data are freely available for research use on http://nutritionalneuroscience.eu/. By providing the research community with standardized images and the tools to create their own, comparability between studies will be improved and a head-start is made for a world-wide standardized food image database." }, { "pmid": "25447013", "title": "Blue lighting decreases the amount of food consumed in men, but not in women.", "abstract": "Previous research has demonstrated that colors of lighting can modulate participants' motivation to consume the food placed under the lighting. This study was designed to determine whether the colors of lighting can affect the amount of food consumed, in addition to sensory perception of the food. The influence of lighting color was also compared between men and women. One-hundred twelve participants (62 men and 50 women) were asked to consume a breakfast meal (omelets and mini-pancakes) under one of three different lighting colors: white, yellow, and blue. 
During the test, hedonic impression of the food's appearance, willingness to eat, overall flavor intensity and overall impression of the food, and meal size (i.e., the amount of food consumed) were measured. Blue lighting decreased the hedonic impression of the food's appearance, but not the willingness to eat, compared to yellow and white lighting conditions. The blue lighting significantly decreased the amount consumed in men, but not in women, compared to yellow and white lighting conditions. Overall flavor intensity and overall impression of the food were not significantly different among the three lighting colors. In conclusion, this study provides empirical evidence that the color of lighting can modulate the meal size. In particular, blue lighting can decrease the amount of food eaten in men without reducing their acceptability of the food." }, { "pmid": "27336469", "title": "Predicting Complexity Perception of Real World Images.", "abstract": "The aim of this work is to predict the complexity perception of real world images. We propose a new complexity measure where different image features, based on spatial, frequency and color properties are linearly combined. In order to find the optimal set of weighting coefficients we have applied a Particle Swarm Optimization. The optimal linear combination is the one that best fits the subjective data obtained in an experiment where observers evaluate the complexity of real world scenes on a web-based interface. To test the proposed complexity measure we have performed a second experiment on a different database of real world scenes, where the linear combination previously obtained is correlated with the new subjective data. Our complexity measure outperforms not only each single visual feature but also two visual clutter measures frequently used in the literature to predict image complexity. To analyze the usefulness of our proposal, we have also considered two different sets of stimuli composed of real texture images. Tuning the parameters of our measure for this kind of stimuli, we have obtained a linear combination that still outperforms the single measures. In conclusion our measure, properly tuned, can predict complexity perception of different kind of images." }, { "pmid": "25521352", "title": "Evoked emotions predict food choice.", "abstract": "In the current study we show that non-verbal food-evoked emotion scores significantly improve food choice prediction over merely liking scores. Previous research has shown that liking measures correlate with choice. However, liking is no strong predictor for food choice in real life environments. Therefore, the focus within recent studies shifted towards using emotion-profiling methods that successfully can discriminate between products that are equally liked. However, it is unclear how well scores from emotion-profiling methods predict actual food choice and/or consumption. To test this, we proposed to decompose emotion scores into valence and arousal scores using Principal Component Analysis (PCA) and apply Multinomial Logit Models (MLM) to estimate food choice using liking, valence, and arousal as possible predictors. For this analysis, we used an existing data set comprised of liking and food-evoked emotions scores from 123 participants, who rated 7 unlabeled breakfast drinks. 
Liking scores were measured using a 100-mm visual analogue scale, while food-evoked emotions were measured using 2 existing emotion-profiling methods: a verbal and a non-verbal method (EsSense Profile and PrEmo, respectively). After 7 days, participants were asked to choose 1 breakfast drink from the experiment to consume during breakfast in a simulated restaurant environment. Cross validation showed that we were able to correctly predict individualized food choice (1 out of 7 products) for over 50% of the participants. This number increased to nearly 80% when looking at the top 2 candidates. Model comparisons showed that evoked emotions better predict food choice than perceived liking alone. However, the strongest predictive strength was achieved by the combination of evoked emotions and liking. Furthermore we showed that non-verbal food-evoked emotion scores more accurately predict food choice than verbal food-evoked emotions scores." }, { "pmid": "17945385", "title": "Sources of positive and negative emotions in food experience.", "abstract": "Emotions experienced by healthy individuals in response to tasting or eating food were examined in two studies. In the first study, 42 participants reported the frequency with which 22 emotion types were experienced in everyday interactions with food products, and the conditions that elicited these emotions. In the second study, 124 participants reported the extent to which they experienced each emotion type during sample tasting tests for sweet bakery snacks, savoury snacks, and pasta meals. Although all emotions occurred from time to time in response to eating or tasting food, pleasant emotions were reported more often than unpleasant ones. Satisfaction, enjoyment, and desire were experienced most often, and sadness, anger, and jealousy least often. Participants reported a wide variety of eliciting conditions, including statements that referred directly to sensory properties and experienced consequences, and statements that referred to more indirect conditions, such as expectations and associations. Five different sources of food emotions are proposed to represent the various reported eliciting conditions: sensory attributes, experienced consequences, anticipated consequences, personal or cultural meanings, and actions of associated agents." }, { "pmid": "16836047", "title": "An information theory analysis of visual complexity and dissimilarity.", "abstract": "The subjective complexity of a computer-generated bitmap image can be measured by magnitude estimation scaling, and its objective complexity can be measured by its compressed file size. There is a high correlation between these measures of subjective and objective complexity over a large set of marine electronic chart and radar images. The subjective dissimilarity of a pair of bitmap images can be predicted from subjective and objective measures of the complexity of each image, and from the subjective and objective complexity of the image produced by overlaying the two simple images. In addition, the subjective complexity of the image produced by overlaying two simple images can be predicted from the subjective complexity of the simple images and the subjective dissimilarity of the image pair. The results of the experiments that generated these complexity and dissimilarity judgments are consistent with a theory, outlined here, that treats objective and subjective measures of image complexity and dissimilarity as vectors in Euclidean space." 
}, { "pmid": "24530691", "title": "Background music genre can modulate flavor pleasantness and overall impression of food stimuli.", "abstract": "This study aimed to determine whether background music genre can alter food perception and acceptance, but also to determine how the effect of background music can vary as a function of type of food (emotional versus non-emotional foods) and source of music performer (single versus multiple performers). The music piece was edited into four genres: classical, jazz, hip-hop, and rock, by either a single or multiple performers. Following consumption of emotional (milk chocolate) or non-emotional food (bell peppers) with the four musical stimuli, participants were asked to rate sensory perception and impression of food stimuli. Participants liked food stimuli significantly more while listening to the jazz stimulus than the hip-hop stimulus. Further, the influence of background music on overall impression was present in the emotional food, but not in the non-emotional food. In addition, flavor pleasantness and overall impression of food stimuli differed between music genres arranged by a single performer, but not between those by multiple performers. In conclusion, our findings demonstrate that music genre can alter flavor pleasantness and overall impression of food stimuli. Furthermore, the influence of music genre on food acceptance varies as a function of the type of served food and the source of music performer." }, { "pmid": "10702749", "title": "Reproducibility, power and validity of visual analogue scales in assessment of appetite sensations in single test meal studies.", "abstract": "OBJECTIVE\nTo examine reproducibility and validity of visual analogue scales (VAS) for measurement of appetite sensations, with and without a diet standardization prior to the test days.\n\n\nDESIGN\nOn two different test days the subjects recorded their appetite sensations before breakfast and every 30 min during the 4.5 h postprandial period under exactly the same conditions.\n\n\nSUBJECTS\n55 healthy men (age 25.6+/-0.6 y, BMI 22.6+/-0.3 kg¿m2).\n\n\nMEASUREMENTS\nVAS were used to record hunger, satiety, fullness, prospective food consumption, desire to eat something fatty, salty, sweet or savoury, and palatability of the meals. Subsequently an ad libitum lunch was served and energy intake was recorded. Reproducibility was assessed by the coefficient of repeatability (CR) of fasting, mean 4.5 h and peak/nadir values.\n\n\nRESULTS\nCRs (range 20-61 mm) were larger for fasting and peak/nadir values compared with mean 4.5 h values. No parameter seemed to be improved by diet standardization. Using a paired design and a study power of 0.8, a difference of 10 mm on fasting and 5 mm on mean 4.5 h ratings can be detected with 18 subjects. When using desires to eat specific types of food or an unpaired design, more subjects are needed due to considerable variation. The best correlations of validity were found between 4.5 h mean VAS of the appetite parameters and subsequent energy intake (r=+/-0.50-0.53, P<0.001).\n\n\nCONCLUSION\nVAS scores are reliable for appetite research and do not seem to be influenced by prior diet standardization. However, consideration should be given to the specific parameters being measured, their sensitivity and study power. 
International Journal of Obesity (2000) 24, 38-48" }, { "pmid": "23459781", "title": "The FoodCast research image database (FRIDa).", "abstract": "In recent years we have witnessed an increasing interest in food processing and eating behaviors. This is probably due to several reasons. The biological relevance of food choices, the complexity of the food-rich environment in which we presently live (making food-intake regulation difficult), and the increasing health care cost due to illness associated with food (food hazards, food contamination, and aberrant food-intake). Despite the importance of the issues and the relevance of this research, comprehensive and validated databases of stimuli are rather limited, outdated, or not available for non-commercial purposes to independent researchers who aim at developing their own research program. The FoodCast Research Image Database (FRIDa) we present here includes 877 images belonging to eight different categories: natural-food (e.g., strawberry), transformed-food (e.g., french fries), rotten-food (e.g., moldy banana), natural-non-food items (e.g., pinecone), artificial food-related objects (e.g., teacup), artificial objects (e.g., guitar), animals (e.g., camel), and scenes (e.g., airport). FRIDa has been validated on a sample of healthy participants (N = 73) on standard variables (e.g., valence, familiarity, etc.) as well as on other variables specifically related to food items (e.g., perceived calorie content); it also includes data on the visual features of the stimuli (e.g., brightness, high frequency power, etc.). FRIDa is a well-controlled, flexible, validated, and freely available (http://foodcast.sissa.it/neuroscience/) tool for researchers in a wide range of academic fields and industry." }, { "pmid": "27841327", "title": "Food color is in the eye of the beholder: the role of human trichromatic vision in food evaluation.", "abstract": "Non-human primates evaluate food quality based on brightness of red and green shades of color, with red signaling higher energy or greater protein content in fruits and leaves. Despite the strong association between food and other sensory modalities, humans, too, estimate critical food features, such as calorie content, from vision. Previous research primarily focused on the effects of color on taste/flavor identification and intensity judgments. However, whether evaluation of perceived calorie content and arousal in humans are biased by color has received comparatively less attention. In this study we showed that color content of food images predicts arousal and perceived calorie content reported when viewing food even when confounding variables were controlled for. Specifically, arousal positively co-varied with red-brightness, while green-brightness was negatively associated with arousal and perceived calorie content. This result holds for a large array of food comprised of natural food - where color likely predicts calorie content - and of transformed food where, instead, color is poorly diagnostic of energy content. Importantly, this pattern does not emerge with nonfood items. We conclude that in humans visual inspection of food is central to its evaluation and seems to partially engage the same basic system as non-human primates." }, { "pmid": "21241285", "title": "Predicting beauty: fractal dimension and visual complexity in art.", "abstract": "Visual complexity has been known to be a significant predictor of preference for artistic works for some time.
The first study reported here examines the extent to which perceived visual complexity in art can be successfully predicted using automated measures of complexity. Contrary to previous findings the most successful predictor of visual complexity was Gif compression. The second study examined the extent to which fractal dimension could account for judgments of perceived beauty. The fractal dimension measure accounts for more of the variance in judgments of perceived beauty in visual art than measures of visual complexity alone, particularly for abstract and natural images. Results also suggest that when colour is removed from an artistic image observers are unable to make meaningful judgments as to its beauty." }, { "pmid": "22245725", "title": "The color red reduces snack food and soft drink intake.", "abstract": "Based on evidence that the color red elicits avoidance motivation across contexts (Mehta & Zhu, 2009), two studies investigated the effect of the color red on snack food and soft drink consumption. In line with our hypothesis, participants drank less from a red labeled cup than from a blue labeled cup (Study 1), and ate less snack food from a red plate than from a blue or white plate (Study 2). The results suggest that red functions as a subtle stop signal that works outside of focused awareness and thereby reduces incidental food and drink intake." }, { "pmid": "20602749", "title": "Assessment of the emotional responses produced by exposure to real food, virtual food and photographs of food in patients affected by eating disorders.", "abstract": "BACKGROUND\nMany researchers and clinicians have proposed using virtual reality (VR) in adjunct to in vivo exposure therapy to provide an innovative form of exposure to patients suffering from different psychological disorders. The rationale behind the 'virtual approach' is that real and virtual exposures elicit a comparable emotional reaction in subjects, even if, to date, there are no experimental data that directly compare these two conditions. To test whether virtual stimuli are as effective as real stimuli, and more effective than photographs in the anxiety induction process, we tested the emotional reactions to real food (RF), virtual reality (VR) food and photographs (PH) of food in two samples of patients affected, respectively, by anorexia (AN) and bulimia nervosa (BN) compared to a group of healthy subjects. The two main hypotheses were the following: (a) the virtual exposure elicits emotional responses comparable to those produced by the real exposure; (b) the sense of presence induced by the VR immersion makes the virtual experience more ecological, and consequently more effective than static pictures in producing emotional responses in humans.\n\n\nMETHODS\nIn total, 10 AN, 10 BN and 10 healthy control subjects (CTR) were randomly exposed to three experimental conditions: RF, PH, and VR while their psychological (Stait Anxiety Inventory (STAI-S) and visual analogue scale for anxiety (VAS-A)) and physiological (heart rate, respiration rate, and skin conductance) responses were recorded.\n\n\nRESULTS\nRF and VR induced a comparable emotional reaction in patients higher than the one elicited by the PH condition. 
We also found a significant effect of the subjects' degree of presence experienced in the VR condition on their level of perceived anxiety (STAI-S and VAS-A): the higher the sense of presence, the stronger the level of anxiety.\n\n\nCONCLUSIONS\nEven though preliminary, the present data show that VR is more effective than PH in eliciting emotional responses similar to those expected in real life situations. More generally, the present study suggests the potential of VR in a variety of experimental, training and clinical contexts, being its range of possibilities extremely wide and customizable. In particular, in a psychological perspective based on a cognitive behavioral approach, the use of VR enables the provision of specific contexts to help patients to cope with their diseases thanks to an easily controlled stimulation." }, { "pmid": "14992636", "title": "Should we trust web-based studies? A comparative analysis of six preconceptions about internet questionnaires.", "abstract": "The rapid growth of the Internet provides a wealth of new research opportunities for psychologists. Internet data collection methods, with a focus on self-report questionnaires from self-selected samples, are evaluated and compared with traditional paper-and-pencil methods. Six preconceptions about Internet samples and data quality are evaluated by comparing a new large Internet sample (N = 361,703) with a set of 510 published traditional samples. Internet samples are shown to be relatively diverse with respect to gender, socioeconomic status, geographic region, and age. Moreover, Internet findings generalize across presentation formats, are not adversely affected by nonserious or repeat responders, and are consistent with findings from traditional methods. It is concluded that Internet methods can contribute to many areas of psychology." }, { "pmid": "24312382", "title": "Effect of replacing sugar with non-caloric sweeteners in beverages on the reward value after repeated exposure.", "abstract": "BACKGROUND\nThe reward value of food is partly dependent on learned associations. It is not yet known whether replacing sugar with non-caloric sweeteners in food is affecting long-term acceptance.\n\n\nOBJECTIVE\nTo determine the effect of replacing sugar with non-caloric sweeteners in a nutrient-empty drink (soft drink) versus nutrient-rich drink (yoghurt drink) on reward value after repeated exposure.\n\n\nDESIGN\nWe used a randomized crossover design whereby forty subjects (15 men, 25 women) with a mean ± SD age of 21 ± 2 y and BMI of 21.5 ± 1.7 kg/m(2) consumed a fixed portion of a non-caloric sweetened (NS) and sugar sweetened (SS) versions of either a soft drink or a yoghurt drink (counterbalanced) for breakfast which were distinguishable by means of colored labels. Each version of a drink was offered 10 times in semi-random order. Before and after conditioning the reward value of the drinks was assessed using behavioral tasks on wanting, liking, and expected satiety. In a subgroup (n=18) fMRI was performed to assess brain reward responses to the drinks.\n\n\nRESULTS\nOutcomes of both the behavioral tasks and fMRI showed that conditioning did not affect the reward value of the NS and SS versions of the drinks significantly. Overall, subjects preferred the yoghurt drinks to the soft drinks and the SS drinks to the NS drinks. In addition, they expected the yoghurt drinks to be more satiating, they reduced hunger more, and delayed the first eating episode more.
Conditioning did not influence these effects.\n\n\nCONCLUSION\nOur study showed that repeated consumption of a non-caloric sweetened beverage, instead of a sugar sweetened version, appears not to result in changes in the reward value. It cannot be ruled out that learned associations between sensory attributes and food satiating capacity which developed preceding the conditioning period, during lifetime, affected the reward value of the drinks." }, { "pmid": "24318125", "title": "Color and illuminance level of lighting can modulate willingness to eat bell peppers.", "abstract": "BACKGROUND\nFood products are often encountered under colored lighting, particularly in restaurants and retail stores. However, relatively little attention has been paid to whether the color of ambient lighting can affect consumers' motivation for consumption. This study aimed to determine whether color (Experiment 1) and illuminance level (Experiment 2) of lighting can influence consumers' liking of appearance and their willingness to eat bell peppers.\n\n\nRESULTS\nFor red, green, and yellow bell peppers, yellow and blue lighting conditions consistently increased participants' liking of appearance the most and the least, respectively. Participants' willingness to consume bell peppers increased the most under yellow lighting and the least under blue lighting. In addition, a dark condition (i.e. low level of lighting illuminance) decreased liking of appearance and willingness to eat the bell peppers compared to a bright condition (i.e. high level of lighting illuminance).\n\n\nCONCLUSION\nOur findings demonstrate that lighting color and illuminance level can influence consumers' hedonic impression and likelihood to consume bell peppers. Furthermore, the influences of color and illuminance level of lighting appear to be dependent on the surface color of bell peppers." }, { "pmid": "28694958", "title": "Subjective Ratings of Beauty and Aesthetics: Correlations With Statistical Image Properties in Western Oil Paintings.", "abstract": "For centuries, oil paintings have been a major segment of the visual arts. The JenAesthetics data set consists of a large number of high-quality images of oil paintings of Western provenance from different art periods. With this database, we studied the relationship between objective image measures and subjective evaluations of the images, especially evaluations on aesthetics (defined as artistic value) and beauty (defined as individual liking). The objective measures represented low-level statistical image properties that have been associated with aesthetic value in previous research. Subjective rating scores on aesthetics and beauty correlated not only with each other but also with different combinations of the objective measures. Furthermore, we found that paintings from different art periods vary with regard to the objective measures, that is, they exhibit specific patterns of statistical image properties. In addition, clusters of participants preferred different combinations of these properties. In conclusion, the results of the present study provide evidence that statistical image properties vary between art periods and subject matters and, in addition, they correlate with the subjective evaluation of paintings by the participants." 
}, { "pmid": "25336280", "title": "Modulation of eyeblink and postauricular reflexes during the anticipation and viewing of food images.", "abstract": "One of the goals of neuroscience research on the reward system is to fractionate its functions into meaningful subcomponents. To this end, the present study examined emotional modulation of the eyeblink and postauricular components of startle in 60 young adults during anticipation and viewing of food images. Appetitive and disgusting photos served as rewards and punishments in a guessing game. Reflexes evoked during anticipation were not influenced by valence, consistent with the prevailing view that startle modulation indexes hedonic impact (liking) rather than incentive salience (wanting). During the slide-viewing period, postauricular reflexes were larger for correct than incorrect feedback, whereas the reverse was true for blink reflexes. Probes were delivered in brief trains, but only the first response exhibited this pattern. The specificity of affective startle modification makes it a valuable tool for studying the reward system." }, { "pmid": "19186916", "title": "When hunger finds no fault with moldy corn: food deprivation reduces food-related disgust.", "abstract": "The main purpose of this study was to examine if disgust toward unpalatable foods would be reduced among food-deprived subjects and if this attenuation would occur automatically even under moderate levels of food deprivation. Subjects were either satiated or food deprived for 15 hours and electromyographic activity was recorded at the levator muscle region while they were watching pictures of palatable versus unpalatable foods, and pictures of positive versus disgust-related control pictures. For control purposes, subjects' activity of the zygomaticus and corrugator muscles was also recorded. As compared with satiated subjects, food-deprived subjects exhibited stronger activity in the zygomaticus muscle region when watching pictures of palatable foods (but not when watching positive control pictures). More important, hungry subjects exhibited weaker activity in the levator muscle region when watching pictures of unpalatable foods (but not when watching disgusting control pictures). Thus, this is the first study ever to show that specific emotions (disgust) are moderated by homeostatic dysregulation automatically. Results indicate that the modulation of facial expressions might play an important role in lowering the threshold for food intake." }, { "pmid": "26072251", "title": "Neural processing of basic tastes in healthy young and older adults - an fMRI study.", "abstract": "Ageing affects taste perception as shown in psychophysical studies, however, underlying structural and functional mechanisms of these changes are still largely unknown. To investigate the neurobiology of age-related differences associated with processing of basic tastes, we measured brain activation (i.e. fMRI-BOLD activity) during tasting of four increasing concentrations of sweet, sour, salty, and bitter tastes in young (average 23 years of age) and older (average 65 years of age) adults. The current study highlighted age-related differences in taste perception at the different higher order brain areas of the taste pathway. We found that the taste information delivered to the brain in young and older adults was not different, as illustrated by the absence of age effects in NTS and VPM activity. 
Our results indicate that multisensory integration changes with age; older adults showed less brain activation to integrate both taste and somatosensory information. Furthermore, older adults directed less attention to the taste stimulus; therefore attention had to be reallocated by the older individuals in order to perceive the tastes. In addition, we considered that the observed age-related differences in brain activation between taste concentrations in the amygdala reflect its involvement in processing both concentration and pleasantness of taste. Finally, we state the importance of homeostatic mechanisms in understanding the taste quality specificity in age related differences in taste perception." }, { "pmid": "27842263", "title": "Appropriateness of the food-pics image database for experimental eating and appetite research with adolescents.", "abstract": "BACKGROUND\nResearch examining effects of visual food cues on appetite-related brain processes and eating behavior has proliferated. Recently investigators have developed food image databases for use across experimental studies examining appetite and eating behavior. The food-pics image database represents a standardized, freely available image library originally validated in a large sample primarily comprised of adults. The suitability of the images for use with adolescents has not been investigated. The aim of the present study was to evaluate the appropriateness of the food-pics image library for appetite and eating research with adolescents.\n\n\nMETHODS\nThree hundred and seven adolescents (ages 12-17) provided ratings of recognizability, palatability, and desire to eat, for images from the food-pics database. Moreover, participants rated the caloric content (high vs. low) and healthiness (healthy vs. unhealthy) of each image.\n\n\nRESULTS\nAdolescents rated approximately 75% of the food images as recognizable. Approximately 65% of recognizable images were correctly categorized as high vs. low calorie and 63% were correctly classified as healthy vs. unhealthy in 80% or more of image ratings. These results suggest that a smaller subset of the food-pics image database is appropriate for use with adolescents.\n\n\nCONCLUSIONS\nWith some modifications to included images, the food-pics image database appears to be appropriate for use in experimental appetite and eating-related research conducted with adolescents." }, { "pmid": "30599977", "title": "EmojiGrid: A 2D pictorial scale for cross-cultural emotion assessment of negatively and positively valenced food.", "abstract": "Because of the globalization of world food markets there is a growing need for valid and language independent self-assessment tools to measure food-related emotions. We recently introduced the EmojiGrid as a language-independent, graphical affective self-report tool. The EmojiGrid is a Cartesian grid that is labeled with facial icons (emoji) expressing different degrees of valence and arousal. Users can report their subjective ratings of valence and arousal by marking the location on the area of the grid that corresponds to the emoji that best represent their affective state when perceiving a given food or beverage. In a previous study we found that the EmojiGrid is robust, self-explaining and intuitive: valence and arousal ratings were independent of framing and verbal instructions. This suggests that the EmojiGrid may be a valuable tool for cross-cultural studies. 
To test this hypothesis, we performed an online experiment in which respondents from Germany (GE), Japan (JP), the Netherlands (NL) and the United Kingdom (UK) rated valence and arousal for 60 different food images (covering a large part of the affective space) using the EmojiGrid. The results show that the nomothetic relation between valence and arousal has the well-known U-shape for all groups. The European groups (GE, NL and UK) closely agree in their overall rating behavior. Compared to the European groups, the Japanese group systematically gave lower mean arousal ratings to low valenced images and lower mean valence ratings to high valenced images. These results agree with known cultural response characteristics. We conclude that the EmojiGrid is potentially a valid and language-independent affective self-report tool for cross-cultural research on food-related emotions. It reliably reproduces the familiar nomothetic U-shaped relation between valence and arousal across cultures, with shape variations reflecting established cultural characteristics." }, { "pmid": "12948696", "title": "Cortical and limbic activation during viewing of high- versus low-calorie foods.", "abstract": "Despite the high prevalence of obesity, eating disorders, and weight-related health problems in modernized cultures, the neural systems regulating human feeding remain poorly understood. Therefore, we applied functional magnetic resonance imaging (fMRI) to study the cerebral responses of 13 healthy normal-weight adult women as they viewed color photographs of food. The motivational salience of the stimuli was manipulated by presenting images from three categories: high-calorie foods, low-calorie foods, and nonedible dining-related utensils. Both food categories were associated with bilateral activation of the amygdala and ventromedial prefrontal cortex. High-calorie foods yielded significant activation within the medial and dorsolateral prefrontal cortex, thalamus, hypothalamus, corpus callosum, and cerebellum. Low-calorie foods yielded smaller regions of focal activation within medial orbitofrontal cortex; primary gustatory/somatosensory cortex; and superior, middle, and medial temporal regions. Findings suggest that the amygdala may be responsive to a general category of biologically relevant stimuli such as food, whereas separate ventromedial prefrontal systems may be activated depending on the perceived reward value or motivational salience of food stimuli." }, { "pmid": "27884762", "title": "Pleasantness, familiarity, and identification of spice odors are interrelated and enhanced by consumption of herbs and food neophilia.", "abstract": "The primary dimension of odor is pleasantness, which is associated with a multitude of factors. We investigated how the pleasantness, familiarity, and identification of spice odors were associated with each other and with the use of the respective spice, overall use of herbs, and level of food neophobia. A total of 126 adults (93 women, 33 men; age 25-61 years, mean 39 years) rated the odors from 12 spices (oregano, anise, rosemary, mint, caraway, sage, thyme, cinnamon, fennel, marjoram, garlic, and clove) for pleasantness and familiarity, and completed a multiple-choice odor identification. Data on the use of specific spices, overall use of herbs, and Food Neophobia Scale score were collected using an online questionnaire. Familiar odors were mostly rated as pleasant (except garlic), whereas unfamiliar odors were rated as neutral (r = 0.63). 
We observed consistent and often significant trends that suggested the odor pleasantness and familiarity were positively associated with the correct odor identification, consumption of the respective spice, overall use of herbs, and food neophilia. Our results suggest that knowledge acquisition through repetitive exposure to spice odor with active attention may gradually increase the odor pleasantness within the framework set by the chemical characteristics of the aroma compound." }, { "pmid": "27330520", "title": "A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research.", "abstract": "OBJECTIVE\nIntraclass correlation coefficient (ICC) is a widely used reliability index in test-retest, intrarater, and interrater reliability analyses. This article introduces the basic concept of ICC in the content of reliability analysis.\n\n\nDISCUSSION FOR RESEARCHERS\nThere are 10 forms of ICCs. Because each form involves distinct assumptions in their calculation and will lead to different interpretations, researchers should explicitly specify the ICC form they used in their calculation. A thorough review of the research design is needed in selecting the appropriate form of ICC to evaluate reliability. The best practice of reporting ICC should include software information, \"model,\" \"type,\" and \"definition\" selections.\n\n\nDISCUSSION FOR READERS\nWhen coming across an article that includes ICC, readers should first check whether information about the ICC form has been reported and if an appropriate ICC form was used. Based on the 95% confident interval of the ICC estimate, values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.90 are indicative of poor, moderate, good, and excellent reliability, respectively.\n\n\nCONCLUSION\nThis article provides a practical guideline for clinical researchers to choose the correct form of ICC and suggests the best practice of reporting ICC parameters in scientific publications. This article also gives readers an appreciation for what to look for when coming across ICC while reading an article." }, { "pmid": "8497555", "title": "Looking at pictures: affective, facial, visceral, and behavioral reactions.", "abstract": "Colored photographic pictures that varied widely across the affective dimensions of valence (pleasant-unpleasant) and arousal (excited-calm) were each viewed for a 6-s period while facial electromyographic (zygomatic and corrugator muscle activity) and visceral (heart rate and skin conductance) reactions were measured. Judgments relating to pleasure, arousal, interest, and emotional state were measured, as was choice viewing time. Significant covariation was obtained between (a) facial expression and affective valence judgments and (b) skin conductance magnitude and arousal ratings. Interest ratings and viewing time were also associated with arousal. Although differences due to the subject's gender and cognitive style were obtained, affective responses were largely independent of the personality factors investigated. Response specificity, particularly facial expressiveness, supported the view that specific affects have unique patterns of reactivity. The consistency of the dimensional relationships between evaluative judgments (i.e., pleasure and arousal) and physiological response, however, emphasizes that emotion is fundamentally organized by these motivational parameters." 
}, { "pmid": "22664394", "title": "Number of foods available at a meal determines the amount consumed.", "abstract": "The number of foods available at a meal has been suggested as a major determinant of the amount consumed. Two studies conducted in humans test this idea by altering the number of foods available at a meal where participants eat the available foods ad libitum. In Study 1, dinner intake of twenty-seven young adults was measured. The amount consumed was measured when subjects were served either: (a) a composite meal (a protein rich food, a carbohydrate rich food, and a vegetable), (b) a low carbohydrate meal (protein rich food and vegetable), or (c) a vegetarian meal (carbohydrate rich food and vegetable). In Study 2, twenty-four subjects were given two different meals presented either as individual foods or as a composite meal (stir-fry or stew). Both studies show that the greater the number of foods offered at a meal, the greater the total intake. Study 2 demonstrated that the effects observed in Study 1 could not be attributed to different nutrient compositions, but was rather due to the presentation of the individual foods because the same foods that were offered as individual foods were combined to make the composite meal. The results demonstrate that the greater the number of foods offered at a meal, the greater the spontaneous intake of those foods. This finding is important because not only does it expand the concept of variety from the kinds of foods to the number of foods, but it presents an environmental variable that might contribute to overeating and obesity." }, { "pmid": "29403412", "title": "Visual Complexity and Affect: Ratings Reflect More Than Meets the Eye.", "abstract": "Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term 'visual complexity.' Visual complexity can be described as, \"a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components.\" Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an 'arousal-complexity bias' to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated but did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing as it has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli." 
}, { "pmid": "28382006", "title": "Conducting Online Behavioral Research Using Crowdsourcing Services in Japan.", "abstract": "Recent research on human behavior has often collected empirical data from the online labor market, through a process known as crowdsourcing. As well as the United States and the major European countries, there are several crowdsourcing services in Japan. For research purpose, Amazon's Mechanical Turk (MTurk) is the widely used platform among those services. Previous validation studies have shown many commonalities between MTurk workers and participants from traditional samples based on not only personality but also performance on reasoning tasks. The present study aims to extend these findings to non-MTurk (i.e., Japanese) crowdsourcing samples in which workers have different ethnic backgrounds from those of MTurk. We conducted three surveys (N = 426, 453, 167, respectively) designed to compare Japanese crowdsourcing workers and university students in terms of their demographics, personality traits, reasoning skills, and attention to instructions. The results generally align with previous studies and suggest that non-MTurk participants are also eligible for behavioral research. Furthermore, small screen devices are found to impair participants' attention to instructions. Several recommendations concerning this sample are presented." }, { "pmid": "23977295", "title": "Examining complexity across domains: relating subjective and objective measures of affective environmental scenes, paintings and music.", "abstract": "Subjective complexity has been found to be related to hedonic measures of preference, pleasantness and beauty, but there is no consensus about the nature of this relationship in the visual and musical domains. Moreover, the affective content of stimuli has been largely neglected so far in the study of complexity but is crucial in many everyday contexts and in aesthetic experiences. We thus propose a cross-domain approach that acknowledges the multidimensional nature of complexity and that uses a wide range of objective complexity measures combined with subjective ratings. In four experiments, we employed pictures of affective environmental scenes, representational paintings, and Romantic solo and chamber music excerpts. Stimuli were pre-selected to vary in emotional content (pleasantness and arousal) and complexity (low versus high number of elements). For each set of stimuli, in a between-subjects design, ratings of familiarity, complexity, pleasantness and arousal were obtained for a presentation time of 25 s from 152 participants. In line with Berlyne's collative-motivation model, statistical analyses controlling for familiarity revealed a positive relationship between subjective complexity and arousal, and the highest correlations were observed for musical stimuli. Evidence for a mediating role of arousal in the complexity-pleasantness relationship was demonstrated in all experiments, but was only significant for females with regard to music. The direction and strength of the linear relationship between complexity and pleasantness depended on the stimulus type and gender. For environmental scenes, the root mean square contrast measures and measures of compressed file size correlated best with subjective complexity, whereas only edge detection based on phase congruency yielded equivalent results for representational paintings. 
Measures of compressed file size and event density also showed positive correlations with complexity and arousal in music, which is relevant for the discussion on which aspects of complexity are domain-specific and which are domain-general." }, { "pmid": "2802594", "title": "The effect of auditory stimulation on the consumption of soft drinks.", "abstract": "Three groups of five male and five female students were exposed to different levels of auditory stimulation (no music or music played at 70 dB or 90dB) under naturalistic conditions and were permitted to avail themselves freely of a supply of soft drinks. Increasing auditory stimulation produced an increase in total consumption. Reported prior frequencies of soft drink consumption and of exposure to loud music had no bearing on the result of the experimental manipulation, suggesting that the effect was not due to previous history." }, { "pmid": "27513636", "title": "Affective Pictures and the Open Library of Affective Foods (OLAF): Tools to Investigate Emotions toward Food in Adults.", "abstract": "Recently, several sets of standardized food pictures have been created, supplying both food images and their subjective evaluations. However, to date only the OLAF (Open Library of Affective Foods), a set of food images and ratings we developed in adolescents, has the specific purpose of studying emotions toward food. Moreover, some researchers have argued that food evaluations are not valid across individuals and groups, unless feelings toward food cues are compared with feelings toward intense experiences unrelated to food, that serve as benchmarks. Therefore the OLAF presented here, comprising a set of original food images and a group of standardized highly emotional pictures, is intended to provide valid between-group judgments in adults. Emotional images (erotica, mutilations, and neutrals from the International Affective Picture System/IAPS) additionally ensure that the affective ratings are consistent with emotion research. The OLAF depicts high-calorie sweet and savory foods and low-calorie fruits and vegetables, portraying foods within natural scenes matching the IAPS features. An adult sample evaluated both food and affective pictures in terms of pleasure, arousal, dominance, and food craving, following standardized affective rating procedures. The affective ratings for the emotional pictures corroborated previous findings, thus confirming the reliability of evaluations for the food images. Among the OLAF images, high-calorie sweet and savory foods elicited the greatest pleasure, although they elicited, as expected, less arousal than erotica. The observed patterns were consistent with research on emotions and confirmed the reliability of OLAF evaluations. The OLAF and affective pictures constitute a sound methodology to investigate emotions toward food within a wider motivational framework. The OLAF is freely accessible at digibug.ugr.es." }, { "pmid": "25490404", "title": "Meet OLAF, a good friend of the IAPS! The Open Library of Affective Foods: a tool to investigate the emotional impact of food in adolescents.", "abstract": "In the last decades, food pictures have been repeatedly employed to investigate the emotional impact of food on healthy participants as well as individuals who suffer from eating disorders and obesity. 
However, despite their widespread use, food pictures are typically selected according to each researcher's personal criteria, which make it difficult to reliably select food images and to compare results across different studies and laboratories. Therefore, to study affective reactions to food, it becomes pivotal to identify the emotional impact of specific food images based on wider samples of individuals. In the present paper we introduce the Open Library of Affective Foods (OLAF), which is a set of original food pictures created to reliably select food pictures based on the emotions they prompt, as indicated by affective ratings of valence, arousal, and dominance and by an additional food craving scale. OLAF images were designed to allow simultaneous use with affective images from the International Affective Picture System (IAPS), which is a well-known instrument to investigate emotional reactions in the laboratory. The ultimate goal of the OLAF is to contribute to understanding how food is emotionally processed in healthy individuals and in patients who suffer from eating and weight-related disorders. The present normative data, which was based on a large sample of an adolescent population, indicate that when viewing affective non-food IAPS images, valence, arousal, and dominance ratings were in line with expected patterns based on previous emotion research. Moreover, when viewing food pictures, affective and food craving ratings were consistent with research on food cue processing. As a whole, the data supported the methodological and theoretical reliability of the OLAF ratings, therefore providing researchers with a standardized tool to reliably investigate the emotional and motivational significance of food. The OLAF database is publicly available at zenodo.org." }, { "pmid": "25728885", "title": "Studying the impact of plating on ratings of the food served in a naturalistic dining context.", "abstract": "An experiment conducted in a naturalistic dining context is reported, in which the impact of different styles of plating on diners' experience of the food was assessed. A hundred and sixty three diners were separated into two groups during a luncheon event held in a large dining room. Each group of diners was served the same menu, with a variation in the visual presentation of the ingredients on the plate. The results revealed that the diners were willing to pay significantly more for the appetizer (a salad), when arranged in an artistically-inspired manner (M = £5.94 vs. £4.10). The main course was liked more, and considered more artistic, when the various elements were presented in the centre of the plate, rather than placed off to one side. The participants also reported being willing to pay significantly more for the centred than for the offset plating (M = £15.35 vs. £11.65). These results are consistent with the claim that people \"eat first with their eyes\", and that a diner's experience of the very same ingredients can be significantly enhanced (or diminished) simply by changing the visual layout of the food elements of the dish. Results such as these suggest that theories regarding the perception of food can potentially be confirmed (or disconfirmed) outside of the confines of the laboratory (i.e., in naturalistic dining settings)." }, { "pmid": "26882325", "title": "Testing Augmented Reality for Cue Exposure in Obese Patients: An Exploratory Study.", "abstract": "Binge eating is one of the key behaviors in relation to the etiology and severity of obesity. 
Cue exposure with response prevention consists of exposing patients to binge foods while actual eating is not allowed. Augmented reality (AR) has the potential to change the way cue exposure is administered, but very few prior studies have been conducted so far. Starting from these premises, this study was aimed to (a) investigate whether AR foods elicit emotional responses comparable to those produced by the real stimuli, (b) study differences between obese and control participants in terms of emotional responses to food, and (c) compare emotional responses to different categories of foods. To reach these goals, we assess in 15 obese (age, 44.6 ± 13 years; body mass index [BMI], 44.2 ± 8.1) and 15 control participants (age, 43.7 ± 12.8 years; BMI, 21.2 ± 1.4) the emotional responses to high-calorie (savory and sweet) and low-calorie food stimuli, presented through different exposure conditions (real, photographic, and AR). The State-Trait Anxiety Inventory was used for the assessment of state anxiety, and it was administered at the beginning and after the exposure to foods, along with the Visual Analog Scale (VAS) for Hunger and Happiness. To assess the perceived pleasantness, the VAS for Palatability was administered after the exposure to food stimuli. Heart rate, skin conductance response, and facial corrugator supercilii muscle activation were recorded. Although preliminary, the results showed that (a) AR food stimuli were perceived to be as palatable as real stimuli, and they also triggered a similar arousal response; (b) obese individuals showed lower happiness after the exposure to food compared to control participants, with regard to both psychological and physiological responses; and (c) high-calorie savory (vs. low-calorie) food stimuli were perceived by all the participants to be more palatable, and they triggered a greater arousal response." }, { "pmid": "24709484", "title": "\"Yummy\" versus \"Yucky\"! Explicit and implicit approach-avoidance motivations towards appealing and disgusting foods.", "abstract": "Wanting and rejecting food are natural reactions that we humans all experience, often unconsciously, on a daily basis. However, in the food domain, the focus to date has primarily been on the approach tendency, and researchers have tended not to study the two opposing tendencies in a balanced manner. Here, we develop a methodology with which to understand people's implicit and explicit reactions to both positive (appealing) and negative (disgusting) foods. It consists of a combination of direct and indirect computer-based tasks, as well as a validated food image stimulus set, specifically designed to investigate motivational approach and avoidance responses towards foods. Fifty non-dieting participants varying in terms of their hunger state (hungry vs. not hungry) reported their explicit evaluations of pleasantness, wanting, and disgust towards the idea of tasting each of the food images that were shown. Their motivational tendencies towards those food items were assessed indirectly using a joystick-based approach-avoidance procedure. For each of the food images that were presented, the participants had to move the joystick either towards or away from themselves (approach and avoidance movements, respectively) according to some unrelated instructions, while their reaction times were recorded. Our findings demonstrated the hypothesised approach-avoidance compatibility effect: a significant interaction of food valence and direction of movement. 
Furthermore, differences between the experimental groups were observed. The participants in the no-hunger group performed avoidance (vs. approach) movements significantly faster; and their approach movements towards positive (vs. negative) foods were significantly faster. As expected, the self-report measures revealed a strong effect of the food category on the three dependent variables and a strong main effect of the hunger state on wanting and to a lesser extent on pleasantness." }, { "pmid": "24462488", "title": "Colour, pleasantness, and consumption behaviour within a meal.", "abstract": "It is often claimed that colour (e.g., in a meal) affects consumption behaviour. However, just how strong is the evidence in support of this claim, and what are the underlying mechanisms? It has been shown that not only the colour itself, but also the variety and the arrangement of the differently-coloured components in a meal influence consumers' ratings of the pleasantness of a meal (across time) and, to a certain extent, might even affect their consumption behaviour as well. Typically, eating the same food constantly or repeatedly leads to a decrease in its perceived pleasantness, which, as a consequence, might lead to decreased intake of that food. However, variation within a meal (in one or several sensory attributes, or holistically) has been shown to slow down this process. In this review, we first briefly summarize the literature on how general variety in a meal influences these variables and the major theories that have been put forward by researchers to explain them. We then go on to evaluate the evidence of these effects based mainly on the colour of the food explaining the different processes that might affect colour-based sensory-specific satiety and, in more detail, consumption behaviour. In addition, we also discuss the overlap in the definitions of these terms and provide additional hypothesis as to why, in some cases, the opposite pattern of results has been observed." }, { "pmid": "10097030", "title": "Assessing food neophobia: the role of stimulus familiarity.", "abstract": "The present study assesses the effects of food familiarity on food ratings of neophobics and neophilics by having them sample and evaluate familiar and novel foods. Level of neophobia was assessed using the Food Neophobia Scale (FNS). Participants rated their familiarity with each food, their willingness to try the foods and expected liking for the foods, as well as their actual liking for the foods after they were sampled. Willingness to try the foods again in the future, and the amount of food sampled were also assessed. Evaluations of the foods were more positive for familiar vs. unfamiliar foods across all study participants. The responses of neophobics and neophilics were similar for familiar foods, but differed when the foods were unfamiliar, with neophobics making more negative evaluations. Neophobics and neophilics differed least in their liking ratings of the stimuli that were made after the foods were actually sampled, and differed most in their ratings of willingness to try the foods. It is concluded that neophobics have different expectancies about unfamiliar foods, and that these expectancies influence food sampling and rating behaviors. The neophobic's negative attitude toward an unfamiliar food may be ameliorated, but is not eliminated, once sensory information about the food is obtained." 
}, { "pmid": "7178247", "title": "How sensory properties of foods affect human feeding behavior.", "abstract": "The sensory properties of food which can lead to a decrease in the pleasantness of that food after it is eaten, and to enhanced food intake if that property of the food is changed by successive presentation of different foods, were investigated. After eating chocolates of one color the pleasantness of the taste of the eaten color declined more than of the non-eaten colors, although these chocolates differed only in appearance. The presentation of a variety of colors of chocolates, either simultaneously or successively, did not affect food intake compared with consumption of the subject's favorite color. Changes in the shape of food (which affects both appearance and mouth feel) were introduced by offering subjects three successive courses consisting of different shapes of pasta. Changes in shape led to a specific decrease in the pleasantness of the shape eaten and to a significant enhancement (14%) of food intake when three shapes were offered compared with intake of the subject's favorite shape. Changes in just the flavor of food (i.e., cream cheese sandwiches flavored with salt, or with the non-nutritive flavoring agents lemon and saccharin, or curry) led to a significant enhancement (15%) of food intake when all three flavors were presented successively compared with intake of the favorite. The experiments elucidate some of the properties of food which are involved in sensory specific satiety, and which determine the amount of food eaten." }, { "pmid": "18217832", "title": "Measuring visual clutter.", "abstract": "Visual clutter concerns designers of user interfaces and information visualizations. This should not surprise visual perception researchers because excess and/or disorganized display items can cause crowding, masking, decreased recognition performance due to occlusion, greater difficulty at both segmenting a scene and performing visual search, and so on. Given a reliable measure of the visual clutter in a display, designers could optimize display clutter. Furthermore, a measure of visual clutter could help generalize models like Guided Search (J. M. Wolfe, 1994) by providing a substitute for \"set size\" more easily computable on more complex and natural imagery. In this article, we present and test several measures of visual clutter, which operate on arbitrary images as input. The first is a new version of the Feature Congestion measure of visual clutter presented in R. Rosenholtz, Y. Li, S. Mansfield, and Z. Jin (2005). This Feature Congestion measure of visual clutter is based on the analogy that the more cluttered a display or scene is, the more difficult it would be to add a new item that would reliably draw attention. A second measure of visual clutter, Subband Entropy, is based on the notion that clutter is related to the visual information in the display. Finally, we test a third measure, Edge Density, used by M. L. Mack and A. Oliva (2004) as a measure of subjective visual complexity. We explore the use of these measures as stand-ins for set size in visual search models and demonstrate that they correlate well with search performance in complex imagery. This includes the search-in-clutter displays of J. M. Wolfe, A. Oliva, T. S. Horowitz, S. Butcher, and A. Bompas (2002) and Bravo and Farid (2004), as well as new search experiments. 
An additional experiment suggests that color variability, accounted for by Feature Congestion but not the Edge Density measure or the Subband Entropy measure, does matter for visual clutter." }, { "pmid": "18839484", "title": "Intraclass correlations: uses in assessing rater reliability.", "abstract": "Reliability coefficients often take the form of intraclass correlation coefficients. In this article, guidelines are given for choosing among six different forms of the intraclass correlation for reliability studies in which n target are rated by k judges. Relevant to the choice of the coefficient are the appropriate statistical model for the reliability and the application to be made of the reliability results. Confidence intervals for each of the forms are reviewed." }, { "pmid": "15703257", "title": "Pictures of appetizing foods activate gustatory cortices for taste and reward.", "abstract": "Increasing research indicates that concepts are represented as distributed circuits of property information across the brain's modality-specific areas. The current study examines the distributed representation of an important but under-explored category, foods. Participants viewed pictures of appetizing foods (along with pictures of locations for comparison) during event-related fMRI. Compared to location pictures, food pictures activated the right insula/operculum and the left orbitofrontal cortex, both gustatory processing areas. Food pictures also activated regions of visual cortex that represent object shape. Together these areas contribute to a distributed neural circuit that represents food knowledge. Not only does this circuit become active during the tasting of actual foods, it also becomes active while viewing food pictures. Via the process of pattern completion, food pictures activate gustatory regions of the circuit to produce conceptual inferences about taste. Consistent with theories that ground knowledge in the modalities, these inferences arise as reenactments of modality-specific processing." }, { "pmid": "21264748", "title": "Crossmodal correspondences: a tutorial review.", "abstract": "In many everyday situations, our senses are bombarded by many different unisensory signals at any given time. To gain the most veridical, and least variable, estimate of environmental stimuli/properties, we need to combine the individual noisy unisensory perceptual estimates that refer to the same object, while keeping those estimates belonging to different objects or events separate. How, though, does the brain \"know\" which stimuli to combine? Traditionally, researchers interested in the crossmodal binding problem have focused on the roles that spatial and temporal factors play in modulating multisensory integration. However, crossmodal correspondences between various unisensory features (such as between auditory pitch and visual size) may provide yet another important means of constraining the crossmodal binding problem. A large body of research now shows that people exhibit consistent crossmodal correspondences between many stimulus features in different sensory modalities. For example, people consistently match high-pitched sounds with small, bright objects that are located high up in space. The literature reviewed here supports the view that crossmodal correspondences need to be considered alongside semantic and spatiotemporal congruency, among the key constraints that help our brains solve the crossmodal binding problem." 
}, { "pmid": "26432045", "title": "Eating with our eyes: From visual hunger to digital satiation.", "abstract": "One of the brain's key roles is to facilitate foraging and feeding. It is presumably no coincidence, then, that the mouth is situated close to the brain in most animal species. However, the environments in which our brains evolved were far less plentiful in terms of the availability of food resources (i.e., nutriments) than is the case for those of us living in the Western world today. The growing obesity crisis is but one of the signs that humankind is not doing such a great job in terms of optimizing the contemporary food landscape. While the blame here is often put at the doors of the global food companies - offering addictive foods, designed to hit 'the bliss point' in terms of the pleasurable ingredients (sugar, salt, fat, etc.), and the ease of access to calorie-rich foods - we wonder whether there aren't other implicit cues in our environments that might be triggering hunger more often than is perhaps good for us. Here, we take a closer look at the potential role of vision; Specifically, we question the impact that our increasing exposure to images of desirable foods (what is often labelled 'food porn', or 'gastroporn') via digital interfaces might be having, and ask whether it might not inadvertently be exacerbating our desire for food (what we call 'visual hunger'). We review the growing body of cognitive neuroscience research demonstrating the profound effect that viewing such images can have on neural activity, physiological and psychological responses, and visual attention, especially in the 'hungry' brain." }, { "pmid": "14978648", "title": "Translation and validation of study instruments for cross-cultural research.", "abstract": "Cross-cultural research often involves physicians, nurses, and other health care providers. In studies of fecal and urinary incontinence, cross-cultural research has been applied to quality-of-life comparisons, and instruments have been translated to foreign languages for use in other countries. This report presents some of the principal methodological issues and problems associated with translating questionnaires for use in cross-cultural research in a manner relevant to clinicians and health care practitioners who are aware that, unless these potential problems are addressed, the results of their research may be suspect. Translation is the most common method of preparing instruments for cross-cultural research and has pitfalls that threaten validity. Some of these problems are difficult to detect and may have a detrimental effect on the study results. Identification and correction of problems can enhance research quality and validity. A method for translation and validation is presented in detail. However, the specific validation method adopted is less important than the recognition that the translation process must be appropriate and the validation process rigorous." }, { "pmid": "18996158", "title": "Vegetarianism and food perception. Selective visual attention to meat pictures.", "abstract": "Vegetarianism provides a model system to examine the impact of negative affect towards meat, based on ideational reasoning. It was hypothesized that meat stimuli are efficient attention catchers in vegetarians. Event-related brain potential recordings served to index selective attention processes at the level of initial stimulus perception. 
Consistent with the hypothesis, late positive potentials to meat pictures were enlarged in vegetarians compared to omnivores. This effect was specific for meat pictures and obtained during passive viewing and an explicit attention task condition. These findings demonstrate the attention capture of food stimuli, deriving affective salience from ideational reasoning and symbolic meaning." }, { "pmid": "15325695", "title": "Effect of ambience on food intake and food choice.", "abstract": "Eating takes place in a context of environmental stimuli known as ambience. Various external factors such as social and physical surroundings, including the presence of other people and sound, temperature, smell, color, time, and distraction affect food intake and food choice. Food variables such as the temperature, smell, and color of the food also influence food intake and choice differently. However, the influence of ambience on nutritional health is not fully understood. This review summarizes the research on ambient influences on food intake and food choice. The literature suggests that there are major influences of ambience on eating behavior and that the magnitude of the effect of ambience may be underestimated. Changes in intake can be detected with different levels of the number of people present, food accessibility, eating locations, food color, ambient temperatures and lighting, and temperature of foods, smell of food, time of consumption, and ambient sounds. It is suggested that the manipulation of these ambient factors as a whole or individually may be used therapeutically to alter food intake and that more attention needs to be paid to ambience in nutrition-related research." }, { "pmid": "22590605", "title": "Gender and weight shape brain dynamics during food viewing.", "abstract": "Hemodynamic imaging results have associated both gender and body weight to variation in brain responses to food-related information. However, the spatio-temporal brain dynamics of gender-related and weight-wise modulations in food discrimination still remain to be elucidated. We analyzed visual evoked potentials (VEPs) while normal-weighted men (n = 12) and women (n = 12) categorized photographs of energy-dense foods and non-food kitchen utensils. VEP analyses showed that food categorization is influenced by gender as early as 170 ms after image onset. Moreover, the female VEP pattern to food categorization co-varied with participants' body weight. Estimations of the neural generator activity over the time interval of VEP modulations (i.e. by means of a distributed linear inverse solution [LAURA]) revealed alterations in prefrontal and temporo-parietal source activity as a function of image category and participants' gender. However, only neural source activity for female responses during food viewing was negatively correlated with body-mass index (BMI) over the respective time interval. Women showed decreased neural source activity particularly in ventral prefrontal brain regions when viewing food, but not non-food objects, while no such associations were apparent in male responses to food and non-food viewing. Our study thus indicates that gender influences are already apparent during initial stages of food-related object categorization, with small variations in body weight modulating electrophysiological responses especially in women and in brain areas implicated in food reward valuation and intake control. 
These findings extend recent reports on prefrontal reward and control circuit responsiveness to food cues and the potential role of this reactivity pattern in the susceptibility to weight gain." }, { "pmid": "23145257", "title": "Emotional effects of dynamic textures.", "abstract": "This study explores the effects of various spatiotemporal dynamic texture characteristics on human emotions. The emotional experience of auditory (eg, music) and haptic repetitive patterns has been studied extensively. In contrast, the emotional experience of visual dynamic textures is still largely unknown, despite their natural ubiquity and increasing use in digital media. Participants watched a set of dynamic textures, representing either water or various different media, and self-reported their emotional experience. Motion complexity was found to have mildly relaxing and nondominant effects. In contrast, motion change complexity was found to be arousing and dominant. The speed of dynamics had arousing, dominant, and unpleasant effects. The amplitude of dynamics was also regarded as unpleasant. The regularity of the dynamics over the textures' area was found to be uninteresting, nondominant, mildly relaxing, and mildly pleasant. The spatial scale of the dynamics had an unpleasant, arousing, and dominant effect, which was larger for textures with diverse content than for water textures. For water textures, the effects of spatial contrast were arousing, dominant, interesting, and mildly unpleasant. None of these effects were observed for textures of diverse content. The current findings are relevant for the design and synthesis of affective multimedia content and for affective scene indexing and retrieval." }, { "pmid": "21111829", "title": "The first taste is always with the eyes: a meta-analysis on the neural correlates of processing visual food cues.", "abstract": "Food selection is primarily guided by the visual system. Multiple functional neuro-imaging studies have examined the brain responses to visual food stimuli. However, the results of these studies are heterogeneous and there still is uncertainty about the core brain regions involved in the neural processing of viewing food pictures. The aims of the present study were to determine the concurrence in the brain regions activated in response to viewing pictures of food and to assess the modulating effects of hunger state and the food's energy content. We performed three Activation Likelihood Estimation (ALE) meta-analyses on data from healthy normal weight subjects in which we examined: 1) the contrast between viewing food and nonfood pictures (17 studies, 189 foci), 2) the modulation by hunger state (five studies, 48 foci) and 3) the modulation by energy content (seven studies, 86 foci). The most concurrent brain regions activated in response to viewing food pictures, both in terms of ALE values and the number of contributing experiments, were the bilateral posterior fusiform gyrus, the left lateral orbitofrontal cortex (OFC) and the left middle insula. Hunger modulated the response to food pictures in the right amygdala and left lateral OFC, and energy content modulated the response in the hypothalamus/ventral striatum. Overall, the concurrence between studies was moderate: at best 41% of the experiments contributed to the clusters for the contrast between food and nonfood. Therefore, future research should further elucidate the separate effects of methodological and physiological factors on between-study variations." 
}, { "pmid": "29441030", "title": "Multisensory Technology for Flavor Augmentation: A Mini Review.", "abstract": "There is growing interest in the development of new technologies that capitalize on our emerging understanding of the multisensory influences on flavor perception in order to enhance human-food interaction design. This review focuses on the role of (extrinsic) visual, auditory, and haptic/tactile elements in modulating flavor perception and more generally, our food and drink experiences. We review some of the most exciting examples of recent multisensory technologies for augmenting such experiences. Here, we discuss applications for these technologies, for example, in the field of food experience design, in the support of healthy eating, and in the rapidly growing world of sensory marketing. However, as the review makes clear, while there are many opportunities for novel human-food interaction design, there are also a number of challenges that will need to be tackled before new technologies can be meaningfully integrated into our everyday food and drink experiences." }, { "pmid": "24411766", "title": "A review of visual cues associated with food on food acceptance and consumption.", "abstract": "Several sensory cues affect food intake including appearance, taste, odor, texture, temperature, and flavor. Although taste is an important factor regulating food intake, in most cases, the first sensory contact with food is through the eyes. Few studies have examined the effects of the appearance of a food portion on food acceptance and consumption. The purpose of this review is to identify the various visual factors associated with food such as proximity, visibility, color, variety, portion size, height, shape, number, volume, and the surface area and their effects on food acceptance and consumption. We suggest some ways that visual cues can be used to increase fruit and vegetable intake in children and decrease excessive food intake in adults. In addition, we discuss the need for future studies that can further establish the relationship between several unexplored visual dimensions of food (specifically shape, number, size, and surface area) and food intake." }, { "pmid": "24341317", "title": "Portion size me: plate-size induced consumption norms and win-win solutions for reducing food intake and waste.", "abstract": "Research on the self-serving of food has empirically ignored the role that visual consumption norms play in determining how much food we serve on different sized dinnerware. We contend that dinnerware provides a visual anchor of an appropriate fill-level, which in turn, serves as a consumption norm (Study 1). The trouble with these dinnerware-suggested consumption norms is that they vary directly with dinnerware size--Study 2 shows Chinese buffet diners with large plates served 52% more, ate 45% more, and wasted 135% more food than those with smaller plates. Moreover, education does not appear effective in reducing such biases. Even a 60-min, interactive, multimedia warning on the dangers of using large plates had seemingly no impact on 209 health conference attendees, who subsequently served nearly twice as much food when given a large buffet plate 2 hr later (Study 3). These findings suggest that people may have a visual plate-fill level--perhaps 70% full--that they anchor on when determining the appropriate consumption norm and serving themselves. 
Study 4 suggests that the Delboeuf illusion offers an explanation why people do not fully adjust away from this fill-level anchor and continue to be biased across a large range of dishware sizes. These findings have surprisingly wide-ranging win-win implications for the welfare of consumers as well as for food service managers, restaurateurs, packaged goods managers, and public policy officials." }, { "pmid": "26244107", "title": "Conducting perception research over the internet: a tutorial review.", "abstract": "This article provides an overview of the recent literature on the use of internet-based testing to address important questions in perception research. Our goal is to provide a starting point for the perception researcher who is keen on assessing this tool for their own research goals. Internet-based testing has several advantages over in-lab research, including the ability to reach a relatively broad set of participants and to quickly and inexpensively collect large amounts of empirical data, via services such as Amazon's Mechanical Turk or Prolific Academic. In many cases, the quality of online data appears to match that collected in lab research. Generally-speaking, online participants tend to be more representative of the population at large than those recruited for lab based research. There are, though, some important caveats, when it comes to collecting data online. It is obviously much more difficult to control the exact parameters of stimulus presentation (such as display characteristics) with online research. There are also some thorny ethical elements that need to be considered by experimenters. Strengths and weaknesses of the online approach, relative to others, are highlighted, and recommendations made for those researchers who might be thinking about conducting their own studies using this increasingly-popular approach to research in the psychological sciences." }, { "pmid": "24904121", "title": "Modeling visual clutter perception using proto-object segmentation.", "abstract": "We introduce the proto-object model of visual clutter perception. This unsupervised model segments an image into superpixels, then merges neighboring superpixels that share a common color cluster to obtain proto-objects-defined here as spatially extended regions of coherent features. Clutter is estimated by simply counting the number of proto-objects. We tested this model using 90 images of realistic scenes that were ranked by observers from least to most cluttered. Comparing this behaviorally obtained ranking to a ranking based on the model clutter estimates, we found a significant correlation between the two (Spearman's ρ = 0.814, p < 0.001). We also found that the proto-object model was highly robust to changes in its parameters and was generalizable to unseen images. We compared the proto-object model to six other models of clutter perception and demonstrated that it outperformed each, in some cases dramatically. Importantly, we also showed that the proto-object model was a better predictor of clutter perception than an actual count of the number of objects in the scenes, suggesting that the set size of a scene may be better described by proto-objects than objects. We conclude that the success of the proto-object model is due in part to its use of an intermediate level of visual representation-one between features and objects-and that this is evidence for the potential importance of a proto-object representation in many common visual percepts and tasks." 
}, { "pmid": "21440593", "title": "Effects of energy conditioning on food preferences and choice.", "abstract": "This study investigated the development of conditioned preferences for foods varying in energy content in human adults in a laboratory setting. In a within-subjects design, 44 participants consumed high and low energy yoghurt drinks (255 kcal and 57 kcal per 200 ml serving, respectively) first thing in the morning following 8 h of fasting, every day for two weeks, with 5 exposures to each yoghurt drink on alternate days. The high and low energy yoghurt drinks were paired with two coloured labels (blue or pink), with the pairings fully counter-balanced. Every day of the third (test) week, participants were given a free choice of either consuming the pink or the blue labelled yoghurt drink. Participants chose the high energy drink significantly more often over the low energy drink, suggesting a conditioned preference for a delayed (energy) reward. These findings provide further evidence for energy based learning in human adults. This study also provides a new approach to the conditioning paradigm (cueing energy via a coloured label instead of flavour) and includes a new and important measure in this research area (preference instead of liking)." }, { "pmid": "21855585", "title": "Neatness counts. How plating affects liking for the taste of food.", "abstract": "Two studies investigated the effect that the arrangement of food on a plate has on liking for the flavor of the food. Food presented in a neatly arranged presentation is liked more than the same food presented in a messy manner. A third study found that subjects expected to like the food in the neat presentations more than in the messy ones and would be willing to pay more for them. They also indicated that the food in the neat presentations came from a higher quality restaurant and that more care was taken with its preparation than the food in the messy presentations. Only the animal-based food was judged as being more contaminated when presented in a messy rather than a neat way. Neatness of the food presentation increases liking for the taste of the food by suggesting greater care on the part of the preparer. Two mechanisms by which greater care might increase liking are discussed." } ]
Journal of the Association for Information Science and Technology
30775406
PMC6360409
10.1002/asi.24082
Comparing neural‐ and N‐gram‐based language models for word segmentation
Word segmentation is the task of inserting or deleting word boundary characters in order to separate character sequences that correspond to words in some language. In this article we propose an approach based on a beam search algorithm and a language model working at the byte/character level, the latter component implemented either as an n‐gram model or a recurrent neural network. The resulting system analyzes the text input with no word boundaries one token at a time, which can be a character or a byte, and uses the information gathered by the language model to determine whether a boundary must be placed at the current position or not. Our aim is to use this system in a preprocessing step for a microtext normalization system. This means that it needs to effectively cope with the data sparsity present in this kind of text. We also strove to surpass the performance of two readily available word segmentation systems: The well‐known and accessible Word Breaker by Microsoft, and the Python module WordSegment by Grant Jenks. The results show that we have met our objectives, and we hope to continue to improve both the precision and the efficiency of our system in the future.
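To make the boundary-placement idea described above concrete, the toy Python sketch below walks the input one character at a time and asks a character-bigram model whether inserting a word boundary at the current position yields a more likely sequence. It is only a greedy simplification under stated assumptions, not the authors' system: the real approach uses a beam search and a trained byte/character-level n-gram or recurrent language model, and the tiny training string, the add-one smoothing over a 256-symbol alphabet, and all function names here are illustrative choices.

```python
# Toy, greedy sketch of character-level boundary placement. NOT the authors'
# system (which uses beam search and a trained n-gram/RNN language model);
# the training string, smoothing constant, and names are illustrative only.
import math
from collections import defaultdict


def train_char_bigram(corpus):
    """Count character bigrams over a training text that contains spaces."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts


def log_prob(counts, prev, nxt):
    """Add-one smoothed log P(nxt | prev) over a 256-symbol alphabet."""
    total = sum(counts[prev].values())
    return math.log((counts[prev][nxt] + 1) / (total + 256))


def segment_greedy(text, counts):
    """Insert a space between two characters whenever the model prefers it."""
    out = [text[0]]
    for prev, ch in zip(text, text[1:]):
        keep = log_prob(counts, prev, ch)
        split = log_prob(counts, prev, " ") + log_prob(counts, " ", ch)
        out.append(" " + ch if split > keep else ch)
    return "".join(out)


model = train_char_bigram("hi there " * 40)
print(segment_greedy("hithere", model))   # -> "hi there"
```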
Related Work
Word segmentation is an important preprocessing step in several natural language‐processing systems, such as machine translation (Koehn & Knight, 2003), information retrieval (Alfonseca, Bilac, & Pharies, 2008), or speech recognition (Adda‐Decker, Adda, & Lamel, 2000). On the other hand, most Asian languages, although retaining the concept of word, do not use word boundary characters in their writing systems to separate these elements. As a result, the application of word segmentation to these languages has drawn a lot of attention from the research community, with abundant work in recent years (Chen, Qiu, Zhu, Liu, & Huang, 2015; Pei, Ge, & Chang, 2014; Xu & Sun, 2016; Zheng, Chen, & Xu, 2013).
Beyond the Asian context, we can also find European languages with highly complex morphology, such as German, Turkish, or Finnish, which can also benefit from a conceptually different word segmentation procedure (Alfonseca et al., 2008; Koehn & Knight, 2003). In these cases, and mainly for agglutinative or compounding languages (Krott, Schreuder, Harald Baayen, & Dressler, 2007), new words are usually created just by joining together previously known words. A system whose vocabulary lacks these new words may still be able to process them if some sort of word segmentation system is in place. However, it is worth noting that this is a slightly different kind of word segmentation, as it is concerned with extracting the base words that form a compound word. In contrast, our approach focuses on separating all words, compound or not, from each other.
Moving on to the web domain, there are special types of tokens that can also be targeted by a segmentation system. The first ones to appear, and an essential concept for the web itself, are URLs (Chi, Ding, & Lim, 1999; Wang et al., 2011). These elements do not admit literal whitespace in their formation, but most of the time they do contain multiple words. Words may be separated by a special encoding of the whitespace character, such as percent‐encoding or another encoding that uses URL‐safe characters. Most other times, words are simply joined together with no boundary characters, and thus the need for a segmentation process arises.
Then, with the advent of Web 2.0, the use of special tokens called hashtags became very common in social media (Maynard & Greenwood, 2014; Srinivasan, Bhattacharya, & Chakraborty, 2012). Like URLs, hashtags may also be formed by multiple words. Unlike URLs, however, they do not use any word boundary characters between words, so a segmentation system is even more useful in this case.
The segmentation procedure that most of the previous work follows can be summarized in two steps. First, the input is scanned to obtain a list of possible segmentation candidates. This step can be iterative, obtaining lists of candidates for substrings of the input until it is wholly consumed. Sets of predefined rules (Koehn & Knight, 2003) or other resources such as dictionaries and word or morpheme lexicons (Kacmarcik, Brockett, & Suzuki, 2000) may be used for candidate generation. Then, in the second step, the best or n best segmentation candidates are selected as the final solution.
In this case, they resort to some scoring function, such as the likelihood given by the syntactic analysis of the candidate segmentations (Wu & Jiang, 1998) or the most probable sequence of words given a language model (Wang et al., 2011).
Some other techniques, usually employed for the Chinese language, treat word segmentation as a tagging task (Xue, 2003). Under this approach, the objective of the segmentation system is to assign a tag to each character in the input text, rendering word segmentation a sequence labeling task. The tags mark the position of a particular character in a candidate segmented word, and usually come from the following set: Beginning of word, middle of word, end of word, or single‐character word.
Recently, neural network‐based approaches have joined traditional statistical ones based on Maximum Entropy (Low, Ng, & Guo, 2005) and Conditional Random Fields (Peng, Feng, & McCallum, 2004). These models may be used inside the traditional sequence tagging framework (Chen, Qiu, Zhu, & Huang, 2015; Pei et al., 2014; Zheng et al., 2013) but, more interestingly, they also enable new approaches to word segmentation. Cai and Zhao (2016) obtain segmented word embeddings from the corresponding candidate character sequences and then feed them to a neural network for scoring. Zhang, Zhang, and Fu (2016) consider a transition‐based framework where they process the input at the character level and use neural networks to decide on the next action given the current state of the system: Append the character to a previously segmented word or insert a word boundary. Both of these approaches use recurrent neural networks for segmentation candidate generation and beam search algorithms to find the best segmentation.
Outside the Chinese context, one of the most popular state‐of‐the‐art systems for word segmentation in multiple languages is the Microsoft Word Breaker from Project Oxford (Wang et al., 2011). Its original article defines the word segmentation problem within a Bayesian Minimum Risk framework. Using a uniform risk function and the Maximum a posteriori decision rule, they define the a priori distribution, or segmentation prior, as a Markov n‐gram. For the a posteriori distribution, or transformation model, they consider a binomial distribution and a word length‐adjusted model. Finally, they solve the optimization problem posed by the decision rule using a word‐synchronous beam search algorithm.
The language model they use for the a priori distribution is presented in Wang, Thrasher, Viegas, Li, and Hsu (2010). This is a word‐based smoothed backoff n‐gram model constructed using the CALM algorithm (Wang & Li, 2009) with the web crawling data of the Bing search engine. Some particular features of this model are that all words are first lowercased and their non‐ASCII alphanumeric characters transformed or removed to fit this character set, and also that it is being continuously updated with new data from the web. However, the aggressive preprocessing performed by this system may result in limitations in two particular domains: Microtexts and non‐English languages. In the first case, data sparsity may pose a problem for a word‐based n‐gram language model. This type of model would have to see every possible variation of a standard word in order to process it appropriately.
As an example, an occurrence of the unknown word “hii” would mean using the <UNK> token instead of the information stored for the equivalent standard known word “hi,” which constitutes some loss of information. Then, if the also unknown word “theeere” occurs, it would mean that the system has failed to use any relevant information to process the input. Hence, the input “hiitheeere” could be incorrectly segmented into “hii thee ere,” a more likely path for the model given the known token “thee.”
On the other hand, working only with lowercased ASCII alphanumeric characters leaves non‐Latin alphabets out of the question—although Latin transcriptions could be used—and limits the overall capacity of the system due to the loss of information from the removed or replaced characters. For instance, consider “momsday” in the context of text normalization. The n‐gram model would give higher likelihood to “mom s day,” as it would have seen the token “mom” very frequently, both when appearing on its own and when swapping the “'” for a word boundary character in “mom's,” obtaining “mom s day.” However, we prefer in this case “moms day” as the most likely answer, not only because it can be the correct answer but also because we can later correct the first word to include the apostrophe, if needed and/or appropriate, using a text normalization system.
The WordSegment Python module is an implementation of the ideas covered in Norvig (2009). It is based on 1‐gram and 2‐gram language models working at the word level that are paired with a Viterbi algorithm for decoding. The system first obtains segmentation candidates that are scored using the n‐gram models, and then the best sequence of segmented words is selected using the Viterbi algorithm. A clear advantage of this system for our work is that we can easily train its n‐gram models from scratch in order to adapt it to our text domains/languages. This provides us with a better comparative framework than the Word Breaker.
Our current take on the word segmentation task extends the work in Doval, Gómez‐Rodríguez, and Vilares (2016) with a new beam search algorithm and newer implementations for the language model component. We also broaden the scope of our work by targeting not only Spanish but also English, German, Turkish, and Finnish. We have chosen the last three languages based on the need to test our approach with morphologically complex languages, with the agglutinative languages Turkish and Finnish being the most notable cases.
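As a companion illustration of the word-level alternative mentioned above (the Norvig-style idea behind WordSegment), the sketch below picks the split that maximizes the sum of log unigram word probabilities via memoized dynamic programming. The tiny vocabulary, the maximum word length, and the length-based penalty for unknown words are illustrative assumptions; WordSegment itself also uses bigram counts and a far larger corpus.

```python
# Stripped-down sketch of unigram-based segmentation in the spirit of
# Norvig (2009)/WordSegment: NOT the module's actual code. Vocabulary,
# maximum word length, and unknown-word penalty are toy assumptions.
import math
from functools import lru_cache

UNIGRAMS = {"hi": 30, "there": 25, "how": 20, "are": 20, "you": 20, "the": 50}
TOTAL = sum(UNIGRAMS.values())
MAX_WORD_LEN = 12


def word_score(word):
    """Log probability of a word; unknown words get a length-based penalty."""
    if word in UNIGRAMS:
        return math.log(UNIGRAMS[word] / TOTAL)
    return math.log(1.0 / (TOTAL * 10 ** len(word)))


@lru_cache(maxsize=None)
def segment(text):
    """Return (score, words) for the best segmentation of `text`."""
    if not text:
        return 0.0, ()
    best = None
    for i in range(1, min(len(text), MAX_WORD_LEN) + 1):
        head, tail = text[:i], text[i:]
        tail_score, tail_words = segment(tail)
        candidate = (word_score(head) + tail_score, (head,) + tail_words)
        if best is None or candidate[0] > best[0]:
            best = candidate
    return best


print(segment("hithere")[1])   # -> ('hi', 'there') with this toy vocabulary
```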
[ "9377276" ]
[ { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." } ]
Heliyon
30839900
PMC6365394
10.1016/j.heliyon.2019.e01180
Government capabilities as drivers of performance: path to prosperity
The purpose of this study is to test the relationship between specific capabilities of Cyprus' government and performance, and the impact of performance on Cypriots' prosperity. Via the resource-based view model (RBV), it was hypothesized that each capability (i.e., entrepreneurship, motivation, investment, and adaptation) is positively related to the Cyprus government's achievement of higher performance. It was also hypothesized that performance is positively related to prosperity. Data were collected from 200 Cypriot citizens aged 18 or over. Using correlation analysis, the study shows that entrepreneurial and adaptive capabilities have a statistically strong positive relationship with performance. In turn, performance has a strong positive relationship with prosperity. Several implications for democratic governments can be drawn from this study's findings, and interesting directions for future research are provided.
9. Related work
The resource-based view tool could open up great opportunities for political marketing. New studies can adapt this study's model (Entrepreneurial, Motivation, Investment, and Adaptation capabilities as drivers of performance and prosperity) in order to build on the political marketing of political parties and individual politicians (Table 1). Further research could identify new government resources and capabilities and collect data from other democratic governments. This study focused on a small country, which may limit the generalizability of the findings to other countries; future research could validate the findings using data obtained from much bigger countries (e.g., the U.S.A. or the United Kingdom).
[]
[]
Royal Society Open Science
30800345
PMC6366196
10.1098/rsos.180817
Another coin bites the dust: an analysis of dust in UTXO-based cryptocurrencies
Unspent Transaction Outputs (UTXOs) are the internal mechanism used in many cryptocurrencies to represent coins. Such a representation has some clear benefits, but it also entails some complexities that, if not properly handled, may leave the system in an inefficient state. Specifically, inefficiencies arise when wallets (the software responsible for transferring coins between parties) do not manage UTXOs properly when performing payments. In this paper, we study three cryptocurrencies: Bitcoin, Bitcoin Cash and Litecoin, by analysing the state of their UTXO sets, that is, the status of their sets of spendable coins. These three cryptocurrencies are the top-3 UTXO-based cryptocurrencies by market capitalization. Our analysis shows that the usage of each cryptocurrency presents some differences, which lead to different results. Furthermore, it also points out that the management of transactions has not always been performed efficiently and that, therefore, the current state of the UTXO sets is far from ideal.
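For readers unfamiliar with the UTXO model, the minimal Python sketch below illustrates the bookkeeping described above: coins exist only as unspent outputs, and a payment consumes the selected outputs and creates new ones (the payment itself plus any change). The field names, the amounts, and the placeholder transaction identifier are made up for illustration and do not reflect any particular wallet implementation.

```python
# Minimal illustration of UTXO bookkeeping: a payment consumes selected
# unspent outputs and creates new ones. All names and values are made up.
from dataclasses import dataclass


@dataclass(frozen=True)
class UTXO:
    txid: str      # transaction that created this output
    index: int     # position of the output inside that transaction
    amount: int    # value in the currency's smallest unit (e.g., satoshi)


def pay(utxo_set, selected, amount, fee):
    """Spend `selected` UTXOs: remove them, add payment and change outputs."""
    funds = sum(u.amount for u in selected)
    if funds < amount + fee:
        raise ValueError("selected inputs do not cover amount plus fee")
    utxo_set -= set(selected)
    utxo_set.add(UTXO("new_tx", 0, amount))        # output paying the payee
    change = funds - amount - fee
    if change > 0:                                 # change output back to payer
        utxo_set.add(UTXO("new_tx", 1, change))
    return utxo_set


wallet = {UTXO("a1", 0, 50_000), UTXO("b2", 1, 20_000)}
print(pay(wallet, [UTXO("a1", 0, 50_000)], amount=30_000, fee=1_000))
```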
6. Related work
As we have seen, the characteristics of the UTXO set can be a key point in cryptocurrencies like Bitcoin, Litecoin and Bitcoin Cash. The size and performance of this set have a direct impact on how the system will perform, and it is thus a key area for improving the scalability and efficiency of these cryptocurrencies. For example, transaction generation performance in Bitcoin is greatly influenced by the size of the UTXO set [9].
We can currently find typical statistics and simple visualizations of the UTXO set of Bitcoin [10,11], but we are not aware of a more in-depth study and comparison of the UTXO sets of significant cryptocurrencies like the one presented in this paper. We believe that knowing the composition and evolution of the UTXO set will undoubtedly provide the means to better understand it and to develop strategies and tools to improve UTXO set usage, thus enhancing the performance of the whole system.
The relevance of the UTXO set is not new: concerns about its size, composition and performance have been around for some time [12]. These concerns are especially relevant in light of the scalability problems of Bitcoin and are currently an important issue for the future of Bitcoin itself. For instance, Bitcoin Core changed the UTXO set format in version v0.15 in order to improve its performance [13,14]. Both individual users and the whole system will benefit from better management of the UTXO set.
From the user's point of view, a strategy of consolidating UTXOs in order to prevent the creation of dust and unprofitable UTXOs in the future (in case of higher fees) has always been considered [15]. But such strategies are not easy to generalize. A consolidation will not always reduce the fees for a given user, especially if we cannot anticipate future fee rates. On the other hand, some users will need to maintain a minimum number of UTXOs to be able to generate transactions in parallel. Furthermore, such strategies can undermine the privacy requirements of some users. Given these sometimes conflicting constraints and the unpredictability of future fee rates, there is currently no actual strategy for UTXO consolidation.
An important process that directly impacts (and is influenced by) the UTXO set composition and size is the coin selection decision performed by wallets [16]. Coin selection is the decision process that a wallet carries out in order to choose UTXOs as inputs for a new transaction. Each implementation might use a different coin selection strategy [17]. For instance, if we take a look at Bitcoin, according to [18], several strategies are being used by different wallets. The Bitcoin Core wallet attempts to find a direct match, always minimizing the change to be generated. BRD [19] (a popular Android and iOS wallet also known as BreadWallet) uses a FIFO strategy, where the oldest UTXOs from the pool are chosen until the target value is matched. A similar approach is used by Electrum [20] and Mycelium [21], which additionally prune small-valued UTXOs. The bitcoinj library [22] determines a priority metric from the age and value of the UTXOs in order to select new ones. It is by no means clear which strategy is better. Different goals and strategies can be conflicting, such as minimizing the generation of small UTXOs, minimizing the fees for current and future transactions, or improving user privacy. Even so, a common goal nowadays shared by all involved parties in coin selection is to prevent the growth of the UTXO set in Bitcoin [18].
In any case, our work introduces new analysis that can help improve these selection strategies.
Following these lines, other proposals such as TXO commitments [23,24] could make it possible to maintain a smaller functional UTXO set. Similarly, one can think of a two-tier data structure where a subset of UTXOs with a low probability of being selected, such as dust, is kept on disk, while the remaining UTXOs are kept in memory. We think that the work presented in this paper provides an accurate estimation of such unprofitable UTXOs, which has not been previously considered.
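As an illustration of one of the coin-selection strategies mentioned above, the sketch below implements a FIFO selection in the spirit attributed to BRD: spend the oldest UTXOs first until the payment target is covered. Representing each UTXO as a (confirmation height, amount) pair and using the confirmation height as a proxy for age are simplifying assumptions made only for this example.

```python
# Sketch of FIFO coin selection (oldest UTXOs first), as described for BRD.
# The (confirmation_height, amount) representation is an illustrative choice.
def select_fifo(utxos, target):
    """Return (selected UTXOs, change) covering `target`, oldest first."""
    selected, covered = [], 0
    for height, amount in sorted(utxos):        # lowest height = oldest
        selected.append((height, amount))
        covered += amount
        if covered >= target:
            return selected, covered - target
    raise ValueError("insufficient funds")


inputs, change = select_fifo([(700, 40_000), (120, 15_000), (350, 30_000)], 60_000)
print(inputs, change)   # picks the UTXOs confirmed at heights 120, 350 and 700
```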
[]
[]
Royal Society Open Science
30800359
PMC6366216
10.1098/rsos.181074
Depth image super-resolution reconstruction based on a modified joint trilateral filter
Depth image super-resolution (SR) is a technique that uses signal processing technology to enhance the resolution of a low-resolution (LR) depth image. Generally, an external database or high-resolution (HR) images are needed to acquire prior information for SR reconstruction. To overcome these limitations, a depth image SR method without reference to any external images is proposed. In this paper, a high-quality edge map is first constructed using a sparse coding method, which uses a dictionary learned from the original images at different scales. Then, the high-quality edge map is used to guide the interpolation of depth images by a modified joint trilateral filter. During the interpolation, gradient and structural similarity (SSIM) information is added to preserve detail and suppress noise. The proposed method can not only preserve the sharpness of image edges, but also avoid dependence on an external database. Experimental results show that the proposed method is superior to some state-of-the-art depth image SR methods.
2. Related works
In recent years, two major trends have emerged in depth image SR. One is the example-based depth image SR method, which mainly reconstructs an HR depth image based on example databases from which prior information can be learned. For example, Aodha et al. [8] used a patch-based Markov random field (MRF) model for depth image SR. Li et al. [9] proposed a modified MRF model, which matched the input LR patches to similar patches from a set of HR training images. Besides, approaches based on sparse representation have also been widely used in depth image SR. Yang et al. [10] jointly trained HR and LR dictionaries to enhance the coupling between HR and LR image blocks, so that the blocks can be represented by linear combinations of the dictionary atoms. On the basis of sparse representation, Zhao et al. [11] proposed a multi-residue dictionary learning scheme to refine depth image SR. Timofte et al. [12] clustered dictionary atoms into sub-dictionaries using the K-NN algorithm and then represented the HR image blocks with the best sub-dictionary atoms. Owing to the effectiveness and speed of neural networks in colour image processing, they are also widely used for depth images. For example, Song et al. [13] used a deep convolutional neural network to learn an end-to-end mapping from LR to HR depth images, and then further processed the learned HR depth images. Riegler et al. [14] proposed a depth image SR reconstruction method based on deep primal-dual networks, which combines a deep fully convolutional network with a non-local variational model.
The other trend is the colour-guided depth image SR method. An RGB-D sensor can simultaneously capture a depth image and the corresponding colour image, and the captured colour image usually has a higher resolution than the depth image. Therefore, the colour image can be used to assist depth image SR. For example, Yang et al. [15] used one or two HR colour images as the reference and then refined the LR depth image iteratively. Ferstl et al. [16] used an anisotropic total variation diffusion tensor computed from the HR colour image to guide depth image SR. Lo et al. [17] proposed a joint trilateral filter framework in which context information acquired from the HR colour image is used to guide depth interpolation. Zhang et al. [18] presented a modified joint trilateral filter in which the depth image is interpolated with the assistance of an edge map and intensity information extracted from the HR colour image.
Both approaches can improve the resolution of depth images, but some limitations remain. In general, the example-based SR method depends strongly on the example database, while the colour-guided method requires HR colour images that are perfectly aligned with the depth images. To overcome these limitations, we propose a depth image SR method that needs neither an external example database nor the assistance of a registered HR colour image.
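To give a flavour of the guided-filtering idea behind the joint bilateral/trilateral filters discussed above, the NumPy sketch below upsamples a low-resolution depth map using spatial and guidance-intensity weights only. It is a bare-bones illustration under simplifying assumptions, not the proposed method: the gradient, SSIM, and edge-map terms are deliberately omitted, and the window radius, the sigma values, and the toy inputs are arbitrary.

```python
# Bare-bones joint bilateral upsampling sketch (spatial + guidance weights
# only). This is NOT the proposed modified joint trilateral filter; it omits
# the gradient/SSIM/edge-map terms, and all parameter values are arbitrary.
import numpy as np


def joint_bilateral_upsample(depth_lr, guide_hr, scale, radius=3,
                             sigma_s=2.0, sigma_r=0.1):
    """Upsample depth_lr by `scale`, guided by guide_hr (output-sized image)."""
    depth_nn = np.kron(depth_lr, np.ones((scale, scale)))  # nearest-neighbour init
    h, w = guide_hr.shape
    out = np.zeros_like(depth_nn)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            rng = np.exp(-((guide_hr[y0:y1, x0:x1] - guide_hr[y, x]) ** 2)
                         / (2 * sigma_r ** 2))
            weights = spatial * rng
            out[y, x] = np.sum(weights * depth_nn[y0:y1, x0:x1]) / np.sum(weights)
    return out


depth_lr = np.array([[1.0, 1.0], [1.0, 4.0]])
guide_hr = np.kron(np.array([[0.2, 0.2], [0.2, 0.9]]), np.ones((2, 2)))
print(joint_bilateral_upsample(depth_lr, guide_hr, scale=2))
```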
[ "20483687", "28129196", "15641732", "26599968" ]
[ { "pmid": "20483687", "title": "Image super-resolution via sparse representation.", "abstract": "This paper presents a new approach to single-image super-resolution, based on sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low resolution and high resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low resolution image patch can be applied with the high resolution image patch dictionary to generate a high resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle super-resolution with noisy inputs in a more unified framework." }, { "pmid": "28129196", "title": "Edge-Preserving Depth Map Upsampling by Joint Trilateral Filter.", "abstract": "Compared to the color images, their associated depth images captured by the RGB-D sensors are typically with lower resolution. The task of depth map super-resolution (SR) aims at increasing the resolution of the range data by utilizing the high-resolution (HR) color image, while the details of the depth information are to be properly preserved. In this paper, we present a joint trilateral filtering (JTF) algorithm for depth image SR. The proposed JTF first observes context information from the HR color image. In addition to the extracted spatial and range information of local pixels, our JTF further integrates local gradient information of the depth image, which allows the prediction and refinement of HR depth image outputs without artifacts like textural copies or edge discontinuities. Quantitative and qualitative experimental results demonstrate the effectiveness and robustness of our approach over prior depth map upsampling works." }, { "pmid": "15641732", "title": "Image enhancement and denoising by complex diffusion processes.", "abstract": "The linear and nonlinear scale spaces, generated by the inherently real-valued diffusion equation, are generalized to complex diffusion processes, by incorporating the free Schrödinger equation. A fundamental solution for the linear case of the complex diffusion equation is developed. Analysis of its behavior shows that the generalized diffusion process combines properties of both forward and inverse diffusion. 
We prove that the imaginary part is a smoothed second derivative, scaled by time, when the complex diffusion coefficient approaches the real axis. Based on this observation, we develop two examples of nonlinear complex processes, useful in image processing: a regularized shock filter for image enhancement and a ramp preserving denoising process." }, { "pmid": "26599968", "title": "Edge-Guided Single Depth Image Super Resolution.", "abstract": "Recently, consumer depth cameras have gained significant popularity due to their affordable cost. However, the limited resolution and the quality of the depth map generated by these cameras are still problematic for several applications. In this paper, a novel framework for the single depth image superresolution is proposed. In our framework, the upscaling of a single depth image is guided by a high-resolution edge map, which is constructed from the edges of the low-resolution depth image through a Markov random field optimization in a patch synthesis based manner. We also explore the self-similarity of patches during the edge construction stage, when limited training data are available. With the guidance of the high-resolution edge map, we propose upsampling the high-resolution depth image through a modified joint bilateral filter. The edge-based guidance not only helps avoiding artifacts introduced by direct texture prediction, but also reduces jagged artifacts and preserves the sharp edges. Experimental results demonstrate the effectiveness of our method both qualitatively and quantitatively compared with the state-of-the-art methods." } ]
Heliyon
30793052
PMC6370569
10.1016/j.heliyon.2019.e01164
An assessment of the influence of personal branding on financing entrepreneurial ventures
This research explores the influence of an entrepreneur's personal brand in attracting capital, by examining the validity of the Entrepreneurial Brand Personality Equity (EBPE) model of Balakrishnan and Michael (2011). Its particular concern is whether investors provide funding to an entrepreneur's idea or to the entrepreneur behind the idea. Concomitantly, it seeks to identify the variations in the importance accorded by different investors to the several variables of the EBPE model, and whether these variations (and also the stages of business) influence the final investment decisions of investors. The findings of this mixed methods study hold significant implications for various stakeholders, and suggest that the presence of the EBPE model's dimensions in an entrepreneur is essential for attracting investors' capital. The personal branding of the A-team, in particular, clearly emerged as the most critical variable of the EBPE model, depending on the type of investor and the stage of the entrepreneurial venture.
6. Related work
‘Entrepreneurship research has become so homogenized that it targets a very small audience of researchers, despite generating a dazzling variety of findings that are, unfortunately, barely connected to reality’ (Schultz, 2010). At odds with this are the results of this research, which hold significant implications for various stakeholders and provide nascent entrepreneurs and newly established SMEs with critical insights on how best to utilize (and develop) their personal brands to better influence their capital-seeking endeavors. Simultaneously, they aid investors in making more informed investment decisions, by providing them with a framework to better assess the entrepreneurs behind the ideas; and they help practitioners (such as brand consultants, marketers, business owners, and investors) to better orient their decisions towards personal and corporate branding, so as to obtain superior outcomes. Whilst this research confirms earlier studies' findings regarding the broad acceptance of the importance of an entrepreneur's A-team, a specific message it offers to practitioners concerns the significant difference between VCs, AIs and PEIs in terms of the importance they assigned to the A-team across various stages of a business.
The foregoing aspects, despite their significance for entrepreneurs and investors, are seemingly under-researched topics within the entrepreneurship literature. An area for further research, therefore, could be an exploration of whether different industry sectors demand other personal characteristics of entrepreneurs, in which case a suitable modification of the EBPE model would be necessitated. Similarly, in the context of varying geographic locations, more studies on cultural differences among investors and entrepreneurs, and their impact on investment decisions, would be worth undertaking.
[ "14473104" ]
[]
Frontiers in Neural Circuits
30804760
PMC6371063
10.3389/fncir.2019.00005
DVID: Distributed Versioned Image-Oriented Dataservice
Open-source software development has skyrocketed in part due to community tools like github.com, which allows publication of code as well as the ability to create branches and push accepted modifications back to the original repository. As the number and size of EM-based datasets increases, the connectomics community faces similar issues when we publish snapshot data corresponding to a publication. Ideally, there would be a mechanism where remote collaborators could modify branches of the data and then flexibly reintegrate results via moderated acceptance of changes. The DVID system provides a web-based connectomics API and the first steps toward such a distributed versioning approach to EM-based connectomics datasets. Through its use as the central data resource for Janelia's FlyEM team, we have integrated the concepts of distributed versioning into reconstruction workflows, allowing support for proofreader training and segmentation experiments through branched, versioned data. DVID also supports persistence to a variety of storage systems from high-speed local SSDs to cloud-based object stores, which allows its deployment on laptops as well as large servers. The tailoring of the backend storage to each type of connectomics data leads to efficient storage and fast queries. DVID is freely available as open-source software with an increasing number of supported storage options.
3. Related Work
Typically, researchers have dealt with image-oriented data by either storing it in files or writing software systems that use a relational database to store image chunks or file pointers. Connectomics data servers include bossDB (Kleissas et al., 2017), OpenConnectome (Burns et al., 2013), CATMAID (Saalfeld et al., 2009), and more visualization-focused systems like BUTTERFLY (Haehn et al., 2017). DVID is distinguished from these other systems by its support of branched versioning, an extensible Science API through data type packages, and extremely flexible storage support through a variety of key-value store drivers.
The first system to support branched versioning at large scale was SciDB (Stonebraker et al., 2011). An approach to branched versioning in relational databases culminated in OrpheusDB (Huang et al., 2017). Both SciDB and OrpheusDB could be used as storage backends for DVID data types that match their strengths. For example, SciDB is particularly adept at handling multi-dimensional arrays and could be used for the voxel data component of DVID label data types, while OrpheusDB could be used for heavily indexed synapse point annotations.
The DataHub effort (Bhardwaj et al., 2014) has very similar aims, seeking to bring a distributed versioning approach to scientific datasets by offering an analog to github.com with a centralized server that builds on a Dataset Version Control System (DVCS). DataHub and DVID developed in parallel and focused on different types of data. DVCS was designed to handle datasets in the sub-terabyte range without an emphasis on 3D image data, and its API is a versioning query language based on SQL, so the significant connectomics-focused data layer would still be needed. Much as OrpheusDB is a possible storage engine for smaller data types like annotations, DVCS could be considered a possible storage interface to DVID.
Ideally, connectomics tools would be able to use a variety of data services. This would require the community to develop common interfaces to standard operations. Currently, simple operations like retrieving 2D or 3D imagery are sufficiently similar across services that tools like CATMAID, Neuroglancer, and BigDataViewer (Pietzsch et al., 2015) can use different image volume services, including DVID.
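The toy Python sketch below illustrates the branched-versioning idea discussed above: each version node records only its own key-value changes and resolves reads by walking up its ancestry, so a branch can diverge without touching its parent. This is a conceptual illustration only; it is not DVID's actual storage layout, data model, or API, and every name in it is invented for the example.

```python
# Conceptual sketch of branched versioning over a key-value store.
# Not DVID's implementation; all names are invented for illustration.
class VersionNode:
    def __init__(self, parent=None):
        self.parent = parent
        self.kv = {}                      # key -> value written in this version

    def put(self, key, value):
        self.kv[key] = value

    def get(self, key):
        node = self
        while node is not None:           # walk up the ancestry until found
            if key in node.kv:
                return node.kv[key]
            node = node.parent
        raise KeyError(key)

    def branch(self):
        return VersionNode(parent=self)


root = VersionNode()
root.put("segmentation/block_0", "labels-v1")

experiment = root.branch()                # e.g., a proofreading experiment
experiment.put("segmentation/block_0", "labels-v2")

print(root.get("segmentation/block_0"))        # labels-v1 (parent unchanged)
print(experiment.get("segmentation/block_0"))  # labels-v2 (branch sees its edit)
```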
[ "26785377", "24401992", "30013046", "29758457", "25349911", "24772079", "26018659", "26020499", "19376822", "28718765", "26483464", "30483068" ]
[ { "pmid": "24401992", "title": "The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.", "abstract": "We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes- neural connectivity maps of the brain-using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems-reads to parallel disk arrays and writes to solid-state storage-to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effec-tiveness of spatial data organization." }, { "pmid": "30013046", "title": "High-precision automated reconstruction of neurons with flood-filling networks.", "abstract": "Reconstruction of neural circuits from volume electron microscopy data requires the tracing of cells in their entirety, including all their neurites. Automated approaches have been developed for tracing, but their error rates are too high to generate reliable circuit diagrams without extensive human proofreading. We present flood-filling networks, a method for automated segmentation that, similar to most previous efforts, uses convolutional neural networks, but contains in addition a recurrent pathway that allows the iterative optimization and extension of individual neuronal processes. We used flood-filling networks to trace neurons in a dataset obtained by serial block-face electron microscopy of a zebra finch brain. Using our method, we achieved a mean error-free neurite path length of 1.1 mm, and we observed only four mergers in a test set with a path length of 97 mm. The performance of flood-filling networks was an order of magnitude better than that of previous approaches applied to this dataset, although with substantially increased computational costs." }, { "pmid": "29758457", "title": "Progress and remaining challenges in high-throughput volume electron microscopy.", "abstract": "Recent advances in the effectiveness of the automatic extraction of neural circuits from volume electron microscopy data have made us more optimistic that the goal of reconstructing the nervous system of an entire adult mammal (or bird) brain can be achieved in the next decade. The progress on the data analysis side-based mostly on variants of convolutional neural networks-has been particularly impressive, but improvements in the quality and spatial extent of published VEM datasets are substantial. Methodologically, the combination of hot-knife sample partitioning and ion milling stands out as a conceptual advance while the multi-beam scanning electron microscope promises to remove the data-acquisition bottleneck." 
}, { "pmid": "25349911", "title": "The big data challenges of connectomics.", "abstract": "The structure of the nervous system is extraordinarily complicated because individual neurons are interconnected to hundreds or even thousands of other cells in networks that can extend over large volumes. Mapping such networks at the level of synaptic connections, a field called connectomics, began in the 1970s with a the study of the small nervous system of a worm and has recently garnered general interest thanks to technical and computational advances that automate the collection of electron-microscopy data and offer the possibility of mapping even large mammalian brains. However, modern connectomics produces 'big data', unprecedented quantities of digital information at unprecedented rates, and will require, as with genomics at the time, breakthrough algorithmic and computational solutions. Here we describe some of the key difficulties that may arise and provide suggestions for managing them." }, { "pmid": "24772079", "title": "Graph-based active learning of agglomeration (GALA): a Python library to segment 2D and 3D neuroimages.", "abstract": "The aim in high-resolution connectomics is to reconstruct complete neuronal connectivity in a tissue. Currently, the only technology capable of resolving the smallest neuronal processes is electron microscopy (EM). Thus, a common approach to network reconstruction is to perform (error-prone) automatic segmentation of EM images, followed by manual proofreading by experts to fix errors. We have developed an algorithm and software library to not only improve the accuracy of the initial automatic segmentation, but also point out the image coordinates where it is likely to have made errors. Our software, called gala (graph-based active learning of agglomeration), improves the state of the art in agglomerative image segmentation. It is implemented in Python and makes extensive use of the scientific Python stack (numpy, scipy, networkx, scikit-learn, scikit-image, and others). We present here the software architecture of the gala library, and discuss several designs that we consider would be generally useful for other segmentation packages. We also discuss the current limitations of the gala library and how we intend to address them." }, { "pmid": "26018659", "title": "A context-aware delayed agglomeration framework for electron microscopy segmentation.", "abstract": "Electron Microscopy (EM) image (or volume) segmentation has become significantly important in recent years as an instrument for connectomics. This paper proposes a novel agglomerative framework for EM segmentation. In particular, given an over-segmented image or volume, we propose a novel framework for accurately clustering regions of the same neuron. Unlike existing agglomerative methods, the proposed context-aware algorithm divides superpixels (over-segmented regions) of different biological entities into different subsets and agglomerates them separately. In addition, this paper describes a \"delayed\" scheme for agglomerative clustering that postpones some of the merge decisions, pertaining to newly formed bodies, in order to generate a more confident boundary prediction. We report significant improvements attained by the proposed approach in segmentation accuracy over existing standard methods on 2D and 3D datasets." 
}, { "pmid": "19376822", "title": "CATMAID: collaborative annotation toolkit for massive amounts of image data.", "abstract": "SUMMARY\nHigh-resolution, three-dimensional (3D) imaging of large biological specimens generates massive image datasets that are difficult to navigate, annotate and share effectively. Inspired by online mapping applications like GoogleMaps, we developed a decentralized web interface that allows seamless navigation of arbitrarily large image stacks. Our interface provides means for online, collaborative annotation of the biological image data and seamless sharing of regions of interest by bookmarking. The CATMAID interface enables synchronized navigation through multiple registered datasets even at vastly different scales such as in comparisons between optical and electron microscopy.\n\n\nAVAILABILITY\nhttp://fly.mpi-cbg.de/catmaid." }, { "pmid": "28718765", "title": "A connectome of a learning and memory center in the adult Drosophila brain.", "abstract": "Understanding memory formation, storage and retrieval requires knowledge of the underlying neuronal circuits. In Drosophila, the mushroom body (MB) is the major site of associative learning. We reconstructed the morphologies and synaptic connections of all 983 neurons within the three functional units, or compartments, that compose the adult MB's α lobe, using a dataset of isotropic 8 nm voxels collected by focused ion-beam milling scanning electron microscopy. We found that Kenyon cells (KCs), whose sparse activity encodes sensory information, each make multiple en passant synapses to MB output neurons (MBONs) in each compartment. Some MBONs have inputs from all KCs, while others differentially sample sensory modalities. Only 6% of KC>MBON synapses receive a direct synapse from a dopaminergic neuron (DAN). We identified two unanticipated classes of synapses, KC>DAN and DAN>MBON. DAN activation produces a slow depolarization of the MBON in these DAN>MBON synapses and can weaken memory recall." }, { "pmid": "26483464", "title": "Synaptic circuits and their variations within different columns in the visual system of Drosophila.", "abstract": "We reconstructed the synaptic circuits of seven columns in the second neuropil or medulla behind the fly's compound eye. These neurons embody some of the most stereotyped circuits in one of the most miniaturized of animal brains. The reconstructions allow us, for the first time to our knowledge, to study variations between circuits in the medulla's neighboring columns. This variation in the number of synapses and the types of their synaptic partners has previously been little addressed because methods that visualize multiple circuits have not resolved detailed connections, and existing connectomic studies, which can see such connections, have not so far examined multiple reconstructions of the same circuit. Here, we address the omission by comparing the circuits common to all seven columns to assess variation in their connection strengths and the resultant rates of several different and distinct types of connection error. Error rates reveal that, overall, <1% of contacts are not part of a consensus circuit, and we classify those contacts that supplement (E+) or are missing from it (E-). Autapses, in which the same cell is both presynaptic and postsynaptic at the same synapse, are occasionally seen; two cells in particular, Dm9 and Mi1, form ≥ 20-fold more autapses than do other neurons. 
These results delimit the accuracy of developmental events that establish and normally maintain synaptic circuits with such precision, and thereby address the operation of such circuits. They also establish a precedent for error rates that will be required in the new science of connectomics." }, { "pmid": "30483068", "title": "NeuTu: Software for Collaborative, Large-Scale, Segmentation-Based Connectome Reconstruction.", "abstract": "Reconstructing a connectome from an EM dataset often requires a large effort of proofreading automatically generated segmentations. While many tools exist to enable tracing or proofreading, recent advances in EM imaging and segmentation quality suggest new strategies and pose unique challenges for tool design to accelerate proofreading. Namely, we now have access to very large multi-TB EM datasets where (1) many segments are largely correct, (2) segments can be very large (several GigaVoxels), and where (3) several proofreaders and scientists are expected to collaborate simultaneously. In this paper, we introduce NeuTu as a solution to efficiently proofread large, high-quality segmentation in a collaborative setting. NeuTu is a client program of our high-performance, scalable image database called DVID so that it can easily be scaled up. Besides common features of typical proofreading software, NeuTu tames unprecedentedly large data with its distinguishing functions, including: (1) low-latency 3D visualization of large mutable segmentations; (2) interactive splitting of very large false merges with highly optimized semi-automatic segmentation; (3) intuitive user operations for investigating or marking interesting points in 3D visualization; (4) visualizing proofreading history of a segmentation; and (5) real-time collaborative proofreading with lock-based concurrency control. These unique features have allowed us to manage the workflow of proofreading a large dataset smoothly without dividing them into subsets as in other segmentation-based tools. Most importantly, NeuTu has enabled some of the largest connectome reconstructions as well as interesting discoveries in the fly brain." } ]
PLoS Computational Biology
30703108
PMC6372216
10.1371/journal.pcbi.1006707
Predicting change: Approximate inference under explicit representation of temporal structure in changing environments
In our daily lives, the timing of our actions plays an essential role as we navigate the complex everyday environment. It remains an open question, however, how representations of the temporal structure of the world influence our behavior. Here we propose a probabilistic model with an explicit representation of state durations, which may provide novel insights into how the brain predicts upcoming changes. We illustrate several properties of the behavioral model using a standard reversal learning design and compare its task performance to that of standard reinforcement learning models. Furthermore, using experimental data, we demonstrate how the model can be applied to identify participants’ beliefs about the latent temporal task structure. We found that roughly one quarter of participants seem to have learned the latent temporal structure and used it to anticipate changes, whereas the remaining participants’ behavior did not show signs of anticipatory responses, suggesting a lack of precise temporal expectations. We expect that the introduced behavioral model will allow, in future studies, a systematic investigation of how participants learn the underlying temporal structure of task environments and how these representations shape behavior.
Related work

The key component of the proposed behavioral model is its conceptualization as an explicit duration hidden Markov model (ED-HMM) [30], which involves an explicit representation of the between-reversal intervals as the hidden structural variable. This representation results in an anticipation of specific moments of reversal. Such anticipation would be clearly advantageous for an agent, as it enables faster behavioral adaptation when reversals actually do occur in a (semi)regular manner.

The ED-HMM belongs to the more general class of hidden semi-Markov models, which are often applied to the analysis of non-stationary time series [65–68]. In the context of decision making, the semi-Markov formalism allows for temporal structuring of behavioral policies [69]. Importantly, semi-Markov dynamics have also been applied to temporal difference learning to account for dopamine activity in cases where the timing between action and reward varies between trials [70].

The proposed model builds upon recent approaches to modeling behavior in changing environments [11, 71, 72] and can be seen as a direct extension of hidden Markov models (HMM), which have previously been applied to reversal learning tasks [29, 38, 46–48]. In previous work, HMMs were used to identify the moment of reversal, changes in beliefs about reversal probability, and the most likely moment at which agents reversed their behavior. This was crucial for understanding the effects of dopamine modulation on the underlying inference and, consequently, on behavior. Although it is outside the scope of the present paper, it is possible to perform backward inference with hidden semi-Markov models (HSMM), hence identifying the most likely moments of reversal in the past. Furthermore, we will explore in future work possible learning rules for the parameters of the prior beliefs p0(d), similar to the work of [73]. Such an extension would also make the models suitable for addressing questions related to changes in prior beliefs about state durations.

In recent years, reinforcement learning models have found multiple applications in studies relying on a reversal learning task, for example, the classical Rescorla-Wagner model [74], the dual update extension of the Rescorla-Wagner model (as described in the present paper) [39, 40], and models separating prediction error signals into positive and negative prediction errors [75]. As we have shown here, a reinforcement learning model generates behavior very similar to that of its probabilistic counterpart in the relatively simple settings of the reversal learning task. Hence, we would expect that additional extensions of the considered dual update RW model could make the behavior of the reinforcement learning model even more similar to that of the probabilistic model introduced here.

Still, we can point out several advantages of probabilistic models of behavior over reinforcement learning models in the context of decision making under uncertainty and in dynamic environments. The probabilistic modeling approach allows for a principled way of mapping complex knowledge about spatio-temporal task structure into a relatively simple set of learning rules (as demonstrated here). In turn, this provides a clear functional interpretation of various prediction error signals, and of the corresponding adaptive learning rates, which are typically difficult to derive or motivate within the context of classical reinforcement learning.
Specifically, we would argue that adopting a probabilistic modeling approach is crucial for understanding the interaction between the representation of temporal structure and decision making.
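The anticipation mechanism discussed above can be made concrete with a small numerical sketch. The Python snippet below is a generic ED-HMM-style illustration under an assumed negative-binomial duration prior, not the paper's full inference scheme: it shows how a prior p0(d) over between-reversal intervals induces a discrete hazard rate, h(d) = p0(d) / P(interval ≥ d), that rises as the typical interval elapses, so the agent increasingly expects a reversal:

# Minimal sketch: anticipation of reversals from an explicit duration prior.
# The negative-binomial prior is an illustrative assumption only; the paper's
# behavioral model embeds this idea in a fuller inference scheme.
import numpy as np
from scipy import stats

d_max = 60
d = np.arange(1, d_max + 1)               # possible between-reversal intervals

# Assumed duration prior p0(d): mean interval of roughly 14 trials.
p0 = stats.nbinom.pmf(d - 1, n=20, p=0.6)
p0 /= p0.sum()                            # normalize over the truncated support

# Survival S(d) = P(interval >= d) and discrete hazard h(d) = p0(d) / S(d).
survival = np.cumsum(p0[::-1])[::-1]
hazard = p0 / survival

for dd in (5, 10, 14, 18, 22):
    print(f"d = {dd:2d}: P(reversal at trial d | none before d) ~ {hazard[dd - 1]:.2f}")

Note that a geometric duration prior would make this hazard constant, recovering the fixed reversal probability implicitly assumed by standard HMM accounts; it is the non-geometric prior that produces timed anticipation of change.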
[ "16163383", "25449892", "18708142", "15464351", "26468192", "24462094", "20227271", "12718865", "23785310", "21283774", "21420893", "19915091", "28365777", "25462794", "26076466", "25187943", "24139048", "23849203", "23365241", "22660479", "20068583", "15541511", "28777060", "28077721", "29206225", "24722562", "28592695", "25673835", "22940577", "25689102", "15647499", "24267657", "17676057", "16899731", "27301429", "27798176", "21909088", "17416921", "26290251", "24291614", "25689102", "18032658", "16563737", "24018303", "25257798", "19697116", "15707252", "21629826", "20844132", "20569174", "22487047", "19448610", "27790629", "16280574", "12931961", "16163383", "27292535", "16764517", "19193900", "15689962", "21159958", "20107431", "18701696", "17088503", "22134477", "29025688", "17482797", "15087550", "21535456", "21697443", "15689962", "21921020", "15590495", "12815512", "12030598", "25142296", "30077331", "28599832", "25561321", "23752095", "18311134", "24840709", "23421527", "28798131" ]
[ { "pmid": "16163383", "title": "What makes us tick? Functional and neural mechanisms of interval timing.", "abstract": "Time is a fundamental dimension of life. It is crucial for decisions about quantity, speed of movement and rate of return, as well as for motor control in walking, speech, playing or appreciating music, and participating in sports. Traditionally, the way in which time is perceived, represented and estimated has been explained using a pacemaker-accumulator model that is not only straightforward, but also surprisingly powerful in explaining behavioural and biological data. However, recent advances have challenged this traditional view. It is now proposed that the brain represents time in a distributed manner and tells the time by detecting the coincidental activation of different neural populations." }, { "pmid": "25449892", "title": "Time and space in the hippocampus.", "abstract": "It has been hypothesized that one of the functions of the hippocampus is to enable the learning of relationships between different stimuli experienced in the environment. These relationships might be spatial (\"the bathroom is about 5m down the hall from the bedroom\") or temporal (\"the coffee is ready about 3 min after the button was pressed\"). Critically, these spatial and temporal relationships may exist on a variety of scales from a few hundred milliseconds up to minutes. In order to learn consistent relationships between stimuli separated by a variety of spatial and temporal scales using synaptic plasticity that has a fixed temporal window extending at most a few hundred milliseconds, information about the spatial and temporal relationships of distant stimuli must be available to the hippocampus in the present. Hippocampal place cells and time cells seem well suited to represent the spatial and temporal locations of distant stimuli in order to support learning of these relationships. We review a recent computational hypothesis that can be used to construct both spatial and temporal relationships. We suggest that there is a deep computational connection between spatial and temporal coding in the hippocampus and that both serve the overarching function of learning relationships between stimuli-constructing a \"memory space.\" This article is part of a Special Issue entitled SI: Brain and Memory." }, { "pmid": "18708142", "title": "Cortico-striatal representation of time in animals and humans.", "abstract": "Interval timing in the seconds-to-minutes range is crucial to learning, memory, and decision-making. Recent findings argue for the involvement of cortico-striatal circuits that are optimized by the dopaminergic modulation of oscillatory activity and lateral connectivity at the level of cortico-striatal inputs. Striatal medium spiny neurons are proposed to detect the coincident activity of specific beat patterns of cortical oscillations, thereby permitting the discrimination of supra-second durations based upon the reoccurring patterns of subsecond neural firing. This proposal for the cortico-striatal representation of time is consistent with the observed psychophysical properties of interval timing (e.g. linear time scale and scalar variance) as well as much of the available pharmacological, lesion, patient, electrophysiological, and neuroimaging data from animals and humans (e.g. dopamine-related timing deficits in Huntington's and Parkinson's disease as well as related animal models). 
The conclusion is that although the striatum serves as a 'core timer', it is part of a distributed timing system involving the coordination of large-scale oscillatory networks." }, { "pmid": "15464351", "title": "Neural representation of interval encoding and decision making.", "abstract": "Our perception of time depends on multiple psychological processes that allow us to anticipate events. In this study, we used event-related functional magnetic resonance imaging (fMRI) to differentiate neural systems involved in formulating representations of time from processes associated with making decisions about their duration. A time perception task consisting of two randomly presented standard intervals was used to ensure that intervals were encoded on each trial and to enhance memory requirements. During the encoding phase of a trial, activation was observed in the right caudate nucleus, right inferior parietal cortex and left cerebellum. Activation in these regions correlated with timing sensitivity (coefficient of variation). In contrast, encoding-related activity in the right parahippocampus and hippocampus correlated with the bisection point and right precuneus activation was associated with a measure of memory distortion. Decision processes were studied by examining brain activation during the decision phase of a trial that was associated with the difficulty of interval discriminations. Activation in the right parahippocampus was greater for easier than harder discriminations. In contrast, activation was greater for harder than easier discriminations in systems involved in working memory (left middle-frontal and parietal cortex) and auditory rehearsal (left inferior-frontal and superior-temporal cortex). Activity in the auditory rehearsal network correlated with memory distortion. Our results support the independence of systems that mediate interval encoding and decision processes. The results also suggest that distortions in memory for time may be due to strategic processing in cortical systems involved in either encoding or rehearsal." }, { "pmid": "26468192", "title": "Time in Cortical Circuits.", "abstract": "Time is central to cognition. However, the neural basis for time-dependent cognition remains poorly understood. We explore how the temporal features of neural activity in cortical circuits and their capacity for plasticity can contribute to time-dependent cognition over short time scales. This neural activity is linked to cognition that operates in the present or anticipates events or stimuli in the near future. We focus on deliberation and planning in the context of decision making as a cognitive process that integrates information across time. We progress to consider how temporal expectations of the future modulate perception. We propose that understanding the neural basis for how the brain tells time and operates in time will be necessary to develop general models of cognition.\n\n\nSIGNIFICANCE STATEMENT\nTime is central to cognition. However, the neural basis for time-dependent cognition remains poorly understood. We explore how the temporal features of neural activity in cortical circuits and their capacity for plasticity can contribute to time-dependent cognition over short time scales. We propose that understanding the neural basis for how the brain tells time and operates in time will be necessary to develop general models of cognition." 
}, { "pmid": "24462094", "title": "Orbitofrontal cortex as a cognitive map of task space.", "abstract": "Orbitofrontal cortex (OFC) has long been known to play an important role in decision making. However, the exact nature of that role has remained elusive. Here, we propose a unifying theory of OFC function. We hypothesize that OFC provides an abstraction of currently available information in the form of a labeling of the current task state, which is used for reinforcement learning (RL) elsewhere in the brain. This function is especially critical when task states include unobservable information, for instance, from working memory. We use this framework to explain classic findings in reversal learning, delayed alternation, extinction, and devaluation as well as more recent findings showing the effect of OFC lesions on the firing of dopaminergic neurons in ventral tegmental area (VTA) in rodents performing an RL task. In addition, we generate a number of testable experimental predictions that can distinguish our theory from other accounts of OFC function." }, { "pmid": "20227271", "title": "Learning latent structure: carving nature at its joints.", "abstract": "Reinforcement learning (RL) algorithms provide powerful explanations for simple learning and decision-making behaviors and the functions of their underlying neural substrates. Unfortunately, in real-world situations that involve many stimuli and actions, these algorithms learn pitifully slowly, exposing their inferiority in comparison to animal and human learning. Here we suggest that one reason for this discrepancy is that humans and animals take advantage of structure that is inherent in real-world tasks to simplify the learning problem. We survey an emerging literature on 'structure learning'--using experience to infer the structure of a task--and how this can be of service to RL, with an emphasis on structure in perception and action." }, { "pmid": "12718865", "title": "Temporal difference models and reward-related learning in the human brain.", "abstract": "Temporal difference learning has been proposed as a model for Pavlovian conditioning, in which an animal learns to predict delivery of reward following presentation of a conditioned stimulus (CS). A key component of this model is a prediction error signal, which, before learning, responds at the time of presentation of reward but, after learning, shifts its response to the time of onset of the CS. In order to test for regions manifesting this signal profile, subjects were scanned using event-related fMRI while undergoing appetitive conditioning with a pleasant taste reward. Regression analyses revealed that responses in ventral striatum and orbitofrontal cortex were significantly correlated with this error signal, suggesting that, during appetitive conditioning, computations described by temporal difference learning are expressed in the human brain." }, { "pmid": "23785310", "title": "Making predictions in a changing world-inference, uncertainty, and learning.", "abstract": "To function effectively, brains need to make predictions about their environment based on past experience, i.e., they need to learn about their environment. The algorithms by which learning occurs are of interest to neuroscientists, both in their own right (because they exist in the brain) and as a tool to model participants' incomplete knowledge of task parameters and hence, to better understand their behavior. 
This review focusses on a particular challenge for learning algorithms-how to match the rate at which they learn to the rate of change in the environment, so that they use as much observed data as possible whilst disregarding irrelevant, old observations. To do this algorithms must evaluate whether the environment is changing. We discuss the concepts of likelihood, priors and transition functions, and how these relate to change detection. We review expected and estimation uncertainty, and how these relate to change detection and learning rate. Finally, we consider the neural correlates of uncertainty and learning. We argue that the neural correlates of uncertainty bear a resemblance to neural systems that are active when agents actively explore their environments, suggesting that the mechanisms by which the rate of learning is set may be subject to top down control (in circumstances when agents actively seek new information) as well as bottom up control (by observations that imply change in the environment)." }, { "pmid": "21283774", "title": "Risk, unexpected uncertainty, and estimation uncertainty: Bayesian learning in unstable settings.", "abstract": "Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how it affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating." }, { "pmid": "21420893", "title": "Posterior cingulate cortex: adapting behavior to a changing world.", "abstract": "When has the world changed enough to warrant a new approach? The answer depends on current needs, behavioral flexibility and prior knowledge about the environment. Formal approaches solve the problem by integrating the recent history of rewards, errors, uncertainty and context via Bayesian inference to detect changes in the world and alter behavioral policy. Neuronal activity in posterior cingulate cortex - a key node in the default network - is known to vary with learning, memory, reward and task engagement. 
We propose that these modulations reflect the underlying process of change detection and motivate subsequent shifts in behavior." }, { "pmid": "19915091", "title": "Neural components underlying behavioral flexibility in human reversal learning.", "abstract": "The ability to flexibly respond to changes in the environment is critical for adaptive behavior. Reversal learning (RL) procedures test adaptive response updating when contingencies are altered. We used functional magnetic resonance imaging to examine brain areas that support specific RL components. We compared neural responses to RL and initial learning (acquisition) to isolate reversal-related brain activation independent of cognitive control processes invoked during initial feedback-based learning. Lateral orbitofrontal cortex (OFC) was more activated during reversal than acquisition, suggesting its relevance for reformation of established stimulus-response associations. In addition, the dorsal anterior cingulate (dACC) and right inferior frontal gyrus (rIFG) correlated with change in postreversal accuracy. Because optimal RL likely requires suppression of a prior learned response, we hypothesized that similar regions serve both response inhibition (RI) and inhibition of learned associations during reversal. However, reversal-specific responding and stopping (requiring RI and assessed via the stop-signal task) revealed distinct frontal regions. Although RI-related regions do not appear to support inhibition of prepotent learned associations, a subset of these regions, dACC and rIFG, guide actions consistent with current reward contingencies. These regions and lateral OFC represent distinct neural components that support behavioral flexibility important for adaptive learning." }, { "pmid": "28365777", "title": "A Direct Demonstration of Functional Differences between Subdivisions of Human V5/MT.", "abstract": "Two subdivisions of human V5/MT+: one located posteriorly (MT/TO-1) and the other more anteriorly (MST/TO-2) were identified in human participants using functional magnetic resonance imaging on the basis of their representations of the ipsilateral versus contralateral visual field. These subdivisions were then targeted for disruption by the application of repetitive transcranial magnetic stimulation (rTMS). The rTMS was delivered to cortical areas while participants performed direction discrimination tasks involving 3 different types of moving stimuli defined by the translational, radial, or rotational motion of dot patterns. For translational motion, performance was significantly reduced relative to baseline when rTMS was applied to both MT/TO-1 and MST/TO-2. For radial motion, there was a differential effect between MT/TO-1 and MST/TO-2, with only disruption of the latter area affecting performance. The rTMS failed to reveal a complete dissociation between MT/TO-1 and MST/TO-2 in terms of their contribution to the perception of rotational motion. On the basis of these results, MT/TO-1 and MST/TO-2 appear to be functionally distinct subdivisions of hV5/MT+. While both areas appear to be implicated in the processing of translational motion, only the anterior region (MST/TO-2) makes a causal contribution to the perception of radial motion." }, { "pmid": "25462794", "title": "A computational analysis of the neural bases of Bayesian inference.", "abstract": "Empirical support for the Bayesian brain hypothesis, although of major theoretical importance for cognitive neuroscience, is surprisingly scarce. 
This hypothesis posits simply that neural activities code and compute Bayesian probabilities. Here, we introduce an urn-ball paradigm to relate event-related potentials (ERPs) such as the P300 wave to Bayesian inference. Bayesian model comparison is conducted to compare various models in terms of their ability to explain trial-by-trial variation in ERP responses at different points in time and over different regions of the scalp. Specifically, we are interested in dissociating specific ERP responses in terms of Bayesian updating and predictive surprise. Bayesian updating refers to changes in probability distributions given new observations, while predictive surprise equals the surprise about observations under current probability distributions. Components of the late positive complex (P3a, P3b, Slow Wave) provide dissociable measures of Bayesian updating and predictive surprise. Specifically, the updating of beliefs about hidden states yields the best fit for the anteriorly distributed P3a, whereas the updating of predictions of observations accounts best for the posteriorly distributed Slow Wave. In addition, parietally distributed P3b responses are best fit by predictive surprise. These results indicate that the three components of the late positive complex reflect distinct neural computations. As such they are consistent with the Bayesian brain hypothesis, but these neural computations seem to be subject to nonlinear probability weighting. We integrate these findings with the free-energy principle that instantiates the Bayesian brain hypothesis." }, { "pmid": "26076466", "title": "The Sense of Confidence during Probabilistic Learning: A Normative Account.", "abstract": "Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable \"feeling of knowing\" or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. 
Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics, it seems instead a core property of the learning process." }, { "pmid": "25187943", "title": "Inferring on the intentions of others by hierarchical Bayesian learning.", "abstract": "Inferring on others' (potentially time-varying) intentions is a fundamental problem during many social transactions. To investigate the underlying mechanisms, we applied computational modeling to behavioral data from an economic game in which 16 pairs of volunteers (randomly assigned to \"player\" or \"adviser\" roles) interacted. The player performed a probabilistic reinforcement learning task, receiving information about a binary lottery from a visual pie chart. The adviser, who received more predictive information, issued an additional recommendation. Critically, the game was structured such that the adviser's incentives to provide helpful or misleading information varied in time. Using a meta-Bayesian modeling framework, we found that the players' behavior was best explained by the deployment of hierarchical learning: they inferred upon the volatility of the advisers' intentions in order to optimize their predictions about the validity of their advice. Beyond learning, volatility estimates also affected the trial-by-trial variability of decisions: participants were more likely to rely on their estimates of advice accuracy for making choices when they believed that the adviser's intentions were presently stable. Finally, our model of the players' inference predicted the players' interpersonal reactivity index (IRI) scores, explicit ratings of the advisers' helpfulness and the advisers' self-reports on their chosen strategy. Overall, our results suggest that humans (i) employ hierarchical generative models to infer on the changing intentions of others, (ii) use volatility estimates to inform decision-making in social interactions, and (iii) integrate estimates of advice accuracy with non-social sources of information. The Bayesian framework presented here can quantify individual differences in these mechanisms from simple behavioral readouts and may prove useful in future clinical studies of maladaptive social cognition." }, { "pmid": "24139048", "title": "Hierarchical prediction errors in midbrain and basal forebrain during sensory learning.", "abstract": "In Bayesian brain theories, hierarchically related prediction errors (PEs) play a central role for predicting sensory inputs and inferring their underlying causes, e.g., the probabilistic structure of the environment and its volatility. Notably, PEs at different hierarchical levels may be encoded by different neuromodulatory transmitters. Here, we tested this possibility in computational fMRI studies of audio-visual learning. Using a hierarchical Bayesian model, we found that low-level PEs about visual stimulus outcome were reflected by widespread activity in visual and supramodal areas but also in the midbrain. In contrast, high-level PEs about stimulus probabilities were encoded by the basal forebrain. These findings were replicated in two groups of healthy volunteers. 
While our fMRI measures do not reveal the exact neuron types activated in midbrain and basal forebrain, they suggest a dichotomy between neuromodulatory systems, linking dopamine to low-level PEs about stimulus outcome and acetylcholine to more abstract PEs about stimulus probabilities." }, { "pmid": "23849203", "title": "The neural representation of unexpected uncertainty during value-based decision making.", "abstract": "Uncertainty is an inherent property of the environment and a central feature of models of decision-making and learning. Theoretical propositions suggest that one form, unexpected uncertainty, may be used to rapidly adapt to changes in the environment, while being influenced by two other forms: risk and estimation uncertainty. While previous studies have reported neural representations of estimation uncertainty and risk, relatively little is known about unexpected uncertainty. Here, participants performed a decision-making task while undergoing functional magnetic resonance imaging (fMRI), which, in combination with a Bayesian model-based analysis, enabled us to separately examine each form of uncertainty examined. We found representations of unexpected uncertainty in multiple cortical areas, as well as the noradrenergic brainstem nucleus locus coeruleus. Other unique cortical regions were found to encode risk, estimation uncertainty, and learning rate. Collectively, these findings support theoretical models in which several formally separable uncertainty computations determine the speed of learning." }, { "pmid": "23365241", "title": "Bayesian prediction and evaluation in the anterior cingulate cortex.", "abstract": "The dorsal anterior cingulate cortex (dACC) has been implicated in a variety of cognitive control functions, among them the monitoring of conflict, error, and volatility, error anticipation, reward learning, and reward prediction errors. In this work, we used a Bayesian ideal observer model, which predicts trial-by-trial probabilistic expectation of stop trials and response errors in the stop-signal task, to differentiate these proposed functions quantitatively. We found that dACC hemodynamic response, as measured by functional magnetic resonance imaging, encodes both the absolute prediction error between stimulus expectation and outcome, and the signed prediction error related to response outcome. After accounting for these factors, dACC has no residual correlation with conflict or error likelihood in the stop-signal task. Consistent with recent monkey neural recording studies, and in contrast with other neuroimaging studies, our work demonstrates that dACC reports at least two different types of prediction errors, and beyond contexts that are limited to reward processing." }, { "pmid": "22660479", "title": "Rational regulation of learning dynamics by pupil-linked arousal systems.", "abstract": "The ability to make inferences about the current state of a dynamic process requires ongoing assessments of the stability and reliability of data generated by that process. We found that these assessments, as defined by a normative model, were reflected in nonluminance-mediated changes in pupil diameter of human subjects performing a predictive-inference task. Brief changes in pupil diameter reflected assessed instabilities in a process that generated noisy data. Baseline pupil diameter reflected the reliability with which recent data indicate the current state of the data-generating process and individual differences in expectations about the rate of instabilities. 
Together these pupil metrics predicted the influence of new data on subsequent inferences. Moreover, a task- and luminance-independent manipulation of pupil diameter predictably altered the influence of new data. Thus, pupil-linked arousal systems can help to regulate the influence of incoming data on existing beliefs in a dynamic environment." }, { "pmid": "20068583", "title": "The free-energy principle: a unified brain theory?", "abstract": "A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories - optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework." }, { "pmid": "15541511", "title": "The Bayesian brain: the role of uncertainty in neural coding and computation.", "abstract": "To use sensory information efficiently to make judgments and guide action in the world, the brain must represent and use information about uncertainty in its computations for perception and action. Bayesian methods have proven successful in building computational theories for perception and sensorimotor control, and psychophysics is providing a growing body of evidence that human perceptual computations are \"Bayes' optimal\". This leads to the \"Bayesian coding hypothesis\": that the brain represents sensory information probabilistically, in the form of probability distributions. Several computational schemes have recently been proposed for how this might be achieved in populations of neurons. Neurophysiological data on the hypothesis, however, is almost non-existent. A major challenge for neuroscientists is to test these ideas experimentally, and so determine whether and how neurons code information about sensory uncertainty." }, { "pmid": "28777060", "title": "Temporal Anticipation Based on Memory.", "abstract": "The fundamental role that our long-term memories play in guiding perception is increasingly recognized, but the functional and neural mechanisms are just beginning to be explored. Although experimental approaches are being developed to investigate the influence of long-term memories on perception, these remain mostly static and neglect their temporal and dynamic nature. Here, we show that our long-term memories can guide attention proactively and dynamically based on learned temporal associations. Across two experiments, we found that detection and discrimination of targets appearing within previously learned contexts are enhanced when the timing of target appearance matches the learned temporal contingency. Neural markers of temporal preparation revealed that the learned temporal associations trigger specific temporal predictions. Our findings emphasize the ecological role that memories play in predicting and preparing perception of anticipated events, calling for revision of the usual conceptualization of contextual associative memory as a reflective and retroactive function." 
}, { "pmid": "28077721", "title": "Temporal Expectations Guide Dynamic Prioritization in Visual Working Memory through Attenuated α Oscillations.", "abstract": "Although working memory is generally considered a highly dynamic mnemonic store, popular laboratory tasks used to understand its psychological and neural mechanisms (such as change detection and continuous reproduction) often remain relatively \"static,\" involving the retention of a set number of items throughout a shared delay interval. In the current study, we investigated visual working memory in a more dynamic setting, and assessed the following: (1) whether internally guided temporal expectations can dynamically and reversibly prioritize individual mnemonic items at specific times at which they are deemed most relevant; and (2) the neural substrates that support such dynamic prioritization. Participants encoded two differently colored oriented bars into visual working memory to retrieve the orientation of one bar with a precision judgment when subsequently probed. To test for the flexible temporal control to access and retrieve remembered items, we manipulated the probability for each of the two bars to be probed over time, and recorded EEG in healthy human volunteers. Temporal expectations had a profound influence on working memory performance, leading to faster access times as well as more accurate orientation reproductions for items that were probed at expected times. Furthermore, this dynamic prioritization was associated with the temporally specific attenuation of contralateral α (8-14 Hz) oscillations that, moreover, predicted working memory access times on a trial-by-trial basis. We conclude that attentional prioritization in working memory can be dynamically steered by internally guided temporal expectations, and is supported by the attenuation of α oscillations in task-relevant sensory brain areas.\n\n\nSIGNIFICANCE STATEMENT\nIn dynamic, everyday-like, environments, flexible goal-directed behavior requires that mental representations that are kept in an active (working memory) store are dynamic, too. We investigated working memory in a more dynamic setting than is conventional, and demonstrate that expectations about when mnemonic items are most relevant can dynamically and reversibly prioritize these items in time. Moreover, we uncover a neural substrate of such dynamic prioritization in contralateral visual brain areas and show that this substrate predicts working memory retrieval times on a trial-by-trial basis. This places the experimental study of working memory, and its neuronal underpinnings, in a more dynamic and ecologically valid context, and provides new insights into the neural implementation of attentional prioritization within working memory." }, { "pmid": "29206225", "title": "Task relevance modulates the behavioural and neural effects of sensory predictions.", "abstract": "The brain is thought to generate internal predictions to optimize behaviour. However, it is unclear whether predictions signalling is an automatic brain function or depends on task demands. Here, we manipulated the spatial/temporal predictability of visual targets, and the relevance of spatial/temporal information provided by auditory cues. We used magnetoencephalography (MEG) to measure participants' brain activity during task performance. Task relevance modulated the influence of predictions on behaviour: spatial/temporal predictability improved spatial/temporal discrimination accuracy, but not vice versa. 
To explain these effects, we used behavioural responses to estimate subjective predictions under an ideal-observer model. Model-based time-series of predictions and prediction errors (PEs) were associated with dissociable neural responses: predictions correlated with cue-induced beta-band activity in auditory regions and alpha-band activity in visual regions, while stimulus-bound PEs correlated with gamma-band activity in posterior regions. Crucially, task relevance modulated these spectral correlates, suggesting that current goals influence PE and prediction signalling." }, { "pmid": "24722562", "title": "Combining spatial and temporal expectations to improve visual perception.", "abstract": "The importance of temporal expectations in modulating perceptual functions is increasingly recognized. However, the means through which temporal expectations can bias perceptual information processing remains ill understood. Recent theories propose that modulatory effects of temporal expectations rely on the co-existence of other biases based on receptive-field properties, such as spatial location. We tested whether perceptual benefits of temporal expectations in a perceptually demanding psychophysical task depended on the presence of spatial expectations. Foveally presented symbolic arrow cues indicated simultaneously where (location) and when (time) target events were more likely to occur. The direction of the arrow indicated target location (80% validity), while its color (pink or blue) indicated the interval (80% validity) for target appearance. Our results confirmed a strong synergistic interaction between temporal and spatial expectations in enhancing visual discrimination. Temporal expectation significantly boosted the effectiveness of spatial expectation in sharpening perception. However, benefits for temporal expectation disappeared when targets occurred at unattended locations. Our findings suggest that anticipated receptive-field properties of targets provide a natural template upon which temporal expectations can operate in order to help prioritize goal-relevant events from early perceptual stages." }, { "pmid": "28592695", "title": "Unraveling the Role of the Hippocampus in Reversal Learning.", "abstract": "Research in reversal learning has mainly focused on the functional role of dopamine and striatal structures in driving behavior on the basis of classic reinforcement learning mechanisms. However, recent evidence indicates that, beyond classic reinforcement learning adaptations, individuals may also learn the inherent task structure and anticipate the occurrence of reversals. A candidate structure to support such task representation is the hippocampus, which might create a flexible representation of the environment that can be adaptively applied to goal-directed behavior. To investigate the functional role of the hippocampus in the implementation of anticipatory strategies in reversal learning, we first studied, in 20 healthy individuals (11 women), whether the gray matter anatomy and volume of the hippocampus were related to anticipatory strategies in a reversal learning task. Second, we tested 20 refractory temporal lobe epileptic patients (11 women) with unilateral hippocampal sclerosis, who served as a hippocampal lesion model. Our results indicate that healthy participants were able to learn the task structure and use it to guide their behavior and optimize their performance. 
Participants' ability to adopt anticipatory strategies correlated with the gray matter volume of the hippocampus. In contrast, hippocampal patients were unable to grasp the higher-order structure of the task with the same success than controls. Present results indicate that the hippocampus is necessary to respond in an appropriately flexible manner to high-order environments, and disruptions in this structure can render behavior habitual and inflexible.SIGNIFICANCE STATEMENT Understanding the neural substrates involved in reversal learning has provoked a great deal of interest in the last years. Studies with nonhuman primates have shown that, through repetition, individuals are able to anticipate the occurrence of reversals and, thus, adjust their behavior accordingly. The present investigation is devoted to know the role of the hippocampus in such strategies. Importantly, our findings evidence that the hippocampus is necessary to anticipate the occurrence of reversals, and disruptions in this structure can render behavior habitual and inflexible." }, { "pmid": "25673835", "title": "Reversal learning and dopamine: a bayesian perspective.", "abstract": "Reversal learning has been studied as the process of learning to inhibit previously rewarded actions. Deficits in reversal learning have been seen after manipulations of dopamine and lesions of the orbitofrontal cortex. However, reversal learning is often studied in animals that have limited experience with reversals. As such, the animals are learning that reversals occur during data collection. We have examined a task regime in which monkeys have extensive experience with reversals and stable behavioral performance on a probabilistic two-arm bandit reversal learning task. We developed a Bayesian analysis approach to examine the effects of manipulations of dopamine on reversal performance in this regime. We find that the analysis can clarify the strategy of the animal. Specifically, at reversal, the monkeys switch quickly from choosing one stimulus to choosing the other, as opposed to gradually transitioning, which might be expected if they were using a naive reinforcement learning (RL) update of value. Furthermore, we found that administration of haloperidol affects the way the animals integrate prior knowledge into their choice behavior. Animals had a stronger prior on where reversals would occur on haloperidol than on levodopa (l-DOPA) or placebo. This strong prior was appropriate, because the animals had extensive experience with reversals occurring in the middle of the block. Overall, we find that Bayesian dissection of the behavior clarifies the strategy of the animals and reveals an effect of haloperidol on integration of prior information with evidence in favor of a choice reversal." }, { "pmid": "22940577", "title": "Planning as inference.", "abstract": "Recent developments in decision-making research are bringing the topic of planning back to center stage in cognitive science. This renewed interest reopens an old, but still unanswered question: how exactly does planning happen? What are the underlying information processing operations and how are they implemented in the brain? Although a range of interesting possibilities exists, recent work has introduced a potentially transformative new idea, according to which planning is accomplished through probabilistic inference." 
}, { "pmid": "25689102", "title": "Active inference and epistemic value.", "abstract": "We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms." }, { "pmid": "15647499", "title": "Prefrontal serotonin depletion affects reversal learning but not attentional set shifting.", "abstract": "Recently, we have shown that serotonin (5-HT) depletion from the prefrontal cortex (PFC) of the marmoset monkey impairs performance on a serial discrimination reversal (SDR) task, resulting in perseverative responding to the previously correct stimulus (Clarke et al., 2004). This pattern of impairment is just one example of inflexible responding seen after damage to the PFC, with performance on the SDR task being dependent on the integrity of the orbitofrontal cortex. However, the contribution of 5-HT to other forms of flexible responding, such as attentional set shifting, an ability dependent on lateral PFC (Dias et al., 1996a), is unknown. The present study addresses this issue by examining the effects of 5,7-dihydroxytryptamine-induced PFC 5-HT depletions on the ability to shift attention between two perceptual dimensions of a compound visual stimulus (extradimensional shift). Monkeys with selective PFC 5-HT lesions, despite being impaired in their ability to reverse a stimulus-reward association, were unimpaired in their ability to make an extradimensional shift when compared with sham-operated controls. These findings suggest that 5-HT is critical for flexible responding at the level of changing stimulus-reward contingencies but is not essential for the higher-order shifting of attentional set. Thus, psychological functions dependent on different loci within the PFC are differentially sensitive to serotonergic modulation, a finding of relevance to our understanding of cognitive inflexibility apparent in disorders such as obsessive-compulsive disorder and schizophrenia." }, { "pmid": "24267657", "title": "Dissociable effects of dopamine and serotonin on reversal learning.", "abstract": "Serotonin and dopamine are speculated to subserve motivationally opponent functions, but this hypothesis has not been directly tested. 
We studied the role of these neurotransmitters in probabilistic reversal learning in nearly 700 individuals as a function of two polymorphisms in the genes encoding the serotonin and dopamine transporters (SERT: 5HTTLPR plus rs25531; DAT1 3'UTR VNTR). A double dissociation was observed. The SERT polymorphism altered behavioral adaptation after losses, with increased lose-shift associated with L' homozygosity, while leaving unaffected perseveration after reversal. In contrast, the DAT1 genotype affected the influence of prior choices on perseveration, while leaving lose-shifting unaltered. A model of reinforcement learning captured the dose-dependent effect of DAT1 genotype, such that an increasing number of 9R-alleles resulted in a stronger reliance on previous experience and therefore reluctance to update learned associations. These data provide direct evidence for doubly dissociable effects of serotonin and dopamine systems." }, { "pmid": "17676057", "title": "Learning the value of information in an uncertain world.", "abstract": "Our decisions are guided by outcomes that are associated with decisions made in the past. However, the amount of influence each past outcome has on our next decision remains unclear. To ensure optimal decision-making, the weight given to decision outcomes should reflect their salience in predicting future outcomes, and this salience should be modulated by the volatility of the reward environment. We show that human subjects assess volatility in an optimal manner and adjust decision-making accordingly. This optimal estimate of volatility is reflected in the fMRI signal in the anterior cingulate cortex (ACC) when each trial outcome is observed. When a new piece of information is witnessed, activity levels reflect its salience for predicting future outcomes. Furthermore, variations in this ACC signal across the population predict variations in subject learning rates. Our results provide a formal account of how we weigh our different experiences in guiding our future actions." }, { "pmid": "16899731", "title": "The role of the ventromedial prefrontal cortex in abstract state-based inference during decision making in humans.", "abstract": "Many real-life decision-making problems incorporate higher-order structure, involving interdependencies between different stimuli, actions, and subsequent rewards. It is not known whether brain regions implicated in decision making, such as the ventromedial prefrontal cortex (vmPFC), use a stored model of the task structure to guide choice (model-based decision making) or merely learn action or state values without assuming higher-order structure as in standard reinforcement learning. To discriminate between these possibilities, we scanned human subjects with functional magnetic resonance imaging while they performed a simple decision-making task with higher-order structure, probabilistic reversal learning. We found that neural activity in a key decision-making region, the vmPFC, was more consistent with a computational model that exploits higher-order structure than with simple reinforcement learning. These results suggest that brain regions, such as the vmPFC, use an abstract model of task structure to guide behavioral choice, computations that may underlie the human capacity for complex social interactions and abstract strategizing." 
}, { "pmid": "27301429", "title": "Impaired Flexible Reward-Based Decision-Making in Binge Eating Disorder: Evidence from Computational Modeling and Functional Neuroimaging.", "abstract": "Despite its clinical relevance and the recent recognition as a diagnostic category in the DSM-5, binge eating disorder (BED) has rarely been investigated from a cognitive neuroscientific perspective targeting a more precise neurocognitive profiling of the disorder. BED patients suffer from a lack of behavioral control during recurrent binge eating episodes and thus fail to adapt their behavior in the face of negative consequences, eg, high risk for obesity. To examine impairments in flexible reward-based decision-making, we exposed BED patients (n=22) and matched healthy individuals (n=22) to a reward-guided decision-making task during functional resonance imaging (fMRI). Performing fMRI analysis informed via computational modeling of choice behavior, we were able to identify specific signatures of altered decision-making in BED. On the behavioral level, we observed impaired behavioral adaptation in BED, which was due to enhanced switching behavior, a putative deficit in striking a balance between exploration and exploitation appropriately. This was accompanied by diminished activation related to exploratory decisions in the anterior insula/ventro-lateral prefrontal cortex. Moreover, although so-called model-free reward prediction errors remained intact, representation of ventro-medial prefrontal learning signatures, incorporating inference on unchosen options, was reduced in BED, which was associated with successful decision-making in the task. On the basis of a computational psychiatry account, the presented findings contribute to defining a neurocognitive phenotype of BED." }, { "pmid": "27798176", "title": "Behavioral and Neural Signatures of Reduced Updating of Alternative Options in Alcohol-Dependent Patients during Flexible Decision-Making.", "abstract": "Addicted individuals continue substance use despite the knowledge of harmful consequences and often report having no choice but to consume. Computational psychiatry accounts have linked this clinical observation to difficulties in making flexible and goal-directed decisions in dynamic environments via consideration of potential alternative choices. To probe this in alcohol-dependent patients (n = 43) versus healthy volunteers (n = 35), human participants performed an anticorrelated decision-making task during functional neuroimaging. Via computational modeling, we investigated behavioral and neural signatures of inference regarding the alternative option. While healthy control subjects exploited the anticorrelated structure of the task to guide decision-making, alcohol-dependent patients were relatively better explained by a model-free strategy due to reduced inference on the alternative option after punishment. Whereas model-free prediction error signals were preserved, alcohol-dependent patients exhibited blunted medial prefrontal signatures of inference on the alternative option. This reduction was associated with patients' behavioral deficit in updating the alternative choice option and their obsessive-compulsive drinking habits. All results remained significant when adjusting for potential confounders (e.g., neuropsychological measures and gray matter density). 
A disturbed integration of alternative choice options implemented by the medial prefrontal cortex appears to be one important explanation for the puzzling question of why addicted individuals continue drug consumption despite negative consequences.\n\n\nSIGNIFICANCE STATEMENT\nIn addiction, patients maintain substance use despite devastating consequences and often report having no choice but to consume. These clinical observations have been theoretically linked to disturbed mechanisms of inference, for example, to difficulties when learning statistical regularities of the environmental structure to guide decisions. Using computational modeling, we demonstrate disturbed inference on alternative choice options in alcohol addiction. Patients neglecting \"what might have happened\" was accompanied by blunted coding of inference regarding alternative choice options in the medial prefrontal cortex. An impaired integration of alternative choice options implemented by the medial prefrontal cortex might contribute to ongoing drug consumption in the face of evident negative consequences." }, { "pmid": "21909088", "title": "Differential roles of human striatum and amygdala in associative learning.", "abstract": "Although the human amygdala and striatum have both been implicated in associative learning, only the striatum's contribution has been consistently computationally characterized. Using a reversal learning task, we found that amygdala blood oxygen level-dependent activity tracked associability as estimated by a computational model, and dissociated it from the striatal representation of reinforcement prediction error. These results extend the computational learning approach from striatum to amygdala, demonstrating their complementary roles in aversive learning." }, { "pmid": "17416921", "title": "Model-based fMRI and its application to reward learning and decision making.", "abstract": "In model-based functional magnetic resonance imaging (fMRI), signals derived from a computational model for a specific cognitive process are correlated against fMRI data from subjects performing a relevant task to determine brain regions showing a response profile consistent with that model. A key advantage of this technique over more conventional neuroimaging approaches is that model-based fMRI can provide insights into how a particular cognitive process is implemented in a specific brain area as opposed to merely identifying where a particular process is located. This review will briefly summarize the approach of model-based fMRI, with reference to the field of reward learning and decision making, where computational models have been used to probe the neural mechanisms underlying learning of reward associations, modifying action choice to obtain reward, as well as in encoding expected value signals that reflect the abstract structure of a decision problem. Finally, some of the limitations of this approach will be discussed." }, { "pmid": "26290251", "title": "The Role of Frontal Cortical and Medial-Temporal Lobe Brain Areas in Learning a Bayesian Prior Belief on Reversals.", "abstract": "Reversal learning has been extensively studied across species as a task that indexes the ability to flexibly make and reverse deterministic stimulus-reward associations. Although various brain lesions have been found to affect performance on this task, the behavioral processes affected by these lesions have not yet been determined. This task includes at least two kinds of learning. 
First, subjects have to learn and reverse stimulus-reward associations in each block of trials. Second, subjects become more proficient at reversing choice preferences as they experience more reversals. We have developed a Bayesian approach to separately characterize these two learning processes. Reversal of choice behavior within each block is driven by a combination of evidence that a reversal has occurred, and a prior belief in reversals that evolves with experience across blocks. We applied the approach to behavior obtained from 89 macaques, comprising 12 lesion groups and a control group. We found that animals from all of the groups reversed more quickly as they experienced more reversals, and correspondingly they updated their prior beliefs about reversals at the same rate. However, the initial values of the priors that the various groups of animals brought to the task differed significantly, and it was these initial priors that led to the differences in behavior. Thus, by taking a Bayesian approach we find that variability in reversal-learning performance attributable to different neural systems is primarily driven by different prior beliefs about reversals that each group brings to the task.\n\n\nSIGNIFICANCE STATEMENT\nThe ability to use prior knowledge to adapt choice behavior is critical for flexible decision making. Reversal learning is often studied as a form of flexible decision making. However, prior studies have not identified which brain regions are important for the formation and use of prior beliefs to guide choice behavior. Here we develop a Bayesian approach that formally characterizes learning set as a concept, and we show that, in macaque monkeys, the amygdala and medial prefrontal cortex have a role in establishing an initial belief about the stability of the reward environment." }, { "pmid": "24291614", "title": "Striatal dysfunction during reversal learning in unmedicated schizophrenia patients.", "abstract": "Subjects with schizophrenia are impaired at reinforcement-driven reversal learning from as early as their first episode. The neurobiological basis of this deficit is unknown. We obtained behavioral and fMRI data in 24 unmedicated, primarily first episode, schizophrenia patients and 24 age-, IQ- and gender-matched healthy controls during a reversal learning task. We supplemented our fMRI analysis, focusing on learning from prediction errors, with detailed computational modeling to probe task solving strategy including an ability to deploy an internal goal directed model of the task. Patients displayed reduced functional activation in the ventral striatum (VS) elicited by prediction errors. However, modeling task performance revealed that a subgroup did not adjust their behavior according to an accurate internal model of the task structure, and these were also the more severely psychotic patients. In patients who could adapt their behavior, as well as in controls, task solving was best described by cognitive strategies according to a Hidden Markov Model. When we compared patients and controls who acted according to this strategy, patients still displayed a significant reduction in VS activation elicited by informative errors that precede salient changes of behavior (reversals). Thus, our study shows that VS dysfunction in schizophrenia patients during reward-related reversal learning remains a core deficit even when controlling for task solving strategies. 
This result highlights VS dysfunction is tightly linked to a reward-related reversal learning deficit in early, unmedicated schizophrenia patients." }, { "pmid": "18032658", "title": "Reinforcement learning signals in the human striatum distinguish learners from nonlearners during reward-based decision making.", "abstract": "The computational framework of reinforcement learning has been used to forward our understanding of the neural mechanisms underlying reward learning and decision-making behavior. It is known that humans vary widely in their performance in decision-making tasks. Here, we used a simple four-armed bandit task in which subjects are almost evenly split into two groups on the basis of their performance: those who do learn to favor choice of the optimal action and those who do not. Using models of reinforcement learning we sought to determine the neural basis of these intrinsic differences in performance by scanning both groups with functional magnetic resonance imaging. We scanned 29 subjects while they performed the reward-based decision-making task. Our results suggest that these two groups differ markedly in the degree to which reinforcement learning signals in the striatum are engaged during task performance. While the learners showed robust prediction error signals in both the ventral and dorsal striatum during learning, the nonlearner group showed a marked absence of such signals. Moreover, the magnitude of prediction error signals in a region of dorsal striatum correlated significantly with a measure of behavioral performance across all subjects. These findings support a crucial role of prediction error signals, likely originating from dopaminergic midbrain neurons, in enabling learning of action selection preferences on the basis of obtained rewards. Thus, spontaneously observed individual differences in decision making performance demonstrate the suggested dependence of this type of learning on the functional integrity of the dopaminergic striatal system in humans."
}, { "pmid": "16563737", "title": "The computational neurobiology of learning and reward.", "abstract": "Following the suggestion that midbrain dopaminergic neurons encode a signal, known as a 'reward prediction error', used by artificial intelligence algorithms for learning to choose advantageous actions, the study of the neural substrates for reward-based learning has been strongly influenced by computational theories. In recent work, such theories have been increasingly integrated into experimental design and analysis. Such hybrid approaches have offered detailed new insights into the function of a number of brain areas, especially the cortex and basal ganglia. In part this is because these approaches enable the study of neural correlates of subjective factors (such as a participant's beliefs about the reward to be received for performing some action) that the computational theories purport to quantify." }, { "pmid": "24018303", "title": "Bayesian model selection for group studies - revisited.", "abstract": "In this paper, we revisit the problem of Bayesian model selection (BMS) at the group level. We originally addressed this issue in Stephan et al. (2009), where models are treated as random effects that could differ between subjects, with an unknown population distribution. Here, we extend this work, by (i) introducing the Bayesian omnibus risk (BOR) as a measure of the statistical risk incurred when performing group BMS, (ii) highlighting the difference between random effects BMS and classical random effects analyses of parameter estimates, and (iii) addressing the problem of between group or condition model comparisons. We address the first issue by quantifying the chance likelihood of apparent differences in model frequencies. This leads to the notion of protected exceedance probabilities. The second issue arises when people want to ask \"whether a model parameter is zero or not\" at the group level. Here, we provide guidance as to whether to use a classical second-level analysis of parameter estimates, or random effects BMS. The third issue rests on the evidence for a difference in model labels or frequencies across groups or conditions. Overall, we hope that the material presented in this paper finesses the problems of group-level BMS in the analysis of neuroimaging and behavioural data." }, { "pmid": "25257798", "title": "Perceiving the passage of time: neural possibilities.", "abstract": "Although the study of time has been central to physics and philosophy for millennia, questions of how time is represented in the brain and how this representation is related to time perception have only recently started to be addressed. Emerging evidence subtly yet profoundly challenges our intuitive notions of time over short scales, offering insight into the nature of the brain's representation of time. Numerous different models, specified at the neural level, of how the brain may keep track of time have been proposed. These models differ in various ways, such as whether time is represented by a centralized or distributed neural system, or whether there are neural systems dedicated to the problem of timing. This paper reviews the insight offered by behavioral experiments and how these experiments refute and guide some of the various models of the brain's representation of time." 
}, { "pmid": "19697116", "title": "Detection of bursts in extracellular spike trains using hidden semi-Markov point process models.", "abstract": "Neurons in vitro and in vivo have epochs of bursting or \"up state\" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label \"burst\" and \"non-burst,\" (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the state transitions follow exponential distributions, and are highly irregular. Because observed bursting may in some cases be fairly regular-exhibiting inter-burst intervals with small variation-we relaxed this assumption. When more general probability distributions are used to describe the state transitions the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate the method can perform well, sometimes yielding very different results than those based on PS." }, { "pmid": "15707252", "title": "Unsupervised learning and mapping of active brain functional MRI signals based on hidden semi-Markov event sequence models.", "abstract": "In this paper, a novel functional magnetic resonance imaging (fMRI) brain mapping method is presented within the statistical modeling framework of hidden semi-Markov event sequence models (HSMESMs). Neural activation detection is formulated at the voxel level in terms of time coupling between the sequence of hemodynamic response onsets (HROs) observed in the fMRI signal, and an HSMESM of the hidden sequence of task-induced neural activations. The sequence of HRO events is derived from a continuous wavelet transform (CWT) of the fMRI signal. The brain activation HSMESM is built from the timing information of the input stimulation protocol. The rich mathematical framework of HSMESMs makes these models an effective and versatile approach for fMRI data analysis. Solving for the HSMESM Evaluation and Learning problems enables the model to automatically detect neural activation embedded in a given set of fMRI signals, without requiring any template basis function or prior shape assumption for the fMRI response. Solving for the HSMESM Decoding problem allows to enrich brain mapping with activation lag mapping, activation mode visualizing, and hemodynamic response function analysis. Activation detection results obtained on synthetic and real epoch-related fMRI data demonstrate the superiority of the HSMESM mapping method with respect to a real application case of the statistical parametric mapping (SPM) approach. In addition, the HSMESM mapping method appears clearly insensitive to timing variations of the hemodynamic response, and exhibits low sensitivity to fluctuations of its shape." }, { "pmid": "21629826", "title": "A bayesian foundation for individual learning under uncertainty.", "abstract": "Computational learning models are critical for understanding mechanisms of adaptive behavior. 
However, the two major current frameworks, reinforcement learning (RL) and Bayesian learning, both have certain limitations. For example, many Bayesian models are agnostic of inter-individual variability and involve complicated integrals, making online learning difficult. Here, we introduce a generic hierarchical Bayesian framework for individual learning under multiple forms of uncertainty (e.g., environmental volatility and perceptual uncertainty). The model assumes Gaussian random walks of states at all but the first level, with the step size determined by the next highest level. The coupling between levels is controlled by parameters that shape the influence of uncertainty on learning in a subject-specific fashion. Using variational Bayes under a mean-field approximation and a novel approximation to the posterior energy function, we derive trial-by-trial update equations which (i) are analytical and extremely efficient, enabling real-time learning, (ii) have a natural interpretation in terms of RL, and (iii) contain parameters representing processes which play a key role in current theories of learning, e.g., precision-weighting of prediction error. These parameters allow for the expression of individual differences in learning and may relate to specific neuromodulatory mechanisms in the brain. Our model is very general: it can deal with both discrete and continuous states and equally accounts for deterministic and probabilistic relations between environmental events and perceptual states (i.e., situations with and without perceptual uncertainty). These properties are illustrated by simulations and analyses of empirical time series. Overall, our framework provides a novel foundation for understanding normal and pathological learning that contextualizes RL within a generic Bayesian scheme and thus connects it to principles of optimality from probability theory." }, { "pmid": "20844132", "title": "An approximately Bayesian delta-rule model explains the dynamics of belief updating in a changing environment.", "abstract": "Maintaining appropriate beliefs about variables needed for effective decision making can be difficult in a dynamic environment. One key issue is the amount of influence that unexpected outcomes should have on existing beliefs. In general, outcomes that are unexpected because of a fundamental change in the environment should carry more influence than outcomes that are unexpected because of persistent environmental stochasticity. Here we use a novel task to characterize how well human subjects follow these principles under a range of conditions. We show that the influence of an outcome depends on both the error made in predicting that outcome and the number of similar outcomes experienced previously. We also show that the exact nature of these tendencies varies considerably across subjects. Finally, we show that these patterns of behavior are consistent with a computationally simple reduction of an ideal-observer model. The model adjusts the influence of newly experienced outcomes according to ongoing estimates of uncertainty and the probability of a fundamental change in the process by which outcomes are generated. A prior that quantifies the expected frequency of such environmental changes accounts for individual variability, including a positive relationship between subjective certainty and the degree to which new information influences existing beliefs. 
The results suggest that the brain adaptively regulates the influence of decision outcomes on existing beliefs using straightforward updating rules that take into account both recent outcomes and prior expectations about higher-order environmental structure." }, { "pmid": "20569174", "title": "Bayesian online learning of the hazard rate in change-point problems.", "abstract": "Change-point models are generative models of time-varying data in which the underlying generative parameters undergo discontinuous changes at different points in time known as change points. Change-points often represent important events in the underlying processes, like a change in brain state reflected in EEG data or a change in the value of a company reflected in its stock price. However, change-points can be difficult to identify in noisy data streams. Previous attempts to identify change-points online using Bayesian inference relied on specifying in advance the rate at which they occur, called the hazard rate (h). This approach leads to predictions that can depend strongly on the choice of h and is unable to deal optimally with systems in which h is not constant in time. In this letter, we overcome these limitations by developing a hierarchical extension to earlier models. This approach allows h itself to be inferred from the data, which in turn helps to identify when change-points occur. We show that our approach can effectively identify change-points in both toy and real data sets with complex hazard rates and how it can be used as an ideal-observer model for human and animal behavior when faced with rapidly changing inputs." }, { "pmid": "22487047", "title": "Surprise! Neural correlates of Pearce-Hall and Rescorla-Wagner coexist within the brain.", "abstract": "Learning theory and computational accounts suggest that learning depends on errors in outcome prediction as well as changes in processing of or attention to events. These divergent ideas are captured by models, such as Rescorla-Wagner (RW) and temporal difference (TD) learning on the one hand, which emphasize errors as directly driving changes in associative strength, vs. models such as Pearce-Hall (PH) and more recent variants on the other hand, which propose that errors promote changes in associative strength by modulating attention and processing of events. Numerous studies have shown that phasic firing of midbrain dopamine (DA) neurons carries a signed error signal consistent with RW or TD learning theories, and recently we have shown that this signal can be dissociated from attentional correlates in the basolateral amygdala and anterior cingulate. Here we will review these data along with new evidence: (i) implicating habenula and striatal regions in supporting error signaling in midbrain DA neurons; and (ii) suggesting that the central nucleus of the amygdala and prefrontal regions process the amygdalar attentional signal. However, while the neural instantiations of the RW and PH signals are dissociable and complementary, they may be linked. Any linkage would have implications for understanding why one signal dominates learning in some situations and not others, and also for appreciating the potential impact on learning of neuropathological conditions involving altered DA or amygdalar function, such as schizophrenia, addiction or anxiety disorders." 
}, { "pmid": "19448610", "title": "Two types of dopamine neuron distinctly convey positive and negative motivational signals.", "abstract": "Midbrain dopamine neurons are activated by reward or sensory stimuli predicting reward. These excitatory responses increase as the reward value increases. This response property has led to a hypothesis that dopamine neurons encode value-related signals and are inhibited by aversive events. Here we show that this is true only for a subset of dopamine neurons. We recorded the activity of dopamine neurons in monkeys (Macaca mulatta) during a Pavlovian procedure with appetitive and aversive outcomes (liquid rewards and airpuffs directed at the face, respectively). We found that some dopamine neurons were excited by reward-predicting stimuli and inhibited by airpuff-predicting stimuli, as the value hypothesis predicts. However, a greater number of dopamine neurons were excited by both of these stimuli, inconsistent with the hypothesis. Some dopamine neurons were also excited by both rewards and airpuffs themselves, especially when they were unpredictable. Neurons excited by the airpuff-predicting stimuli were located more dorsolaterally in the substantia nigra pars compacta, whereas neurons inhibited by the stimuli were located more ventromedially, some in the ventral tegmental area. A similar anatomical difference was observed for their responses to actual airpuffs. These findings suggest that different groups of dopamine neurons convey motivational signals in distinct manners." }, { "pmid": "27790629", "title": "Neurocomputational Models of Interval and Pattern Timing.", "abstract": "Most of the computations and tasks performed by the brain require the ability to tell time, and process and generate temporal patterns. Thus, there is a diverse set of neural mechanisms in place to allow the brain to tell time across a wide range of scales: from interaural delays on the order of microseconds to circadian rhythms and beyond. Temporal processing is most sophisticated on the scale of tens of milliseconds to a few seconds, because it is within this range that the brain must recognize and produce complex temporal patterns-such as those that characterize speech and music. Most models of timing, however, have focused primarily on simple intervals and durations, thus it is not clear whether they will generalize to complex pattern-based temporal tasks. Here, we review neurobiologically based models of timing in the subsecond range, focusing on whether they generalize to tasks that require placing consecutive intervals in the context of an overall pattern, that is, pattern timing." }, { "pmid": "16280574", "title": "Time and the brain: how subjective time relates to neural time.", "abstract": "Most of the actions our brains perform on a daily basis, such as perceiving, speaking, and driving a car, require timing on the scale of tens to hundreds of milliseconds. New discoveries in psychophysics, electrophysiology, imaging, and computational modeling are contributing to an emerging picture of how the brain processes, learns, and perceives time." }, { "pmid": "12931961", "title": "Interval timing and the encoding of signal duration by ensembles of cortical and striatal neurons.", "abstract": "This study investigated the firing patterns of striatal and cortical neurons in rats in a temporal generalization task. Striatal and cortical ensembles were recorded in rats trained to lever press at 2 possible criterion durations (10 s or 40 s from tone onset). 
Twenty-two percent of striatal and 15% of cortical cells had temporally specific modulations in their firing rate, firing at a significantly different rate around 10 s compared with 40 s. On 80% of trials, a post hoc analysis of the trial-by-trial consistency of the firing rates of an ensemble of neurons predicted whether a spike train came from a time window around 10 s versus around 40 s. Results suggest that striatal and cortical neurons encode specific durations in their firing rate and thereby serve as components of a neural circuit used to represent duration." }, { "pmid": "16163383", "title": "What makes us tick? Functional and neural mechanisms of interval timing.", "abstract": "Time is a fundamental dimension of life. It is crucial for decisions about quantity, speed of movement and rate of return, as well as for motor control in walking, speech, playing or appreciating music, and participating in sports. Traditionally, the way in which time is perceived, represented and estimated has been explained using a pacemaker-accumulator model that is not only straightforward, but also surprisingly powerful in explaining behavioural and biological data. However, recent advances have challenged this traditional view. It is now proposed that the brain represents time in a distributed manner and tells the time by detecting the coincidental activation of different neural populations." }, { "pmid": "27292535", "title": "Temporal Specificity of Reward Prediction Errors Signaled by Putative Dopamine Neurons in Rat VTA Depends on Ventral Striatum.", "abstract": "Dopamine neurons signal reward prediction errors. This requires accurate reward predictions. It has been suggested that the ventral striatum provides these predictions. Here we tested this hypothesis by recording from putative dopamine neurons in the VTA of rats performing a task in which prediction errors were induced by shifting reward timing or number. In controls, the neurons exhibited error signals in response to both manipulations. However, dopamine neurons in rats with ipsilateral ventral striatal lesions exhibited errors only to changes in number and failed to respond to changes in timing of reward. These results, supported by computational modeling, indicate that predictions about the temporal specificity and the number of expected reward are dissociable and that dopaminergic prediction-error signals rely on the ventral striatum for the former but not the latter." }, { "pmid": "16764517", "title": "Representation and timing in theories of the dopamine system.", "abstract": "Although the responses of dopamine neurons in the primate midbrain are well characterized as carrying a temporal difference (TD) error signal for reward prediction, existing theories do not offer a credible account of how the brain keeps track of past sensory events that may be relevant to predicting future reward. Empirically, these shortcomings of previous theories are particularly evident in their account of experiments in which animals were exposed to variation in the timing of events. The original theories mispredicted the results of such experiments due to their use of a representational device called a tapped delay line. 
Here we propose that a richer understanding of history representation and a better account of these experiments can be given by considering TD algorithms for a formal setting that incorporates two features not originally considered in theories of the dopaminergic response: partial observability (a distinction between the animal's sensory experience and the true underlying state of the world) and semi-Markov dynamics (an explicit account of variation in the intervals between events). The new theory situates the dopaminergic system in a richer functional and anatomical context, since it assumes (in accord with recent computational theories of cortex) that problems of partial observability and stimulus history are solved in sensory cortex using statistical modeling and inference and that the TD system predicts reward using the results of this inference rather than raw sensory data. It also accounts for a range of experimental data, including the experiments involving programmed temporal variability and other previously unmodeled dopaminergic response phenomena, which we suggest are related to subjective noise in animals' interval timing. Finally, it offers new experimental predictions and a rich theoretical framework for designing future experiments." }, { "pmid": "19193900", "title": "Striatal dopamine predicts outcome-specific reversal learning and its sensitivity to dopaminergic drug administration.", "abstract": "Individual variability in reward-based learning has been ascribed to quantitative variation in baseline levels of striatal dopamine. However, direct evidence for this pervasive hypothesis has hitherto been unavailable. We demonstrate that individual differences in reward-based reversal learning reflect variation in baseline striatal dopamine synthesis capacity, as measured with neurochemical positron emission tomography. Subjects with high baseline dopamine synthesis in the striatum showed relatively better reversal learning from unexpected rewards than from unexpected punishments, whereas subjects with low baseline dopamine synthesis in the striatum showed the reverse pattern. In addition, baseline dopamine synthesis predicted the direction of dopaminergic drug effects. The D(2) receptor agonist bromocriptine improved reward-based relative to punishment-based reversal learning in subjects with low baseline dopamine synthesis capacity, while impairing it in subjects with high baseline dopamine synthesis capacity in the striatum. Finally, this pattern of drug effects was outcome-specific, and driven primarily by drug effects on punishment-, but not reward-based reversal learning. These data demonstrate that the effects of D(2) receptor stimulation on reversal learning in humans depend on task demands and baseline striatal dopamine synthesis capacity." }, { "pmid": "15689962", "title": "Serotonergic modulation of prefrontal cortex during negative feedback in probabilistic reversal learning.", "abstract": "This study used functional magnetic resonance imaging to examine the effects of acute tryptophan (TRP) depletion (ATD), a well-recognized method for inducing transient cerebral serotonin depletion, on brain activity during probabilistic reversal learning. Twelve healthy male volunteers received a TRP-depleting drink or a balanced amino-acid drink (placebo) in a double-blind crossover design. At 5 h after drink ingestion, subjects were scanned while performing a probabilistic reversal learning task and while viewing a flashing checkerboard. 
The probabilistic reversal learning task enabled the separate examination of the effects of ATD on behavioral reversal following negative feedback and negative feedback per se that was not followed by behavioral adaptation. Consistent with previous findings, behavioral reversal was accompanied by significant signal change in the right ventrolateral prefrontal cortex (PFC) and the dorsomedial prefrontal cortex. ATD enhanced reversal-related signal change in the dorsomedial PFC, but did not modulate the ventrolateral PFC response. The ATD-induced signal change in the dorsomedial PFC during behavioral reversal learning extended to trials where subjects received negative feedback but did not change their behavior. These data suggest that ATD affects reversal learning and the processing of aversive signals by modulation of the dorsomedial PFC." }, { "pmid": "21159958", "title": "Beyond reversal: a critical role for human orbitofrontal cortex in flexible learning from probabilistic feedback.", "abstract": "Damage to the orbitofrontal cortex (OFC) has been linked to impaired reinforcement processing and maladaptive behavior in changing environments across species. Flexible stimulus-outcome learning, canonically captured by reversal learning tasks, has been shown to rely critically on OFC in rats, monkeys, and humans. However, the precise role of OFC in this learning remains unclear. Furthermore, whether other frontal regions also contribute has not been definitively established, particularly in humans. In the present study, a reversal learning task with probabilistic feedback was administered to 39 patients with focal lesions affecting various sectors of the frontal lobes and to 51 healthy, demographically matched control subjects. Standard groupwise comparisons were supplemented with voxel-based lesion-symptom mapping to identify regions within the frontal lobes critical for task performance. Learning in this dynamic stimulus-reinforcement environment was considered both in terms of overall performance and at the trial-by-trial level. In this challenging, probabilistic context, OFC damage disrupted both initial and reversal learning. Trial-by-trial performance patterns suggest that OFC plays a critical role in interpreting feedback from a particular trial within the broader context of the outcome history across trials rather than in simply suppressing preexisting stimulus-outcome associations. The findings show that OFC, and not other prefrontal regions, plays a necessary role in flexible stimulus-reinforcement learning in humans." }, { "pmid": "20107431", "title": "Serotonin modulates sensitivity to reward and negative feedback in a probabilistic reversal learning task in rats.", "abstract": "Depressed patients show cognitive deficits that may depend on an abnormal reaction to positive and negative feedback. The precise neurochemical mechanisms responsible for such cognitive abnormalities have not yet been clearly characterized, although serotoninergic dysfunction is frequently associated with depression. In three experiments described here, we investigated the effects of different manipulations of central serotonin (5-hydroxytryptamine, 5-HT) levels in rats performing a probabilistic reversal learning task that measures response to feedback. Increasing or decreasing 5-HT tone differentially affected behavioral indices of cognitive flexibility (reversals completed), reward sensitivity (win-stay), and reaction to negative feedback (lose-shift). 
A single low dose of the selective serotonin reuptake inhibitor citalopram (1 mg/kg) resulted in fewer reversals completed and increased lose-shift behavior. By contrast, a single higher dose of citalopram (10 mg/kg) exerted the opposite effect on both measures. Repeated (5 mg/kg, daily, 7 days) and subchronic (10 mg/kg, b.i.d., 5 days) administration of citalopram increased the number of reversals completed by the animals and increased the frequency of win-stay behavior, whereas global 5-HT depletion had the opposite effect on both indices. These results show that boosting 5-HT neurotransmission decreases negative feedback sensitivity and increases reward (positive feedback) sensitivity, whereas reducing it has the opposite effect. However, these effects depend on the nature of the manipulation used: acute manipulations of the 5-HT system modulate negative feedback sensitivity, whereas long-lasting treatments specifically affect reward sensitivity. These results parallel some of the findings in humans on effects of 5-HT manipulations and are relevant to hypotheses of altered response to feedback in depression." }, { "pmid": "18701696", "title": "Amygdala and orbitofrontal cortex lesions differentially influence choices during object reversal learning.", "abstract": "In nonhuman primates, interaction between the orbitofrontal cortex (OFC) and the amygdala (AMG) has been seen as critical for learning and subsequently changing associations between stimuli and reinforcement. However, it is still unclear what the precise role of the OFC is in altering these stimulus-reward associations, and recent research has questioned whether the AMG makes an essential contribution at all. To gain a better understanding of the role of these two structures in flexibly associating stimuli with reinforcement, we reanalyzed a set of previously published data from groups of monkeys with either OFC or AMG lesions that had been tested on an object reversal learning task. Based on trial-by-trial analyses of rewarded and unrewarded choices, we report two new findings. First, monkeys with OFC lesions were, compared with both control and AMG groups, unable to use correctly performed trials to optimally guide subsequent choices. Second, monkeys with AMG lesions showed the opposite pattern of behavior. This group benefited more than controls from correctly performed trials that followed an error. Finally, as has been reported by others, after a change in reward contingencies, monkeys with OFC lesions also showed a slightly greater tendency to choose the previously rewarded object. These findings demonstrate that the OFC and AMG make different contributions to object reversal learning not highlighted previously." }, { "pmid": "17088503", "title": "Reduced orbitofrontal-striatal activity on a reversal learning task in obsessive-compulsive disorder.", "abstract": "CONTEXT\nThe orbitofrontal cortex (OFC)-striatal circuit, which is important for motivational behavior, is assumed to be involved in the pathophysiology of obsessive-compulsive disorder (OCD) according to current neurobiological models of this disorder. 
However, the engagement of this neural loop in OCD has not been tested directly in a cognitive activation imaging paradigm so far.\n\n\nOBJECTIVE\nTo determine whether the OFC and the ventral striatum show abnormal neural activity in OCD during cognitive challenge.\n\n\nDESIGN\nA reversal learning task was employed in 20 patients with OCD who were not receiving medication and 27 healthy controls during an event-related functional magnetic resonance imaging experiment using a scanning sequence sensitive to OFC signal. This design allowed investigation of the neural correlates of reward and punishment receipt as well as of \"affective switching,\" ie, altering behavior on reversing reinforcement contingencies.\n\n\nRESULTS\nPatients with OCD exhibited an impaired task end result reflected by a reduced number of correct responses relative to control subjects but showed adequate behavior on receipt of punishment and with regard to affective switching. On reward outcome, patients showed decreased responsiveness in right medial and lateral OFC as well as in the right caudate nucleus (border zone ventral striatum) when compared with controls. During affective switching, patients recruited the left posterior OFC, bilateral insular cortex, bilateral dorsolateral, and bilateral anterior prefrontal cortex to a lesser extent than control subjects. No areas were found for which patients exhibited increased activity relative to controls, and no differential activations were observed for punishment in a direct group comparison.\n\n\nCONCLUSIONS\nThese data show behavioral impairments accompanied by aberrant OFC-striatal and dorsal prefrontal activity in OCD on a reversal learning task that addresses this circuit's function. These findings not only confirm previous reports of dorsal prefrontal dysfunction in OCD but also provide evidence for the involvement of the OFC-striatal loop in the pathophysiology of OCD." }, { "pmid": "22134477", "title": "Reversal learning as a measure of impulsive and compulsive behavior in addictions.", "abstract": "BACKGROUND\nOur ability to measure the cognitive components of complex decision-making across species has greatly facilitated our understanding of its neurobiological mechanisms. One task in particular, reversal learning, has proven valuable in assessing the inhibitory processes that are central to executive control. Reversal learning measures the ability to actively suppress reward-related responding and to disengage from ongoing behavior, phenomena that are biologically and descriptively related to impulsivity and compulsivity. Consequently, reversal learning could index vulnerability for disorders characterized by impulsivity such as proclivity for initial substance abuse as well as the compulsive aspects of dependence.\n\n\nOBJECTIVE\nThough we describe common variants and similar tasks, we pay particular attention to discrimination reversal learning, its supporting neural circuitry, neuropharmacology and genetic determinants. We also review the utility of this task in measuring impulsivity and compulsivity in addictions.\n\n\nMETHODS\nWe restrict our review to instrumental, reward-related reversal learning studies as they are most germane to addiction.\n\n\nCONCLUSION\nThe research reviewed here suggests that discrimination reversal learning may be used as a diagnostic tool for investigating the neural mechanisms that mediate impulsive and compulsive aspects of pathological reward-seeking and -taking behaviors. 
Two interrelated mechanisms are posited for the neuroadaptations in addiction that often translate to poor reversal learning: frontocorticostriatal circuitry dysregulation and poor dopamine (D2 receptor) modulation of this circuitry. These data suggest new approaches to targeting inhibitory control mechanisms in addictions." }, { "pmid": "29025688", "title": "Altered Medial Frontal Feedback Learning Signals in Anorexia Nervosa.", "abstract": "BACKGROUND\nIn their relentless pursuit of thinness, individuals with anorexia nervosa (AN) engage in maladaptive behaviors (restrictive food choices and overexercising) that may originate in altered decision making and learning.\n\n\nMETHODS\nIn this functional magnetic resonance imaging study, we employed computational modeling to elucidate the neural correlates of feedback learning and value-based decision making in 36 female patients with AN and 36 age-matched healthy volunteers (12-24 years). Participants performed a decision task that required adaptation to changing reward contingencies. Data were analyzed within a hierarchical Gaussian filter model that captures interindividual variability in learning under uncertainty.\n\n\nRESULTS\nBehaviorally, patients displayed an increased learning rate specifically after punishments. At the neural level, hemodynamic correlates for the learning rate, expected value, and prediction error did not differ between the groups. However, activity in the posterior medial frontal cortex was elevated in AN following punishment.\n\n\nCONCLUSIONS\nOur findings suggest that the neural underpinning of feedback learning is selectively altered for punishment in AN." }, { "pmid": "17482797", "title": "Probabilistic reversal learning impairments in schizophrenia: further evidence of orbitofrontal dysfunction.", "abstract": "Impairments in feedback processing and reinforcement learning appear to be prominent aspects of schizophrenia (SZ), which may relate to symptoms of the disorder. Evidence from cognitive neuroscience investigations indicates that disparate brain systems may underlie different kinds of feedback-driven learning. The ability to rapidly shift response tendencies in the face of negative feedback, when reinforcement contingencies are reversed, is an important type of learning thought to depend on ventral prefrontal cortex (PFC). Schizophrenia has long been associated with dysfunction in dorsolateral areas of PFC, but evidence for ventral PFC impairment in more mixed. In order to assess whether SZ patients experience particular difficulty in carrying out a cognitive function commonly linked to ventral PFC function, we administered to 34 patients and 26 controls a modified version of an established probabilistic reversal learning task from the experimental literature [Cools, R., Clark, L., Owen, A.M., Robbins, T.W., 2002. Defining the neural mechanisms of probabilistic reversal learning using event-related functional magnetic resonance imaging. J. Neurosci. 22, 4563-4567]. Although SZ patients and controls performed similarly on the initial acquisition of probabilistic contingencies, patients showed substantial learning impairments when reinforcement contingencies were reversed, achieving significantly fewer reversals [chi(2)(6)=15.717, p=0.008]. Even when analyses were limited to subjects who acquired all probabilistic contingencies initially (22 patients and 20 controls), patients achieved significantly fewer reversals [chi(2)(3)=9.408, p=0.024]. 
These results support the idea that ventral PFC dysfunction is a prevalent aspect of schizophrenic pathophysiology, which may contribute to deficits in reinforcement learning exhibited by patients. Further studies are required to investigate the roles of dopaminergic systems in these impairments." }, { "pmid": "15087550", "title": "Dissociable roles of ventral and dorsal striatum in instrumental conditioning.", "abstract": "Instrumental conditioning studies how animals and humans choose actions appropriate to the affective structure of an environment. According to recent reinforcement learning models, two distinct components are involved: a \"critic,\" which learns to predict future reward, and an \"actor,\" which maintains information about the rewarding outcomes of actions to enable better ones to be chosen more frequently. We scanned human participants with functional magnetic resonance imaging while they engaged in instrumental conditioning. Our results suggest partly dissociable contributions of the ventral and dorsal striatum, with the former corresponding to the critic and the latter corresponding to the actor." }, { "pmid": "21535456", "title": "Differentiable contributions of human amygdalar subregions in the computations underlying reward and avoidance learning.", "abstract": "To understand how the human amygdala contributes to associative learning, it is necessary to differentiate the contributions of its subregions. However, major limitations in the techniques used for the acquisition and analysis of functional magnetic resonance imaging (fMRI) data have hitherto precluded segregation of function with the amygdala in humans. Here, we used high-resolution fMRI in combination with a region-of-interest-based normalization method to differentiate functionally the contributions of distinct subregions within the human amygdala during two different types of instrumental conditioning: reward and avoidance learning. Through the application of a computational-model-based analysis, we found evidence for a dissociation between the contributions of the basolateral and centromedial complexes in the representation of specific computational signals during learning, with the basolateral complex contributing more to reward learning, and the centromedial complex more to avoidance learning. These results provide unique insights into the computations being implemented within fine-grained amygdala circuits in the human brain." }, { "pmid": "21697443", "title": "The human prefrontal cortex mediates integration of potential causes behind observed outcomes.", "abstract": "Prefrontal cortex has long been implicated in tasks involving higher order inference in which decisions must be rendered, not only about which stimulus is currently rewarded, but also which stimulus dimensions are currently relevant. However, the precise computational mechanisms used to solve such tasks have remained unclear. We scanned human participants with functional MRI, while they performed a hierarchical intradimensional/extradimensional shift task to investigate what strategy subjects use while solving higher order decision problems. By using a computational model-based analysis, we found behavioral and neural evidence that humans solve such problems not by occasionally shifting focus from one to the other dimension, but by considering multiple explanations simultaneously. 
Activity in human prefrontal cortex was better accounted for by a model that integrates over all available evidences than by a model in which attention is selectively gated. Importantly, our model provides an explanation for how the brain determines integration weights, according to which it could distribute its attention. Our results demonstrate that, at the point of choice, the human brain and the prefrontal cortex in particular are capable of a weighted integration of information across multiple evidences." }, { "pmid": "15689962", "title": "Serotonergic modulation of prefrontal cortex during negative feedback in probabilistic reversal learning.", "abstract": "This study used functional magnetic resonance imaging to examine the effects of acute tryptophan (TRP) depletion (ATD), a well-recognized method for inducing transient cerebral serotonin depletion, on brain activity during probabilistic reversal learning. Twelve healthy male volunteers received a TRP-depleting drink or a balanced amino-acid drink (placebo) in a double-blind crossover design. At 5 h after drink ingestion, subjects were scanned while performing a probabilistic reversal learning task and while viewing a flashing checkerboard. The probabilistic reversal learning task enabled the separate examination of the effects of ATD on behavioral reversal following negative feedback and negative feedback per se that was not followed by behavioral adaptation. Consistent with previous findings, behavioral reversal was accompanied by significant signal change in the right ventrolateral prefrontal cortex (PFC) and the dorsomedial prefrontal cortex. ATD enhanced reversal-related signal change in the dorsomedial PFC, but did not modulate the ventrolateral PFC response. The ATD-induced signal change in the dorsomedial PFC during behavioral reversal learning extended to trials where subjects received negative feedback but did not change their behavior. These data suggest that ATD affects reversal learning and the processing of aversive signals by modulation of the dorsomedial PFC." }, { "pmid": "21921020", "title": "Pathophysiological distortions in time perception and timed performance.", "abstract": "Distortions in time perception and timed performance are presented by a number of different neurological and psychiatric conditions (e.g. Parkinson's disease, schizophrenia, attention deficit hyperactivity disorder and autism). As a consequence, the primary focus of this review is on factors that define or produce systematic changes in the attention, clock, memory and decision stages of temporal processing as originally defined by Scalar Expectancy Theory. These findings are used to evaluate the Striatal Beat Frequency Theory, which is a neurobiological model of interval timing based upon the coincidence detection of oscillatory processes in corticostriatal circuits that can be mapped onto the stages of information processing proposed by Scalar Timing Theory." }, { "pmid": "15590495", "title": "Duration judgments in children with ADHD suggest deficient utilization of temporal information rather than general impairment in timing.", "abstract": "Clinicians, parents, and teachers alike have noted that individuals with ADHD often have difficulties with \"time management,\" which has led some to suggest a primary deficit in time perception in ADHD. 
Previous studies have implicated the basal ganglia, cerebellum, and frontal lobes in time estimation and production, with each region purported to make different contributions to the processing and utilization of temporal information. Given the observed involvement of the frontal-subcortical networks in ADHD, we examined judgment of durations in children with ADHD (N = 27) and age- and gender-matched control subjects (N = 15). Two judgment tasks were administered: short duration (550 ms) and long duration (4 s). The two groups did not differ significantly in their judgments of short interval durations; however, subjects with ADHD performed more poorly when making judgments involving long intervals. The groups also did not differ on a judgment-of-pitch task, ruling out a generalized deficit in auditory discrimination. Selective impairment in making judgments involving long intervals is consistent with performance by patients with frontal lobe lesions and suggests that there is a deficiency in the utilization of temporal information in ADHD (possibly secondary to deficits in working memory and/or strategy utilization), rather than a problem involving a central timing mechanism." }, { "pmid": "12815512", "title": "Time reproduction in children with ADHD: motivation matters.", "abstract": "The primary goal of this study was to examine whether children with ADHD have a true deficit in subjective time sense, or whether their impairment reflects a motivational deficit. Thirty children with ADHD and 30 matched control children completed two versions of a time reproduction paradigm (\"Regular\" and \"Enhanced\") in which motivational level was manipulated by the addition of positive sham feedback and the prospect of earning a reward. A secondary goal was to investigate performance on measures of working memory and behavioural inhibition, and how those constructs relate to time reproduction in the context of Barkley's (1997a) model of ADHD. Children with ADHD performed significantly better on the motivating 'Enhanced' versus the Regular time reproduction paradigm, although they continued to perform significantly worse than controls on both tasks. Control children exhibited no reliable change in performance between versions of the task. Significant group differences were also observed on the working memory and behavioural inhibition tasks. We discuss the impact of motivation, working memory, and behavioural inhibition on time reproduction performance." }, { "pmid": "12030598", "title": "Evidence for a pure time perception deficit in children with ADHD.", "abstract": "BACKGROUND\nDeficits have been found previously in children with ADHD on tasks of time reproduction, time production and motor timing, implicating a deficit in temporal processing abilities, which has been interpreted as either secondary or primary to core executive dysfunctions. The aim of this study was to explore further the abilities of hyperactive children in skills of time estimation, using a range of time perception tasks in different temporal domains.\n\n\nMETHOD\nTime estimation was tested in a verbal estimation task of 10 seconds. Time reproduction was also acquired for two time intervals of 5 and 12 seconds. A temporal discrimination task aimed to determine the idiosyncratic threshold of minimum time interval (in milliseconds) necessary to distinguish two intervals differing by approximately 300 milliseconds. 
Twenty-two children diagnosed with ADHD were compared to 22 healthy children, matched for age, handedness and working memory skills.\n\n\nRESULTS\nChildren with ADHD were significantly impaired in their time discrimination threshold: on average, time intervals had to be 50 ms longer for the hyperactive children in order to be discriminated when compared with controls. Children with ADHD also responded earlier on a 12-second reproduction task, which however only approached significance after controlling for IQ and short-term memory. No group differences were found for the 5-second time reproduction or verbal time estimation tasks.\n\n\nCONCLUSIONS\nThe findings suggest that children with ADHD perform poorly on time reproduction tasks which load heavily on impulsiveness and attentional processes and they also suggest that these children may have a perceptual deficit of time discrimination, which may only be detectable in brief durations which differ by several hundred milliseconds. A temporal perception deficit in the range of milliseconds in ADHD may impact upon other functions such as perceptual language skills and motor timing." }, { "pmid": "25142296", "title": "Role of the medial prefrontal cortex in impaired decision making in juvenile attention-deficit/hyperactivity disorder.", "abstract": "IMPORTANCE\nAttention-deficit/hyperactivity disorder (ADHD) has been associated with deficient decision making and learning. Models of ADHD have suggested that these deficits could be caused by impaired reward prediction errors (RPEs). Reward prediction errors are signals that indicate violations of expectations and are known to be encoded by the dopaminergic system. However, the precise learning and decision-making deficits and their neurobiological correlates in ADHD are not well known.\n\n\nOBJECTIVE\nTo determine the impaired decision-making and learning mechanisms in juvenile ADHD using advanced computational models, as well as the related neural RPE processes using multimodal neuroimaging.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nTwenty adolescents with ADHD and 20 healthy adolescents serving as controls (aged 12-16 years) were examined using a probabilistic reversal learning task while simultaneous functional magnetic resonance imaging and electroencephalogram were recorded.\n\n\nMAIN OUTCOMES AND MEASURES\nLearning and decision making were investigated by contrasting a hierarchical Bayesian model with an advanced reinforcement learning model and by comparing the model parameters. The neural correlates of RPEs were studied in functional magnetic resonance imaging and electroencephalogram.\n\n\nRESULTS\nAdolescents with ADHD showed more simplistic learning as reflected by the reinforcement learning model (exceedance probability, Px = .92) and had increased exploratory behavior compared with healthy controls (mean [SD] decision steepness parameter β: ADHD, 4.83 [2.97]; controls, 6.04 [2.53]; P = .02). The functional magnetic resonance imaging analysis revealed impaired RPE processing in the medial prefrontal cortex during cue as well as during outcome presentation (P < .05, family-wise error correction). 
The outcome-related impairment in the medial prefrontal cortex could be attributed to deficient processing at 200 to 400 milliseconds after feedback presentation as reflected by reduced feedback-related negativity (ADHD, 0.61 [3.90] μV; controls, -1.68 [2.52] μV; P = .04).\n\n\nCONCLUSIONS AND RELEVANCE\nThe combination of computational modeling of behavior and multimodal neuroimaging revealed that impaired decision making and learning mechanisms in adolescents with ADHD are driven by impaired RPE processing in the medial prefrontal cortex. This novel, combined approach furthers the understanding of the pathomechanisms in ADHD and may advance treatment strategies." }, { "pmid": "30077331", "title": "Incentives Boost Model-Based Control Across a Range of Severity on Several Psychiatric Constructs.", "abstract": "BACKGROUND\nHuman decision making exhibits a mixture of model-based and model-free control. Recent evidence indicates that arbitration between these two modes of control (\"metacontrol\") is based on their relative costs and benefits. While model-based control may increase accuracy, it requires greater computational resources, so people invoke model-based control only when potential rewards exceed those of model-free control. We used a sequential decision task, while concurrently manipulating performance incentives, to ask if symptoms and traits of psychopathology decrease or increase model-based control in response to incentives.\n\n\nMETHODS\nWe recruited a nonpatient population of 839 online participants using Amazon Mechanical Turk who completed transdiagnostic self-report measures encompassing symptoms, traits, and factors. We fit a dual-controller reinforcement learning model and obtained a computational measure of model-based control separately for small incentives and large incentives.\n\n\nRESULTS\nNone of the constructs were related to a failure of large incentives to boost model-based control. In fact, for the sensation seeking trait and anxious-depression factor, higher scores were associated with a larger incentive effect, whereby greater levels of these constructs were associated with larger increases in model-based control. Many constructs showed decreases in model-based control as a function of severity, but a social withdrawal factor was positively correlated; alcohol use and social anxiety were unrelated to model-based control.\n\n\nCONCLUSIONS\nOur results demonstrate that model-based control can reliably be improved independent of construct severity for most measures. This suggests that incentives may be a useful intervention for boosting model-based control across a range of symptom and trait severity." }, { "pmid": "28599832", "title": "Model-Based Control in Dimensional Psychiatry.", "abstract": "We use parallel interacting goal-directed and habitual strategies to make our daily decisions. The arbitration between these strategies is relevant to inflexible repetitive behaviors in psychiatric disorders. Goal-directed control, also known as model-based control, is based on an affective outcome relying on a learned internal model to prospectively make decisions. In contrast, habit control, also known as model-free control, is based on an integration of previous reinforced learning autonomous of the current outcome value and is implicit and more efficient but at the cost of greater inflexibility. The concept of model-based control can be further extended into pavlovian processes. 
Here we describe and compare tasks that tap into these constructs and emphasize the clinical relevance and translation of these tasks in psychiatric disorders. Together, these findings highlight a role for model-based control as a transdiagnostic impairment underlying compulsive behaviors and representing a promising therapeutic target." }, { "pmid": "25561321", "title": "Optimal inference with suboptimal models: addiction and active Bayesian inference.", "abstract": "When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent's beliefs - based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment - as opposed to the agent's beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less 'optimally' than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject's generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described 'limited offer' task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work." }, { "pmid": "23752095", "title": "Transition from 'model-based' to 'model-free' behavioral control in addiction: Involvement of the orbitofrontal cortex and dorsolateral striatum.", "abstract": "Cocaine addiction is a complex and multidimensional process involving a number of behavioral and neural forms of plasticity. The behavioral transition from voluntary drug use to compulsive drug taking may be explained at the neural level by drug-induced changes in function or interaction between a flexible planning system, associated with prefrontal cortical regions, and a rigid habit system, associated with the striatum. The dichotomy between these two systems is operationalized in computational theory by positing model-based and model-free learning mechanisms, the former relying on an \"internal model\" of the environment and the latter on pre-computed or cached values to control behavior. In this review, we will suggest that model-free and model-based learning mechanisms appear to be differentially affected, at least in the case of psychostimulants such as cocaine, with the former being enhanced while the latter are disrupted. As a result, the behavior of long-term drug users becomes less flexible and responsive to the desirability of expected outcomes and more habitual, based on the long history of reinforcement. To support our specific proposal, we will review recent neural and behavioral evidence on the effect of psychostimulant exposure on orbitofrontal and dorsolateral striatum structure and function. This article is part of a Special Issue entitled 'NIDA 40th Anniversary Issue'." 
}, { "pmid": "18311134", "title": "Smokers' brains compute, but ignore, a fictive error signal in a sequential investment task.", "abstract": "Addicted individuals pursue substances of abuse even in the clear presence of positive outcomes that may be foregone and negative outcomes that may occur. Computational models of addiction depict the addicted state as a feature of a valuation disease, where drug-induced reward prediction error signals steer decisions toward continued drug use. Related models admit the possibility that valuation and choice are also directed by 'fictive' outcomes (outcomes that have not been experienced) that possess their own detectable error signals. We hypothesize that, in addiction, anomalies in these fictive error signals contribute to the diminished influence of potential consequences. Using a simple investment game and functional magnetic resonance imaging in chronic cigarette smokers, we measured neural and behavioral responses to error signals derived from actual experience and from fictive outcomes. In nonsmokers, both fictive and experiential error signals predicted subjects' choices and possessed distinct neural correlates. In chronic smokers, choices were not guided by error signals derived from what might have happened, despite ongoing and robust neural correlates of these fictive errors. These data provide human neuroimaging support for computational models of addiction and suggest the addition of fictive learning signals to reinforcement learning accounts of drug dependence." }, { "pmid": "24840709", "title": "Disorders of compulsivity: a common bias towards learning habits.", "abstract": "Why do we repeat choices that we know are bad for us? Decision making is characterized by the parallel engagement of two distinct systems, goal-directed and habitual, thought to arise from two computational learning mechanisms, model-based and model-free. The habitual system is a candidate source of pathological fixedness. Using a decision task that measures the contribution to learning of either mechanism, we show a bias towards model-free (habit) acquisition in disorders involving both natural (binge eating) and artificial (methamphetamine) rewards, and obsessive-compulsive disorder. This favoring of model-free learning may underlie the repetitive behaviors that ultimately dominate in these disorders. Further, we show that the habit formation bias is associated with lower gray matter volumes in caudate and medial orbitofrontal cortex. Our findings suggest that the dysfunction in a common neurocomputational mechanism may underlie diverse disorders involving compulsion." }, { "pmid": "23421527", "title": "Neural correlates of feedback processing in obsessive-compulsive disorder.", "abstract": "Obsessive-compulsive disorder (OCD) patients show hyperactive performance monitoring when monitoring their own actions. Hyperactive performance monitoring is related to OCD symptomatology, like the unflexibility of compulsive behaviors, and was suggested as a potential endophenotype for the disorder. However, thus far the functioning of the performance monitoring system in OCD remains unclear in processes where performance is not monitored in one's own actions internally, but through external feedback during learning. The present study investigated whether electrocortical indicators of feedback processing are hyperactive, and whether feedback-guided learning is compromised in OCD. 
A modified deterministic four-choice object reversal learning task was used that required recurrent feedback-based behavioral adjustment in response to changing reward contingencies. Electrophysiological correlates of feedback processing (i.e. feedback-related negativity [FRN] and P300) were measured in 25 OCD patients and 25 matched healthy comparison subjects. Deficits in behavioral adjustment were found in terms of higher error rates of OCD patients in response to negative feedback. Whereas the FRN was unchanged for reversal negative feedback, it was reduced for negative feedback that indicated that a newly selected stimulus was still incorrect. The observed FRN reduction suggests attenuated monitoring of feedback during the learning process in OCD potentially contributing to a deficit in adaptive behavior reflected in obsessive thoughts and actions. The reduction of FRN amplitudes contrasts with overactive performance monitoring of self-generated errors. Nevertheless, the findings contribute to the theoretical framework of performance monitoring, suggesting a dissociation of processing systems for actions and feedback with specific alterations of these two systems in OCD." }, { "pmid": "28798131", "title": "Pavlovian conditioning-induced hallucinations result from overweighting of perceptual priors.", "abstract": "Some people hear voices that others do not, but only some of those people seek treatment. Using a Pavlovian learning task, we induced conditioned hallucinations in four groups of people who differed orthogonally in their voice-hearing and treatment-seeking statuses. People who hear voices were significantly more susceptible to the effect. Using functional neuroimaging and computational modeling of perception, we identified processes that differentiated voice-hearers from non-voice-hearers and treatment-seekers from non-treatment-seekers and characterized a brain circuit that mediated the conditioned hallucinations. These data demonstrate the profound and sometimes pathological impact of top-down cognitive processes on perception and may represent an objective means to discern people with a need for treatment from those without." } ]
Frontiers in Pharmacology
30792654
PMC6374626
10.3389/fphar.2019.00050
Semantic Queries Expedite MedDRA Terms Selection Thanks to a Dedicated User Interface: A Pilot Study on Five Medical Conditions
Background: Searching the MedDRA terminology is usually limited to a hierarchical search and/or a string search. Our objective was to compare user performance when using a new kind of user interface enabling semantic queries versus classical methods, and to evaluate the resulting improvement in term selection in MedDRA. Methods: We implemented a forms-based web interface, OntoADR Query Tools (OQT). It relies on OntoADR, a formal resource describing MedDRA terms using SNOMED CT concepts and corresponding semantic relations, enabling terminological reasoning. We then compared the time spent on five examples of medical conditions using OQT or the MedDRA web-based browser (MWB), as well as the precision and recall of the term selection. Results: OntoADR Query Tools allows the user to search MedDRA: search criteria are entered by selecting one semantic property from a dropdown list and one or more SNOMED CT concepts related to the range of the chosen property. The user is assisted in building the query and can add and combine criteria. The interface then displays the set of MedDRA terms matching the query. On average, the time spent on OQT (about 4 min 30 s) is significantly lower (−35%; p < 0.001) than the time spent on MWB (about 7 min). The System Usability Scale (SUS) gave a score of 62.19 for OQT (rated as good). We also demonstrated increased precision (+27%; p = 0.01) and recall (+34%; p = 0.02). Computed "performance" (correct terms found per minute) is more than three times better with OQT than with MWB. Discussion: This pilot study establishes the feasibility of our approach based on our initial assumption: performing MedDRA queries on the five selected medical conditions using terminological reasoning expedites term selection and improves search capabilities for pharmacovigilance end users. Evaluation with a larger number of users and medical conditions is required to establish whether OQT is appropriate for the needs of different user profiles, and to check whether the conclusions can be extended to other kinds of medical conditions. The application is currently limited by the non-exhaustive coverage of MedDRA by OntoADR, but it nevertheless shows good performance, which encourages continuing in the same direction.
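To make the query mechanism described above concrete, the sketch below shows one way such a semantic query could be expressed over an OntoADR-style RDF graph using Python and rdflib. It is only an illustration: the abstract does not state that OQT uses SPARQL, and the file name, namespace IRIs, property names (hasMorphology, hasFindingSite) and SNOMED CT identifiers are hypothetical placeholders rather than the actual OntoADR vocabulary.

# Minimal sketch of a semantic query over an OntoADR-style RDF graph.
# Assumptions: "ontoadr.owl", the ex: namespace, the property names and the
# SNOMED CT codes are illustrative placeholders, not the real OntoADR schema.
from rdflib import Graph

g = Graph()
g.parse("ontoadr.owl", format="xml")  # hypothetical local copy of OntoADR

QUERY = """
PREFIX ex:   <http://example.org/ontoadr#>
PREFIX sct:  <http://snomed.info/id/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT DISTINCT ?meddraTerm ?label WHERE {
    # Each criterion pairs one semantic property with one SNOMED CT concept,
    # mirroring the property dropdown plus concept selection in the OQT form.
    ?meddraTerm ex:hasMorphology  sct:50960005 ;   # placeholder code for 'Hemorrhage'
                ex:hasFindingSite sct:12921003 ;   # placeholder code for an upper GI structure
                rdfs:label        ?label .
}
"""

for row in g.query(QUERY):
    print(row.meddraTerm, row.label)

Combining criteria, as the interface allows, would simply amount to adding further property/concept triple patterns to the WHERE clause.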
Related Work
Developing innovative software, especially user interfaces, has been the subject of multiple studies. Shneiderman defined the "8 Golden Rules of Interface Design" (Shneiderman and Ben, 2003): keeping terminological consistency in the interface, reducing the number of interactions, offering feedback, dialogs, and simple error handling, allowing easy reversal of actions, making users actors of the process, and reducing the load on human short-term memory. Shackel and Dillon showed the importance of evaluating both technological acceptance and accessibility when releasing innovative software (Shackel, 1984; Dillon and Morris, 1996). Carroll dissected the psychology of users in human-computer interactions (Carroll, 1991) and showed the importance of cognitive ergonomics. We followed these development principles, which could explain why our tool is well received by users.
To our knowledge, no forms-based interface has previously been developed for knowledge engineering in pharmacovigilance. The only comparable work in the biomedical domain is from Joubert et al. (1998), who describe a software application in which the user selects pairs of concepts and a relation between them in order to build a conceptual graph and retrieve records from medical databases. However, it does not meet the needs of pharmacovigilance professionals, who aim to select medical terms. The SUS score for OQT suggests that forms-based interfaces offer better usability than existing systems.
Rogers (2010) described factors that can influence a decision to adopt or reject an innovation. In our case, the development of OQT helps to significantly improve the trialability of OntoADR (i.e., it makes reasoning over OntoADR feasible for everyone).
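Because both the abstract and the discussion above rely on the System Usability Scale to compare OQT with existing systems, a short sketch of the standard SUS computation may help. The formula (odd-numbered items contribute score − 1, even-numbered items contribute 5 − score, and the sum is scaled by 2.5) is the standard one; the example responses are invented and do not come from the study.

def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered (positively worded) items contribute (response - 1);
    even-numbered (negatively worded) items contribute (5 - response);
    the sum is scaled by 2.5, giving a score between 0 and 100.
    """
    if len(responses) != 10:
        raise ValueError("SUS expects exactly ten item responses")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Invented response set for illustration only (not data from the study).
print(sus_score([4, 2, 4, 2, 4, 2, 4, 3, 4, 2]))  # -> 72.5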
[ "17911807", "27692980", "22030036", "15649103", "24680984", "10082069", "16386470", "7719786", "22874155", "16185681", "9452985", "23122633", "26245245", "17604415", "27239556", "10719533", "18406213", "19745309", "25160157", "27348725", "27369567", "19757412", "26967899" ]
[ { "pmid": "17911807", "title": "PharmARTS: terminology web services for drug safety data coding and retrieval.", "abstract": "MedDRA and WHO-ART are the terminologies used to encode drug safety reports. The standardisation achieved with these terminologies facilitates: 1) The sharing of safety databases; 2) Data mining for the continuous reassessment of benefit-risk ratio at national or international level or in the pharmaceutical industry. There is some debate about the capacity of these terminologies for retrieving case reports related to similar medical conditions. We have developed a resource that allows grouping similar medical conditions more effectively than WHO-ART and MedDRA. We describe here a software tool facilitating the use of this terminological resource thanks to an RDF framework with support for RDF Schema inferencing and querying. This tool eases coding and data retrieval in drug safety." }, { "pmid": "27692980", "title": "[Automated grouping of terms associated to cardiac valve fibrosis in MedDRA].", "abstract": "AIM\nTo propose an alternative approach for building custom groupings of terms that complements the usual approach based on both hierarchical method (selection of reference groupings in medical dictionary for regulatory activities [MedDRA]) and/or textual method (string search), for case reports extraction from a pharmacovigilance database in response to a safety problem. Here we take cardiac valve fibrosis as an example.\n\n\nMETHODS\nThe list of terms obtained by an automated approach, based on querying ontology of adverse drug reactions (OntoADR), a knowledge base defining MedDRA terms through relationships with systematized nomenclature of medicine-clinical terms (SNOMED CT) concepts, was compared with the reference list consisting of 53 preferred terms obtained by hierarchical and textual method. Two queries were performed on OntoADR by using a dedicated software: OntoADR query tools. Both queries excluded congenital diseases, and included a procedure or an auscultation method performed on cardiac valve structures. Query 1 also considered MedDRA terms related to fibrosis, narrowing or calcification of heart valves, and query 2 MedDRA terms described according to one of these four SNOMED CT terms: \"Insufficiency\", \"Valvular sclerosis\", \"Heart valve calcification\" or \"Heart valve stenosis\".\n\n\nRESULTS\nThe reference grouping consisted of 53 MedDRA preferred terms. Our automated method achieved recall of 79% and precision of 100% for query 1 privileging morphological abnormalities, and recall of 100% and precision of 96% for query 2 privileging functional abnormalities.\n\n\nCONCLUSION\nAn alternative approach to MedDRA reference groupings for building custom groupings is feasible for cardiac valve fibrosis. OntoADR is still in development. Its application to other adverse reactions would require significant work for a knowledge engineer to define every MedDRA term, but such definitions could then be queried as many times as necessary by pharmacovigilance professionals." 
}, { "pmid": "22030036", "title": "A usability evaluation of a SNOMED CT based compositional interface terminology for intensive care.", "abstract": "OBJECTIVE\nTo evaluate the usability of a large compositional interface terminology based on SNOMED CT and the terminology application for registration of the reasons for intensive care admission in a Patient Data Management System.\n\n\nDESIGN\nObservational study with user-based usability evaluations before and 3 months after the system was implemented and routinely used.\n\n\nMEASUREMENTS\nUsability was defined by five aspects: effectiveness, efficiency, learnability, overall user satisfaction, and experienced usability problems. Qualitative (the Think-Aloud user testing method) and quantitative (the System Usability Scale questionnaire and Time-on-Task analyses) methods were used to examine these usability aspects.\n\n\nRESULTS\nThe results of the evaluation study revealed that the usability of the interface terminology fell short (SUS scores before and after implementation of 47.2 out of 100 and 37.5 respectively out of 100). The qualitative measurements revealed a high number (n=35) of distinct usability problems, leading to ineffective and inefficient registration of reasons for admission. The effectiveness and efficiency of the system did not change over time. About 14% (n=5) of the revealed usability problems were related to the terminology content based on SNOMED CT, while the remaining 86% (n=30) was related to the terminology application. The problems related to the terminology content were more severe than the problems related to the terminology application.\n\n\nCONCLUSIONS\nThis study provides a detailed insight into how clinicians interact with a controlled compositional terminology through a terminology application. The extensiveness, complexity of the hierarchy, and the language usage of an interface terminology are defining for its usability. Carefully crafted domain-specific subsets and a well-designed terminology application are needed to facilitate the use of a complex compositional interface terminology based on SNOMED CT." }, { "pmid": "15649103", "title": "Appraisal of the MedDRA conceptual structure for describing and grouping adverse drug reactions.", "abstract": "Computerised queries in spontaneous reporting systems for pharmacovigilance require reliable and reproducible coding of adverse drug reactions (ADRs). The aim of the Medical Dictionary for Regulatory Activities (MedDRA) terminology is to provide an internationally approved classification for efficient communication of ADR data between countries. Several studies have evaluated the domain completeness of MedDRA and whether encoded terms are coherent with physicians' original verbatim descriptions of the ADR. MedDRA terms are organised into five levels: system organ class (SOC), high level group terms (HLGTs), high level terms (HLTs), preferred terms (PTs) and low level terms (LLTs). Although terms may belong to different SOCs, no PT is related to more than one HLT within the same SOC. This hierarchical property ensures that terms cannot be counted twice in statistical studies, though it does not allow appropriate semantic grouping of PTs. For this purpose, special search categories (SSCs) [collections of PTs assembled from various SOCs] have been introduced in MedDRA to group terms with similar meanings. However, only a small number of categories are currently available and the criteria used to construct these categories have not been clarified. 
The objective of this work is to determine whether MedDRA contains the structural and terminological properties to group semantically linked adverse events in order to improve the performance of spontaneous reporting systems. Rossi Mori classifies terminological systems in three categories: first-generation systems, which represent terms as strings; second-generation systems, which dissect terminological phrases into a set of simpler terms; and third-generation systems, which provide advanced features to automatically retrieve the position of new terms in the classification and group sets of meaning-related terms. We applied Cimino's desiderata to show that MedDRA is not compatible with the properties of third-generation systems. Consequently, no tool can help for the automated positioning of new terms inside the hierarchy and SSCs have to be entered manually rather than automatically using the MedDRA files. One solution could be to link MedDRA to a third-generation system. This would allow the current MedDRA structure to be kept to ensure that end users have a common view on the same data and the addition of new computational properties to MedDRA." }, { "pmid": "24680984", "title": "Formalizing MedDRA to support semantic reasoning on adverse drug reaction terms.", "abstract": "Although MedDRA has obvious advantages over previous terminologies for coding adverse drug reactions and discovering potential signals using data mining techniques, its terminological organization constrains users to search terms according to predefined categories. Adding formal definitions to MedDRA would allow retrieval of terms according to a case definition that may correspond to novel categories that are not currently available in the terminology. To achieve semantic reasoning with MedDRA, we have associated formal definitions to MedDRA terms in an OWL file named OntoADR that is the result of our first step for providing an \"ontologized\" version of MedDRA. MedDRA five-levels original hierarchy was converted into a subsumption tree and formal definitions of MedDRA terms were designed using several methods: mappings to SNOMED-CT, semi-automatic definition algorithms or a fully manual way. This article presents the main steps of OntoADR conception process, its structure and content, and discusses problems and limits raised by this attempt to \"ontologize\" MedDRA." }, { "pmid": "10082069", "title": "The medical dictionary for regulatory activities (MedDRA).", "abstract": "The International Conference on Harmonisation has agreed upon the structure and content of the Medical Dictionary for Regulatory Activities (MedDRA) version 2.0 which should become available in the early part of 1999. This medical terminology is intended for use in the pre- and postmarketing phases of the medicines regulatory process, covering diagnoses, symptoms and signs, adverse drug reactions and therapeutic indications, the names and qualitative results of investigations, surgical and medical procedures, and medical/social history. It can be used for recording adverse events and medical history in clinical trials, in the analysis and tabulations of data from these trials and in the expedited submission of safety data to government regulatory authorities, as well as in constructing standard product information and documentation for applications for marketing authorisation. After licensing of a medicine, it may be used in pharmacovigilance and is expected to be the preferred terminology for international electronic regulatory communication. 
MedDRA is a hierarchical terminology with 5 levels and is multiaxial: terms may exist in more than 1 vertical axis, providing specificity of terms for data entry and flexibility in data retrieval. Terms in MedDRA were derived from several sources including the WHO's adverse reaction terminology (WHO-ART), Coding Symbols for a Thesaurus of Adverse Reaction Terms (COSTART), International Classification of Diseases (ICD) 9 and ICD9-CM. It will be maintained, further developed and distributed by a Maintenance Support Services Organisation (MSSO). It is anticipated that using MedDRA will improve the quality of data captured on databases, support effective analysis by providing clinically relevant groupings of terms and facilitate electronic communication of data, although as a new tool, users will need to invest time in gaining expertise in its use." }, { "pmid": "16386470", "title": "In defense of the Desiderata.", "abstract": "A 1998 paper that delineated desirable characteristics, or desiderata for controlled medical terminologies attempted to summarize emerging consensus regarding structural issues of such terminologies. Among the Desiderata was a call for terminologies to be \"concept oriented.\" Since then, research has trended toward the extension of terminologies into ontologies. A paper by Smith, entitled \"From Concepts to Clinical Reality: An Essay on the Benchmarking of Biomedical Terminologies\" urges a realist approach that seeks terminologies composed of universals, rather than concepts. The current paper addresses issues raised by Smith and attempts to extend the Desiderata, not away from concepts, but towards recognition that concepts and universals must both be embraced and can coexist peaceably in controlled terminologies. To that end, additional Desiderata are defined that deal with the purpose, rather than the structure, of controlled medical terminologies." }, { "pmid": "7719786", "title": "Knowledge-based approaches to the maintenance of a large controlled medical terminology.", "abstract": "OBJECTIVE\nDevelop a knowledge-based representation for a controlled terminology of clinical information to facilitate creation, maintenance, and use of the terminology.\n\n\nDESIGN\nThe Medical Entities Dictionary (MED) is a semantic network, based on the Unified Medical Language System (UMLS), with a directed acyclic graph to represent multiple hierarchies. Terms from four hospital systems (laboratory, electrocardiography, medical records coding, and pharmacy) were added as nodes in the network. Additional knowledge about terms, added as semantic links, was used to assist in integration, harmonization, and automated classification of disparate terminologies.\n\n\nRESULTS\nThe MED contains 32,767 terms and is in active clinical use. Automated classification was successfully applied to terms for laboratory specimens, laboratory tests, and medications. One benefit of the approach has been the automated inclusion of medications into multiple pharmacologic and allergenic classes that were not present in the pharmacy system. Another benefit has been the reduction of maintenance efforts by 90%.\n\n\nCONCLUSION\nThe MED is a hybrid of terminology and knowledge. It provides domain coverage, synonymy, consistency of views, explicit relationships, and multiple classification while preventing redundancy, ambiguity (homonymy) and misclassification." 
}, { "pmid": "22874155", "title": "Automatic generation of MedDRA terms groupings using an ontology.", "abstract": "In the context of PROTECT European project, we have developed an ontology of adverse drug reactions (OntoADR) based on the original MedDRA hierarchy and a query-based method to achieve automatic MedDRA terms groupings for improving pharmacovigilance signal detection. Those groupings were evaluated against standard handmade MedDRA groupings corresponding to first priority pharmacovigilance safety topics. Our results demonstrate that this automatic method allows catching most of the terms present in the reference groupings, and suggest that it could offer an important saving of time for the achievement of pharmacovigilance groupings. This paper describes the theoretical context of this work, the evaluation methodology, and presents the principal results." }, { "pmid": "16185681", "title": "Building an ontology of adverse drug reactions for automated signal generation in pharmacovigilance.", "abstract": "Automated signal generation in pharmacovigilance implements unsupervised statistical machine learning techniques in order to discover unknown adverse drug reactions (ADR) in spontaneous reporting systems. The impact of the terminology used for coding ADRs has not been addressed previously. The Medical Dictionary for Regulatory Activities (MedDRA) used worldwide in pharmacovigilance cases does not provide formal definitions of terms. We have built an ontology of ADRs to describe semantics of MedDRA terms. Ontological subsumption and approximate matching inferences allow a better grouping of medically related conditions. Signal generation performances are significantly improved but time consumption related to modelization remains very important." }, { "pmid": "9452985", "title": "UMLS-based conceptual queries to biomedical information databases: an overview of the project ARIANE. Unified Medical Language System.", "abstract": "OBJECTIVE\nThe aim of the project ARIANE is to model and implement seamless, natural, and easy-to-use interfaces with various kinds of heterogeneous biomedical information databases.\n\n\nDESIGN\nA conceptual model of some of the Unified Medical Language System (UMLS) knowledge sources has been developed to help end users to query information databases. A query is represented by a conceptual graph that translates the deep structure of an end-user's interest in a topic. A computational model exploits this conceptual model to build a query interactively represented as query graph. A query graph is then matched to the data graph built with data issued from each record of a database by means of a pattern-matching (projection) rule that applies to conceptual graphs.\n\n\nRESULTS\nPrototypes have been implemented to test the feasibility of the model with different kinds of information databases. Three cases are studied: 1) information in records is structured according to the UMLS knowledge sources; 2) information is able to be structured without error in the frame of the UMLS knowledge; 3) information cannot be structured. In each case the pattern-matching is processed by the projection rule according to the structure of information that has been implemented in the databases.\n\n\nCONCLUSION\nThe conceptual graphs theory provides with a homogeneous and powerful formalism able to represent both concepts, instances of concepts in medical contexts, and associations by means of relationships, and to represent data at different levels of details. 
The conceptual-graphs formalism allows powerful capabilities to operate a semantic integration of information databases using the UMLS knowledge sources." }, { "pmid": "23122633", "title": "Towards an ontology for data quality in integrated chronic disease management: a realist review of the literature.", "abstract": "PURPOSE\nEffective use of routine data to support integrated chronic disease management (CDM) and population health is dependent on underlying data quality (DQ) and, for cross system use of data, semantic interoperability. An ontological approach to DQ is a potential solution but research in this area is limited and fragmented.\n\n\nOBJECTIVE\nIdentify mechanisms, including ontologies, to manage DQ in integrated CDM and whether improved DQ will better measure health outcomes.\n\n\nMETHODS\nA realist review of English language studies (January 2001-March 2011) which addressed data quality, used ontology-based approaches and is relevant to CDM.\n\n\nRESULTS\nWe screened 245 papers, excluded 26 duplicates, 135 on abstract review and 31 on full-text review; leaving 61 papers for critical appraisal. Of the 33 papers that examined ontologies in chronic disease management, 13 defined data quality and 15 used ontologies for DQ. Most saw DQ as a multidimensional construct, the most used dimensions being completeness, accuracy, correctness, consistency and timeliness. The majority of studies reported tool design and development (80%), implementation (23%), and descriptive evaluations (15%). Ontological approaches were used to address semantic interoperability, decision support, flexibility of information management and integration/linkage, and complexity of information models.\n\n\nCONCLUSION\nDQ lacks a consensus conceptual framework and definition. DQ and ontological research is relatively immature with little rigorous evaluation studies published. Ontology-based applications could support automated processes to address DQ and semantic interoperability in repositories of routinely collected data to deliver integrated CDM. We advocate moving to ontology-based design of information systems to enable more reliable use of routine data to measure health mechanisms and impacts." }, { "pmid": "26245245", "title": "Using ontologies to improve semantic interoperability in health data.", "abstract": "The present-day health data ecosystem comprises a wide array of complex heterogeneous data sources. A wide range of clinical, health care, social and other clinically relevant information are stored in these data sources. These data exist either as structured data or as free-text. These data are generally individual person-based records, but social care data are generally case based and less formal data sources may be shared by groups. The structured data may be organised in a proprietary way or be coded using one-of-many coding, classification or terminologies that have often evolved in isolation and designed to meet the needs of the context that they have been developed. This has resulted in a wide range of semantic interoperability issues that make the integration of data held on these different systems changing. We present semantic interoperability challenges and describe a classification of these. We propose a four-step process and a toolkit for those wishing to work more ontologically, progressing from the identification and specification of concepts to validating a final ontology. 
The four steps are: (1) the identification and specification of data sources; (2) the conceptualisation of semantic meaning; (3) defining to what extent routine data can be used as a measure of the process or outcome of care required in a particular study or audit and (4) the formalisation and validation of the final ontology. The toolkit is an extension of a previous schema created to formalise the development of ontologies related to chronic disease management. The extensions are focused on facilitating rapid building of ontologies for time-critical research studies." }, { "pmid": "17604415", "title": "Standardised MedDRA queries: their role in signal detection.", "abstract": "Standardised MedDRA (Medical Dictionary for Regulatory Activities) queries (SMQs) are a newly developed tool to assist in the retrieval of cases of interest from a MedDRA-coded database. SMQs contain terms related to signs, symptoms, diagnoses, syndromes, physical findings, laboratory and other physiological test data etc, that are associated with the medical condition of interest. They are being developed jointly by CIOMS and the MedDRA Maintenance and Support Services Organization (MSSO) and are provided as an integral part of a MedDRA subscription. During their development, SMQs undergo testing to assure that they are able to retrieve cases of interest within the defined scope of the SMQ. This paper describes the features of SMQs that allow for flexibility in their application, such as 'narrow' and 'broad' sub-searches, hierarchical grouping of sub-searches and search algorithms. In addition, as with MedDRA, users can request changes to SMQs. SMQs are maintained in synchrony with MedDRA versions by internal maintenance processes in the MSSO. The list of safety topics to be developed into SMQs is long and comprehensive. The CIOMS Working Group retains a list of topics to be developed and periodically reviews the list for priority and relevance. As of mid-2007, 37 SMQs are in production use and several more are under development. The potential uses of SMQs in safety analysis will be discussed including their role in signal detection and evaluation." }, { "pmid": "10719533", "title": "Reconciling users' needs and formal requirements: issues in developing a reusable ontology for medicine.", "abstract": "A common language, or terminology, for representing what clinicians have said and done is an important requirement for individual clinical systems, and it is a pre-requisite for integrating disparate applications in a distributed telematic healthcare environment. Formal representations based on description logics or closely related formalisms are increasingly used for representing medical terminologies. GALEN's experience in using one such formalism raises two major issues, as follows: how to make ontologies based on description logics easy to use and understand for both clinicians and applications developers; what features are required of the ontology and description logic if they are to achieve their aims. 
Based on our experience we put forward four contentions: two relating to each of these two issues, as follows: that natural language generation is essential to make a description logic based ontology accessible to users; that the description logic based ontology should be treated as an \"assembly language\" and accessed via \"intermediate representations\" oriented to users and \"perspectives\" adapting it to specific applications; that independence and reuse are best supported by partitioning the subsumption hierarchy of elementary concepts into orthogonal taxonomies, each of which forms a pure tree in which the branches at each level are disjoint but nonexhaustive subconcepts of the parent concept; that the expressivity of the description logic must include support for transitive relations despite the computational cost, and that this computational cost is acceptable in practice. The authors argue that these features will be necessary, though by no means sufficient, for the development of any large reusable ontology for medicine." }, { "pmid": "18406213", "title": "Heterogeneous but \"standard\" coding systems for adverse events: Issues in achieving interoperability between apples and oranges.", "abstract": "Monitoring adverse events (AEs) is an important part of clinical research and a crucial target for data standards. The representation of adverse events themselves requires the use of controlled vocabularies with thousands of needed clinical concepts. Several data standards for adverse events currently exist, each with a strong user base. The structure and features of these current adverse event data standards (including terminologies and classifications) are different, so comparisons and evaluations are not straightforward, nor are strategies for their harmonization. Three different data standards - the Medical Dictionary for Regulatory Activities (MedDRA) and the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) terminologies, and Common Terminology Criteria for Adverse Events (CTCAE) classification - are explored as candidate representations for AEs. This paper describes the structural features of each coding system, their content and relationship to the Unified Medical Language System (UMLS), and unsettled issues for future interoperability of these standards." }, { "pmid": "19745309", "title": "Using the CEN/ISO standard for categorial structure to harmonise the development of WHO international terminologies.", "abstract": "Semantic interoperability (SIOp) is a major issue for health care systems having to share information across professionals, teams, legacies, countries, languages and citizens. The World Health Organisation (WHO) develops and updates a family of health care terminologies (ICD, ICF, ICHI and ICPS) and has embarked on an open web-based cooperation to revise ICD 11 using ontology driven tools. The International Health Terminology Standard Development Organisation (IHTSDO) updates, translates and maps SNOMED CT to ICD 10. We present the application of the CEN/ISO standard on categorial structure to bind terminologies and ontologies to harmonise and to map between these international terminologies." }, { "pmid": "25160157", "title": "Ci4SeR--curation interface for semantic resources--evaluation with adverse drug reactions.", "abstract": "Evaluation and validation have become a crucial problem for the development of semantic resources. 
We developed Ci4SeR, a Graphical User Interface to optimize the curation work (not taking into account structural aspects), suitable for any type of resource with lightweight description logic. We tested it on OntoADR, an ontology of adverse drug reactions. A single curator has reviewed 326 terms (1020 axioms) in an estimated time of 120 hours (2.71 concepts and 8.5 axioms reviewed per hour) and added 1874 new axioms (15.6 axioms per hour). Compared with previous manual endeavours, the interface allows increasing the speed-rate of reviewed concepts by 68% and axiom addition by 486%. A wider use of Ci4SeR would help semantic resources curation and improve completeness of knowledge modelling." }, { "pmid": "27348725", "title": "MedDRA® automated term groupings using OntoADR: evaluation with upper gastrointestinal bleedings.", "abstract": "OBJECTIVE\nTo propose a method to build customized sets of MedDRA terms for the description of a medical condition. We illustrate this method with upper gastrointestinal bleedings (UGIB).\n\n\nRESEARCH DESIGN AND METHODS\nWe created a broad list of MedDRA terms related to UGIB and defined a gold standard with the help of experts. MedDRA terms were formally described in a semantic resource named OntoADR. We report the use of two semantic queries that automatically select candidate terms for UGIB. Query 1 is a combination of two SNOMED CT concepts describing both morphology 'Hemorrhage' and finding site 'Upper digestive tract structure'. Query 2 complements Query 1 by taking into account MedDRA terms associated to SNOMED CT concepts describing clinical manifestations 'Melena' or 'Hematemesis'.\n\n\nRESULTS\nWe compared terms in queries and our gold standard achieving a recall of 71.0% and a precision of 81.4% for query 1 (F1 score 0.76); and a recall of 96.7% and a precision of 77.0% for query 2 (F1 score 0.86).\n\n\nCONCLUSIONS\nOur results demonstrate the feasibility of applying knowledge engineering techniques for building customized sets of MedDRA terms. Additional work is necessary to improve precision and recall, and confirm the interest of the proposed strategy." }, { "pmid": "27369567", "title": "OntoADR a semantic resource describing adverse drug reactions to support searching, coding, and information retrieval.", "abstract": "INTRODUCTION\nEfficient searching and coding in databases that use terminological resources requires that they support efficient data retrieval. The Medical Dictionary for Regulatory Activities (MedDRA) is a reference terminology for several countries and organizations to code adverse drug reactions (ADRs) for pharmacovigilance. Ontologies that are available in the medical domain provide several advantages such as reasoning to improve data retrieval. The field of pharmacovigilance does not yet benefit from a fully operational ontology to formally represent the MedDRA terms. Our objective was to build a semantic resource based on formal description logic to improve MedDRA term retrieval and aid the generation of on-demand custom groupings by appropriately and efficiently selecting terms: OntoADR.\n\n\nMETHODS\nThe method consists of the following steps: (1) mapping between MedDRA terms and SNOMED-CT, (2) generation of semantic definitions using semi-automatic methods, (3) storage of the resource and (4) manual curation by pharmacovigilance experts.\n\n\nRESULTS\nWe built a semantic resource for ADRs enabling a new type of semantics-based term search. 
OntoADR adds new search capabilities relative to previous approaches, overcoming the usual limitations of computation using lightweight description logic, such as the intractability of unions or negation queries, bringing it closer to user needs. Our automated approach for defining MedDRA terms enabled the association of at least one defining relationship with 67% of preferred terms. The curation work performed on our sample showed an error level of 14% for this automated approach. We tested OntoADR in practice, which allowed us to build custom groupings for several medical topics of interest.\n\n\nDISCUSSION\nThe methods we describe in this article could be adapted and extended to other terminologies which do not benefit from a formal semantic representation, thus enabling better data retrieval performance. Our custom groupings of MedDRA terms were used while performing signal detection, which suggests that the graphical user interface we are currently implementing to process OntoADR could be usefully integrated into specialized pharmacovigilance software that rely on MedDRA." }, { "pmid": "19757412", "title": "Data mining on electronic health record databases for signal detection in pharmacovigilance: which events to monitor?", "abstract": "PURPOSE\nData mining on electronic health records (EHRs) has emerged as a promising complementary method for post-marketing drug safety surveillance. The EU-ADR project, funded by the European Commission, is developing techniques that allow mining of EHRs for adverse drug events across different countries in Europe. Since mining on all possible events was considered to unduly increase the number of spurious signals, we wanted to create a ranked list of high-priority events.\n\n\nMETHODS\nScientific literature, medical textbooks, and websites of regulatory agencies were reviewed to create a preliminary list of events that are deemed important in pharmacovigilance. Two teams of pharmacovigilance experts independently rated each event on five criteria: 'trigger for drug withdrawal', 'trigger for black box warning', 'leading to emergency department visit or hospital admission', 'probability of event to be drug-related', and 'likelihood of death'. In case of disagreement, a consensus score was obtained. Ordinal scales between 0 and 3 were used for rating the criteria, and an overall score was computed to rank the events.\n\n\nRESULTS\nAn initial list comprising 23 adverse events was identified. After rating all the events and calculation of overall scores, a ranked list was established. The top-ranking events were: cutaneous bullous eruptions, acute renal failure, anaphylactic shock, acute myocardial infarction, and rhabdomyolysis.\n\n\nCONCLUSIONS\nA ranked list of 23 adverse drug events judged as important in pharmacovigilance was created to permit focused data mining. The list will need to be updated periodically as knowledge on drug safety evolves and new issues in drug safety arise." }, { "pmid": "26967899", "title": "PepeSearch: Semantic Data for the Masses.", "abstract": "With the emergence of the Web of Data, there is a need of tools for searching and exploring the growing amount of semantic data. Unfortunately, such tools are scarce and typically require knowledge of SPARQL/RDF. We propose here PepeSearch, a portable tool for searching semantic datasets devised for mainstream users. PepeSearch offers a multi-class search form automatically constructed from a SPARQL endpoint. 
We have tested PepeSearch with 15 participants searching a Linked Open Data version of the Norwegian Register of Business Enterprises for non-trivial challenges. Retrieval performance was encouragingly high and usability ratings were also very positive, thus suggesting that PepeSearch is effective for searching semantic datasets by mainstream users. We also assessed its portability by configuring PepeSearch to query other SPARQL endpoints." } ]
Frontiers in Neurology
30800095
PMC6375880
10.3389/fneur.2018.01066
Body Weight Support Combined With Treadmill in the Rehabilitation of Parkinsonian Gait: A Review of Literature and New Data From a Controlled Study
Background: Gait disorders are disabling symptoms of Parkinson's Disease (PD). The effectiveness of rehabilitation with Body Weight Support Treadmill Training (BWSTT) has been demonstrated in patients with stroke and spinal cord injury, but limited data are available in PD. Aims: To investigate the efficacy of BWSTT in the rehabilitation of gait in PD patients. Methods: Thirty-six PD inpatients were enrolled and underwent rehabilitation treatment for 4 weeks, with daily sessions. Subjects were randomly divided into two groups: both groups received daily 40-min sessions of traditional physiokinesitherapy, followed by 20-min sessions of either overground gait training (Control group) or BWSTT (BWSTT group). The efficacy of BWSTT was evaluated with clinical scales and Computerized Gait Analysis (CGA). Patients were tested at baseline (T0) and at the end of the 4-week rehabilitation period (T1). Results: Both the BWSTT and Control groups showed significant improvements in clinical scales such as the FIM and UPDRS and in gait parameters. Although we failed to detect statistically significant differences between the groups in the clinical and gait parameters, the intragroup analysis captured a specific pattern of qualitative improvement, associated with cadence and stride duration in the BWSTT group and with the swing/stance ratio in the Control group. Four patients with chronic pain or anxious symptoms did not tolerate BWSTT. Conclusions: BWSTT and traditional rehabilitation treatment are both effective in improving clinical motor function and kinematic gait parameters. BWSTT may be an option for PD patients with specific symptoms that limit traditional overground gait training, e.g., severe postural instability, balance disorders, or orthostatic hypotension. BWSTT is generally well tolerated, though caution is needed in subjects with chronic pain or anxious symptoms. Clinical Trial Registration: www.ClinicalTrials.gov, identifier: NCT03815409
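The outcome measures above include Computerized Gait Analysis (CGA) parameters such as cadence, stride duration, and the swing/stance ratio. The sketch below is only a minimal illustration, under invented data, of how such spatiotemporal parameters can be derived from per-foot gait-event timestamps (heel strike, toe off); it is not the CGA software actually used in the study, and the event times and walkway length are hypothetical.

```python
# Illustrative only: deriving common spatiotemporal gait parameters from per-foot
# gait-event timestamps (heel strike, toe off), as a CGA system might export them.
# All event times and the walkway length below are hypothetical.

def gait_parameters(heel_strikes, toe_offs, distance_m):
    """Event times in seconds for one foot, sorted; distance_m is the distance
    covered between the first and last heel strike, in metres."""
    stride_durations = [b - a for a, b in zip(heel_strikes, heel_strikes[1:])]
    mean_stride = sum(stride_durations) / len(stride_durations)
    cadence = 120.0 / mean_stride                 # steps/min (two steps per stride)
    # Stance: heel strike -> toe off of the same foot; swing: toe off -> next heel strike
    stances = [to - hs for hs, to in zip(heel_strikes, toe_offs)]
    swings = [hs - to for to, hs in zip(toe_offs, heel_strikes[1:])]
    swing_stance_ratio = (sum(swings) / len(swings)) / (sum(stances) / len(stances))
    walk_time = heel_strikes[-1] - heel_strikes[0]
    speed = distance_m / walk_time                # m/s over the recorded walk
    stride_length = speed * mean_stride           # metres covered per stride
    return {"cadence_steps_per_min": round(cadence, 1),
            "stride_duration_s": round(mean_stride, 2),
            "swing_stance_ratio": round(swing_stance_ratio, 2),
            "gait_speed_m_s": round(speed, 2),
            "stride_length_m": round(stride_length, 2)}

# Hypothetical events for one foot over a 10-m walk
hs = [0.00, 1.10, 2.18, 3.30, 4.41, 5.50, 6.62, 7.70]
to = [0.68, 1.78, 2.88, 3.98, 5.08, 6.20, 7.30, 8.40]
print(gait_parameters(hs, to, distance_m=10.0))
```

Real gait-analysis systems also report bilateral and variability measures (e.g., step-length asymmetry, stride-time variability), which follow the same event-based logic.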
Related Works

BWS Delivered Without Robotic Devices

The first report of the efficacy of BWSTT in the gait rehabilitation of PD is by Miyai et al. Ten patients with PD were enrolled in a cross-over study and treated for 4 consecutive weeks with BWSTT (20% unweighting for 12 min followed by a further 12-min period at 10% unweighting) or conventional physical therapy (CPT). The Authors showed that BWSTT was superior to CPT in improving gait disturbances and disability at the end of the rehabilitative period; more specifically, BWSTT proved superior to CPT in improving UPDRS scores, gait speed, and stride length (10). In 2002, the same study group evaluated the 6-month retention of BWSTT in PD. Twenty-four patients with PD were randomized to receive BWSTT (20% unweighting for 10 min + 10% unweighting for 10 min + 0% unweighting for an additional 10-min period) or CPT 3 times/week for 4 consecutive weeks. All patients were clinically evaluated at baseline and then monthly for 6 months. In this series, gait speed improved significantly more with BWSTT than with CPT only at month 1, whereas the improvement in stride length was more marked in the BWSTT group than in the CPT group and persisted until month 4 (11).

Toole et al. showed that 6 weeks of BWSTT increased gait speed and stride length, evaluated with clinical tests, and improved balance, measured with Computerized Dynamic Posturography. Of note, in this study no statistical difference in gait was observed when comparing patients treated with the treadmill alone and patients treated with the treadmill combined with weight support (15).

In 2008, Fisher et al. speculated on the possible central mechanism responsible for the clinical effects of BWSTT. Thirty subjects with PD were randomly assigned to three groups: a high-intensity group (24 sessions of BWSTT), a low-intensity group (24 sessions of CPT), and a zero-intensity group (8 weeks of education classes). Again, the high-intensity group improved the most at the end of the treatment period, in particular in gait speed, step length, stride length, and double support. Of note, in this study a subgroup of patients was also tested with transcranial magnetic stimulation: in the BWSTT group the Authors recorded a lengthening of the cortical silent period, postulating that high-intensity training improved neuronal plasticity in PD through BDNF and GABA modulation (25).

More recently, Rose et al. studied the efficacy of high-intensity locomotor training under BWS conditions achieved with a positive-pressure antigravity treadmill. Comparing the training period (3 sessions/week for 8 weeks) with a control period (no intervention), they found a significant improvement in the MDS-UPDRS (total and motor sub-scale) and in walking distance (26). Ganesan et al. randomized 60 PD patients to 3 groups: (1) no specific exercise activity, (2) conventional gait training (30-min sessions, 4 times/week for 1 month), and (3) BWSTT (30-min sessions, 4 times/week for 1 month). At the end of a 4-week follow-up, both intervention groups showed an improvement in the UPDRS score (total, motor, and sub-scores) and in gait parameters (walking distance, speed, and step length) when compared with the non-exercising group; moreover, BWSTT appeared to be significantly superior to conventional gait training.
Unlike the previous study, this latter trial used an instrumental analysis of gait, which unfortunately was performed while the subjects were walking on the treadmill (instrumented 2-min walk test) rather than during unassisted overground gait (23).

Lander and Moran studied the spatiotemporal gait effects of BWSTT with an instrumented 6-m device (GAITRite, CIR Systems); they examined the effects of a single session of BWSTT in PD patients and healthy controls and showed improved gait speed and cadence in both groups. Unfortunately, they did not investigate the effect of repeated BWSTT sessions (24).

In the future, we hope that BWS can be improved and combined with novel technologies in order to develop new, individualized rehabilitative strategies. In this view, Park et al. combined BWS with a treadmill designed to adapt the walking speed to the patient's voluntary control via a feedback/feedforward controller. Moreover, the environment around this BWSTT setup was enriched by a virtual reality system able to simulate real-life conditions. This approach proved safe and allowed the therapist to treat patients in more realistic overground-like gait conditions; indeed, the use of virtual obstacles, such as walls or narrow spaces, in the virtual reality setting allowed the Authors to study freezing of gait in the laboratory (27).
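Park et al. describe the self-paced behaviour only at a high level; the following is a generic, hypothetical sketch of a feedback/feedforward belt-speed controller of that kind. The gains, control period, and toy subject model are invented for illustration and are not taken from their system.

```python
# Illustrative only: a generic feedback/feedforward belt-speed controller of the
# kind used by self-paced treadmills. Gains, control period, and the toy subject
# model are invented; this is not the controller described by Park et al.

DT = 0.01      # control period (s)
KP = 1.5       # feedback gain on the subject's position error (1/s)
ALPHA = 0.1    # low-pass factor for the feedforward speed estimate

def belt_speed_update(belt_v, pos, pos_prev, v_ff):
    """One control step; pos is the subject's offset from the belt midpoint (m)."""
    subj_speed_est = (pos - pos_prev) / DT + belt_v      # estimated over-ground speed
    v_ff = (1 - ALPHA) * v_ff + ALPHA * subj_speed_est   # feedforward: track that speed
    v_fb = KP * pos                                      # feedback: pull subject back to midpoint
    return max(0.0, v_ff + v_fb), v_ff

# Toy simulation: the subject walks at 1.2 m/s while the belt starts at 0.8 m/s
belt_v, v_ff, pos, pos_prev = 0.8, 0.8, 0.0, 0.0
subject_speed = 1.2
for _ in range(2000):                                    # 20 s of simulated walking
    pos_prev, pos = pos, pos + (subject_speed - belt_v) * DT
    belt_v, v_ff = belt_speed_update(belt_v, pos, pos_prev, v_ff)
print(f"belt speed after 20 s: {belt_v:.2f} m/s (subject walks at {subject_speed} m/s)")
```

The feedforward term tracks the subject's estimated over-ground speed, while the feedback term corrects drift away from the belt midpoint; practical implementations would additionally limit speed and acceleration for safety.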
Another example is the combination of BWS with cue-based rehabilitation: Schlick et al. showed that BWSTT combined with visual cues was more effective in improving step length and gait symmetry than an un-cued condition. It is also noteworthy that this multimodal approach was efficient and well tolerated in an advanced PD patient at Hoehn and Yahr stage V (28).

BWSTT Delivered With Robotic Devices

Ustinova et al. published the first positive case report on the short-term efficacy of BWSTT for gait rehabilitation delivered to a PD patient with a robotic device (Lokomat, Hocoma Inc., Volketswil, Switzerland). The intervention consisted of 2 weeks of gait training, delivered 3 times per week, with each session lasting 90-120 min (29).

Lo et al. conducted a pilot study to assess the efficacy of BWSTT delivered with the Lokomat unit in reducing the frequency of freezing of gait (FOG) in PD. The Authors reported a 20% reduction in the average number of daily FOG episodes and a 14% improvement in the FOG-questionnaire score (19).

In 2012, Picelli et al. enrolled 41 PD patients in the first randomized controlled study comparing the efficacy of BWSTT delivered with robot-assisted gait training (RAGT; Gait Trainer GT1) with that of CPT (not focused on gait training) in improving gait in PD. They showed that RAGT was significantly superior to CPT in improving the 6-min walking test, the 10-meter walking test, stride length, the single/double support ratio, the Parkinson's Fatigue Scale, and the UPDRS score (20).

Carda et al. subsequently designed a randomized controlled study to assess the superiority of robotic gait training with BWS (Lokomat, with 50% unweighting for 15 min followed by 30% unweighting for an additional 15-min period) over treadmill training without BWS. The Authors failed to record any significant differences between groups on the 6-min walking test, the 10-meter walking test, and the Timed Up and Go test, although all parameters improved significantly in both groups, with a positive effect persisting up to 6 months after rehabilitation (30).

In the paper published by Sale et al., the main aim was the comparison of a new end-effector robotic BWS device (G-EO system) with treadmill training without weight support. After 4 weeks of rehabilitation, the statistical analysis showed a significant improvement with the robotic intervention in gait speed, step length, and stride length, but the between-group analysis was not statistically significant (21).

In 2013, Picelli et al. designed a comprehensive randomized controlled study to compare robotic BWSTT (RAGT, Gait Trainer GT1) with treadmill training without BWS (TT) and with CPT. Sixty subjects with mild to moderate PD were enrolled and evaluated before treatment (T0), at the end of a 4-week rehabilitative programme (T1), and after 3 months (T2). This study failed to demonstrate the superiority of RAGT over TT in improving gait speed; in contrast, both RAGT and TT proved more effective than CPT with regard to gait speed and walking capacity. It is worth noting that the improvement in gait speed was considered clinically significant (namely > 0.25 m/s at the 10-meter test) only after the RAGT approach (31).

Finally, Galli et al. compared the effects of BWS delivered with a robotic end effector (G-EO system) with TT not only on spatio-temporal gait parameters but also on the range of motion of the main lower-limb joints. The results showed that robotic rehabilitation produced an improvement in the kinematic gait profile at the proximal level (hip and pelvis) when compared with TT without BWS. These results are clinically useful because they suggest that rehabilitation with BWS and robotic gait training could be recommended in specific sub-groups of PD patients (for example, those with a deficit of pelvis and hip mobility at baseline) (22).
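Several of the trials above, like the controlled study reported in this paper, contrast within-group (pre vs. post) improvements with between-group differences. The sketch below is only a rough illustration of that two-level analysis on simulated data; the group sizes, effect sizes, and test choices are assumptions and do not reproduce the statistics of any study cited here.

```python
# Illustrative only: intragroup (T1 vs. T0) and between-group (change-score)
# comparisons of one gait parameter. All values are simulated and hypothetical;
# this is not the analysis pipeline of any study cited above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical gait speed (m/s) at baseline (T0) and after 4 weeks (T1), 18 per group
bwstt_t0 = rng.normal(0.95, 0.15, 18)
bwstt_t1 = bwstt_t0 + rng.normal(0.08, 0.05, 18)     # assumed within-group gain
ctrl_t0 = rng.normal(0.95, 0.15, 18)
ctrl_t1 = ctrl_t0 + rng.normal(0.06, 0.05, 18)

# Intragroup analysis: paired non-parametric test of T1 vs. T0 within each group
for name, t0, t1 in [("BWSTT", bwstt_t0, bwstt_t1), ("Control", ctrl_t0, ctrl_t1)]:
    _, p = stats.wilcoxon(t1, t0)
    print(f"{name}: median change {np.median(t1 - t0):+.3f} m/s, Wilcoxon p = {p:.4f}")

# Between-group analysis: compare T1 - T0 change scores across groups
_, p_between = stats.mannwhitneyu(bwstt_t1 - bwstt_t0, ctrl_t1 - ctrl_t0,
                                  alternative="two-sided")
print(f"Between groups (change scores): Mann-Whitney p = {p_between:.4f}")
```

Non-parametric tests are used here only because gait measures in small samples are often non-normal; the cited studies employed a variety of statistical models.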
[ "10842411", "8800948", "4655275", "742658", "2202385", "2230833", "8420164", "9613733", "1865232", "10895994", "12370870", "28815562", "23850614", "2766124", "16403997", "16340099", "24021298", "20946640", "22258155", "23706025", "27678210", "26008873", "28222548", "18534554", "23187043", "22275661", "22019972", "20666620", "22623206", "23490463", "25175601", "29027544", "17115387", "10460115", "18982238", "24659140", "6407776", "11244017", "11378250", "9454324", "9010395", "9577399", "8800948", "9549526", "10388793", "9549526", "15248294", "17117354" ]
[ { "pmid": "10842411", "title": "Movement disorders in people with Parkinson disease: a model for physical therapy.", "abstract": "People who are diagnosed with idiopathic Parkinson disease (PD) experience movement disorders that, if not managed, can lead to considerable disability. The premise of this perspective is that physical therapy for people with PD relies on clinicians having: (1) up-to-date knowledge of the pathogenesis of movement disorders, (2) the ability to recognize common movement disorders in people with PD, (3) the ability to implement a basic management plan according to a person's stage of disability, and (4) problem-solving skills that enable treatment plans to be tailored to individual needs. This article will present a model of physical therapy management for people with idiopathic PD based on contemporary knowledge of the pathogenesis of movement disorders in basal ganglia disease as well as a review of the evidence for physical therapy interventions. The model advocates a task-specific approach to training, with emphasis on treating people with PD-related movement disorders such as hypokinesia and postural instability within the context of functional tasks of everyday living such as walking, turning over in bed, and manipulating objects. The effects of medication, cognitive impairment, the environment, and coexisting medical conditions are also taken into consideration. An argument is put forward that clinicians need to identify core elements of physical therapy training that apply to all people with PD as well as elements specific to the needs of each individual. A case history is used to illustrate how physical therapy treatment is regularly reviewed and adjusted according to the changing constellation of movement disorders that present as the disease progresses." }, { "pmid": "8800948", "title": "Stride length regulation in Parkinson's disease. Normalization strategies and underlying mechanisms.", "abstract": "Results of our previous studies have shown that the slow, shuffling gait of Parkinson's disease patients is due to an inability to generate appropriate stride length and that cadence control is intact and is used as a compensatory mechanism. The reason for the reduced stride length is unclear, although deficient internal cue production or inadequate contribution to cortical motor set by the basal ganglia are two possible explanations. In this study we have examined the latter possibility by comparing the long-lasting effects of visual cues in improving stride length with that of attentional strategies. Computerized stride analysis was used to measure the spatial (distance) and temporal (timing) parameters of the walking pattern in a total of 54 subjects in three separate studies. In each study Parkinson's disease subjects were trained for 20 min by repeated 10 m walks set at control stride length (determined from control subjects matched for age, sex and height), using either visual floor markers or a mental picture of the appropriate stride size. Following training, the gait patterns were monitored (i) every 15 min for 2 h; (ii) whilst interspersing secondary tasks of increasing levels of complexity; (iii) covertly, when subjects were unaware that measurement was taking place. The results demonstrated that training with both visual cues and attentional strategies could maintain normal gait for the maximum recording time of 2 h. Secondary tasks reduced stride length towards baseline values as did covert monitoring. 
The findings confirm that the ability to generate a normal stepping pattern is not lost in Parkinson's disease and that gait hypokinesia reflects a difficulty in activating the motor control system. Normal stride length can be elicited in Parkinson's disease using attentional strategies and visual cues. Both strategies appear to share the same mechanism of focusing attention on the stride length. The effect of attention appears to require constant vigilance to prevent reverting to more automatic control mechanisms." }, { "pmid": "742658", "title": "Walking patterns of men with parkinsonism.", "abstract": "Interrupted-light photography was used to record the simultaneous displacement patterns of multiple body segments of 44 patients with parkinsonism during free-speed and fast walking to quantitatively characterize their gait peculiarities. The patients were categorized into three disability groups according to their independence in activities of daily living. Their measurements of walking performance were compared to those of normal men. The gait components of the patients, which related systematically to the degree of disability, were: step lengths, vertical excursions of the head, extension of the hip and knee of the backward-directed limb at the onset of contralateral weight bearing, toe-floor distance at the onset of weight bearing, and rotation of the thorax." }, { "pmid": "2202385", "title": "Determinants of gait in the elderly parkinsonian on maintenance levodopa/carbidopa therapy.", "abstract": "1. We have used gait analysis to investigate the efficacy of maintenance therapy with a levodopa/carbidopa combination in patients with idiopathic Parkinsonism, who do not have overt fluctuations in control in relation to administration of medication. 2. Fourteen patients (aged 64 to 88 years) receiving maintenance therapy with levodopa and carbidopa (Sinemet Plus) entered a placebo-controlled, randomised cross-over study of the effect of omission of a morning dose of active treatment on distance/time parameters of gait. Measurements made 2, 4 and 6 h after the morning treatment were standardised by taking the pre-treatment measurement on that day as baseline. 3. The mean increase in stride length (7%) and decrease in double support time (20%) on active treatment were small but statistically significant (P less than 0.0001, in each case), there being no significant placebo effect on either gait parameter (P = 0.69 and 0.08 respectively). Neither active nor placebo treatments had any significant (P greater than 0.45 in each case) effect on the lying, standing or postural fall in mean arterial pressure, measurements being made in the same temporal relation to the treatments as was gait. 4. In a generalised linear model, after allowing for the effect (P less than 0.0001) of intrinsic variability in pre-treatment speed as well as for structure of the study, nature of treatment had an effect on stride length over the whole walk, significant at P = 0.002. 5. Pre-treatment postural fall in mean arterial pressure was nearly as significant (P = 0.003) as the nature of treatment in the context of such a model: the greater the fall, the greater the increment in stride length seen following active or placebo treatment. This was probably explained by an acquired tolerance to the fall as the day progressed. 6. 
The major determinant (P less than 0.0001) of the change in double support time over the whole walk, after allowing for the structure of the study, appeared to be the post treatment mean arterial standing blood pressure. The lower the pressure, the shorter the double support time, and hence, the greater the tendency to a hurried gait. 7. Nature of treatment, when added into the models described in summary points 5 and 6, had no significant effect (P greater than 0.25, in each case).(ABSTRACT TRUNCATED AT 400 WORDS)" }, { "pmid": "2230833", "title": "Quantitative analysis of gait in Parkinson patients: increased variability of stride length.", "abstract": "Analysis of the spatio-temporal and kinematic parameters of locomotion recorded in 21 parkinsonian patients compared to 58 normal elderly subjects showed significant differences in all the recorded parameters. However the relationship between these parameters was preserved, as was the basic locomotor pattern. The variability of stride length, more marked in parkinsonian patients, increased as a function of the clinical stages of Hoehn and Yahr. This index could be useful in assessing the course of the disease in patients." }, { "pmid": "9613733", "title": "Gait variability and basal ganglia disorders: stride-to-stride variations of gait cycle timing in Parkinson's disease and Huntington's disease.", "abstract": "The basal ganglia are thought to play an important role in regulating motor programs involved in gait and in the fluidity and sequencing of movement. We postulated that the ability to maintain a steady gait, with low stride-to-stride variability of gait cycle timing and its subphases, would be diminished with both Parkinson's disease (PD) and Huntington's disease (HD). To test this hypothesis, we obtained quantitative measures of stride-to-stride variability of gait cycle timing in subjects with PD (n = 15), HD (n = 20), and disease-free controls (n = 16). All measures of gait variability were significantly increased in PD and HD. In subjects with PD and HD, gait variability measures were two and three times that observed in control subjects, respectively. The degree of gait variability correlated with disease severity. In contrast, gait speed was significantly lower in PD, but not in HD, and average gait cycle duration and the time spent in many subphases of the gait cycle were similar in control subjects, HD subjects, and PD subjects. These findings are consistent with a differential control of gait variability, speed, and average gait cycle timing that may have implications for understanding the role of the basal ganglia in locomotor control and for quantitatively assessing gait in clinical settings." }, { "pmid": "1865232", "title": "Dopa-sensitive and dopa-resistant gait parameters in Parkinson's disease.", "abstract": "Quantitative analysis of gait was performed in 20 parkinsonians before and 1 h after the acute administration of L-Dopa in order to discriminate between the Dopa-sensitive and the Dopa-resistant kinematic gait parameters. The stride length and the kinematic parameters (swing velocity, peak velocity) related to the energy were Dopa-sensitive. The improvement of the bent forward posture by L-Dopa may explain the stride length increase. Temporal parameters (stride and swing duration, stride duration variability), related to rhythm, were Dopa-resistant. Experimental data argue for the importance of force control in maintaining the posture. 
The stride length variability, possibly related to the variability of force production shown to exist in parkinsonians was not significantly improved by L-Dopa. In Parkinson's disease different hypotheses might explain the inexorable aggravation of gait disorders along the course of the disease: (1) an advancing disorder of coordination between postural control and locomotion, (2) if some gait parameters like stride length and kinematic parameters are Dopa-sensitive, the others are Dopa-resistant and thus may involve other mechanisms than dopamine deficiency." }, { "pmid": "10895994", "title": "Treadmill training with body weight support: its effect on Parkinson's disease.", "abstract": "OBJECTIVE\nTo test whether body weight-supported treadmill training (BWSTT) is effective in improving functional outcome of patients with Parkinson's disease.\n\n\nDESIGN\nProspective crossover trial. Patients were randomized to receive either a 4-week program of BWSTT with up to 20% of their body weight supported followed by 4 weeks of conventional physical therapy (PT), or the same treatments in the opposite order. Medications for parkinsonism were not modified throughout the study.\n\n\nSETTING\nInpatient rehabilitation unit for neurologic diseases.\n\n\nSUBJECTS\nTen patients (5 men, 5 women) with Hoehn and Yahr stage 2.5 or 3 parkinsonism; mean age 67.6 years, mean duration of Parkinson's disease 4.2 years.\n\n\nMAIN OUTCOME MEASURES\nThe Unified Parkinson's Disease Rating Scale (UPDRS), ambulation endurance and speed (sec/10 m), and number of steps for 10-meter walk.\n\n\nRESULTS\nThe mean total UPDRS before/after BWSTT was 31.6/25.6, and before/after PT was 29.1/28.0. Analysis of covariance for improvement of UPDRS demonstrated a significant effect of type of therapy (F(1, 16) = 42.779, p < .0001) but not order of therapy (F(1, 16) = 0.157, p = .697 1). Patients also had significantly greater improvement with BWSTT than with PT in ambulation speed (BWSTT, before/after = 10.0/8.3; PT, 9.5/8.9), and number of steps (BWSTT, 22.3/19.6; PT, 21.5/20.8).\n\n\nCONCLUSIONS\nIn persons with Parkinson's disease, treadmill training with body weight support produces greater improvement in activities of daily living, motor performance, and ambulation than does physical therapy." }, { "pmid": "12370870", "title": "Long-term effect of body weight-supported treadmill training in Parkinson's disease: a randomized controlled trial.", "abstract": "OBJECTIVE\nTo investigate whether body weight-supported treadmill training (BWSTT) is of long-term benefit for patients with Parkinson's disease (PD).\n\n\nDESIGN\nRandomized controlled trial.\n\n\nSETTING\nInpatient rehabilitation unit for neurologic diseases in Japan.\n\n\nPARTICIPANTS\nTwenty-four patients (Hoehn and Yahr stages 2.5 or 3) who were not demented (Mini-Mental State Examination score, >27).\n\n\nINTERVENTIONS\nPatients were randomized to receive either a 45-minute session of BWSTT (up to 20% of body weight supported) or conventional physical therapy (PT) for 3 days a week for 1 month.\n\n\nMAIN OUTCOME MEASURES\nOutcome measures were evaluated at baseline and at 1, 2, 3, and 6 months. Measures included the Unified Parkinson's Disease Rating Scale (UPDRS), ambulation speed (s/10 m), and number of steps taken for a 10-m walk as a parameter for stride length.\n\n\nRESULTS\nFour patients needed modification of medications in the follow-up period. Twenty patients (BWSTT, n=11; PT, n=9) without modified medications were analyzed for functional outcome. 
Age, duration of PD, gender, and doses of medications were comparable. There was no difference in the baseline UPDRS (BWSTT=33.3; PT=32.6), speed (BWSTT=10.8; PT=11.5), and steps (BWSTT=23.4; PT=22.8). The BWSTT group had significantly greater improvement than the PT group (Mann-Whitney U test, Bonferroni adjustment for multiple comparison) in ambulation speed at 1 month (BWSTT=8.5; PT=10.8; P<.005); and in the number of steps at 1 (BWSTT=20.0; PT=22.7; P<.005), 2 (BWSTT=19.5; PT=22.4; P<.005), 3 (BWSTT=20.1; PT=23.1; P<.005), and 4 months (BWSTT=21.0; PT=23.0; P=.006).\n\n\nCONCLUSIONS\nBWSTT has a lasting effect specifically on short-step gait in PD." }, { "pmid": "28815562", "title": "Treadmill training and body weight support for walking after stroke.", "abstract": "BACKGROUND\nTreadmill training, with or without body weight support using a harness, is used in rehabilitation and might help to improve walking after stroke. This is an update of the Cochrane review first published in 2003 and updated in 2005 and 2014.\n\n\nOBJECTIVES\nTo determine if treadmill training and body weight support, individually or in combination, improve walking ability, quality of life, activities of daily living, dependency or death, and institutionalisation or death, compared with other physiotherapy gait-training interventions after stroke. The secondary objective was to determine the safety and acceptability of this method of gait training.\n\n\nSEARCH METHODS\nWe searched the Cochrane Stroke Group Trials Register (last searched 14 February 2017), the Cochrane Central Register of Controlled Trials (CENTRAL) and the Database of Reviews of Effects (DARE) (the Cochrane Library 2017, Issue 2), MEDLINE (1966 to 14 February 2017), Embase (1980 to 14 February 2017), CINAHL (1982 to 14 February 2017), AMED (1985 to 14 February 2017) and SPORTDiscus (1949 to 14 February 2017). We also handsearched relevant conference proceedings and ongoing trials and research registers, screened reference lists, and contacted trialists to identify further trials.\n\n\nSELECTION CRITERIA\nRandomised or quasi-randomised controlled and cross-over trials of treadmill training and body weight support, individually or in combination, for the treatment of walking after stroke.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently selected trials, extracted data, and assessed risk of bias and methodological quality. The primary outcomes investigated were walking speed, endurance, and dependency.\n\n\nMAIN RESULTS\nWe included 56 trials with 3105 participants in this updated review. The average age of the participants was 60 years, and the studies were carried out in both inpatient and outpatient settings. All participants had at least some walking difficulties and many could not walk without assistance. Overall, the use of treadmill training did not increase the chances of walking independently compared with other physiotherapy interventions (risk difference (RD) -0.00, 95% confidence interval (CI) -0.02 to 0.02; 18 trials, 1210 participants; P = 0.94; I² = 0%; low-quality evidence). Overall, the use of treadmill training in walking rehabilitation for people after stroke increased the walking velocity and walking endurance significantly. 
The pooled mean difference (MD) (random-effects model) for walking velocity was 0.06 m/s (95% CI 0.03 to 0.09; 47 trials, 2323 participants; P < 0.0001; I² = 44%; moderate-quality evidence) and the pooled MD for walking endurance was 14.19 metres (95% CI 2.92 to 25.46; 28 trials, 1680 participants; P = 0.01; I² = 27%; moderate-quality evidence). Overall, the use of treadmill training with body weight support in walking rehabilitation for people after stroke did not increase the walking velocity and walking endurance at the end of scheduled follow-up. The pooled MD (random-effects model) for walking velocity was 0.03 m/s (95% CI -0.05 to 0.10; 12 trials, 954 participants; P = 0.50; I² = 55%; low-quality evidence) and the pooled MD for walking endurance was 21.64 metres (95% CI -4.70 to 47.98; 10 trials, 882 participants; P = 0.11; I² = 47%; low-quality evidence). In 38 studies with a total of 1571 participants who were independent in walking at study onset, the use of treadmill training increased the walking velocity significantly. The pooled MD (random-effects model) for walking velocity was 0.08 m/s (95% CI 0.05 to 0.12; P < 0.00001; I2 = 49%). There were insufficient data to comment on any effects on quality of life or activities of daily living. Adverse events and dropouts did not occur more frequently in people receiving treadmill training and these were not judged to be clinically serious events.\n\n\nAUTHORS' CONCLUSIONS\nOverall, people after stroke who receive treadmill training, with or without body weight support, are not more likely to improve their ability to walk independently compared with people after stroke not receiving treadmill training, but walking speed and walking endurance may improve slightly in the short term. Specifically, people with stroke who are able to walk (but not people who are dependent in walking at start of treatment) appear to benefit most from this type of intervention with regard to walking speed and walking endurance. This review did not find, however, that improvements in walking speed and endurance may have persisting beneficial effects. Further research should specifically investigate the effects of different frequencies, durations, or intensities (in terms of speed increments and inclination) of treadmill training, as well as the use of handrails, in ambulatory participants, but not in dependent walkers." }, { "pmid": "23850614", "title": "Effects of locomotor training after incomplete spinal cord injury: a systematic review.", "abstract": "OBJECTIVE\nTo provide an overview of, and evaluate the current evidence on, locomotor training approaches for gait rehabilitation in individuals with incomplete spinal cord injury to identify the most effective therapies.\n\n\nDATA SOURCES\nThe following electronic databases were searched systematically from first date of publication until May 2013: Allied and Complementary Medicine Database, Cumulative Index to Nursing and Allied Health Literature, Cochrane Database of Systematic Reviews, MEDLINE, Physiotherapy Evidence Database, and PubMed. References of relevant clinical trials and systematic reviews were also hand searched.\n\n\nSTUDY SELECTION\nOnly randomized controlled trials evaluating locomotor therapies after incomplete spinal cord injury in an adult population were included. 
Full-text versions of all relevant articles were selected and evaluated by both authors.\n\n\nDATA EXTRACTION\nEligible studies were identified, and methodologic quality was assessed with the Physiotherapy Evidence Database scale. Articles scoring <4 points on the scale were excluded. Sample population, interventions, outcome measures, and findings were evaluated with regard to walking capacity, velocity, duration, and quality of gait.\n\n\nDATA SYNTHESIS\nData were analyzed by systematic comparison of findings. Eight articles were included in this review. Five compared body-weight-supported treadmill training (BWSTT) or robotic-assisted BWSTT with conventional gait training in acute/subacute subjects (≤1y postinjury). The remaining studies each compared 3 or 4 different locomotor interventions in chronic participants (>1y postinjury). Sample sizes were small, and study designs differed considerably impeding comparison. Only minor differences in outcomes measures were found between groups. Gait parameters improved slightly more after BWSTT and robotic gait training for acute participants. For chronic participants, improvements were greater after BWSTT with functional electrical stimulation and overground training with functional electrical stimulation/body-weight support compared with BWSTT with manual assistance, robotic gait training, or conventional physiotherapy.\n\n\nCONCLUSIONS\nEvidence on the effectiveness of locomotor therapy is limited. All approaches show some potential for improvement of ambulatory function without superiority of 1 approach over another. More research on this topic is required." }, { "pmid": "2766124", "title": "The effects of body weight support on the locomotor pattern of spastic paretic patients.", "abstract": "The effects of mechanically supporting a percentage of body weight on the gait pattern of spastic paretic subjects during treadmill locomotion was investigated. Electromyographic (EMG), joint angular displacement and temporal distance data were simultaneously recorded while 7 spastic paretic subjects walked at 0% and 40% body weight support (BWS) at their maximal comfortable treadmill speed. Forty percent BWS produced a general decrease in EMG mean burst amplitude for the lower limb muscles investigated with instances of more appropriate EMG timing in relation to the gait cycle. The joint angular displacement data at 40% BWS revealed straighter trunk and knee alignment during the weight bearing phase especially at initial foot-floor contact and midstance. An increase in single limb support time and a decrease in percentage total double support time were evident at 40% BWS. An increase in stride length and maximum comfortable walking speed was also seen with BWS. The use of BWS during treadmill locomotion as a therapeutic approach to retrain gait in neurologically impaired patients is discussed." }, { "pmid": "16403997", "title": "The effects of loading and unloading treadmill walking on balance, gait, fall risk, and daily function in Parkinsonism.", "abstract": "Our study aims were: 1) to determine whether assisted weight bearing or additional weight bearing is more beneficial to the improvement of function and increased stability in gait and dynamic balance in patients with Parkinsonism, compared with matched controls (treadmill alone). Twenty-three men and women participants (M +/- SD = 74.5 +/- 9.7 yrs; Males = 19, Females = 4) with Parkinsonism were in the study. Participants staged at 1-7 (M +/- SD = 3.96 +/- 1.07) using the Hoehn & Yahr scale. 
All participants were tested before, after the intervention (within one week), and four weeks later on: 1) dynamic posturography, 2) Berg Balance scale, 3) United Parkinson's Disease Rating Scale (UPDRS), 4) biomechanical assessment of strength and range of motion, and 5) Gaitrite force sensitive gait mat. Group 1 (treadmill control group), received treadmill training with no loading or unloading. Group 2 (unweighted group), walked on the treadmill assisted by the Biodex Unweighing System at a 25% body weight reduction. Group 3 (weighted group), ambulated wearing a weighted scuba-diving belt, which increased their normal body weight by 5%. All subjects walked on the treadmill for 20 minutes per day for 3 days per week for 6 weeks. Improvements in dynamic posturography, falls during balance testing, Berg Balance, UPDRS (Motor Exam), and gait for all groups lead us to believe that neuromuscular regulation can be facilitated in all Parkinson's individuals no matter what treadmill intervention is employed." }, { "pmid": "16340099", "title": "Gait and step training to reduce falls in Parkinson's disease.", "abstract": "INTRODUCTION\nFrequent falls and risk of injury are evident in individuals with Parkinson's disease (PD) as the disease progresses. There have been no reports of any interventions that reduce the incidence of falls in idiopathic PD.\n\n\nPURPOSE\nAssess the benefit of gait and step perturbation training in individuals with PD.\n\n\nDESIGN\nRandomized, controlled trial.\n\n\nSETTING\nOutpatient research, education and clinical center in a tertiary care Veterans Affairs Medical Center.\n\n\nOUTCOME MEASURES\nGait parameters, 5-step test, report of falls.\n\n\nSUBJECTS\nEighteen men with idiopathic PD in stage 2 or 3 of the Hoehn and Yahr staging.\n\n\nMETHODS\nSubjects were randomly assigned to a trained or control group. They were asked about any falls 2 weeks prior to and after an 8 week period. Gait speed, cadence, and step length were tested on an instrumented walkway. Subjects were timed while stepping onto and back down from an 8.8 cm step for 5 consecutive steps. Gait training consisted of walking on a treadmill at a speed greater than over ground walking speed while walking in 4 directions and while supported in a harness for safety. Step training consisted of suddenly turning the treadmill on and off while the subject stood in the safety harness facing either forwards, backwards, or sideways. Training occurred 1 hour per day, three times per week for 8 weeks. A two-factor (time and group) analysis of variance with repeated measures was used to compare the groups.\n\n\nRESULTS\nSubstantial reduction occurred in falls in the trained group, but not in the control group. Gait speed increased in the trained group from 1.28+/-0.33 meters/sec to 1.45+/-0.37 meters/sec, but not in the control group (from 1.26 to 1.27 m/s). The cadence increased for both groups: from 112.8 to 120.3 steps/min for the trained group and 117.7 to 124.3 steps/min for the control group. Stride lengths increased for the trained group, but not the control group. The 5-step test speed increased in the trained group from 0.40+/-0.08 steps/sec to 0.51+/-0.12 steps/sec, and in the control group (0.36+/-0.11 steps/sec to 0.42+/-0.11 steps/sec).\n\n\nCONCLUSION\nGait and step perturbation training resulted in a reduction in falls and improvements in gait and dynamic balance. This is a promising approach to reduce falls for patients with PD." 
}, { "pmid": "24021298", "title": "Effect of partial weight-supported treadmill gait training on balance in patients with Parkinson disease.", "abstract": "OBJECTIVE\nTo investigate the role of conventional gait training and partial weight-supported treadmill gait training (PWSTT) in improving the balance of patients with Parkinson disease (PD).\n\n\nDESIGN\nProspective randomized controlled design.\n\n\nSETTING\nNational-level university tertiary hospital for mental health and neurosciences.\n\n\nPATIENTS\nSixty patients with PD fulfilling the United Kingdom Brain Bank PD diagnostic criteria were recruited from the neurology outpatient department and movement disorder clinic.\n\n\nMETHODOLOGY\nThe patients were randomly assigned into 3 equal groups: (1) a control group that only received a stable dosage of dopaminomimetic drugs; (2) a conventional gait training (CGT) group that received a stable dosage of dopaminomimetic drugs and conventional gait training; and (3) a PWSTT group that received a stable dosage of dopaminomimetic drugs and PWSTT with unloading of 20% of body weight. The sessions for the CGT and PWSTT groups were provided for 30 minutes per day, 4 days per week, for 4 weeks (16 sessions).\n\n\nOUTCOME MEASURES\nThe Unified Parkinson Disease Rating Scale (UPDRS) motor score, dynamic posturography, Berg Balance Scale, and Tinetti performance-oriented mobility assessment (POMA) were used as main outcome measures.\n\n\nRESULTS\nA significant interaction effect was observed in the UPDRS motor score, mediolateral index, Berg Balance Scale, limits of stability (LOS) total score, POMA gait score, and balance score. Post-hoc analysis showed that in comparison with the control group, the PWSTT group had a significantly better UPDRS motor score, balance indices, LOS in 8 directions, POMA gait, and balance score. The CGT group had a significantly better POMA gait score compared with control subjects. Compared with the CGT group, the PWSTT group had a significantly better UPDRS motor score, mediolateral index, POMA gait score, and LOS total score.\n\n\nCONCLUSION\nPWSTT may be a better interventional choice than CGT for gait and balance rehabilitation in patients with PD." }, { "pmid": "20946640", "title": "Reduction of freezing of gait in Parkinson's disease by repetitive robot-assisted treadmill training: a pilot study.", "abstract": "BACKGROUND\nParkinson's disease is a chronic, neurodegenerative disease characterized by gait abnormalities. Freezing of gait (FOG), an episodic inability to generate effective stepping, is reported as one of the most disabling and distressing parkinsonian symptoms. While there are no specific therapies to treat FOG, some external physical cues may alleviate these types of motor disruptions. The purpose of this study was to examine the potential effect of continuous physical cueing using robot-assisted sensorimotor gait training on reducing FOG episodes and improving gait.\n\n\nMETHODS\nFour individuals with Parkinson's disease and FOG symptoms received ten 30-minute sessions of robot-assisted gait training (Lokomat) to facilitate repetitive, rhythmic, and alternating bilateral lower extremity movements. Outcomes included the FOG-Questionnaire, a clinician-rated video FOG score, spatiotemporal measures of gait, and the Parkinson's Disease Questionnaire-39 quality of life measure.\n\n\nRESULTS\nAll participants showed a reduction in FOG both by self-report and clinician-rated scoring upon completion of training. 
Improvements were also observed in gait velocity, stride length, rhythmicity, and coordination.\n\n\nCONCLUSIONS\nThis pilot study suggests that robot-assisted gait training may be a feasible and effective method of reducing FOG and improving gait. Videotaped scoring of FOG has the potential advantage of providing additional data to complement FOG self-report." }, { "pmid": "22258155", "title": "Robot-assisted gait training in patients with Parkinson disease: a randomized controlled trial.", "abstract": "BACKGROUND\n. Gait impairment is a common cause of disability in Parkinson disease (PD). Electromechanical devices to assist stepping have been suggested as a potential intervention.\n\n\nOBJECTIVE\n. To evaluate whether a rehabilitation program of robot-assisted gait training (RAGT) is more effective than conventional physiotherapy to improve walking.\n\n\nMETHODS\n. A total of 41 patients with PD were randomly assigned to 45-minute treatment sessions (12 in all), 3 days a week, for 4 consecutive weeks of either robotic stepper training (RST; n = 21) using the Gait Trainer or physiotherapy (PT; n = 20) with active joint mobilization and a modest amount of conventional gait training. Participants were evaluated before, immediately after, and 1 month after treatment. Primary outcomes were 10-m walking speed and distance walked in 6 minutes.\n\n\nRESULTS\n. Baseline measures revealed no statistical differences between groups, but the PT group walked 0.12 m/s slower; 5 patients withdrew. A statistically significant improvement was found in favor of the RST group (walking speed 1.22 ± 0.19 m/s [P = .035]; distance 366.06 ± 78.54 m [P < .001]) compared with the PT group (0.98 ± 0.32 m/s; 280.11 ± 106.61 m). The RAGT mean speed increased by 0.13 m/s, which is probably not clinically important. Improvements were maintained 1 month later.\n\n\nCONCLUSIONS\n. RAGT may improve aspects of walking ability in patients with PD. Future trials should compare robotic assistive training with treadmill or equal amounts of overground walking practice." }, { "pmid": "23706025", "title": "Robot-assisted walking training for individuals with Parkinson's disease: a pilot randomized controlled trial.", "abstract": "BACKGROUND\nOver the last years, the introduction of robotic technologies into Parkinson's disease rehabilitation settings has progressed from concept to reality. However, the benefit of robotic training remains elusive. This pilot randomized controlled observer trial is aimed at investigating the feasibility, the effectiveness and the efficacy of new end-effector robot training in people with mild Parkinson's disease.\n\n\nMETHODS\nDesign. Pilot randomized controlled trial.\n\n\nRESULTS\nRobot training was feasible, acceptable, safe, and the participants completed 100% of the prescribed training sessions. A statistically significant improvement in gait index was found in favour of the EG (T0 versus T1). In particular, the statistical analysis of primary outcome (gait speed) using the Friedman test showed statistically significant improvements for the EG (p = 0,0195). The statistical analysis performed by Friedman test of Step length left (p = 0,0195) and right (p = 0,0195) and Stride length left (p = 0,0078) and right (p = 0,0195) showed a significant statistical gain. No statistically significant improvements on the CG were found.\n\n\nCONCLUSIONS\nRobot training is a feasible and safe form of rehabilitative exercise for cognitively intact people with mild PD. 
This original approach can contribute to increase a short time lower limb motor recovery in idiopathic PD patients. The focus on the gait recovery is a further characteristic that makes this research relevant to clinical practice. On the whole, the simplicity of treatment, the lack of side effects, and the positive results from patients support the recommendation to extend the use of this treatment. Further investigation regarding the long-time effectiveness of robot training is warranted.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT01668407." }, { "pmid": "27678210", "title": "Robot-assisted gait training versus treadmill training in patients with Parkinson's disease: a kinematic evaluation with gait profile score.", "abstract": "The purpose of this study was to quantitatively compare the effects, on walking performance, of end-effector robotic rehabilitation locomotor training versus intensive training with a treadmill in Parkinson's disease (PD). Fifty patients with PD were randomly divided into two groups: 25 were assigned to the robot-assisted therapy group (RG) and 25 to the intensive treadmill therapy group (IG). They were evaluated with clinical examination and 3D quantitative gait analysis [gait profile score (GPS) and its constituent gait variable scores (GVSs) were calculated from gait analysis data] at the beginning (T0) and at the end (T1) of the treatment. In the RG no differences were found in the GPS, but there were significant improvements in some GVSs (Pelvic Obl and Hip Ab-Add). The IG showed no statistically significant changes in either GPS or GVSs. The end-effector robotic rehabilitation locomotor training improved gait kinematics and seems to be effective for rehabilitation in patients with mild PD." }, { "pmid": "26008873", "title": "Partial Body Weight-Supported Treadmill Training in Patients With Parkinson Disease: Impact on Gait and Clinical Manifestation.", "abstract": "OBJECTIVE\nTo evaluate the effect of conventional gait training (CGT) and partial weight-supported treadmill training (PWSTT) on gait and clinical manifestation.\n\n\nDESIGN\nProspective experimental research design.\n\n\nSETTING\nHospital.\n\n\nPARTICIPANTS\nPatients with idiopathic Parkinson disease (PD) (N=60; mean age, 58.15±8.7y) on stable dosage of dopaminomimetic drugs were randomly assigned into the 3 following groups (20 patients in each group): (1) nonexercising PD group, (2) CGT group, and (3) PWSTT group.\n\n\nINTERVENTIONS\nThe interventions included in the study were CGT and PWSTT. The sessions of the CGT and PWSTT groups were given in patient's self-reported best on status after regular medications. The interventions were given for 30min/d, 4d/wk, for 4 weeks (16 sessions).\n\n\nMAIN OUTCOME MEASURES\nClinical severity was measured by the Unified Parkinson Disease Rating Scale (UPDRS) and its subscores. Gait was measured by 2 minutes of treadmill walking and the 10-m walk test. Outcome measures were evaluated in their best on status at baseline and after the second and fourth weeks.\n\n\nRESULTS\nFour weeks of CGT and PWSTT gait training showed significant improvements of UPDRS scores, its subscores, and gait performance measures. Moreover, the effects of PWSTT were significantly better than CGT on most measures.\n\n\nCONCLUSIONS\nPWSTT is a promising intervention tool to improve the clinical and gait outcome measures in patients with PD." 
}, { "pmid": "28222548", "title": "Does positive pressure body weight-support alter spatiotemporal gait parameters in healthy and parkinsonian individuals?", "abstract": "BACKGROUND\nEvidence suggests treadmill training (TT) and body weight-supported treadmill training (BWSTT) are effective strategies to improve gait in Parkinson's disease (PD) patients. However, few researchers have investigated the spatiotemporal parameters during TT or BWSTT.\n\n\nOBJECTIVE\nThe goal of this study is to determine gait adaptations in PD and healthy subjects during positive pressure BWSTT and post-intervention overground walking.\n\n\nMETHODS\nTen PD and ten healthy individuals participated in this study. Baseline spatiotemporal parameters were assessed using a six meter instrumented mat. A 10-min progressive BWSTT trial from 10% to 40% body weight support (BWS) was then completed. Video capture and analysis of 10-min BWSTT trials were performed to determine spatiotemporal gait parameters. Three (5-min, 10-min, and 15-min) post-intervention overground assessments were obtained.\n\n\nRESULTS\nDuring positive pressure BWSTT there was a significant effect of BW support on step length(SL) increase (p < 0.01) and cadence decrease (p < 0.001) in the healthy group but not in the PD group (p = 0.45 SL, p = 0.21 cadence). In post-intervention assessments there was a significant effect of time on velocity (p < 0.002 non-PD, p < 0.001 PD) and cadence (p < 0.05 non-PD, p < 0.01 PD) in both groups.\n\n\nCONCLUSIONS\nThere appears to be a generalized effect of TT on overground gait mechanics after a single session of positive pressure BWSTT regardless of PD impairment." }, { "pmid": "18534554", "title": "The effect of exercise training in improving motor performance and corticomotor excitability in people with early Parkinson's disease.", "abstract": "OBJECTIVES\nTo obtain preliminary data on the effects of high-intensity exercise on functional performance in people with Parkinson's disease (PD) relative to exercise at low and no intensity and to determine whether improved performance is accompanied by alterations in corticomotor excitability as measured through transcranial magnetic stimulation (TMS).\n\n\nDESIGN\nCohort (prospective), randomized controlled trial.\n\n\nSETTING\nUniversity-based clinical and research facilities.\n\n\nPARTICIPANTS\nThirty people with PD, within 3 years of diagnosis with Hoehn and Yahr stage 1 or 2.\n\n\nINTERVENTIONS\nSubjects were randomized to high-intensity exercise using body weight-supported treadmill training, low-intensity exercise, or a zero-intensity education group. Subjects in the 2 exercise groups completed 24 exercise sessions over 8 weeks. Subjects in the zero-intensity group completed 6 education classes over 8 weeks.\n\n\nMAIN OUTCOME MEASURES\nUnified Parkinson's Disease Rating Scales (UPDRS), biomechanic analysis of self-selected and fast walking and sit-to-stand tasks; corticomotor excitability was assessed with cortical silent period (CSP) durations in response to single-pulse TMS.\n\n\nRESULTS\nA small improvement in total and motor UPDRS was observed in all groups. High-intensity group subjects showed postexercise increases in gait speed, step and stride length, and hip and ankle joint excursion during self-selected and fast gait and improved weight distribution during sit-to-stand tasks. Improvements in gait and sit-to-stand measures were not consistently observed in low- and zero-intensity groups. 
The high-intensity group showed lengthening in CSP.\n\n\nCONCLUSIONS\nThe findings suggest the dose-dependent benefits of exercise and that high-intensity exercise can normalize corticomotor excitability in early PD." }, { "pmid": "23187043", "title": "Improved clinical status, quality of life, and walking capacity in Parkinson's disease after body weight-supported high-intensity locomotor training.", "abstract": "OBJECTIVE\nTo evaluate the effect of body weight-supported progressive high-intensity locomotor training in Parkinson's disease (PD) on (1) clinical status; (2) quality of life; and (3) gait capacity.\n\n\nDESIGN\nOpen-label, fixed sequence crossover study.\n\n\nSETTING\nUniversity motor control laboratory.\n\n\nPARTICIPANTS\nPatients (N=13) with idiopathic PD (Hoehn and Yahr stage 2 or 3) and stable medication use.\n\n\nINTERVENTIONS\nPatients completed an 8-week (3 × 1h/wk) training program on a lower-body positive-pressure treadmill. Body weight support was used to facilitate increased intensity and motor challenges during treadmill training. The training program contained combinations of (1) running and walking intervals, (2) the use of sudden changes (eg, in body weight support and speed), (3) different types of locomotion (eg, chassé, skipping, and jumps), and (4) sprints at 50 percent body weight.\n\n\nMAIN OUTCOME MEASURES\nThe Movement Disorders Society-Unified Parkinson's Disease Rating Scale (MDS-UPDRS), Parkinson's Disease Questionnaire-39 items (PDQ-39), and the six-minute walk test were conducted 8 weeks before and pre- and posttraining.\n\n\nRESULTS\nAt the end of training, statistically significant improvements were found in all outcome measures compared with the control period. Total MDS-UPDRS score changed from (mean ± 1SD) 58±18 to 47±18, MDS-UPDRS motor part score changed from 35±10 to 29±12, PDQ-39 summary index score changed from 22±13 to 13±12, and the six-minute walking distance changed from 576±93 to 637±90m.\n\n\nCONCLUSIONS\nBody weight-supported progressive high-intensity locomotor training is feasible and well tolerated by patients with PD. The training improved clinical status, quality of life, and gait capacity significantly." }, { "pmid": "22275661", "title": "Development of a VR-based treadmill control interface for gait assessment of patients with Parkinson's disease.", "abstract": "Freezing of gait (FOG) is a commonly observed phenomenon in Parkinson's disease, but its causes and mechanisms are not fully understood. This paper presents the development of a virtual reality (VR)-based body-weight supported treadmill interface (BWSTI) designed and applied to investigate FOG. The BWSTI provides a safe and controlled walking platform which allows investigators to assess gait impairments under various conditions that simulate real life. In order to be able to evoke FOG, our BWSTI employed a novel speed adaptation controller, which allows patients to drive the treadmill speed. Our interface responsively follows the subject's intention of changing walking speed by the combined use of feedback and feedforward controllers. To provide realistic visual stimuli, a three dimensional VR system is interfaced with the speed adaptation controller and synchronously displays realistic visual cues. The VR-based BWSTI was tested with three patients with PD who are known to have FOG. Visual stimuli that might cause FOG were shown to them while the speed adaptation controller adjusted treadmill speed to follow the subjects' intention. 
Two of the three subjects showed FOG during the treadmill walking." }, { "pmid": "22019972", "title": "Dynamic visual cueing in combination with treadmill training for gait rehabilitation in Parkinson disease.", "abstract": "Various cueing techniques as well as treadmill training have been shown to be effective in the gait rehabilitation of patients with Parkinson disease. We present a novel setup combining both dynamic visual cueing and body weight-supported treadmill training. A nonambulatory patient with Parkinson disease received six training sessions. Continuous improvement of gait parameters was observed throughout the course of training. When comparing cued and noncued conditions in individual training sessions, it was found that step length was larger and that gait symmetry was enhanced in the cued condition. At the end of the training period, the patient was capable of walking short distances with a walking frame. In conclusion, dynamic visual cueing in combination with body weight-supported treadmill training seems to be a promising treatment strategy for patients with Parkinson disease, even in the case of severe impairment." }, { "pmid": "20666620", "title": "Effect of robotic locomotor training in an individual with Parkinson's disease: a case report.", "abstract": "PURPOSE\nThe purpose was to test the effect of robot-assisted gait therapy with the Lokomat system in one representative individual with Parkinson's disease (PD).\n\n\nMETHODS\nThe patient was a 67-year-old female with more than an 8-year history of PD. The manifestations of the disease included depressive mood with lack of motivation, moderate bradykinesia, rigidity and resting tremor, both involving more the right side of the body, slow and shuffling gait with episodes of freezing and risk of falling. The patient underwent six sessions of robot-assisted gait training. The practice included treadmill walking at variable speed for 25-40 min with a partial body weight support and assistance from the Lokomat orthosis.\n\n\nRESULTS\nAfter the therapy, the patient increased the gait speed, stride length and foot clearance during over ground walking. She reduced the time required to complete a 180° turn and the latency of gait initiation. Improvements were observed in some items of the Unified Parkinson's Disease Rating Scale including motivation, bradykinesia, rigidity, freezing, leg agility, gait and posture.\n\n\nCONCLUSIONS\nAlthough the results supported the feasibility of using robot-assisted gait therapy in the rehabilitation an individual with PD, further studies are needed to assess a potential advantage of the Lokomat system over conventional locomotor training for this population." }, { "pmid": "22623206", "title": "Robotic gait training is not superior to conventional treadmill training in parkinson disease: a single-blind randomized controlled trial.", "abstract": "BACKGROUND\nThe use of robots for gait training in Parkinson disease (PD) is growing, but no evidence points to an advantage over the standard treadmill.\n\n\nMETHODS\nIn this randomized, single-blind controlled trial, participants aged <75 years with early-stage PD (Hoehn-Yahr <3) were randomly allocated to 2 groups: either 30 minutes of gait training on a treadmill or in the Lokomat for 3 d/wk for 4 weeks. Patients were evaluated by a physical therapist blinded to allocation before and at the end of treatment and then at the 3- and 6-month follow-up. 
The primary outcome measure was the 6-minute walk test.\n\n\nRESULTS\nOf 334 screened patients, the authors randomly allocated 30 to receive gait training with treadmill or the Lokomat. At baseline, the 2 groups did not differ. At the 6-month follow-up, both groups had improved significantly in the primary outcome measure (treadmill: mean = 490.95 m, 95% confidence interval [CI] = 448.56-533.34, P = .0006; Lokomat: 458.6 m, 95% CI = 417.23-499.96, P = .01), but no significant differences were found between the 2 groups (P = .53).\n\n\nDISCUSSION\nRobotic gait training with the Lokomat is not superior to treadmill training in improving gait performance in patients with PD. Both approaches are safe, with results maintained for up to 6 months." }, { "pmid": "23490463", "title": "Robot-assisted gait training versus equal intensity treadmill training in patients with mild to moderate Parkinson's disease: a randomized controlled trial.", "abstract": "BACKGROUND\nThere is a lack of evidence about the most effective strategy for training gait in mild to moderate Parkinson's disease. The aim of this study was to compare the effects of robotic gait training versus equal intensity treadmill training and conventional physiotherapy on walking ability in patients with mild to moderate Parkinson's disease.\n\n\nMETHODS\nSixty patients with mild to moderate Parkinson's disease (Hoehn & Yahr stage 3) were randomly assigned into three groups. All patients received twelve, 45-min treatment sessions, three days a week, for four consecutive weeks. The Robotic Gait Training group (n = 20) underwent robot-assisted gait training. The Treadmill Training group (n = 20) performed equal intensity treadmill training without body-weight support. The Physical Therapy group (n = 20) underwent conventional gait therapy according to the proprioceptive neuromuscular facilitation concept. Patients were evaluated before, after and 3 months post-treatment. Primary outcomes were the following timed tasks: 10-m walking test, 6-min walking test.\n\n\nRESULTS\nNo statistically significant difference was found on the primary outcome measures between the Robotic Gait Training group and the Treadmill Training group at the after treatment evaluation. A statistically significant improvement was found after treatment on the primary outcomes in favor of the Robotic Gait Training group and Treadmill Training group compared to the Physical Therapy group. Findings were confirmed at the 3-month follow-up evaluation.\n\n\nCONCLUSIONS\nOur findings support the hypothesis that robotic gait training is not superior to equal intensity treadmill training for improving walking ability in patients with mild to moderate Parkinson's disease." }, { "pmid": "25175601", "title": "Botulinum toxin type A potentiates the effect of neuromotor rehabilitation of Pisa syndrome in Parkinson disease: a placebo controlled study.", "abstract": "INTRODUCTION\nPisa syndrome (PS) is a tonic lateral flexion of trunk that represents a disabling complication of advanced Parkinson disease (PD). Conventional rehabilitation treatment (CT) ameliorates axial posture and trunk mobility in PD patients, but the improvement tends to wane in 4-6 months. Botulin toxin (BT) may reduce muscle hyperactivity, therefore improving CT effectiveness. 
We evaluated whether the injection of incabotulinum toxin type A (iBTA) into the hyperactive trunk muscles might improve the effectiveness of rehabilitation in a group of PD patients with PS.\n\n\nMETHODS\nTwenty-six PD patients were enrolled in a randomized placebo-controlled trial. Group A was treated with iBTA before undergoing CT (a 4-week intensive programme), while Group B received saline before the 4-week CT treatment. Patients were evaluated at baseline, at the end of the rehabilitative period, 3 and 6 months with kinematic analysis of movement, UPDRS, Functional Independence Measure and Visual Analog Scale for pain.\n\n\nRESULTS\nAt the end of the rehabilitation period, both groups improved significantly in terms of static postural alignment and of range of motion. Group A showed a significantly more marked reduction in pain score as compared with Group B and a more prolonged efficacy on several clinical and kinematic variables.\n\n\nCONCLUSIONS\nOur preliminary data suggest that BT may be considered an important addition to the rehabilitation programme for PD subjects with PS for improving axial posture and trunk mobility, as well as for a better control of pain." }, { "pmid": "29027544", "title": "Long-term effects of exercise and physical therapy in people with Parkinson disease.", "abstract": "Parkinson disease (PD) is a progressive, neurodegenerative movement disorder with symptoms reflecting various impairments and functional limitations, such as postural instability, gait disturbance, immobility and falls. In addition to pharmacological and surgical management of PD, exercise and physical therapy interventions are also being actively researched. This Review provides an overview of the effects of PD on physical activity - including muscle weakness, reduced aerobic capacity, gait impairment, balance disorders and falls. Previously published reviews have discussed only the short-term benefits of exercises and physical therapy for people with PD. However, owing to the progressive nature of PD, the present Review focuses on the long-term effects of such interventions. We also discuss exercise-induced neuroplasticity, present data on the possible risks and adverse effects of exercise training, make recommendations for clinical practice, and describe new treatment approaches. Evidence suggests that a minimum of 4 weeks of gait training or 8 weeks of balance training can have positive effects that persist for 3-12 months after treatment completion. Sustained strength training, aerobic training, tai chi or dance therapy lasting at least 12 weeks can produce long-term beneficial effects. Further studies are needed to verify disease-modifying effects of these interventions." }, { "pmid": "17115387", "title": "Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS): Process, format, and clinimetric testing plan.", "abstract": "This article presents the revision process, major innovations, and clinimetric testing program for the Movement Disorder Society (MDS)-sponsored revision of the Unified Parkinson's Disease Rating Scale (UPDRS), known as the MDS-UPDRS. The UPDRS is the most widely used scale for the clinical study of Parkinson's disease (PD). The MDS previously organized a critique of the UPDRS, which cited many strengths, but recommended revision of the scale to accommodate new advances and to resolve problematic areas. An MDS-UPDRS committee prepared the revision using the recommendations of the published critique of the scale. 
Subcommittees developed new material that was reviewed by the entire committee. A 1-day face-to-face committee meeting was organized to resolve areas of debate and to arrive at a working draft ready for clinimetric testing. The MDS-UPDRS retains the UPDRS structure of four parts with a total summed score, but the parts have been modified to provide a section that integrates nonmotor elements of PD: I, Nonmotor Experiences of Daily Living; II, Motor Experiences of Daily Living; III, Motor Examination; and IV, Motor Complications. All items have five response options with uniform anchors of 0 = normal, 1 = slight, 2 = mild, 3 = moderate, and 4 = severe. Several questions in Part I and all of Part II are written as a patient/caregiver questionnaire, so that the total rater time should remain approximately 30 minutes. Detailed instructions for testing and data acquisition accompany the MDS-UPDRS in order to increase uniform usage. Multiple language editions are planned. A three-part clinimetric program will provide testing of reliability, validity, and responsiveness to interventions. Although the MDS-UPDRS will not be published until it has successfully passed clinimetric testing, explanation of the process, key changes, and clinimetric programs allow clinicians and researchers to understand and participate in the revision process." }, { "pmid": "10460115", "title": "The UK FIM+FAM: development and evaluation. Functional Assessment Measure.", "abstract": "BACKGROUND AND AIMS\nThe aim of this study was to develop and evaluate the UK version of the Functional Assessment Measure (UK FIM+FAM).\n\n\nDESIGN\nBefore and after evaluation of inter-rater reliability.\n\n\nDEVELOPMENT\nTen 'troublesome' items in the original FIM+FAM were identified as being particularly difficult to score reliably. Revised decision trees were developed and tested for these items over a period of two years to produce the UK FIM+FAM.\n\n\nEVALUATION\nA multicentre study was undertaken to test agreement between raters for the UK FIM+FAM, in comparison with the original version, by assessing accuracy of scoring for standard vignettes.\n\n\nMETHODS\nBaseline testing of the original FIM+FAM was undertaken at the start of the project in 1995. Thirty-seven rehabilitation professionals (11 teams) each rated the same three sets of vignettes - first individually and then as part of a multidisciplinary team. Accuracy was assessed in relation to the agreed 'correct' answers, both for individual and for team scores. Following development of the UK version, the same vignettes (with minimal adaptation to place them in context with the revised version) were rated by 28 individuals (nine teams).\n\n\nRESULTS\nTaking all 30 items together, the accuracy for scoring by individuals improved from 74.7% to 77.1% with the UK version, and team scores improved from 83.7% to 86.5%. When the 10 troublesome items were taken together, accuracy of individual raters improved from 69.5% to 74.6% with the UK version (p <0.001), and team scores improved from 78.2% to 84.1% (N/S). For both versions, team ratings were significantly more accurate than individual ratings (p <0.01). Kappa values for team scoring of the troublesome items were all above 0.65 in the UK version.\n\n\nCONCLUSION\nThe UK FIM+FAM compares favourably with the original version for scoring accuracy and ease of use, and is now sufficiently well-developed for wider dissemination." 
}, { "pmid": "18982238", "title": "Treadmill training for the treatment of gait disturbances in people with Parkinson's disease: a mini-review.", "abstract": "This report reviews recent investigations of the effects of treadmill training (TT) on the gait of patients with Parkinson's disease. A literature search identified 14 relevant studies. Three studies reported on the immediate effects of TT; over-ground walking improved (e.g., increased speed and stride length) after one treadmill session. Effects persisted even 15 min later. Eleven longer-term trials demonstrated feasibility, safety and efficacy, reporting positive benefits in gait speed, stride length and other measures such as disease severity (e.g., Unified Parkinson's Disease Rating Scale) and health-related quality of life, even several weeks after cessation of the TT. Long-term carryover effects also raise the possibility that TT may elicit positive neural plastic changes. While encouraging, the work to date is preliminary; none of the identified studies received a quality rating of Gold or level Ia. Additional high quality randomized controlled studies are needed before TT can be recommended with evidence-based support." }, { "pmid": "24659140", "title": "Treadmill gait training improves baroreflex sensitivity in Parkinson's disease.", "abstract": "BACKGROUND\nPartial weight supported treadmill gait training (PWSTT) is widely used in rehabilitation of gait in patient with Parkinson’s Diseases (PD). However, its effect on blood pressure variability (BPV) and baroreflex sensitivity (BRS) in PD has not been studied.\n\n\nAIM\nTo evaluate the effect of conventional and treadmill gait training on BPV components and BRS.\n\n\nMETHODS\nSixty patients with idiopathic PD were randomized into three groups. Twenty patients in control group were on only stable medication, 20 patients in conventional gait training (CGT) group (Stable medication with CGT) and 20 patients in PWSTT group (Stable medication with 20 % PWSTT). The CGT and PWSTT sessions were given for 30 min per day, 4 days per week, for 4 weeks (16 sessions). Groups were evaluated in their best ‘ON’ states. The beat-to-beat finger blood pressure (BP) was recorded for 10 min using a Finometer instrument (Finapres Medical Systems, The Netherlands). BPV and BRS results were derived from artifact-free 5-min segments using Nevrocard software.\n\n\nRESULTS\nBRS showed a significant group with time interaction (F = 6.930; p = 0.003). Post-hoc analysis revealed that PWSTT group showed significant improvement in BRS (p < 0.001) after 4 weeks of training. No significant differences found in BPV parameters; systolic BP, diastolic BP, co-variance of systolic BP and low frequency component of systolic BP.\n\n\nCONCLUSIONS\nFour weeks of PWSTT significantly improves BRS in patients with PD. It can be considered as a non-invasive method of influencing BRS for prevention of orthostatic BP fall in patients with PD." }, { "pmid": "6407776", "title": "Arterial baroreflex sensitivity, plasma catecholamines, and pressor responsiveness in essential hypertension.", "abstract": "Arterial baroreflex sensitivity, plasma norepinephrine (NE) and epinephrine (E), and pressor and depressor responses were assessed in 25 patients with essential hypertension and 29 normotensive control subjects. 
Sensitivity of the cardiac limb of the baroreflex was determined by blood pressure and interbeat interval responses associated with the Valsalva maneuver, externally applied neck suction and pressure, and injection of phenylephrine and nitroglycerin. By all these techniques, patients with essential hypertension had significantly decreased baroreflex sensitivity, even after adjustment for age mismatching between the hypertensive and normotensive groups. Hypertensive patients also had significantly higher mean levels of plasma NE and E in both brachial arterial and antecubital venous blood (246 vs 154 pg/ml arterial NE, 286 vs 184 pg/ml venous NE, 99 vs 55 pg/ml arterial E, and 65 vs 35 pg/ml venous E) and significantly larger pressor responses to injected phenylephrine (30.9 mm Hg/100 micrograms vs 16.7 mm Hg/100 micrograms). When baroreflex-cardiac sensitivity values measured by the various techniques were averaged, there was a significant inverse relationship between sensitivity and venous NE and between sensitivity and pressor responsiveness. The results indicate that decreased baroreflex-cardiac sensitivity, increased sympathetic outflow, and pressor hyperresponsiveness tend to occur together in some patients with essential hypertension. Decreased arterial distensibility and altered central neural integration can account for these findings." }, { "pmid": "11244017", "title": "Determinants of spontaneous baroreflex sensitivity in a healthy working population.", "abstract": "Baroreflex sensitivity (BRS) by the spontaneous sequence technique has been widely used as a cardiac autonomic index for a variety of pathological conditions. However, little information is available on determinants of the variability of spontaneous BRS and on age-related reference values of this measurement in a healthy population. We evaluated BRS as the slope of spontaneous changes in systolic blood pressure (BP) and pulse interval from 10 minutes BP (Finapres) and ECG recordings in 1134 healthy volunteers 18 to 60 years of age. Measurement of BRS could be obtained in 90% of subjects. Those with unmeasurable spontaneous BRS had a slightly lower heart rate but were otherwise not different from the rest of the population. BRS was inversely related to age (lnBRS, 3.24-0.03xage; r(2)=0.23; P:<0.0001) in both genders. In addition, univariate analysis revealed a significant inverse correlation between BRS and heart rate, body mass index, and BP. Sedentary lifestyle and regular alcohol consumption were also associated with lower BRS. However, only age, heart rate, systolic and diastolic BP, body mass index, smoking, and gender were independent predictors of BRS in a multivariate model, accounting for 47% of the variance of BRS. The present study provides reference values for spontaneous BRS in a healthy white population. Only approximately half of the variability of BRS could be explained by anthropometric variables and common risk factors, which suggests that a significant proportion of interindividual differences may reflect genetic heterogeneity." }, { "pmid": "11378250", "title": "Depressed baroreflex sensitivity in patients with Alzheimer's and Parkinson's disease.", "abstract": "Parkinson's disease (PD) and Alzheimer's dementia (AD) are often associated with an autonomic neuropathy. The extent of autonomic involvement, however is poorly defined and unpredictable. 
In order to assess the autonomic cardiovascular regulation baroreflex sensitivity (BRS) was determined non-invasively in 23 patients (age: 65 +/- 9.3 years) with PD and 24 patients with AD (age: 72.3 +/- 7.2 years). The results were compared with those on 22 healthy age- and sex-matched volunteers. Patients with PD and AD exhibited marked abnormalities in cardiovascular autonomic reflex regulation showed by markedly depressed BRS. The possible predictive value of centrally based depression of baroreflex sensitivity necessitates further studies." }, { "pmid": "9010395", "title": "Rhythmic auditory-motor facilitation of gait patterns in patients with Parkinson's disease.", "abstract": "OBJECTIVES\nThe effect of rhythmic auditory stimulation (RAS) on gait velocity, cadence, stride length, and symmetry was studied in 31 patients with idiopathic Parkinson's disease, 21 of them on (ON) and 10 off medication (OFF), and 10 healthy elderly subjects.\n\n\nMETHOD\nPatients walked under four conditions: (1) their own maximal speed without external rhythm; (2) with the RAS beat frequency matching the baseline cadence; (3) with RAS 10% faster than the baseline cadence; (4) without rhythm to check for carry over from RAS. Gait data were recorded via a computerised foot switch system. The RAS was delivered via a 50 ms square wave tone embedded in instrumental music (Renaissance style) in 2/4 metre prerecorded digitally on a sequencer for variable tempo reproduction. Patients on medication were tested in the morning 60-90 minutes after medication. Patients off medication were tested at the same time of day 24 hours after the last dose. Healthy elderly subjects were tested during the same time of day.\n\n\nRESULTS\nFaster RAS produced significant improvement (P < 0.05) in mean gait velocity, cadence, and stride length in all groups. Close synchronisation between rhythm and step frequency in the controls and both Parkinson's disease groups suggest evidence for rhythmic entrainment mechanisms even in the presence of basal ganglia dysfunction.\n\n\nCONCLUSIONS\nThe results are consistent with and extend prior reports of rhythmic auditory facilitation in Parkinson's disease gait when there is mild to moderate impairment, and suggest a technique for gait rehabilitation in Parkinson's disease." }, { "pmid": "9577399", "title": "The reaching movements of patients with Parkinson's disease under self-determined maximal speed and visually cued conditions.", "abstract": "Two-dimensional kinematic analysis was performed of the reaching movements that six subjects with Parkinson's disease and six healthy subjects produced under self-determined maximal speed and visually cued conditions. Subjects were required to reach as fast as possible to grasp a ball (i) that was fixed stationary in the centre of a designated contact zone on an inclined ramp (self-determined maximal speed condition), or (ii) that rolled rapidly from left to right down the incline and into the contact zone (visually cued condition). Parkinson's disease subjects displayed bradykinesia when performing maximal speed reaches to the stationary ball, but not when they reached for the moving ball. In response to the external driving stimulus of the moving ball, Parkinson's disease subjects showed the ability to exceed their self-determined maximal speed of reaching and still maintain a movement accuracy that was comparable to that of healthy subjects. 
Thus, the bradykinesia of Parkinson's disease subjects did not seem to be the result of a basic deficit in their force production capacity or to be a compensatory mechanism for poor movement accuracy. Instead, bradykinesia appeared to result from the inability of Parkinson's disease subjects to maximize their movement speed when required to internally drive their motor output. The occasional failure of Parkinson's disease subjects to successfully grasp the moving ball suggested errors of coincident anticipation and impairments in grasp performance rather than limitations in the speed or accuracy of their reaches. These results are discussed in relation to the notion that the motor circuits of the basal ganglia play an important role in the modulation of internally regulated movements." }, { "pmid": "8800948", "title": "Stride length regulation in Parkinson's disease. Normalization strategies and underlying mechanisms.", "abstract": "Results of our previous studies have shown that the slow, shuffling gait of Parkinson's disease patients is due to an inability to generate appropriate stride length and that cadence control is intact and is used as a compensatory mechanism. The reason for the reduced stride length is unclear, although deficient internal cue production or inadequate contribution to cortical motor set by the basal ganglia are two possible explanations. In this study we have examined the latter possibility by comparing the long-lasting effects of visual cues in improving stride length with that of attentional strategies. Computerized stride analysis was used to measure the spatial (distance) and temporal (timing) parameters of the walking pattern in a total of 54 subjects in three separate studies. In each study Parkinson's disease subjects were trained for 20 min by repeated 10 m walks set at control stride length (determined from control subjects matched for age, sex and height), using either visual floor markers or a mental picture of the appropriate stride size. Following training, the gait patterns were monitored (i) every 15 min for 2 h; (ii) whilst interspersing secondary tasks of increasing levels of complexity; (iii) covertly, when subjects were unaware that measurement was taking place. The results demonstrated that training with both visual cues and attentional strategies could maintain normal gait for the maximum recording time of 2 h. Secondary tasks reduced stride length towards baseline values as did covert monitoring. The findings confirm that the ability to generate a normal stepping pattern is not lost in Parkinson's disease and that gait hypokinesia reflects a difficulty in activating the motor control system. Normal stride length can be elicited in Parkinson's disease using attentional strategies and visual cues. Both strategies appear to share the same mechanism of focusing attention on the stride length. The effect of attention appears to require constant vigilance to prevent reverting to more automatic control mechanisms." }, { "pmid": "9549526", "title": "Training improves the speed of aimed movements in Parkinson's disease.", "abstract": "In this study, the extent to which bradykinesia in patients with idiopathic Parkinson's disease can be influenced by practice and by specific training strategies was investigated. Fifteen patients with Parkinson's disease tested after withdrawal of anti-Parkinson medication, and 15 matched control subjects, practised a ballistic aiming task. Performance was tested before, during and after training and again 1 h later. 
The Parkinson's disease patients and control subjects were randomly assigned to one of two training schedules, practising with or without rhythmic auditory cues. At baseline, the Parkinson's disease patients showed longer movement times, with a marked decrease in maximum acceleration and deceleration in the initial open-loop phase compared with those of the control subjects. With training, they were able to make significant improvement in the speed of aimed movements, particularly in the early movement phase, without any deterioration in accuracy. These effects transferred to an untrained limb and were at least partially maintained after a 1-h delay. While patients remained impaired relative to control subjects at all phases of training and follow-up, the patients' performance at the end of training did not differ significantly from the control subjects' baseline function. Contrary to expectation, rhythmic auditory cues did not enhance improvement in the speed of aimed movements in either patients or control subjects. If anything, less improvement was shown in the cued groups, although there were suggestions that the aiming skill was retained better over the delay period. The results demonstrate preserved abilities to improve speed of single ballistic aiming movements in Parkinson's disease patients and the possibility of reducing bradykinesia by training." }, { "pmid": "10388793", "title": "Mechanisms underlying gait disturbance in Parkinson's disease: a single photon emission computed tomography study.", "abstract": "Single photon emission computed tomography was used to evaluate regional cerebral blood flow changes during gait on a treadmill in 10 patients with Parkinson's disease and 10 age-matched controls. The subjects were injected with [99mTc]hexamethyl-propyleneamine oxime twice: while walking on the treadmill, which moved at a steady speed, and while lying on a bed with their eyes open. On the treadmill, all subjects walked at the same speed with their preferred stride length. The patients showed typical hypokinetic gait with higher cadence and smaller stride length than the controls. In the controls, a gait-induced increase in brain activity was observed in the medial and lateral premotor areas, primary sensorimotor areas, anterior cingulate contex, superior parietal cortex, visual cortex, dorsal brainstem, basal ganglia and cerebellum. The Parkinson's disease patients revealed relative underactivation in the left medial frontal area, right precuneus and left cerebellar hemisphere, whereas they showed relative overactivity in the left temporal cortex, right insula, left cingulate cortex and cerebellar vermis. This is the first experimental study showing that the dorsal brainstem, which corresponds to the brainstem locomotor region in experimental animals, is active during human bipedal gait. The reduced brain activity in the medial frontal motor areas is a basic abnormality in motor performance in Parkinson's disease. The underactivity in the left cerebellar hemisphere, in contrast to the overactivity in the vermis, could be associated with a loss of lateral gravity shift in parkinsonian gait." },
{ "pmid": "15248294", "title": "Exercise-induced behavioral recovery and neuroplasticity in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine-lesioned mouse basal ganglia.", "abstract": "Physical activity has been shown to be neuroprotective in lesions affecting the basal ganglia. Using a treadmill exercise paradigm, we investigated the effect of exercise on neurorestoration. The 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-lesioned mouse model provides a means to investigate the effect of exercise on neurorestoration because 30-40% of nigrostriatal dopaminergic neurons survive MPTP lesioning and may provide a template for neurorestoration to occur. MPTP-lesioned C57 BL/6J mice were administered MPTP (four injections of 20 mg/kg free-base, 2 hr apart) or saline and divided into the following groups: (1). saline; (2). saline + exercise; (3). MPTP; and (4) MPTP + exercise. Mice in exercise groups were run on a motorized treadmill for 30 days starting 4 days after MPTP lesioning (a period after which MPTP-induced cell death is complete). Initially, MPTP-lesioned + exercise mice ran at slower speeds for a shorter amount of time compared to saline + exercise mice. Both velocity and endurance improved in the MPTP + exercise group to near normal levels over the 30-day exercise period. The expression of proteins and genes involved in basal ganglia function including the dopamine transporter (DAT), tyrosine hydroxylase (TH), and the dopamine D1 and D2 receptors, as well as alterations on glutamate immunolabeling were determined. Exercise resulted in a significant downregulation of striatal DAT in the MPTP + exercise compared to MPTP nonexercised mice and to a lesser extent in the saline + exercised mice compared to their no-exercise counterparts. There was no significant difference in TH protein levels between MPTP and MPTP + exercise groups at the end of the study.
The expression of striatal dopamine D1 and D2 receptor mRNA transcript was suppressed in the saline + exercise group; however, dopamine D2 transcript expression was increased in the MPTP + exercise mice. Immunoelectron microscopy indicated that treadmill exercise reversed the lesioned-induced increase in nerve terminal glutamate immunolabeling seen after MPTP administration. Our data demonstrates that exercise promotes behavioral recovery in the injured brain by modulating genes and proteins important to basal ganglia function." }, { "pmid": "17117354", "title": "The science and practice of LSVT/LOUD: neural plasticity-principled approach to treating individuals with Parkinson disease and other neurological disorders.", "abstract": "Our 15 years of research have generated the first short- and long-term efficacy data for speech treatment (Lee Silverman Voice Treatment; LSVT/LOUD) in Parkinson's disease. We have learned that training the single motor control parameter amplitude (vocal loudness) and recalibration of self-perception of vocal loudness are fundamental elements underlying treatment success. This training requires intensive, high-effort exercise combined with a single, functionally relevant target (loudness) taught across simple to complex speech tasks. We have documented that training vocal loudness results in distributed effects of improved articulation, facial expression, and swallowing. Furthermore, positive effects of LSVT/LOUD have been documented in disorders other than Parkinson's disease (stroke, cerebral palsy). The purpose of this article is to elucidate the potential of a single target in treatment to encourage cross-system improvements across seemingly diverse motor systems and to discuss key elements in mode of delivery of treatment that are consistent with principles of neural plasticity." } ]
Frontiers in Psychology
30804863
PMC6378280
10.3389/fpsyg.2019.00263
Psychotherapy and Artificial Intelligence: A Proposal for Alignment
Brief Psychotherapy assists patients to become aware and change their behavior when facing an immediate emotional conflict, and to implement a transformation process through actions of listening, observing, increasing awareness and making interventions. Therapeutic work employs tools and techniques to trigger a process of change, emphasizing cognitive and affective understanding. This article presents an approach that combines Psychology and Artificial Intelligence with the purpose of enhancing psychotherapy with computer-implemented tools. This approach highlights the intersection between these two knowledge areas and shows how machine intelligence can help to characterize affective areas, construct genograms, determine degree of differentiation of self, investigate cognitive interaction patterns, and achieve self-awareness and redefinition. The conceptual proposal was implemented by a web application, and a sample of computer-aided analysis is presented.
Related Work
Psychology uses many tools for data collection and methods for patient evaluation. Murray Bowen devised the genogram as an assessment and intervention tool that provides a graphical representation of family structure, generation by generation, and helps to capture the pattern of interactional functioning of individuals in that family nucleus and its major morbidities. Bowen also created the Differentiation of Self Scale (Bowen, 1991) to understand the degree of emotional maturity of the individual in the context of relational processes. This scale evaluates the individual in contexts where the ego embodies and distinguishes itself from another ego. Simon (1989) proposed a theory of adaptation in which the quality of a subject's adjustment is evaluated from four adaptive perspectives (affective-relational, productivity, sociocultural and organic), resulting in the Operational Adaptive Diagnostic Scale. This tool is widely used (Honda and Yoshida, 2012) and has branched into several variations (Yoshida, 2013; Khater, 2014; Peixoto and Yoshida, 2016). Bertalanffy (1968) created the General System Theory, which points toward the integration of the natural and social sciences under a theory of systems, where systems are defined as organized sets of interrelated, interacting elements. Its premise is to look for generally valid rules that can be applied to any arrangement and at any level of reality. The concept of system is particularly important in Psychology because couples, families, and individuals tend to be understood as systems, sometimes in balance, sometimes far from it. Maturana (1975) created the concept of autopoiesis, which refers to a system being able to self-define, self-construct and, finally, renew itself through these two actions, a view that also shares concepts with Wiener's (1954) Cybernetics Theory (Slawomir and Yoshikatsu, 2016). This ability is fundamental to the existence of psychological therapy, since its goal is to lead the couple, family or individual to achieve self-knowledge and discernment, and to move from systemic homeostasis to a new balance in a healthier, stable condition. In addition, Prigogine's (1997) Theory of Dissipative Structures indicates that disorder (entropy) stimulates processes of self-organization, and that a system may work both on and off balance, implying a new interpretation of psychopathological phenomena and of the psychotherapeutic process. Freud was one of the pioneers of Brief Psychotherapy, since his early work involved treatments that usually did not last more than a year. Over time, however, his interest changed and his studies turned to longer analyses. Ferenczi and Rank (Borgogno, 2001) attempted to introduce changes in the psychoanalytic process in order to shorten it, coining the term "active technique," which seeks to make the patient more participative, anticipating past experiences and propelling the patient out of difficult situations. These authors believed that shortening therapy time was not just a social and economic matter, but also a technical one. According to them, a predefined number of sessions would induce the patient to abandon childish attitudes in favor of an adult posture. In Brief Psychotherapy (Knobel, 1986), it is essential to establish extensive knowledge about the patient's history and personality.
Although it may seem time-consuming to collect specific patient data in a context where the number of sessions is limited, this cost is necessary. Deeper knowledge of the patient improves the psychologist's work because it accelerates the search for alternative solutions and thus shortens the time of therapy. The first insight into AI combined with psychotherapy was the chatterbot Eliza (Weizenbaum, 1966), a 1960s natural language processing program created to simulate conversations and give users an illusion of understanding (a minimal sketch of this style of keyword-based reflection is given at the end of this section). It was a very important and successful experiment, and it was followed by several other bots. However, such software aimed to mimic a psychologist interacting with a patient, and was never meant to make recommendations about the patient's problems. It was during the 1980s that many reports were published describing computer support for clinical use (Hartman, 1986; LaChat, 1986; Sampson, 1986; Servan-Schreiber, 1986). These papers showed that logic-based AI could be used as an approach to computerized therapy, particularly for brief cognitive and behavioral therapies. At that time, automated theorem proving and deduction systems were not mature enough to support such applications, which may explain the scarcity of publications on this theme in the following years. Nowadays, there is a new wave of reports concerning AI and psychotherapy, mainly because of the evolution of AI techniques. For instance, Luxton (2014) introduces the concept of a computational clinician system, which is quite comprehensive. Moreover, there are initiatives devoted to specific issues, such as the work of Morales et al. (2017), who use data mining techniques to distinguish between groups with and without suicide risk. Fitzpatrick et al. (2017) present a fully automated conversational agent that delivers a self-help program for college students who identify themselves as having symptoms of anxiety and depression. Glomann et al. (2019) describe an application that acts as a constant companion for clinically diagnosed patients suffering from psychological illnesses, supporting them during or after ambulatory treatment. There are also proposals covering a wider range of issues. Kravets et al. (2017) present a full-scale automation of diagnosis that uses fuzzy logic to model psychiatrist reasoning. D'Alfonso et al. (2017) discuss the development of the moderated online social therapy web application, which provides an interactive social media-based platform for recovery in mental health. The process of getting to know the patient involves the construction of mental models based on fragmented evidence. The modeling of knowledge is one of the concerns of Artificial Intelligence, since it is necessary to understand human behavior so that a machine can mimic it. Mello and Carvalho (2015) constructed a computational model of representation called Knowledge Geometry, which is agnostic to technology and capable of describing the process of mapping a phenomenon onto concepts (intuition) and vice versa (reification). This model also adheres to the process of psychological evaluation, in which the professional needs to map the perceived conditions of the patient onto the behavioral patterns that belong to Brief Psychotherapy (Yi and Kun, 2017).
There is also compliance with the feedback process of designing intervention patterns for the ailing patient-system. When a psychotherapist tries to map and understand the phenomenon that generates a conflict in a patient, there is an attempt to project the theoretical concepts of psychotherapy onto the specific situation presented by the individual. The projection of these concepts onto the real world is the reification operation of Knowledge Geometry, a process of inference whose resources are analogy and isomorphism. By identifying the modus operandi and the functional pattern of the family (or conjugal system), it becomes possible to intervene and propose new alternatives to the system. It then becomes possible to deconstruct the addictive mechanisms of feedback and maintenance that prevent the system from admitting new experiences and learning, and that hamper its development or the resolution of the conflict in question. In AI, this situation is known as Case-Based Reasoning and is usually modeled with first-order logic. On the other hand, when the psychotherapist tries to identify and understand how the patient's individual symptom is connected to the broader interactional system, that is, how the singular situation is related to the general scenario, this represents the intuition operation of Knowledge Geometry. From this point of view, the patient manifests a symptomatology that is projected onto the family or conjugal system; in other words, the particular phenomenon is used as a support for understanding a broader pattern. Artificial Intelligence refers to a similar process as Machine Learning.
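As promised above, the following is a minimal, purely illustrative sketch of the keyword-and-reflection style of dialogue that Eliza popularized. It is not code from Weizenbaum's program or from any of the systems cited here; all rules, keywords, and response templates are invented for illustration.

```python
import random
import re

# Hypothetical, minimal Eliza-style rules: a keyword pattern paired with
# response templates; "{0}" is filled with the reflected captured phrase.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}."]),
    (re.compile(r"\bi am (.*)", re.I),
     ["What makes you say you are {0}?"]),
]
DEFAULT_RESPONSES = ["Please go on.", "Can you elaborate on that?"]

# First-person fragments are mirrored back in the second person, which is
# what produces the "illusion of understanding" described above.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULT_RESPONSES)

if __name__ == "__main__":
    print(respond("I feel anxious about my sessions"))
    # e.g. "Why do you feel anxious about your sessions?"
```

The modern agents and platforms cited in this section go far beyond such hand-written rules; the sketch only makes the basic conversational mechanism tangible.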
[ "11760664", "28626431", "28588005", "26200578", "28210230", "27496574" ]
[ { "pmid": "11760664", "title": "Elasticity of technique: the psychoanalytic project and the trajectory of Ferenczi's life.", "abstract": "The object of this paper is the Elasticity of Psychoanalytic Technique in the work of Sándor Ferenczi. The author sustains that this can be considered neither as an ultimate arrival point nor as a particular stage of Ferenczi's clinical-theoretical body of work, but rather as an ensemble of affective qualities, attitudes and values, which he gradually developed through experience, signalling a paradigm shift in the history of psychoanalysis. The following areas will be explored: the new sensitivity demonstrated by Ferenczi concerning the relational and communicative factors present in the analytic session, his subtle and acute attention to the participation of the analyst's own subjectivity in the therapeutic process, and how these enduring elements of Ferenczi's technique anticipate several significant future developments in psychoanalysis." }, { "pmid": "28626431", "title": "Artificial Intelligence-Assisted Online Social Therapy for Youth Mental Health.", "abstract": "Introduction: Benefits from mental health early interventions may not be sustained over time, and longer-term intervention programs may be required to maintain early clinical gains. However, due to the high intensity of face-to-face early intervention treatments, this may not be feasible. Adjunctive internet-based interventions specifically designed for youth may provide a cost-effective and engaging alternative to prevent loss of intervention benefits. However, until now online interventions have relied on human moderators to deliver therapeutic content. More sophisticated models responsive to user data are critical to inform tailored online therapy. Thus, integration of user experience with a sophisticated and cutting-edge technology to deliver content is necessary to redefine online interventions in youth mental health. This paper discusses the development of the moderated online social therapy (MOST) web application, which provides an interactive social media-based platform for recovery in mental health. We provide an overview of the system's main features and discus our current work regarding the incorporation of advanced computational and artificial intelligence methods to enhance user engagement and improve the discovery and delivery of therapy content. Methods: Our case study is the ongoing Horyzons site (5-year randomized controlled trial for youth recovering from early psychosis), which is powered by MOST. We outline the motivation underlying the project and the web application's foundational features and interface. We discuss system innovations, including the incorporation of pertinent usage patterns as well as identifying certain limitations of the system. This leads to our current motivations and focus on using computational and artificial intelligence methods to enhance user engagement, and to further improve the system with novel mechanisms for the delivery of therapy content to users. In particular, we cover our usage of natural language analysis and chatbot technologies as strategies to tailor interventions and scale up the system. Conclusions: To date, the innovative MOST system has demonstrated viability in a series of clinical research trials. 
Given the data-driven opportunities afforded by the software system, observed usage patterns, and the aim to deploy it on a greater scale, an important next step in its evolution is the incorporation of advanced and automated content delivery mechanisms." }, { "pmid": "28588005", "title": "Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial.", "abstract": "BACKGROUND\nWeb-based cognitive-behavioral therapeutic (CBT) apps have demonstrated efficacy but are characterized by poor adherence. Conversational agents may offer a convenient, engaging way of getting support at any time.\n\n\nOBJECTIVE\nThe objective of the study was to determine the feasibility, acceptability, and preliminary efficacy of a fully automated conversational agent to deliver a self-help program for college students who self-identify as having symptoms of anxiety and depression.\n\n\nMETHODS\nIn an unblinded trial, 70 individuals age 18-28 years were recruited online from a university community social media site and were randomized to receive either 2 weeks (up to 20 sessions) of self-help content derived from CBT principles in a conversational format with a text-based conversational agent (Woebot) (n=34) or were directed to the National Institute of Mental Health ebook, \"Depression in College Students,\" as an information-only control group (n=36). All participants completed Web-based versions of the 9-item Patient Health Questionnaire (PHQ-9), the 7-item Generalized Anxiety Disorder scale (GAD-7), and the Positive and Negative Affect Scale at baseline and 2-3 weeks later (T2).\n\n\nRESULTS\nParticipants were on average 22.2 years old (SD 2.33), 67% female (47/70), mostly non-Hispanic (93%, 54/58), and Caucasian (79%, 46/58). Participants in the Woebot group engaged with the conversational agent an average of 12.14 (SD 2.23) times over the study period. No significant differences existed between the groups at baseline, and 83% (58/70) of participants provided data at T2 (17% attrition). Intent-to-treat univariate analysis of covariance revealed a significant group difference on depression such that those in the Woebot group significantly reduced their symptoms of depression over the study period as measured by the PHQ-9 (F=6.47; P=.01) while those in the information control group did not. In an analysis of completers, participants in both groups significantly reduced anxiety as measured by the GAD-7 (F1,54= 9.24; P=.004). Participants' comments suggest that process factors were more influential on their acceptability of the program than content factors mirroring traditional therapy.\n\n\nCONCLUSIONS\nConversational agents appear to be a feasible, engaging, and effective way to deliver CBT." }, { "pmid": "26200578", "title": "A Multisite Initiative to Increase the Use of Alcohol Screening and Brief Intervention Through Resident Training and Clinic Systems Changes.", "abstract": "PURPOSE\nScreening and brief intervention (SBI) is a seldom-used evidence-based practice for reducing unhealthy alcohol use among primary care patients. This project assessed the effectiveness of a regional consortium's training efforts in increasing alcohol SBI.\n\n\nMETHOD\nInvestigators combined alcohol SBI residency training efforts with clinic SBI implementation processes and used chart reviews to assess impact on SBI rates in four residency clinics. 
Data were derived from a random sample of patient charts collected before (2010; n = 662) and after (2011; n = 656) resident training/clinic implementation. Patient charts were examined for evidence that patients were asked about alcohol use by a validated screening instrument, the screening result (positive or negative), evidence that patients received a brief intervention, prescriptions for medications to assist abstinence, and referrals to alcohol treatment. Chi-square analyses identified differences in pre- and posttraining implementation of SBI practices.\n\n\nRESULTS\nFollowing program implementation, screening with validated instruments increased from 151/662 (22.8%) at baseline to 543/656 (82.8%, P < .01), and identification of unhealthy alcohol use increased from 12/662 (1.8%) to 41/656 (6.3%, P < .01). Performance of brief interventions more than doubled (10/662 [1.5%] versus 24/656 [3.7%], P < .01). There were no increases in the use of medications or referrals to treatment.\n\n\nCONCLUSIONS\nResident training combined with clinic implementation efforts can increase the delivery of evidence-based practices such as alcohol SBI in residency clinics." }, { "pmid": "28210230", "title": "Acute Mental Discomfort Associated with Suicide Behavior in a Clinical Sample of Patients with Affective Disorders: Ascertaining Critical Variables Using Artificial Intelligence Tools.", "abstract": "AIM\nIn efforts to develop reliable methods to detect the likelihood of impending suicidal behaviors, we have proposed the following.\n\n\nOBJECTIVE\nTo gain a deeper understanding of the state of suicide risk by determining the combination of variables that distinguishes between groups with and without suicide risk.\n\n\nMETHOD\nA study involving 707 patients consulting for mental health issues in three health centers in Greater Santiago, Chile. Using 345 variables, an analysis was carried out with artificial intelligence tools, Cross Industry Standard Process for Data Mining processes, and decision tree techniques. The basic algorithm was top-down, and the most suitable division produced by the tree was selected by using the lowest Gini index as a criterion and by looping it until the condition of belonging to the group with suicidal behavior was fulfilled.\n\n\nRESULTS\nFour trees distinguishing the groups were obtained, of which the elements of one were analyzed in greater detail, since this tree included both clinical and personality variables. This specific tree consists of six nodes without suicide risk and eight nodes with suicide risk (tree decision 01, accuracy 0.674, precision 0.652, recall 0.678, specificity 0.670, F measure 0.665, receiver operating characteristic (ROC) area under the curve (AUC) 73.35%; tree decision 02, accuracy 0.669, precision 0.642, recall 0.694, specificity 0.647, F measure 0.667, ROC AUC 68.91%; tree decision 03, accuracy 0.681, precision 0.675, recall 0.638, specificity 0.721, F measure, 0.656, ROC AUC 65.86%; tree decision 04, accuracy 0.714, precision 0.734, recall 0.628, specificity 0.792, F measure 0.677, ROC AUC 58.85%).\n\n\nCONCLUSION\nThis study defines the interactions among a group of variables associated with suicidal ideation and behavior. By using these variables, it may be possible to create a quick and easy-to-use tool. As such, psychotherapeutic interventions could be designed to mitigate the impact of these variables on the emotional state of individuals, thereby reducing eventual risk of suicide. 
Such interventions may reinforce psychological well-being, feelings of self-worth, and reasons for living, for each individual in certain groups of patients." }, { "pmid": "27496574", "title": "Anticipation: Beyond synthetic biology and cognitive robotics.", "abstract": "The aim of this paper is to propose that current robotic technologies cannot have intentional states any more than is feasible within the sensorimotor variant of embodied cognition. It argues that anticipation is an emerging concept that can provide a bridge between both the deepest philosophical theories about the nature of life and cognition and the empirical biological and cognitive sciences steeped in reductionist and Newtonian conceptions of causality. The paper advocates that in order to move forward, cognitive robotics needs to embrace new platforms and a conceptual framework that will enable it to pursue, in a meaningful way, questions about autonomy and purposeful behaviour. We suggest that hybrid systems, part robotic and part cultures of neurones, offer experimental platforms where different dimensions of enactivism (sensorimotor, constitutive foundations of biological autonomy, including anticipation), and their relative contributions to cognition, can be investigated in an integrated way. A careful progression, mindful to the deep philosophical concerns but also respecting empirical evidence, will ultimately lead towards unifying theoretical and empirical biological sciences and may offer advancement where reductionist sciences have been so far faltering." } ]
Frontiers in Computational Neuroscience
30809141
PMC6380086
10.3389/fncom.2019.00001
Symbolic Modeling of Asynchronous Neural Dynamics Reveals Potential Synchronous Roots for the Emergence of Awareness
A new computational framework implementing asynchronous neural dynamics is used to address the duality between synchronous and asynchronous processes, and their possible relation to conscious vs. unconscious behaviors. Extending previous results on modeling the first three levels of animal awareness, this formalism is used here to produce the execution traces of parallel threads that implement these models. Running simulations demonstrate how sensory stimuli associated with a population of excitatory neurons inhibit, in turn, other neural assemblies, i.e., a kind of asynchronous neuronal wiring/unwiring process that is reflected in the progressive trimming of execution traces. Whereas reactive behaviors relying on configural learning produce vanishing traces, the learning of a rule and its later application produce persistent traces revealing potential synchronous roots of animal awareness. In contrast to previous formalisms that use analytical and/or statistical methods to search for patterns existing in a brain, this new framework proposes a tool for studying the emergence of brain structures that might be associated with higher-level cognitive capabilities.
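The abstract describes the wiring/unwiring process only at a high level. Purely as an illustrative aid, and not the authors' implementation, the toy sketch below simulates one excitatory assembly that, when driven by a stimulus, inhibits another assembly while a Hebbian-style rule strengthens or trims the connection; every step is appended to an execution trace. All names (Assembly, hebbian_step) and parameter values are hypothetical.

```python
# Toy sketch (hypothetical, not the paper's framework): asynchronous-style
# wiring/unwiring of an inhibitory link, with every event logged to a trace.

class Assembly:
    def __init__(self, name, threshold=0.5):
        self.name = name
        self.threshold = threshold
        self.activity = 0.0

def hebbian_step(weight, pre_active, post_active, lr=0.1):
    """Strengthen the link when pre and post are active together, weaken it otherwise."""
    if pre_active and post_active:
        return min(1.0, weight + lr)      # wiring
    return max(0.0, weight - lr / 2)      # unwiring / trimming

def run(stimuli, steps=6):
    excit, target = Assembly("excitatory"), Assembly("target")
    w_inhibit = 0.8                        # inhibitory coupling strength (arbitrary)
    trace = []                             # execution trace of simulated events
    for t in range(steps):
        excit.activity = stimuli[t % len(stimuli)]
        pre = excit.activity > excit.threshold
        # an active excitatory assembly suppresses the target assembly
        target.activity = max(0.0, target.activity + 0.3 - (w_inhibit if pre else 0.0))
        post = target.activity > target.threshold
        w_inhibit = hebbian_step(w_inhibit, pre, post)
        trace.append((t, round(excit.activity, 2),
                      round(target.activity, 2), round(w_inhibit, 2)))
    return trace

if __name__ == "__main__":
    for event in run(stimuli=[1.0, 0.9, 0.0, 1.0]):
        print(event)
```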
Related Work
Previous work related to the modeling of brain and cognition using symbolic methods, and more generally to global brain simulations and the emergence of consciousness, is now reviewed.

In an extension of his early work on classical conditioning (Klopf, 1988), Klopf (Johnson et al., 2001) proposed a computational model of learned avoidance that relies on an internal clock controlling both classically and instrumentally conditioned components, thus allowing for an explicit “proprioceptive feedback,” i.e., a kind of primitive consciousness. This proposal opposed the then-dominant paradigm requiring an evaluative feedback from the environment. The opposition rested on the argument that “animals do not receive error signals during learning,” thus pointing to the biological implausibility of error-correction back-propagation, an argument that, notwithstanding the proven effectiveness of this technique as a tool for functional approximation, is still valid today for brain research.

Using classical results on Hopfield networks and attractors (Hopfield, 1982), Balkenius and his co-workers (Balkenius et al., 2018) implemented a memory model for robots. In this model, a prototypal form of consciousness arises from sensory information filled into a memory that in turn produces memory transitions over time, thus creating an inner world that is used both to interpret external input and to support “thoughts disconnected from the present situation.” A far-reaching but questionable conclusion of this study is that “an inner world is a sine qua non for consciousness.”

The work by Deco et al. (2008) falls in the category of “whole (or global) brain” simulations. Their theoretical account follows an overall statistical strategy: degrees of freedom are successively reduced to resolve an otherwise intractable computational problem. Populations of spiking neurons are first reduced to distribution functions describing their probabilistic evolution, giving rise to neural fields defined by differential operators involving both temporal and spatial terms. The formalism finally proposes a measure for partitioning the brain into functionally relevant regions, this so-called “dynamical workspace of binding nodes” being supposedly responsible for binding information into conscious perceptions and memories. As in our own proposal, this formalism uses a multilevel architecture, which in this case distinguishes between the single-neuron level, the mesoscopic level describing how neural elements interact to yield emergent behavior, and the macroscopic level of dynamical large-scale neural systems such as cortical regions, the thalamus, etc. Each level of this description relates to neuroscience data, from single-unit recordings, through local field potentials, to functional magnetic resonance imaging (fMRI). In conclusion, this formalism uses analytical and statistical tools to search for existing patterns in a functioning brain. In contrast, our own framework, which is constrained solely by a symbolic model of synaptic plasticity, proposes a tool for shaping the brain by linking perception to behavior through a mechanism of Hebbian learning.

Besold and Kühnberger (2015) envision a system that operates on different levels corresponding to the layers in a system's architecture in order to update network structures via the artificial equivalent of synaptic dynamics.
Our formalism, which relies on a virtual machine, can be considered an attempt to implement this architecture via a conceptual abstraction of synaptic plasticity. It also bears some similarity to a newer model of neural networks, namely fibring neural networks (Garcez and Gabbay, 2004), which, similarly to threads, allow for the activation of groups of neurons and thus represent different levels of abstraction.

With a few notable exceptions (e.g., Smith, 1992; Ruksénasz et al., 2009; Su et al., 2014), system validation is an issue that is seldom addressed in computational cognitive neuroscience. In order to obtain symbolic descriptions of neuronal behavior that allow for model checking, Su et al. applied concurrency theory in a framework extending classical automata theory with communicating capabilities. A network of communicating automata is then mapped into a labeled transition system whose inference rules (for both internal transitions and automata synchronizations) define the semantics of the overall model. Su et al. further show that, in accordance with our own approach, asynchronous processing is not only a more biologically plausible way to model neural systems than conventional artificial neural networks with synchronous updates, but also offers new perspectives for modeling higher-level cognitive capabilities through emergent synchronous processes.
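Since both the Balkenius et al. (2018) memory model and the asynchronous-processing argument of Su et al. build on Hopfield-style attractor dynamics, a minimal sketch of asynchronous Hopfield updates may help make the contrast with synchronous updates concrete. This is a generic textbook construction under assumed pattern sizes and update counts, not code from any of the cited works.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning of a Hopfield weight matrix."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0.0)           # no self-connections
    return w / patterns.shape[0]

def recall_async(w, state, rng, n_updates=200):
    """Asynchronous recall: one randomly chosen unit is updated at a time,
    in contrast to synchronous schemes that update all units together."""
    state = state.copy()
    for _ in range(n_updates):
        i = rng.integers(state.size)
        state[i] = 1 if w[i] @ state >= 0 else -1
    return state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                       [1, 1, 1, 1, -1, -1, -1, -1]])
    w = train_hopfield(stored)
    noisy = stored[0].copy()
    noisy[:2] *= -1                    # corrupt two units of the first pattern
    print("recovered:", recall_async(w, noisy, rng))
```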
[ "21841845", "28761554", "29074768", "17817537", "26969413", "18769680", "21521609", "16603406", "11164022", "12829797", "29074766", "26447576", "15328408", "26593091", "9248061", "16764513", "6953413", "23559034", "18079071", "12517353", "26494276", "18602905", "26451489", "26096599", "28093548", "19669426", "24099930", "7434008", "8466179", "29074764", "26513352", "26157510", "23596410", "29074765", "26593093", "12757823", "25823871" ]
[ { "pmid": "21841845", "title": "The Neurodynamics of Cognition: A Tutorial on Computational Cognitive Neuroscience.", "abstract": "Computational Cognitive Neuroscience (CCN) is a new field that lies at the intersection of computational neuroscience, machine learning, and neural network theory (i.e., connectionism). The ideal CCN model should not make any assumptions that are known to contradict the current neuroscience literature and at the same time provide good accounts of behavior and at least some neuroscience data (e.g., single-neuron activity, fMRI data). Furthermore, once set, the architecture of the CCN network and the models of each individual unit should remain fixed throughout all applications. Because of the greater weight they place on biological accuracy, CCN models differ substantially from traditional neural network models in how each individual unit is modeled, how learning is modeled, and how behavior is generated from the network. A variety of CCN solutions to these three problems are described. A real example of this approach is described, and some advantages and limitations of the CCN approach are discussed." }, { "pmid": "28761554", "title": "Towards neuro-inspired symbolic models of cognition: linking neural dynamics to behaviors through asynchronous communications.", "abstract": "A computational architecture modeling the relation between perception and action is proposed. Basic brain processes representing synaptic plasticity are first abstracted through asynchronous communication protocols and implemented as virtual microcircuits. These are used in turn to build mesoscale circuits embodying parallel cognitive processes. Encoding these circuits into symbolic expressions gives finally rise to neuro-inspired programs that are compiled into pseudo-code to be interpreted by a virtual machine. Quantitative evaluation measures are given by the modification of synapse weights over time. This approach is illustrated by models of simple forms of behaviors exhibiting cognition up to the third level of animal awareness. As a potential benefit, symbolic models of emergent psychological mechanisms could lead to the discovery of the learning processes involved in the development of cognition. The executable specifications of an experimental platform allowing for the reproduction of simulated experiments are given in \"Appendix\"." }, { "pmid": "29074768", "title": "Space and time in the brain.", "abstract": "Nothing is more intuitive, yet more complex, than the concepts of space and time. In contrast to spacetime in physics, space and time in neuroscience remain separate coordinates to which we attach our observations. Investigators of navigation and memory relate neuronal activity to position, distance, time point, and duration and compare these parameters to units of measuring instruments. Although spatial-temporal sequences of brain activity often correlate with distance and duration measures, these correlations may not correspond to neuronal representations of space or time. Neither instruments nor brains sense space or time. Neuronal activity can be described as a succession of events without resorting to the concepts of space or time. Instead of searching for brain representations of our preconceived ideas, we suggest investigating how brain mechanisms give rise to inferential, model-building explanations." 
}, { "pmid": "17817537", "title": "Spatial learning as an adaptation in hummingbirds.", "abstract": "An ecological approach based on food distribution suggests that humming birds should more easily learn to visit a flower in a new location than to learn to return to a flower in a position just visited, for a food reward. Experimental results support this hypothesis as well as the general view that differences in learning within and among species represent adaptations." }, { "pmid": "26969413", "title": "Astrocytes as new targets to improve cognitive functions.", "abstract": "Astrocytes are now viewed as key elements of brain wiring as well as neuronal communication. Indeed, they not only bridge the gap between metabolic supplies by blood vessels and neurons, but also allow fine control of neurotransmission by providing appropriate signaling molecules and insulation through a tight enwrapping of synapses. Recognition that astroglia is essential to neuronal communication is nevertheless fairly recent and the large body of evidence dissecting such role has focused on the synaptic level by identifying neuro- and gliotransmitters uptaken and released at synaptic or extrasynaptic sites. Yet, more integrated research deciphering the impact of astroglial functions on neuronal network activity have led to the reasonable assumption that the role of astrocytes in supervising synaptic activity translates in influencing neuronal processing and cognitive functions. Several investigations using recent genetic tools now support this notion by showing that inactivating or boosting astroglial function directly affects cognitive abilities. Accordingly, brain diseases resulting in impaired cognitive functions have seen their physiopathological mechanisms revisited in light of this primary protagonist of brain processing. We here provide a review of the current knowledge on the role of astrocytes in cognition and in several brain diseases including neurodegenerative disorders, psychiatric illnesses, as well as other conditions such as epilepsy. Potential astroglial therapeutic targets are also discussed." }, { "pmid": "18769680", "title": "The dynamic brain: from spiking neurons to neural masses and cortical fields.", "abstract": "The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space-time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain; the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. 
Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences." }, { "pmid": "21521609", "title": "Experimental and theoretical approaches to conscious processing.", "abstract": "Recent experimental studies and theoretical models have begun to address the challenge of establishing a causal link between subjective conscious experience and measurable neuronal activity. The present review focuses on the well-delimited issue of how an external or internal piece of information goes beyond nonconscious processing and gains access to conscious processing, a transition characterized by the existence of a reportable subjective experience. Converging neuroimaging and neurophysiological data, acquired during minimal experimental contrasts between conscious and nonconscious processing, point to objective neural measures of conscious access: late amplification of relevant sensory activity, long-distance cortico-cortical synchronization at beta and gamma frequencies, and \"ignition\" of a large-scale prefronto-parietal network. We compare these findings to current theoretical models of conscious processing, including the Global Neuronal Workspace (GNW) model according to which conscious access occurs when incoming information is made globally available to multiple brain systems through a network of neurons with long-range axons densely distributed in prefrontal, parieto-temporal, and cingulate cortices. The clinical implications of these results for general anesthesia, coma, vegetative state, and schizophrenia are discussed." }, { "pmid": "16603406", "title": "Conscious, preconscious, and subliminal processing: a testable taxonomy.", "abstract": "Of the many brain events evoked by a visual stimulus, which are specifically associated with conscious perception, and which merely reflect non-conscious processing? Several recent neuroimaging studies have contrasted conscious and non-conscious visual processing, but their results appear inconsistent. Some support a correlation of conscious perception with early occipital events, others with late parieto-frontal activity. Here we attempt to make sense of these dissenting results. On the basis of the global neuronal workspace hypothesis, we propose a taxonomy that distinguishes between vigilance and access to conscious report, as well as between subliminal, preconscious and conscious processing. We suggest that these distinctions map onto different neural mechanisms, and that conscious perception is systematically associated with surges of parieto-frontal activity causing top-down amplification." 
}, { "pmid": "11164022", "title": "Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework.", "abstract": "This introductory chapter attempts to clarify the philosophical, empirical, and theoretical bases on which a cognitive neuroscience approach to consciousness can be founded. We isolate three major empirical observations that any theory of consciousness should incorporate, namely (1) a considerable amount of processing is possible without consciousness, (2) attention is a prerequisite of consciousness, and (3) consciousness is required for some specific cognitive tasks, including those that require durable information maintenance, novel combinations of operations, or the spontaneous generation of intentional behavior. We then propose a theoretical framework that synthesizes those facts: the hypothesis of a global neuronal workspace. This framework postulates that, at any given time, many modular cerebral networks are active in parallel and process information in an unconscious manner. An information becomes conscious, however, if the neural population that represents it is mobilized by top-down attentional amplification into a brain-scale state of coherent activity that involves many neurons distributed throughout the brain. The long-distance connectivity of these 'workspace neurons' can, when they are active for a minimal duration, make the information available to a variety of processes including perceptual categorization, long-term memorization, evaluation, and intentional action. We postulate that this global availability of information through the workspace is what we subjectively experience as a conscious state. A complete theory of consciousness should explain why some cognitive and cerebral representations can be permanently or temporarily inaccessible to consciousness, what is the range of possible conscious contents, how they map onto specific cerebral circuits, and whether a generic neuronal mechanism underlies all of them. We confront the workspace model with those issues and identify novel experimental predictions. Neurophysiological, anatomical, and brain-imaging data strongly argue for a major role of prefrontal cortex, anterior cingulate, and the areas that connect to them, in creating the postulated brain-scale workspace." }, { "pmid": "12829797", "title": "A neuronal network model linking subjective reports and objective physiological data during conscious perception.", "abstract": "The subjective experience of perceiving visual stimuli is accompanied by objective neuronal activity patterns such as sustained activity in primary visual area (V1), amplification of perceptual processing, correlation across distant regions, joint parietal, frontal, and cingulate activation, gamma-band oscillations, and P300 waveform. We describe a neuronal network model that aims at explaining how those physiological parameters may cohere with conscious reports. The model proposes that the step of conscious perception, referred to as access awareness, is related to the entry of processed visual stimuli into a global brain state that links distant areas including the prefrontal cortex through reciprocal connections, and thus makes perceptual information reportable by multiple means. We use the model to simulate a classical psychological paradigm: the attentional blink. 
In addition to reproducing the main objective and subjective features of this paradigm, the model predicts an unique property of nonlinear transition from nonconscious processing to subjective perception. This all-or-none dynamics of conscious perception was verified behaviorally in human subjects." }, { "pmid": "29074766", "title": "Big data and the industrialization of neuroscience: A safe roadmap for understanding the brain?", "abstract": "New technologies in neuroscience generate reams of data at an exponentially increasing rate, spurring the design of very-large-scale data-mining initiatives. Several supranational ventures are contemplating the possibility of achieving, within the next decade(s), full simulation of the human brain." }, { "pmid": "26447576", "title": "Cortical Correlates of Low-Level Perception: From Neural Circuits to Percepts.", "abstract": "Low-level perception results from neural-based computations, which build a multimodal skeleton of unconscious or self-generated inferences on our environment. This review identifies bottleneck issues concerning the role of early primary sensory cortical areas, mostly in rodent and higher mammals (cats and non-human primates), where perception substrates can be searched at multiple scales of neural integration. We discuss the limitation of purely bottom-up approaches for providing realistic models of early sensory processing and the need for identification of fast adaptive processes, operating within the time of a percept. Future progresses will depend on the careful use of comparative neuroscience (guiding the choices of experimental models and species adapted to the questions under study), on the definition of agreed-upon benchmarks for sensory stimulation, on the simultaneous acquisition of neural data at multiple spatio-temporal scales, and on the in vivo identification of key generic integration and plasticity algorithms validated experimentally and in simulations." }, { "pmid": "15328408", "title": "Modulation of long-range neural synchrony reflects temporal limitations of visual attention in humans.", "abstract": "Because of attentional limitations, the human visual system can process for awareness and response only a fraction of the input received. Lesion and functional imaging studies have identified frontal, temporal, and parietal areas as playing a major role in the attentional control of visual processing, but very little is known about how these areas interact to form a dynamic attentional network. We hypothesized that the network communicates by means of neural phase synchronization, and we used magnetoencephalography to study transient long-range interarea phase coupling in a well studied attentionally taxing dual-target task (attentional blink). Our results reveal that communication within the fronto-parieto-temporal attentional network proceeds via transient long-range phase synchronization in the beta band. Changes in synchronization reflect changes in the attentional demands of the task and are directly related to behavioral performance. Thus, we show how attentional limitations arise from the way in which the subsystems of the attentional network interact." }, { "pmid": "26593091", "title": "Distinct Eligibility Traces for LTP and LTD in Cortical Synapses.", "abstract": "In reward-based learning, synaptic modifications depend on a brief stimulus and a temporally delayed reward, which poses the question of how synaptic activity patterns associate with a delayed reward. 
A theoretical solution to this so-called distal reward problem has been the notion of activity-generated \"synaptic eligibility traces,\" silent and transient synaptic tags that can be converted into long-term changes in synaptic strength by reward-linked neuromodulators. Here we report the first experimental demonstration of eligibility traces in cortical synapses. We demonstrate the Hebbian induction of distinct traces for LTP and LTD and their subsequent timing-dependent transformation into lasting changes by specific monoaminergic receptors anchored to postsynaptic proteins. Notably, the temporal properties of these transient traces allow stable learning in a recurrent neural network that accurately predicts the timing of the reward, further validating the induction and transformation of eligibility traces for LTP and LTD as a plausible synaptic substrate for reward-based learning." }, { "pmid": "9248061", "title": "The NEURON simulation environment.", "abstract": "The moment-to-moment processing of information by the nervous system involves the propagation and interaction of electrical and chemical signals that are distributed in space and time. Biologically realistic modeling is needed to test hypotheses about the mechanisms that govern these signals and how nervous system function emerges from the operation of these mechanisms. The NEURON simulation program provides a powerful and flexible environment for implementing such models of individual neurons and small networks of neurons. It is particularly useful when membrane potential is nonuniform and membrane currents are complex. We present the basic ideas that would help informed users make the most efficient use of NEURON." }, { "pmid": "16764513", "title": "A fast learning algorithm for deep belief nets.", "abstract": "We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind." }, { "pmid": "6953413", "title": "Neural networks and physical systems with emergent collective computational abilities.", "abstract": "Computational properties of use of biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. 
The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices." }, { "pmid": "23559034", "title": "A review of cell assemblies.", "abstract": "Since the cell assembly (CA) was hypothesised, it has gained substantial support and is believed to be the neural basis of psychological concepts. A CA is a relatively small set of connected neurons, that through neural firing can sustain activation without stimulus from outside the CA, and is formed by learning. Extensive evidence from multiple single unit recording and other techniques provides support for the existence of CAs that have these properties, and that their neurons also spike with some degree of synchrony. Since the evidence is so broad and deep, the review concludes that CAs are all but certain. A model of CAs is introduced that is informal, but is broad enough to include, e.g. synfire chains, without including, e.g. holographic reduced representation. CAs are found in most cortical areas and in some sub-cortical areas, they are involved in psychological tasks including categorisation, short-term memory and long-term memory, and are central to other tasks including working memory. There is currently insufficient evidence to conclude that CAs are the neural basis of all concepts. A range of models have been used to simulate CA behaviour including associative memory and more process- oriented tasks such as natural language parsing. Questions involving CAs, e.g. memory persistence, CAs' complex interactions with brain waves and learning, remain unanswered. CA research involves a wide range of disciplines including biology and psychology, and this paper reviews literature directly related to the CA, providing a basis of discussion for this interdisciplinary community on this important topic. Hopefully, this discussion will lead to more formal and accurate models of CAs that are better linked to neuropsychological data." }, { "pmid": "18079071", "title": "Learning strategies in matching to sample: if-then and configural learning by pigeons.", "abstract": "Pigeons learned a matching-to-sample task with a split training-set design in which half of the stimulus displays were untrained and tested following acquisition. Transfer to the untrained displays along with no novel-stimulus transfer indicated that these pigeons learned the task (partially) via if-then rules. Comparisons to other performance measures indicated that they also partially learned the task via configural learning (learning the gestalt of the whole stimulus display). Differences in the FR-sample requirement (1 vs. 20) had no systematic effect on the type of learning or level of learning obtained. Differences from a previous study [Wright, A.A., 1997. Concept learning and learning strategies. Psychol. Sci. 8, 119-123] are discussed, including the effect of displaying the stimuli vertically (traditional display orientation) or horizontally from the floor." 
}, { "pmid": "12517353", "title": "Why visual attention and awareness are different.", "abstract": "Now that the study of consciousness is warmly embraced by cognitive scientists, much confusion seems to arise between the concepts of visual attention and visual awareness. Often, visual awareness is equated to what is in the focus of attention. There are, however, two sets of arguments to separate attention from awareness: a psychological/theoretical one and a neurobiological one. By combining these arguments I present definitions of visual attention and awareness that clearly distinguish between the two, yet explain why attention and awareness are so intricately related. In fact, there seems more overlap between mechanisms of memory and awareness than between those of attention and awareness." }, { "pmid": "26494276", "title": "Disinhibition, a Circuit Mechanism for Associative Learning and Memory.", "abstract": "Although a wealth of data have elucidated the structure and physiology of neuronal circuits, we still only have a very limited understanding of how behavioral learning is implemented at the network level. An emerging crucial player in this implementation is disinhibition--a transient break in the balance of excitation and inhibition. In contrast to the widely held view that the excitation/inhibition balance is highly stereotyped in cortical circuits, recent findings from behaving animals demonstrate that salient events often elicit disinhibition of projection neurons that favors excitation and thereby enhances their activity. Behavioral functions ranging from auditory fear learning, for which most data are available to date, to spatial navigation are causally linked to disinhibition in different compartments of projection neurons, in diverse cortical areas and at timescales ranging from milliseconds to days, suggesting that disinhibition is a conserved circuit mechanism contributing to learning and memory expression." }, { "pmid": "18602905", "title": "Linking neurons to behavior in multisensory perception: a computational review.", "abstract": "A large body of psychophysical and physiological findings has characterized how information is integrated across multiple senses. This work has focused on two major issues: how do we integrate information, and when do we integrate, i.e., how do we decide if two signals come from the same source or different sources. Recent studies suggest that humans and animals use Bayesian strategies to solve both problems. With regard to how to integrate, computational studies have also started to shed light on the neural basis of this Bayes-optimal computation, suggesting that, if neuronal variability is Poisson-like, a simple linear combination of population activity is all that is required for optimality. We review both sets of developments, which together lay out a path towards a complete neural theory of multisensory perception." }, { "pmid": "26451489", "title": "Reconstruction and Simulation of Neocortical Microcircuitry.", "abstract": "We present a first-draft digital reconstruction of the microcircuitry of somatosensory cortex of juvenile rat. The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm(3) containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. 
When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses. Simulations reproduce an array of in vitro and in vivo experiments without parameter tuning. Additionally, we find a spectrum of network states with a sharp transition from synchronous to asynchronous activity, modulated by physiological mechanisms. The spectrum of network states, dynamically reconfigured around this transition, supports diverse information processing strategies.\n\n\nPAPERCLIP\nVIDEO ABSTRACT." }, { "pmid": "26096599", "title": "Homing in on consciousness in the nervous system: An action-based synthesis.", "abstract": "What is the primary function of consciousness in the nervous system? The answer to this question remains enigmatic, not so much because of a lack of relevant data, but because of the lack of a conceptual framework with which to interpret the data. To this end, we have developed Passive Frame Theory, an internally coherent framework that, from an action-based perspective, synthesizes empirically supported hypotheses from diverse fields of investigation. The theory proposes that the primary function of consciousness is well-circumscribed, serving the somatic nervous system. For this system, consciousness serves as a frame that constrains and directs skeletal muscle output, thereby yielding adaptive behavior. The mechanism by which consciousness achieves this is more counterintuitive, passive, and \"low level\" than the kinds of functions that theorists have previously attributed to consciousness. Passive frame theory begins to illuminate (a) what consciousness contributes to nervous function, (b) how consciousness achieves this function, and (c) the neuroanatomical substrates of conscious processes. Our untraditional, action-based perspective focuses on olfaction instead of on vision and is descriptive (describing the products of nature as they evolved to be) rather than normative (construing processes in terms of how they should function). Passive frame theory begins to isolate the neuroanatomical, cognitive-mechanistic, and representational (e.g., conscious contents) processes associated with consciousness." }, { "pmid": "28093548", "title": "Astrocytic control of synaptic function.", "abstract": "Astrocytes intimately interact with synapses, both morphologically and, as evidenced in the past 20 years, at the functional level. Ultrathin astrocytic processes contact and sometimes enwrap the synaptic elements, sense synaptic transmission and shape or alter the synaptic signal by releasing signalling molecules. Yet, the consequences of such interactions in terms of information processing in the brain remain very elusive. This is largely due to two major constraints: (i) the exquisitely complex, dynamic and ultrathin nature of distal astrocytic processes that renders their investigation highly challenging and (ii) our lack of understanding of how information is encoded by local and global fluctuations of intracellular calcium concentrations in astrocytes. Here, we will review the existing anatomical and functional evidence of local interactions between astrocytes and synapses, and how it underlies a role for astrocytes in the computation of synaptic information.This article is part of the themed issue 'Integrating Hebbian and homeostatic plasticity'." 
}, { "pmid": "19669426", "title": "On the role of synchrony for neuron-astrocyte interactions and perceptual conscious processing.", "abstract": "Recent research on brain correlates of cognitive processes revealed the occurrence of global synchronization during conscious processing of sensory stimuli. In spite of technological progress in brain imaging, an explanation of the computational role of synchrony is still a highly controversial issue. In this study, we depart from an analysis of the usage of blood-oxygen-level-dependent functional magnetic resonance imaging for the study of cognitive processing, leading to the identification of evoked local field potentials as the vehicle for sensory patterns that compose conscious episodes. Assuming the \"astrocentric hypothesis\" formulated by James M. Robertson (astrocytes being the final stage of conscious processing), we propose that the role of global synchrony in perceptual conscious processing is to induce the transfer of information patterns embodied in local field potentials to astrocytic calcium waves, further suggesting that these waves are responsible for the \"binding\" of spatially distributed patterns into unitary conscious episodes." }, { "pmid": "24099930", "title": "Astrocyte domains and the three-dimensional and seamless expression of consciousness and explicit memories.", "abstract": "UNLABELLED\nThe expression of consciousness and the site of memory storage within the brain are unknown despite over a century of intense empirical scrutiny. Recent anatomical studies show that human protoplasmic astrocytes form innumerable uniform polyhedral tessellating domains that are arranged three-dimensionally. This complex geometric structure provides a matrix for seamless and three-dimensional expression of consciousness and explicit memories. These studies, in conjunction with physiological data, demonstrate how this may be achieved: 1. Individual protoplasmic astrocytes occupy separate three-dimensional non overlapping (i.e., tessellating) territories known as domains. Thus, billions of contiguous and continuous domains tile mammalian cortical gray matter. 2. Each domain subtends approximately 90,000 rodent and 2,000,000 human tripartite synapses that signal to perisynaptic astrocyte processes which encode and integrate synaptic information. Neuron to astrocyte signalling is as rapid as neuron to neuron signaling. 3. Astrocytes are exquisitely sensitive to neural activity and distinguish synapse from numerous afferent pathways with different neurotransmitters or neuromodulators. Therefore, synaptic information is dynamically integrated within a global matrix of tessellating astrocyte domains. 4. Astrocytes of the sensory cortex respond to peripheral stimulation in vivo, and some have the ability to distinguish sensory input in more refined detail than surrounding neurons (e.g., visual cortex). Additionally, astrocytes of the cortex and cerebellum react in concert with activity of awake and behaving animals. 5. Domains are extensively interconnected by gap junctions that transmit molecules, many important for information processing and transcription, through complex syncytial networks. This adds an additional level of complexity to interactions between astrocyte domains that may extend over large areas including the entire neocortex.\n\n\nHYPOTHESIS\nConsciousness is seamlessly expressed in a three-dimensional matrix consisting of billions of tessellating cortical astrocyte domains that bind attended sensory information. 
The temporal sequence (stream of consciousness) depends on the global distribution of sequential neural activity. The matrix may also be utilized to encode and store explicit memories." }, { "pmid": "7434008", "title": "Reference: the linguistic essential.", "abstract": "Three chimpanzees learned to label three edibles as \"foods\" and three inedibles as \"tools\". Two chimpanzees could then similarly categorize numerous objects during blind trial 1 tests when shown only objects' names. The language-like skills of the chimpanzee who failed (Lana) illustrates that apes can use symbols in ways that emulate human usage without comprehending their representational function." }, { "pmid": "26513352", "title": "Learning-Induced Gene Expression in the Hippocampus Reveals a Role of Neuron -Astrocyte Metabolic Coupling in Long Term Memory.", "abstract": "We examined the expression of genes related to brain energy metabolism and particularly those encoding glia (astrocyte)-specific functions in the dorsal hippocampus subsequent to learning. Context-dependent avoidance behavior was tested in mice using the step-through Inhibitory Avoidance (IA) paradigm. Animals were sacrificed 3, 9, 24, or 72 hours after training or 3 hours after retention testing. The quantitative determination of mRNA levels revealed learning-induced changes in the expression of genes thought to be involved in astrocyte-neuron metabolic coupling in a time dependent manner. Twenty four hours following IA training, an enhanced gene expression was seen, particularly for genes encoding monocarboxylate transporters 1 and 4 (MCT1, MCT4), alpha2 subunit of the Na/K-ATPase and glucose transporter type 1. To assess the functional role for one of these genes in learning, we studied MCT1 deficient mice and found that they exhibit impaired memory in the inhibitory avoidance task. Together, these observations indicate that neuron-glia metabolic coupling undergoes metabolic adaptations following learning as indicated by the change in expression of key metabolic genes." }, { "pmid": "26157510", "title": "The necessity of connection structures in neural models of variable binding.", "abstract": "In his review of neural binding problems, Feldman (Cogn Neurodyn 7:1-11, 2013) addressed two types of models as solutions of (novel) variable binding. The one type uses labels such as phase synchrony of activation. The other ('connectivity based') type uses dedicated connections structures to achieve novel variable binding. Feldman argued that label (synchrony) based models are the only possible candidates to handle novel variable binding, whereas connectivity based models lack the flexibility required for that. We argue and illustrate that Feldman's analysis is incorrect. Contrary to his conclusion, connectivity based models are the only viable candidates for models of novel variable binding because they are the only type of models that can produce behavior. We will show that the label (synchrony) based models analyzed by Feldman are in fact examples of connectivity based models. Feldman's analysis that novel variable binding can be achieved without existing connection structures seems to result from analyzing the binding problem in a wrong frame of reference, in particular in an outside instead of the required inside frame of reference. Connectivity based models can be models of novel variable binding when they possess a connection structure that resembles a small-world network, as found in the brain. 
We will illustrate binding with this type of model with episode binding and the binding of words, including novel words, in sentence structures." }, { "pmid": "23596410", "title": "On the dynamics of cortical development: synchrony and synaptic self-organization.", "abstract": "We describe a model for cortical development that resolves long-standing difficulties of earlier models. It is proposed that, during embryonic development, synchronous firing of neurons and their competition for limited metabolic resources leads to selection of an array of neurons with ultra-small-world characteristics. Consequently, in the visual cortex, macrocolumns linked by superficial patchy connections emerge in anatomically realistic patterns, with an ante-natal arrangement which projects signals from the surrounding cortex onto each macrocolumn in a form analogous to the projection of a Euclidean plane onto a Möbius strip. This configuration reproduces typical cortical response maps, and simulations of signal flow explain cortical responses to moving lines as functions of stimulus velocity, length, and orientation. With the introduction of direct visual inputs, under the operation of Hebbian learning, development of mature selective response \"tuning\" to stimuli of given orientation, spatial frequency, and temporal frequency would then take place, overwriting the earlier ante-natal configuration. The model is provisionally extended to hierarchical interactions of the visual cortex with higher centers, and a general principle for cortical processing of spatio-temporal images is sketched." }, { "pmid": "29074765", "title": "The emperor's new wardrobe: Rebalancing diversity of animal models in neuroscience research.", "abstract": "The neuroscience field is steaming ahead, fueled by a revolution in cutting-edge technologies. Concurrently, another revolution has been underway-the diversity of species utilized for neuroscience research is sharply declining, as the field converges on a few selected model organisms. Here, from the perspective of a young scientist, I naively ask: Is the great diversity of questions in neuroscience best studied in only a handful of animal models? I review some of the limitations the field is facing following this convergence and how these can be rectified by increasing the diversity of appropriate model species. I propose that at this exciting time of revolution in genetics and device technologies, neuroscience might be ready to diversify again, if provided the appropriate support." }, { "pmid": "26593093", "title": "Competing Neural Ensembles in Motor Cortex Gate Goal-Directed Motor Output.", "abstract": "Unit recordings in behaving animals have revealed the transformation of sensory to motor representations in cortical neurons. However, we still lack basic insights into the mechanisms by which neurons interact to generate such transformations. Here, we study cortical circuits related to behavioral control in mice engaged in a sensory detection task. We recorded neural activity using extracellular and intracellular techniques and analyzed the task-related neural dynamics to reveal underlying circuit processes. Within motor cortex, we find two populations of neurons that have opposing spiking patterns in anticipation of movement. From correlation analyses and circuit modeling, we suggest that these dynamics reflect neural ensembles engaged in a competition. Furthermore, we demonstrate how this competitive circuit may convert a transient, sensory stimulus into a motor command. 
Together, these data reveal cellular and circuit processes underlying behavioral control and establish an essential framework for future studies linking cellular activity to behavior." }, { "pmid": "12757823", "title": "The disunity of consciousness.", "abstract": "Attempts to decode what has become known as the (singular) neural correlate of consciousness (NCC) suppose that consciousness is a single unified entity, a belief that finds expression in the term 'unity of consciousness'. Here, I propose that the quest for the NCC will remain elusive until we acknowledge that consciousness is not a unity, and that there are instead many consciousnesses that are distributed in time and space." }, { "pmid": "25823871", "title": "A massively asynchronous, parallel brain.", "abstract": "Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously--with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain." } ]
Frontiers in Neuroscience
30809114
PMC6380225
10.3389/fnins.2019.00073
FLGR: Fixed Length Gists Representation Learning for RNN-HMM Hybrid-Based Neuromorphic Continuous Gesture Recognition
A neuromorphic vision sensor is a novel passive, frameless sensing modality with several advantages over conventional cameras. Frame-based cameras have an average frame rate of 30 fps, causing motion blur when capturing fast motion, e.g., hand gestures. Rather than wastefully sending entire images at a fixed frame rate, neuromorphic vision sensors only transmit the local pixel-level changes induced by movement in a scene when they occur. This leads to advantageous characteristics, including low energy consumption, high dynamic range, a sparse event stream and low response latency. In this study, a novel representation learning method is proposed: Fixed Length Gists Representation (FLGR) learning for event-based gesture recognition. Previous methods accumulate events into video frames over a time window (e.g., 30 ms) to build an accumulated image-level representation. However, the accumulated-frame-based representation discards the event-driven paradigm that makes the neuromorphic vision sensor attractive. New representations are therefore needed to fill the gap in non-accumulated-frame-based representation and exploit the further capabilities of neuromorphic vision. The proposed FLGR is a sequence learned by a mixture density autoencoder and better preserves the nature of event-based data. FLGR has a fixed-length data format, making it easy to feed to a sequence classifier. Moreover, an RNN-HMM hybrid is proposed to address the continuous gesture recognition problem. A recurrent neural network (RNN) is applied for FLGR sequence classification, while a hidden Markov model (HMM) is employed to localize candidate gestures and improve the result in a continuous sequence. A neuromorphic continuous hand gesture dataset (Neuro ConGD Dataset) was developed with 17 hand gesture classes for the neuromorphic research community. We hope that FLGR can inspire studies on highly efficient, high-speed, and high-dynamic-range event-based sequence classification tasks.
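To make the contrast between accumulated frames and a non-accumulated, fixed-length representation concrete, the following sketch groups raw DVS events (timestamp, x, y, polarity) into fixed-count packets and summarizes each packet with a small feature vector. This is only an illustrative stand-in for FLGR; the packet size, feature choice, sensor resolution, and function names are assumptions, not the paper's mixture-density-autoencoder pipeline.

```python
import numpy as np

def events_to_packets(events, packet_size=512):
    """Split an event stream into consecutive fixed-count packets.

    events: array of shape (N, 4) with columns (timestamp_us, x, y, polarity),
    a common DVS event layout. Trailing events that do not fill a packet are dropped.
    """
    n_packets = len(events) // packet_size
    return events[:n_packets * packet_size].reshape(n_packets, packet_size, 4)

def packet_features(packet):
    """A hand-crafted fixed-length summary of one packet (illustrative only;
    FLGR itself learns its representation with a mixture density autoencoder)."""
    t, x, y, p = packet[:, 0], packet[:, 1], packet[:, 2], packet[:, 3]
    return np.array([
        t[-1] - t[0],          # temporal extent of the packet (us)
        x.mean(), y.mean(),    # spatial centroid of activity
        x.std(), y.std(),      # spatial spread
        p.mean(),              # polarity balance (ON vs. OFF events)
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic event stream: 5000 events with increasing timestamps (assumed DVS128-like resolution)
    ts = np.sort(rng.integers(0, 1_000_000, size=5000))
    xs = rng.integers(0, 128, size=5000)
    ys = rng.integers(0, 128, size=5000)
    ps = rng.integers(0, 2, size=5000)
    stream = np.stack([ts, xs, ys, ps], axis=1)
    feats = np.array([packet_features(pk) for pk in events_to_packets(stream)])
    print(feats.shape)   # (n_packets, 6): one fixed-length vector per packet
```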
1.3. Related Works
Under the recent development of deep learning (Krizhevsky et al., 2012), many methods for hand gesture recognition with conventional cameras have been presented based on Convolutional Neural Networks (ConvNets) (Ji et al., 2013; Neverova et al., 2014; Molchanov et al., 2015; Knoller et al., 2016; Sinha et al., 2016) and RNNs (Ohn-Bar and Trivedi, 2014; Neverova et al., 2016; Wu et al., 2016). Among these frameworks, RNNs are attractive because they equip neural networks with memories for temporal tasks, and the introduction of gating units such as the LSTM and GRU (Hochreiter and Schmidhuber, 1997; Cho et al., 2014) has significantly contributed to making the learning of these networks manageable. In general, deep-learning-based methods outperform traditional handcrafted-feature-based methods in the gesture recognition task (Wang et al., 2018).

All the efforts above rely on conventional cameras running at a fixed frame rate. Conventional cameras suffer from various motion-related artifacts (motion blur, rolling shutter, etc.) which may degrade the performance of rapid gesture recognition. In contrast, the event data generated by neuromorphic vision sensors act as natural motion detectors and automatically filter out temporally redundant information. The DVS is therefore a promising sensor for low-latency and low-bandwidth tasks. A robotic goalkeeper with a reaction time of 3 ms was presented in Delbruck and Lang (2013). Robot localization using a DVS during high-speed maneuvers was demonstrated by Mueggler et al. (2014), in which rotational speeds of up to 1,200°/s were measured during quadrotor flips. In the meantime, gesture recognition is vital for human-robot interaction. Hence, a neuromorphic gesture recognition system is urgently needed.

Ahn et al. (2011) were one of the first groups to use the DVS for gesture recognition, detecting and distinguishing between the three throws of the classical rock-paper-scissors game. It is noteworthy that their work was published in 2011, predating the deep learning era. The DVS' inventors performed gesture recognition with spiking neural networks and leaky integrate-and-fire (LIF) neurons (Gerstner and Kistler, 2002; Lee et al., 2012a,b, 2014). Spiking neural networks (SNNs) are trainable models of the brain and are therefore well suited to neuromorphic sensors. In 2016, deep learning was first applied to gesture recognition with the DVS (Park et al., 2016). Using super-resolution by spatiotemporal demosaicing of the event stream, the authors trained a GoogLeNet CNN to classify the reconstructed temporal-fusion frames and decoded the network output with an LSTM. Amir et al. (2017) processed a live DVS event stream with IBM TrueNorth, a natively event-based processor containing 1 million spiking neurons. Configured as a convolutional neural network (CNN), the TrueNorth chip identifies the onset of a gesture with a latency of 105 ms while consuming <200 mW.

In fact, continuous gesture recognition is a task quite different from segmented gesture recognition. For segmented gesture recognition (Lee et al., 2012a; Amir et al., 2017), the problem can be summarized as classifying a well-delineated sequence of video frames as one of a set of gesture types. This is in contrast with continuous/online human gesture recognition, where there are no a priori given boundaries of gesture execution.
In a simple case, where a video is segmented to contain only one execution of a human gesture, the system aims to correctly classify the video into its gesture category. In more general and complex cases, continuous recognition of human gestures must be performed to detect the starting and ending times of all occurring gestures in an input video (Aggarwal and Ryoo, 2011). However, there has so far been no measurement of detection performance for the neuromorphic gesture recognition task. In brief, continuous gesture recognition is the first step toward online recognition, although it is harder than segmented gesture recognition (Wang et al., 2018).

However, non-accumulated-frame-based representations for event-driven recognition have not received enough attention. Both Park et al. (2016) and Amir et al. (2017) rely on semi-accumulated-frame-based representations and train CNNs on the resulting frames. Moreover, the CNN in Amir et al. (2017) runs on neuromorphic hardware that is not fully accessible to the scientific and academic community.

There has been no pure deep network that can process sequences of non-accumulated-frame-based representations for the gesture recognition task. Such a network is urgently needed to process events, or non-accumulated-frame-based representation sequences, and to explore a paradigm shift in the neuromorphic vision community (Cadena et al., 2016). Because of the asynchronous nature of the data, recognition directly on raw events may be unsatisfactory. Learning a novel non-accumulated-frame-based representation for event-driven recognition therefore becomes a promising direction to reduce this negative effect and to maximize the capability of event-based sequence data.

The rest of this study is organized as follows: section 2 describes the preprocessing, the representation learning, and the RNN-HMM hybrid temporal classification for neuromorphic continuous gesture recognition. Section 3 presents the Neuro ConGD dataset collection, the evaluation metrics, and the experimental results. Section 4 draws the conclusions of this study.
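For contrast with the non-accumulated representations argued for above, the sketch below illustrates the conventional accumulated-frame approach: DVS events are binned into a 2D histogram over a fixed time window (e.g., 30 ms). The event tuple format (x, y, timestamp in microseconds, polarity) and the 128x128 sensor resolution are assumptions for illustration only.

```python
# Illustrative sketch of the accumulated-frame-based representation discussed
# in the text; not code from the cited works.
import numpy as np

def accumulate_events(events, t_start_us, window_us=30_000, height=128, width=128):
    """Sum event polarities per pixel within [t_start_us, t_start_us + window_us)."""
    frame = np.zeros((height, width), dtype=np.float32)
    t_end_us = t_start_us + window_us
    for x, y, t, p in events:
        if t_start_us <= t < t_end_us:
            frame[y, x] += 1.0 if p > 0 else -1.0
    return frame

# Toy event stream: 1,000 random events over a 100 ms span.
rng = np.random.default_rng(0)
events = list(zip(rng.integers(0, 128, 1000),      # x coordinates
                  rng.integers(0, 128, 1000),      # y coordinates
                  rng.integers(0, 100_000, 1000),  # timestamps in microseconds
                  rng.choice([-1, 1], 1000)))      # polarities
frame = accumulate_events(events, t_start_us=0)
```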
[ "24311999", "9377276", "27630540", "22392705", "25420246", "26955020", "29875621" ]
[ { "pmid": "24311999", "title": "Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor.", "abstract": "Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bound by the frame period, e.g., 20 ms for 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per pixel illumination change events from the DVS signify moving objects and are used in software to track multiple balls. Motor actions to block the most \"threatening\" ball are based on measured ball positions and velocities. The goalie also sees its single-axis goalie arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations. Blocking capability is about 80% for balls shot from 1 m from the goal even with the fastest-shots, and approaches 100% accuracy when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time. Running with standard USB buses under a standard preemptive multitasking operating system (Windows), the goalie robot achieves median update rates of 550 Hz, with latencies of 2.2 ± 2 ms from ball movement to motor command at a peak CPU load of less than 4%. Practical observations and measurements of USB device latency are provided." }, { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." }, { "pmid": "22392705", "title": "3D convolutional neural networks for human action recognition.", "abstract": "We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. 
This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods." }, { "pmid": "25420246", "title": "Real-time gesture interface based on event-driven processing from stereo silicon retinas.", "abstract": "We propose a real-time hand gesture interface based on combining a stereo pair of biologically inspired event-based dynamic vision sensor (DVS) silicon retinas with neuromorphic event-driven postprocessing. Compared with conventional vision or 3-D sensors, the use of DVSs, which output asynchronous and sparse events in response to motion, eliminates the need to extract movements from sequences of video frames, and allows significantly faster and more energy-efficient processing. In addition, the rate of input events depends on the observed movements, and thus provides an additional cue for solving the gesture spotting problem, i.e., finding the onsets and offsets of gestures. We propose a postprocessing framework based on spiking neural networks that can process the events received from the DVSs in real time, and provides an architecture for future implementation in neuromorphic hardware devices. The motion trajectories of moving hands are detected by spatiotemporally correlating the stereoscopically verged asynchronous events from the DVSs by using leaky integrate-and-fire (LIF) neurons. Adaptive thresholds of the LIF neurons achieve the segmentation of trajectories, which are then translated into discrete and finite feature vectors. The feature vectors are classified with hidden Markov models, using a separate Gaussian mixture model for spotting irrelevant transition gestures. The disparity information from stereovision is used to adapt LIF neuron parameters to achieve recognition invariant of the distance of the user to the sensor, and also helps to filter out movements in the background of the user. Exploiting the high dynamic range of DVSs, furthermore, allows gesture recognition over a 60-dB range of scene illuminance. The system achieves recognition rates well over 90% under a variety of variable conditions with static and dynamic backgrounds with naïve users." }, { "pmid": "26955020", "title": "Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition.", "abstract": "This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition where skeleton joint information, depth and RGB images, are the multimodal input observations. 
Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatio-temporal representations using deep neural networks suited to the input modality: a Gaussian-Bernouilli Deep Belief Network (DBN) to handle skeletal dynamics, and a 3D Convolutional Neural Network (3DCNN) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data." }, { "pmid": "29875621", "title": "Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks.", "abstract": "Spiking neural networks (SNNs) are promising in ascertaining brain-like behaviors since spikes are capable of encoding spatio-temporal information. Recent schemes, e.g., pre-training from artificial neural networks (ANNs) or direct training based on backpropagation (BP), make the high-performance supervised training of SNNs possible. However, these methods primarily fasten more attention on its spatial domain information, and the dynamics in temporal domain are attached less significance. Consequently, this might lead to the performance bottleneck, and scores of training techniques shall be additionally required. Another underlying problem is that the spike activity is naturally non-differentiable, raising more difficulties in supervised training of SNNs. In this paper, we propose a spatio-temporal backpropagation (STBP) algorithm for training high-performance SNNs. In order to solve the non-differentiable problem of SNNs, an approximated derivative for spike activity is proposed, being appropriate for gradient descent training. The STBP algorithm combines the layer-by-layer spatial domain (SD) and the timing-dependent temporal domain (TD), and does not require any additional complicated skill. We evaluate this method through adopting both the fully connected and convolutional architecture on the static MNIST dataset, a custom object detection dataset, and the dynamic N-MNIST dataset. Results bespeak that our approach achieves the best accuracy compared with existing state-of-the-art algorithms on spiking networks. This work provides a new perspective to investigate the high-performance SNNs for future brain-like computing paradigm with rich spatio-temporal dynamics." } ]
JMIR Medical Informatics
30735140
PMC6384542
10.2196/10788
Detection of Bleeding Events in Electronic Health Record Notes Using Convolutional Neural Network Models Enhanced With Recurrent Neural Network Autoencoders: Deep Learning Approach
Background: Bleeding events are common and critical and may cause significant morbidity and mortality. High incidences of bleeding events are associated with cardiovascular disease in patients on anticoagulant therapy. Prompt and accurate detection of bleeding events is essential to prevent serious consequences. As bleeding events are often described in clinical notes, automatic detection of bleeding events from electronic health record (EHR) notes may improve drug-safety surveillance and pharmacovigilance.
Objective: We aimed to develop a natural language processing (NLP) system to automatically classify whether an EHR note sentence contains a bleeding event.
Methods: We expert-annotated 878 EHR notes (76,577 sentences and 562,630 word-tokens) to identify bleeding events at the sentence level. This annotated corpus was used to train and validate our NLP systems. We developed an innovative hybrid convolutional neural network (CNN) and long short-term memory (LSTM) autoencoder (HCLA) model that integrates a CNN architecture with a bidirectional LSTM (BiLSTM) autoencoder model to leverage large unlabeled EHR data.
Results: HCLA achieved the best area under the receiver operating characteristic curve (0.957) and F1 score (0.938) in identifying whether a sentence contains a bleeding event, thereby surpassing the strong support vector machine baselines and other CNN and autoencoder models.
Conclusions: By incorporating a supervised CNN model and a pretrained unsupervised BiLSTM autoencoder, the HCLA achieved high performance in detecting bleeding events.
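The following is a hedged, minimal sketch of the general hybrid idea described above (not the published HCLA code): a Kim-style 1D CNN over word indices is combined with a sentence vector that would come from a separately pretrained BiLSTM autoencoder, for binary "bleeding event" sentence classification. Vocabulary size, sequence length, and dimensions are assumptions; the autoencoder pretraining itself is omitted.

```python
# Sketch of a CNN sentence classifier augmented with pretrained sentence features.
import tensorflow as tf

VOCAB, MAX_LEN, EMB, ENC_DIM = 20000, 60, 100, 128  # assumed sizes

# Token branch: embedding + multi-width 1D convolutions with max pooling.
tokens = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
emb = tf.keras.layers.Embedding(VOCAB, EMB)(tokens)
convs = [tf.keras.layers.GlobalMaxPooling1D()(
             tf.keras.layers.Conv1D(64, k, activation="relu")(emb))
         for k in (3, 4, 5)]
cnn_feat = tf.keras.layers.Concatenate()(convs)

# Second input: a fixed sentence vector produced by a BiLSTM autoencoder that
# would be pretrained on large unlabeled EHR text (pretraining not shown).
autoenc_feat = tf.keras.Input(shape=(ENC_DIM,))

merged = tf.keras.layers.Concatenate()([cnn_feat, autoenc_feat])
merged = tf.keras.layers.Dropout(0.5)(merged)
output = tf.keras.layers.Dense(1, activation="sigmoid")(merged)

model = tf.keras.Model(inputs=[tokens, autoenc_feat], outputs=output)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```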
Related Works
Existing work on automated bleeding detection mainly involves the detection and classification of bleeding in wireless capsule endoscopy images, for which neural network methods have also been employed [23,24]. In addition, previous studies have assessed the detection of bleeding events in outcome studies by using health registers [25]. However, studies on the detection of bleeding events in EHR notes are lacking.

The proposed model is based on neural network models that learn feature representations for sentence-level classification. Related work includes the CNN models that first made a series of breakthroughs in the computer vision field and subsequently showed excellent performance on NLP tasks such as machine translation [26], sentence classification [27,28], and sentence modelling [29]. Autoencoders were originally proposed to reduce the dimensionality of images and documents [21] and were subsequently applied to many NLP tasks such as sentiment analysis [30], machine translation [31], and paraphrase detection [32].

Neural network models have also been applied to clinical data-mining tasks. Gehrmann et al [33] applied CNNs to 10 phenotyping tasks and showed that they outperformed concept extraction-based methods in almost all tasks. A CNN was used to classify radiology free-text reports and showed accuracy equivalent to or beyond that of an existing traditional NLP model [34]. Lin et al [35] also used a CNN model to identify International Classification of Diseases, Tenth Revision, Clinical Modification, diagnosis codes in discharge notes and showed outstanding performance compared with traditional methods; they also showed that the convolutional layers of the CNN can effectively identify keywords for use in the prediction of diagnosis codes. Since our annotated data are relatively small, we expanded the CNN model by integrating it with an LSTM-based autoencoder. Tran et al [36] developed two independent deep neural network models, one based on CNNs and the other based on RNNs with hierarchical attention, for the prediction of mental conditions in psychiatric notes; their study showed that the CNN and RNN models outperformed competitive baseline approaches. Furthermore, previous studies have used semisupervised learning methods such as learning from positive and unlabeled examples [37] and the anchor-and-learn method [38], for which traditional machine-learning algorithms like expectation-maximization and SVM can be used to build classifiers.
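For comparison with the neural approaches reviewed above, a minimal sketch of the traditional machine-learning family they are contrasted with is shown below: a TF-IDF bag-of-words representation with a linear SVM for sentence-level classification. The clinical sentences and labels are toy placeholders, not data from the study.

```python
# Illustrative traditional baseline (not code from the cited studies).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

sentences = [
    "Patient developed melena after starting warfarin.",      # toy: bleeding
    "No evidence of active bleeding on examination.",          # toy: non-bleeding
    "Hemoglobin stable, continue current anticoagulation.",    # toy: non-bleeding
    "Large hematoma noted at the injection site.",             # toy: bleeding
]
labels = [1, 0, 0, 1]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1), LinearSVC())
clf.fit(sentences, labels)
print(clf.predict(["Guaiac-positive stool with falling hematocrit."]))
```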
[ "23479259", "28122885", "14644891", "21694823", "25150296", "16636233", "15452298", "27687455", "17620536", "19261947", "29989977", "29854183", "27885364", "9377276", "16873662", "20703770", "27617328", "29447188", "29135365", "29109070", "28606869", "27107443", "14681409", "26187250", "28419261" ]
[ { "pmid": "23479259", "title": "Assessing bleeding risk in patients taking anticoagulants.", "abstract": "Anticoagulant medications are commonly used for the prevention and treatment of thromboembolism. Although highly effective, they are also associated with significant bleeding risks. Numerous individual clinical factors have been linked to an increased risk of hemorrhage, including older age, anemia, and renal disease. To help quantify hemorrhage risk for individual patients, a number of clinical risk prediction tools have been developed. These risk prediction tools differ in how they were derived and how they identify and weight individual risk factors. At present, their ability to effective predict anticoagulant-associated hemorrhage remains modest. Use of risk prediction tools to estimate bleeding in clinical practice is most influential when applied to patients at the lower spectrum of thromboembolic risk, when the risk of hemorrhage will more strongly affect clinical decisions about anticoagulation. Using risk tools may also help counsel and inform patients about their potential risk for hemorrhage while on anticoagulants, and can identify patients who might benefit from more careful management of anticoagulation." }, { "pmid": "14644891", "title": "Clinical impact of bleeding in patients taking oral anticoagulant therapy for venous thromboembolism: a meta-analysis.", "abstract": "BACKGROUND\nClinicians should consider the clinical impact of anticoagulant-related bleeding when deciding on the duration of anticoagulant therapy in patients with venous thromboembolism.\n\n\nPURPOSE\nTo provide reliable estimates of the clinical impact of anticoagulant-related bleeding, defined as the case-fatality rate of major bleeding and the risk for intracranial bleeding.\n\n\nDATA SOURCES\nMEDLINE (January 1989 to May 2003), Cochrane Controlled Trial Registry, thromboembolism experts, and reference lists; English-language literature only.\n\n\nSTUDY SELECTION\nRandomized, controlled trials and prospective cohort studies that investigated patients with venous thromboembolism who received oral anticoagulant therapy (target international normalized ratio, 2.0 to 3.0) for at least 3 months and that reported major bleeding and death as primary study outcomes.\n\n\nDATA EXTRACTION\nTwo reviewers independently extracted data on the number of anticoagulant-related major and intracranial bleeding episodes and on whether these events were fatal or nonfatal.\n\n\nDATA SYNTHESIS\nThe authors analyzed 33 studies involving 4374 patient-years of oral anticoagulant therapy. For all patients, the case-fatality rate of major bleeding was 13.4% (95% CI, 9.4% to 17.4%) and the rate of intracranial bleeding was 1.15 per 100 patient-years (CI, 1.14 to 1.16 per 100 patient-years). For patients who received anticoagulant therapy for more than 3 months, the case-fatality rate of major bleeding was 9.1% (CI, 2.5% to 21.7%), and the rate of intracranial bleeding was 0.65 per 100 patient-years (CI, 0.63 to 0.68 per 100 patient-years) after the initial 3 months of anticoagulation.\n\n\nCONCLUSION\nThe clinical impact of anticoagulant-related major bleeding in patients with venous thromboembolism is considerable, and clinicians should take this into account when deciding whether to continue long-term oral anticoagulant therapy in an individual patient." 
}, { "pmid": "21694823", "title": "Mortality associated with gastrointestinal bleeding events: Comparing short-term clinical outcomes of patients hospitalized for upper GI bleeding and acute myocardial infarction in a US managed care setting.", "abstract": "OBJECTIVES\nTo compare the short-term mortality rates of gastrointestinal (GI) bleeding to those of acute myocardial infarction (AMI) by estimating the 30-, 60-, and 90-day mortality among hospitalized patients.\n\n\nMETHODS\nUnited States national health plan claims data (1999-2003) were used to identify patients hospitalized with a GI bleeding event. Patients were propensity-matched to AMI patients with no evidence of GI bleed from the same US health plan.\n\n\nRESULTS\n12,437 upper GI-bleed patients and 22,847 AMI patients were identified. Propensity score matching yielded 6,923 matched pairs. Matched cohorts were found to have a similar Charlson Comorbidity Index score and to be similar on nearly all utilization and cost measures (excepting emergency room costs). A comparison of outcomes among the matched cohorts found that AMI patients had higher rates of 30-day mortality (4.35% vs 2.54%; p < 0.0001) and rehospitalization (2.56% vs 1.79%; p = 0.002), while GI bleed patients were more likely to have a repeat procedure (72.38% vs 44.95%; p < 0.001) following their initial hospitalization. The majority of the difference in overall 30-day mortality between GI bleed and AMI patients was accounted for by mortality during the initial hospitalization (1.91% vs 3.58%).\n\n\nCONCLUSIONS\nGI bleeding events result in significant mortality similar to that of an AMI after adjusting for the initial hospitalization." }, { "pmid": "25150296", "title": "The impact of bleeding complications in patients receiving target-specific oral anticoagulants: a systematic review and meta-analysis.", "abstract": "Vitamin K antagonists (VKAs) have been the standard of care for treatment of thromboembolic diseases. Target-specific oral anticoagulants (TSOACs) have been developed and found to be at least noninferior to VKAs with regard to efficacy, but the risk of bleeding with TSOACs remains controversial. We performed a systematic review and meta-analysis of phase-3 randomized controlled trials (RCTs) to assess the bleeding side effects of TSOACs compared with VKAs in patients with venous thromboembolism or atrial fibrillation. We searched MEDLINE, EMBASE, and Cochrane Central Register of Controlled Trials; conference abstracts; and www.clinicaltrials.gov with no language restriction. Two reviewers independently performed study selection, data extraction, and study quality assessment. Twelve RCTs involving 102 607 patients were retrieved. TSOACs significantly reduced the risk of overall major bleeding (relative risk [RR] 0.72, P < .01), fatal bleeding (RR 0.53, P < .01), intracranial bleeding (RR 0.43, P < .01), clinically relevant nonmajor bleeding (RR 0.78, P < .01), and total bleeding (RR 0.76, P < .01). There was no significant difference in major gastrointestinal bleeding between TSOACs and VKAs (RR 0.94, P = .62). When compared with VKAs, TSOACs are associated with less major bleeding, fatal bleeding, intracranial bleeding, clinically relevant nonmajor bleeding, and total bleeding. Additionally, TSOACs do not increase the risk of gastrointestinal bleeding." 
}, { "pmid": "16636233", "title": "Hematoma growth is a determinant of mortality and poor outcome after intracerebral hemorrhage.", "abstract": "BACKGROUND\nAlthough volume of intracerebral hemorrhage (ICH) is a predictor of mortality, it is unknown whether subsequent hematoma growth further increases the risk of death or poor functional outcome.\n\n\nMETHODS\nTo determine if hematoma growth independently predicts poor outcome, the authors performed an individual meta-analysis of patients with spontaneous ICH who had CT within 3 hours of onset and 24-hour follow-up. Placebo patients were pooled from three trials investigating dosing, safety, and efficacy of rFVIIa (n = 115), and 103 patients from the Cincinnati study (total 218). Other baseline factors included age, gender, blood glucose, blood pressure, Glasgow Coma Score (GCS), intraventricular hemorrhage (IVH), and location.\n\n\nRESULTS\nOverall, 72.9% of patients exhibited some degree of hematoma growth. Percentage hematoma growth (hazard ratio [HR] 1.05 per 10% increase [95% CI: 1.03, 1.08; p < 0.0001]), initial ICH volume (HR 1.01 per mL [95% CI: 1.00, 1.02; p = 0.003]), GCS (HR 0.88 [95% CI: 0.81, 0.96; p = 0.003]), and IVH (HR 2.23 [95% CI: 1.25, 3.98; p = 0.007]) were all associated with increased mortality. Percentage growth (cumulative OR 0.84 [95% CI: 0.75, 0.92; p < 0.0001]), initial ICH volume (cumulative OR 0.94 [95% CI: 0.91, 0.97; p < 0.0001]), GCS (cumulative OR 1.46 [95% CI: 1.21, 1.82; p < 0.0001]), and age (cumulative OR 0.95 [95% CI: 0.92, 0.98; p = 0.0009]) predicted outcome modified Rankin Scale. Gender, location, blood glucose, and blood pressure did not predict outcomes.\n\n\nCONCLUSIONS\nHematoma growth is an independent determinant of both mortality and functional outcome after intracerebral hemorrhage. Attenuation of growth is an important therapeutic strategy." }, { "pmid": "15452298", "title": "Warfarin, hematoma expansion, and outcome of intracerebral hemorrhage.", "abstract": "BACKGROUND\nWarfarin increases mortality of intracerebral hemorrhage (ICH). The authors investigated whether this effect reflects increased baseline ICH volume at presentation or increased ICH expansion.\n\n\nMETHODS\nSubjects were drawn from an ongoing prospective cohort study of ICH outcome. The effect of warfarin on baseline ICH volume was studied in 183 consecutive cases of supratentorial ICH age > or = 18 years admitted to the emergency department over a 5-year period. Baseline ICH volume was determined using computerized volumetric analysis. The effect of warfarin on ICH expansion (increase in volume > or = 33% of baseline) was analyzed in 70 consecutive cases in whom ICH volumes were measured on all subsequent CT scans up to 7 days after admission. Multivariable analysis was used to determine warfarin's influence on baseline ICH, ICH expansion, and whether warfarin's effect on ICH mortality was dependent on baseline volume or subsequent expansion.\n\n\nRESULTS\nThere was no effect of warfarin on initial volume. Predictors of larger baseline volume were hyperglycemia (p < 0.0001) and lobar hemorrhage (p < 0.0001). Warfarin patients were at increased risk of death, even when controlling for ICH volume at presentation. Warfarin was the sole predictor of expansion (OR 6.2, 95% CI 1.7 to 22.9) and expansion in warfarin patients was detected later in the hospital course compared with non-warfarin patients (p < 0.001). 
ICH expansion showed a trend toward increased mortality (OR 3.5, 95% CI 0.7 to 8.9, p = 0.14) and reduced the marginal effect of warfarin on ICH mortality.\n\n\nCONCLUSIONS\nWarfarin did not increase ICH volume at presentation but did raise the risk of in-hospital hematoma expansion. This expansion appears to mediate part of warfarin's effect on ICH mortality." }, { "pmid": "27687455", "title": "Risk stratification, perioperative and periprocedural management of the patient receiving anticoagulant therapy.", "abstract": "As a result of the aging US population and the subsequent increase in the prevalence of coronary disease and atrial fibrillation, therapeutic use of anticoagulants has increased. Perioperative and periprocedural management of anticoagulated patients has become routine for anesthesiologists, who frequently mediate communication between the prescribing physician and the surgeon and assess the risks of both thromboembolic complications and hemorrhage. Data from randomized clinical trials on perioperative management of antithrombotic therapy are lacking. Therefore, clinical judgment is typically needed regarding decisions to continue, discontinue, bridge, or resume anticoagulation and regarding the time points when these events should occur in the perioperative period. In this review, we will discuss the most commonly used anticoagulants used in outpatient settings and discuss their management in the perioperative period. Special considerations for regional anesthesia and interventional pain procedures will also be reviewed." }, { "pmid": "17620536", "title": "Bleeding complications with warfarin use: a prevalent adverse effect resulting in regulatory action.", "abstract": "BACKGROUND\nWarfarin sodium is widely used and causes bleeding; a review might suggest the need for regulatory action by the US Food and Drug Administration (FDA).\n\n\nMETHODS\nWe accessed warfarin prescriptions from the National Prescription Audit Plus database of IMS Health (Plymouth Meeting, Pennsylvania), adverse event reports submitted to the FDA, deaths due to therapeutic use of anticoagulants from vital statistics data, and warfarin bleeding complications from national hospital emergency department data.\n\n\nRESULTS\nThe number of dispensed outpatient prescriptions for warfarin increased 45%, from 21 million in 1998 to nearly 31 million in 2004. The FDA's Adverse Event Reporting System indicated that warfarin is among the top 10 drugs with the largest number of serious adverse event reports submitted during the 1990 and 2000 decades. From US death certificates, anticoagulants ranked first in 2003 and 2004 in the number of total mentions of deaths for drugs causing \"adverse effects in therapeutic use.\" Data from hospital emergency departments for 1999 through 2003 indicated that warfarin was associated with about 29 000 visits for bleeding complications per year, and it was among the drugs with the most visits. These data are consistent with literature reports of major bleeding frequencies for warfarin as high as 10% to 16%.\n\n\nCONCLUSIONS\nUse of warfarin has increased, and bleeding from warfarin use is a prevalent reaction and an important cause of mortality. Consequently, a \"black box\" warning about warfarin's bleeding risk was added to the US product labeling in 2006. Physicians and nurses should tell patients to immediately report signs and symptoms of bleeding. A Medication Guide, which is required to be provided with each prescription, reinforces this message." 
}, { "pmid": "19261947", "title": "Comparison of information content of structured and narrative text data sources on the example of medication intensification.", "abstract": "OBJECTIVE To compare information obtained from narrative and structured electronic sources using anti-hypertensive medication intensification as an example clinical issue of interest. DESIGN A retrospective cohort study of 5,634 hypertensive patients with diabetes from 2000 to 2005. MEASUREMENTS The authors determined the fraction of medication intensification events documented in both narrative and structured data in the electronic medical record. The authors analyzed the relationship between provider characteristics and concordance between intensifications in narrative and structured data. As there is no gold standard data source for medication information, the authors clinically validated medication intensification information by assessing the relationship between documented medication intensification and the patients' blood pressure in univariate and multivariate models. RESULTS Overall, 5,627 (30.9%) of 18,185 medication intensification events were documented in both sources. For a medication intensification event documented in narrative notes the probability of a concordant entry in structured records increased by 11% for each study year (p < 0.0001) and decreased by 19% for each decade of provider age (p = 0.035). In a multivariate model that adjusted for patient demographics and intraphysician correlations, an increase of one medication intensification per month documented in either narrative or structured data were associated with a 5-8 mm Hg monthly decrease in systolic and 1.5-4 mm Hg decrease in diastolic blood pressure (p < 0.0001 for all). CONCLUSION Narrative and structured electronic data sources provide complementary information on anti-hypertensive medication intensification. Clinical validity of information in both sources was demonstrated by correlation with changes in blood pressure." }, { "pmid": "29989977", "title": "Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis.", "abstract": "The past decade has seen an explosion in the amount of digital information stored in electronic health records (EHRs). While primarily designed for archiving patient information and performing administrative healthcare tasks like billing, many researchers have found secondary use of these records for various clinical informatics applications. Over the same period, the machine learning community has seen widespread advances in the field of deep learning. In this review, we survey the current research on applying deep learning to clinical tasks based on EHR data, where we find a variety of deep learning techniques and frameworks being applied to several types of clinical applications including information extraction, representation learning, outcome prediction, phenotyping, and deidentification. We identify several limitations of current research involving topics such as model interpretability, data heterogeneity, and lack of universal benchmarks. We conclude by summarizing the state of the field and identifying avenues of future deep EHR research." }, { "pmid": "29854183", "title": "A hybrid Neural Network Model for Joint Prediction of Presence and Period Assertions of Medical Events in Clinical Notes.", "abstract": "In this paper, we propose a novel neural network architecture for clinical text mining. 
We formulate this hybrid neural network model (HNN), composed of recurrent neural network and deep residual network, to jointly predict the presence and period assertion values associated with medical events in clinical texts. We evaluate the effectiveness of our model on a corpus of expert-annotated longitudinal Electronic Health Records (EHR) notes from Cancer patients. Our experiments show that HNN improves the joint assertion classification accuracy as compared to conventional baselines." }, { "pmid": "27885364", "title": "Bidirectional RNN for Medical Event Detection in Electronic Health Records.", "abstract": "Sequence labeling for extraction of medical events and their attributes from unstructured text in Electronic Health Record (EHR) notes is a key step towards semantic understanding of EHRs. It has important applications in health informatics including pharmacovigilance and drug surveillance. The state of the art supervised machine learning models in this domain are based on Conditional Random Fields (CRFs) with features calculated from fixed context windows. In this application, we explored recurrent neural network frameworks and show that they significantly out-performed the CRF models." }, { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." }, { "pmid": "16873662", "title": "Reducing the dimensionality of data with neural networks.", "abstract": "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such \"autoencoder\" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data." }, { "pmid": "20703770", "title": "Bleeding detection in Wireless Capsule Endoscopy based on Probabilistic Neural Network.", "abstract": "Wireless Capsule Endoscopy (WCE), which allows clinicians to inspect the whole gastrointestinal tract (GI) noninvasively, has bloomed into one of the most efficient technologies to diagnose the bleeding in GI tract. 
However WCE generates large amount of images in one examination of a patient. It is hard for clinicians to leave continuous time to examine the full WCE images, and this is the main factor limiting the wider application of WCE in clinic. A novel intelligent bleeding detection based on Probabilistic Neural Network (PNN) is proposed in this paper. The features of bleeding region in WCE images distinguishing from non-bleeding region are extracted. A PNN classifier is built to recognize bleeding regions in WCE images. Finally the intelligent bleeding detection method is implemented through programming. The experiments show this method can correctly recognize the bleeding regions in WCE images and clearly mark them out. The sensitivity and specificity on image level are measured as 93.1% and 85.6% respectively." }, { "pmid": "27617328", "title": "Usefulness of Health Registers for detection of bleeding events in outcome studies.", "abstract": "Administrative and claims databases are attractive for safety studies of anticoagulant and antithrombotic drugs. However, the validity of such data is often uncertain. It was our aim to assess the usefulness of the Swedish administrative health databases for detection of major bleeding events. All individuals with atrial fibrillation in Stockholm County from 2006 to 2013 (n=78,022) were identified from the Swedish Patient register. A search for bleeding diagnoses was done in the Patient register and in the Cause of Death register. The medical records of a random sample of 761 patients were studied and classified in a blinded and pre-specified way. The highest sensitivity (99.5 %) and specificity (94.0 %) were obtained by counting fatal bleeding events with the bleeding diagnosis recorded as first or second cause of death, and all hospitalisations without regard to the position of the diagnosis. Codes for transfusions were unreliable and did not increase accuracy. The registries identified 99.4 % of intracranial bleeding events and 82.6 % of gastrointestinal bleeding events correctly. All patients classified as \"no bleedings\" were indeed without bleeding. Overall the sensitivity was 85.5 % and the specificity 95.9 % for major bleeding events. In conclusion, Swedish nationwide health registries are well suited for conducting outcome studies involving identification of bleeding events. The use of a diagnostic code for bleeding, irrespective of its position as primary or secondary diagnosis, provided the best sensitivity and specificity for detection of bleeding events, as long as it was limited to contacts resulting in hospital admission." }, { "pmid": "29447188", "title": "Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives.", "abstract": "In secondary analysis of electronic health records, a crucial task consists in correctly identifying the patient cohort under investigation. In many cases, the most valuable and relevant information for an accurate classification of medical conditions exist only in clinical narratives. Therefore, it is necessary to use natural language processing (NLP) techniques to extract and evaluate these narratives. The most commonly used approach to this problem relies on extracting a number of clinician-defined medical concepts from text and using machine learning techniques to identify whether a particular patient has a certain condition. However, recent advances in deep learning and NLP enable models to learn a rich representation of (medical) language. 
Convolutional neural networks (CNN) for text classification can augment the existing techniques by leveraging the representation of language to learn which phrases in a text are relevant for a given medical condition. In this work, we compare concept extraction based methods with CNNs and other commonly used models in NLP in ten phenotyping tasks using 1,610 discharge summaries from the MIMIC-III database. We show that CNNs outperform concept extraction based methods in almost all of the tasks, with an improvement in F1-score of up to 26 and up to 7 percentage points in area under the ROC curve (AUC). We additionally assess the interpretability of both approaches by presenting and evaluating methods that calculate and extract the most salient phrases for a prediction. The results indicate that CNNs are a valid alternative to existing approaches in patient phenotyping and cohort identification, and should be further investigated. Moreover, the deep learning approach presented in this paper can be used to assist clinicians during chart review or support the extraction of billing codes from text by identifying and highlighting relevant phrases for various medical conditions." }, { "pmid": "29135365", "title": "Deep Learning to Classify Radiology Free-Text Reports.", "abstract": "Purpose To evaluate the performance of a deep learning convolutional neural network (CNN) model compared with a traditional natural language processing (NLP) model in extracting pulmonary embolism (PE) findings from thoracic computed tomography (CT) reports from two institutions. Materials and Methods Contrast material-enhanced CT examinations of the chest performed between January 1, 1998, and January 1, 2016, were selected. Annotations by two human radiologists were made for three categories: the presence, chronicity, and location of PE. Classification of performance of a CNN model with an unsupervised learning algorithm for obtaining vector representations of words was compared with the open-source application PeFinder. Sensitivity, specificity, accuracy, and F1 scores for both the CNN model and PeFinder in the internal and external validation sets were determined. Results The CNN model demonstrated an accuracy of 99% and an area under the curve value of 0.97. For internal validation report data, the CNN model had a statistically significant larger F1 score (0.938) than did PeFinder (0.867) when classifying findings as either PE positive or PE negative, but no significant difference in sensitivity, specificity, or accuracy was found. For external validation report data, no statistical difference between the performance of the CNN model and PeFinder was found. Conclusion A deep learning CNN model can classify radiology free-text reports with accuracy equivalent to or beyond that of an existing traditional NLP model. © RSNA, 2017 Online supplemental material is available for this article." }, { "pmid": "29109070", "title": "Artificial Intelligence Learning Semantics via External Resources for Classifying Diagnosis Codes in Discharge Notes.", "abstract": "BACKGROUND\nAutomated disease code classification using free-text medical information is important for public health surveillance. 
However, traditional natural language processing (NLP) pipelines are limited, so we propose a method combining word embedding with a convolutional neural network (CNN).\n\n\nOBJECTIVE\nOur objective was to compare the performance of traditional pipelines (NLP plus supervised machine learning models) with that of word embedding combined with a CNN in conducting a classification task identifying International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) diagnosis codes in discharge notes.\n\n\nMETHODS\nWe used 2 classification methods: (1) extracting from discharge notes some features (terms, n-gram phrases, and SNOMED CT categories) that we used to train a set of supervised machine learning models (support vector machine, random forests, and gradient boosting machine), and (2) building a feature matrix, by a pretrained word embedding model, that we used to train a CNN. We used these methods to identify the chapter-level ICD-10-CM diagnosis codes in a set of discharge notes. We conducted the evaluation using 103,390 discharge notes covering patients hospitalized from June 1, 2015 to January 31, 2017 in the Tri-Service General Hospital in Taipei, Taiwan. We used the receiver operating characteristic curve as an evaluation measure, and calculated the area under the curve (AUC) and F-measure as the global measure of effectiveness.\n\n\nRESULTS\nIn 5-fold cross-validation tests, our method had a higher testing accuracy (mean AUC 0.9696; mean F-measure 0.9086) than traditional NLP-based approaches (mean AUC range 0.8183-0.9571; mean F-measure range 0.5050-0.8739). A real-world simulation that split the training sample and the testing sample by date verified this result (mean AUC 0.9645; mean F-measure 0.9003 using the proposed method). Further analysis showed that the convolutional layers of the CNN effectively identified a large number of keywords and automatically extracted enough concepts to predict the diagnosis codes.\n\n\nCONCLUSIONS\nWord embedding combined with a CNN showed outstanding performance compared with traditional methods, needing very little data preprocessing. This shows that future studies will not be limited by incomplete dictionaries. A large amount of unstructured information from free-text medical writing will be extracted by automated approaches in the future, and we believe that the health care field is about to enter the age of big data." }, { "pmid": "28606869", "title": "Predicting mental conditions based on \"history of present illness\" in psychiatric notes with deep neural networks.", "abstract": "BACKGROUND\nApplications of natural language processing to mental health notes are not common given the sensitive nature of the associated narratives. The CEGS N-GRID 2016 Shared Task in Clinical Natural Language Processing (NLP) changed this scenario by providing the first set of neuropsychiatric notes to participants. 
This study summarizes our efforts and results in proposing a novel data use case for this dataset as part of the third track in this shared task.\n\n\nOBJECTIVE\nWe explore the feasibility and effectiveness of predicting a set of common mental conditions a patient has based on the short textual description of patient's history of present illness typically occurring in the beginning of a psychiatric initial evaluation note.\n\n\nMATERIALS AND METHODS\nWe clean and process the 1000 records made available through the N-GRID clinical NLP task into a key-value dictionary and build a dataset of 986 examples for which there is a narrative for history of present illness as well as Yes/No responses with regards to presence of specific mental conditions. We propose two independent deep neural network models: one based on convolutional neural networks (CNN) and another based on recurrent neural networks with hierarchical attention (ReHAN), the latter of which allows for interpretation of model decisions. We conduct experiments to compare these methods to each other and to baselines based on linear models and named entity recognition (NER).\n\n\nRESULTS\nOur CNN model with optimized thresholding of output probability estimates achieves best overall mean micro-F score of 63.144% for 11 common mental conditions with statistically significant gains (p<0.05) over all other models. The ReHAN model with interpretable attention mechanism scored 61.904% mean micro-F1 score. Both models' improvements over baseline models (support vector machines and NER) are statistically significant. The ReHAN model additionally aids in interpretation of the results by surfacing important words and sentences that lead to a particular prediction for each instance.\n\n\nCONCLUSIONS\nAlthough the history of present illness is a short text segment averaging 300 words, it is a good predictor for a few conditions such as anxiety, depression, panic disorder, and attention deficit hyperactivity disorder. Proposed CNN and RNN models outperform baseline approaches and complement each other when evaluating on a per-label basis." }, { "pmid": "27107443", "title": "Electronic medical record phenotyping using the anchor and learn framework.", "abstract": "BACKGROUND\nElectronic medical records (EMRs) hold a tremendous amount of information about patients that is relevant to determining the optimal approach to patient care. As medicine becomes increasingly precise, a patient's electronic medical record phenotype will play an important role in triggering clinical decision support systems that can deliver personalized recommendations in real time. Learning with anchors presents a method of efficiently learning statistically driven phenotypes with minimal manual intervention.\n\n\nMATERIALS AND METHODS\nWe developed a phenotype library that uses both structured and unstructured data from the EMR to represent patients for real-time clinical decision support. Eight of the phenotypes were evaluated using retrospective EMR data on emergency department patients using a set of prospectively gathered gold standard labels.\n\n\nRESULTS\nWe built a phenotype library with 42 publicly available phenotype definitions. Using information from triage time, the phenotype classifiers have an area under the ROC curve (AUC) of infection 0.89, cancer 0.88, immunosuppressed 0.85, septic shock 0.93, nursing home 0.87, anticoagulated 0.83, cardiac etiology 0.89, and pneumonia 0.90. 
Using information available at the time of disposition from the emergency department, the AUC values are infection 0.91, cancer 0.95, immunosuppressed 0.90, septic shock 0.97, nursing home 0.91, anticoagulated 0.94, cardiac etiology 0.92, and pneumonia 0.97.\n\n\nDISCUSSION\nThe resulting phenotypes are interpretable and fast to build, and perform comparably to statistically learned phenotypes developed with 5000 manually labeled patients.\n\n\nCONCLUSION\nLearning with anchors is an attractive option for building a large public repository of phenotype definitions that can be used for a range of health IT applications, including real-time decision support." }, { "pmid": "14681409", "title": "The Unified Medical Language System (UMLS): integrating biomedical terminology.", "abstract": "The Unified Medical Language System (http://umlsks.nlm.nih.gov) is a repository of biomedical vocabularies developed by the US National Library of Medicine. The UMLS integrates over 2 million names for some 900,000 concepts from more than 60 families of biomedical vocabularies, as well as 12 million relations among these concepts. Vocabularies integrated in the UMLS Metathesaurus include the NCBI taxonomy, Gene Ontology, the Medical Subject Headings (MeSH), OMIM and the Digital Anatomist Symbolic Knowledge Base. UMLS concepts are not only inter-related, but may also be linked to external resources such as GenBank. In addition to data, the UMLS includes tools for customizing the Metathesaurus (MetamorphoSys), for generating lexical variants of concept names (lvg) and for extracting UMLS concepts from text (MetaMap). The UMLS knowledge sources are updated quarterly. All vocabularies are available at no fee for research purposes within an institution, but UMLS users are required to sign a license agreement. The UMLS knowledge sources are distributed on CD-ROM and by FTP." }, { "pmid": "26187250", "title": "Challenges in clinical natural language processing for automated disorder normalization.", "abstract": "BACKGROUND\nIdentifying key variables such as disorders within the clinical narratives in electronic health records has wide-ranging applications within clinical practice and biomedical research. Previous research has demonstrated reduced performance of disorder named entity recognition (NER) and normalization (or grounding) in clinical narratives than in biomedical publications. In this work, we aim to identify the cause for this performance difference and introduce general solutions.\n\n\nMETHODS\nWe use closure properties to compare the richness of the vocabulary in clinical narrative text to biomedical publications. We approach both disorder NER and normalization using machine learning methodologies. Our NER methodology is based on linear-chain conditional random fields with a rich feature approach, and we introduce several improvements to enhance the lexical knowledge of the NER system. Our normalization method - never previously applied to clinical data - uses pairwise learning to rank to automatically learn term variation directly from the training data.\n\n\nRESULTS\nWe find that while the size of the overall vocabulary is similar between clinical narrative and biomedical publications, clinical narrative uses a richer terminology to describe disorders than publications. We apply our system, DNorm-C, to locate disorder mentions and in the clinical narratives from the recent ShARe/CLEF eHealth Task. For NER (strict span-only), our system achieves precision=0.797, recall=0.713, f-score=0.753. 
For the normalization task (strict span+concept) it achieves precision=0.712, recall=0.637, f-score=0.672. The improvements described in this article increase the NER f-score by 0.039 and the normalization f-score by 0.036. We also describe a high recall version of the NER, which increases the normalization recall to as high as 0.744, albeit with reduced precision.\n\n\nDISCUSSION\nWe perform an error analysis, demonstrating that NER errors outnumber normalization errors by more than 4-to-1. Abbreviations and acronyms are found to be frequent causes of error, in addition to the mentions the annotators were not able to identify within the scope of the controlled vocabulary.\n\n\nCONCLUSION\nDisorder mentions in text from clinical narratives use a rich vocabulary that results in high term variation, which we believe to be one of the primary causes of reduced performance in clinical narrative. We show that pairwise learning to rank offers high performance in this context, and introduce several lexical enhancements - generalizable to other clinical NER tasks - that improve the ability of the NER system to handle this variation. DNorm-C is a high performing, open source system for disorders in clinical text, and a promising step toward NER and normalization methods that are trainable to a wide variety of domains and entities. (DNorm-C is open source software, and is available with a trained model at the DNorm demonstration website: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/tmTools/#DNorm.)." }, { "pmid": "28419261", "title": "Challenges in adapting existing clinical natural language processing systems to multiple, diverse health care settings.", "abstract": "OBJECTIVE\nWidespread application of clinical natural language processing (NLP) systems requires taking existing NLP systems and adapting them to diverse and heterogeneous settings. We describe the challenges faced and lessons learned in adapting an existing NLP system for measuring colonoscopy quality.\n\n\nMATERIALS AND METHODS\nColonoscopy and pathology reports from 4 settings during 2013-2015, varying by geographic location, practice type, compensation structure, and electronic health record.\n\n\nRESULTS\nThough successful, adaptation required considerably more time and effort than anticipated. Typical NLP challenges in assembling corpora, diverse report structures, and idiosyncratic linguistic content were greatly magnified.\n\n\nDISCUSSION\nStrategies for addressing adaptation challenges include assessing site-specific diversity, setting realistic timelines, leveraging local electronic health record expertise, and undertaking extensive iterative development. More research is needed on how to make it easier to adapt NLP systems to new clinical settings.\n\n\nCONCLUSIONS\nA key challenge in widespread application of NLP is adapting existing systems to new clinical settings." } ]
Frontiers in Genetics
30838023
PMC6390493
10.3389/fgene.2019.00080
Deep Learning Based Analysis of Histopathological Images of Breast Cancer
Breast cancer is associated with the highest morbidity rates for cancer diagnoses in the world and has become a major public health issue. Early diagnosis can increase the chance of successful treatment and survival. However, it is a very challenging and time-consuming task that relies on the experience of pathologists. The automatic diagnosis of breast cancer by analyzing histopathological images plays a significant role for patients and their prognosis. However, traditional feature extraction methods can only extract low-level image features, and prior knowledge is necessary to select useful features, a process that can be greatly affected by humans. Deep learning techniques can extract high-level abstract features from images automatically. Therefore, we introduce them to analyze histopathological images of breast cancer via supervised and unsupervised deep convolutional neural networks. First, we adapted the Inception_V3 and Inception_ResNet_V2 architectures to the binary and multi-class breast cancer histopathological image classification problems by utilizing transfer learning techniques. Then, to overcome the influence of the imbalanced histopathological images across subclasses, we balanced the subclasses, with Ductal Carcinoma as the baseline, by flipping images vertically and horizontally and rotating them counterclockwise by 90 and 180 degrees. Our experimental results on the supervised histopathological image classification of breast cancer, and the comparison with results from other studies, demonstrate that Inception_V3- and Inception_ResNet_V2-based histopathological image classification of breast cancer is superior to the existing methods. Furthermore, these findings show that the Inception_ResNet_V2 network is so far the best deep learning architecture for diagnosing breast cancers by analyzing histopathological images. Therefore, we used Inception_ResNet_V2 to extract features from breast cancer histopathological images to perform unsupervised analysis of the images. We also constructed a new autoencoder network to transform the features extracted by Inception_ResNet_V2 into a low-dimensional space for clustering analysis of the images. The experimental results demonstrate that using our proposed autoencoder network yields better clustering than using features extracted by the Inception_ResNet_V2 network alone. All of our experimental results demonstrate that Inception_ResNet_V2-based deep transfer learning provides a new means of analyzing histopathological images of breast cancer.
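The sketch below is a hedged illustration of the general approach described above (not the authors' released code): augment images with flips and 90/180-degree rotations, then fine-tune an ImageNet-pretrained Inception_ResNet_V2 for a two-class (benign vs. malignant) task. The input size, head layer sizes, and learning rate are assumptions.

```python
# Transfer-learning sketch with flip/rotation augmentation.
import numpy as np
import tensorflow as tf

def augment(image):
    """Return the original image plus flipped and rotated copies."""
    return [image,
            np.flipud(image),      # flip vertically (up-down)
            np.fliplr(image),      # flip horizontally (left-right)
            np.rot90(image, k=1),  # rotate 90 degrees counterclockwise
            np.rot90(image, k=2)]  # rotate 180 degrees

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet",
    input_shape=(299, 299, 3), pooling="avg")
base.trainable = False  # upper blocks could be unfrozen later for fine-tuning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(2, activation="softmax"),  # benign vs. malignant
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```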
Related Works

Breast cancer diagnosis based on image analysis has been studied for more than 40 years, and there have been several notable research achievements in the area. These studies can be divided into two categories according to their methods: one is based on traditional machine learning methods, and the other is based on deep learning methods. The former category is mainly focused on small datasets of breast cancer images and relies on labor-intensive and comparatively low-performing, less abstract features. The latter category can deal with big data and can also extract much more abstract features from data automatically.

For example, Zhang et al. (2013) proposed a new cascade random subspace ensemble scheme with rejection options for microscopic biopsy image classification in 2012. This classification system consists of two random subspace classifier ensembles. The first ensemble consists of a set of support vector machines corresponding to the K binary classification problems transformed from the original K-class classification problem (K = 3). The second ensemble consists of a Multi-Layer Perceptron ensemble that focuses on samples rejected by the first ensemble. The system was tested on a database of 361 images, of which 119 were normal tissue, 102 were carcinoma in situ, and 140 were lobular or invasive ductal carcinoma. The authors randomly split the images into training and testing sets, with 20% of each class's images used for testing and the rest used for training. The system obtained a high classification accuracy of 99.25% and a high classification reliability of 97.65% with a small rejection rate of 1.94%. In 2013, Kowal et al. (2013) used four clustering algorithms to perform nuclei segmentation for 500 images from 50 patients with breast cancer. They then used three different classification approaches to classify these images into benign and malignant tumors. Among the 500 images, there were 25 benign and 25 malignant cases with 10 images per case. They achieved classification accuracy between 96 and 100% using a 50-fold cross-validation technique. In the same year, Filipczuk et al. (2013) presented a breast cancer diagnosis system based on the analysis of cytological images of fine needle biopsies to discriminate between benign and malignant biopsies. Four traditional machine learning methods, namely KNN (K-nearest neighbor with K = 5), NB (naive Bayes classifier with kernel density estimation), DT (decision tree), and SVM (support vector machine with a Gaussian radial basis function kernel and scaling factor σ = 0.9), were used to build classifiers of the biopsies from 25 features of the nuclei. These classifiers were tested on a set of 737 microscopic images of fine needle biopsies obtained from 67 patients, comprising 25 benign (275 images) and 42 malignant (462 images) cases. The best reported effectiveness was 98.51%. In 2014, George et al. (2014) proposed a diagnosis system for breast cancer using nuclear segmentation of cytological images. Four classification models were used: MLP (multilayer perceptron trained with the backpropagation algorithm), PNN (probabilistic neural network), LVQ (learning vector quantization), and SVM. The parameters for each model can be found in Table 5 of George et al. (2014). The classification accuracy using 10-fold cross-validation was 76–94% with only 92 images, including 45 images of benign tumors and 47 images of malignant tumors. In 2016, a performance comparison was conducted by Asri et al.
(2016) of four machine learning algorithms, SVM, DT, NB, and KNN, on the Wisconsin Breast Cancer dataset, which contains 699 instances (458 benign and 241 malignant cases). The experimental results demonstrated that SVM achieved the highest accuracy, 97.13%, with 10-fold cross-validation.

However, the above breast cancer diagnosis studies focused on Whole-Slide Imaging (Zhang et al., 2013, 2014). Since the operation of Whole-Slide Imaging is complex and expensive, many studies based on this technique use small datasets and achieve poor generalization performance. To solve these problems, Spanhol et al. (2016a) published a breast cancer dataset called BreaKHis in 2016. BreaKHis contains 7,909 histopathological images of breast cancer from 82 patients. The authors used 6 different feature descriptors and 4 different traditional machine learning methods, namely 1-NN (1 Nearest Neighbor), QDA (Quadratic Discriminant Analysis), RF (Random Forest), and SVM with the Gaussian kernel function, to perform binary diagnosis of benign and malignant tumors. The classification accuracy is between 80 and 85% using 5-fold cross-validation.

Although traditional machine learning methods have made great achievements in analyzing histopathological images of breast cancer, and even in dealing with relatively large datasets, their performance is heavily dependent on the choice of data representation (or features) for the task they are trained to perform. Furthermore, they are unable to extract and organize discriminative information from data (Bengio et al., 2013). Deep learning methods are typically neural network based learning machines with many more layers than the usual neural network. They have been widely used in the medical field since they can automatically yield more abstract, and ultimately more useful, representations (Bengio et al., 2013). That is, they can extract the discriminative information or features from data without requiring the manual design of features by a domain expert (Spanhol et al., 2016b).

As a consequence, Spanhol et al. (2016b) classified histopathological images of breast cancer from BreaKHis using a variation of the AlexNet (Krizhevsky et al., 2012) convolutional neural network, which improved classification accuracy by 4–6%. Bayramoglu et al. (2016) proposed to classify breast cancer histopathological images independently of their magnification using CNNs (convolutional neural networks). They proposed two different architectures: a single-task CNN used to predict malignancy, and a multi-task CNN used to predict both malignancy and image magnification level simultaneously. Evaluations were carried out on the BreaKHis dataset, and the experimental results were competitive with the state-of-the-art results obtained from traditional machine learning methods.

However, the above studies on the BreaKHis dataset only focus on the binary classification problem. Multi-class classification studies on histopathological images of breast cancer can provide more reliable information for diagnosis and prognosis. As a result, Araújo et al. (2017) proposed a CNN based method to classify hematoxylin and eosin stained breast biopsy images from a dataset composed of 269 images into four classes (normal tissue, benign lesion, in situ carcinoma, and invasive carcinoma) and into two classes (carcinoma and non-carcinoma), respectively. An SVM classifier with a radial basis kernel function was trained using the features extracted by the CNN.
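The pattern just described, in which a pre-trained CNN serves as a fixed feature extractor and a classical SVM is trained on the pooled deep features, can be sketched roughly as follows. The backbone choice, image size, and synthetic data are illustrative assumptions rather than the exact setup of Araújo et al. (2017).

```python
# Minimal sketch: a pre-trained CNN as a fixed feature extractor, followed by
# an RBF-kernel SVM evaluated with cross-validation.  Backbone, image size,
# and the random stand-in data are assumptions for illustration only.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

extractor = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))

# Synthetic stand-ins for RGB histopathology images and binary labels.
images = np.random.rand(40, 299, 299, 3).astype("float32") * 255.0
labels = np.random.randint(0, 2, size=40)      # 0 = non-carcinoma, 1 = carcinoma

x = tf.keras.applications.inception_v3.preprocess_input(images)
features = extractor.predict(x, verbose=0)     # one pooled feature vector per image

svm = SVC(kernel="rbf", gamma="scale")
print(cross_val_score(svm, features, labels, cv=5).mean())
```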
The accuracies of the SVM for the four-class and two-class classification problems were 77.8% and 83.3%, respectively. To develop a system for diagnosing breast cancer using multi-class classification on BreaKHis, Han et al. (2017) proposed a class structure-based deep convolutional network that provides an accurate and reliable solution for breast cancer multi-class classification by using hierarchical feature representation. Using these techniques, they achieved multi-class classification of breast cancer with a maximum accuracy of 95.9%. This study is important for precise treatment of breast cancer. In addition, Nawaz et al. (2018) presented a DenseNet based model for multi-class breast cancer classification to predict the subclass of the tumors. The experimental results on BreaKHis achieved an accuracy of 95.4%. After that, Motlagh et al. (2018) used a pre-trained ResNet_V1_152 model (He et al., 2016) to perform diagnosis of benign and malignant tumors as well as multi-class classification of the various subtypes of histopathological images of breast cancer in BreaKHis. They achieved accuracies of 98.7% and 96.4% for binary and multi-class classification, respectively.

Although there are 7,909 histopathological images from 82 patients in BreaKHis, this number of images is far from enough to use deep learning techniques effectively. Therefore, we proposed to combine transfer learning techniques with deep learning to perform breast cancer diagnosis using the relatively small number of histopathological images (7,909) in the BreaKHis dataset.

The Inception_V3 and Inception_ResNet_V2 networks were proposed by Szegedy et al. (2016) and Szegedy et al. (2017), respectively. On the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 classification benchmark, the Inception_V3 network achieved 78.0% top-1 and 93.9% top-5 accuracy, while Inception_ResNet_V2 achieved 80.4% and 95.3% in the same evaluation.

One common method for performing transfer learning (Pan and Yang, 2010) involves obtaining the basic parameters of a deep learning model by pre-training on a large dataset, such as ImageNet, and then retraining the last fully-connected layer of the model on the dataset of the new target task. This process can achieve good results even on small datasets.

Therefore, we adopt two deep convolutional neural networks, Inception_V3 and Inception_ResNet_V2, to study the diagnosis of breast cancer on the BreaKHis dataset via transfer learning techniques. To address the unbalanced distribution of breast cancer histopathological images across subclasses, the BreaKHis dataset was expanded by rotation, inversion, and several other data augmentation techniques. The Inception_ResNet_V2 network was chosen to conduct binary and multi-class classification diagnosis on the expanded set of histopathological breast cancer images because of its better performance on the original BreaKHis dataset compared to that of Inception_V3. The powerful feature extraction capability of the Inception_ResNet_V2 network was then used to extract features of the histopathological images of breast cancer for linear-kernel SVM and 1-NN classifiers. The image features extracted by the Inception_ResNet_V2 network are also used as the input of the K-means algorithm to perform clustering analysis of the BreaKHis dataset.
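The transfer-learning recipe outlined above (initialize from ImageNet weights, then retrain only a new classification head on the target data) can be sketched in tf.keras roughly as follows. The directory path, image size, number of epochs, and the 8-class head are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal transfer-learning sketch: freeze an ImageNet-pretrained base and
# retrain only a new classification head on the target images.
import tensorflow as tf

NUM_CLASSES = 8            # e.g., one output per tumor subtype (assumption)
IMG_SIZE = (299, 299)

base = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False     # keep the pre-trained feature extractor fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # new head
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory of histopathology images, one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "breakhis/train", image_size=IMG_SIZE, batch_size=32)
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.inception_resnet_v2.preprocess_input(x), y))
model.fit(train_ds, epochs=5)
```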
Furthermore, a new autoencoder network is constructed to apply a non-linear transformation to the image features extracted by the Inception_ResNet_V2 network, producing low-dimensional representations of the images that are then clustered on the BreaKHis dataset using the K-means algorithm.
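As a rough illustration of this idea (not the authors' exact architecture), the following sketch compresses pre-extracted deep features with a small fully-connected autoencoder and clusters the resulting codes with K-means. The feature dimension, layer sizes, and synthetic inputs are assumptions.

```python
# Small dense autoencoder for dimensionality reduction, followed by K-means
# clustering of the bottleneck codes.  All sizes and data are placeholders.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

FEATURE_DIM = 1536         # pooled deep-feature size (assumed)
CODE_DIM = 32              # size of the low-dimensional representation

inputs = tf.keras.Input(shape=(FEATURE_DIM,))
h = tf.keras.layers.Dense(256, activation="relu")(inputs)
code = tf.keras.layers.Dense(CODE_DIM, activation="relu", name="code")(h)
h = tf.keras.layers.Dense(256, activation="relu")(code)
outputs = tf.keras.layers.Dense(FEATURE_DIM, activation="linear")(h)

autoencoder = tf.keras.Model(inputs, outputs)
encoder = tf.keras.Model(inputs, code)
autoencoder.compile(optimizer="adam", loss="mse")

# Stand-in for features extracted from histopathological images.
features = np.random.rand(500, FEATURE_DIM).astype("float32")
autoencoder.fit(features, features, epochs=10, batch_size=64, verbose=0)

codes = encoder.predict(features, verbose=0)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(codes)
print(np.bincount(clusters))   # cluster sizes, e.g. benign vs. malignant
```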
[ "28570557", "23787338", "26064558", "28117445", "23912498", "27898976", "28646155", "16873662", "24034748", "843571", "29860482", "26540668", "82482", "24759275" ]
[ { "pmid": "28570557", "title": "Classification of breast cancer histology images using Convolutional Neural Networks.", "abstract": "Breast cancer is one of the main causes of cancer death worldwide. The diagnosis of biopsy tissue with hematoxylin and eosin stained images is non-trivial and specialists often disagree on the final diagnosis. Computer-aided Diagnosis systems contribute to reduce the cost and increase the efficiency of this process. Conventional classification approaches rely on feature extraction methods designed for a specific problem based on field-knowledge. To overcome the many difficulties of the feature-based approaches, deep learning methods are becoming important alternatives. A method for the classification of hematoxylin and eosin stained breast biopsy images using Convolutional Neural Networks (CNNs) is proposed. Images are classified in four classes, normal tissue, benign lesion, in situ carcinoma and invasive carcinoma, and in two classes, carcinoma and non-carcinoma. The architecture of the network is designed to retrieve information at different scales, including both nuclei and overall tissue organization. This design allows the extension of the proposed system to whole-slide histology images. The features extracted by the CNN are also used for training a Support Vector Machine classifier. Accuracies of 77.8% for four class and 83.3% for carcinoma/non-carcinoma are achieved. The sensitivity of our method for cancer cases is 95.6%." }, { "pmid": "23787338", "title": "Representation learning: a review and new perspectives.", "abstract": "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning." }, { "pmid": "26064558", "title": "An investigation of the false discovery rate and the misinterpretation of p-values.", "abstract": "If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time. If, as is often the case, experiments are underpowered, you will be wrong most of the time. This conclusion is demonstrated from several points of view. First, tree diagrams which show the close analogy with the screening test problem. Similar conclusions are drawn by repeated simulations of t-tests. These mimic what is done in real life, which makes the results more persuasive. The simulation method is used also to evaluate the extent to which effect sizes are over-estimated, especially in underpowered experiments. A script is supplied to allow the reader to do simulations themselves, with numbers appropriate for their own work. 
It is concluded that if you wish to keep your false discovery rate below 5%, you need to use a three-sigma rule, or to insist on p≤0.001. And never use the word 'significant'." }, { "pmid": "28117445", "title": "Dermatologist-level classification of skin cancer with deep neural networks.", "abstract": "Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images-two orders of magnitude larger than previous datasets-consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 (ref. 13) and can therefore potentially provide low-cost universal access to vital diagnostic care." }, { "pmid": "23912498", "title": "Computer-Aided Breast Cancer Diagnosis Based on the Analysis of Cytological Images of Fine Needle Biopsies.", "abstract": "The effectiveness of the treatment of breast cancer depends on its timely detection. An early step in the diagnosis is the cytological examination of breast material obtained directly from the tumor. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies to characterize these biopsies as either benign or malignant. Instead of relying on the accurate segmentation of cell nuclei, the nuclei are estimated by circles using the circular Hough transform. The resulting circles are then filtered to keep only high-quality estimations for further analysis by a support vector machine which classifies detected circles as correct or incorrect on the basis of texture features and the percentage of nuclei pixels according to a nuclei mask obtained using Otsu's thresholding method. A set of 25 features of the nuclei is used in the classification of the biopsies by four different classifiers. The complete diagnostic procedure was tested on 737 microscopic images of fine needle biopsies obtained from patients and achieved 98.51% effectiveness. The results presented in this paper demonstrate that a computerized medical diagnosis system based on our method would be effective, providing valuable, accurate diagnostic information." 
}, { "pmid": "27898976", "title": "Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.", "abstract": "Importance\nDeep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation.\n\n\nObjective\nTo apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs.\n\n\nDesign and Setting\nA specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency.\n\n\nExposure\nDeep learning-trained algorithm.\n\n\nMain Outcomes and Measures\nThe sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity.\n\n\nResults\nThe EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%.\n\n\nConclusions and Relevance\nIn this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment." 
}, { "pmid": "28646155", "title": "Breast Cancer Multi-classification from Histopathological Images with Structured Deep Learning Model.", "abstract": "Automated breast cancer multi-classification from histopathological images plays a key role in computer-aided breast cancer diagnosis or prognosis. Breast cancer multi-classification is to identify subordinate classes of breast cancer (Ductal carcinoma, Fibroadenoma, Lobular carcinoma, etc.). However, breast cancer multi-classification from histopathological images faces two main challenges from: (1) the great difficulties in breast cancer multi-classification methods contrasting with the classification of binary classes (benign and malignant), and (2) the subtle differences in multiple classes due to the broad variability of high-resolution image appearances, high coherency of cancerous cells, and extensive inhomogeneity of color distribution. Therefore, automated breast cancer multi-classification from histopathological images is of great clinical significance yet has never been explored. Existing works in literature only focus on the binary classification but do not support further breast cancer quantitative assessment. In this study, we propose a breast cancer multi-classification method using a newly proposed deep learning model. The structured deep learning model has achieved remarkable performance (average 93.2% accuracy) on a large-scale dataset, which demonstrates the strength of our method in providing an efficient tool for breast cancer multi-classification in clinical settings." }, { "pmid": "16873662", "title": "Reducing the dimensionality of data with neural networks.", "abstract": "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such \"autoencoder\" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data." }, { "pmid": "24034748", "title": "Computer-aided diagnosis of breast cancer based on fine needle biopsy microscopic images.", "abstract": "Prompt and widely available diagnostics of breast cancer is crucial for the prognosis of patients. One of the diagnostic methods is the analysis of cytological material from the breast. This examination requires extensive knowledge and experience of the cytologist. Computer-aided diagnosis can speed up the diagnostic process and allow for large-scale screening. One of the largest challenges in the automatic analysis of cytological images is the segmentation of nuclei. In this study, four different clustering algorithms are tested and compared in the task of fast nuclei segmentation. K-means, fuzzy C-means, competitive learning neural networks and Gaussian mixture models were incorporated for clustering in the color space along with adaptive thresholding in grayscale. These methods were applied in a medical decision support system for breast cancer diagnosis, where the cases were classified as either benign or malignant. In the segmented nuclei, 42 morphological, topological and texture features were extracted. Then, these features were used in a classification procedure with three different classifiers. 
The system was tested for classification accuracy by means of microscopic images of fine needle breast biopsies. In cooperation with the Regional Hospital in Zielona Góra, 500 real case medical images from 50 patients were collected. The acquired classification accuracy was approximately 96-100%, which is very promising and shows that the presented method ensures accurate and objective data acquisition that could be used to facilitate breast cancer diagnosis." }, { "pmid": "843571", "title": "The measurement of observer agreement for categorical data.", "abstract": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature." }, { "pmid": "29860482", "title": "Global, Regional, and National Cancer Incidence, Mortality, Years of Life Lost, Years Lived With Disability, and Disability-Adjusted Life-Years for 29 Cancer Groups, 1990 to 2016: A Systematic Analysis for the Global Burden of Disease Study.", "abstract": "Importance\nThe increasing burden due to cancer and other noncommunicable diseases poses a threat to human development, which has resulted in global political commitments reflected in the Sustainable Development Goals as well as the World Health Organization (WHO) Global Action Plan on Non-Communicable Diseases. To determine if these commitments have resulted in improved cancer control, quantitative assessments of the cancer burden are required.\n\n\nObjective\nTo assess the burden for 29 cancer groups over time to provide a framework for policy discussion, resource allocation, and research focus.\n\n\nEvidence Review\nCancer incidence, mortality, years lived with disability, years of life lost, and disability-adjusted life-years (DALYs) were evaluated for 195 countries and territories by age and sex using the Global Burden of Disease study estimation methods. Levels and trends were analyzed over time, as well as by the Sociodemographic Index (SDI). Changes in incident cases were categorized by changes due to epidemiological vs demographic transition.\n\n\nFindings\nIn 2016, there were 17.2 million cancer cases worldwide and 8.9 million deaths. Cancer cases increased by 28% between 2006 and 2016. The smallest increase was seen in high SDI countries. Globally, population aging contributed 17%; population growth, 12%; and changes in age-specific rates, -1% to this change. The most common incident cancer globally for men was prostate cancer (1.4 million cases). The leading cause of cancer deaths and DALYs was tracheal, bronchus, and lung cancer (1.2 million deaths and 25.4 million DALYs). For women, the most common incident cancer and the leading cause of cancer deaths and DALYs was breast cancer (1.7 million incident cases, 535 000 deaths, and 14.9 million DALYs). In 2016, cancer caused 213.2 million DALYs globally for both sexes combined. 
Between 2006 and 2016, the average annual age-standardized incidence rates for all cancers combined increased in 130 of 195 countries or territories, and the average annual age-standardized death rates decreased within that timeframe in 143 of 195 countries or territories.\n\n\nConclusions and Relevance\nLarge disparities exist between countries in cancer incidence, deaths, and associated disability. Scaling up cancer prevention and ensuring universal access to cancer care are required for health equity and to fulfill the global commitments for noncommunicable disease and cancer control." }, { "pmid": "26540668", "title": "A Dataset for Breast Cancer Histopathological Image Classification.", "abstract": "Today, medical image analysis papers require solid experiments to prove the usefulness of proposed methods. However, experiments are often performed on data selected by the researchers, which may come from different institutions, scanners, and populations. Different evaluation measures may be used, making it difficult to compare the methods. In this paper, we introduce a dataset of 7909 breast cancer histopathology images acquired on 82 patients, which is now publicly available from http://web.inf.ufpr.br/vri/breast-cancer-database. The dataset includes both benign and malignant images. The task associated with this dataset is the automated classification of these images in two classes, which would be a valuable computer-aided diagnosis tool for the clinician. In order to assess the difficulty of this task, we show some preliminary results obtained with state-of-the-art image classification systems. The accuracy ranges from 80% to 85%, showing room for improvement is left. By providing this dataset and a standardized evaluation protocol to the scientific community, we hope to gather researchers in both the medical and the machine learning field to advance toward this clinical application." }, { "pmid": "82482", "title": "Computerized nuclear morphometry as an objective method for characterizing human cancer cell populations.", "abstract": "A new method for measuring differences in nuclear detail in chrome alum gallocyanin-stained nuclei of cells from human breast cancers was compared with conventional subjective grading and classification systems. The new method, termed computerized nuclear morphometry (CNM), gives a multivariate numerical score that correlates well with nuclear atypia and gives a higher reproducibility of classification than do subjective observations with conventional histological preparations. When 100 individual nuclei from each of 137 breast cancers were examined by CNM, there was a broad CNM score variation between patients but a good reproducibility for each tumor. When different parts of the same tumor were sampled, there was good reproducibility between samples, indicating that some breast cancers at least are \"geometrically monoclonal.\" When these cancers were compared by the grading systems of WHO and Black, correlations of 0.43 and 0.48, respectively, were found. There was a poor correlation between CNM and classifications of tumor type, but in general there were high values for CNM in medullary tumors and low values in mucous tumors. Correlations between CNM and tumor progression and prognosis await future study of patients participating in the study." 
}, { "pmid": "24759275", "title": "Breast cancer histopathology image analysis: a review.", "abstract": "This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology slide digitization, and which aim at replacing the optical microscope as the primary tool used by pathologist. Breast cancer is the most prevalent form of cancers among women, and image analysis methods that target this disease have a huge potential to reduce the workload in a typical pathology lab and to improve the quality of the interpretation. This paper is meant as an introduction for nonexperts. It starts with an overview of the tissue preparation, staining and slide digitization processes followed by a discussion of the different image processing techniques and applications, ranging from analysis of tissue staining to computer-aided diagnosis, and prognosis of breast cancer patients." } ]
Scientific Reports
30816296
PMC6395677
10.1038/s41598-019-39795-x
Automatic Choroid Layer Segmentation from Optical Coherence Tomography Images Using Deep Learning
The choroid layer is a vascular layer in the human retina whose main function is to provide oxygen and support to the retina. Various studies have shown that the thickness of the choroid layer is correlated with several ophthalmic diseases; for example, diabetic macular edema (DME), a leading cause of vision loss in patients with diabetes, has been associated with changes in choroidal thickness. Despite contemporary advances, automatic segmentation of the choroid layer remains a challenging task due to low contrast, inhomogeneous intensity, inconsistent texture, and ambiguous boundaries between the choroid and sclera in Optical Coherence Tomography (OCT) images. The majority of currently implemented methods segment the region of interest manually or semi-automatically. While many fully automatic methods exist for choroid layer segmentation, more effective and accurate automatic methods are required before they can be employed in the clinical setting. This paper proposes and implements an automatic method for choroid layer segmentation in OCT images using deep learning and a series of morphological operations. The aim of this research was to segment Bruch’s Membrane (BM) and the choroid layer in order to calculate a thickness map. BM was segmented using a series of morphological operations, whereas the choroid layer was segmented using a deep learning approach, as more image statistics are required to segment it accurately. Several evaluation metrics were used to test the proposed method and compare it against existing methodologies. Experimental results showed that the proposed method greatly reduced the error rate compared with other state-of-the-art methods.
Related Work

Existing literature in the context of choroid layer segmentation includes manual, semi-automatic, and a few automatic segmentation methods for OCT images. As the prime objective of this research is the use of deep learning methodologies, the literature review is divided into two subgroups: deep learning methods and non-deep learning methods.

Machine learning is a method of data analysis that applies statistical and analytical tools to training data. These tools learn from past examples and use the information learned during training to categorize new data, predict new trends, and find new patterns. Image classification and segmentation are among the fundamental tasks in the domain of machine learning. Analysis of the literature shows the use of several traditional, non-deep learning methods such as graph-based, k-nearest neighbor, Bayesian network, support vector machine, and decision tree approaches, all of which rely on hand-crafted features for the specific classification purpose. The hand-crafted features include the shape, pixel density, and texture of image features.

In the context of these non-deep learning methods, a previous automatic choroid layer segmentation method was used to extract choroidal vessels for quantification of choroidal vasculature thickness10. This approach focused on the thickness of vessels rather than the choroid layer. Another automatic segmentation technique11 applied a statistical model to choroid layers in OCT images; however, the processing time of the method is quite high and extensive training is required. The use of phase information for automatic segmentation of the interface between the choroid and sclera has also been proposed and implemented12,13. While successful, these methods are not clinically practical, as the imaging modalities used are not commercially available. One study used dynamic programming (DP) to find the shortest path of a graph for choroid layer segmentation, giving a segmentation accuracy of about 90 percent8. Additionally, a two-stage active contour model for choroidal boundary extraction with a segmentation accuracy of 92.7 percent was proposed14. Graph-based approaches have also been used in this context but, due to the heterogeneous nature of OCT images, these methods are generally not helpful for choroid layer segmentation15–17. Still, one study utilized a graph search algorithm on 3D OCT volumes to perform semi-automatic choroid layer segmentation18. The interface between the choroid and sclera has also been delineated using Dijkstra’s algorithm9.

OCT B-scan choroidal segmentation based on a dual probability gradient has also been attempted19. Another study presented an automatic segmentation method based on a multi-resolution textural graph cut method16. A combination of a Markov Random Field (MRF) and a level set approach was also used to segment the choroid layer20; distance regularization and edge constraint terms were embedded into the level set technique to avoid uneven and trivial regions and to preserve information around the boundary between the choroid and sclera. MRF based methods21–23 have been proposed and implemented to detect the intra-retinal layers in 2D or 3D OCT images. Another method focused on obtaining the spatial distribution of the choroidal sub-layers, Haller's and Sattler's layers, with 3D 1060-nm OCT mapping24.
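To make the graph-based formulation concrete, the following toy sketch traces a single layer boundary through a B-scan by dynamic programming over a per-pixel gradient cost, in the spirit of the shortest-path methods cited above. The cost definition, the one-pixel step constraint, and the synthetic B-scan are simplifying assumptions rather than any cited implementation.

```python
# Toy dynamic-programming boundary tracer: each column of the B-scan is one
# graph layer, the cost favours dark-to-bright vertical gradients, and the
# boundary is the minimum-cost left-to-right path with +/-1 row steps.
import numpy as np

def boundary_by_dp(bscan):
    """Return one row index per column tracing a minimum-cost boundary."""
    grad = np.gradient(bscan.astype(float), axis=0)
    cost = -grad                                # low cost where intensity rises
    rows, cols = cost.shape
    acc = cost.copy()                           # accumulated path cost
    back = np.zeros((rows, cols), dtype=int)    # backpointers
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)   # allow +/-1 row steps
            prev = acc[lo:hi, c - 1]
            back[r, c] = lo + int(np.argmin(prev))
            acc[r, c] += prev.min()
    # Trace back from the cheapest end point in the last column.
    path = np.zeros(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

# Synthetic B-scan with a bright band starting at row 40.
img = np.zeros((100, 120))
img[40:60, :] = 1.0
print(boundary_by_dp(img)[:5])   # should hover around row 39/40
```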
A hybrid approach has made use of a level set, multi-region continuous max-flow method to segment different retinal layers; this approach also used nonlinear anisotropic diffusion to eliminate the speckle noise present in OCT images25. Seven layers of the retina were also segmented in this context, using a combination of graph cut and dynamic programming26. Two sequential diffusion-map based segmentations of intra-retinal layers from 3D SD-OCT scans have been developed27. A similar approach utilized spectral rounding for the segmentation of multiple retinal layers28.

Review of Recent Deep Learning Techniques in Computer Vision

The literature review showed that most of the existing choroid layer segmentation approaches make use of non-deep learning methodologies. An overview of existing deep learning methods revealed that, in this context, these methods have not been applied specifically to choroid layer segmentation. They are generally applied to medical image segmentation of the brain, retina, liver, knee, urinary bladder, chest, heart, etc. Deep learning is a branch of Artificial Intelligence (AI) that makes use of optimization, probabilistic, and statistical tools; these methods have made an extensive contribution to the analysis of medical images.

With regard to biomedical image segmentation, one major study contributed an approach for biomedical image segmentation using Convolutional Neural Networks (CNNs)29. The model employed a network and training strategy that relied on the robust use of data augmentation. A modified CNN architecture with 6 convolutional layers and a fully connected layer was used for low-grade glioma assessment and classification of these brain tumors30. A similar approach was used for the diagnosis of white matter hyperintensities from brain MRI images through a series of CNN architectures that considered multi-scale patches to capture the location of the required features during training31. Another group proposed an automatic computer-aided diagnosis method for the classification of solid and non-solid nodules in pulmonary computerized tomography (CT) images through a CNN32. Brain image segmentation using deep learning to produce highly nonlinear mappings between inputs and outputs was analyzed, and the segmentation problem was solved using CNNs33,34; the approach made use of local features in conjunction with more global contextual features to perform the segmentation. Other studies have investigated automatic segmentation and assessment of rectal cancers from multi-parametric MR images through the use of CNN architectures35. A combination of CNN-based segmentation and total kidney volume calculation from computed tomography has been proposed in36; the method was tested on images of real patients ranging from those with trivial to adequate kidney function to those with severe renal insufficiency. The detection of bladder cancer has also incorporated CNN architectures in combination with level set methods to acquire the region of interest37.

In the domain of retinal image segmentation using deep learning approaches, segmentation of the optic disc, fovea, and retinal vasculature has been carried out using a CNN model. The approach took three channels of input from the point's neighborhood and propagated the response across a 7-layer network. The output layer comprised four neurons, denoting background, blood vessels, optic disc, and fovea38.
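Many of the CNN studies above share a patch-based formulation in which a small network classifies the center pixel of each patch. A generic sketch of that formulation is given below; the layer sizes, patch size, four-class output, and synthetic data are illustrative assumptions and do not reproduce any cited architecture.

```python
# Generic patch classifier: a small CNN assigns the centre pixel of each patch
# to one of four classes (background, vessel, optic disc, fovea).
import numpy as np
import tensorflow as tf

PATCH = 32
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(PATCH, PATCH, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # bg / vessel / disc / fovea
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic patches and centre-pixel labels stand in for real fundus data.
x = np.random.rand(256, PATCH, PATCH, 3).astype("float32")
y = np.random.randint(0, 4, size=256)
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(x[:1], verbose=0).round(3))
```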
Another study performed segmentation of retinal blood vessels using a deep neural network with zero-phase whitening, global contrast normalization, and gamma correction39. A similar approach for blood vessel segmentation using deep learning has also been reported40. Segmentation of retinal blood vessels has likewise been formulated as a multi-label inference task, drawing on the combined benefits of convolutional neural networks and structured prediction41. The main observation from this overview of retina-based segmentation using deep learning is that the imaging modality used in these approaches is the fundus image; the use of OCT images for the segmentation of retinal layers has not been observed in existing deep learning methods. As OCT imaging technology captures a cross-section of the retina, analyzing OCT images may be very helpful for the diagnosis of several retinal diseases. Because choroid layer segmentation and the associated thickness measurements help diagnose retinal diseases, several approaches have been proposed and implemented to support their diagnosis. For instance, retinitis pigmentosa42, central serous chorioretinopathy43, age-related macular degeneration44, and diabetic retinopathy45,46 have been found to have associated changes in the thickness of the choroid.

One study described the development of an automatic method for the segmentation of retinal layers based on deep learning methodologies, one of only a few current implementations of an automatic technique. The method made use of a CNN to perform retinal layer segmentation in OCT images47. The procedure was limited in edge detection accuracy, as the CNN provides at most one-pixel accuracy whereas the Bidirectional Long Short-Term Memory (BLSTM) stage entails sub-pixel accuracy; this can lead to confusion in the accurate segmentation of the corresponding boundaries. A similar method48 performed OCT image semantic segmentation through fully convolutional neural networks, but the results were tested on a dataset of normal individuals with few images of mild-spectrum diabetic retinopathy. A combination of CNNs and graph search methods has also been employed for the segmentation of 9 retinal boundaries49. That method was computationally expensive, and the black-box architecture of the CNN made customization and performance examination of every step less controllable. Segmentation of retinal layers with emphasis on fluid masses has also been performed using deep learning methods50. While the results were promising, evaluation was conducted on a limited number of B-scans: the training and test data sets contained a total of 110 B-scans.

According to this analysis of related works, higher segmentation accuracy has been achieved through deep learning methodologies. The key limitation of non-deep learning approaches is that they rely mainly on the feature extraction phase for accurate segmentation of the region of interest. It is difficult to extract appropriate image features for a given medical image recognition problem, and as a result the classifier cannot provide effective segmentation accuracy because the extracted features are not effective enough. In order to tackle the problems faced by non-deep learning approaches, deep learning methods achieve significant segmentation accuracy through adaptive learning of image features.
Considering this, the application of deep learning methods to the categorization of OCT images for automated disease diagnosis is likely to be more successful than their non-deep learning counterparts. Thus, the focus of this research is to overcome the existing challenges in choroid layer segmentation and provide a fully automatic segmentation approach. The proposed method makes use of deep learning to achieve the desired task, combining morphological operations and CNNs. In order to calculate the thickness map of the choroid layer, two layers were segmented: BM and the choroid layer. BM was segmented using a series of morphological operations, followed by the use of a CNN for choroid segmentation. A thickness map was then generated from the extracted layers.
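Once the two boundaries have been extracted, the thickness map itself reduces to a per-A-scan difference converted to physical units. The following minimal sketch illustrates this final step; the axial pixel spacing and the synthetic boundary arrays are assumptions rather than values from this study.

```python
# Minimal sketch: choroidal thickness map from per-A-scan boundary positions.
import numpy as np

AXIAL_UM_PER_PIXEL = 3.9   # axial resolution of the scanner (assumed)

def choroid_thickness_map(bm_rows, csi_rows, um_per_pixel=AXIAL_UM_PER_PIXEL):
    """Thickness in micrometres for each A-scan of each B-scan.

    bm_rows, csi_rows: arrays of shape (n_bscans, n_ascans) holding the row
    index of BM and of the choroid-sclera interface, respectively.
    """
    thickness_px = np.asarray(csi_rows, float) - np.asarray(bm_rows, float)
    return np.clip(thickness_px, 0, None) * um_per_pixel

# Synthetic boundaries for a 5 x 8 grid of A-scans.
bm = np.full((5, 8), 120.0)
csi = bm + np.random.randint(40, 80, size=(5, 8))
print(choroid_thickness_map(bm, csi))
```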
[ "23856651", "29394291", "7862410", "19503235", "23504041", "23060139", "22254171", "22453435", "22330573", "18815101", "23349432", "24409381", "24911446", "23837966", "28710497", "28698556", "28864824", "25562829", "27310171", "28706185", "28515418", "23093617", "19898183", "28663902", "28856040", "20094011", "27345731" ]
[ { "pmid": "23856651", "title": "Optical coherence tomography– 15 years in cardiology.", "abstract": "Since its invention in the late 1990s, intravascular optical coherence tomography (OCT) has been rapidly adopted in clinical research and, more recently, in clinical practice. Given its unprecedented resolution and high image contrast, OCT has been used to visualize plaque characteristics and to evaluate the vascular response to percutaneous coronary intervention. In particular, OCT is becoming the standard modality to evaluate in vivo plaque vulnerability, including the presence of lipid content, thin fibrous cap, or macrophage accumulation. Furthermore, OCT findings after stent implantation, such as strut apposition, neointimal hyperplasia, strut coverage, and neoatherosclerosis, are used as surrogate markers of the vascular response. New applications for OCT are being explored, such as transplant vasculopathy or non-coronary vascular imaging. Although OCT has contributed to cardiovascular research by providing a better understanding of the pathophysiology of coronary artery disease, data linking the images and clinical outcomes are lacking. Prospective data are needed to prove that the use of OCT improves patient outcomes, which is the ultimate goal of any clinical diagnostic tool." }, { "pmid": "29394291", "title": "Choroidal thickness measured using swept-source optical coherence tomography is reduced in patients with type 2 diabetes.", "abstract": "OBJECTIVE\nTo compare choroidal thickness between patients with type 2 diabetes (T2D) and healthy controls measured using swept-source optical coherence tomography (SS-OCT).\n\n\nMETHODS\nThe sample comprised 157 eyes of 94 T2D patients, 48 eyes of which had diabetic macular edema (DME), and 71 normal eyes of 38 healthy patients. Subfoveal (SF) choroidal thickness, and choroidal thickness at 500-μm intervals up to 2500 μm nasal and temporal from the fovea were measured using the SS-OCT. Choroidal thicknesses were compared between groups using Student's t-test. Additionally, Pearson correlations were calculated between diabetes duration, glycosylated hemoglobin (HbA1c) levels, and choroidal thickness.\n\n\nRESULTS\nMean diabetes duration was 16.6±9.5 years, while mean glycosylated hemoglobin was 7.7±1.3%. Overall, the choroid was significantly thinner in T2D patients. Individuals with DME had reduced choroidal thickness in all measurements, except at 2000 and 2500-μm nasal positions, compared to healthy controls. There was a moderate correlation between choroidal thickness and HbA1c levels in DME patients (SF: r = 0.342; p = 0.017). Diabetes duration did not correlate significantly with choroidal thickness.\n\n\nCONCLUSION\nSS-OCT measurements revealed that the choroid was significantly thinner in T2D patients, moderate non-proliferative diabetic retinopathy patients, and DME patients than in healthy individuals. Further studies are needed to clarify the effect of diabetes on this layer and the relationship between choroidal thickness and DME." }, { "pmid": "7862410", "title": "Imaging of macular diseases with optical coherence tomography.", "abstract": "BACKGROUND/PURPOSE\nTo assess the potential of a new diagnostic technique called optical coherence tomography for imaging macular disease. Optical coherence tomography is a novel noninvasive, noncontact imaging modality which produces high depth resolution (10 microns) cross-sectional tomographs of ocular tissue. 
It is analogous to ultrasound, except that optical rather than acoustic reflectivity is measured.\n\n\nMETHODS\nOptical coherence tomography images of the macula were obtained in 51 eyes of 44 patients with selected macular diseases. Imaging is performed in a manner compatible with slit-lamp indirect biomicroscopy so that high-resolution optical tomography may be accomplished simultaneously with normal ophthalmic examination. The time-of-flight delay of light backscattered from different layers in the retina is determined using low-coherence interferometry. Cross-sectional tomographs of the retina profiling optical reflectivity versus distance into the tissue are obtained in 2.5 seconds and with a longitudinal resolution of 10 microns.\n\n\nRESULTS\nCorrelation of fundus examination and fluorescein angiography with optical coherence tomography tomographs was demonstrated in 12 eyes with the following pathologies: full- and partial-thickness macular hole, epiretinal membrane, macular edema, intraretinal exudate, idiopathic central serous chorioretinopathy, and detachments of the pigment epithelium and neurosensory retina.\n\n\nCONCLUSION\nOptical coherence tomography is potentially a powerful tool for detecting and monitoring a variety of macular diseases, including macular edema, macular holes, and detachments of the neurosensory retina and pigment epithelium." }, { "pmid": "19503235", "title": "Automated detection of retinal layer structures on optical coherence tomography images.", "abstract": "Segmentation of retinal layers from OCT images is fundamental to diagnose the progress of retinal diseases. In this study we show that the retinal layers can be automatically and/or interactively located with good accuracy with the aid of local coherence information of the retinal structure. OCT images are processed using the ideas of texture analysis by means of the structure tensor combined with complex diffusion filtering. Experimental results indicate that our proposed novel approach has good performance in speckle noise removal, enhancement and segmentation of the various cellular layers of the retina using the STRATUSOCTTM system." }, { "pmid": "23504041", "title": "Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images.", "abstract": "Enhanced Depth Imaging (EDI) optical coherence tomography (OCT) provides high-definition cross-sectional images of the choroid in vivo, and hence is used in many clinical studies. However, the quantification of the choroid depends on the manual labelings of two boundaries, Bruch's membrane and the choroidal-scleral interface. This labeling process is tedious and subjective of inter-observer differences, hence, automatic segmentation of the choroid layer is highly desirable. In this paper, we present a fast and accurate algorithm that could segment the choroid automatically. Bruch's membrane is detected by searching the pixel with the biggest gradient value above the retinal pigment epithelium (RPE) and the choroidal-scleral interface is delineated by finding the shortest path of the graph formed by valley pixels using Dijkstra's algorithm. The experiments comparing automatic segmentation results with the manual labelings are conducted on 45 EDI-OCT images and the average of Dice's Coefficient is 90.5%, which shows good consistency of the algorithm with the manual labelings. The processing time for each image is about 1.25 seconds." 
}, { "pmid": "23060139", "title": "Automated segmentation of the choroid from clinical SD-OCT.", "abstract": "PURPOSE\nWe developed and evaluated a fully automated 3-dimensional (3D) method for segmentation of the choroidal vessels, and quantification of choroidal vasculature thickness and choriocapillaris-equivalent thickness of the macula, and evaluated repeat variability in normal subjects using standard clinically available spectral domain optical coherence tomography (SD-OCT).\n\n\nMETHODS\nA total of 24 normal subjects was imaged twice, using clinically available, 3D SD-OCT. A novel, fully-automated 3D method was used to segment and visualize the choroidal vasculature in macular scans. Local choroidal vasculature and choriocapillaris-equivalent thicknesses were determined. Reproducibility on repeat imaging was analyzed using overlapping rates, Dice coefficient, and root mean square coefficient of variation (CV) of choroidal vasculature and choriocapillaris-equivalent thicknesses.\n\n\nRESULTS\nFor the 6 × 6 mm(2) macula-centered region as depicted by the SD-OCT, average choroidal vasculature thickness in normal subjects was 172.1 μm (95% confidence interval [CI] 163.7-180.5 μm) and average choriocapillaris-equivalent thickness was 23.1 μm (95% CI 20.0-26.2 μm). Overlapping rates were 0.79 ± 0.07 and 0.75 ± 0.06, Dice coefficient was 0.78 ± 0.08, CV of choroidal vasculature thickness was 8.0% (95% CI 6.3%-9.4%), and of choriocapillaris-equivalent thickness was 27.9% (95% CI 21.0%-33.3%).\n\n\nCONCLUSIONS\nFully automated 3D segmentation and quantitative analysis of the choroidal vasculature and choriocapillaris-equivalent thickness demonstrated excellent reproducibility in repeat scans (CV 8.0%) and good reproducibility of choriocapillaris-equivalent thickness (CV 27.9%). Our method has the potential to improve the diagnosis and management of patients with eye diseases in which the choroid is affected." }, { "pmid": "22254171", "title": "Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model.", "abstract": "A two stage statistical model based on texture and shape for fully automatic choroidal segmentation of normal and pathologic eyes obtained by a 1060 nm optical coherence tomography (OCT) system is developed. A novel dynamic programming approach is implemented to determine location of the retinal pigment epithelium/ Bruch's membrane /choriocapillaris (RBC) boundary. The choroid-sclera interface (CSI) is segmented using a statistical model. The algorithm is robust even in presence of speckle noise, low signal (thick choroid), retinal pigment epithelium (RPE) detachments and atrophy, drusen, shadowing and other artifacts. Evaluation against a set of 871 manually segmented cross-sectional scans from 12 eyes achieves an average error rate of 13%, computed per tomogram as a ratio of incorrectly classified pixels and the total layer surface. For the first time a fully automatic choroidal segmentation algorithm is successfully applied to a wide range of clinical volumetric OCT data." }, { "pmid": "22453435", "title": "Automated measurement of choroidal thickness in the human eye by polarization sensitive optical coherence tomography.", "abstract": "We present a new method to automatically segment the thickness of the choroid in the human eye by polarization sensitive optical coherence tomography (PS-OCT). A swept source PS-OCT instrument operating at a center wavelength of 1040 nm is used. 
The segmentation method is based entirely on intrinsic, tissue specific polarization contrast mechanisms. In a first step, the anterior boundary of the choroid, the retinal pigment epithelium, is segmented based on depolarization. In a second step, the choroid-sclera interface is found by using the birefringence of the sclera. The method is demonstrated in five healthy eyes. The mean repeatability (standard deviation) of thickness measurement was found to be 18.3 µm." }, { "pmid": "22330573", "title": "Automated phase retardation oriented segmentation of chorio-scleral interface by polarization sensitive optical coherence tomography.", "abstract": "An automated chorio-scleral interface (CSI) detection algorithm based on polarization sensitive optical coherence tomography (PS-OCT) is presented. This algorithm employs a two-step scheme based on the phase retardation variation detected by PS-OCT. In the first step, a rough CSI segmentation is implemented to distinguish the choroid and sclera by using depth-oriented second derivative of the phase retardation. Second, the CSI is further finely defined as the intersection of lines fitted to the phase retardation in the choroid and sclera. This algorithm challenges the current back-scattering intensity based CSI segmentation approaches that are not fully based on anatomical and morphological evidence, and provides a rational segmentation method for the morphological investigation of the choroid. Applications of this algorithm are demonstrated on in vivo posterior images acquired by a PS-OCT system with 1-μm probe." }, { "pmid": "18815101", "title": "Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search.", "abstract": "Current techniques for segmenting macular optical coherence tomography (OCT) images have been 2-D in nature. Furthermore, commercially available OCT systems have only focused on segmenting a single layer of the retina, even though each intraretinal layer may be affected differently by disease. We report an automated approach for segmenting (anisotropic) 3-D macular OCT scans into five layers. Each macular OCT dataset consisted of six linear radial scans centered at the fovea. The six surfaces defining the five layers were identified on each 3-D composite image by transforming the segmentation task into that of finding a minimum-cost closed set in a geometric graph constructed from edge/regional information and a priori determined surface smoothness and interaction constraints. The method was applied to the macular OCT scans of 12 patients (24 3-D composite image datasets) with unilateral anterior ischemic optic neuropathy (AION). Using the average of three experts' tracings as a reference standard resulted in an overall mean unsigned border positioning error of 6.1 +/- 2.9 microm, a result comparable to the interobserver variability (6.9 +/- 3.3 microm). Our quantitative analysis of the automated segmentation results from AION subject data revealed that the inner retinal layer thickness for the affected eye was 24.1 microm (21%) smaller on average than for the unaffected eye (p < 0.001), supporting the need for segmenting the layers separately." }, { "pmid": "23349432", "title": "Semiautomated segmentation of the choroid in spectral-domain optical coherence tomography volume scans.", "abstract": "PURPOSE\nChanges in the choroid, in particular its thickness, are believed to be of importance in the pathophysiology of a number of retinal diseases. 
The purpose of this study was to adapt the graph search algorithm to semiautomatically identify the choroidal layer in spectral-domain optical coherence tomography (SD-OCT) volume scans and compare its performance to manual delineation.\n\n\nMETHODS\nA graph-based multistage segmentation approach was used to identify the choroid, defined as the layer between the outer border of the RPE band and the choroid-sclera junction. Thirty randomly chosen macular SD-OCT (1024 × 37 × 496 voxels, Heidelberg Spectralis) volumes were obtained from 20 healthy subjects and 10 subjects with non-neovascular AMD. The positions of the choroidal borders and resultant thickness were compared with consensus manual delineation performed by two graders. For consistency of the statistical analysis, the left eyes were horizontally flipped in the x-direction.\n\n\nRESULTS\nThe algorithm-defined position of the outer RPE border and choroid-sclera junction was consistent with the manual delineation, resulting in highly correlated choroidal thickness values with r = 0.91 to 0.93 for the healthy subjects and 0.94 for patients with non-neovascular AMD. Across all cases, the mean and absolute differences between the algorithm and manual segmentation for the outer RPE boundary was -0.74 ± 3.27 μm and 3.15 ± 3.07 μm; and for the choroid-sclera junction was -3.90 ± 15.93 μm and 21.39 ± 10.71 μm.\n\n\nCONCLUSIONS\nExcellent agreement was observed between the algorithm and manual choroidal segmentation in both normal eyes and those with non-neovascular AMD. The choroid was thinner in AMD eyes. Semiautomated choroidal thickness calculation may be useful for large-scale quantitative studies of the choroid." }, { "pmid": "24409381", "title": "Automatic segmentation of choroidal thickness in optical coherence tomography.", "abstract": "The assessment of choroidal thickness from optical coherence tomography (OCT) images of the human choroid is an important clinical and research task, since it provides valuable information regarding the eye's normal anatomy and physiology, and changes associated with various eye diseases and the development of refractive error. Due to the time consuming and subjective nature of manual image analysis, there is a need for the development of reliable objective automated methods of image segmentation to derive choroidal thickness measures. However, the detection of the two boundaries which delineate the choroid is a complicated and challenging task, in particular the detection of the outer choroidal boundary, due to a number of issues including: (i) the vascular ocular tissue is non-uniform and rich in non-homogeneous features, and (ii) the boundary can have a low contrast. In this paper, an automatic segmentation technique based on graph-search theory is presented to segment the inner choroidal boundary (ICB) and the outer choroidal boundary (OCB) to obtain the choroid thickness profile from OCT images. Before the segmentation, the B-scan is pre-processed to enhance the two boundaries of interest and to minimize the artifacts produced by surrounding features. The algorithm to detect the ICB is based on a simple edge filter and a directional weighted map penalty, while the algorithm to detect the OCB is based on OCT image enhancement and a dual brightness probability gradient. The method was tested on a large data set of images from a pediatric (1083 B-scans) and an adult (90 B-scans) population, which were previously manually segmented by an experienced observer. 
The results demonstrate the proposed method provides robust detection of the boundaries of interest and is a useful tool to extract clinical data." }, { "pmid": "24911446", "title": "Choroidal Haller's and Sattler's layer thickness measurement using 3-dimensional 1060-nm optical coherence tomography.", "abstract": "OBJECTIVES\nTo examine the feasibility of automatically segmented choroidal vessels in three-dimensional (3D) 1060-nmOCT by testing repeatability in healthy and AMD eyes and by mapping Haller's and Sattler's layer thickness in healthy eyes.\n\n\nMETHODS\nFifty-five eyes (from 45 healthy subjects and 10 with non-neovascular age-related macular degeneration (AMD) subjects) were imaged by 3D-1060-nmOCT over a 36°x36° field of view. Haller's and Sattler's layer were automatically segmented, mapped and averaged across the Early Treatment Diabetic Retinopathy Study grid. For ten AMD eyes and ten healthy eyes, imaging was repeated within the same session and on another day. Outcomes were the repeatability agreement of Haller's and Sattler's layer thicknesses in healthy and AMD eyes, the validation with ICGA and the statistical analysis of the effect of age and axial eye length (AL) on both healthy choroidal sublayers.\n\n\nRESULTS\nThe coefficients of repeatability for Sattler's and Haller's layers were 35% and 21% in healthy eyes and 44% and 31% in AMD eyes, respectively. The mean±SD healthy central submacular field thickness for Sattler's and Haller's was 87±56 µm and 141±50 µm, respectively, with a significant relationship for AL (P<.001).\n\n\nCONCLUSIONS\nAutomated Sattler's and Haller's thickness segmentation generates rapid 3D measurements with a repeatability corresponding to reported manual segmentation. Sublayers in healthy eyes thinned significantly with increasing AL. In the presence of the thinned Sattler's layer in AMD, careful measurement interpretation is needed. Automatic choroidal vascular layer mapping may help to explain if pathological choroidal thinning affects medium and large choroidal vasculature in addition to choriocapillaris loss." }, { "pmid": "23837966", "title": "Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map.", "abstract": "Optical coherence tomography (OCT) is a powerful and noninvasive method for retinal imaging. In this paper, we introduce a fast segmentation method based on a new variant of spectral graph theory named diffusion maps. The research is performed on spectral domain (SD) OCT images depicting macular and optic nerve head appearance. The presented approach does not require edge-based image information in localizing most of boundaries and relies on regional image texture. Consequently, the proposed method demonstrates robustness in situations of low image contrast or poor layer-to-layer image gradients. Diffusion mapping applied to 2D and 3D OCT datasets is composed of two steps, one for partitioning the data into important and less important sections, and another one for localization of internal layers. In the first step, the pixels/voxels are grouped in rectangular/cubic sets to form a graph node. The weights of the graph are calculated based on geometric distances between pixels/voxels and differences of their mean intensity. The first diffusion map clusters the data into three parts, the second of which is the area of interest. The other two sections are eliminated from the remaining calculations. 
In the second step, the remaining area is subjected to another diffusion map assessment and the internal layers are localized based on their textural similarities. The proposed method was tested on 23 datasets from two patient groups (glaucoma and normals). The mean unsigned border positioning errors (mean ± SD) was 8.52 ± 3.13 and 7.56 ± 2.95 μm for the 2D and 3D methods, respectively." }, { "pmid": "28710497", "title": "Deep Learning based Radiomics (DLR) and its usage in noninvasive IDH1 prediction for low grade glioma.", "abstract": "Deep learning-based radiomics (DLR) was developed to extract deep information from multiple modalities of magnetic resonance (MR) images. The performance of DLR for predicting the mutation status of isocitrate dehydrogenase 1 (IDH1) was validated in a dataset of 151 patients with low-grade glioma. A modified convolutional neural network (CNN) structure with 6 convolutional layers and a fully connected layer with 4096 neurons was used to segment tumors. Instead of calculating image features from segmented images, as typically performed for normal radiomics approaches, image features were obtained by normalizing the information of the last convolutional layers of the CNN. Fisher vector was used to encode the CNN features from image slices of different sizes. High-throughput features with dimensionality greater than 1.6*104 were obtained from the CNN. Paired t-tests and F-scores were used to select CNN features that were able to discriminate IDH1. With the same dataset, the area under the operating characteristic curve (AUC) of the normal radiomics method was 86% for IDH1 estimation, whereas for DLR the AUC was 92%. The AUC of IDH1 estimation was further improved to 95% using DLR based on multiple-modality MR images. DLR could be a powerful way to extract deep information from medical images." }, { "pmid": "28698556", "title": "Location Sensitive Deep Convolutional Neural Networks for Segmentation of White Matter Hyperintensities.", "abstract": "The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNN) have had huge successes in computer vision, but they lack the natural ability to incorporate the anatomical location in their decision making process, hindering success in some medical image analysis tasks. In this paper, to integrate the anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. As a result, we observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features as well as CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared to 0.805 for an independent human observer. Performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06)." 
}, { "pmid": "28864824", "title": "Automatic Categorization and Scoring of Solid, Part-Solid and Non-Solid Pulmonary Nodules in CT Images with Convolutional Neural Network.", "abstract": "We present a computer-aided diagnosis system (CADx) for the automatic categorization of solid, part-solid and non-solid nodules in pulmonary computerized tomography images using a Convolutional Neural Network (CNN). Provided with only a two-dimensional region of interest (ROI) surrounding each nodule, our CNN automatically reasons from image context to discover informative computational features. As a result, no image segmentation processing is needed for further analysis of nodule attenuation, allowing our system to avoid potential errors caused by inaccurate image processing. We implemented two computerized texture analysis schemes, classification and regression, to automatically categorize solid, part-solid and non-solid nodules in CT scans, with hierarchical features in each case learned directly by the CNN model. To show the effectiveness of our CNN-based CADx, an established method based on histogram analysis (HIST) was implemented for comparison. The experimental results show significant performance improvement by the CNN model over HIST in both classification and regression tasks, yielding nodule classification and rating performance concordant with those of practicing radiologists. Adoption of CNN-based CADx systems may reduce the inter-observer variation among screening radiologists and provide a quantitative reference for further nodule analysis." }, { "pmid": "25562829", "title": "Deep convolutional neural networks for multi-modality isointense infant brain image segmentation.", "abstract": "The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they only used a single T1 or T2 images, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep models in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement." 
}, { "pmid": "27310171", "title": "Brain tumor segmentation with Deep Neural Networks.", "abstract": "In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster." }, { "pmid": "28706185", "title": "Deep Learning for Fully-Automated Localization and Segmentation of Rectal Cancer on Multiparametric MR.", "abstract": "Multiparametric Magnetic Resonance Imaging (MRI) can provide detailed information of the physical characteristics of rectum tumours. Several investigations suggest that volumetric analyses on anatomical and functional MRI contain clinically valuable information. However, manual delineation of tumours is a time consuming procedure, as it requires a high level of expertise. Here, we evaluate deep learning methods for automatic localization and segmentation of rectal cancers on multiparametric MR imaging. MRI scans (1.5T, T2-weighted, and DWI) of 140 patients with locally advanced rectal cancer were included in our analysis, equally divided between discovery and validation datasets. Two expert radiologists segmented each tumor. A convolutional neural network (CNN) was trained on the multiparametric MRIs of the discovery set to classify each voxel into tumour or non-tumour. On the independent validation dataset, the CNN showed high segmentation accuracy for reader1 (Dice Similarity Coefficient (DSC = 0.68) and reader2 (DSC = 0.70). The area under the curve (AUC) of the resulting probability maps was very high for both readers, AUC = 0.99 (SD = 0.05). Our results demonstrate that deep learning can perform accurate localization and segmentation of rectal cancer in MR imaging in the majority of patients. Deep learning technologies have the potential to improve the speed and accuracy of MRI-based rectum segmentations." 
}, { "pmid": "28515418", "title": "Automatic Segmentation of Kidneys using Deep Learning for Total Kidney Volume Quantification in Autosomal Dominant Polycystic Kidney Disease.", "abstract": "Autosomal Dominant Polycystic Kidney Disease (ADPKD) is the most common inherited disorder of the kidneys. It is characterized by enlargement of the kidneys caused by progressive development of renal cysts, and thus assessment of total kidney volume (TKV) is crucial for studying disease progression in ADPKD. However, automatic segmentation of polycystic kidneys is a challenging task due to severe alteration in the morphology caused by non-uniform cyst formation and presence of adjacent liver cysts. In this study, an automated segmentation method based on deep learning has been proposed for TKV computation on computed tomography (CT) dataset of ADPKD patients exhibiting mild to moderate or severe renal insufficiency. The proposed method has been trained (n = 165) and tested (n = 79) on a wide range of TKV (321.2-14,670.7 mL) achieving an overall mean Dice Similarity Coefficient of 0.86 ± 0.07 (mean ± SD) between automated and manual segmentations from clinical experts and a mean correlation coefficient (ρ) of 0.98 (p < 0.001) for segmented kidney volume measurements in the entire test set. Our method facilitates fast and reproducible measurements of kidney volumes in agreement with manual segmentations from clinical experts." }, { "pmid": "23093617", "title": "Evaluation of choroidal thickness in retinitis pigmentosa using enhanced depth imaging optical coherence tomography.", "abstract": "OBJECTIVE\nTo describe the choroidal characteristics of patients with retinitis pigmentosa (RP) using enhanced depth imaging (EDI) and spectral domain (SD) optical coherence tomography (OCT).\n\n\nPURPOSE\nTo investigate the spectral-domain ocular coherence tomography features of the choroid in patients with RP using EDI.\n\n\nMETHODS\nA prospective, case-control study of 21 patients from the Cole Eye Institute with RP imaged using the Spectralis OCT and an EDI protocol. Submacular choroidal thickness measurements were obtained beneath the fovea and at 500 µm intervals for 2.5 mm nasal and temporal to the centre of the fovea. These measurements were compared to choroidal thickness measurements from 25 healthy age-matched controls with similar refractive error range and no clinical evidence of retinal or glaucomatous disease. Statistical analysis was performed to compare choroidal thickness at each location between the two groups and to correlate choroidal thickness with best-corrected visual acuity and central retinal thickness.\n\n\nRESULTS\nMean ages were 40.6 years for control patients and 45.1 years for RP patients (p>0.05). Mean choroidal thickness measurements were 245.6±103 µm in RP patients and 337.8.2±109 µm in controls (p<0.0001). There was no correlation between subfoveal choroidal thickness and visual acuity or retinal thickness in the RP patients when compared to the control group.\n\n\nCONCLUSIONS\nSubmacular choroidal thickness, as measured by SD-OCT EDI, is significantly reduced in patients with RP, but did not correlate with visual acuity or retinal thickness in RP patients. Further research is needed to understand better  the pathophysiological significance of the choroidal alterations present in RP." 
}, { "pmid": "19898183", "title": "Enhanced depth imaging optical coherence tomography of the choroid in central serous chorioretinopathy.", "abstract": "PURPOSE\nThe purpose of the study was to evaluate the choroidal thickness in patients with central serous chorioretinopathy, a disease attributed to increased choroidal vascular hyperpermeability.\n\n\nMETHODS\nPatients with central serous chorioretinopathy underwent enhanced depth imaging spectral-domain optical coherence tomography, which was obtained by positioning a spectral-domain optical coherence tomography device close enough to the eye to acquire an inverted image. Seven sections, each comprising 100 averaged scans, were obtained within a 5 degrees x 30 degrees rectangle to encompass the macula. The subfoveal choroidal thickness was measured from the outer border of the retinal pigment epithelium to the inner scleral border.\n\n\nRESULTS\nThe mean age of subjects undergoing enhanced depth imaging spectral-domain optical coherence tomography was 59.3 years (standard deviation, 15.8 years). Seventeen of 19 patients (89.5%) were men, and 12 (63.2%) patients had bilateral clinical disease. The choroidal thickness measured in 28 eligible eyes of the 19 patients was 505 microm (standard deviation, 124 microm), which was significantly greater than the choroidal thickness in normal eyes (P < or = 0.001).\n\n\nCONCLUSION\nEnhanced depth imaging spectral-domain optical coherence tomography demonstrated a very thick choroid in patients with central serous chorioretinopathy. This finding provides additional evidence that central serous chorioretinopathy may be caused by increased hydrostatic pressure in the choroid." }, { "pmid": "28663902", "title": "Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search.", "abstract": "We present a novel framework combining convolutional neural networks (CNN) and graph search methods (termed as CNN-GS) for the automatic segmentation of nine layer boundaries on retinal optical coherence tomography (OCT) images. CNN-GS first utilizes a CNN to extract features of specific retinal layer boundaries and train a corresponding classifier to delineate a pilot estimate of the eight layers. Next, a graph search method uses the probability maps created from the CNN to find the final boundaries. We validated our proposed method on 60 volumes (2915 B-scans) from 20 human eyes with non-exudative age-related macular degeneration (AMD), which attested to effectiveness of our proposed technique." }, { "pmid": "28856040", "title": "ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks.", "abstract": "Optical coherence tomography (OCT) is used for non-invasive diagnosis of diabetic macular edema assessing the retinal layers. In this paper, we propose a new fully convolutional deep architecture, termed ReLayNet, for end-to-end segmentation of retinal layers and fluid masses in eye OCT scans. ReLayNet uses a contracting path of convolutional blocks (encoders) to learn a hierarchy of contextual features, followed by an expansive path of convolutional blocks (decoders) for semantic segmentation. ReLayNet is trained to optimize a joint loss function comprising of weighted logistic regression and Dice overlap loss. 
The framework is validated on a publicly available benchmark dataset with comparisons against five state-of-the-art segmentation methods including two deep learning based approaches to substantiate its effectiveness." }, { "pmid": "20094011", "title": "Artifacts in automatic retinal segmentation using different optical coherence tomography instruments.", "abstract": "PURPOSE\nThe purpose of this study was to compare and evaluate artifact errors in automatic inner and outer retinal boundary detection produced by different time-domain and spectral-domain optical coherence tomography (OCT) instruments.\n\n\nMETHODS\nNormal and pathologic eyes were imaged by six different OCT devices. For each instrument, standard analysis protocols were used for macular thickness evaluation. Error frequencies, defined as the percentage of examinations affected by at least one error in retinal segmentation (EF-exam) and the percentage of total errors per total B-scans, were assessed for each instrument. In addition, inner versus outer retinal boundary delimitation and central (1,000 microm) versus noncentral location of errors were studied.\n\n\nRESULTS\nThe study population of the EF-exam for all instruments was 25.8%. The EF-exam of normal eyes was 6.9%, whereas in all pathologic eyes, it was 32.7% (P < 0.0001). The EF-exam was highest in eyes with macular holes, 83.3%, followed by epiretinal membrane with cystoid macular edema, 66.6%, and neovascular age-related macular degeneration, 50.3%. The different OCT instruments produced different EF-exam values (P < 0.0001). The Zeiss Stratus produced the highest percentage of total errors per total B-scans compared with the other OCT systems, and this was statistically significant for all devices (P < or = 0.005) except the Optovue RTvue-100 (P = 0.165).\n\n\nCONCLUSION\nSpectral-domain OCT instruments reduce, but do not eliminate, errors in retinal segmentation. Moreover, accurate segmentation is lower in pathologic eyes compared with normal eyes for all instruments. The important differences in EF among the instruments studied are probably attributable to analysis algorithms used to set retinal inner and outer boundaries. Manual adjustments of retinal segmentations could reduce errors, but it will be important to evaluate interoperator variability." }, { "pmid": "27345731", "title": "Repeatability of Choroidal Thickness Measurements on Enhanced Depth Imaging Optical Coherence Tomography Using Different Posterior Boundaries.", "abstract": "PURPOSE\nTo assess the reliability of manual choroidal thickness measurements by comparing different posterior boundary definitions of the choroidal-scleral junction on enhanced depth imaging optical coherence tomography (EDI-OCT).\n\n\nDESIGN\nReliability analysis.\n\n\nMETHODS\nTwo graders marked the choroidal-scleral junction with segmentation software using different posterior boundaries: (1) the outer border of the choroidal vessel lumen, (2) the outer border of the choroid stroma, and (3) the inner border of the sclera, to measure the vascular choroidal thickness (VCT), stromal choroidal thickness (SCT), and total choroidal thickness (TCT), respectively. Measurements were taken at 0.5-mm intervals from 1.5 mm nasal to 1.5 mm temporal to the fovea, and averaged continuously across the central 3 mm of the macula. 
Intraclass correlation coefficient (ICC) and coefficient of reliability (CR) were compared to assess intergrader and intragrader reliability.\n\n\nRESULTS\nChoroidal thickness measurements varied significantly with different posterior boundaries (P < .001 for all). Intergrader ICCs were greater for SCT (0.959-0.980) than for TCT (0.928-0.963) and VCT (0.750-0.869), even in eyes where choroidal-scleral junction visibility was <75%. Intergrader CRs were lower for SCT (41.40-62.31) than for TCT (61.13-74.24) or VCT (72.44-115.11). ICCs and CRs showed greater reliability for averaged VCT, SCT, or TCT measurements than at individual locations. Intragrader ICCs and CRs were comparable to intergrader values.\n\n\nCONCLUSIONS\nChoroidal thickness measurements are more reproducible when measured to the border of the choroid stroma (SCT) than the vascular lumen (VCT) or sclera (TCT)." } ]
Frontiers in Neurorobotics
30853907
PMC6396706
10.3389/fnbot.2019.00004
Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping
The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by a form of intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes, and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move a specified object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, bumping an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically with the hand. Learning to make grasping reliable is more complex than for reaching, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation.
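To make the PPS-graph representation described in the abstract concrete, here is a minimal illustrative sketch in Python (our own paraphrase, not the authors' implementation). The class name, the joint-space distance threshold used to decide that a move is "safe," and the use of remembered visual features as reach targets are all assumptions made for illustration.

```python
# Hypothetical sketch of a Peripersonal Space (PPS) graph built from motor
# babbling: each node pairs a proprioceptive arm configuration (joint angles)
# with the visual features observed at that configuration.
import numpy as np
import networkx as nx

class PPSGraph:
    def __init__(self, edge_threshold=0.3):
        self.g = nx.Graph()
        self.edge_threshold = edge_threshold  # assumed max joint-space step for a "safe move"

    def add_pose(self, joint_angles, visual_features):
        """Record one babbling sample: joint angles (proprioception) plus image features of the hand."""
        node_id = self.g.number_of_nodes()
        self.g.add_node(node_id, q=np.asarray(joint_angles, dtype=float),
                        v=np.asarray(visual_features, dtype=float))
        # Connect to previously stored poses that are close in configuration space.
        for other in list(self.g.nodes):
            if other == node_id:
                continue
            d = np.linalg.norm(self.g.nodes[other]["q"] - self.g.nodes[node_id]["q"])
            if d < self.edge_threshold:
                self.g.add_edge(node_id, other, weight=d)
        return node_id

    def nearest_node_to_image(self, target_visual):
        """Stored pose whose remembered visual features best match a visual target."""
        target_visual = np.asarray(target_visual, dtype=float)
        return min(self.g.nodes,
                   key=lambda n: np.linalg.norm(self.g.nodes[n]["v"] - target_visual))

    def plan_reach(self, current_node, target_visual):
        """A reach is a path of safe edges from the current pose toward the node
        that looks most like the target."""
        goal = self.nearest_node_to_image(target_visual)
        return nx.shortest_path(self.g, current_node, goal, weight="weight")
```

In this reading, an early reach is just graph search over babbled poses, which is also why a trajectory that follows coarse edges would look jerky.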
2. Related Work

2.1. The Human Model: Evidence From Child Development

There is a rich literature in developmental psychology on how infants learn to reach and grasp, in which the overall chronology of learning to reach is reasonably clear (e.g., Berthier, 2011; Corbetta et al., 2014). From birth to about 15 weeks, infants can respond to visual targets with "pre-reaching" movements that are generally not successful at making contact with the targets. From about 15 weeks to about 8 months, reaching movements become increasingly successful, but they are jerky with successive submovements, some of which may represent corrective submovements (von Hofsten, 1991), and some of which reflect underdamped oscillations on the way to an equilibrium point (Thelen et al., 1993). For decades, early reaching was generally believed to require visual perception of both the hand and the target object, with reaching taking place through a process of bringing the hand and object images together ("visual servoing"). However, a landmark experiment (Clifton et al., 1993) showed that the pattern and success rate of reaching by young infants are unaffected when the hand is not visible. Toward the end of the first year, vision of the hand becomes important for configuring and orienting the hand in anticipation of contact with target objects. The smoothness of reaching continues to improve over the early years, toward adult reaches, which typically consist of "a single motor command with inflight corrective movements as needed" (Berthier, 2011).

Theorists grapple with the problem that reaching and grasping require learning useful mappings between visual space (two- or three-dimensional) and the configuration space of the arm (with dimensionality equal to the number of degrees of freedom).

Bremner et al. (2008) address this issue under the term multisensory integration, focusing on sensory modalities including touch, proprioception, and vision. They propose two distinct neural mechanisms. The first assumes a fixed initial body posture and arm configuration, and represents the positions of objects within an egocentric frame of reference. The second is capable of re-mapping spatial relations in light of changes in body posture and arm configuration, and thus effectively encodes object position in a world-centered frame of reference.

Corbetta et al. (2014) focus directly on how the relation between proprioception ("the feel of the arm") and vision ("the sight of the object") is learned during reach learning. They describe three theories: vision first; proprioception first; and vision and proprioception together. Their experimental results weakly supported the proprioception-first theory, but all three had strengths and weaknesses.

Thomas et al. (2015) closely observed spontaneous self-touching behavior in infants during their first 6 months. Their analysis supports two separately developing neural pathways: one for the Reach, which moves the hand to contact the target object, and a second for the Grasp, which shapes the hand to gain successful control of the object.

These and other investigators provide valuable insights into distinctions that contribute to answering this important question.
But different distinctions from different investigators can leave us struggling to discern which differences are competing theories to be discriminated, and which are different but compatible aspects of a single more complex reality.

We believe that a theory of a behavior of interest (in this case, learning from unguided experience to reach and grasp) can be subjected to an additional demanding evaluation by working to define and implement a computational model capable of exhibiting the desired behavior. In addition to identifying important distinctions, this exercise ensures that the different parts of a complex theory can, in fact, work together to accomplish their goal.

The model we present at this point is preliminary. To implement it on a particular robot, certain aspects of the perceptual and motor system models will be specific to the robot, and not realistic for a human infant. To design, implement, debug, and improve a complex model, we focus on certain aspects of the model, while others remain over-simplified. For example, our model of the Peri-Personal Space (PPS) Graph uses vision during the creation of the PPS Graph, but then does not need vision of the hand while reaching to a visible object (Clifton et al., 1993). The early reaching trajectory will be quite jerky because of the granularity of the edges in the PPS Graph (von Hofsten, 1991), but another component of the jerkiness could well be due to underdamped dynamical control of the hand as it moves along each edge (Thelen et al., 1993), which is not yet incorporated into our model.

2.2. Robot Developmental Learning to Reach and Grasp

2.2.1. Robotic Modeling

Some robotics researchers (e.g., Hersch et al., 2008; Sturm et al., 2008) focus on learning the kind of precise model of the robot that is used for traditional forward and inverse kinematics-based motion planning. Hersch et al. (2008) learn a body schema for a humanoid robot, modeled as a tree-structured hierarchy of frames of reference, assuming that the robot is given the topology of the network of joints and segments and that the robot can perceive and track the 3D position of each end-effector. Sturm et al. (2008) start with a pre-specified set of variables and a fully connected Bayesian network model. The learning process uses visual images of the arm during motor babbling, exploiting visual markers that allow extraction of the 6D pose of each joint. Bayesian inference eliminates unnecessary links and learns probability distributions over variable values. Our model makes weaker assumptions about the variables and constraints included in the model, and uses much weaker information from visual perception.

2.2.2. Neural Modeling

Other researchers structure their models according to hypotheses about the neural control of reaching and grasping, with constraints represented by neural networks that are trained from experience. Oztop et al. (2004) draw on empirical data from the literature on human infants to motivate their computational model of grasp learning (ILGM). The model consists of neural networks representing the probability distributions of joint angle velocities. They evaluate the performance of their model with a simulated robot arm and hand, assuming that reaching is already programmed in. Their model includes a Palmar reflex, and they focus on learning an open-loop controller that is likely to terminate with a successful grasp.
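The Palmar reflex also matters for the present model, where an accidental palm contact during a reach is what first turns a bump into a grasp. The following toy sketch (hypothetical robot interfaces, not ILGM and not the authors' controller) illustrates the open-loop reach-plus-reflex idea in general terms:

```python
# Toy illustration only: an open-loop reach whose grasp is triggered by a
# Palmar-reflex rule (contact on the open palm closes the gripper).
# `arm`, `gripper`, and `palm_sensor` are hypothetical interfaces, not a real robot API.

def open_loop_reach_with_palmar_reflex(arm, gripper, palm_sensor, waypoints):
    """Move through pre-planned joint-space waypoints with no visual feedback;
    close the hand only if something touches the palm along the way."""
    for q in waypoints:
        arm.move_to(q)                      # execute one planned joint configuration
        if gripper.is_open() and palm_sensor.contact():
            gripper.close()                 # innate Palmar reflex: palm contact -> closure
            return "grasp_attempted"
    return "reach_completed_without_grasp"
```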
Chinellato et al. (2011) propose an architecture consisting of two radial basis function networks linking retinotopic information with eye movements and arm movements through a shared head/body-centered representation. Network weights are trained through experience with a simulated 2D environment and a 2-dof arm. Experiments demonstrate appropriate qualitative properties of the behavior.

Savastano and Nolfi (2013) describe an embodied computational model implemented as a recurrent neural network and evaluated on a simulation of the iCub robot. They demonstrate pre-reaching, gross-reaching, and fine-reaching phases of learning and behavior, qualitatively matching observations of children, such as diminished use of vision in the first two phases and proximal-then-distal use of the arm's degrees of freedom. The transitions from one phase to the next are represented by manually adding certain links and changing certain parameters in the network, leaving open the question of how and why those changes take place during development.

Caligiore et al. (2014) present a computational model of reach learning based on reinforcement learning, equilibrium point control, and minimizing the speed of the hand at contact. The model is implemented on a simulated planar 2-dof arm. Model predictions are compared with longitudinal observations of infant reaching between the ages of 100 and 600 days (Berthier and Keen, 2006), demonstrating qualitative similarities between their predictions and the experimental data in the evolution of performance variables over developmental time. Their focus is on the irregular, jerky trajectories of early reaching (Berthier, 2011), which they attribute to sensor and process noise, corrective motions, and underdamped dynamics (Thelen et al., 1993). By contrast, we attribute part of the irregular motion to the irregularity of motion along paths in the PPS graph (rather than to real-time detection and correction of errors in the trajectory, which would be inconsistent with Clifton et al., 1993). We accept that other parts of this irregularity are likely due to process noise and underdamped dynamics during motion along individual edges in the PPS graph, but that aspect of our model is not yet implemented. At the same time, the graph representation we use to represent early knowledge of peripersonal space can handle a realistic number of degrees of freedom in a humanoid robot manipulator (Figure 1).

2.2.3. Sensorimotor Learning

Several recent research results are closer to our approach, in the sense of focusing on sensorimotor learning without explicit skill programming, exploration guidance, or labeled training examples. Each of these (including ours) makes simplifying assumptions to support progress at the current state of the art, but each contributes a "piece of the puzzle" for learning to reach and grasp.

Our work is closely related to the developmental robotics results of Law et al. (2014a,b). As in their work, we learn graph-structured mappings between proprioceptive and visual sensors, and thus between the corresponding configuration space and work space. Like them, we apply a form of intrinsic motivation to focus the learning agent's attention on unusual events, attempting to make the outcomes reliable. A significant difference is that Law et al. (2014a,b) provide as input an explicit schedule of "constraint release" times, designed to follow the observed stages identified in the developmental psychology literature. Our goal is for the developmental sequence to emerge from the learning process, as pre-requisite actions (e.g., reaching) must be learned before actions that use them (e.g., grasping).
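As a rough sketch of the intrinsic-motivation mechanism described above and in the abstract — the agent learns the typical outcome of an action and treats rare outcomes (a bumped object, an accidental grasp) as new targets to make reliable — one could keep simple outcome statistics per action. The threshold, class name, and outcome labels below are assumptions for illustration, not the authors' code.

```python
# Hedged sketch: track how often each outcome follows an action; outcomes that
# occur but are rare become intrinsically motivating targets to make reliable.
from collections import defaultdict

class OutcomeTracker:
    def __init__(self, rarity_threshold=0.1):
        self.counts = defaultdict(lambda: defaultdict(int))  # action -> outcome -> count
        self.rarity_threshold = rarity_threshold             # assumed cutoff for "unusual"

    def record(self, action, outcome):
        self.counts[action][outcome] += 1

    def unusual_outcomes(self, action):
        """Outcomes observed for this action but with low empirical probability."""
        total = sum(self.counts[action].values())
        if total == 0:
            return []
        return [o for o, c in self.counts[action].items()
                if c / total < self.rarity_threshold]

# Example: moving the arm usually leaves the scene static ("no_change"); the
# occasional "object_moved" outcome becomes the target for learning a reliable reach.
tracker = OutcomeTracker()
for _ in range(95):
    tracker.record("move_arm", "no_change")
for _ in range(5):
    tracker.record("move_arm", "object_moved")
print(tracker.unusual_outcomes("move_arm"))   # -> ['object_moved']
```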
Jamone et al. (2012, 2014) define a Reachable Space Map over gaze coordinates (head yaw and pitch, plus eye vergence to encode depth) during fixation. The control system moves the head and eyes to place the target object at the center of both camera images. Aspects of this relationship between retinal, gaze, and reach spaces were previously investigated by Hülse et al. (2010). In the Reachable Space Map, R = 0 describes unreachable targets; intermediate values describe how close the manipulator joints are to the physical limits of their ranges; and R = 1 means that all joints are well away from their limits (a minimal sketch of this measure is given at the end of this section). The Reachable Space Map is learned from goal-directed reaching experience, trying to find optimal reaches to targets in gaze coordinates. Intermediate values of R can then be used as error values to drive other body-pose degrees of freedom (e.g., waist, legs) to improve the reachability of target objects. Within our framework, the Reachable Space Map would be a valuable addition (in future work), but the PPS Graph (Juett and Kuipers, 2016) is learned at a developmentally earlier stage of knowledge, before goal-directed reaching has a meaningful chance of success. The PPS Graph is learned during non-goal-directed motor babbling, as a sampled exploration of configuration space, accumulating associations between the joint angles determining the arm configuration and the visual image of the arm.

Ugur et al. (2015) demonstrate autonomous learning of behavioral primitives and object affordances, leading up to imitation learning of complex actions. However, they start with the assumption that peripersonal space can be modeled as a 3D Euclidean space, and that hand motions can be specified via starting, midpoint, and endpoint coordinates in that 3D space. Our agent starts with only the raw proprioceptively sensed joint angles in the arm and the 2D images provided by vision sensors. The PPS graph represents a learned mapping between those spaces. The egocentric Reachable Space Map (Jamone et al., 2014) could be a step toward a 3D model of peripersonal space.

Hoffmann et al. (2017) integrate empirical data from infant experiments with computational modeling on the physical iCub robot. Their model includes haptic and proprioceptive sensing, but not vision. They model the processes by which infants learn to reach to different parts of their bodies, prompted by buzzers on the skin. They report results from experiments with infants and derive constraints on their computational model. The model is implemented and evaluated on an iCub robot with artificial tactile-sensing skin. However, the authors themselves describe their success as partial, observing that the empirical data, conceptual framework, and robotic modeling are quite disparate and not well integrated. They aspire to implement a version of the sensorimotor account, but they describe their actual model as much closer to traditional robot programming.
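As noted above, here is a minimal sketch of a reachability score in the spirit of the Reachable Space Map (our reading of the description, not Jamone et al.'s code): R = 0 when the target could not be reached at all, values near 1 when the reaching posture keeps every joint well inside its range, and intermediate values when some joint approaches a limit. The normalization and function name are assumptions.

```python
# Illustrative sketch of a Reachable-Space-Map-style score; the normalization
# and interface are assumptions, not Jamone et al.'s implementation.
import numpy as np

def reachability_score(joint_angles, joint_limits, reached_target):
    """R = 0 if the target could not be reached; otherwise a value in (0, 1]
    reflecting how far the reaching posture keeps each joint from its limits."""
    if not reached_target:
        return 0.0
    q = np.asarray(joint_angles, dtype=float)
    lo, hi = (np.asarray(side, dtype=float) for side in zip(*joint_limits))
    # Normalized distance of each joint from the nearer of its two limits:
    # 0 at a limit, 1 at the center of the range.
    margin = 2.0 * np.minimum(q - lo, hi - q) / (hi - lo)
    return float(np.clip(margin.min(), 0.0, 1.0))

# Example: a 3-joint posture with each joint limited to [-1.5, 1.5] radians;
# the third joint sits near a limit, so R is low but nonzero.
limits = [(-1.5, 1.5)] * 3
print(reachability_score([0.0, 0.2, -1.4], limits, reached_target=True))
```

Intermediate values of such a score could then serve as the error signal that drives other body-pose degrees of freedom (e.g., waist or legs), as described above.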
[ "16341854", "18606563", "25090425", "23028516", "23640106", "8404258", "24966847", "22778756", "24478693", "30018547", "15221160", "27711136", "29052630", "8404257", "25620939", "7839147", "14766510" ]
[ { "pmid": "16341854", "title": "Development of reaching in infancy.", "abstract": "The development of reaching for stationary objects was studied longitudinally in 12 human infants: 5 from the time of reach onset to 5 months of age, 5 from 6 to 20 months of age, and 2 from reach onset to 20 months of age. We used linear mixed-effects statistical modeling and found a gradual slowing of reach speed and a more rapid decrease of movement jerk with increasing age. The elbow was essentially locked during early reaching, but was prominently used by 6 months. Differences between infants were distributed normally and no evidence of different types of reachers was found. The current work combined with other longitudinal studies of infant reaching shows that the increase in skill over the first 2 years of life is seen, not by an increase in reaching speed, but by an increase in reach smoothness. By the end of the second year, the overall speed profile of reaching is approaching the typical adult profile where an early acceleration of the hand brings the hand to the region of the target with a smooth transition to a lower-speed phase where grasp is accomplished." }, { "pmid": "18606563", "title": "Infants lost in (peripersonal) space?", "abstract": "A significant challenge in developing spatial representations for the control of action is one of multisensory integration. Specifically, we require an ability to efficiently integrate sensory information arriving from multiple modalities pertaining to the relationships between the acting limbs and the nearby external world (i.e. peripersonal space), across changes in body posture and limb position. Evidence concerning the early development of such spatial representations points towards the independent emergence of two distinct mechanisms of multisensory integration. The earlier-developing mechanism achieves spatial correspondence by representing body parts in their typical or default locations, and the later-developing mechanism does so by dynamically remapping the representation of the position of the limbs with respect to external space in response to changes in postural information arriving from proprioception and vision." }, { "pmid": "25090425", "title": "Integrating reinforcement learning, equilibrium points, and minimum variance to understand the development of reaching: a computational model.", "abstract": "Despite the huge literature on reaching behavior, a clear idea about the motor control processes underlying its development in infants is still lacking. This article contributes to overcoming this gap by proposing a computational model based on three key hypotheses: (a) trial-and-error learning processes drive the progressive development of reaching; (b) the control of the movements based on equilibrium points allows the model to quickly find the initial approximate solution to the problem of gaining contact with the target objects; (c) the request of precision of the end movement in the presence of muscular noise drives the progressive refinement of the reaching behavior. 
The tests of the model, based on a two degrees of freedom simulated dynamical arm, show that it is capable of reproducing a large number of empirical findings, most deriving from longitudinal studies with children: the developmental trajectory of several dynamical and kinematic variables of reaching movements, the time evolution of submovements composing reaching, the progressive development of a bell-shaped speed profile, and the evolution of the management of redundant degrees of freedom. The model also produces testable predictions on several of these phenomena. Most of these empirical data have never been investigated by previous computational models and, more important, have never been accounted for by a unique model. In this respect, the analysis of the model functioning reveals that all these results are ultimately explained, sometimes in unexpected ways, by the same developmental trajectory emerging from the interplay of the three mentioned hypotheses: The model first quickly learns to perform coarse movements that assure a contact of the hand with the target (an achievement with great adaptive value) and then slowly refines the detailed control of the dynamical aspects of movement to increase accuracy." }, { "pmid": "23028516", "title": "Dynamic sounds capture the boundaries of peripersonal space representation in humans.", "abstract": "BACKGROUND\nWe physically interact with external stimuli when they occur within a limited space immediately surrounding the body, i.e., Peripersonal Space (PPS). In the primate brain, specific fronto-parietal areas are responsible for the multisensory representation of PPS, by integrating tactile, visual and auditory information occurring on and near the body. Dynamic stimuli are particularly relevant for PPS representation, as they might refer to potential harms approaching the body. However, behavioural tasks for studying PPS representation with moving stimuli are lacking. Here we propose a new dynamic audio-tactile interaction task in order to assess the extension of PPS in a more functionally and ecologically valid condition.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nParticipants vocally responded to a tactile stimulus administered at the hand at different delays from the onset of task-irrelevant dynamic sounds which gave the impression of a sound source either approaching or receding from the subject's hand. Results showed that a moving auditory stimulus speeded up the processing of a tactile stimulus at the hand as long as it was perceived at a limited distance from the hand, that is within the boundaries of PPS representation. The audio-tactile interaction effect was stronger when sounds were approaching compared to when sounds were receding.\n\n\nCONCLUSION/SIGNIFICANCE\nThis study provides a new method to dynamically assess pps representation: The function describing the relationship between tactile processing and the position of sounds in space can be used to estimate the location of PPS boundaries, along a spatial continuum between far and near space, in a valuable and ecologically significant way." }, { "pmid": "23640106", "title": "Tool-use reshapes the boundaries of body and peripersonal space representations.", "abstract": "Interaction with objects in the environment typically requires integrating information concerning the object location with the position and size of body parts. 
The former information is coded in a multisensory representation of the space around the body, a representation of peripersonal space (PPS), whereas the latter is enabled by an online, constantly updated, action-orientated multisensory representation of the body (BR). Using a tool to act upon relatively distant objects extends PPS representation. This effect has been interpreted as indicating that tools can be incorporated into BR. However, empirical data showing that tool-use simultaneously affects PPS representation and BR are lacking. To study this issue, we assessed the extent of PPS representation by means of an audio-tactile interaction task and BR by means of a tactile distance perception task and a body-landmarks localisation task, before and after using a 1-m-long tool to reach far objects. Tool-use extended the representation of PPS along the tool axis and concurrently shaped BR; after tool-use, subjects perceived their forearm narrower and longer compared to before tool-use, a shape more similar to the one of the tool. Tool-use was necessary to induce these effects, since a pointing task did not affect PPS and BR. These results show that a brief training with a tool induces plastic changes both to the perceived dimensions of the body part acting upon the tool and to the space around it, suggesting a strong overlap between peripersonal space and body representation." }, { "pmid": "8404258", "title": "Is visually guided reaching in early infancy a myth?", "abstract": "The issue examined was whether infants require sight of their hand when first beginning to reach for, contact, and grasp objects. 7 infants were repeatedly tested between 6 and 25 weeks of age. Each session consisted of 8 trials of objects presented in the light and 8 trials of glowing or sounding objects in complete darkness. Infants first contacted the object in both conditions at comparable ages (mean age for light, 12.3 weeks, and for dark, 11.9 weeks). Infants first grasped the object in the light at 16.0 weeks and in the dark at 14.7 weeks, a nonsignificant difference. Once contact was observed, infants continued to touch and grasp the objects in both light and dark throughout all sessions. Because infants could not see their hand or arm in the dark, their early success in contacting the glowing and sounding objects indicates that proprioceptive cues, not sight of the limb, guided their early reaching. Reaching in the light developed in parallel with reaching in the dark, suggesting that visual guidance of the hand is not necessary to achieve object contact either at the onset of successful reaching or in the succeeding weeks." }, { "pmid": "24966847", "title": "Mapping the feel of the arm with the sight of the object: on the embodied origins of infant reaching.", "abstract": "For decades, the emergence and progression of infant reaching was assumed to be largely under the control of vision. More recently, however, the guiding role of vision in the emergence of reaching has been downplayed. Studies found that young infants can reach in the dark without seeing their hand and that corrections in infants' initial hand trajectories are not the result of visual guidance of the hand, but rather the product of poor movement speed calibration to the goal. As a result, it has been proposed that learning to reach is an embodied process requiring infants to explore proprioceptively different movement solutions, before they can accurately map their actions onto the intended goal. 
Such an account, however, could still assume a preponderant (or prospective) role of vision, where the movement is being monitored with the scope of approximating a future goal-location defined visually. At reach onset, it is unknown if infants map their action onto their vision, vision onto their action, or both. To examine how infants learn to map the feel of their hand with the sight of the object, we tracked the object-directed looking behavior (via eye-tracking) of three infants followed weekly over an 11-week period throughout the transition to reaching. We also examined where they contacted the object. We find that with some objects, infants do not learn to align their reach to where they look, but rather learn to align their look to where they reach. We propose that the emergence of reaching is the product of a deeply embodied process, in which infants first learn how to direct their movement in space using proprioceptive and haptic feedback from self-produced movement contingencies with the environment. As they do so, they learn to map visual attention onto these bodily centered experiences, not the reverse. We suggest that this early visuo-motor mapping is critical for the formation of visually-elicited, prospective movement control." }, { "pmid": "22778756", "title": "The grasp reflex and moro reflex in infants: hierarchy of primitive reflex responses.", "abstract": "The plantar grasp reflex is of great clinical significance, especially in terms of the detection of spasticity. The palmar grasp reflex also has diagnostic significance. This grasp reflex of the hands and feet is mediated by a spinal reflex mechanism, which appears to be under the regulatory control of nonprimary motor areas through the spinal interneurons. This reflex in human infants can be regarded as a rudiment of phylogenetic function. The absence of the Moro reflex during the neonatal period and early infancy is highly diagnostic, indicating a variety of compromised conditions. The center of the reflex is probably in the lower region of the pons to the medulla. The phylogenetic meaning of the reflex remains unclear. However, the hierarchical interrelation among these primitive reflexes seems to be essential for the arboreal life of monkey newborns, and the possible role of the Moro reflex in these newborns was discussed in relation to the interrelationship." }, { "pmid": "24478693", "title": "A psychology based approach for longitudinal development in cognitive robotics.", "abstract": "A major challenge in robotics is the ability to learn, from novel experiences, new behavior that is useful for achieving new goals and skills. Autonomous systems must be able to learn solely through the environment, thus ruling out a priori task knowledge, tuning, extensive training, or other forms of pre-programming. Learning must also be cumulative and incremental, as complex skills are built on top of primitive skills. Additionally, it must be driven by intrinsic motivation because formative experience is gained through autonomous activity, even in the absence of extrinsic goals or tasks. This paper presents an approach to these issues through robotic implementations inspired by the learning behavior of human infants. We describe an approach to developmental learning and present results from a demonstration of longitudinal development on an iCub humanoid robot. 
The results cover the rapid emergence of staged behavior, the role of constraints in development, the effect of bootstrapping between stages, and the use of a schema memory of experiential fragments in learning new skills. The context is a longitudinal experiment in which the robot advanced from uncontrolled motor babbling to skilled hand/eye integrated reaching and basic manipulation of objects. This approach offers promise for further fast and effective sensory-motor learning techniques for robotic learning." }, { "pmid": "30018547", "title": "Know Your Body Through Intrinsic Goals.", "abstract": "The first \"object\" that newborn children play with is their own body. This activity allows them to autonomously form a sensorimotor map of their own body and a repertoire of actions supporting future cognitive and motor development. Here we propose the theoretical hypothesis, operationalized as a computational model, that this acquisition of body knowledge is not guided by random motor-babbling, but rather by autonomously generated goals formed on the basis of intrinsic motivations. Motor exploration leads the agent to discover and form representations of the possible sensory events it can cause with its own actions. When the agent realizes the possibility of improving the competence to re-activate those representations, it is intrinsically motivated to select and pursue them as goals. The model is based on four components: (1) a self-organizing neural network, modulated by competence-based intrinsic motivations, that acquires abstract representations of experienced sensory (touch) changes; (2) a selector that selects the goal to pursue, and the motor resources to train to pursue it, on the basis of competence improvement; (3) an echo-state neural network that controls and learns, through goal-accomplishment and competence, the agent's motor skills; (4) a predictor of the accomplishment of the selected goals generating the competence-based intrinsic motivation signals. The model is tested as the controller of a simulated simple planar robot composed of a torso and two kinematic 3-DoF 2D arms. The robot explores its body covered by touch sensors by moving its arms. The results, which might be used to guide future empirical experiments, show how the system converges to goals and motor skills allowing it to touch the different parts of own body and how the morphology of the body affects the formed goals. The convergence is strongly dependent on competence-based intrinsic motivations affecting not only skill learning and the selection of formed goals, but also the formation of the goal representations themselves." }, { "pmid": "15221160", "title": "Infant grasp learning: a computational model.", "abstract": "This paper presents ILGM (the Infant Learning to Grasp Model), the first computational model of infant grasp learning that is constrained by the infant motor development literature. By grasp learning we mean learning how to make motor plans in response to sensory stimuli such that open-loop execution of the plan leads to a successful grasp. The open-loop assumption is justified by the behavioral evidence that early grasping is based on open-loop control rather than on-line visual feedback. Key elements of the infancy period, namely elementary motor schemas, the exploratory nature of infant motor interaction, and inherent motor variability are captured in the model. 
In particular we show, through computational modeling, how an existing behavior (reaching) yields a more complex behavior (grasping) through interactive goal-directed trial and error learning. Our study focuses on how the infant learns to generate grasps that match the affordances presented by objects in the environment. ILGM was designed to learn execution parameters for controlling the hand movement as well as for modulating the reach to provide a successful grasp matching the target object affordance. Moreover, ILGM produces testable predictions regarding infant motor learning processes and poses new questions to experimentalists." }, { "pmid": "27711136", "title": "Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin.", "abstract": "This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real-time via a simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to the understanding the biological principle of motor equivalence. More specifically, with respect to i) the present model contributes to hypothesizing a learning mechanisms for peripersonal space. In relation to point ii) we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement." }, { "pmid": "29052630", "title": "Mastering the game of Go without human knowledge.", "abstract": "A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo." }, { "pmid": "8404257", "title": "The transition to reaching: mapping intention and intrinsic dynamics.", "abstract": "The onset of directed reaching demarks the emergence of a qualitatively new skill. In this study we asked how intentional reaching arises from infants' ongoing, intrinsic movement dynamics, and how first reaches become successively adapted to the task. 
We observed 4 infants weekly in a standard reaching task and identified the week of first arm-extended reach, and the 2 weeks before and after onset. The infants first reached at ages ranging from 12 to 22 weeks, and they used different strategies to get the toy. 2 infants, whose spontaneous movements were large and vigorous, damped down their fast, forceful movements. The 2 quieter infants generated faster and more energetic movements to lift their arms. The infants modulated reaches in task-appropriate ways in the weeks following onset. Reaching emerges when infants can intentionally adjust the force and compliance of the arm, often using muscle coactivation. These results suggest that the infant central nervous system does not contain programs that detail hand trajectory, joint coordination, and muscle activation patterns. Rather, these patterns are the consequences of the natural dynamics of the system and the active exploration of the match between those dynamics and the task." }, { "pmid": "25620939", "title": "Independent development of the Reach and the Grasp in spontaneous self-touching by human infants in the first 6 months.", "abstract": "The Dual Visuomotor Channel Theory proposes that visually guided reaching is a composite of two movements, a Reach that advances the hand to contact the target and a Grasp that shapes the digits for target purchase. The theory is supported by biometric analyses of adult reaching, evolutionary contrasts, and differential developmental patterns for the Reach and the Grasp in visually guided reaching in human infants. The present ethological study asked whether there is evidence for a dissociated development for the Reach and the Grasp in nonvisual hand use in very early infancy. The study documents a rich array of spontaneous self-touching behavior in infants during the first 6 months of life and subjected the Reach movements to an analysis in relation to body target, contact type, and Grasp. Video recordings were made of resting alert infants biweekly from birth to 6 months. In younger infants, self-touching targets included the head and trunk. As infants aged, targets became more caudal and included the hips, then legs, and eventually the feet. In younger infants hand contact was mainly made with the dorsum of the hand, but as infants aged, contacts included palmar contacts and eventually grasp and manipulation contacts with the body and clothes. The relative incidence of caudal contacts and palmar contacts increased concurrently and were significantly correlated throughout the period of study. Developmental increases in self-grasping contacts occurred a few weeks after the increase in caudal and palmar contacts. The behavioral and temporal pattern of these spontaneous self-touching movements suggest that the Reach, in which the hand extends to make a palmar self-contact, and the Grasp, in which the digits close and make manipulatory movements, have partially independent developmental profiles. The results additionally suggest that self-touching behavior is an important developmental phase that allows the coordination of the Reach and the Grasp prior to and concurrent with their use under visual guidance." }, { "pmid": "7839147", "title": "The functional significance of arm movements in neonates.", "abstract": "Arm movements made by newborn babies are usually dismissed as unintentional, purposeless, or reflexive. Spontaneous arm-waving movements were recorded while newborns lay supine facing to one side. 
They were allowed to see only the arm they were facing, only the opposite arm on a video monitor, or neither arm. Small forces pulled on their wrists in the direction of the toes. The babies opposed the perturbing force so as to keep an arm up and moving normally, but only when they could see the arm, either directly or on the video monitor. The findings indicate that newborns can purposely control their arm movements in the face of external forces and that development of visual control of arm movement is underway soon after birth." }, { "pmid": "14766510", "title": "Structuring of early reaching movements: a longitudinal study.", "abstract": "Reaches, performed by 5 infants, recorded at 19 weeks of age and every third week thereafter until 31 weeks of age, were studied quantitatively. Earlier findings about action units were confirmed. At all ages studied, movements were structured into phases of acceleration and deceleration. Reaching trajectories were found to be relatively straight within these units and to change direction between them. It was also found that at all ages, there was generally one dominating transport unit in each reach. The structuring of reaching movements changed in four important ways during the period studied. First, the sequential structuring became more systematic with age, with the dominating transport unit beginning the movement. Second, the duration of the transport unit became longer and covered a larger proportion of the approach. Third, the number of action units decreased with age, approaching the two-phase structure of adult reaching. Finally, reaching trajectories became straighter with age." } ]
Scientific Reports
30824754
PMC6397199
10.1038/s41598-019-39782-2
Tensor Decomposition for Colour Image Segmentation of Burn Wounds
Research in burns has been in continuing demand over the past few decades, and important advancements are still needed to facilitate more effective patient stabilization and reduce the mortality rate. Burn wound assessment, which is an important task for surgical management, largely depends on the accuracy of burn area and burn depth estimates. Automated quantification of these burn parameters plays an essential role in reducing the errors of the estimates conventionally made by clinicians. The task of automated burn area calculation is known as image segmentation. In this paper, a new segmentation method for burn wound images is proposed. The proposed method utilizes tensor decomposition of colour images, from which effective texture features can be extracted for classification. Experimental results showed that the proposed method outperforms other methods not only in terms of segmentation accuracy but also in computational speed.
Related Works

Segmentation is one of the major research areas in image processing and computer vision. The goal of image segmentation is to extract the region of interest in an image that also contains a background and other objects of no interest. Many different techniques have been developed to accomplish segmentation, such as edge detection, histogram thresholding, region growing, active contours (the snake algorithm), clustering, and machine-learning based methods, as reviewed in8,9. These techniques extract characteristics that describe the image, such as luminance, brightness, colour, texture, and shape9,10. Combining these properties, where applicable, is expected to provide better segmentation results than relying on only one or a few of them.

Deng and Manjunath11 proposed the JSEG method, which separates the segmentation process into two stages: colour quantization and spatial segmentation. In the first stage, colours in the image are quantized to several representative classes that can be used to differentiate regions in the image. This quantization is performed in the colour space without considering the spatial distributions of the colours. The image pixel values are then replaced by their corresponding colour class labels, forming a class-map of the image. The class-map can be viewed as a special kind of texture composition. In the second stage, spatial segmentation is performed directly on this class-map without considering the corresponding pixel colour similarity. Cucchiara et al.12 developed a segmentation method for extracting skin lesions based on a recursive version of the fuzzy c-means (FCM) algorithm13 applied to 2D colour histograms constructed by principal component analysis (PCA) of the CIELab colour space.

Acha et al.14 worked with the CIELuv colour space, extracting colour-texture information from a 5 × 5 pixel area around a point that the user selects with the mouse. These features are combined, and the Euclidean distance between the previously chosen area and the rest of the image is calculated to classify pixels into burn and non-burn regions using Otsu's thresholding method15. Gomez et al.16 developed an algorithm based on the CIELab colour space and independent histogram pursuit (IHP) to segment skin lesion images. IHP is composed of two steps: first, the algorithm finds a combination of spectral bands that enhances the contrast between healthy skin and lesion; second, it estimates the remaining combinations, which enhance subtle structures of the image. Classification is then performed by k-means cluster analysis to identify the skin lesion in an image.

Papazoglou et al.17 proposed an algorithm for wound segmentation which requires manual input and uses a combination of the RGB and CIELab colour spaces, as well as a combination of thresholding and pixel-based colour-comparison segmentation methods. Cavalcanti et al.18 used independent component analysis (ICA)19,20 to locate skin lesions in an image and separate them from the healthy skin. Given the ICA results, an initial lesion localization is obtained; the lesion boundary is then determined using the level-set method with post-processing steps. Wantanajittikul et al.21 first utilized the Cr values of the YCbCr colour space to distinguish the skin from the background; second, the u* and v* chromatic channels of the CIELuv colour space were used to capture the burnt region; and finally, FCM was used to separate the burn wound region from the healthy skin.
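To make the clustering step shared by several of these pipelines concrete, the following is a minimal sketch of fuzzy c-means applied to per-pixel chromatic features, loosely following the skin/burn separation just described for Wantanajittikul et al.21. The colour-space conversions use scikit-image, the FCM loop is written directly in NumPy, and the two-cluster setup, the fuzzifier m = 2, the Otsu-based skin mask, and the input file name are illustrative assumptions rather than the parameters of the cited work.

```python
# Minimal fuzzy c-means (FCM) sketch on chromatic features, loosely following the
# skin/burn separation pipeline described above. Parameter choices (2 clusters,
# fuzzifier m = 2, Otsu-based skin mask, input file name) are assumptions.
import numpy as np
from skimage import io, color
from skimage.filters import threshold_otsu


def fuzzy_c_means(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain NumPy FCM; X is an (n_samples, n_features) array."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix U (n_samples, n_clusters); rows sum to 1.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Cluster centres as membership-weighted means.
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every sample to every centre.
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)
        # Standard FCM membership update.
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centres, U


image = io.imread("burn_photo.png")[..., :3]   # hypothetical input file
cr = color.rgb2ycbcr(image)[..., 2]            # Cr channel for skin vs. background
skin_mask = cr > threshold_otsu(cr)            # assumption: an Otsu split isolates skin

uv = color.rgb2luv(image)[..., 1:]             # u*, v* chromatic channels
X = uv[skin_mask]

_, U = fuzzy_c_means(X, n_clusters=2)
labels = U.argmax(axis=1)                      # hard labels from fuzzy memberships

segmentation = np.zeros(image.shape[:2], dtype=np.uint8)
segmentation[skin_mask] = labels + 1           # 0 = background, 1/2 = skin clusters
```

Which of the two skin clusters corresponds to the burn would still have to be decided afterwards, for example from the cluster centres' chromaticity or a labelled reference region.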
Loizou et al.22 applied the snake algorithm23 for image segmentation to extract texture and geometrical features for the evaluation of the wound healing process.
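As a complementary illustration, the seed-based approach of Acha et al.14 described above can be sketched as follows: a small window around a user-selected point provides a reference colour in CIELuv, every pixel is scored by its Euclidean distance to that reference, and Otsu's method15 splits the distance map into burn and non-burn regions. The 5 × 5 window follows the description above, while reducing the colour-texture features to the mean CIELuv colour of the seed, the hard-coded seed location, and the file name are simplifications assumed for brevity.

```python
# Sketch of a seed-region, distance-map segmentation in CIELuv, following the
# description of Acha et al.14 above. Using only the mean colour of the seed
# window (rather than the full colour-texture feature set) is a simplification.
import numpy as np
from skimage import io, color
from skimage.filters import threshold_otsu

image = io.imread("burn_photo.png")[..., :3]   # hypothetical input file
luv = color.rgb2luv(image)

# Seed point chosen by the user (hard-coded here for the sketch).
row, col, half = 120, 200, 2                   # 5 x 5 window -> half-width of 2
seed = luv[row - half:row + half + 1, col - half:col + half + 1]
reference = seed.reshape(-1, 3).mean(axis=0)   # mean L*, u*, v* of the seed area

# Euclidean distance of every pixel to the reference colour.
distance = np.linalg.norm(luv - reference, axis=2)

# Otsu's threshold on the distance map separates burn from non-burn pixels.
burn_mask = distance < threshold_otsu(distance)
```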
[ "20958968", "15242917", "26700877", "16229658", "18232357", "20492631", "18249617", "23542950", "26188898", "22641706", "28060704", "19190712", "12804087", "23265730" ]
[ { "pmid": "20958968", "title": "Severe burn injury in Europe: a systematic review of the incidence, etiology, morbidity, and mortality.", "abstract": "INTRODUCTION\nBurn injury is a serious pathology, potentially leading to severe morbidity and significant mortality, but it also has a considerable health-economic impact. The aim of this study was to describe the European hospitalized population with severe burn injury, including the incidence, etiology, risk factors, mortality, and causes of death.\n\n\nMETHODS\nThe systematic literature search (1985 to 2009) involved PubMed, the Web of Science, and the search engine Google. The reference lists and the Science Citation Index were used for hand searching (snowballing). Only studies dealing with epidemiologic issues (for example, incidence and outcome) as their major topic, on hospitalized populations with severe burn injury (in secondary and tertiary care) in Europe were included. Language restrictions were set on English, French, and Dutch.\n\n\nRESULTS\nThe search led to 76 eligible studies, including more than 186,500 patients in total. The annual incidence of severe burns was 0.2 to 2.9/10,000 inhabitants with a decreasing trend in time. Almost 50% of patients were younger than 16 years, and ~60% were male patients. Flames, scalds, and contact burns were the most prevalent causes in the total population, but in children, scalds clearly dominated. Mortality was usually between 1.4% and 18% and is decreasing in time. Major risk factors for death were older age and a higher total percentage of burned surface area, as well as chronic diseases. (Multi) organ failure and sepsis were the most frequently reported causes of death. The main causes of early death (< 48 hours) were burn shock and inhalation injury.\n\n\nCONCLUSIONS\nDespite the lack of a large-scale European registration of burn injury, more epidemiologic information is available about the hospitalized population with severe burn injury than is generally presumed. National and international registration systems nevertheless remain necessary to allow better targeting of prevention campaigns and further improvement of cost-effectiveness in total burn care." }, { "pmid": "26700877", "title": "Standardised mortality ratio based on the sum of age and percentage total body surface area burned is an adequate quality indicator in burn care: An exploratory review.", "abstract": "Standardised Mortality Ratio (SMR) based on generic mortality predicting models is an established quality indicator in critical care. Burn-specific mortality models are preferred for the comparison among patients with burns as their predictive value is better. The aim was to assess whether the sum of age (years) and percentage total body surface area burned (which constitutes the Baux score) is acceptable in comparison to other more complex models, and to find out if data collected from a separate burn centre are sufficient for SMR based quality assessment. The predictive value of nine burn-specific models was tested by comparing values from the area under the receiver-operating characteristic curve (AUC) and a non-inferiority analysis using 1% as the limit (delta). SMR was analysed by comparing data from seven reference sources, including the North American National Burn Repository (NBR), with the observed mortality (years 1993-2012, n=1613, 80 deaths). The AUC values ranged between 0.934 and 0.976. The AUC 0.970 (95% CI 0.96-0.98) for the Baux score was non-inferior to the other models. 
SMR was 0.52 (95% CI 0.28-0.88) for the most recent five-year period compared with NBR based data. The analysis suggests that SMR based on the Baux score is eligible as an indicator of quality for setting standards of mortality in burn care. More advanced modelling only marginally improves the predictive value. The SMR can detect mortality differences in data from a single centre." }, { "pmid": "16229658", "title": "Segmentation and classification of burn images by color and texture information.", "abstract": "In this paper, a burn color image segmentation and classification system is proposed. The aim of the system is to separate burn wounds from healthy skin, and to distinguish among the different types of burns (burn depths). Digital color photographs are used as inputs to the system. The system is based on color and texture information, since these are the characteristics observed by physicians in order to form a diagnosis. A perceptually uniform color space (L*u*v*) was used, since Euclidean distances calculated in this space correspond to perceptual color differences. After the burn is segmented, a set of color and texture features is calculated that serves as the input to a Fuzzy-ARTMAP neural network. The neural network classifies burns into three types of burn depths: superficial dermal, deep dermal, and full thickness. Clinical effectiveness of the method was demonstrated on 62 clinical burn wound images, yielding an average classification success rate of 82%." }, { "pmid": "18232357", "title": "Independent histogram pursuit for segmentation of skin lesions.", "abstract": "In this paper, an unsupervised algorithm, called the Independent Histogram Pursuit (IHP), for segmenting dermatological lesions is proposed. The algorithm estimates a set of linear combinations of image bands that enhance different structures embedded in the image. In particular, the first estimated combination enhances the contrast of the lesion to facilitate its segmentation. Given an N-band image, this first combination corresponds to a line in N dimensions, such that the separation between the two main modes of the histogram obtained by projecting the pixels onto this line, is maximized. The remaining combinations are estimated in a similar way under the constraint of being orthogonal to those already computed. The performance of the algorithm is tested on five different dermatological datasets. The results obtained on these datasets indicate the robustness of the algorithm and its suitability to deal with different types of dermatological lesions. The boundary detection precision using k-means segmentation was close to 97%. The proposed algorithm can be easily combined with the majority of classification algorithms." }, { "pmid": "20492631", "title": "Image analysis of chronic wounds for determining the surface area.", "abstract": "Progress in wound healing is primarily quantified by the rate of change of the wound's surface area. The most recent guidelines of the Wound Healing Society suggest that a reduction in wound size of <40% within 4 weeks necessitates a reevaluation of the treatment. However, accurate measurement of wound size is challenging due to the complexity of a chronic wound, the variable lighting conditions of examination rooms, and the time constraints of a busy clinical practice. In this paper, we present our methodology to quantify a wound boundary and measure the enclosed wound area reproducibly. 
The method derives from a combination of color-based image analysis algorithms, and our results are validated with wounds in animal models and human wounds of diverse patients. Images were taken by an inexpensive digital camera under variable lighting conditions. Approximately 100 patient images and 50 animal images were analyzed and a high overlap was achieved between the manual tracings and the calculated wound area by our method in both groups. The simplicity of our method combined with its robustness suggests that it can be a valuable tool in clinical wound evaluations. The basic challenge of our method is in deep wounds with very small surface areas where color-based detection can lead to erroneous results and which could be overcome by texture-based detection methods. The authors are willing to provide the developed MATLAB code for the work discussed in this paper." }, { "pmid": "18249617", "title": "Active contours without edges.", "abstract": "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected." }, { "pmid": "23542950", "title": "Burn depth analysis using multidimensional scaling applied to psychophysical experiment data.", "abstract": "In this paper a psychophysical experiment and a multidimensional scaling (MDS) analysis are undergone to determine the physical characteristics that physicians employ to diagnose a burn depth. Subsequently, these characteristics are translated into mathematical features, correlated with these physical characteristics analysis. Finally, a study to verify the ability of these mathematical features to classify burns is performed. In this study, a space with axes correlated with the MDS axes has been developed. 74 images have been represented in this space and a k-nearest neighbor classifier has been used to classify these 74 images. A success rate of 66.2% was obtained when classifying burns into three burn depths and a success rate of 83.8% was obtained when burns were classified as those which needed grafts and those which did not. Additional studies have been performed comparing our system with a principal component analysis and a support vector machine classifier. Results validate the ability of the mathematical features extracted from the psychophysical experiment to classify burns into their depths. In addition, the method has been compared with another state-of-the-art method and the same database." 
}, { "pmid": "26188898", "title": "Features identification for automatic burn classification.", "abstract": "PURPOSE\nIn this paper an automatic system to diagnose burn depths based on colour digital photographs is presented.\n\n\nJUSTIFICATION\nThere is a low success rate in the determination of burn depth for inexperienced surgeons (around 50%), which rises to the range from 64 to 76% for experienced surgeons. In order to establish the first treatment, which is crucial for the patient evolution, the determination of the burn depth is one of the main steps. As the cost of maintaining a Burn Unit is very high, it would be desirable to have an automatic system to give a first assessment in local medical centres or at the emergency, where there is a lack of specialists.\n\n\nMETHOD\nTo this aim a psychophysical experiment to determine the physical characteristics that physicians employ to diagnose a burn depth is described. A Multidimensional Scaling Analysis (MDS) is then applied to the data obtained from the experiment in order to identify these physical features. Subsequently, these characteristics are translated into mathematical features. Finally, via a classifier (Support Vector Machine) and a feature selection method, the discriminant power of these mathematical features to distinguish among burn depths is analysed, and the subset of features that better estimates the burn depth is selected.\n\n\nRESULTS\nA success rate of 79.73% was obtained when burns were classified as those which needed grafts and those which did not.\n\n\nCONCLUSIONS\nResults validate the ability of the features extracted from the psychophysical experiment to classify burns into their depths." }, { "pmid": "22641706", "title": "SLIC superpixels compared to state-of-the-art superpixel methods.", "abstract": "Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation." }, { "pmid": "28060704", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.", "abstract": "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). 
Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet." }, { "pmid": "19190712", "title": "Development and Validation of Biomarker Classifiers for Treatment Selection.", "abstract": "Many syndromes traditionally viewed as individual diseases are heterogeneous in molecular pathogenesis and treatment responsiveness. This often leads to the conduct of large clinical trials to identify small average treatment benefits for heterogeneous groups of patients. Drugs that demonstrate effectiveness in such trials may subsequently be used broadly, resulting in ineffective treatment of many patients. New genomic and proteomic technologies provide powerful tools for the selection of patients likely to benefit from a therapeutic without unacceptable adverse events. In spite of the large literature on developing predictive biomarkers, there is considerable confusion about the development and validation of biomarker based diagnostic classifiers for treatment selection. In this paper we attempt to clarify some of these issues and to provide guidance on the design of clinical trials for evaluating the clinical utility and robustness of pharmacogenomic classifiers." }, { "pmid": "12804087", "title": "Estimating dataset size requirements for classifying DNA microarray data.", "abstract": "A statistical methodology for estimating dataset size requirements for classifying microarray data using learning curves is introduced. The goal is to use existing classification results to estimate dataset size requirements for future classification experiments and to evaluate the gain in accuracy and significance of classifiers built with additional data. The method is based on fitting inverse power-law models to construct empirical learning curves. It also includes a permutation test procedure to assess the statistical significance of classification performance for a given dataset size. This procedure is applied to several molecular classification problems representing a broad spectrum of levels of complexity." }, { "pmid": "23265730", "title": "Sample size planning for classification models.", "abstract": "In biospectroscopy, suitably annotated and statistically independent samples (e.g. patients, batches, etc.) 
for classifier training and testing are scarce and costly. Learning curves show the model performance as function of the training sample size and can help to determine the sample size needed to train good classifiers. However, building a good model is actually not enough: the performance must also be proven. We discuss learning curves for typical small sample size situations with 5-25 independent samples per class. Although the classification models achieve acceptable performance, the learning curve can be completely masked by the random testing uncertainty due to the equally limited test sample size. In consequence, we determine test sample sizes necessary to achieve reasonable precision in the validation and find that 75-100 samples will usually be needed to test a good but not perfect classifier. Such a data set will then allow refined sample size planning on the basis of the achieved performance. We also demonstrate how to calculate necessary sample sizes in order to show the superiority of one classifier over another: this often requires hundreds of statistically independent test samples or is even theoretically impossible. We demonstrate our findings with a data set of ca. 2550 Raman spectra of single cells (five classes: erythrocytes, leukocytes and three tumour cell lines BT-20, MCF-7 and OCI-AML3) as well as by an extensive simulation that allows precise determination of the actual performance of the models in question." } ]
Heliyon
30886915
PMC6401533
10.1016/j.heliyon.2018.e01043
Prediction of patient's response to OnabotulinumtoxinA treatment for migraine
Migraine affects the daily life of millions of people around the world. The most well-known disabling symptom associated with this illness is the intense headache. Nowadays, there are treatments that can diminish the level of pain. OnabotulinumtoxinA (BoNT-A) has become a very popular medication for treating migraine headaches in those cases in which other medications do not work, typically in chronic migraine. Currently, the positive response to Botox treatment is not clearly understood, yet understanding the mechanisms that determine the effectiveness of the treatment could help with the development of more effective treatments.

To address this problem, this paper sets up a realistic scenario of electronic medical records of migraineurs under BoNT-A treatment, where some clinical features from real patients are labeled by doctors. The medical records have been preprocessed, and a label encoding method based on simulated annealing has been proposed. Two methodologies for predicting the results of the first and the second infiltration of the BoNT-A based treatment are considered. Firstly, a strategy based on the medical HIT6 metric is described, which achieves an accuracy of over 91%. Secondly, when this value is not available, several classifiers and clustering methods are applied in order to predict the reduction and adverse effects, obtaining an accuracy of 85%. Some clinical features, such as greater occipital nerve (GON) involvement, the time evolution of chronic migraine, and others, have been identified as relevant when examining the prediction models. The GON and the retroocular component have also been described as important features according to doctors.
2. Related work

Several studies have looked at the clinical features of patients with migraine which may be associated with a favorable response to BoNT-A treatment, although conclusive results are not yet available for use in clinical practice. Possible predictors of a good response have been proposed: allodynia (painful hypersensitivity to superficial stimuli) [23], the unilateral character of a migraine [23], [24], associated migraine aura (visual, language, motor or sensory alterations occurring prior to pain) [25], or the build-up time to maximum pain (a shorter time indicating a better response to BoNT-A) [26]. Pain directionality also seems to be a possible clinical predictor. This feature refers to whether the headache feels like it is exploding, imploding or ocular; the term exploding refers to discomfort that is felt pushing from the inside out. Patients suffering from imploding or ocular pain tend to obtain more relief from BoNT-A treatment than those with exploding pain [27]. Pagola et al. studied a number of possible clinical predictive features in parallel, including unilateral location of headache, pericranial muscular tension, directionality of pain, duration of migraine history and medication overuse, comparing responders to BoNT-A treatment with non-responders, but no significant differences emerged [28].

In order to find the most significant features of patients and classify them, a vast number of algorithms is available [29]. C4.5, k-means, Support Vector Machines (SVM), the Expectation-Maximization (EM) algorithm, PageRank, AdaBoost, k-NN, Naive Bayes, and CART are among the most common data mining algorithms used by the research community in many fields. A Feature Subset Selection (FSS) approach is typically applied first [30] in order to improve the accuracy of the classifiers. This approach has certain advantages, such as offering a better understanding of the prediction model and better generalization by reducing overfitting, which happens when a prediction model is fitted so closely to the training data that it does not perform well when predicting new observations [31]. These methods have been applied to different neurological anomalies, for example: feature extraction and selection from EEG signals in combination with a sleep-stage classifier [32], an automatic seizure detection system for newborns [33], or assessing the feasibility of employing accelerometers to characterize the postural behavior of early Parkinson's disease subjects [34]. Furthermore, in order to improve migraine treatment predictions, we consider simulated annealing (SA) [35] a particularly interesting approach to take into account. SA is a stochastic, metaheuristic technique used in difficult optimization problems to approximate the global optimum of a given function in its search space. This approach has been widely employed to improve the performance of other algorithms. For example, SA has been used to improve FSS in [36]. Furthermore, SVM and SA have been combined to select the features that best increase the accuracy of anomaly intrusion detection in [37], and for a hepatitis diagnosis method in [38].

A key point to mention is how to measure the impact headaches have on daily life. An important metric for this purpose is the HIT6 score. The HIT6 [22] scale is a perceptual survey filled out by patients in order to measure the level of pain related to their migraine.
In regular clinical practice, BoNT-A response is considered successful by doctors if it reduces migraine attack frequency, or the number of days with attacks, by at least 50% within 3 months. Response features such as the HIT6 score (Headache Impact Test) are recorded less consistently. Thus, in our study, where data were obtained retrospectively through the review of clinical histories, we were able to obtain only a small set of patients for whom the HIT6 score had been collected. As a consequence, for the vast majority of cases we must define an alternative way to determine the effectiveness of the BoNT-A based treatment.

Therefore, although there is ongoing research into the prediction of the appearance of migraines and even the effects of migraine treatment, to the best of our knowledge there is no existing method for predicting the effectiveness of the BoNT-A treatment. For this purpose, we propose two methodologies that are customized for the clinical data of migraine patients and are able to deal with incomplete as well as heterogeneous data. Firstly, we present an approach that considers the medical HIT6 metric in order to predict treatment success. Secondly, as this metric is rarely found in our medical databases, an alternative approach that uses SA in combination with classification and clustering methods is presented.
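To make the ingredients discussed above more concrete, the following is a minimal sketch that labels each patient as a responder using the ≥50% reduction criterion mentioned earlier and then runs a simulated-annealing search over feature subsets, scoring each candidate subset with a cross-validated SVM, in the spirit of the SA+SVM combinations of [36], [37], [38]. The synthetic data, the linear kernel, the cooling schedule, and all numeric constants are assumptions made only for this sketch and do not reproduce the configuration used in the study.

```python
# Sketch: label responders by the >=50% reduction criterion, then use simulated
# annealing (SA) to search feature subsets scored by a cross-validated SVM.
# The synthetic data and every numeric constant below are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical patient matrix: rows = patients, columns = encoded clinical features
# (e.g. GON involvement, migraine-history duration, pain directionality, ...).
n_patients, n_features = 120, 12
X = rng.normal(size=(n_patients, n_features))

# Hypothetical headache-day counts before and 3 months after the first BoNT-A cycle.
days_before = rng.integers(15, 29, size=n_patients)
days_after = rng.integers(2, 29, size=n_patients)
y = (days_after <= 0.5 * days_before).astype(int)   # 1 = responder (>=50% reduction)


def score(mask):
    """Cross-validated accuracy of an SVM restricted to the selected features."""
    if not mask.any():
        return 0.0
    clf = SVC(kernel="linear", C=1.0)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()


# Simulated annealing over binary feature-selection masks.
mask = rng.random(n_features) < 0.5
best_mask, best_score = mask.copy(), score(mask)
current_score = best_score
temperature = 1.0

for step in range(200):
    candidate = mask.copy()
    flip = rng.integers(n_features)
    candidate[flip] = ~candidate[flip]            # flip one feature in or out
    cand_score = score(candidate)
    # Accept improvements always, worse moves with a temperature-dependent probability.
    if cand_score > current_score or rng.random() < np.exp((cand_score - current_score) / temperature):
        mask, current_score = candidate, cand_score
        if current_score > best_score:
            best_mask, best_score = mask.copy(), current_score
    temperature *= 0.98                           # geometric cooling schedule

print("selected features:", np.flatnonzero(best_mask), "cv accuracy:", round(best_score, 3))
```

In a real setting, the synthetic matrix would be replaced by the preprocessed clinical features, and the accepted subset would feed the final classification or clustering stage.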
[ "23771276", "19614702", "20164501", "25304766", "12603635", "22468643", "20647170", "21883197", "20647171", "20487038", "17441971", "17300356", "19912346", "21070228", "25431141", "21956721", "21298315", "21499747", "25500317", "28527052", "14651415", "17868356", "23126597", "18484982", "17069972", "24610690", "24609509", "16376606", "21349795", "17813860", "21968203", "17720704", "28132209", "19260029", "16761367", "22281045", "16315276", "10225344", "22331030", "16002144", "25523108", "19539239", "15836567" ]
[ { "pmid": "19614702", "title": "Global prevalence of chronic migraine: a systematic review.", "abstract": "The aim of this review was to summarize population-based studies reporting prevalence and/or incidence of chronic migraine (CM) and to explore variation across studies. A systematic literature search was conducted. Relevant data were abstracted and estimates were subdivided based on the criteria used in each study. Sixteen publications representing 12 studies were accepted. None presented data on CM incidence. The prevalence of CM was 0-5.1%, with estimates typically in the range of 1.4-2.2%. Seven studies used Silberstein-Lipton criteria (or equivalent), with prevalence ranging from 0.9% to 5.1%. Three estimates used migraine that occurred ≥15 days per month, with prevalence ranging from 0 to 0.7%. Prevalence varied by World Health Organization region and gender. This review identified population-based studies of CM prevalence, although heterogeneity across studies and lack of data from certain regions leaves an incomplete picture. Future studies on CM would benefit from an International Classification of Headache Disorders consensus diagnosis that is clinically appropriate and operational in epidemiological studies." }, { "pmid": "20164501", "title": "Sociodemographic and comorbidity profiles of chronic migraine and episodic migraine sufferers.", "abstract": "OBJECTIVE\nTo characterise and compare the sociodemographic profiles and the frequency of common comorbidities for adults with chronic migraine (CM) and episodic migraine (EM) in a large population-based sample.\n\n\nMETHODS\nThe American Migraine Prevalence and Prevention (AMPP) study is a longitudinal, population-based, survey. Data from the 2005 survey were analysed to assess differences in sociodemographic profiles and rates of common comorbidities between two groups of respondents: CM (ICHD-2 defined migraine; > or =15 days of headache per month) and EM (ICHD-2 defined migraine; 0-14 days of headache per month). Categories of comorbid conditions included psychiatric, respiratory, cardiovascular, pain and 'other' such as obesity and diabetes.\n\n\nRESULTS\nOf 24 000 headache sufferers surveyed in 2005, 655 respondents had CM, and 11 249 respondents had EM. Compared with EM, respondents with CM had stastically significant lower levels of household income, were less likely to be employed full time and were more likely to be occupationally disabled. Those with CM were approximately twice as likely to have depression, anxiety and chronic pain. Respiratory disorders including asthma, bronchitis and chronic obstructive pulmonary disease, and cardiac risk factors including hypertension, diabetes, high cholesterol and obesity, were also significantly more likely to be reported by those with CM.\n\n\nDISCUSSION\nSociodemographic and comorbidity profiles of the CM population differ from the EM population on multiple dimensions, suggesting that CM and EM differ in important ways other than headache frequency." }, { "pmid": "25304766", "title": "The impact of chronic migraine: The Chronic Migraine Epidemiology and Outcomes (CaMEO) Study methods and baseline results.", "abstract": "BACKGROUND\nLongitudinal migraine studies have rarely assessed headache frequency and disability variation over a year.\n\n\nMETHODS\nThe Chronic Migraine Epidemiology and Outcomes (CaMEO) Study is a cross-sectional and longitudinal Internet study designed to characterize the course of episodic migraine (EM) and chronic migraine (CM). 
Participants were recruited from a Web-panel using quota sampling in an attempt to obtain a sample demographically similar to the US population. Participants who passed the screener were assessed every three months with the Core (baseline, six, and 12 months) and Snapshot (months three and nine) modules, which assessed headache frequency, headache-related disability, treatments, and treatment satisfaction. The Core also assessed resource use, health-related quality of life, and other features. One-time cross-sectional modules measured family burden, barriers to medical care, and comorbidities/endophenotypes.\n\n\nRESULTS\nOf 489,537 invitees, we obtained 58,418 (11.9%) usable returns including 16,789 individuals who met ICHD-3 beta migraine criteria (EM (<15 headache days/mo): n = 15,313 (91.2%); CM (≥ 15 headache days/mo): n = 1476 (8.8%)). At baseline, all qualified respondents (n = 16,789) completed the Screener, Core, and Barriers to Care modules. Subsequent modules showed some attrition (Comorbidities/Endophenotypes, n = 12,810; Family Burden (Proband), n = 13,064; Family Burden (Partner), n = 4022; Family Burden (Child), n = 2140; Snapshot (three months), n = 9741; Core (six months), n = 7517; Snapshot (nine months), n = 6362; Core (12 months), n = 5915). A total of 3513 respondents (21.0%) completed all modules, and 3626 (EM: n = 3303 (21.6%); CM: n = 323 (21.9%)) completed all longitudinal assessments.\n\n\nCONCLUSIONS\nThe CaMEO Study provides cross-sectional and longitudinal data that will contribute to our understanding of the course of migraine over one year and quantify variations in headache frequency, headache-related disability, comorbidities, treatments, and familial impact." }, { "pmid": "12603635", "title": "Migraine preventive medication reduces resource utilization.", "abstract": "OBJECTIVE\nTo determine if long-term resource utilization is reduced by adding a preventive medication to a migraine management regimen that already includes acute medication.\n\n\nBACKGROUND\nIn 2000, new evidence-based guidelines for the treatment of migraine were released by the US Headache Consortium and the American Academy of Neurology. Although these guidelines emphasize the role of preventive medication in achieving significant clinical improvement, little yet is known concerning the impact of such management on medical and pharmaceutical resources. Methods.-Resource utilization information in a large claims database was analyzed retrospectively.\n\n\nRESULTS\nAdding a preventive medication to migraine management reduced the use of other migraine medications, as well as visits to physician offices and emergency departments. In addition, both acute and preventive medications were associated with lower utilization of computed tomography and magnetic resonance imaging scans.\n\n\nCONCLUSION\nMigraine preventive drug therapy is effective in reducing resource consumption when added to therapy consisting only of an acute medication." }, { "pmid": "22468643", "title": "OnabotulinumtoxinA (BOTOX®): a review of its use in the prophylaxis of headaches in adults with chronic migraine.", "abstract": "This article reviews the pharmacology, therapeutic efficacy and tolerability profile of intramuscularly injected onabotulinumtoxinA (onaBoNTA; BOTOX®) for headache prophylaxis in adults with chronic migraine, with a focus on UK labelling for the drug. 
The pharmacological actions of onaBoNTA include a direct antinociceptive (analgesic) effect; while not fully understood, the mechanism of action underlying its headache prophylaxis effect in chronic migraine is presumed to involve inhibition of peripheral and central sensitization in trigeminovascular neurones. Pooled findings from two large phase III studies of virtually identical design (PREEMPT [Phase III REsearch Evaluating Migraine Prophylaxis Therapy] 1 and 2) showed that treatment with up to five cycles of onaBoNTA (155-195 units/cycle) at 12-week intervals was effective in reducing headache symptoms, decreasing headache-related disability, and improving health-related quality of life (HR-QOL) in patients with chronic migraine, approximately two-thirds of whom were overusing acute headache medications at baseline. During the double-blind phase of both trials, significantly more patients treated with onaBoNTA (two cycles) than placebo experienced clinically meaningful improvements in the monthly frequencies of headache days, moderate to severe headache days and migraine days, and in the cumulative hours of headache on headache days/month. OnaBoNTA therapy also resulted in statistically significant and clinically meaningful improvements in functioning and HR-QOL compared with placebo. Notably, improvements in headache symptoms, functioning and HR-QOL favouring onaBoNTA over placebo were seen regardless of whether or not patients were medication overusers and irrespective of whether or not they were naive to (oral) prophylactic therapy. Further improvements relative to baseline in headache symptoms, functioning and HR-QOL were observed during the open-label extension phase of both trials (all patients received three cycles of onaBoNTA). Treatment with up to five cycles of onaBoNTA was generally well tolerated in the PREEMPT trials. Treatment-related adverse events reported by onaBoNTA recipients (e.g. neck pain, facial paresis and eyelid ptosis) were consistent with the well established tolerability profile of the neurotoxin when injected into head and neck muscles; no new safety events were observed. Debate surrounding the PREEMPT studies has centred on the small treatment effect of onaBoNTA relative to placebo, the possibility that blinding was inadequate and the relevance of the evaluated population. Nonetheless, the totality of the data showed that onaBoNTA therapy produced clinically meaningful improvements in headache symptoms, functioning and HR-QOL; on the basis of these trials, it has become the first (and so far only) headache prophylactic therapy to be specifically approved for chronic migraine in the UK and US. Overall, onaBoNTA offers a beneficial, acceptably tolerated and potentially convenient option for the management of this highly disabling condition, for example in patients who are refractory to oral medications used for prophylaxis." }, { "pmid": "20647170", "title": "OnabotulinumtoxinA for treatment of chronic migraine: results from the double-blind, randomized, placebo-controlled phase of the PREEMPT 1 trial.", "abstract": "OBJECTIVES\nThis is the first of a pair of studies designed to assess efficacy, safety and tolerability of onabotulinumtoxinA (BOTOX) as headache prophylaxis in adults with chronic migraine.\n\n\nMETHODS\nThe Phase III REsearch Evaluating Migraine Prophylaxis Therapy 1 (PREEMPT 1) is a phase 3 study, with a 24-week, double-blind, parallel-group, placebo-controlled phase followed by a 32-week, open-label phase. 
Subjects were randomized (1:1) to injections every 12 weeks of onabotulinumtoxinA (155 U-195 U; n = 341) or placebo (n = 338) (two cycles). The primary endpoint was mean change from baseline in headache episode frequency at week 24.\n\n\nRESULTS\nNo significant between-group difference for onabotulinumtoxinA versus placebo was observed for the primary endpoint, headache episodes (-5.2 vs. -5.3; p = 0.344). Large within-group decreases from baseline were observed for all efficacy variables. Significant between-group differences for onabotulinumtoxinA were observed for the secondary endpoints, headache days (p = .006) and migraine days (p = 0.002). OnabotulinumtoxinA was safe and well tolerated, with few treatment-related adverse events. Few subjects discontinued due to adverse events.\n\n\nCONCLUSIONS\nThere was no between-group difference for the primary endpoint, headache episodes. However, significant reductions from baseline were observed for onabotulinumtoxinA for headache and migraine days, cumulative hours of headache on headache days and frequency of moderate/severe headache days, which in turn reduced the burden of illness in adults with disabling chronic migraine." }, { "pmid": "21883197", "title": "OnabotulinumtoxinA for treatment of chronic migraine: pooled analyses of the 56-week PREEMPT clinical program.", "abstract": "OBJECTIVE\nTo evaluate safety and efficacy of onabotulinumtoxinA (BOTOX(®) ) as headache prophylaxis in adults with chronic migraine.\n\n\nBACKGROUND\nChronic migraine is a prevalent, disabling, and undertreated neurological disorder. OnabotulinumtoxinA is the only approved prophylactic therapy in this highly disabled patient population.\n\n\nDESIGN AND METHODS\nTwo phase III, 24-week, double-blind, parallel-group, placebo-controlled studies, followed by a 32-week, open-label, single-treatment, onabotulinumtoxinA phase, were conducted (January 23, 2006 to August 11, 2008). Qualified subjects were randomized (1:1) to injections of onabotulinumtoxinA (155-195 U) or placebo every 12 weeks for 5 cycles (double-blind: 2, open-label: 3). The pooled primary variable was mean change from baseline in frequency of headache days. Secondary variables included proportion of patients with severe Headache Impact Test-6 score (≥ 60) and mean changes from baseline in frequencies of migraine days, moderate/severe headache days, and migraine episodes; cumulative hours of headache on headache days; and acute headache medication intakes. The primary time point was week 24. Assessments for the open-label phase (all patients treated with onabotulinumtoxinA) compared double-blind treatment groups (onabotulinumtoxinA/onabotulinumtoxinA vs placebo/onabotulinumtoxinA) and are summarized to give a descriptive view of consistent study results, with inferences regarding statistical significance only examined for week 56.\n\n\nRESULTS\nA total of 1384 patients were randomized to onabotulinumtoxinA (n = 688) or placebo (n = 696) in the double-blind phase; 607 (88.2%) onabotulinumtoxinA/onabotulinumtoxinA and 629 (90.4%) placebo/onabotulinumtoxinA patients continued into the open-label phase. OnabotulinumtoxinA/onabotulinumtoxinA treatment statistically significantly reduced headache-day frequency vs placebo/onabotulinumtoxinA in patients with chronic migraine at week 56 (-11.7 onabotulinumtoxinA/onabotulinumtoxinA, -10.8 placebo/onabotulinumtoxinA; P = .019). 
Statistically significant reductions also favored onabotulinumtoxinA/onabotulinumtoxinA for several secondary efficacy variables at week 56, including frequencies of migraine days (-11.2 onabotulinumtoxinA/onabotulinumtoxinA, -10.3 placebo/onabotulinumtoxinA; P = .018) and moderate/severe headache days (-10.7 onabotulinumtoxinA/onabotulinumtoxinA, -9.9 placebo/onabotulinumtoxinA; P = .027) and cumulative headache hours on headache days (-169.1 onabotulinumtoxinA/onabotulinumtoxinA, -145.7 placebo/onabotulinumtoxinA; P = .018). After the open-label phase (all treated with onabotulinumtoxinA), statistically significant within-group changes from baseline were observed for all efficacy variables. Most patients (72.6%) completed the open-label phase; few discontinued because of adverse events. No new safety or tolerability issues emerged.\n\n\nCONCLUSIONS\nRepeated treatment with ≤ 5 cycles of onabotulinumtoxinA was effective, safe, and well tolerated in adults with chronic migraine." }, { "pmid": "20647171", "title": "OnabotulinumtoxinA for treatment of chronic migraine: results from the double-blind, randomized, placebo-controlled phase of the PREEMPT 2 trial.", "abstract": "OBJECTIVES\nThis is the second of a pair of studies designed to evaluate the efficacy and safety of onabotulinumtoxinA (BOTOX) for prophylaxis of headaches in adults with chronic migraine.\n\n\nMETHODS\nPREEMPT 2 was a phase 3 study, with a 24-week, double-blind, placebo-controlled phase, followed by a 32-week, open-label phase. Subjects were randomized (1:1) to injections of onabotulinumtoxinA (155U-195U; n = 347) or placebo (n = 358) every 12 weeks for two cycles. The primary efficacy endpoint was mean change in headache days per 28 days from baseline to weeks 21-24 post-treatment.\n\n\nRESULTS\nOnabotulinumtoxinA was statistically significantly superior to placebo for the primary endpoint, frequency of headache days per 28 days relative to baseline (-9.0 onabotulinumtoxinA/-6.7 placebo, p < .001). OnabotulinumtoxinA was significantly favoured in all secondary endpoint comparisons. OnabotulinumtoxinA was safe and well tolerated, with few treatment-related adverse events. Few patients (3.5% onabotulinumtoxinA/1.4% placebo) discontinued due to adverse events.\n\n\nCONCLUSIONS\nThe results of PREEMPT 2 demonstrate that onabotulinumtoxinA is effective for prophylaxis of headache in adults with chronic migraine. Repeated onabotulinumtoxinA treatments were safe and well tolerated." }, { "pmid": "20487038", "title": "OnabotulinumtoxinA for treatment of chronic migraine: pooled results from the double-blind, randomized, placebo-controlled phases of the PREEMPT clinical program.", "abstract": "OBJECTIVE\nTo assess the efficacy, safety, and tolerability of onabotulinumtoxinA (BOTOX) as headache prophylaxis in adults with chronic migraine.\n\n\nBACKGROUND\nChronic migraine is a prevalent, disabling, and undertreated neurological disorder. Few preventive treatments have been investigated and none is specifically indicated for chronic migraine.\n\n\nMETHODS\nThe 2 multicenter, pivotal trials in the PREEMPT: Phase 3 REsearch Evaluating Migraine Prophylaxis Therapy clinical program each included a 24-week randomized, double-blind phase followed by a 32-week open-label phase (ClinicalTrials.gov identifiers NCT00156910, NCT00168428). Qualified patients were randomized (1:1) to onabotulinumtoxinA (155-195 U) or placebo injections every 12 weeks. Study visits occurred every 4 weeks. 
These studies were identical in design (eg, inclusion/exclusion criteria, randomization, visits, double-blind phase, open-label phase, safety assessments, treatment), with the only exception being the designation of the primary and secondary endpoints. Therefore, the predefined pooling of the results was justified and performed to provide a complete overview of between-group differences in efficacy, safety, and tolerability that may not have been evident in individual studies. The primary endpoint for the pooled analysis was mean change from baseline in frequency of headache days at 24 weeks. Secondary endpoints were mean change from baseline to week 24 in frequency of migraine/probable migraine days, frequency of moderate/severe headache days, total cumulative hours of headache on headache days, frequency of headache episodes, frequency of migraine/probable migraine episodes, frequency of acute headache pain medication intakes, and the proportion of patients with severe (> or =60) Headache Impact Test-6 score at week 24. Results of the pooled analyses of the 2 PREEMPT double-blind phases are presented.\n\n\nRESULTS\nA total of 1384 adults were randomized to onabotulinumtoxinA (n = 688) or placebo (n = 696). Pooled analyses demonstrated a large mean decrease from baseline in frequency of headache days, with statistically significant between-group differences favoring onabotulinumtoxinA over placebo at week 24 (-8.4 vs -6.6; P < .001) and at all other time points. Significant differences favoring onabotulinumtoxinA were also observed for all secondary efficacy variables at all time points, with the exception of frequency of acute headache pain medication intakes. Adverse events occurred in 62.4% of onabotulinumtoxinA patients and 51.7% of placebo patients. Most patients reported adverse events that were mild to moderate in severity and few discontinued (onabotulinumtoxinA, 3.8%; placebo, 1.2%) due to adverse events. No unexpected treatment-related adverse events were identified.\n\n\nCONCLUSIONS\nThe pooled PREEMPT results demonstrate that onabotulinumtoxinA is an effective prophylactic treatment for chronic migraine. OnabotulinumtoxinA resulted in significant improvements compared with placebo in multiple headache symptom measures, and significantly reduced headache-related disability and improved functioning, vitality, and overall health-related quality of life. Repeat treatments with onabotulinumtoxinA were safe and well tolerated." }, { "pmid": "17441971", "title": "Topiramate reduces headache days in chronic migraine: a randomized, double-blind, placebo-controlled study.", "abstract": "The aim of this study was to evaluate the efficacy and tolerability of topiramate for the prevention of chronic migraine in a randomized, double-blind, placebo-controlled trial. Chronic migraine is a common form of disabling headache presenting in headache subspecialty practice. Preventive treatments are essential for chronic migraine management, although there are few or no controlled empirical trial data on their use in this patient population. Topiramate is approved for the prophylaxis of migraine headache in adults. Patients (18-65 years) who experienced chronic migraine (defined as > or =15 monthly migraine days) for > or =3 months prior to trial entry and had > or =12 migraine days during the 4-week (28-day) baseline phase were randomized to topiramate or placebo for a 16-week, double-blind trial. 
Topiramate was titrated (25 mg weekly) to a target dose of 100 mg/day, allowing dosing flexibility from 50 to 200 mg/day, according to patient need. Existing migraine preventive treatments, except for antiepileptic drugs, were continued throughout the trial. The primary efficacy measure was the change in number of migraine days from the 28-day baseline phase to the last 28 days of the double-blind phase in the intent-to-treat population, which consisted of all patients who received at least one dose of study medication and had one outcome assessment during the double-blind phase. Health-related quality of life was evaluated with the Migraine Specific Quality of Life Questionnaire (MSQ, Version 2.1), the Headache Impact Test (HIT-6) and the Migraine Disability Assessment (MIDAS) questionnaires, and tolerability was assessed by adverse event (AE) reports and early trial discontinuations. Eighty-two patients were screened. Thirty-two patients in the intent-to-treat population (mean age 46 years; 75% female) received topiramate (mean modal dose +/- SD = 100 +/- 17 mg/day) and 27 patients received placebo. Mean (+/-SD) baseline number of migraine days per 4 weeks was 15.5 +/- 4.6 in the topiramate group and 16.4 +/- 4.4 in the placebo group. Most patients (78%) met the definition for acute medication overuse at baseline. The mean duration of treatment was 100 and 92 days for topiramate- and placebo-treated patients, respectively. Study completion rates for topiramate- and placebo-treated patients were 75% and 52%, respectively. Topiramate significantly reduced the mean number of monthly migraine days (+/-SD) by 3.5 +/- 6.3, compared with placebo (-0.2 +/- 4.7, P < 0.05). No significant intergroup differences were found for MSQ and HIT-6. MIDAS showed improvement with the topiramate treatment group (P = 0.042 vs. placebo). Treatment emergent adverse events were reported by 75% of topiramate-treated patients (37%, placebo). The most common AEs, paraesthesia, nausea, dizziness, dyspepsia, fatigue, anorexia and disturbance in attention, were reported by 53%, 9%, 6%, 6%, 6%, 6% and 6% of topiramate-treated patients, respectively, vs. 7%, 0%, 0%, 0%, 0%, 4% and 4% of placebo-treated patients. This randomized, double-blind, placebo-controlled trial demonstrates that topiramate is effective and reasonably well tolerated when used for the preventive treatment of chronic migraine, even in the presence of medication overuse." }, { "pmid": "17300356", "title": "Efficacy and safety of topiramate for the treatment of chronic migraine: a randomized, double-blind, placebo-controlled trial.", "abstract": "OBJECTIVE\nTo evaluate the efficacy and safety of topiramate (100 mg/day) compared with placebo for the treatment of chronic migraine.\n\n\nMETHODS\nThis was a randomized, placebo-controlled, parallel-group, multicenter study consisting of 16 weeks of double-blind treatment. Subjects aged 18 to 65 years with 15 or more headache days per month, at least half of which were migraine/migrainous headaches, were randomized 1:1 to either topiramate 100 mg/day or placebo. An initial dose of topiramate 25 mg/day (or placebo) was titrated upward in weekly increments of 25 mg/day to a maximum of 100 mg/day (or to the maximum tolerated dose). Concomitant preventive migraine treatment was not allowed, and acute headache medication use was not to exceed 4 days per week during the double-blind maintenance period. 
The primary efficacy endpoint was the change from baseline in the mean monthly number of migraine/migrainous days; the change in the mean monthly number of migraine days also was analyzed. A fixed sequence approach (ie, gatekeeper approach) using analysis of covariance was used to analyze the efficacy endpoints. Assessments of safety and tolerability included physical and neurologic examinations, clinical laboratory parameters, and spontaneous reports of clinical adverse events.\n\n\nRESULTS\nThe intent-to-treat population included 306 (topiramate, n = 153; placebo, n = 153) of 328 randomized subjects who provided at least 1 efficacy assessment; 55.8% of the topiramate group and 55.2% on placebo were trial completers. The mean final topiramate maintenance dose was 86.0 mg/day. The mean duration of therapy was 91.7 days for the topiramate group and 90.6 days for the placebo group. Topiramate treatment resulted in a statistically significant mean reduction of migraine/migrainous headache days (topiramate -6.4 vs placebo -4.7, P= .010) and migraine headache days relative to baseline (topiramate -5.6 vs placebo -4.1, P= .032). Treatment-emergent adverse events occurred in 132 (82.5%) and 113 (70.2%) of topiramate-treated and placebo-treated subjects, respectively, and were generally of mild or moderate severity. Most commonly reported adverse events in the topiramate group were paresthesia (n = 46, 28.8%), upper respiratory tract infection (n = 22, 13.8%), and fatigue (n = 19, 11.9%). The most common adverse events in the placebo group were upper respiratory tract infection (n = 20, 12.4%), fatigue (n = 16, 9.9%), and nausea (n = 13, 8.1%). Discontinuations due to adverse events occurred in 18 (10.9%) topiramate subjects and 10 (6.1%) placebo subjects. There were no serious adverse events or deaths.\n\n\nCONCLUSIONS\nTopiramate treatment at daily doses of approximately 100 mg resulted in statistically significant improvements compared with placebo in mean monthly migraine/migrainous and migraine headache days. Topiramate is safe and generally well tolerated in this group of subjects with chronic migraine, a burdensome condition with important unmet treatment needs. Safety and tolerability of topiramate were consistent with experience in previous clinical trials involving the drug." }, { "pmid": "19912346", "title": "A double-blind comparison of onabotulinumtoxina (BOTOX) and topiramate (TOPAMAX) for the prophylactic treatment of chronic migraine: a pilot study.", "abstract": "BACKGROUND\nThere is a need for effective prophylactic therapy for chronic migraine (CM) that has minimal side effects.\n\n\nOBJECTIVE\nTo compare the efficacy and safety of onabotulinumtoxinA (BOTOX), Allergan, Inc., Irvine, CA) and topiramate (TOPAMAX), Ortho-McNeil, Titusville, NJ) prophylactic treatment in patients with CM.\n\n\nMETHODS\nIn this single-center, double-blind trial, patients with CM received either onabotulinumtoxinA, maximum 200 units (U) at baseline and month 3 (100 U fixed-site and 100 U follow-the-pain), plus an oral placebo, or topiramate, 4-week titration to 100 mg/day with option for additional 4-week titration to 200 mg/day, plus placebo saline injections. OnabotulinumtoxinA or placebo saline injection was administered at baseline and month 3 only, while topiramate oral treatment or oral placebo was continued through the end of the study. 
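The topiramate study above (PMID 17300356) analyzed its endpoints with analysis of covariance inside a fixed-sequence (gatekeeper) testing strategy. A minimal change-from-baseline ANCOVA of that general shape can be sketched with statsmodels on simulated data; the variable names and effect sizes below are assumptions for illustration, not the trial's actual model or results.

```python
# Sketch of a change-from-baseline ANCOVA (treatment effect adjusted for the
# baseline value), as commonly used for endpoints like monthly migraine days.
# Data are simulated for illustration; this is not the trial's actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 150
df = pd.DataFrame({
    "group": np.repeat(["topiramate", "placebo"], n),
    "baseline_days": rng.normal(16, 4, 2 * n),
})
true_effect = np.where(df["group"] == "topiramate", -2.0, 0.0)  # assumed effect
df["change_days"] = (-4.5 + true_effect
                     + 0.1 * (df["baseline_days"] - 16)
                     + rng.normal(0, 4, 2 * n))

# ANCOVA: change ~ treatment + baseline covariate
model = smf.ols("change_days ~ C(group) + baseline_days", data=df).fit()
print(model.params)
print(model.pvalues)

# A fixed-sequence (gatekeeper) strategy would simply test the endpoints in a
# prespecified order and stop declaring significance once one test fails.
```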
The primary endpoint was treatment responder rate assessed using Physician Global Assessment 9-point scale (+4 = clearance of signs and symptoms and -4 = very marked worsening [about 100% worse]). Secondary endpoints included the change from baseline in the number of headache (HA)/migraine days per month (HA diary), and HA disability measured using Headache Impact Test (HIT-6), HA diary, Migraine Disability Assessment (MIDAS) questionnaire, and Migraine Impact Questionnaire (MIQ). The overall study duration was approximately 10.5 months, which included a 4-week screening period and a 2-week optional final safety visit. Follow-up visits for assessments occurred at months 1, 3, 6, and 9. Adverse events (AEs) were documented.\n\n\nRESULTS\nOf 60 patients randomized to treatment (mean age, 36.8 +/- 10.3 years; 90% female), 36 completed the study at the end of the 9 months of active treatment (onabotulinumtoxinA, 19/30 [63.3%]; topiramate, 17/30 [56.7%]). In the topiramate group, 7/29 (24.1%) discontinued study because of treatment-related AEs vs 2/26 (7.7%) in the onabotulinumtoxinA group. Between 68% and 83% of patients for both onabotulinumtoxinA and topiramate groups reported at least a slight (25%) improvement in migraine; response to treatment was assessed using Physician Global Assessment at months 1, 3, 6, and 9. Most patients in both groups reported moderate to marked improvements at all time points. No significant between-group differences were observed, except for marked improvement at month 9 (onabotulinumtoxinA, 27.3% vs topiramate, 60.9%, P = .0234, chi-square). In both groups, HA/migraine days decreased and MIDAS and HIT-6 scores improved. Patient-reported quality of life measures assessed using MIQ after treatment with onabotulinumtoxinA paralleled those seen after treatment with topiramate in most respects. At month 9, 40.9% and 42.9% of patients in the onabotulinumtoxinA and topiramate groups, respectively, reported > or =50% reduction in HA/migraine days. Forty-one treatment-related AEs were reported in 18 onabotulinumtoxinA-treated patients vs 87 in 25 topiramate-treated patients, and 2.7% of patients in the onabotulinumtoxinA group and 24.1% of patients in the topiramate group reported AEs that required permanent discontinuation of study treatment.\n\n\nCONCLUSIONS\nOnabotulinumtoxinA and topiramate demonstrated similar efficacy in the prophylactic treatment of CM. Patients receiving onabotulinumtoxinA had fewer AEs and discontinuations." }, { "pmid": "21070228", "title": "A multi-center double-blind pilot comparison of onabotulinumtoxinA and topiramate for the prophylactic treatment of chronic migraine.", "abstract": "OBJECTIVE\nThis multi-center pilot study compared the efficacy of onabotulinumtoxinA with topiramate (a Food and Drug Administration approved and widely accepted treatment for prevention of migraine) in individuals with chronic migraine (CM).\n\n\nMETHODS\nA total of 59 subjects with CM were randomly assigned to one of 2 groups: Group 1 (n = 30) received topiramate plus placebo injections, Group 2 (n = 29) received onabotulinumtoxinA injections plus placebo tablets. Subjects maintained daily headache diaries over a 4-week baseline period and a 12-week active study period. The primary endpoint was the Physician Global Assessment, which measured the treatment responder rate and indicated improvement in both groups over 12 weeks. 
Secondary endpoints, measured at weeks 4 and 12, included headache days per month, migraine days, headache-free days, days on acute medication, severity of headache episodes, Migraine Impact & Disability Assessment, Headache Impact Test, effectiveness of and satisfaction with current treatment on the amount of medication needed, and the frequency and severity of migraine symptoms. At 12 weeks subjects were re-evaluated and tapered off oral study medications over a 2-week time period. Subjects not reporting a >50% reduction of headache frequency at 12 weeks were invited to participate in a 12-week open label extension study with onabotulinumtoxinA. Of these, 20 subjects, 9 from the Topiramate Group and 11 from the OnabotulinumtoxinA Group, volunteered for this extension from weeks 14 to 26.\n\n\nRESULTS\nThis study demonstrated positive benefit for both onabotulinumtoxinA and topiramate in subjects with CM. Overall, the results were statistically significant within groups but not between groups. By week 26, subjects had a reduction of headache days per month compared with baseline. This was a significant within-group finding.\n\n\nCONCLUSION\nOnabotulinumtoxinA and topiramate demonstrated similar efficacy for subjects with CM as determined by Global Physician Assessment and supported by multiple secondary endpoint measures." }, { "pmid": "25431141", "title": "Long-term experience with onabotulinumtoxinA in the treatment of chronic migraine: What happens after one year?", "abstract": "BACKGROUND\nOnabotulinumtoxinA (onabotA) has shown its efficacy over placebo in chronic migraine (CM), but clinical trials lasted only up to one year.\n\n\nOBJECTIVE\nThe objective of this article is to analyse our experience with onabotA treatment of CM, paying special attention to what happens after one year.\n\n\nPATIENTS AND METHODS\nWe reviewed the charts of patients with CM on onabotA. Patients were injected quarterly during the first year but the fifth appointment was delayed to the fourth month to explore the need for further injections.\n\n\nRESULTS\nWe treated 132 CM patients (mean age 47 years; 119 women). A total of 108 (81.8%) showed response during the first year. Adverse events, always transient and mild-moderate, were seen in 19 (14.4%) patients during the first year; two showed frontotemporal muscle atrophy after being treated for more than five years. The mean number of treatments was 7.7 (limits 2-29). Among those 108 patients with treatment longer than one year, 49 (45.4%) worsened prior to the next treatment, which obliged us to return to quarterly injections and injections were stopped in 14: in 10 (9.3%) due to a lack of response and in four due to the disappearance of attacks. In responders, after an average of two years of treatment, consumption of any acute medication was reduced by 53% (62.5% in triptan overusers) and emergency visits decreased 61%.\n\n\nCONCLUSIONS\nOur results confirm the long-term response to onabotA in three-quarters of CM patients. After one year, lack of response occurs in about one out of 10 patients and injections can be delayed, but not stopped, to four months in around 40% of patients. Except for local muscle atrophy in two cases treated more than five years, adverse events are comparable to those already described in short-term clinical trials." 
}, { "pmid": "21956721", "title": "OnabotulinumtoxinA improves quality of life and reduces impact of chronic migraine.", "abstract": "OBJECTIVE\nTo assess the effects of treatment with onabotulinumtoxinA (Botox, Allergan, Inc., Irvine, CA) on health-related quality of life (HRQoL) and headache impact in adults with chronic migraine (CM).\n\n\nMETHODS\nThe Phase III Research Evaluating Migraine Prophylaxis Therapy (PREEMPT) clinical program (PREEMPT 1 and 2) included a 24-week, double-blind phase (2 12-week cycles) followed by a 32-week, open-label phase (3 cycles). Thirty-one injections of 5U each (155 U of onabotulinumtoxinA or placebo) were administered to fixed sites. An additional 40 U could be administered \"following the pain.\" Prespecified analysis of headache impact (Headache Impact Test [HIT]-6) and HRQoL (Migraine-Specific Quality of Life Questionnaire v2.1 [MSQ]) assessments were performed. Because the studies were similar in design and did not notably differ in outcome, pooled results are presented here.\n\n\nRESULTS\nA total of 1,384 subjects were included in the pooled analyses (onabotulinumtoxinA, n = 688; placebo, n = 696). Baseline mean total HIT-6 and MSQ v2.1 scores were comparable between groups; 93.1% were severely impacted based on HIT-6 scores ≥60. At 24 weeks, in comparison with placebo, onabotulinumtoxinA treatment significantly reduced HIT-6 scores and the proportion of patients with HIT-6 scores in the severe range at all timepoints including week 24 (p < 0.001). OnabotulinumtoxinA treatment significantly improved all domains of the MSQ v2.1 at 24 weeks (p < 0.001).\n\n\nCONCLUSIONS\nTreatment of CM with onabotulinumtoxinA is associated with significant and clinically meaningful reductions in headache impact and improvements in HRQoL.\n\n\nCLASSIFICATION OF EVIDENCE\nThis study provides Class 1A evidence that onabotulinumtoxinA treatment reduces headache impact and improves HRQoL." }, { "pmid": "21298315", "title": "Experience with onabotulinumtoxinA (BOTOX) in chronic refractory migraine: focus on severe attacks.", "abstract": "The objective of this study is to analyse our experience in the treatment of refractory chronic migraine (CM) with onabotulinumtoxinA (BTA) and specifically in its effects over disabling attacks. Patients with CM and inadequate response or intolerance to oral preventatives were treated with pericranial injections of 100 U of TBA every 3 months. The dose was increased up to 200 U in case of no response. The patients kept a headache diary. In addition, we specifically asked on the effect of BTA on the frequency of disabling attacks, consumption of triptans and visits to Emergency for the treatment of severe attacks. This series comprises a total of 35 patients (3 males), aged 24-68 years. All except three met IHS criteria for analgesic overuse. The number of sessions with BTA ranged from 2 to 15 (median 4) and nine (26%) responded (reduction of >50% in headache days). However, the frequency of severe attacks was reduced to an average of 46%. Oral triptan consumption (29 patients) was reduced by 50% (from an average of 22 to 11 tablets/month). Those six patients who used subcutaneous sumatriptan reduced its consumption to a mean of 69% (from 4.5 to 1.5 injections per month). Emergency visits went from an average of 3 to 0.4 per trimester (-83%). Six patients complained of mild adverse events, transient local cervical pain being the most common. 
Although our data must be taken with caution as this is an open trial, in clinical practice treatment of refractory CM with BTA reduces the frequency of disabling attacks, the consumption of triptans and the need of visits to Emergency, which makes this treatment a profitable option both clinically and pharmacoeconomically." }, { "pmid": "21499747", "title": "Botulinum toxin type-A in the prophylactic treatment of medication-overuse headache: a multicenter, double-blind, randomized, placebo-controlled, parallel group study.", "abstract": "Medication-overuse headache (MOH) represents a severely disabling condition, with a low response to prophylactic treatments. Recently, consistent evidences have emerged in favor of botulinum toxin type-A (onabotulinum toxin A) as prophylactic treatment in chronic migraine. In a 12-week double-blind, parallel group, placebo-controlled study, we tested the efficacy and safety of onabotulinum toxin A as prophylactic treatment for MOH. A total of 68 patients were randomized (1:1) to onabotulinum toxin A (n = 33) or placebo (n = 35) treatment and received 16 intramuscular injections. The primary efficacy end point was mean change from baseline in the frequency of headache days for the 28-day period ending with week 12. No significant differences between onabotulinum toxin A and placebo treatment were detected in the primary (headache days) end point (12.0 vs. 15.9; p = 0.81). A significant reduction was recorded in the secondary end point, mean acute pain drug consumption at 12 weeks in onabotulinum toxin A-treated patients when compared with those with placebo (12.1 vs. 18.0; p = 0.03). When we considered the subgroup of patients with pericranial muscle tenderness, we recorded a significant improvement in those treated with onabotulinum toxin A compared to placebo treated in both primary (headache days) and secondary end points (acute pain drug consumption, days with drug consumption), as well as in pain intensity and disability measures (HIT-6 and MIDAS) at 12 weeks. Onabotulinum toxin A was safe and well tolerated, with few treatment-related adverse events. Few subjects discontinued due to adverse events. Our data identified the presence of pericranial muscle tenderness as predictor of response to onabotulinum toxin A in patients with complicated form of migraine such as MOH, the presence of pericranial muscle tenderness and support it as prophylactic treatment in these patients." }, { "pmid": "25500317", "title": "Per cent of patients with chronic migraine who responded per onabotulinumtoxinA treatment cycle: PREEMPT.", "abstract": "OBJECTIVE\nThe approved use of onabotulinumtoxinA for prophylaxis of headaches in patients with chronic migraine (CM) involves treatment every 12 weeks. It is currently unknown whether patients who fail to respond to the first onabotulinumtoxinA treatment cycle will respond to subsequent treatment cycles. To help inform decisions about treating non-responders, we examined the probability of treatment cycle 1 non-responders responding in cycle 2, and cycle 1 and 2 non-responders responding in cycle 3.\n\n\nMETHODS\nPooled PREEMPT data (two studies: a 24-week, 2-cycle, double-blind, randomised (1:1), placebo-controlled, parallel-group phase, followed by a 32-week, 3-cycle, open-label phase) evaluated onabotulinumtoxinA (155-195 U) for prophylaxis of headaches in persons with CM (≥15 days/month with headache ≥4 h/day). 
End points of interest included the proportion of study patients who first achieved a ≥50% reduction in headache days, moderate/severe headache days, total cumulative hours of headache on headache days, or a ≥5-point improvement in Headache Impact Test (HIT)-6. For treatment cycle 1, all eligible participants were included. For subsequent cycles, responders in a previous cycle were no longer considered first responders.\n\n\nRESULTS\nAmong onabotulinumtoxinA-treated patients (n=688) 49.3% had a ≥50% reduction in headache-day frequency during treatment cycle 1, with 11.3% and 10.3% of patients first responding during cycles 2 and 3, respectively. 54.2%, 11.6% and 7.4% of patients first responded with a ≥50% reduction in cumulative hours of headache, and 56.3%, 14.5% and 7.7% of patients first responded with a ≥5-point improvement in total HIT-6 during treatment cycles 1, 2 and 3, respectively.\n\n\nCONCLUSIONS\nA meaningful proportion of patients with CM treated with onabotulinumtoxinA who did not respond to the first treatment cycle responded in the second and third cycles of treatment.\n\n\nTRIAL REGISTRATION NUMBER\nNCT00156910, NCT00168428." }, { "pmid": "28527052", "title": "Action mechanisms of Onabotulinum toxin-A: hints for selection of eligible patients.", "abstract": "In the past few decades, the so-feared botulinum toxin has conversely acquired the role of a ever more versatile therapeutic substance, used in an increasing number of pathological situations, including chronic headache and more precisely in the prophylaxis of chronic migraine. The medical use of botulinum toxin allowed to better understand its multiple mechanisms of action. Investigations about the pathophysiology of primary and secondary headaches has shown a series of common biological elements that frequently are also targets of the action of botulinum toxin. These increasing evidences allowed to identify some biochemical, neurophysiological and radiological markers that may be useful in the individuation of patients which probably will respond to the treatment with Onabotulinum toxin-A among chronic migraineurs. These predictors include CGRP plasmatic levels, specific laser-evoked potential responses, peculiar brain MRI and fMRI and characteristic clinical manifestations. Unfortunately, at now, these predictors are still not available for the clinical practice. Furthermore, the better knowledge about biology of headaches and regarding botulinum toxin activities may also help in directing investigations on the possible use of Onabotulinum toxin-A in other headaches different from migraine. This review tries to show in detail these biological mechanisms and their implication in selecting patients eligible for the treatment with Onabotulinum toxin-A." }, { "pmid": "14651415", "title": "A six-item short-form survey for measuring headache impact: the HIT-6.", "abstract": "BACKGROUND\nMigraine and other severe headaches can cause suffering and reduce functioning and productivity. Patients are the best source of information about such impact.\n\n\nOBJECTIVE\nTo develop a new short form (HIT-6) for assessing the impact of headaches that has broad content coverage but is brief as well as reliable and valid enough to use in screening and monitoring patients in clinical research and practice.\n\n\nMETHODS\nHIT-6 items were selected from an existing item pool of 54 items and from 35 items suggested by clinicians. 
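The per-cycle analysis above (PMID 25500317) counts a patient only in the first treatment cycle in which a ≥50% reduction is reached, so responders from earlier cycles are excluded from later ones. A short sketch of that bookkeeping, using invented per-cycle percentage reductions, is shown below.

```python
# Sketch of the "first responder per treatment cycle" bookkeeping described
# above: a patient counts toward the first cycle in which their reduction from
# baseline reaches >=50%, and is not counted again in later cycles.
# percent_reduction[i][c] is an invented % reduction for patient i in cycle c.
percent_reduction = [
    [55, 60, 62],   # responds in cycle 1
    [30, 52, 58],   # first responds in cycle 2
    [10, 20, 51],   # first responds in cycle 3
    [35, 40, 45],   # never responds
]

n_patients = len(percent_reduction)
n_cycles = len(percent_reduction[0])
first_response_counts = [0] * n_cycles

for reductions in percent_reduction:
    for cycle, value in enumerate(reductions):
        if value >= 50:
            first_response_counts[cycle] += 1
            break   # only the first responding cycle is counted

for cycle, count in enumerate(first_response_counts, start=1):
    print(f"cycle {cycle}: {100 * count / n_patients:.1f}% first responders")
```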
Items were selected and modified based on content validity, item response theory (IRT) information functions, item internal consistency, distributions of scores, clinical validity, and linguistic analyses. The HIT-6 was evaluated in an Internet-based survey of headache sufferers (n = 1103) who were members of America Online (AOL). After 14 days, 540 participated in a follow-up survey.\n\n\nRESULTS\nHIT-6 covers six content categories represented in widely used surveys of headache impact. Internal consistency, alternate forms, and test-retest reliability estimates of HIT-6 were 0.89, 0.90, and 0.80, respectively. Individual patient score confidence intervals (95%) of app. +/-5 were observed for 88% of all respondents. In tests of validity in discriminating across diagnostic and headache severity groups, relative validity (RV) coefficients of 0.82 and 1.00 were observed for HIT-6, in comparison with the Total Score. Patient-level classifications based in HIT-6 were accurate 88.7% of the time at the recommended cut-off score for a probability of migraine diagnosis. HIT-6 was responsive to self-reported changes in headache impact.\n\n\nCONCLUSIONS\nThe IRT model estimated for a 'pool' of items from widely used measures of headache impact was useful in constructing an efficient, reliable, and valid 'static' short form (HIT-6) for use in screening and monitoring patient outcomes." }, { "pmid": "17868356", "title": "Predictors of response to botulinum toxin type A (BoNTA) in chronic daily headache.", "abstract": "OBJECTIVE\nTo evaluate predictors of response to botulinum toxin type A (BoNTA; BOTOX, Allergan Inc., Irvine, CA, USA) in patients with chronic daily headache (CDH).\n\n\nBACKGROUND\nChronic migraine (CM) and chronic tension-type headache (CTTH) form the majority of CDH disorders. Controlled trials indicate that BoNTAis effective in reducing the frequency of headache and number of headache days in patients with CDH disorders. A recent migraine study found that patients with imploding or ocular types of headaches were responders to BoNTA, whereas those with exploding headaches were not. To date, there are no data on factors that might predict response to BoNTA in patients with CDH.\n\n\nMETHODS\nA total of 71 patients with CM and 11 patients with CTTH were treated with 100 units BoNTA. Every patient received at least 2 sets of injections at intervals of 12-15 weeks; fixed sites, fixed dose, and \"follow-the-pain\" approaches were used for the injections. A detailed medical history was taken for each patient in addition to recording Migraine Disability Assessment Scale (MIDAS) scores at baseline and every 3 months after each set of injections. Headache frequency was assessed throughout the study from baseline to weeks 24-27. Patients recorded the frequency, severity, and duration of headaches in Headache Diaries. Patients were divided into responders (> or = 50% reduction in both headache frequency and MIDAS scores compared with baseline) and nonresponders (< 50% reduction in either of the above variables). Variables analyzed for predictors of response include headache that is predominantly unilateral or bilateral in location, presence of cutaneous allodynia (scalp allodynia), and presence of pericranial muscle tenderness (also referred to as muscle allodynia). 
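The internal-consistency estimate of 0.89 reported for HIT-6 above (PMID 14651415) is the kind of figure given by Cronbach's alpha over the six items. The sketch below computes the standard alpha formula on simulated Likert-style item scores; it is not the HIT-6 scoring algorithm, and the simulated items are only stand-ins.

```python
# Cronbach's alpha, the usual internal-consistency statistic behind figures
# like the 0.89 reported for HIT-6. Item scores here are simulated; this is
# not the HIT-6 scoring algorithm itself.
import numpy as np

rng = np.random.default_rng(2)
n_respondents, n_items = 500, 6
latent = rng.normal(0, 1, n_respondents)
# six correlated Likert-style items driven by one latent severity factor
items = np.clip(
    np.round(3 + latent[:, None] + rng.normal(0, 0.8, (n_respondents, n_items))),
    1, 5,
)

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```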
Chi-square analysis was used for parallel-group comparisons (proportion of CM responders vs proportion of CM nonresponders and proportion of CTTH responders vs proportion of CTTH nonresponders).\n\n\nRESULTS\nIn the CM group, 76.1% (54 /71) of patients were responders to BoNTA, of which 68.5% (37/54) had headache that was predominantly unilateral in location and the remaining 31.5% (17/54) had headache that was predominantly bilateral in location (both P < .01 vs CM nonresponders). Of the 23.9% (17/71) CM nonresponders, 76.5% (13/17) reported predominantly bilateral headache and in the remaining 23.5% (4/17) the headache was unilateral. In the CM responders group, 81.5% (44/54) had clinically detectable scalp allodynia, while pericranial muscle tenderness was present in 61.1% (33/54) (both P < .01 vs CM nonresponders). The presence of scalp allodynia and pericranial muscle tenderness in the CM nonresponders was 11.8% (2/17) and 17.6% (3/17), respectively. In the CTTH group where all patients (100%, 11/11) had bilateral headache, 36.4% (4/11) of patients were responders to BoNTA. All of those CTTH responders (100%, 4/4) had pericranial muscle tenderness (P < .05 vs CTTH nonresponders). None of the CTTH nonresponders had pericranial muscle tenderness. No clinically significant serious adverse events (AEs) were reported. Mild AEs, eg, injection-site pain that persisted for 1-9 days, were reported in 11 patients. One patient had transient brow ptosis.\n\n\nCONCLUSIONS\nA greater percentage of patients with CM responded to BoNTA than patients with CTTH. Headaches that were predominantly unilateral in location, presence of scalp allodynia, and pericranial muscle tenderness appear to be predictors of response to BoNTA in CM, whereas in CTTH, pericranial muscle tenderness may be a predictor of response." }, { "pmid": "23126597", "title": "Headache direction and aura predict migraine responsiveness to rimabotulinumtoxin B.", "abstract": "OBJECTIVE\nTo report a retrospective analysis of patients with migraine headaches treated with rimabotulinumtoxin B as preventive treatment, investigating an association between clinical responsiveness with migraine directionality and migrainous aura.\n\n\nBACKGROUND\nThe Phase III Research Evaluating Migraine Prophylaxis Therapy studies demonstrated onabotulinumtoxin A is effective in the preventive management of chronic migraine headaches. Jakubowski et al reported greater response to onabotulinumtoxin A in migraine patients reporting inward-directed head pain (imploding or ocular) compared with outward-directed head pain (exploding), suggesting subpopulations of patients may be better candidates for its use. No correlation was found between those reporting migrainous aura and onabotulinumtoxin A responsiveness.\n\n\nMETHODS\nOne hundred twenty-eight migraine patients were identified who had received rimabotulinumtoxin B injections over an average of 22 months, or 7 injection cycles. Migraine directionality was reported as inward directed (imploding, n = 72), eye centered (ocular, n = 28), outward directed (exploding, n = 16), and mixed (n = 12).\n\n\nRESULTS\nOne hundred two out of one hundred twenty-eight patients (80%) improved; of these, 58 (57%) demonstrated a >75% reduction in monthly headache frequency (\">75%-responders\"), 76% of which noted sustained benefits >12 months with repeated injections every 10-12 weeks. 
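The predictor analysis above (PMID 17868356) compares responder and non-responder proportions with chi-square tests. A minimal version of that comparison with SciPy is sketched below; the 2x2 table simply echoes the unilateral/bilateral counts quoted in the abstract and is used here only to illustrate the test.

```python
# Chi-square comparison of responder proportions, as used for the predictor
# analyses above. The 2x2 counts echo the abstract's unilateral/bilateral
# responder figures and serve only as an example input.
from scipy.stats import chi2_contingency, fisher_exact

#                 responder  non-responder
table = [[37, 4],    # predominantly unilateral headache
         [17, 13]]   # predominantly bilateral headache

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")

# With small expected counts, Fisher's exact test is often preferred:
odds_ratio, p_exact = fisher_exact(table)
print(f"Fisher exact p={p_exact:.3f}")
```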
Those reporting ocular- and imploding-directed headaches were significantly more likely to be >75%-responders, compared with exploding- and mixed-directed headaches (P < .0025). Patients with ocular-directed headaches were most likely to be sustained >75%-responders. Patients reporting migrainous aura were more likely to be >75%-responders (P = .0007). Those reporting exploding- and mixed-directed headaches were more likely to be nonresponders (P < .0001).\n\n\nCONCLUSIONS\nReported migraine directionality and presence of migrainous aura predict migraine headache responsiveness to rimabotulinumtoxin B injections." }, { "pmid": "18484982", "title": "Defining refractory migraine and refractory chronic migraine: proposed criteria from the Refractory Headache Special Interest Section of the American Headache Society.", "abstract": "Certain migraines are labeled as refractory, but the entity lacks a well-accepted operational definition. This article summarizes the results of a survey sent to American Headache Society members to evaluate interest in a definition for RM and what were considered necessary criteria. Review of the literature, collaborative discussions and results of the survey contributed to the proposed definition for RM. We also comment on our considerations in formulating the criteria and any issues in making the criteria operational. For the proposed definition for RM and refractory chronic migraine, patients must meet the International Classification of Headache Disorders, Second Edition criteria for migraine or chronic migraine, respectively. Headaches need to cause significant interference with function or quality of life despite modification of triggers, lifestyle factors, and adequate trials of acute and preventive medicines with established efficacy. The definition requires that patients fail adequate trials of preventive medicines, alone or in combination, from at least 2 of 4 drug classes including: beta-blockers, anticonvulsants, tricyclics, and calcium channel blockers. Patients must also fail adequate trials of abortive medicines, including both a triptan and dihydroergotamine (DHE) intranasal or injectable formulation and either nonsteroidal anti-inflammatory drugs (NSAIDs) or combination analgesic, unless contraindicated. An adequate trial is defined as a period of time during which an appropriate dose of medication is administered, typically at least 2 months at optimal or maximum-tolerated dose, unless terminated early due to adverse effects. The definition also employs modifiers for the presence or absence of medication overuse, and with or without significant disability." }, { "pmid": "17069972", "title": "Exploding vs. imploding headache in migraine prophylaxis with Botulinum Toxin A.", "abstract": "Migraine headache is routinely managed using medications that abort attacks as they occur. An alternative approach to migraine management is based on prophylactic medications that reduce attack frequency. One approach has been based on local intramuscular injections of Botulinum Toxin Type A (BTX-A). Here, we explored for neurological markers that might distinguish migraine patients who benefit from BTX-A treatment (100 units divided into 21 injections sites across pericranial and neck muscles). Responders and non-responders to BTX-A treatment were compared prospectively (n=27) and retrospectively (n=36) for a host of neurological symptoms associated with their migraine. Data pooled from all 63 patients are summarized below. 
The number of migraine days per month dropped from 16.0+/-1.7 before BTX-A to 0.8+/-0.3 after BTX-A (down 95.3+/-1.0%) in 39 responders, and remained unchanged (11.3+/-1.9 vs. 11.7+/-1.8) in 24 non-responders. The prevalence of aura, photophobia, phonophobia, osmophobia, nausea, and throbbing was similar between responders and non-responders. However, the two groups offered different accounts of their pain. Among non-responders, 92% described a buildup of pressure inside their head (exploding headache). Among responders, 74% perceived their head to be crushed, clamped or stubbed by external forces (imploding headache), and 13% attested to an eye-popping pain (ocular headache). The finding that exploding headache was impervious to extracranial BTX-A injections is consistent with the prevailing view that migraine pain is mediated by intracranial innervation. The amenability of imploding and ocular headaches to BTX-A treatment suggests that these types of migraine pain involve extracranial innervation as well." }, { "pmid": "24610690", "title": "[Predictive factors of the response to treatment with onabotulinumtoxinA in refractory migraine].", "abstract": "AIM\nTo identify the clinical features that predict a favourable response to onabotulinumtoxinA (OnabotA) treatment in patients with refractory migraine.\n\n\nPATIENTS AND METHODS\nRetrospective analysis of patients with refractory migraine who underwent at least two pericranial injections of OnabotA between 2008 and 2012. Patients were divided into responders and non-responders. Some clinical features including unilateral location of headache, presence of pericranial muscle tension, type of pain (imploding or exploding), duration of migraine (less than or greater than 10 years) and medication overuse were compared between the two groups.\n\n\nRESULTS\n39 patients were included (35 women) with a mean age of 46 years. 18 patients (46.2%) showed a greater than 50% reduction in the number of headache days/month (responders). When analyzing the different features of migraine, we observed that all were equally prevalent in responders and non-responders (p > 0.05): unilateral location (66.7% vs 66.6% respectively), implosive pain (27.8% vs 38.1%), presence of pericranial muscle tension (33.3% vs 38.1%), duration of migraine more than 10 years (77.8% vs 69.2%) and presence of medication overuse (50% vs 81%).\n\n\nCONCLUSION\nWe failed to identify any clinical feature in our patients with refractory migraine that predicts a favourable response to OnabotA treatment." }, { "pmid": "24609509", "title": "A comparative study on classification of sleep stage based on EEG signals using feature selection and classification algorithms.", "abstract": "Sleep scoring is one of the most important diagnostic methods in psychiatry and neurology. Sleep staging is a time-consuming and difficult task undertaken by sleep experts. This study aims to identify a method which would classify sleep stages automatically and with a high degree of accuracy and, in this manner, will assist sleep experts. This study consists of three stages: feature extraction, feature selection from EEG signals, and classification of these signals. In the feature extraction stage, 20 attribute algorithms in four categories are used. 41 feature parameters were obtained from these algorithms. Feature selection is important in the elimination of irrelevant and redundant features and in this manner prediction accuracy is improved and computational overhead in classification is reduced. 
Effective feature selection algorithms such as minimum redundancy maximum relevance (mRMR); fast correlation based feature selection (FCBF); ReliefF; t-test; and Fisher score algorithms are preferred at the feature selection stage in selecting a set of features which best represent EEG signals. The features obtained are used as input parameters for the classification algorithms. At the classification stage, five different classification algorithms (random forest (RF); feed-forward neural network (FFNN); decision tree (DT); support vector machine (SVM); and radial basis function neural network (RBF)) classify the problem. The results, obtained from different classification algorithms, are provided so that a comparison can be made between computation times and accuracy rates. Finally, a classification accuracy of 97.03% is obtained using the proposed method. The results show that the proposed method indicates the ability to design a new intelligent assistance sleep scoring system." }, { "pmid": "16376606", "title": "Automated neonatal seizure detection: a multistage classification system through feature selection based on relevance and redundancy analysis.", "abstract": "OBJECTIVE\nAutomatic seizure detection obtains valuable information concerning duration and timing of seizures. Commonly used methods for EEG seizure detection in adults are inadequate for the same task in neonates because they lack the specific age-dependent characteristics of normal and pathological EEG. This paper presents an automatic seizure detection system for newborns with a focus on feature selection via relevance and redundancy analysis.\n\n\nMETHODS\nTwo linear correlation-based feature selection methods and the ReliefF method were applied to parameterized EEG data acquired from six neonates aged between 39 and 42 weeks. To evaluate the effectiveness of these methods, features extracted from seizure and non-seizure segments were ranked by these methods. The optimized ranked feature subsets were fed into a backpropagation neural network for classification. Its performance was used as an indicator of the feature selection effectiveness.\n\n\nRESULTS\nResults showed an average seizure detection rate of 91%, an average non-seizure detection rate of 95%, an average false rejection rate of 95% and an overall average detection rate of 93% with a false seizure detection rate of 1.17/h.\n\n\nCONCLUSIONS\nThis good performance in detecting newborn ictal activities has been achieved based on an optimized subset of 30 features determined by the ReliefF-based detector, which corresponds to a reduction of the number of features of up to 75%.\n\n\nSIGNIFICANCE\nThe presented approach takes into account specific characteristics of normal and pathological EEG. Thus, it can improve the accuracy of conventional seizure detection systems in newborns." }, { "pmid": "21349795", "title": "Feature selection for accelerometer-based posture analysis in Parkinson's disease.", "abstract": "Posture analysis in quiet standing is a key component of the clinical evaluation of Parkinson's disease (PD), postural instability being one of PD's major symptoms. The aim of this study was to assess the feasibility of using accelerometers to characterize the postural behavior of early mild PD subjects. Twenty PD and 20 control subjects, wearing an accelerometer on the lower back, were tested in five conditions characterized by sensory and attentional perturbation. 
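The sleep-staging study above (PMID 24609509) ranks the 41 extracted features with filters such as mRMR, FCBF, ReliefF, t-test and Fisher score, and then compares several classifiers. Not all of those filters ship with scikit-learn (ReliefF, for example, lives in third-party packages), so the sketch below uses ANOVA F-score and mutual information as stand-in filters on synthetic data; it shows the shape of such a pipeline rather than reproducing the paper's method.

```python
# Schematic feature-selection + classifier-comparison pipeline in the spirit of
# the sleep-staging study above. ANOVA F-score and mutual information stand in
# for filters like mRMR/ReliefF (which need third-party packages); the data are
# synthetic, not EEG features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=41, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

selectors = {
    "anova_f": SelectKBest(f_classif, k=15),
    "mutual_info": SelectKBest(mutual_info_classif, k=15),
}
classifiers = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "svm_rbf": SVC(kernel="rbf", C=1.0, gamma="scale"),
}

for sel_name, selector in selectors.items():
    for clf_name, clf in classifiers.items():
        pipe = make_pipeline(StandardScaler(), selector, clf)
        scores = cross_val_score(pipe, X, y, cv=10)  # 10-fold CV accuracy
        print(f"{sel_name:12s} + {clf_name:13s}: {scores.mean():.3f}")
```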
A total of 175 measures were computed from the signals to quantify tremor, acceleration, and displacement of body sway. Feature selection was implemented to identify the subsets of measures that better characterize the distinctive behavior of PD and control subjects. It was based on different classifiers and on a nested cross validation, to maximize robustness of selection with respect to changes in the training set. Several subsets of three features achieved misclassification rates as low as 5%. Many of them included a tremor-related measure, a postural measure in the frequency domain, and a postural displacement measure. Results suggest that quantitative posture analysis using a single accelerometer and a simple test protocol may provide useful information to characterize early PD subjects. This protocol is potentially usable to monitor the disease's progression." }, { "pmid": "17813860", "title": "Optimization by simulated annealing.", "abstract": "There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods." }, { "pmid": "21968203", "title": "Hepatitis disease diagnosis using a novel hybrid method based on support vector machine and simulated annealing (SVM-SA).", "abstract": "In this study, diagnosis of hepatitis disease, which is a very common and important disease, is conducted with a machine learning method. We have proposed a novel machine learning method that hybridizes support vector machine (SVM) and simulated annealing (SA). Simulated annealing is a stochastic method currently in wide use for difficult optimization problems. Intensively explored support vector machine due to its several unique advantages is successfully verified as a predicting method in recent years. We take the dataset used in our study from the UCI machine learning database. The classification accuracy is obtained via 10-fold cross validation. The obtained classification accuracy of our method is 96.25% and it is very promising with regard to the other classification methods in the literature for this problem." }, { "pmid": "17720704", "title": "A review of feature selection techniques in bioinformatics.", "abstract": "Feature selection techniques have become an apparent need in many bioinformatics applications. In addition to the large pool of techniques that have already been developed in the machine learning and data mining fields, specific applications in bioinformatics have led to a wealth of newly proposed techniques. In this article, we make the interested reader aware of the possibilities of feature selection, providing a basic taxonomy of feature selection techniques, and discussing their use, variety and potential in a number of both common as well as upcoming bioinformatics applications." }, { "pmid": "19260029", "title": "Systems biology and its application to the understanding of neurological diseases.", "abstract": "Recent advances in molecular biology, neurobiology, genetics, and imaging have demonstrated important insights about the nature of neurological diseases. 
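The SVM-SA study above (PMID 21968203) couples a support vector machine with simulated annealing and reports 10-fold cross-validated accuracy, and the annealing abstract (PMID 17813860) gives the general recipe: accept worse solutions with a temperature-dependent probability while cooling. The sketch below applies that idea to SVM hyperparameters; the breast-cancer dataset bundled with scikit-learn stands in for the UCI hepatitis data, and the move sizes and cooling schedule are arbitrary choices, not the paper's settings.

```python
# Minimal simulated-annealing search over SVM hyperparameters, with 10-fold
# cross-validated accuracy as the objective -- a rough sketch of the SVM-SA
# idea above, not a reimplementation of that paper.
import math
import random
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
random.seed(0)

def cv_accuracy(log_c, log_gamma):
    model = make_pipeline(StandardScaler(),
                          SVC(C=10 ** log_c, gamma=10 ** log_gamma))
    return cross_val_score(model, X, y, cv=10).mean()

state = (0.0, -3.0)                 # starting point: log10(C), log10(gamma)
energy = -cv_accuracy(*state)       # SA minimizes, so negate accuracy
best_state, best_energy = state, energy
temperature = 1.0

for step in range(60):
    candidate = (state[0] + random.gauss(0, 0.5),
                 state[1] + random.gauss(0, 0.5))
    cand_energy = -cv_accuracy(*candidate)
    # Metropolis criterion: always accept improvements, sometimes accept worse
    if (cand_energy < energy
            or random.random() < math.exp((energy - cand_energy) / temperature)):
        state, energy = candidate, cand_energy
    if energy < best_energy:
        best_state, best_energy = state, energy
    temperature *= 0.95             # geometric cooling schedule

print(f"best C=10^{best_state[0]:.2f}, gamma=10^{best_state[1]:.2f}, "
      f"CV accuracy={-best_energy:.3f}")
```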
However, a comprehensive understanding of their pathogenesis is still lacking. Although reductionism has been successful in enumerating and characterizing the components of most living organisms, it has failed to generate knowledge on how these components interact in complex arrangements to allow and sustain two of the most fundamental properties of the organism as a whole: its fitness, also termed its robustness, and its capacity to evolve. Systems biology complements the classic reductionist approaches in the biomedical sciences by enabling integration of available molecular, physiological, and clinical information in the context of a quantitative framework typically used by engineers. Systems biology employs tools developed in physics and mathematics such as nonlinear dynamics, control theory, and modeling of dynamic systems. The main goal of a systems approach to biology is to solve questions related to the complexity of living systems such as the brain, which cannot be reconciled solely with the currently available tools of molecular biology and genomics. As an example of the utility of this systems biological approach, network-based analyses of genes involved in hereditary ataxias have demonstrated a set of pathways related to RNA splicing, a novel pathogenic mechanism for these diseases. Network-based analysis is also challenging the current nosology of neurological diseases. This new knowledge will contribute to the development of patient-specific therapeutic approaches, bringing the paradigm of personalized medicine one step closer to reality." }, { "pmid": "16761367", "title": "Machine learning in bioinformatics.", "abstract": "This article reviews machine learning methods for bioinformatics. It presents modelling methods, such as supervised classification, clustering and probabilistic graphical models for knowledge discovery, as well as deterministic and stochastic heuristics for optimization. Applications in genomics, proteomics, systems biology, evolution and text mining are also shown." }, { "pmid": "22281045", "title": "Ensemble transcript interaction networks: a case study on Alzheimer's disease.", "abstract": "Systems biology techniques are a topic of recent interest within the neurological field. Computational intelligence (CI) addresses this holistic perspective by means of consensus or ensemble techniques ultimately capable of uncovering new and relevant findings. In this paper, we propose the application of a CI approach based on ensemble Bayesian network classifiers and multivariate feature subset selection to induce probabilistic dependences that could match or unveil biological relationships. The research focuses on the analysis of high-throughput Alzheimer's disease (AD) transcript profiling. The analysis is conducted from two perspectives. First, we compare the expression profiles of hippocampus subregion entorhinal cortex (EC) samples of AD patients and controls. Second, we use the ensemble approach to study four types of samples: EC and dentate gyrus (DG) samples from both patients and controls. Results disclose transcript interaction networks with remarkable structures and genes not directly related to AD by previous studies. The ensemble is able to identify a variety of transcripts that play key roles in other neurological pathologies. Classical statistical assessment by means of non-parametric tests confirms the relevance of the majority of the transcripts. 
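The Alzheimer's transcript study above (PMID 22281045) relies on ensembles of Bayesian network classifiers with multivariate feature subset selection. A full Bayesian-network ensemble is beyond a short sketch, but the underlying consensus idea, keeping the features that are repeatedly selected across resampled models, can be illustrated simply; the synthetic data and the 60% stability threshold below are assumptions for illustration.

```python
# Consensus ("ensemble") feature selection: count how often each feature is
# chosen across bootstrap resamples and keep the stable ones. This illustrates
# only the consensus idea behind the ensemble approach above; it does not use
# Bayesian network classifiers, which that study employed.
import numpy as np
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           random_state=0)
rng = np.random.default_rng(0)
counts = Counter()

for _ in range(100):                          # 100 bootstrap resamples
    idx = rng.integers(0, len(y), len(y))     # sample with replacement
    selector = SelectKBest(f_classif, k=10).fit(X[idx], y[idx])
    counts.update(np.flatnonzero(selector.get_support()))

stable = sorted(int(feat) for feat, c in counts.items() if c >= 60)  # >=60% of runs
print(f"stable features ({len(stable)}):", stable)
```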
The ensemble approach pinpoints key metabolic mechanisms that could lead to new findings in the pathogenesis and development of AD." }, { "pmid": "16315276", "title": "Model-guided microarray implicates the retromer complex in Alzheimer's disease.", "abstract": "Although, in principle, gene expression profiling is well suited to isolate pathogenic molecules associated with Alzheimer's disease (AD), techniques such as microarray present unique analytic challenges when applied to disorders of the brain. Here, we addressed these challenges by first constructing a spatiotemporal model, predicting a priori how a molecule underlying AD should behave anatomically and over time. Then, guided by the model, we generated gene expression profiles of the entorhinal cortex and the dentate gyrus, harvested from the brains of AD cases and controls covering a broad age span. Among many expression differences, the retromer trafficking molecule VPS35 best conformed to the spatiotemporal model of AD. Western blotting confirmed the abnormality, establishing that VPS35 levels are reduced in brain regions selectively vulnerable to AD. VPS35 is the core molecule of the retromer trafficking complex and further analysis revealed that VPS26, another member of the complex, is also downregulated in AD. Cell culture studies, using small interfering RNAs or expression vectors, showed that VPS35 regulates Abeta peptide levels, establishing the relevance of the retromer complex to AD. Reviewing our findings in the context of recent studies suggests how downregulation of the retromer complex in AD can regulate local levels of Abeta peptide." }, { "pmid": "10225344", "title": "Selected techniques for data mining in medicine.", "abstract": "Widespread use of medical information systems and explosive growth of medical databases require traditional manual data analysis to be coupled with methods for efficient computer-assisted analysis. This paper presents selected data mining techniques that can be applied in medicine, and in particular some machine learning techniques including the mechanisms that make them better suited for the analysis of medical databases (derivation of symbolic rules, use of background knowledge, sensitivity and specificity of induced descriptions). The importance of the interpretability of results of data analysis is discussed and illustrated on selected medical applications." }, { "pmid": "22331030", "title": "Chronic migraine--classification, characteristics and treatment.", "abstract": "According to the revised 2nd Edition of the International Classification of Headache Disorders, primary headaches can be categorized as chronic or episodic; chronic migraine is defined as headaches in the absence of medication overuse, occurring on ≥15 days per month for ≥3 months, of which headaches on ≥8 days must fulfill the criteria for migraine without aura. Prevalence and incidence data for chronic migraine are still uncertain, owing to the heterogeneous definitions used to identify the condition in population-based studies over the past two decades. Chronic migraine is severely disabling and difficult to manage, as affected patients experience substantially more-frequent headaches, comorbid pain and affective disorders, and fewer pain-free intervals, than do those with episodic migraine. Data on the treatment of chronic migraine are scarce because most migraine-prevention trials excluded patients who had headaches for ≥15 days per month. 
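The review above (PMID 22331030) quotes the ICHD-2R definition of chronic migraine: headache on ≥15 days per month for at least 3 months, of which ≥8 days per month meet migraine criteria, in the absence of medication overuse. That definition translates directly into a rule-based check over monthly diary summaries; the field names in the sketch below are hypothetical, not a standard data format.

```python
# Rule-based check of the ICHD-2R chronic-migraine definition quoted above:
# >=15 headache days/month for >=3 months, of which >=8 days/month are migraine
# days, in the absence of medication overuse. The dictionary fields are
# hypothetical diary summaries, not a standard data format.
def is_chronic_migraine(months):
    """months: chronologically ordered dicts with headache_days,
    migraine_days and medication_overuse for each 28/30-day period."""
    if len(months) < 3:
        return False
    recent = months[-3:]                      # the last three months
    return all(
        m["headache_days"] >= 15
        and m["migraine_days"] >= 8
        and not m["medication_overuse"]
        for m in recent
    )

diary = [
    {"headache_days": 18, "migraine_days": 10, "medication_overuse": False},
    {"headache_days": 16, "migraine_days": 9,  "medication_overuse": False},
    {"headache_days": 20, "migraine_days": 12, "medication_overuse": False},
]
print(is_chronic_migraine(diary))   # True for this example diary
```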
Despite this lack of reliable data, a wealth of expert opinion and a few evidence-based treatment options are available for managing chronic migraine. Trial data are available for topiramate and botulinum toxin type A, and expert opinion suggests that conventional preventive therapy for episodic migraine may also be useful. This Review discusses the evolution of our understanding of chronic migraine, including its epidemiology, pathophysiology, clinical characteristics and treatment options." }, { "pmid": "16002144", "title": "Review of a proposed mechanism for the antinociceptive action of botulinum toxin type A.", "abstract": "Botulinum toxin type A (BOTOX) has been used to treat pathological pain conditions although the mechanism is not entirely understood. Subcutaneous (s.c.) BOTOX also inhibits inflammatory pain in the rat formalin model, and the present study examined whether this could be due to a direct action on sensory neurons. BOTOX (3.5-30 U/kg) was injected s.c. into the subplantar surface of the rat hind paw followed 1-5 days later by 50 mL of 5% formalin. Using microdialysis, we found that BOTOX significantly inhibited formalin-induced glutamate release (peak inhibitions: 35%, 41%, and 45% with 3.5, 7, and 15 U/kg, respectively). BOTOX also dose dependently reduced the number of formalin-induced Fos-like immunoreactive cells in the dorsal horn of the spinal cord and significantly (15 and 30 U/kg) inhibited the excitation of wide dynamic range neurons of the dorsal horn in Phase II but not Phase I of the formalin response. These results indicate that s.c. BOTOX inhibits neurotransmitter release from primary sensory neurons in the rat formalin model. Through this mechanism, BOTOX inhibits peripheral sensitization in these models, which leads to an indirect reduction in central sensitization." }, { "pmid": "25523108", "title": "Pharmacological trials in migraine: it's time to reappraise where the headache is and what the pain is like.", "abstract": "Most pharmacological trials deal with migraine as if it were a clinically homogeneous disease, and when detailing its characteristics, they usually report only the presence, or absence, of aura and attack frequency but provide no information on pain location, a non-trivial clinical detail. The past decade has witnessed growing emerging evidence suggesting that individuals with unilateral pain, especially those with associated unilateral cranial autonomic symptoms, are more responsive than others to trigeminal-targeted symptomatic and preventive therapy with drugs such as triptans or botulinum toxin. A simple way for migraine research treatment to take a step forward might be to step back, reappraise, and critically evaluate easily obtainable patient-reported clinical findings along with current knowledge on pain features." }, { "pmid": "19539239", "title": "Origin of pain in migraine: evidence for peripheral sensitisation.", "abstract": "Migraine is the most common neurological disorder, and much has been learned about its mechanisms in recent years. However, the origin of painful impulses in the trigeminal nerve is still uncertain. Despite the attention paid recently to the role of central sensitisation in migraine pathophysiology, in our view, neuronal hyperexcitability depends on activation of peripheral nociceptors. Although the onset of a migraine attack might take place in deep-brain structures, some evidence indicates that the headache phase depends on nociceptive input from perivascular sensory nerve terminals. 
The input from arteries is probably more important than the input from veins. Several studies provide evidence for input from extracranial, dural, and pial arteries but, likewise, there is also evidence against all three of these locations. On balance, afferents are most probably excited in all three territories or the importance of individual territories varies from patient to patient. We suggest that migraine can be explained to patients as a disorder of the brain, and that the headache originates in the sensory fibres that convey pain signals from intracranial and extracranial blood vessels." }, { "pmid": "15836567", "title": "Botulinum toxin type a for the prophylaxis of chronic daily headache: subgroup analysis of patients not receiving other prophylactic medications: a randomized double-blind, placebo-controlled study.", "abstract": "OBJECTIVE\nTo assess the efficacy and safety of botulinum toxin type A (BoNT-A; BOTOX, Allergan, Inc., Irvine, CA) for the prophylaxis of headaches in patients with chronic daily headache (CDH) without the confounding factor of concurrent prophylactic medications.\n\n\nBACKGROUND\nSeveral open-label studies and an 11-month, randomized, double-blind, placebo-controlled study suggest that BoNT-A may be an effective therapy for the prophylaxis of headaches in patients with CDH.\n\n\nDESIGN AND METHODS\nThis was a subgroup analysis of an 11-month, randomized double-blind, placebo-controlled study of BoNT-A for the treatment of adult patients with 16 or more headache days per 30-day periods conducted at 13 North American study centers. All patients had a history of migraine or probable migraine. This analysis involved data for patients who were not receiving concomitant prophylactic headache medication and who constituted 64% of the full study population. Following a 30-day screening period and a 30-day single-blind, placebo injection, eligible patients were injected with BoNT-A or placebo and assessed every 30 days for 9 months The following efficacy measures were analyzed per 30-day periods: change from baseline in number of headache-free days; change from baseline in headache frequency; proportion of patients with at least 30% or at least 50% decrease from baseline in headache frequency; and change from baseline in mean headache severity. Acute medication use was assessed, and adverse events were recorded at each study visit.\n\n\nRESULTS\nOf the 355 patients randomized in the study, 228 (64%) were not taking prophylactic medication and were included in this analysis (117 received BoNT-A, 111 received placebo injections). Mean age was 42.4+/-10.90 years; the mean frequency of headaches per 30 days at baseline was 14.1 for the BoNT-A group and 12.9 for the placebo group (P=.205). After two injection sessions, the maximum change in the mean frequency of headaches per 30 days was -7.8 in the BoNT-A group compared with only -4.5 in the placebo group (P=.032), a statistically significant between-group difference of 3.3 headaches. The between-group difference favoring BoNT-A treatment continued to improve to 4.2 headaches after a third injection session (P=.023). In addition, BoNT-A treatment at least halved the frequency of baseline headaches in over 50% of patients after three injection sessions compared to baseline. Statistically significant differences between BoNT-A and placebo were evident for the change from baseline in headache frequency and headache severity for most time points from day 180 through day 270. 
Only 5 patients (4 patients receiving BoNT-A treatment; 1 patient receiving placebo) discontinued the study due to adverse events and most treatment-related events were transient and mild to moderate in severity.\n\n\nCONCLUSIONS\nBoNT-A is an effective and well-tolerated prophylactic treatment in migraine patients with CDH who are not using other prophylactic medications." } ]
Polymers
30961021
PMC6403594
10.3390/polym10101096
A Novel Fault Diagnosis System on Polymer Insulation of Power Transformers Based on 3-stage GA–SA–SVM OFC Selection and ABC–SVM Classifier
Dissolved gas analysis (DGA) has been widely used in the online monitoring and diagnosis of power transformers. However, the diagnostic accuracy of traditional DGA methods still leaves much room for improvement. In this context, numerous new DGA diagnostic models that combine artificial intelligence with traditional methods have emerged. In this paper, a new artificial-intelligence-based DGA diagnostic system is proposed. The system consists of two modules: an optimal feature combination (OFC) selection module based on 3-stage GA–SA–SVM, and an ABC–SVM fault diagnosis module. The system has been fully implemented and shows strong performance in diagnostic accuracy, reliability, and efficiency. Compared with other artificial intelligence diagnostic methods, the diagnostic system proposed in this paper achieved superior results.
1.2. Related Work

Mainstream transformer fault diagnosis methods include chemical-quantity-based methods and electrical-quantity-based methods [10]. Chemical methods typically include dissolved gas analysis (DGA) [11], degree of polymerization (DP) measurements [12], moisture analysis (MA) [13], and furan analysis by high performance liquid chromatography (HPLC) [14], among others. Electrical methods involve the time domain method [15] and frequency domain polarization measurement [16]. Among these, DGA is the most widely exploited [17]. Since DGA was proposed in 1973, this online method has been widely accepted and used around the world, owing to its economic efficiency and its ability to detect failures in advance, which effectively alleviates the pressure caused by the allowable-time problem [18]. DGA works by detecting hydrogen (H2), methane (CH4), acetylene (C2H2), ethylene (C2H4), ethane (C2H6), carbon monoxide (CO), and carbon dioxide (CO2) dissolved in the transformer oil, gases produced by pyrolysis of the insulation paper (board) cellulose. In this paper, we divide DGA methods into traditional methods and intelligent methods. Traditional methods include the Doernenburg ratio method [19], the Rogers ratio method [20], the IEC 60599 method [21,22], the Duval triangle method [23], and the pentagon method [24]. Despite their long history, most of these methods are unstable in complex operating environments. Intelligent DGA diagnostic methods, on the other hand, have been studied but are still implemented less frequently than the traditional methods [25]. This paper aims to improve this situation.

The development of intelligent diagnosis for power transformers is promising. In general, intelligent diagnosis designs build on the ideas of traditional methods and combine the advantages of both traditional approaches and intelligent algorithms. Recently, intelligent methods such as fuzzy logic inference systems [26], artificial neural networks [27], support vector machines (SVM) [28,29], and other machine learning algorithms have been applied to transformer fault diagnosis with impressive performance [10,30,31]. However, intelligent diagnostic methods also have limitations. For example, fuzzy inference depends heavily on the experience of researchers [32], and local minima and overfitting are two marked weaknesses of artificial neural networks (ANN) [33]. Compared with these methods, SVM has marked advantages for anomaly detection and fault diagnosis [34]: it avoids the local-minimum, dimensionality, and overfitting problems, and it requires smaller training samples.
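As a minimal illustration of the SVM-based diagnostic idea discussed above, the following sketch trains a support vector machine on dissolved-gas feature vectors labelled with fault classes. It is only a sketch under stated assumptions: the gas concentrations, fault labels, and hyperparameters are invented, and it does not reproduce the paper's GA–SA–SVM feature selection or ABC–SVM parameter optimization.

# Minimal sketch: SVM fault classification from DGA gas concentrations.
# All gas values, labels, and hyperparameters below are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: [H2, CH4, C2H2, C2H4, C2H6] concentrations in ppm (hypothetical).
X = np.array([
    [150.0,  30.0,  0.5,  10.0,  20.0],   # labelled as partial discharge
    [ 60.0, 120.0,  1.0,  90.0,  70.0],   # labelled as thermal fault
    [ 80.0,  40.0, 60.0, 110.0,  15.0],   # labelled as arcing
    [ 20.0,  10.0,  0.1,   5.0,   8.0],   # labelled as normal
])
y = np.array(["partial_discharge", "thermal", "arcing", "normal"])

# RBF-kernel SVM with feature scaling.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
print(clf.predict([[70.0, 50.0, 45.0, 100.0, 18.0]]))  # predicted fault class

In practice the input features would be the optimal feature combination selected by the 3-stage GA–SA–SVM module, and the SVM parameters would be tuned by the artificial bee colony search, neither of which is shown here.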
[]
[]
Crime Science
30931233
PMC6404783
10.1186/s40163-018-0094-4
Automatically identifying the function and intent of posts in underground forums
The automatic classification of posts from hacking-related online forums is of potential value for the understanding of user behaviour in social networks relating to cybercrime. We designed annotation schema to label forum posts for three properties: post type, author intent, and addressee. The post type indicates whether the text is a question, a comment, and so on. The author’s intent in writing the post could be positive, negative, moderating discussion, showing gratitude to another user, etc. The addressee of a post tends to be a general audience (e.g. other forum users) or individual users who have already contributed to a threaded discussion. We manually annotated a sample of posts and returned substantial agreement for post type and addressee, and fair agreement for author intent. We trained rule-based (logical) and machine learning (statistical) classification models to predict these labels automatically, and found that a hybrid logical–statistical model performs best for post type and author intent, whereas a purely statistical model is best for addressee. We discuss potential applications for this data, including the analysis of thread conversations in forum data and the identification of key actors within social networks.
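As a purely illustrative rendering of the three annotation properties described in the abstract, the sketch below encodes post type, author intent, and addressee as simple Python enumerations; the specific value lists are assumptions drawn from the examples mentioned above, not the authors' full annotation schema.

# Hypothetical encoding of the three annotation properties for a forum post.
# The enumerated values are assumptions based on the examples in the abstract.
from dataclasses import dataclass
from enum import Enum

class PostType(Enum):
    QUESTION = "question"
    COMMENT = "comment"

class AuthorIntent(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"
    MODERATION = "moderation"
    GRATITUDE = "gratitude"

class Addressee(Enum):
    GENERAL_AUDIENCE = "general audience"
    INDIVIDUAL_USER = "individual user"

@dataclass
class AnnotatedPost:
    text: str
    post_type: PostType
    author_intent: AuthorIntent
    addressee: Addressee

example = AnnotatedPost(
    text="Thanks, that worked perfectly.",
    post_type=PostType.COMMENT,
    author_intent=AuthorIntent.GRATITUDE,
    addressee=Addressee.INDIVIDUAL_USER,
)
print(example.post_type.value, example.author_intent.value, example.addressee.value)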
Related work

Various researchers have studied the linguistic and behavioural conventions of online forums, as well as the best methods for information retrieval and text mining in this domain. Hoogeveen and colleagues (2018) provide a comprehensive overview of the field of web forum retrieval and text analytics. They divide the set of tasks into two: those relating to retrieval and those relating to classification. Our interests span both task types for the purpose of forum user analysis and classification; here we consider classification within the context of information retrieval. Hoogeveen and colleagues look at many forum types, while we focus on hacking-related forums.

Information retrieval refers to the extraction of content, facts, and relations from collections of text and other media. Classification is a type of machine learning which predicts the most probable label y for an instance X (in our case a document). Machine learning may be supervised to varying degrees by human-labelled training data. Unsupervised learning involves a fully automated approach without any pre-labelled training data. Semi-supervised learning relies on a seed set of labelled training instances to start from, with the remainder (usually larger) being unlabelled; the learning algorithm 'bootstraps' from that seed set in a process which is often found to improve on fully unsupervised learning. We adopt a supervised approach in which our classifier is trained on human-labelled data only, since this type of machine learning is still held to yield the highest accuracy outcomes. However, there is clearly a trade-off between accuracy and the human labour involved in preparing the training data. We opted for a supervised approach because the domain is non-standard, linguistically speaking, and we wished to fully explore and understand the type of data we are dealing with. In future work, though, semi-supervised approaches may be of use, as we have a much larger corpus of unlabelled texts than we can feasibly annotate in any reasonable amount of time.

Meanwhile, Lui and Baldwin (2010) share our interest in categorising forum users, though they do so with a higher-dimensional schema than the one we use, labelling the clarity, positivity, effort and proficiency found in users' forum contributions. Thus they can classify a user as an 'unintelligible, demon, slacker hack[er]' (in order of clarity, positivity, effort, proficiency), at worst, or a 'very clear, jolly, strider guru' at best. Multiple annotators labelled a reference set on the basis of users' texts, and automatically extracted features were used in a machine learning experiment. Their features include the presence of emoticons, URLs and 'newbie' terms (all Booleans), word counts, question counts, topic relevance and overlap with previous posts in the thread. We use similar features, and can investigate implementation of their full set in future work.

Portnoff and colleagues (2017) aim to identify forum posts relating to product or currency trade, and to determine what is being bought or sold and for what price. This work has many similarities to ours, in that the first task is to classify posts into different types, and identifying the entities being discussed is a subsequent task of interest. However, they only seek to retrieve posts relating to trade, a narrower focus than ours.
We concur with their observation that forum texts are not like those found in 'well-written English text of The Wall Street Journal', and consequently off-the-shelf natural language processing (NLP) tools, such as part-of-speech taggers, syntactic parsers, and named entity recognisers (as might be used to identify products), perform poorly in this domain. In response they discuss NLP 'building blocks' which might support human analysis of trade in forum data, essentially using lexico-syntactic pattern matching to good effect for the retrieval of products, prices and currency exchange from online forum texts.

Durrett and colleagues elaborate on the Portnoff et al. paper by discussing forum data in the context of 'fine-grained domain adaptation', showing that standard techniques for semi-supervised learning and domain adaptation (e.g. Daumé 2007; Turian et al. 2010; Garrette et al. 2013) work insufficiently well, and that improved methods are needed (Durrett et al. 2017). At the moment we adopt a holistic view of user behaviour on forums; however, if in future work we decide to focus on subsections of forum activity, such as trade-related activity, then the findings and proposals of Portnoff, Durrett and colleagues will be valuable and influential to our own methods.

Li and Chen (2014) construct a pipeline of keyword extraction, thread classification, and deep-learning-based sentiment analysis to identify the top sellers of credit card fraud techniques and stolen data. All stages of their pipeline are of relevance to us because the 'snowball sampling' (a.k.a. 'bootstrapping') method they use for keyword extraction is one we could employ in future work to accelerate knowledge discovery. Thread classification is one of the tasks we discuss in this report, as is sentiment analysis, while 'deep learning' (i.e. unsupervised machine learning with neural networks) is a technique of great potential for the type and size of data we are working with. In Li and Chen's work, sentiment analysis is used as it so often is, to assess whether people have reviewed a product positively or negatively; what is unusual here is that, rather than, say, Amazon, the reviewing forum is a blackhat site, and rather than books, toys or other general consumer goods, the product under review has criminal intent or has been illegally obtained. This is a noteworthy revision of 'vanilla' sentiment analysis and one we can consider for future research using the CrimeBB dataset.

Our work therefore builds on the work of others in the field by adopting existing information retrieval and text classification approaches, applying them to a corpus of wider scope than previously used, and using the resultant dataset for downstream analysis of social networks and identification of key actors in cybercrime communities.
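To make the supervised classification setting concrete, the sketch below trains a simple bag-of-words statistical classifier for post type on a handful of invented example posts. It stands in for, and makes no claim to reproduce, the hybrid logical–statistical models discussed above; the example texts, labels, and model choices are assumptions.

# Minimal sketch: supervised post-type classification from labelled examples.
# The posts, labels, and model choice are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "How do I set up this tool on Windows?",
    "Does anyone know why this script crashes?",
    "Nice tutorial, thanks for sharing.",
    "This worked for me, great post.",
]
labels = ["question", "question", "comment", "comment"]

# TF-IDF features over unigrams and bigrams, fed into a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)
print(model.predict(["Can someone explain how this works?"]))

A rule-based (logical) layer could be combined with such a model, for example by labelling any post that ends in a question mark as a question before falling back to the statistical prediction; this is one plausible reading of the hybrid approach, not the authors' exact design.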
[ "843571" ]
[ { "pmid": "843571", "title": "The measurement of observer agreement for categorical data.", "abstract": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature." } ]
Royal Society Open Science
30891266
PMC6408419
10.1098/rsos.181301
Resilience or robustness: identifying topological vulnerabilities in rail networks
Many critical infrastructure systems have network structures and are under stress. Despite their national importance, the complexity of large-scale transport networks means that we do not fully understand their vulnerabilities to cascade failures. This paper examines the interdependent rail networks in Greater London and the surrounding commuter area. We focus on the morning commuter hours, when the system is under the greatest demand stress. There is increasing evidence that the topological shape of the network plays an important role in dynamic cascades. Here, we examine which of two topological measures, resilience (stability) or robustness (failure), is more appropriate for understanding poor railway performance. The results show that resilience, not robustness, has a strong correlation with the consumer experience statistics. Our results describe the complexity of cascade dynamics on networks without requiring detailed agent-based models, and show that cascade effects are more responsible for poor performance than failures. The network science analysis hints at pathways towards making the network structure more resilient by reducing feedback loops.
1.1. Related work

1.1.1. UK rail network

The UK rail network transports more than 1.7 billion passengers per year, of which 1.1 billion commute in and around London. According to the Office of Rail and Road, last year in and around London only 86.9% of passenger trains arrived on time and 4.8% of journeys were cancelled or significantly late. These delays are often interrelated, and the relationship between cascade effects and network dynamics is not well understood.

In the current literature, most studies consider natural or man-made disasters, but they do not consider the stress on the network during peak hours or how the network structure created by massive flows of people can influence the ability to maintain a good service. For example, several graph-based approaches have been proposed to improve performance by revising the design and maintenance of rail networks [5], but they do not consider dynamic passenger flows. Other studies focus on specific extreme scenarios [6] or unfavourable conditions [7] that cause disruptions.

As our data show, under the same external conditions the major rail companies in and around London show dramatically different performance levels. In this work, we hypothesize that this difference can, in part, be attributed to peak passenger demand. A coupling relationship between flow and network structure can tease out the indicative measures that correlate strongly with overall performance.

1.1.2. Vulnerability of transport networks

The concept of vulnerability of a transportation network, introduced in the literature by Berdica [8], is generally defined as the susceptibility to disruptions that could cause considerable reductions in network service or in the ability to use a particular network link or route at a given time. Many authors have applied general network science disruption analysis. For example, several studies [9–11] have modelled railway vulnerability with promising predictive results. Bababeik et al. [12] recently proposed a mathematical programming model that can identify critical links while taking supply and demand interactions into account under different disruption scenarios. Recent work has also used graph properties to infer interaction strengths and applied an epidemic spreading model to predict delays in railway networks [13].
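As a small illustration of the kind of topological analysis referred to above, the sketch below builds a toy rail-style graph with networkx, ranks nodes by betweenness centrality (a common proxy for load in cascade and vulnerability studies), and then removes the most central node to gauge the impact on connectivity. The station names and edges are invented and do not represent the London network or the paper's actual measures.

# Toy example: betweenness centrality and a crude node-removal check.
# The graph is invented; it does not represent any real rail network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("C", "D"),
    ("B", "E"), ("E", "F"), ("F", "C"),
    ("D", "G"),
])

# Fraction of shortest paths passing through each node.
centrality = nx.betweenness_centrality(G)
for station, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{station}: {score:.3f}")

# Removing the most central node and measuring how the largest connected
# component shrinks gives a simple robustness-style indicator.
most_central = max(centrality, key=centrality.get)
G.remove_node(most_central)
print("largest component after removing", most_central, ":",
      len(max(nx.connected_components(G), key=len)))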
[ "4559589", "28512222", "25799585", "26838176", "29906843", "28496187" ]
[ { "pmid": "28512222", "title": "Looplessness in networks is linked to trophic coherence.", "abstract": "Many natural, complex systems are remarkably stable thanks to an absence of feedback acting on their elements. When described as networks these exhibit few or no cycles, and associated matrices have small leading eigenvalues. It has been suggested that this architecture can confer advantages to the system as a whole, such as \"qualitative stability,\" but this observation does not in itself explain how a loopless structure might arise. We show here that the number of feedback loops in a network, as well as the eigenvalues of associated matrices, is determined by a structural property called trophic coherence, a measure of how neatly nodes fall into distinct levels. Our theory correctly classifies a variety of networks-including those derived from genes, metabolites, species, neurons, words, computers, and trading nations-into two distinct regimes of high and low feedback and provides a null model to gauge the significance of related magnitudes. Because trophic coherence suppresses feedback, whereas an absence of feedback alone does not lead to coherence, our work suggests that the reasons for \"looplessness\" in nature should be sought in coherence-inducing mechanisms." }, { "pmid": "25799585", "title": "Rich-cores in networks.", "abstract": "A core comprises of a group of central and densely connected nodes which governs the overall behaviour of a network. It is recognised as one of the key meso-scale structures in complex networks. Profiling this meso-scale structure currently relies on a limited number of methods which are often complex and parameter dependent or require a null model. As a result, scalability issues are likely to arise when dealing with very large networks together with the need for subjective adjustment of parameters. The notion of a rich-club describes nodes which are essentially the hub of a network, as they play a dominating role in structural and functional properties. The definition of a rich-club naturally emphasises high degree nodes and divides a network into two subgroups. Here, we develop a method to characterise a rich-core in networks by theoretically coupling the underlying principle of a rich-club with the escape time of a random walker. The method is fast, scalable to large networks and completely parameter free. In particular, we show that the evolution of the core in World Trade and C. elegans networks correspond to responses to historical events and key stages in their physical development, respectively." }, { "pmid": "26838176", "title": "Two betweenness centrality measures based on Randomized Shortest Paths.", "abstract": "This paper introduces two new closely related betweenness centrality measures based on the Randomized Shortest Paths (RSP) framework, which fill a gap between traditional network centrality measures based on shortest paths and more recent methods considering random walks or current flows. The framework defines Boltzmann probability distributions over paths of the network which focus on the shortest paths, but also take into account longer paths depending on an inverse temperature parameter. RSP's have previously proven to be useful in defining distance measures on networks. In this work we study their utility in quantifying the importance of the nodes of a network. 
The proposed RSP betweenness centralities combine, in an optimal way, the ideas of using the shortest and purely random paths for analysing the roles of network nodes, avoiding issues involving these two paradigms. We present the derivations of these measures and how they can be computed in an efficient way. In addition, we show with real world examples the potential of the RSP betweenness centralities in identifying interesting nodes of a network that more traditional methods might fail to notice." }, { "pmid": "29906843", "title": "Network overload due to massive attacks.", "abstract": "We study the cascading failure of networks due to overload, using the betweenness centrality of a node as the measure of its load following the Motter and Lai model. We study the fraction of survived nodes at the end of the cascade p_{f} as a function of the strength of the initial attack, measured by the fraction of nodes p that survive the initial attack for different values of tolerance α in random regular and Erdös-Renyi graphs. We find the existence of a first-order phase-transition line p_{t}(α) on a p-α plane, such that if p<p_{t}, the cascade of failures leads to a very small fraction of survived nodes p_{f} and the giant component of the network disappears, while for p>p_{t}, p_{f} is large and the giant component of the network is still present. Exactly at p_{t}, the function p_{f}(p) undergoes a first-order discontinuity. We find that the line p_{t}(α) ends at a critical point (p_{c},α_{c}), in which the cascading failures are replaced by a second-order percolation transition. We find analytically the average betweenness of nodes with different degrees before and after the initial attack, we investigate their roles in the cascading failures, and we find a lower bound for p_{t}(α). We also study the difference between localized and random attacks." }, { "pmid": "28496187", "title": "Geometric explanation of the rich-club phenomenon in complex networks.", "abstract": "The rich club organization (the presence of highly connected hub core in a network) influences many structural and functional characteristics of networks including topology, the efficiency of paths and distribution of load. Despite its major role, the literature contains only a very limited set of models capable of generating networks with realistic rich club structure. One possible reason is that the rich club organization is a divisive property among complex networks which exhibit great diversity, in contrast to other metrics (e.g. diameter, clustering or degree distribution) which seem to behave very similarly across many networks. Here we propose a simple yet powerful geometry-based growing model which can generate realistic complex networks with high rich club diversity by controlling a single geometric parameter. The growing model is validated against the Internet, protein-protein interaction, airport and power grid networks." } ]
Frontiers in Genetics
30886626
PMC6409355
10.3389/fgene.2019.00120
Prediction of Gene Expression Patterns With Generalized Linear Regression Model
Cell reprogramming plays important roles in medical science, including tissue repair, organ reconstruction, disease treatment, new drug development, and new species breeding. Oct4, a core pluripotency factor, plays a particularly important role in somatic cell reprogramming through transcriptional control, and it affects the expression level of genes through its combination intensity. However, the quantitative relationship between Oct4 combination intensity and target gene expression is still not clear. Therefore, a generalized linear regression method was first constructed to predict gene expression values in promoter regions affected by Oct4 combination intensity. Training data, comprising Oct4 combination intensity and target gene expression, were taken from promoter regions of genes at different cell development stages. The quantitative relationship between gene expression and Oct4 combination intensity was then analyzed with the proposed model, and the relationship at each stage of cell development was classified into high and low levels. Experimental analysis showed that the combination height of Oct4-inhibited gene expression decreased exponentially over time, whereas the combination width of Oct4-promoted gene expression increased logarithmically over time. Experimental results showed that the proposed method achieves a good fit with high confidence.
Related Work

Previous studies have reported mechanisms and methods of cell reprogramming. Early on, Gurdon applied the nuclear transfer method to cell reprogramming of Xenopus laevis (Gurdon, 1958). Campbell, McCreath, and Polejaeva produced cloned animals using nuclear transfer technology (Campbell et al., 1996; McCreath et al., 2000; Polejaeva et al., 2000). Håkelien and Hochedlinger analyzed cell reprogramming mechanisms based on nuclear fusion and nuclear transfer technology (Håkelien et al., 2002; Hochedlinger and Jaenisch, 2002). Later, Stadtfeld and Zardo analyzed the effects of specific transcription factors and of the epigenetic plasticity of chromatin on cell reprogramming (Stadtfeld et al., 2008; Zardo et al., 2008). Studies by Hanna and Li showed that overexpression of the transcription factor Oct4 had an effect on cell reprogramming (Hanna et al., 2009; Li et al., 2009). Doege et al. elaborated on the effects of the interaction of Oct4, Sox2, Klf4, and c-Myc during the early stages of cell reprogramming (Doege et al., 2012). Apostolou and Chen found that the dynamic mechanisms of chromatin change and DNA methylation have important effects on cell reprogramming (Apostolou and Hochedlinger, 2013; Chen et al., 2013). Koga et al. analyzed the role of the transcription factor Foxd1 in cell reprogramming (Koga et al., 2014). Recently, Poli and Stadhouders elaborated on the roles of specific transcription factors used as inducing factors in cell reprogramming (Poli et al., 2018; Stadhouders et al., 2018).

The process of cell reprogramming is closely related to the regulation of gene expression. Moreover, regulation of gene expression is the molecular basis of many life activities, including cell differentiation, morphogenesis, and ontogeny (Chen et al., 2016). Early on, Chen and Rimsky analyzed the regulatory effects of cis- and trans-regulatory elements on gene expression (Rimsky et al., 1989; Chen et al., 1990). Later, Ueda et al. analyzed the effects of diurnal variation of transcription factors on gene expression (Ueda et al., 2002). Wittkopp et al. analyzed the effects of the interaction of cis- and trans-regulatory elements on gene expression (Wittkopp et al., 2004). Sullivan et al. studied the regulatory effect of microRNAs encoded by SV40 on gene expression (Sullivan et al., 2005). Jeffery et al. identified factors related to gene expression using gene expression data and transcription factor binding sites (Jeffery et al., 2007). Han et al. found that certain types of genomic organization by SATB1 affect gene expression (Han et al., 2008). Afterward, Costa et al. predicted gene expression in T cell differentiation using histone modifications and transcription factor binding affinities via linear mixture models (Costa et al., 2011). Maienschein-Cline et al. searched for target genes regulated by transcription factors based on information including transcription factor binding sites and target genes (Maienschein-Cline et al., 2012). Lee et al. and Holoch and Moazed analyzed the effects of specific transcription factors and the regulatory effect of RNA on gene expression, respectively (Lee et al., 2013; Holoch and Moazed, 2015). Recently, Engreitz et al. and Singh et al. clarified the effects of lncRNA promoters, transcription factors, alternative splicing, and histone modifications on gene expression (Engreitz et al., 2016; Singh et al., 2016). Thomou et al. and Wu et al. analyzed the effects of miRNAs and histone modifications on gene expression, respectively (Thomou et al., 2017; Wu et al., 2017). Additionally, Duren et al.
predicted gene expression from chromatin accessibility data and cis-acting and trans-acting element data using logistic regression models (Duren et al., 2017). Neumann et al. and Stadhouders et al. analyzed the effects of lncRNAs and of the dynamic interaction of transcription factors on the expression of target genes (Neumann et al., 2018; Stadhouders et al., 2018).

Many methods have been proposed for deciphering the regulatory mechanisms of cis-regulatory and trans-regulatory elements based on gene expression. Studies have shown that gene expression is closely related to Oct4 combination intensity in promoter regions (Machado et al., 2011; Machado, 2017; Yan et al., 2017; Antão et al., 2018). However, the quantitative relationship between gene expression and Oct4 combination intensity has not been considered. Therefore, a generalized linear regression model was first proposed for quantifying the relationship between gene expression and Oct4 combination intensity based on eight gene data points. Testing data were then used to assess the generalization ability of the model. On the one hand, experiments on 27 genes, as well as on all genes, from GEO were used to analyze the quantitative relationship between Oct4 combination intensity and target gene expression at each stage of cell development with our proposed model. On the other hand, the 27 genes were divided into positive and negative samples by our proposed method.
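To illustrate the modelling step in the simplest possible terms, the sketch below fits a Gaussian-family generalized linear model relating a hypothetical Oct4 combination-intensity feature to gene expression values. The numbers are invented, and the model omits the additional covariates and the exponential/logarithmic temporal trends described above; it is a sketch of the general approach, not the paper's implementation.

# Minimal sketch: generalized linear model of gene expression vs. Oct4
# combination (binding) intensity. All values are invented for illustration.
import numpy as np
import statsmodels.api as sm

binding_intensity = np.array([0.5, 1.2, 2.0, 2.8, 3.5, 4.1, 5.0, 6.3])
expression = np.array([1.1, 1.9, 2.6, 3.8, 4.2, 5.1, 5.9, 7.4])

X = sm.add_constant(binding_intensity)               # intercept + slope design matrix
model = sm.GLM(expression, X, family=sm.families.Gaussian()).fit()

intercept, slope = model.params
print(f"intercept={intercept:.3f}, slope={slope:.3f}")
print("predicted expression at intensity 3.0:", intercept + slope * 3.0)

With a Gaussian family and identity link this reduces to ordinary least squares; other link functions or transformed predictors could be substituted to capture the exponential and logarithmic temporal behaviour reported in the abstract.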
[ "24153299", "27563027", "16153702", "8598906", "26832419", "23202127", "2142999", "21342559", "22902501", "28576882", "27783602", "24504871", "11981558", "18337816", "19898493", "11875572", "25554358", "14594712", "17127681", "24496101", "24056933", "19668188", "21672622", "22084256", "10890449", "29339785", "18157115", "10993078", "29523784", "2677743", "27587684", "29335546", "18371448", "15931223", "28199304", "12152080", "22155867", "16170780", "30467462", "17554336", "15229602", "28665997", "18713471", "23409062", "29051499", "18548105", "14630659" ]
[ { "pmid": "24153299", "title": "Chromatin dynamics during cellular reprogramming.", "abstract": "Induced pluripotency is a powerful tool to derive patient-specific stem cells. In addition, it provides a unique assay to study the interplay between transcription factors and chromatin structure. Here, we review the latest insights into chromatin dynamics that are inherent to induced pluripotency. Moreover, we compare and contrast these events with other physiological and pathological processes that involve changes in chromatin and cell state, including germ cell maturation and tumorigenesis. We propose that an integrated view of these seemingly diverse processes could provide mechanistic insights into cell fate transitions in general and might lead to new approaches in regenerative medicine and cancer treatment." }, { "pmid": "27563027", "title": "Prediction of nucleosome positioning by the incorporation of frequencies and distributions of three different nucleotide segment lengths into a general pseudo k-tuple nucleotide composition.", "abstract": "MOTIVATION\nNucleosome positioning plays important roles in many eukaryotic intranuclear processes, such as transcriptional regulation and chromatin structure formation. The investigations of nucleosome positioning rules provide a deeper understanding of these intracellular processes.\n\n\nRESULTS\nNucleosome positioning prediction was performed using a model consisting of three types of variables characterizing a DNA sequence-the number of five-nucleotide sequences, the number of three-nucleotide combinations in one period of a helix, and mono- and di-nucleotide distributions in DNA fragments. Using recently proposed stringent benchmark datasets with low biases for Saccharomyces cerevisiae, Homo sapiens, Caenorhabditis elegans and Drosophila melanogaster, the present model was shown to have a better prediction performance than the recently proposed predictors. This model was able to display the common and organism-dependent factors that affect nucleosome forming and inhibiting sequences as well. Therefore, the predictors developed here can accurately predict nucleosome positioning and help determine the key factors influencing this process.\n\n\nCONTACT\[email protected] information: Supplementary data are available at Bioinformatics online." }, { "pmid": "16153702", "title": "Core transcriptional regulatory circuitry in human embryonic stem cells.", "abstract": "The transcription factors OCT4, SOX2, and NANOG have essential roles in early development and are required for the propagation of undifferentiated embryonic stem (ES) cells in culture. To gain insights into transcriptional regulation of human ES cells, we have identified OCT4, SOX2, and NANOG target genes using genome-scale location analysis. We found, surprisingly, that OCT4, SOX2, and NANOG co-occupy a substantial portion of their target genes. These target genes frequently encode transcription factors, many of which are developmentally important homeodomain proteins. Our data also indicate that OCT4, SOX2, and NANOG collaborate to form regulatory circuitry consisting of autoregulatory and feedforward loops. These results provide new insights into the transcriptional regulation of stem cells and reveal how OCT4, SOX2, and NANOG contribute to pluripotency and self-renewal." 
}, { "pmid": "8598906", "title": "Sheep cloned by nuclear transfer from a cultured cell line.", "abstract": "Nuclear transfer has been used in mammals as both a valuable tool in embryological studies and as a method for the multiplication of 'elite' embryos. Offspring have only been reported when early embryos, or embryo-derived cells during primary culture, were used as nuclear donors. Here we provide the first report, to our knowledge, of live mammalian offspring following nuclear transfer from an established cell line. Lambs were born after cells derived from sheep embryos, which had been cultured for 6 to 13 passages, were induced to quiesce by serum starvation before transfer of their nuclei into enucleated oocytes. Induction of quiescence in the donor cells may modify the donor chromatin structure to help nuclear reprogramming and allow development. This approach will provide the same powerful opportunities for analysis and modification of gene function in livestock species that are available in the mouse through the use of embryonic stem cells." }, { "pmid": "26832419", "title": "Hierarchical Oct4 Binding in Concert with Primed Epigenetic Rearrangements during Somatic Cell Reprogramming.", "abstract": "The core pluripotency factor Oct4 plays key roles in somatic cell reprogramming through transcriptional control. Here, we profile Oct4 occupancy, epigenetic changes, and gene expression in reprogramming. We find that Oct4 binds in a hierarchical manner to target sites with primed epigenetic modifications. Oct4 binding is temporally continuous and seldom switches between bound and unbound. Oct4 occupancy in most of promoters is maintained throughout the entire reprogramming process. In contrast, somatic cell-specific enhancers are silenced in the early and intermediate stages, whereas stem cell-specific enhancers are activated in the late stage in parallel with cell fate transition. Both epigenetic remodeling and Oct4 binding contribute to the hyperdynamic enhancer signature transitions. The hierarchical Oct4 bindings are associated with distinct functional themes at different stages. Collectively, our results provide a comprehensive molecular roadmap of Oct4 binding in concert with epigenetic rearrangements and rich resources for future reprogramming studies." }, { "pmid": "23202127", "title": "H3K9 methylation is a barrier during somatic cell reprogramming into iPSCs.", "abstract": "The induction of pluripotent stem cells (iPSCs) by defined factors is poorly understood stepwise. Here, we show that histone H3 lysine 9 (H3K9) methylation is the primary epigenetic determinant for the intermediate pre-iPSC state, and its removal leads to fully reprogrammed iPSCs. We generated a panel of stable pre-iPSCs that exhibit pluripotent properties but do not activate the core pluripotency network, although they remain sensitive to vitamin C for conversion into iPSCs. Bone morphogenetic proteins (BMPs) were subsequently identified in serum as critical signaling molecules in arresting reprogramming at the pre-iPSC state. Mechanistically, we identified H3K9 methyltransferases as downstream targets of BMPs and showed that they function with their corresponding demethylases as the on/off switch for the pre-iPSC fate by regulating H3K9 methylation status at the core pluripotency loci. Our results not only establish pre-iPSCs as an epigenetically stable signpost along the reprogramming road map, but they also provide mechanistic insights into the epigenetic reprogramming of cell fate." 
}, { "pmid": "2142999", "title": "Autoregulation of pit-1 gene expression mediated by two cis-active promoter elements.", "abstract": "The pit-1 gene is a member of a large family of genes that encode proteins which are involved in development and which contain a highly homologous region, referred to as the POU domain. Pit-1, a pituitary-specific transcription factor, can activate the transcription of the growth hormone and prolactin promoters. It is expressed in mature thyrotroph, somatotroph and lactotroph cell types of the anterior pituitary which arise sequentially during development; somatotrophs and lactotrophs, which secrete growth hormone and prolactin, respectively, are the last to arise. Intriguingly, during ontogeny, pit-1 transcripts are observed in the rat neural tube and neural plate (embryonic day 10-11) and disappear thereafter (day 13), only to reappear exclusively in the anterior lobe of the pituitary gland (day 15) just before activation of prolactin and growth hormone. This biphasic pattern suggests a complex mechanism of initial activation of pit-1 gene expression. Transcription and transfection analyses in vitro using wild-type and mutated promoters indicate that Pit-1 can positively autoregulate the expression of the pit-1 promoter as a consequence of binding to two Pit-1-binding elements. Mutation of the 5' Pit-1-binding site abolished positive autoregulation, whereas mutation of the element located immediately 3' of the cap site markedly increased expression of the pit-1 promoter. These data are consistent with a positive, attenuated autoregulatory loop that seems to function in maintaining pit-1 gene expression." }, { "pmid": "21342559", "title": "Predicting gene expression in T cell differentiation from histone modifications and transcription factor binding affinities by linear mixture models.", "abstract": "BACKGROUND\nThe differentiation process from stem cells to fully differentiated cell types is controlled by the interplay of chromatin modifications and transcription factor activity. Histone modifications or transcription factors frequently act in a multi-functional manner, with a given DNA motif or histone modification conveying both transcriptional repression and activation depending on its location in the promoter and other regulatory signals surrounding it.\n\n\nRESULTS\nTo account for the possible multi functionality of regulatory signals, we model the observed gene expression patterns by a mixture of linear regression models. We apply the approach to identify the underlying histone modifications and transcription factors guiding gene expression of differentiated CD4+ T cells. The method improves the gene expression prediction in relation to the use of a single linear model, as often used by previous approaches. Moreover, it recovered the known role of the modifications H3K4me3 and H3K27me3 in activating cell specific genes and of some transcription factors related to CD4+ T differentiation." }, { "pmid": "22902501", "title": "Early-stage epigenetic modification during somatic cell reprogramming by Parp1 and Tet2.", "abstract": "Somatic cells can be reprogrammed into induced pluripotent stem cells (iPSCs) by using the pluripotency factors Oct4, Sox2, Klf4 and c-Myc (together referred to as OSKM). iPSC reprogramming erases somatic epigenetic signatures—as typified by DNA methylation or histone modification at silent pluripotency loci—and establishes alternative epigenetic marks of embryonic stem cells (ESCs). 
Here we describe an early and essential stage of somatic cell reprogramming, preceding the induction of transcription at endogenous pluripotency loci such as Nanog and Esrrb. By day 4 after transduction with OSKM, two epigenetic modification factors necessary for iPSC generation, namely poly(ADP-ribose) polymerase-1 (Parp1) and ten-eleven translocation-2 (Tet2), are recruited to the Nanog and Esrrb loci. These epigenetic modification factors seem to have complementary roles in the establishment of early epigenetic marks during somatic cell reprogramming: Parp1 functions in the regulation of 5-methylcytosine (5mC) modification, whereas Tet2 is essential for the early generation of 5-hydroxymethylcytosine (5hmC) by the oxidation of 5mC (refs 3,4). Although 5hmC has been proposed to serve primarily as an intermediate in 5mC demethylation to cytosine in certain contexts, our data, and also studies of Tet2-mutant human tumour cells, argue in favour of a role for 5hmC as an epigenetic mark distinct from 5mC. Consistent with this, Parp1 and Tet2 are each needed for the early establishment of histone modifications that typify an activated chromatin state at pluripotency loci, whereas Parp1 induction further promotes accessibility to the Oct4 reprogramming factor. These findings suggest that Parp1 and Tet2 contribute to an epigenetic program that directs subsequent transcriptional induction at pluripotency loci during somatic cell reprogramming." }, { "pmid": "28576882", "title": "Modeling gene regulation from paired expression and chromatin accessibility data.", "abstract": "The rapid increase of genome-wide datasets on gene expression, chromatin states, and transcription factor (TF) binding locations offers an exciting opportunity to interpret the information encoded in genomes and epigenomes. This task can be challenging as it requires joint modeling of context-specific activation of cis-regulatory elements (REs) and the effects on transcription of associated regulatory factors. To meet this challenge, we propose a statistical approach based on paired expression and chromatin accessibility (PECA) data across diverse cellular contexts. In our approach, we model (i) the localization to REs of chromatin regulators (CRs) based on their interaction with sequence-specific TFs, (ii) the activation of REs due to CRs that are localized to them, and (iii) the effect of TFs bound to activated REs on the transcription of target genes (TGs). The transcriptional regulatory network inferred by PECA provides a detailed view of how trans- and cis-regulatory elements work together to affect gene expression in a context-specific manner. We illustrate the feasibility of this approach by analyzing paired expression and accessibility data from the mouse Encyclopedia of DNA Elements (ENCODE) and explore various applications of the resulting model." }, { "pmid": "27783602", "title": "Local regulation of gene expression by lncRNA promoters, transcription and splicing.", "abstract": "Mammalian genomes are pervasively transcribed to produce thousands of long non-coding RNAs (lncRNAs). A few of these lncRNAs have been shown to recruit regulatory complexes through RNA-protein interactions to influence the expression of nearby genes, and it has been suggested that many other lncRNAs can also act as local regulators. Such local functions could explain the observation that lncRNA expression is often correlated with the expression of nearby genes. 
However, these correlations have been challenging to dissect and could alternatively result from processes that are not mediated by the lncRNA transcripts themselves. For example, some gene promoters have been proposed to have dual functions as enhancers, and the process of transcription itself may contribute to gene regulation by recruiting activating factors or remodelling nucleosomes. Here we use genetic manipulation in mouse cell lines to dissect 12 genomic loci that produce lncRNAs and find that 5 of these loci influence the expression of a neighbouring gene in cis. Notably, none of these effects requires the specific lncRNA transcripts themselves and instead involves general processes associated with their production, including enhancer-like activity of gene promoters, the process of transcription, and the splicing of the transcript. Furthermore, such effects are not limited to lncRNA loci: we find that four out of six protein-coding loci also influence the expression of a neighbour. These results demonstrate that cross-talk among neighbouring genes is a prevalent phenomenon that can involve multiple mechanisms and cis-regulatory signals, including a role for RNA splice sites. These mechanisms may explain the function and evolution of some genomic loci that produce lncRNAs and broadly contribute to the regulation of both coding and non-coding genes." }, { "pmid": "24504871", "title": "iNuc-PseKNC: a sequence-based predictor for predicting nucleosome positioning in genomes with pseudo k-tuple nucleotide composition.", "abstract": "MOTIVATION\nNucleosome positioning participates in many cellular activities and plays significant roles in regulating cellular processes. With the avalanche of genome sequences generated in the post-genomic age, it is highly desired to develop automated methods for rapidly and effectively identifying nucleosome positioning. Although some computational methods were proposed, most of them were species specific and neglected the intrinsic local structural properties that might play important roles in determining the nucleosome positioning on a DNA sequence.\n\n\nRESULTS\nHere a predictor called 'iNuc-PseKNC' was developed for predicting nucleosome positioning in Homo sapiens, Caenorhabditis elegans and Drosophila melanogaster genomes, respectively. In the new predictor, the samples of DNA sequences were formulated by a novel feature-vector called 'pseudo k-tuple nucleotide composition', into which six DNA local structural properties were incorporated. It was observed by the rigorous cross-validation tests on the three stringent benchmark datasets that the overall success rates achieved by iNuc-PseKNC in predicting the nucleosome positioning of the aforementioned three genomes were 86.27%, 86.90% and 79.97%, respectively. Meanwhile, the results obtained by iNuc-PseKNC on various benchmark datasets used by the previous investigators for different genomes also indicated that the current predictor remarkably outperformed its counterparts.\n\n\nAVAILABILITY\nA user-friendly web-server, iNuc-PseKNC is freely accessible at http://lin.uestc.edu.cn/server/iNuc-PseKNC." }, { "pmid": "11981558", "title": "Reprogramming fibroblasts to express T-cell functions using cell extracts.", "abstract": "We demonstrate here the functional reprogramming of a somatic cell using a nuclear and cytoplasmic extract derived from another somatic cell type. 
Reprogramming of 293T fibroblasts in an extract from primary human T cells or from a transformed T-cell line is evidenced by nuclear uptake and assembly of transcription factors, induction of activity of a chromatin remodeling complex, histone acetylation, and activation of lymphoid cell specific genes. Reprogrammed cells express T cell specific receptors and assemble the interleukin-2 receptor in response to T cell receptor CD3 (TCR CD3) complex stimulation. Reprogrammed primary skin fibroblasts also express T cell specific antigens. After exposure to a neuronal precursor extract, 293T fibroblasts express a neurofilament protein and extend neurite-like outgrowths. In vitro reprogramming of differentiated somatic cells creates possibilities for producing isogenic replacement cells for therapeutic applications." }, { "pmid": "18337816", "title": "SATB1 reprogrammes gene expression to promote breast tumour growth and metastasis.", "abstract": "Mechanisms underlying global changes in gene expression during tumour progression are poorly understood. SATB1 is a genome organizer that tethers multiple genomic loci and recruits chromatin-remodelling enzymes to regulate chromatin structure and gene expression. Here we show that SATB1 is expressed by aggressive breast cancer cells and its expression level has high prognostic significance (P < 0.0001), independent of lymph-node status. RNA-interference-mediated knockdown of SATB1 in highly aggressive (MDA-MB-231) cancer cells altered the expression of >1,000 genes, reversing tumorigenesis by restoring breast-like acinar polarity and inhibiting tumour growth and metastasis in vivo. Conversely, ectopic SATB1 expression in non-aggressive (SKBR3) cells led to gene expression patterns consistent with aggressive-tumour phenotypes, acquiring metastatic activity in vivo. SATB1 delineates specific epigenetic modifications at target gene loci, directly upregulating metastasis-associated genes while downregulating tumour-suppressor genes. SATB1 reprogrammes chromatin organization and the transcription profiles of breast tumours to promote growth and metastasis; this is a new mechanism of tumour progression." }, { "pmid": "19898493", "title": "Direct cell reprogramming is a stochastic process amenable to acceleration.", "abstract": "Direct reprogramming of somatic cells into induced pluripotent stem (iPS) cells can be achieved by overexpression of Oct4, Sox2, Klf4 and c-Myc transcription factors, but only a minority of donor somatic cells can be reprogrammed to pluripotency. Here we demonstrate that reprogramming by these transcription factors is a continuous stochastic process where almost all mouse donor cells eventually give rise to iPS cells on continued growth and transcription factor expression. Additional inhibition of the p53/p21 pathway or overexpression of Lin28 increased the cell division rate and resulted in an accelerated kinetics of iPS cell formation that was directly proportional to the increase in cell proliferation. In contrast, Nanog overexpression accelerated reprogramming in a predominantly cell-division-rate-independent manner. Quantitative analyses define distinct cell-division-rate-dependent and -independent modes for accelerating the stochastic course of reprogramming, and suggest that the number of cell divisions is a key parameter driving epigenetic reprogramming to pluripotency." 
}, { "pmid": "11875572", "title": "Monoclonal mice generated by nuclear transfer from mature B and T donor cells.", "abstract": "Cloning from somatic cells is inefficient, with most clones dying during gestation. Cloning from embryonic stem (ES) cells is much more effective, suggesting that the nucleus of an embryonic cell is easier to reprogram. It is thus possible that most surviving clones are, in fact, derived from the nuclei of rare somatic stem cells present in adult tissues, rather than from the nuclei of differentiated cells, as has been assumed. Here we report the generation of monoclonal mice by nuclear transfer from mature lymphocytes. In a modified two-step cloning procedure, we established ES cells from cloned blastocysts and injected them into tetraploid blastocysts to generate mice. In this approach, the embryo is derived from the ES cells and the extra-embryonic tissues from the tetraploid host. Animals cloned from a B-cell nucleus were viable and carried fully rearranged immunoglobulin alleles in all tissues. Similarly, a mouse cloned from a T-cell nucleus carried rearranged T-cell-receptor genes in all tissues. This is an unequivocal demonstration that a terminally differentiated cell can be reprogrammed to produce an adult cloned animal." }, { "pmid": "25554358", "title": "RNA-mediated epigenetic regulation of gene expression.", "abstract": "Diverse classes of RNA, ranging from small to long non-coding RNAs, have emerged as key regulators of gene expression, genome stability and defence against foreign genetic elements. Small RNAs modify chromatin structure and silence transcription by guiding Argonaute-containing complexes to complementary nascent RNA scaffolds and then mediating the recruitment of histone and DNA methyltransferases. In addition, recent advances suggest that chromatin-associated long non-coding RNA scaffolds also recruit chromatin-modifying complexes independently of small RNAs. These co-transcriptional silencing mechanisms form powerful RNA surveillance systems that detect and silence inappropriate transcription events, and provide a memory of these events via self-reinforcing epigenetic loops." }, { "pmid": "14594712", "title": "Linear regression and two-class classification with gene expression data.", "abstract": "MOTIVATION\nUsing gene expression data to classify (or predict) tumor types has received much research attention recently. Due to some special features of gene expression data, several new methods have been proposed, including the weighted voting scheme of Golub et al., the compound covariate method of Hedenfalk et al. (originally proposed by Tukey), and the shrunken centroids method of Tibshirani et al. These methods look different and are more or less ad hoc.\n\n\nRESULTS\nWe point out a close connection of the three methods with a linear regression model. Casting the classification problem in the general framework of linear regression naturally leads to new alternatives, such as partial least squares (PLS) methods and penalized PLS (PPLS) methods. Using two real data sets, we show the competitive performance of our new methods when compared with the other three methods." }, { "pmid": "17127681", "title": "Integrating transcription factor binding site information with gene expression datasets.", "abstract": "MOTIVATION\nMicroarrays are widely used to measure gene expression differences between sets of biological samples. Many of these differences will be due to differences in the activities of transcription factors. 
In principle, these differences can be detected by associating motifs in promoters with differences in gene expression levels between the groups. In practice, this is hard to do.\n\n\nRESULTS\nWe combine correspondence analysis, between group analysis and co-inertia analysis to determine which motifs, from a database of promoter motifs, are strongly associated with differences in gene expression levels. Given a database of motifs and gene expression levels from a set of arrays, the method produces a ranked list of motifs associated with any specified split in the arrays. We give an example using the Gene Atlas compendium of gene expression levels for human tissues where we search for motifs that are associated with expression in central nervous system (CNS) or muscle tissues. Most of the motifs that we find are known from previous work to be strongly associated with expression in CNS or muscle. We give a second example using a published prostate cancer dataset where we can simply and clearly find which transcriptional pathways are associated with differences between benign and metastatic samples.\n\n\nAVAILABILITY\nThe source code is freely available upon request from the authors." }, { "pmid": "24496101", "title": "Foxd1 is a mediator and indicator of the cell reprogramming process.", "abstract": "It remains unclear how changes in gene expression profiles that establish a pluripotent state are induced during cell reprogramming. Here we identify two forkhead box transcription factors, Foxd1 and Foxo1, as mediators of gene expression programme changes during reprogramming. Knockdown of Foxd1 or Foxo1 reduces the number of iPSCs, and the double knockdown further reduces it. Knockout of Foxd1 inhibits downstream transcriptional events, including the expression of Dax1, a component of the autoregulatory network for maintaining pluripotency. Interestingly, the expression level of Foxd1 is transiently increased in a small population of cells in the middle stage of reprogramming. The transient Foxd1 upregulation in this stage is correlated with a future cell fate as iPSCs. Fate mapping analyses further reveal that >95% of iPSC colonies are derived from the Foxd1-positive cells. Thus, Foxd1 is a mediator and indicator of successful progression of reprogramming." }, { "pmid": "24056933", "title": "Nanog, Pou5f1 and SoxB1 activate zygotic gene expression during the maternal-to-zygotic transition.", "abstract": "After fertilization, maternal factors direct development and trigger zygotic genome activation (ZGA) at the maternal-to-zygotic transition (MZT). In zebrafish, ZGA is required for gastrulation and clearance of maternal messenger RNAs, which is in part regulated by the conserved microRNA miR-430. However, the factors that activate the zygotic program in vertebrates are unknown. Here we show that Nanog, Pou5f1 (also called Oct4) and SoxB1 regulate zygotic gene activation in zebrafish. We identified several hundred genes directly activated by maternal factors, constituting the first wave of zygotic transcription. Ribosome profiling revealed that nanog, sox19b and pou5f1 are the most highly translated transcription factors pre-MZT. Combined loss of these factors resulted in developmental arrest before gastrulation and a failure to activate >75% of zygotic genes, including miR-430. Our results demonstrate that maternal Nanog, Pou5f1 and SoxB1 are required to initiate the zygotic developmental program and induce clearance of the maternal program by activating miR-430 expression." 
}, { "pmid": "19668188", "title": "The Ink4/Arf locus is a barrier for iPS cell reprogramming.", "abstract": "The mechanisms involved in the reprogramming of differentiated cells into induced pluripotent stem (iPS) cells by the three transcription factors Oct4 (also known as Pou5f1), Klf4 and Sox2 remain poorly understood. The Ink4/Arf locus comprises the Cdkn2a-Cdkn2b genes encoding three potent tumour suppressors, namely p16(Ink4a), p19(Arf) and p15(Ink4b), which are basally expressed in differentiated cells and upregulated by aberrant mitogenic signals. Here we show that the locus is completely silenced in iPS cells, as well as in embryonic stem (ES) cells, acquiring the epigenetic marks of a bivalent chromatin domain, and retaining the ability to be reactivated after differentiation. Cell culture conditions during reprogramming enhance the expression of the Ink4/Arf locus, further highlighting the importance of silencing the locus to allow proliferation and reprogramming. Indeed, the three factors together repress the Ink4/Arf locus soon after their expression and concomitant with the appearance of the first molecular markers of 'stemness'. This downregulation also occurs in cells carrying the oncoprotein large-T, which functionally inactivates the pathways regulated by the Ink4/Arf locus, thus indicating that the silencing of the locus is intrinsic to reprogramming and not the result of a selective process. Genetic inhibition of the Ink4/Arf locus has a profound positive effect on the efficiency of iPS cell generation, increasing both the kinetics of reprogramming and the number of emerging iPS cell colonies. In murine cells, Arf, rather than Ink4a, is the main barrier to reprogramming by activation of p53 (encoded by Trp53) and p21 (encoded by Cdkn1a); whereas, in human fibroblasts, INK4a is more important than ARF. Furthermore, organismal ageing upregulates the Ink4/Arf locus and, accordingly, reprogramming is less efficient in cells from old organisms, but this defect can be rescued by inhibiting the locus with a short hairpin RNA. All together, we conclude that the silencing of Ink4/Arf locus is rate-limiting for reprogramming, and its transient inhibition may significantly improve the generation of iPS cells." }, { "pmid": "21672622", "title": "Wavelet analysis of human DNA.", "abstract": "This paper studies the human DNA in the perspective of signal processing. Six wavelets are tested for analyzing the information content of the human DNA. By adopting real Shannon wavelet several fundamental properties of the code are revealed. A quantitative comparison of the chromosomes and visualization through multidimensional and dendograms is developed." }, { "pmid": "22084256", "title": "Discovering transcription factor regulatory targets using gene expression and binding data.", "abstract": "MOTIVATION\nIdentifying the target genes regulated by transcription factors (TFs) is the most basic step in understanding gene regulation. Recent advances in high-throughput sequencing technology, together with chromatin immunoprecipitation (ChIP), enable mapping TF binding sites genome wide, but it is not possible to infer function from binding alone. 
This is especially true in mammalian systems, where regulation often occurs through long-range enhancers in gene-rich neighborhoods, rather than proximal promoters, preventing straightforward assignment of a binding site to a target gene.\n\n\nRESULTS\nWe present EMBER (Expectation Maximization of Binding and Expression pRofiles), a method that integrates high-throughput binding data (e.g. ChIP-chip or ChIP-seq) with gene expression data (e.g. DNA microarray) via an unsupervised machine learning algorithm for inferring the gene targets of sets of TF binding sites. Genes selected are those that match overrepresented expression patterns, which can be used to provide information about multiple TF regulatory modes. We apply the method to genome-wide human breast cancer data and demonstrate that EMBER confirms a role for the TFs estrogen receptor alpha, retinoic acid receptors alpha and gamma in breast cancer development, whereas the conventional approach of assigning regulatory targets based on proximity does not. Additionally, we compare several predicted target genes from EMBER to interactions inferred previously, examine combinatorial effects of TFs on gene regulation and illustrate the ability of EMBER to discover multiple modes of regulation.\n\n\nAVAILABILITY\nAll code used for this work is available at http://dinner-group.uchicago.edu/downloads.html." }, { "pmid": "10890449", "title": "Production of gene-targeted sheep by nuclear transfer from cultured somatic cells.", "abstract": "It is over a decade since the first demonstration that mouse embryonic stem cells could be used to transfer a predetermined genetic modification to a whole animal. The extension of this technique to other mammalian species, particularly livestock, might bring numerous biomedical benefits, for example, ablation of xenoreactive transplantation antigens, inactivation of genes responsible for neuropathogenic disease and precise placement of transgenes designed to produce proteins for human therapy. Gene targeting has not yet been achieved in mammals other than mice, however, because functional embryonic stem cells have not been derived. Nuclear transfer from cultured somatic cells provides an alternative means of cell-mediated transgenesis. Here we describe efficient and reproducible gene targeting in fetal fibroblasts to place a therapeutic transgene at the ovine alpha1(I) procollagen (COL1A1) locus and the production of live sheep by nuclear transfer." }, { "pmid": "29339785", "title": "The lncRNA GATA6-AS epigenetically regulates endothelial gene expression via interaction with LOXL2.", "abstract": "Impaired or excessive growth of endothelial cells contributes to several diseases. However, the functional involvement of regulatory long non-coding RNAs in these processes is not well defined. Here, we show that the long non-coding antisense transcript of GATA6 (GATA6-AS) interacts with the epigenetic regulator LOXL2 to regulate endothelial gene expression via changes in histone methylation. Using RNA deep sequencing, we find that GATA6-AS is upregulated in endothelial cells during hypoxia. Silencing of GATA6-AS diminishes TGF-β2-induced endothelial-mesenchymal transition in vitro and promotes formation of blood vessels in mice. We identify LOXL2, known to remove activating H3K4me3 chromatin marks, as a GATA6-AS-associated protein, and reveal a set of angiogenesis-related genes that are inversely regulated by LOXL2 and GATA6-AS silencing. 
As GATA6-AS silencing reduces H3K4me3 methylation of two of these genes, periostin and cyclooxygenase-2, we conclude that GATA6-AS acts as negative regulator of nuclear LOXL2 function." }, { "pmid": "18157115", "title": "Reprogramming of human somatic cells to pluripotency with defined factors.", "abstract": "Pluripotency pertains to the cells of early embryos that can generate all of the tissues in the organism. Embryonic stem cells are embryo-derived cell lines that retain pluripotency and represent invaluable tools for research into the mechanisms of tissue formation. Recently, murine fibroblasts have been reprogrammed directly to pluripotency by ectopic expression of four transcription factors (Oct4, Sox2, Klf4 and Myc) to yield induced pluripotent stem (iPS) cells. Using these same factors, we have derived iPS cells from fetal, neonatal and adult human primary cells, including dermal fibroblasts isolated from a skin biopsy of a healthy research subject. Human iPS cells resemble embryonic stem cells in morphology and gene expression and in the capacity to form teratomas in immune-deficient mice. These data demonstrate that defined factors can reprogramme human cells to pluripotency, and establish a method whereby patient-specific cells might be established in culture." }, { "pmid": "10993078", "title": "Cloned pigs produced by nuclear transfer from adult somatic cells.", "abstract": "Since the first report of live mammals produced by nuclear transfer from a cultured differentiated cell population in 1995 (ref. 1), successful development has been obtained in sheep, cattle, mice and goats using a variety of somatic cell types as nuclear donors. The methodology used for embryo reconstruction in each of these species is essentially similar: diploid donor nuclei have been transplanted into enucleated MII oocytes that are activated on, or after transfer. In sheep and goat pre-activated oocytes have also proved successful as cytoplast recipients. The reconstructed embryos are then cultured and selected embryos transferred to surrogate recipients for development to term. In pigs, nuclear transfer has been significantly less successful; a single piglet was reported after transfer of a blastomere nucleus from a four-cell embryo to an enucleated oocyte; however, no live offspring were obtained in studies using somatic cells such as diploid or mitotic fetal fibroblasts as nuclear donors. The development of embryos reconstructed by nuclear transfer is dependent upon a range of factors. Here we investigate some of these factors and report the successful production of cloned piglets from a cultured adult somatic cell population using a new nuclear transfer procedure." }, { "pmid": "29523784", "title": "MYC-driven epigenetic reprogramming favors the onset of tumorigenesis by inducing a stem cell-like state.", "abstract": "Breast cancer consists of highly heterogeneous tumors, whose cell of origin and driver oncogenes are difficult to be uniquely defined. Here we report that MYC acts as tumor reprogramming factor in mammary epithelial cells by inducing an alternative epigenetic program, which triggers loss of cell identity and activation of oncogenic pathways. Overexpression of MYC induces transcriptional repression of lineage-specifying transcription factors, causing decommissioning of luminal-specific enhancers. MYC-driven dedifferentiation supports the onset of a stem cell-like state by inducing the activation of de novo enhancers, which drive the transcriptional activation of oncogenic pathways. 
Furthermore, we demonstrate that the MYC-driven epigenetic reprogramming favors the formation and maintenance of tumor-initiating cells endowed with metastatic capacity. This study supports the notion that MYC-driven tumor initiation relies on cell reprogramming, which is mediated by the activation of MYC-dependent oncogenic enhancers, thus establishing a therapeutic rational for treating basal-like breast cancers." }, { "pmid": "2677743", "title": "Trans-dominant inactivation of HTLV-I and HIV-1 gene expression by mutation of the HTLV-I Rex transactivator.", "abstract": "The rex gene of the type I human T-cell leukaemia virus (HTLV-I) encodes a phosphorylated nuclear protein of relative molecular mass 27,000 which is required for viral replication. The Rex protein acts by promoting the cytoplasmic expression of the incompletely spliced viral messenger RNAs that encode the virion structural proteins. To identify the biologically important peptide domains within Rex, we introduced a series of mutations throughout its sequence. Two distinct classes of mutations lacking Rex biological activity were identified. One class corresponds to trans-dominant repressors as they inhibit the function of the wild-type Rex protein. The second class of mutants, in contrast, are recessive negative, rather than dominant negative, as they are not appropriately targeted to the cell nucleus. These results indicate the presence of at least two functionally distinct domains within the Rex protein, one involved in protein localization and a second involved in effector function. The trans-dominant Rex mutants may represent a promising new class of anti-viral agents." }, { "pmid": "27587684", "title": "DeepChrome: deep-learning for predicting gene expression from histone modifications.", "abstract": "MOTIVATION\nHistone modifications are among the most important factors that control gene regulation. Computational methods that predict gene expression from histone modification signals are highly desirable for understanding their combinatorial effects in gene regulation. This knowledge can help in developing 'epigenetic drugs' for diseases like cancer. Previous studies for quantifying the relationship between histone modifications and gene expression levels either failed to capture combinatorial effects or relied on multiple methods that separate predictions and combinatorial analysis. This paper develops a unified discriminative framework using a deep convolutional neural network to classify gene expression using histone modification data as input. Our system, called DeepChrome, allows automatic extraction of complex interactions among important features. To simultaneously visualize the combinatorial interactions among histone modifications, we propose a novel optimization-based technique that generates feature pattern maps from the learnt deep model. This provides an intuitive description of underlying epigenetic mechanisms that regulate genes.\n\n\nRESULTS\nWe show that DeepChrome outperforms state-of-the-art models like Support Vector Machines and Random Forests for gene expression classification task on 56 different cell-types from REMC database. 
The output of our visualization technique not only validates the previous observations but also allows novel insights about combinatorial interactions among histone modification marks, some of which have recently been observed by experimental studies.\n\n\nAVAILABILITY AND IMPLEMENTATION\nCodes and results are available at www.deepchrome.org\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "29335546", "title": "Transcription factors orchestrate dynamic interplay between genome topology and gene regulation during cell reprogramming.", "abstract": "Chromosomal architecture is known to influence gene expression, yet its role in controlling cell fate remains poorly understood. Reprogramming of somatic cells into pluripotent stem cells (PSCs) by the transcription factors (TFs) OCT4, SOX2, KLF4 and MYC offers an opportunity to address this question but is severely limited by the low proportion of responding cells. We have recently developed a highly efficient reprogramming protocol that synchronously converts somatic into pluripotent stem cells. Here, we used this system to integrate time-resolved changes in genome topology with gene expression, TF binding and chromatin-state dynamics. The results showed that TFs drive topological genome reorganization at multiple architectural levels, often before changes in gene expression. Removal of locus-specific topological barriers can explain why pluripotency genes are activated sequentially, instead of simultaneously, during reprogramming. Together, our results implicate genome topology as an instructive force for implementing transcriptional programs and cell fate in mammals." }, { "pmid": "18371448", "title": "Defining molecular cornerstones during fibroblast to iPS cell reprogramming in mouse.", "abstract": "Ectopic expression of the transcription factors Oct4, Sox2, c-Myc, and Klf4 in fibroblasts generates induced pluripotent stem (iPS) cells. Little is known about the nature and sequence of molecular events accompanying nuclear reprogramming. Using doxycycline-inducible vectors, we have shown that exogenous factors are required for about 10 days, after which cells enter a self-sustaining pluripotent state. We have identified markers that define cell populations prior to and during this transition period. While downregulation of Thy1 and subsequent upregulation of SSEA-1 occur at early time points, reactivation of endogenous Oct4, Sox2, telomerase, and the silent X chromosome mark late events in the reprogramming process. Cell sorting with these markers allows for a significant enrichment of cells with the potential to become iPS cells. Our results suggest that factor-induced reprogramming is a gradual process with defined intermediate cell populations that contain the majority of cells poised to become iPS cells." }, { "pmid": "15931223", "title": "SV40-encoded microRNAs regulate viral gene expression and reduce susceptibility to cytotoxic T cells.", "abstract": "MicroRNAs (miRNAs) are small (approximately 22-nucleotide) RNAs that in lower organisms serve important regulatory roles in development and gene expression, typically by forming imperfect duplexes with target messenger RNAs. miRNAs have also been described in mammalian cells and in infections with Epstein-Barr virus (EBV), but the function of most of them is unknown. 
Although one EBV miRNA probably altered the processing of a viral mRNA, the regulatory significance of this event is uncertain, because other transcripts exist that can supply the targeted function. Here we report the identification of miRNAs encoded by simian virus 40 (SV40) and define their functional significance for viral infection. SVmiRNAs accumulate at late times in infection, are perfectly complementary to early viral mRNAs, and target those mRNAs for cleavage. This reduces the expression of viral T antigens but does not reduce the yield of infectious virus relative to that generated by a mutant lacking SVmiRNAs. However, wild-type SV40-infected cells are less sensitive than the mutant to lysis by cytotoxic T cells, and trigger less cytokine production by such cells. Thus, viral evolution has taken advantage of the miRNA pathway to generate effectors that enhance the probability of successful infection." }, { "pmid": "28199304", "title": "Adipose-derived circulating miRNAs regulate gene expression in other tissues.", "abstract": "Adipose tissue is a major site of energy storage and has a role in the regulation of metabolism through the release of adipokines. Here we show that mice with an adipose-tissue-specific knockout of the microRNA (miRNA)-processing enzyme Dicer (ADicerKO), as well as humans with lipodystrophy, exhibit a substantial decrease in levels of circulating exosomal miRNAs. Transplantation of both white and brown adipose tissue-brown especially-into ADicerKO mice restores the level of numerous circulating miRNAs that are associated with an improvement in glucose tolerance and a reduction in hepatic Fgf21 mRNA and circulating FGF21. This gene regulation can be mimicked by the administration of normal, but not ADicerKO, serum exosomes. Expression of a human-specific miRNA in the brown adipose tissue of one mouse in vivo can also regulate its 3' UTR reporter in the liver of another mouse through serum exosomal transfer. Thus, adipose tissue constitutes an important source of circulating exosomal miRNAs, which can regulate gene expression in distant tissues and thereby serve as a previously undescribed form of adipokine." }, { "pmid": "12152080", "title": "A transcription factor response element for gene expression during circadian night.", "abstract": "Mammalian circadian clocks consist of complex integrated feedback loops that cannot be elucidated without comprehensive measurement of system dynamics and determination of network structures. To dissect such a complicated system, we took a systems-biological approach based on genomic, molecular and cell biological techniques. We profiled suprachiasmatic nuclei and liver genome-wide expression patterns under light/dark cycles and constant darkness. We determined transcription start sites of human orthologues for newly identified cycling genes and then performed bioinformatical searches for relationships between time-of-day specific expression and transcription factor response elements around transcription start sites. Here we demonstrate the role of the Rev-ErbA/ROR response element in gene expression during circadian night, which is in phase with Bmal1 and in antiphase to Per2 oscillations. This role was verified using an in vitro validation system, in which cultured fibroblasts transiently transfected with clock-controlled reporter vectors exhibited robust circadian bioluminescence." 
}, { "pmid": "22155867", "title": "Identifying quantitative trait loci via group-sparse multitask regression and feature selection: an imaging genetics study of the ADNI cohort.", "abstract": "MOTIVATION\nRecent advances in high-throughput genotyping and brain imaging techniques enable new approaches to study the influence of genetic variation on brain structures and functions. Traditional association studies typically employ independent and pairwise univariate analysis, which treats single nucleotide polymorphisms (SNPs) and quantitative traits (QTs) as isolated units and ignores important underlying interacting relationships between the units. New methods are proposed here to overcome this limitation.\n\n\nRESULTS\nTaking into account the interlinked structure within and between SNPs and imaging QTs, we propose a novel Group-Sparse Multi-task Regression and Feature Selection (G-SMuRFS) method to identify quantitative trait loci for multiple disease-relevant QTs and apply it to a study in mild cognitive impairment and Alzheimer's disease. Built upon regression analysis, our model uses a new form of regularization, group ℓ(2,1)-norm (G(2,1)-norm), to incorporate the biological group structures among SNPs induced from their genetic arrangement. The new G(2,1)-norm considers the regression coefficients of all the SNPs in each group with respect to all the QTs together and enforces sparsity at the group level. In addition, an ℓ(2,1)-norm regularization is utilized to couple feature selection across multiple tasks to make use of the shared underlying mechanism among different brain regions. The effectiveness of the proposed method is demonstrated by both clearly improved prediction performance in empirical evaluations and a compact set of selected SNP predictors relevant to the imaging QTs.\n\n\nAVAILABILITY\nSoftware is publicly available at: http://ranger.uta.edu/%7eheng/imaging-genetics/." }, { "pmid": "16170780", "title": "Prediction and evolutionary information analysis of protein solvent accessibility using multiple linear regression.", "abstract": "A multiple linear regression method was applied to predict real values of solvent accessibility from the sequence and evolutionary information. This method allowed us to obtain coefficients of regression and correlation between the occurrence of an amino-acid residue at a specific target and its sequence neighbor positions on the one hand, and the solvent accessibility of that residue on the other. Our linear regression model based on sequence information and evolutionary models was found to predict residue accessibility with 18.9% and 16.2% mean absolute error respectively, which is better than or comparable to the best available methods. A correlation matrix for several neighbor positions to examine the role of evolutionary information at these positions has been developed and analyzed. As expected, the effective frequency of hydrophobic residues at target positions shows a strong negative correlation with solvent accessibility, whereas the reverse is true for charged and polar residues. The correlation of solvent accessibility with effective frequencies at neighboring positions falls abruptly with distance from target residues. Longer protein chains have been found to be more accurately predicted than their smaller counterparts." 
}, { "pmid": "30467462", "title": "Multiple Sclerosis Identification by 14-Layer Convolutional Neural Network With Batch Normalization, Dropout, and Stochastic Pooling.", "abstract": "Aim: Multiple sclerosis is a severe brain and/or spinal cord disease. It may lead to a wide range of symptoms. Hence, the early diagnosis and treatment is quite important. Method: This study proposed a 14-layer convolutional neural network, combined with three advanced techniques: batch normalization, dropout, and stochastic pooling. The output of the stochastic pooling was obtained via sampling from a multinomial distribution formed from the activations of each pooling region. In addition, we used data augmentation method to enhance the training set. In total 10 runs were implemented with the hold-out randomly set for each run. Results: The results showed that our 14-layer CNN secured a sensitivity of 98.77 ± 0.35%, a specificity of 98.76 ± 0.58%, and an accuracy of 98.77 ± 0.39%. Conclusion: Our results were compared with CNN using maximum pooling and average pooling. The comparison shows stochastic pooling gives better performance than other two pooling methods. Furthermore, we compared our proposed method with six state-of-the-art approaches, including five traditional artificial intelligence methods and one deep learning method. The comparison shows our method is superior to all other six state-of-the-art approaches." }, { "pmid": "17554336", "title": "In vitro reprogramming of fibroblasts into a pluripotent ES-cell-like state.", "abstract": "Nuclear transplantation can reprogramme a somatic genome back into an embryonic epigenetic state, and the reprogrammed nucleus can create a cloned animal or produce pluripotent embryonic stem cells. One potential use of the nuclear cloning approach is the derivation of 'customized' embryonic stem (ES) cells for patient-specific cell treatment, but technical and ethical considerations impede the therapeutic application of this technology. Reprogramming of fibroblasts to a pluripotent state can be induced in vitro through ectopic expression of the four transcription factors Oct4 (also called Oct3/4 or Pou5f1), Sox2, c-Myc and Klf4. Here we show that DNA methylation, gene expression and chromatin state of such induced reprogrammed stem cells are similar to those of ES cells. Notably, the cells-derived from mouse fibroblasts-can form viable chimaeras, can contribute to the germ line and can generate live late-term embryos when injected into tetraploid blastocysts. Our results show that the biological potency and epigenetic state of in-vitro-reprogrammed induced pluripotent stem cells are indistinguishable from those of ES cells." }, { "pmid": "15229602", "title": "Evolutionary changes in cis and trans gene regulation.", "abstract": "Differences in gene expression are central to evolution. Such differences can arise from cis-regulatory changes that affect transcription initiation, transcription rate and/or transcript stability in an allele-specific manner, or from trans-regulatory changes that modify the activity or expression of factors that interact with cis-regulatory sequences. Both cis- and trans-regulatory changes contribute to divergent gene expression, but their respective contributions remain largely unknown. Here we examine the distribution of cis- and trans-regulatory changes underlying expression differences between closely related Drosophila species, D. melanogaster and D. 
simulans, and show functional cis-regulatory differences by comparing the relative abundance of species-specific transcripts in F1 hybrids. Differences in trans-regulatory activity were inferred by comparing the ratio of allelic expression in hybrids with the ratio of gene expression between species. Of 29 genes with interspecific expression differences, 28 had differences in cis-regulation, and these changes were sufficient to explain expression divergence for about half of the genes. Trans-regulatory differences affected 55% (16 of 29) of genes, and were always accompanied by cis-regulatory changes. These data indicate that interspecific expression differences are not caused by select trans-regulatory changes with widespread effects, but rather by many cis-acting changes spread throughout the genome." }, { "pmid": "28665997", "title": "Independent regulation of gene expression level and noise by histone modifications.", "abstract": "The inherent stochasticity generates substantial gene expression variation among isogenic cells under identical conditions, which is frequently referred to as gene expression noise or cell-to-cell expression variability. Similar to (average) expression level, expression noise is also subject to natural selection. Yet it has been observed that noise is negatively correlated with expression level, which manifests as a potential constraint for simultaneous optimization of both. Here, we studied expression noise in human embryonic cells with computational analysis on single-cell RNA-seq data and in yeast with flow cytometry experiments. We showed that this coupling is overcome, to a certain degree, by a histone modification strategy in multiple embryonic developmental stages in human, as well as in yeast. Importantly, this epigenetic strategy could fit into a burst-like gene expression model: promoter-localized histone modifications (such as H3K4 methylation) are associated with both burst size and burst frequency, which together influence expression level, while gene-body-localized ones (such as H3K79 methylation) are more associated with burst frequency, which influences both expression level and noise. We further knocked out the only \"writer\" of H3K79 methylation in yeast, and observed that expression noise is indeed increased. Consistently, dosage sensitive genes, such as genes in the Wnt signaling pathway, tend to be marked with gene-body-localized histone modifications, while stress responding genes, such as genes regulating autophagy, tend to be marked with promoter-localized ones. Our findings elucidate that the \"division of labor\" among histone modifications facilitates the independent regulation of expression level and noise, extend the \"histone code\" hypothesis to include expression noise, and shed light on the optimization of transcriptome in evolution." }, { "pmid": "18713471", "title": "A robust linear regression based algorithm for automated evaluation of peptide identifications from shotgun proteomics by use of reversed-phase liquid chromatography retention time.", "abstract": "BACKGROUND\nRejection of false positive peptide matches in database searches of shotgun proteomic experimental data is highly desirable. Several methods have been developed to use the peptide retention time as to refine and improve peptide identifications from database search algorithms. 
This report describes the implementation of an automated approach to reduce false positives and validate peptide matches.\n\n\nRESULTS\nA robust linear regression based algorithm was developed to automate the evaluation of peptide identifications obtained from shotgun proteomic experiments. The algorithm scores peptides based on their predicted and observed reversed-phase liquid chromatography retention times. The robust algorithm does not require internal or external peptide standards to train or calibrate the linear regression model used for peptide retention time prediction. The algorithm is generic and can be incorporated into any database search program to perform automated evaluation of the candidate peptide matches based on their retention times. It provides a statistical score for each peptide match based on its retention time.\n\n\nCONCLUSION\nAnalysis of peptide matches where the retention time score was included resulted in a significant reduction of false positive matches with little effect on the number of true positives. Overall higher sensitivities and specificities were achieved for database searches carried out with MassMatrix, Mascot and X!Tandem after implementation of the retention time based score algorithm." }, { "pmid": "23409062", "title": "iSNO-PseAAC: predict cysteine S-nitrosylation sites in proteins by incorporating position specific amino acid propensity into pseudo amino acid composition.", "abstract": "Posttranslational modifications (PTMs) of proteins are responsible for sensing and transducing signals to regulate various cellular functions and signaling events. S-nitrosylation (SNO) is one of the most important and universal PTMs. With the avalanche of protein sequences generated in the post-genomic age, it is highly desired to develop computational methods for timely identifying the exact SNO sites in proteins because this kind of information is very useful for both basic research and drug development. Here, a new predictor, called iSNO-PseAAC, was developed for identifying the SNO sites in proteins by incorporating the position-specific amino acid propensity (PSAAP) into the general form of pseudo amino acid composition (PseAAC). The predictor was implemented using the conditional random field (CRF) algorithm. As a demonstration, a benchmark dataset was constructed that contains 731 SNO sites and 810 non-SNO sites. To reduce the homology bias, none of these sites were derived from the proteins that had [Formula: see text] pairwise sequence identity to any other. It was observed that the overall cross-validation success rate achieved by iSNO-PseAAC in identifying nitrosylated proteins on an independent dataset was over 90%, indicating that the new predictor is quite promising. Furthermore, a user-friendly web-server for iSNO-PseAAC was established at http://app.aporc.org/iSNO-PseAAC/, by which users can easily obtain the desired results without the need to follow the mathematical equations involved during the process of developing the prediction method. It is anticipated that iSNO-PseAAC may become a useful high throughput tool for identifying the SNO sites, or at the very least play a complementary role to the existing methods in this area." }, { "pmid": "29051499", "title": "An integrative method to decode regulatory logics in gene transcription.", "abstract": "Modeling of transcriptional regulatory networks (TRNs) has been increasingly used to dissect the nature of gene regulation. 
Inference of regulatory relationships among transcription factors (TFs) and genes, especially among multiple TFs, is still challenging. In this study, we introduced an integrative method, LogicTRN, to decode TF-TF interactions that form TF logics in regulating target genes. By combining cis-regulatory logics and transcriptional kinetics into one single model framework, LogicTRN can naturally integrate dynamic gene expression data and TF-DNA-binding signals in order to identify the TF logics and to reconstruct the underlying TRNs. We evaluated the newly developed methodology using simulation, comparison and application studies, and the results not only show their consistence with existing knowledge, but also demonstrate its ability to accurately reconstruct TRNs in biological complex systems." }, { "pmid": "18548105", "title": "Epigenetic plasticity of chromatin in embryonic and hematopoietic stem/progenitor cells: therapeutic potential of cell reprogramming.", "abstract": "During embryonic development and adult life, the plasticity and reversibility of modifications that affect the chromatin structure is important in the expression of genes involved in cell fate decisions and the maintenance of cell-differentiated state. Epigenetic changes in DNA and chromatin, which must occur to allow the accessibility of transcriptional factors at specific DNA-binding sites, are regarded as emerging major players for embryonic and hematopoietic stem cell (HSC) development and lineage differentiation. Epigenetic deregulation of gene expression, whether it be in conjunction with chromosomal alterations and gene mutations or not, is a newly recognized mechanism that leads to several diseases, including leukemia. The reversibility of epigenetic modifications makes DNA and chromatin changes attractive targets for therapeutic intervention. Here we review some of the epigenetic mechanisms that regulate gene expression in pluripotent embryonic and multipotent HSCs but may be deregulated in leukemia, and the clinical approaches designed to target the chromatin structure in leukemic cells." }, { "pmid": "14630659", "title": "Missing-value estimation using linear and non-linear regression with Bayesian gene selection.", "abstract": "MOTIVATION\nData from microarray experiments are usually in the form of large matrices of expression levels of genes under different experimental conditions. Owing to various reasons, there are frequently missing values. Estimating these missing values is important because they affect downstream analysis, such as clustering, classification and network design. Several methods of missing-value estimation are in use. The problem has two parts: (1) selection of genes for estimation and (2) design of an estimation rule.\n\n\nRESULTS\nWe propose Bayesian variable selection to obtain genes to be used for estimation, and employ both linear and nonlinear regression for the estimation rule itself. Fast implementation issues for these methods are discussed, including the use of QR decomposition for parameter estimation. The proposed methods are tested on data sets arising from hereditary breast cancer and small round blue-cell tumors. The results compare very favorably with currently used methods based on the normalized root-mean-square error.\n\n\nAVAILABILITY\nThe appendix is available from http://gspsnap.tamu.edu/gspweb/zxb/missing_zxb/ (user: gspweb; passwd: gsplab)." } ]
Frontiers in Psychology
30886595
PMC6409498
10.3389/fpsyg.2019.00344
Bowing Gestures Classification in Violin Performance: A Machine Learning Approach
Gestures in music are of paramount importance partly because they are directly linked to musicians' sound and expressiveness. At the same time, current motion capture technologies are capable of detecting body motion/gesture details very accurately. We present a machine learning approach to automatic violin bow gesture classification based on Hierarchical Hidden Markov Models (HHMM) and motion data. We recorded motion and audio data corresponding to seven representative bow techniques (Détaché, Martelé, Spiccato, Ricochet, Sautillé, Staccato, and Bariolage) performed by a professional violin player. We used the commercial Myo device for recording inertial motion information from the right forearm and synchronized it with audio recordings. The data were uploaded to an online public repository. After extracting features from both the motion and audio data, we trained an HHMM to identify the different bowing techniques automatically. Our model can determine the studied bowing techniques with over 94% accuracy. The results make the application of this work feasible in a practical learning scenario, where violin students can benefit from the real-time feedback provided by the system.
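As a rough illustration of the feature-extraction step mentioned in the abstract, the sketch below (Python/NumPy) slices an inertial (IMU) stream into overlapping windows and computes simple per-channel statistics. The sampling rate, window and hop lengths, and the chosen statistics are illustrative assumptions, not the authors' actual feature set.

```python
import numpy as np

def window_features(imu, sr=50, win_s=0.5, hop_s=0.25):
    """Slice a (T, C) IMU stream (e.g., Myo accelerometer + gyroscope
    channels) into overlapping windows and compute simple per-channel
    statistics. Window/hop lengths are illustrative assumptions."""
    win = int(win_s * sr)
    hop = int(hop_s * sr)
    feats = []
    for start in range(0, len(imu) - win + 1, hop):
        seg = imu[start:start + win]                      # (win, C)
        feats.append(np.concatenate([
            seg.mean(axis=0),                             # average level per channel
            seg.std(axis=0),                              # variability per channel
            np.abs(np.diff(seg, axis=0)).mean(axis=0),    # roughness of the motion
        ]))
    return np.asarray(feats)                              # (n_windows, 3 * C)

# Example: 10 s of synthetic 9-channel IMU data sampled at 50 Hz
X = window_features(np.random.randn(500, 9))
print(X.shape)
```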
2. Related Work2.1. Automatic Gesture RecognitionAmong many existing machine learning algorithms, Hidden Markov models (HMMs) have been widely applied to motion and gesture recognition. HMMs describe motion-temporal signature events with internal discrete probabilistic states defined by Gaussian progressions (Brand et al., 1997; Wilson and Bobick, 1999; Bevilacqua et al., 2010; Caramiaux and Tanaka, 2013). They have been applied to music education, interactive installations, live performances, and studies in non-verbal motion communication. Yamato et al. (1992) is probably the first work to apply HMMs to describe temporal events in consecutive image sequences. The resulting model identified six different tennis stroke gestures with high accuracy (around 90%). Brand et al. (1997) presented a method based on two coupled HMMs as a suitable strategy for highly accurate action recognition and description over discrete temporal events. In their study, they defined T'ai Chi gestures tracked by a set of two cameras, from which a blob is extracted to form a 3D model of the hands' centroids. The authors argued that simple HMMs were not accurate enough, whereas coupled HMMs succeeded in both classification and regression. Wilson and Bobick (1999) introduced an on-line algorithm for learning and classifying gestural postures in the context of interactive interface design. The authors applied computer vision techniques to extract body and hand positions from camera information and defined an HMM with a structure based on Markov chains to identify when a gesture is being performed without previous training. In another study, conducted by Yoon et al. (2001), an HMM is used to develop a hand tracking, hand location, and gesture identification system based on computer vision techniques. Based on a database consisting of hand positions, velocities, and angles, it employs k-means clustering together with an HMM to accurately classify 2,400 hand gestures. The resulting system can control a graphical editor consisting of twelve 2D primitive shapes (lines, rectangles, triangles, etc.) and 36 alphanumeric characters. Haker et al. (2009) presented a detector of deictic gestures based on a time-of-flight (TOF) camera. The model can determine whether a pointing gesture refers to specific information projected on a board; it also controls a slide show, switching to the previous or next slide with a thumb-index finger gesture. Kerber et al. (2017) presented a method based on a support vector machine (SVM) in a custom Python program to recognize 40 gestures in real time. Gestures are defined by finger dispositions and hand orientations. Motion data is acquired using the Myo device, and the system reaches an overall gesture recognition accuracy of 95%. The authors implemented an automatic matrix transposition that allows the user to wear the armband on either forearm without requiring precise alignment.2.2. Automatic Music Gesture RecognitionThere have been several approaches to studying gestures in a musical context. Sawada and Hashimoto (1997) applied an IMU device consisting of an accelerometer sensor to describe rotational and directional attributes and classify musical gestural expressions. Their motivation was to measure non-verbal communication and emotional intentions in music performance. They applied tempo recognition to orchestra conductors to describe how the gestural information is imprinted in the musical outcome. Peiper et al.
(2003) presented a study of violin bow articulation classification. They applied a decision tree algorithm to identify four standard bow articulations: Détaché, Martelé, Spiccato, and Staccato. The gestural information is extracted using an electromagnetic motion tracking device mounted close to the performer's right hand. The visual outcome is displayed in a CAVE, a room with a four-wall projection setup for immersive virtual reality applications and research. Their system reported high accuracy (around 85%) when classifying two gestures; however, the accuracy decreased to 71% when four or more articulations were considered.Kolesnik and Wanderley (2005) implemented a discrete Hidden Markov Model for gestural timing recognition and applied it to perform and generate musical or gesture-related sounds. Their model can be trained with arbitrary gestures to track the user's motion. Gibet et al. (2005) developed an "augmented violin" as an acoustic instrument with aggregated gestural electronic-sound manipulation. They modeled a k-Nearest Neighbor (k-NN) algorithm for the classification of three standard violin bow strokes: Détaché, Martelé, and Spiccato. The authors used an analog device (an ADXL202) placed at the bow frog to transmit bow inertial motion information. It consisted of two accelerometers to detect bowing direction. Gibet et al. (2005) described a linear discrete analysis to identify important spatial dissimilarities among bow articulations, giving highly accurate gestural predictions for the three models presented (Détaché 96.7%, Martelé 85.8%, and Spiccato 89.0%). They also described a k-NN model with 100% estimation accuracy for Détaché and Martelé, and 68.7% for Spiccato. They conclude that accuracy is directly related to dynamics, i.e., pp, mf, and ff. Caramiaux et al. (2009) presented a real-time gesture follower and recognition model based on HMMs. The system was applied to music education, music performances, dance performances, and interactive installations. Vatavu et al. (2009) proposed a naive detection algorithm to discretize temporal events in a two-dimensional gestural drawing matrix. The similarity between two gestures (template vs. new drawing) is computed with a minimum alignment cost between the curvature functions of both gestures. Bianco et al. (2009) addressed the question of how to describe acoustic sound variations directly mapped to gesture performance articulations, based on principal component analysis (PCA) and segmentation in their sound analysis. They focused on the study of a professional trumpet player performing a set of exercises with specific dynamical changes. The authors claimed that the relationship between gestures and sound is not linear, hypothesizing that at least two motor-cortex control events are involved in the performance of single notes.Caramiaux et al. (2009) presented a method called canonical correlation analysis (CCA) as a gestural tool to describe the relationship between sound and its corresponding motion-gestural actions in musical performance. The study is based on the principle that speech and gestures are complementary and co-expressive in human communication. Also, imagined speech can be observed as muscular activity in the mandibular area.
The study described the features extracted to characterize body movement, defining a multi-dimensional stream of coordinates, velocity vectors, and accelerations that represents a trajectory over time, as well as its correlation with sound features, giving insight into methodologies for extracting useful information and describing sound-gesture relationships. Tuuri (2009) proposed a gesture-based model as an interface for sound design. Following the principle that stereotypical gestural expression communicates intentions and represents non-linguistic meanings, sound can be modeled as an extension of the dynamical changes naturally involved in those gestures. In his study, he described body movement as a source of semantics for sound design.Bevilacqua et al. (2010) presented a study in which an HMM-based system is implemented. Their goal was not to describe a specific gestural repertoire; instead, they proposed an optimal "low-cost" algorithm for gestural classification of any kind without the need for big datasets. Gillian et al. (2011) presented a different approach to the standard Markov model described above. They extended Dynamic Time Warping (DTW) to classify N-dimensional signals with a low number of training samples, achieving an accuracy rate of 99%. To test the DTW algorithms, the authors first defined a set of 10 gestures as "air drawing" articulations of the right hand. The final gestural repertoire consisted of drawn numbers from 1 to 5, a square, a circle, a triangle, and horizontal and vertical gestural lines similar to orchestral conducting. Their methodology, in conclusion, gives a valid and general approach to classifying any gesture. In the same year, a study conducted by Van Der Linden et al. (2011) described the invention of a set of sensors and wearables called MusicJacket. They aimed to give postural feedback and bowing technique references to novice violin players. The authors reported that vibrotactile feedback directly engages the subjects' motor learning systems, correcting their postures almost immediately, shortening the period needed to acquire motor skills, and reducing cognitive overload.Schedel and Fiebrink (2011) implemented the Wekinator application (Fiebrink and Cook, 2010) to classify seven standard cello bow articulations such as legato, spiccato, and marcato, among others, using a commercial IMU device known as the K-Bow for motion data acquisition. The cello performer used a foot pedal to stop and start articulation training examples. For each stroke, she varied the string, bow position, bow pressure, and bow speed. After training a model, the cellist evaluated it by demonstrating different articulations. The authors created an interactive system for composition and sound manipulation in real time based on the bow gesture classifications. Françoise et al. (2014) introduced the "mapping by demonstration" principle, where users create their gestural repertoire from simple, direct examples in real time. Françoise et al. (2012, 2014) presented a set of probabilistic models [i.e., Gaussian Mixture Models (GMM), Gaussian Mixture Regression (GMR), Hierarchical HMM (HHMM) and Multimodal Hierarchical HMM (MHMM), Schnell et al., 2009] and compared their features for real-time sound mapping manipulation.In the context of IoMusT, Turchet et al. (2018b) extended a percussive instrument called the Cajón with embedded technology such as piezo pickups, a condenser microphone, and a BeagleBone Black audio processor board with WiFi connectivity.
The authors applied machine learning (k-NN) and real-time onset detection techniques to classify the hit locations, dynamics, and gestural timbres of professional performers, with accuracies over 90% for timbre estimation and 100% for onset and hit-location detection.
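Many of the systems surveyed in this related-work section classify a gesture by training one HMM per gesture class and assigning a new observation sequence to the class whose model yields the highest likelihood. The sketch below illustrates that generic per-class scheme with the hmmlearn library; the feature dimensionality, number of hidden states, and synthetic training data are placeholder assumptions, and this is not the hierarchical HMM (HHMM) proposed in the present work.

```python
import numpy as np
from hmmlearn import hmm   # pip install hmmlearn

def train_per_class_hmms(sequences_by_class, n_states=4):
    """Fit one Gaussian-emission HMM per gesture class.
    sequences_by_class: dict label -> list of (T_i, D) feature arrays."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                      # stack all sequences of this class
        lengths = [len(s) for s in seqs]         # per-sequence lengths for hmmlearn
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    """Return the class whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(seq))

# Toy example with two synthetic 'bowing' classes in a 6-D feature space
rng = np.random.default_rng(0)
data = {
    "detache": [rng.normal(0.0, 1.0, (40, 6)) for _ in range(5)],
    "martele": [rng.normal(2.0, 1.0, (40, 6)) for _ in range(5)],
}
models = train_per_class_hmms(data)
print(classify(models, rng.normal(2.0, 1.0, (40, 6))))   # expected: "martele"
```

One appeal of this per-class generative scheme is extensibility: adding a new bowing technique only requires fitting one additional model, without retraining the others.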
[ "13800728" ]
[]
Frontiers in Neuroscience
30899212
PMC6416793
10.3389/fnins.2019.00095
Going Deeper in Spiking Neural Networks: VGG and Residual Architectures
Over the past few years, Spiking Neural Networks (SNNs) have become popular as a possible pathway to enable low-power event-driven neuromorphic hardware. However, their application in machine learning has largely been limited to very shallow neural network architectures for simple problems. In this paper, we propose a novel algorithmic technique for generating an SNN with a deep architecture, and demonstrate its effectiveness on complex visual recognition problems such as CIFAR-10 and ImageNet. Our technique applies to both VGG and Residual network architectures, with significantly better accuracy than the state of the art. Finally, we present an analysis of the sparse event-driven computations to demonstrate reduced hardware overhead when operating in the spiking domain.
2. Related WorkBroadly, there are two main categories for training SNNs: supervised and unsupervised. Although unsupervised learning mechanisms like Spike-Timing Dependent Plasticity (STDP) are attractive for the implementation of low-power on-chip local learning, they are still outperformed by supervised networks even on simple digit recognition benchmarks like the MNIST dataset (Diehl and Cook, 2015). Driven by this fact, a particular category of supervised SNN learning algorithms attempts to train ANNs using standard training schemes like backpropagation (to leverage the superior performance of standard training techniques for ANNs) and subsequently convert them to event-driven SNNs for network operation (Pérez-Carrasco et al., 2013; Cao et al., 2015; Diehl et al., 2015; Zhao et al., 2015). This can be particularly appealing for NN implementations in low-power neuromorphic hardware specialized for SNNs (Merolla et al., 2014; Akopyan et al., 2015) or interfacing with silicon cochleas or event-driven sensors (Posch et al., 2011, 2014). Our work falls in this category and is based on the ANN-SNN conversion scheme proposed by the authors in Diehl et al. (2015). However, while prior work considers only the ANN operation during the conversion process, we show that considering the actual SNN operation during the conversion step is crucial for achieving minimal loss in classification accuracy. To that effect, we propose a novel weight-normalization technique that ensures that the actual SNN operation is in the loop during the conversion phase. Note that this work tries to exploit neural activation sparsity by converting networks to the spiking domain for power-efficient hardware implementation and is complementary to efforts aimed at exploring sparsity in synaptic connections (Han et al., 2015a).
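As background for the ANN-SNN conversion scheme discussed above (Diehl et al., 2015), the sketch below illustrates the core rate-coding idea for a single fully connected ReLU layer: weights are rescaled by the maximum observed activation, inputs are presented as stochastic spike trains, and integrate-and-fire neurons with a subtract-threshold reset produce firing rates that approximate the normalized ReLU outputs. The layer size, threshold, and number of time steps are arbitrary assumptions, and this is a simplified illustration rather than the SNN-in-the-loop normalization proposed in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.uniform(0.0, 0.5, size=(10, 4))       # weights of one ReLU layer (in=10, out=4)
x = rng.uniform(0.0, 1.0, size=10)            # analog input in [0, 1]

# ANN reference output
ann_out = np.maximum(W.T @ x, 0.0)

# Weight normalization: scale so the largest observed activation is ~1
W_snn = W / max(ann_out.max(), 1e-9)

# Simulate integrate-and-fire neurons driven by rate-coded input spikes
T, v_th = 500, 1.0
v = np.zeros(4)                               # membrane potentials
spike_count = np.zeros(4)
for _ in range(T):
    in_spikes = (rng.random(10) < x).astype(float)   # Bernoulli spikes with rate x
    v += W_snn.T @ in_spikes                         # integrate synaptic input
    fired = v >= v_th
    spike_count += fired
    v[fired] -= v_th                                 # soft reset (subtract threshold)

snn_rates = spike_count / T
print("ANN (normalized):", ann_out / max(ann_out.max(), 1e-9))
print("SNN firing rates:", snn_rates)                # should roughly match
```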
[ "26941637", "27651489", "22518097", "18244602", "25104385", "24051730", "29375284", "25347889" ]
[ { "pmid": "26941637", "title": "Unsupervised learning of digit recognition using spike-timing-dependent plasticity.", "abstract": "In order to understand how the mammalian neocortex is performing computations, two things are necessary; we need to have a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Therefore, in recent years there is an increasing interest in how spiking neural networks (SNN) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs which use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to a SNN. We present a SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks." }, { "pmid": "27651489", "title": "Convolutional networks for fast, energy-efficient neuromorphic computing.", "abstract": "Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer." }, { "pmid": "22518097", "title": "Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing.", "abstract": "Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. 
Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process frame by frame video information in a very robust and fast way that requires to use and share the available hardware resources (such as: multipliers, adders). Hardware resources are fixed- and time-multiplexed by fetching data in and out. Thus memory bandwidth and size is important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expansible. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGA have been already used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented and also discussions about their differences, pros and cons." }, { "pmid": "18244602", "title": "Simple model of spiking neurons.", "abstract": "A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons. The model combines the biologically plausibility of Hodgkin-Huxley-type dynamics and the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) using a desktop PC." }, { "pmid": "25104385", "title": "Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.", "abstract": "Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts." }, { "pmid": "24051730", "title": "Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate coding and coincidence processing--application to feedforward ConvNets.", "abstract": "Event-driven visual sensors have attracted interest from a number of different research communities. 
They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given \"frame rate.\" Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or \"temporal contrast.\" The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to \"reality.\" These events can be processed \"as they flow\" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules." }, { "pmid": "29375284", "title": "Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.", "abstract": "Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications." }, { "pmid": "25347889", "title": "Feedforward Categorization on AER Motion Events Using Cortex-Like Features in a Spiking Neural Network.", "abstract": "This paper introduces an event-driven feedforward categorization system, which takes data from a temporal contrast address event representation (AER) sensor. 
The proposed system extracts bio-inspired cortex-like features and discriminates different patterns using an AER based tempotron classifier (a network of leaky integrate-and-fire spiking neurons). One of the system's most appealing characteristics is its event-driven processing, with both input and features taking the form of address events (spikes). The system was evaluated on an AER posture dataset and compared with two recently developed bio-inspired models. Experimental results have shown that it consumes much less simulation time while still maintaining comparable performance. In addition, experiments on the Mixed National Institute of Standards and Technology (MNIST) image dataset have demonstrated that the proposed system can work not only on raw AER data but also on images (with a preprocessing step to convert images into AER events) and that it can maintain competitive accuracy even when noise is added. The system was further evaluated on the MNIST dynamic vision sensor dataset (in which data is recorded using an AER dynamic vision sensor), with testing accuracy of 88.14%." } ]
BMC Medical Informatics and Decision Making
30866913
PMC6417112
10.1186/s12911-019-0792-1
Predicting hospital-acquired pneumonia among schizophrenic patients: a machine learning approach
Background: Medications are frequently used to treat schizophrenia; however, anti-psychotic drug use is known to lead to cases of pneumonia. The purpose of our study is to build a model for predicting hospital-acquired pneumonia among schizophrenic patients by adopting machine learning techniques. Methods: Data on a total of 185 schizophrenic in-patients diagnosed with pneumonia between 2013 and 2018 at a Taiwanese district mental hospital were gathered. Eleven predictors, namely gender, age, clozapine use, drug-drug interaction, dosage, duration of medication, coughing, change of leukocyte count, change of neutrophil count, change of blood sugar level, and change of body weight, were used to predict the onset of pneumonia. Seven machine learning algorithms, namely classification and regression tree, decision tree, k-nearest neighbors, naïve Bayes, random forest, support vector machine, and logistic regression, were used to build the predictive models in this study. Accuracy, area under the receiver operating characteristic curve, sensitivity, specificity, and kappa were used to measure overall model performance. Results: Among the seven machine learning algorithms, random forest and decision tree achieved the best predictive accuracy. Further, the six most important risk factors, namely dosage, clozapine use, duration of medication, change of neutrophil count, change of leukocyte count, and drug-drug interaction, were identified. Conclusions: Although schizophrenic patients remain susceptible to pneumonia whenever treated with anti-psychotic drugs, our predictive model may serve as a useful support tool for physicians treating such patients.
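The abstract above names seven classifiers and five evaluation metrics. The following minimal sketch, using scikit-learn on synthetic stand-in data, illustrates how such a comparison could be set up; the real 185-patient dataset, its preprocessing, and the authors' exact training protocol are not available here, so the data, split, and hyperparameters below are assumptions rather than the study's implementation.

```python
# Hypothetical sketch of a classifier comparison; NOT the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier      # also stands in for CART here
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, roc_auc_score, recall_score,
                             cohen_kappa_score, confusion_matrix)

rng = np.random.default_rng(0)
X = rng.normal(size=(185, 11))        # 11 predictors (synthetic stand-in)
y = rng.integers(0, 2, size=185)      # pneumonia onset label (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
    "SVM": SVC(probability=True, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(f"{name:20s} acc={accuracy_score(y_te, pred):.2f} "
          f"auc={roc_auc_score(y_te, prob):.2f} "
          f"sens={recall_score(y_te, pred):.2f} "
          f"spec={tn / (tn + fp):.2f} "
          f"kappa={cohen_kappa_score(y_te, pred):.2f}")
```

Feature importances from the fitted random forest (its feature_importances_ attribute) could then be inspected to rank risk factors, analogous to the six factors reported in the Results.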
Related work
To date, various studies have analyzed risk factors for contracting pneumonia based on traditional statistical models, which require strict assumptions. Their findings revealed that multiple risk factors can influence the occurrence of pneumonia among patients. For example, Mortensen, Coley, Singer, Marrie, Obrosky, Kapoor and Fine [2] reported that leukopenia was one of the factors associated with pneumonia mortality. Manabe, Teramoto, Tamiya, Okochi and Hizawa [8] concluded that sputum suctioning, deterioration of the swallowing function, dehydration, and dementia were all risk factors associated with aspiration pneumonia. Gupta, Boville, Blanton, Lukasiewicz, Wincek, Bai and Forbes [7] found that mechanically ventilated patients have an increased risk of mortality. Regarding pneumonia-related studies with schizophrenic patients as subjects, anti-psychotic drugs, despite being efficacious for treating schizophrenia, may cause unanticipated side-effects. For example, several previous studies have found that anti-psychotic drugs can lead to the development of pneumonia [5, 12]. Further, drug-drug interaction between anti-psychotic drugs and anxiolytic or anti-convulsive drugs could accelerate the occurrence of pneumonia [13]. Evidence [6, 14] also showed that community-acquired pneumonia in elderly patients was associated with taking anti-psychotic drugs. Women were more likely than men to have a recurrence of pneumonia. The mechanism underlying the influence of anti-psychotics remains unclear, but cardiopulmonary complications [15], agranulocytosis [16], and abnormal glucose regulation [17] have been reported. Moreover, Kuo, Yang, Liao, Chen, Lee, Shau, Chang, Tsai and Chen [5] reported that although an increased risk of pneumonia was detected for the available anti-psychotics in general, only clozapine was associated with a dose-dependent increase. Therefore, the use and titration of clozapine poses a greater threat to patients under long-term management of schizophrenia.
Recently, a number of studies have adopted machine learning techniques to predict various issues concerning pneumonia. For example, Cooper, Aliferis, Ambrosino, Aronis, Buchanan, Caruana, Fine, Glymour, Gordon, Hanusa, et al. [9] applied eight machine learning methods to predict the mortality of inpatients with pneumonia, and found that neural networks, hierarchical mixtures of experts, and logistic regression attained the lowest error rates. Chapman, Fizman, Chapman and Haug [18] adopted machine learning algorithms, including expert rules, Bayesian networks, and decision trees, to identify onset pneumonia from thoracic X-ray reports; the three algorithms differed in sensitivity, specificity, and precision, but their performance was similar to that of physicians. Heckerling, Gerber, Tape and Wigton [10] integrated neural networks and genetic algorithms for predicting community-acquired pneumonia, and found that the genetic algorithms helped optimize the neural networks. Caruana, Lou, Gehrke, Koch, Sturm and Elhadad [19] utilized high-performance generalized additive models with pairwise interactions to predict the probability of death due to pneumonia; their results show that the proposed algorithm outperforms alternatives such as logistic regression, random forest, and LogitBoost. Kim, Diggans, Pankratz, Huang, Pagan, Sindy, Tom, Anderson, Choi, Lynch, et al. [11] developed a machine learning model to classify usual interstitial pneumonia patients, and concluded that their model is feasible for predicting the occurrence of usual interstitial pneumonia.
A review of the literature reveals a clear gap in pneumonia-related studies. Despite a great deal of previous research on the risk factors and outcomes of pneumonia [2, 7, 8], little research using machine learning techniques has been carried out specifically for schizophrenic patients. Given the special characteristics of schizophrenia and its possible influence on patients' health conditions, it is imperative to develop a predictive model of the risk factors associated with pneumonia. Such a model can be based on machine learning techniques, which can analyze health data even when the assumptions of traditional statistical models are violated.
[ "11732939", "11996618", "20163295", "18266664", "26444916", "9040894", "14684266", "26613551", "20368647", "8792945", "11926934", "11376542", "14759964", "25160603", "843571", "27732102" ]
[ { "pmid": "11732939", "title": "Economic burden of pneumonia in an employed population.", "abstract": "OBJECTIVE\nTo estimate the overall economic burden of pneumonia from an employer perspective.\n\n\nMETHODS\nThe annual, per capita cost of pneumonia was determined for beneficiaries of a major employer by analyzing medical, pharmaceutical, and disability claims data. The incremental costs of 4036 patients with a diagnosis of pneumonia identified in a health claims database of a national Fortune 100 company were compared with a 10% random sample of beneficiaries in the employer overall population.\n\n\nRESULTS\nTotal annual, per capita, employer costs were approximately 5 times higher for patients with pneumonia ($11 544) than among typical beneficiaries in the employer overall population ($2368). The increases in costs were for all components (eg, medical care, prescription drug, disability, and particularly for inpatient services). A small proportion (10%) of pneumonia patients (almost all of whom were hospitalized) accounted for most (59%) of the costs.\n\n\nCONCLUSIONS\nPatients with pneumonia present an important financial burden to employers. These patients use more medical care services, particularly inpatient services, than the average beneficiary in the employer overall population. In addition to direct health care costs related to medical utilization and the use of prescription drugs, indirect costs due to disability and absenteeism also contribute to the high cost of pneumonia to an employer." }, { "pmid": "11996618", "title": "Causes of death for patients with community-acquired pneumonia: results from the Pneumonia Patient Outcomes Research Team cohort study.", "abstract": "BACKGROUND\nTo our knowledge, no previous study has systematically examined pneumonia-related and pneumonia-unrelated mortality. This study was performed to identify the cause(s) of death and to compare the timing and risk factors associated with pneumonia-related and pneumonia-unrelated mortality.\n\n\nMETHODS\nFor all deaths within 90 days of presentation, a synopsis of all events preceding death was independently reviewed by 2 members of a 5-member review panel (C.M.C., D.E.S., T.J.M., W.N.K., and M.J.F.). The underlying and immediate causes of death and whether pneumonia had a major, a minor, or no apparent role in the death were determined using consensus. Death was defined as pneumonia related if pneumonia was the underlying or immediate cause of death or played a major role in the cause of death. Competing-risk Cox proportional hazards regression models were used to identify baseline characteristics associated with mortality.\n\n\nRESULTS\nPatients (944 outpatients and 1343 inpatients) with clinical and radiographic evidence of pneumonia were enrolled, and 208 (9%) died by 90 days. The most frequent immediate causes of death were respiratory failure (38%), cardiac conditions (13%), and infectious conditions (11%); the most frequent underlying causes of death were neurological conditions (29%), malignancies (24%), and cardiac conditions (14%). Mortality was pneumonia related in 110 (53%) of the 208 deaths. Pneumonia-related deaths were 7.7 times more likely to occur within 30 days of presentation compared with pneumonia-unrelated deaths. Factors independently associated with pneumonia-related mortality were hypothermia, altered mental status, elevated serum urea nitrogen level, chronic liver disease, leukopenia, and hypoxemia. 
Factors independently associated with pneumonia-unrelated mortality were dementia, immunosuppression, active cancer, systolic hypotension, male sex, and multilobar pulmonary infiltrates. Increasing age and evidence of aspiration were independent predictors of both types of mortality.\n\n\nCONCLUSIONS\nFor patients with community-acquired pneumonia, only half of all deaths are attributable to their acute illness. Differences in the timing of death and risk factors for mortality suggest that future studies of community-acquired pneumonia should differentiate all-cause and pneumonia-related mortality." }, { "pmid": "20163295", "title": "Burden of schizophrenia in recently diagnosed patients: healthcare utilisation and cost perspective.", "abstract": "BACKGROUND\nInpatient care to manage relapse of patients with schizophrenia contributes greatly to the overall financial burden of treatment. The present study explores to what extent this is influenced by duration of illness.\n\n\nMETHODS\nMedical and pharmaceutical claims data for patients diagnosed with schizophrenia (ICD-9 295.xx) were obtained from the PharMetrics Integrated Database, a large, regionally representative US insurance claims database, for the period 1998-2007. Recently diagnosed (n = 970) and chronic patients (n = 2996) were distinguished based on ICD-9 295.xx classification, age and claims history relative to the first year (recently diagnosed) and the third year onwards (chronic) after the first index schizophrenia event.\n\n\nRESULTS\nThe medical resource use and costs during the year following the index schizophrenia event differed significantly between cohorts. A higher proportion of recently diagnosed patients were hospitalised compared with chronic patients (22.3% vs 12.4%; p < 0.0001), spending a greater mean number of days in hospital (5.1 days vs 3.0 days; p = 0.0065) as well as making more frequent use of emergency room (ER) resources during this time. The mean annual healthcare costs of recently diagnosed patients were also greater ($20,654 vs $15,489; p < 0.0001) with inpatient costs making up a higher proportion of total costs (62.9%) compared with chronic patients (38.5%).\n\n\nCONCLUSIONS\nThere is a considerably higher overall economic burden in the year following their first schizophrenia event in the treatment of recently diagnosed schizophrenia patients compared with chronic patients. Since hospitalisations and ER visits are the most significant components contributing to this finding, efforts that focus on measures to reduce the risk of relapse, particularly amongst recently diagnosed patients, such as improved adherence programs, may lead to better clinical and economic outcomes in the management of schizophrenia.\n\n\nLIMITATIONS\nOnly commercially insured patients and direct medical costs were included, therefore, results may underestimate the economic burden of schizophrenia." }, { "pmid": "18266664", "title": "Antipsychotic drug use and risk of pneumonia in elderly people.", "abstract": "OBJECTIVES\nTo investigate the association between antipsychotic drug use and risk of pneumonia in elderly people.\n\n\nDESIGN\nA nested case-control analysis.\n\n\nSETTING\nData were used from the PHARMO database, which collates information from community pharmacies and hospital discharge records.\n\n\nPARTICIPANTS\nA cohort of 22,944 elderly people with at least one antipsychotic prescription; 543 cases of hospital admission for pneumonia were identified. 
Cases were compared with four randomly selected controls matched on index date.\n\n\nMEASUREMENTS\nAntipsychotic drug use in the year before the index date was classified as current, recent, or past use. No prescription for an antipsychotic in the year before the index date was classified as no use. The strength of the association between use of antipsychotics and the development of pneumonia was estimated using multivariate logistic regression analysis and expressed as odds ratios (ORs) with 95% confidence intervals (CIs).\n\n\nRESULTS\nCurrent use of antipsychotics was associated with an almost 60% increase in the risk of pneumonia (adjusted OR=1.6, 95% CI=1.3-2.1). The risk was highest during the first week after initiation of an antipsychotic (adjusted OR=4.5, 95% CI=2.8-7.3). Similar associations were found after exclusion of elderly people with a diagnosis of delirium. Current users of atypical agents showed a higher risk of pneumonia (adjusted OR=3.1, 95% CI=1.9-5.1) than users of conventional agents (adjusted OR=1.5, 95% CI=1.2-1.9). There was no clear dose-response relationship.\n\n\nCONCLUSION\nUse of antipsychotics in elderly people is associated with greater risk of pneumonia. This risk is highest shortly after the initiation of treatment, with the greatest increase in risk found for atypical antipsychotics." }, { "pmid": "26444916", "title": "Risk Factors for Aspiration Pneumonia in Older Adults.", "abstract": "BACKGROUNDS\nAspiration pneumonia is a dominant form of community-acquired and healthcare-associated pneumonia, and a leading cause of death among ageing populations. However, the risk factors for developing aspiration pneumonia in older adults have not been fully evaluated. The purpose of the present study was to determine the risk factors for aspiration pneumonia among the elderly.\n\n\nMETHODOLOGY AND PRINCIPAL FINDINGS\nWe conducted an observational study using data from a nationwide survey of geriatric medical and nursing center in Japan. The study subjects included 9930 patients (median age: 86 years, women: 76%) who were divided into two groups: those who had experienced an episode of aspiration pneumonia in the previous 3 months and those who had not. Data on demographics, clinical status, activities of daily living (ADL), and major illnesses were compared between subjects with and without aspiration pneumonia. Two hundred and fifty-nine subjects (2.6% of the total sample) were in the aspiration pneumonia group. In the univariate analysis, older age was not found to be a risk factor for aspiration pneumonia, but the following were: sputum suctioning (odds ratio [OR] = 17.25, 95% confidence interval [CI]: 13.16-22.62, p < 0.001), daily oxygen therapy (OR = 8.29, 95% CI: 4.39-15.65), feeding support dependency (OR = 8.10, 95% CI: 6.27-10.48, p < 0.001), and urinary catheterization (OR = 4.08, 95% CI: 2.81-5.91, p < 0.001). In the multiple logistic regression analysis, the risk factors associated with aspiration pneumonia after propensity-adjustment (258 subjects each) were sputum suctioning (OR = 3.276, 95% CI: 1.910-5.619), deterioration of swallowing function in the past 3 months (OR = 3.584, 95% CI: 1.948-6.952), dehydration (OR = 8.019, 95% CI: 2.720-23.643), and dementia (OR = 1.618, 95% CI: 1.031-2.539).\n\n\nCONCLUSION\nThe risk factors for aspiration pneumonia were sputum suctioning, deterioration of swallowing function, dehydration, and dementia. These results could help improve clinical management for preventing repetitive aspiration pneumonia." 
}, { "pmid": "9040894", "title": "An evaluation of machine-learning methods for predicting pneumonia mortality.", "abstract": "This paper describes the application of eight statistical and machine-learning methods to derive computer models for predicting mortality of hospital patients with pneumonia from their findings at initial presentation. The eight models were each constructed based on 9847 patient cases and they were each evaluated on 4352 additional cases. The primary evaluation metric was the error in predicted survival as a function of the fraction of patients predicted to survive. This metric is useful in assessing a model's potential to assist a clinician in deciding whether to treat a given patient in the hospital or at home. We examined the error rates of the models when predicting that a given fraction of patients will survive. We examined survival fractions between 0.1 and 0.6. Over this range, each model's predictive error rate was within 1% of the error rate of every other model. When predicting that approximately 30% of the patients will survive, all the models have an error rate of less than 1.5%. The models are distinguished more by the number of variables and parameters that they contain than by their error rates; these differences suggest which models may be the most amenable to future implementation as paper-based guidelines." }, { "pmid": "14684266", "title": "Use of genetic algorithms for neural networks to predict community-acquired pneumonia.", "abstract": "BACKGROUND\nGenetic algorithms have been used to solve optimization problems for artificial neural networks (ANN) in several domains. We used genetic algorithms to search for optimal hidden-layer architectures, connectivity, and training parameters for ANN for predicting community-acquired pneumonia among patients with respiratory complaints.\n\n\nMETHODS\nFeed-forward back-propagation ANN were trained on sociodemographic, symptom, sign, comorbidity, and radiographic outcome data among 1044 patients from the University of Illinois (the training cohort), and were applied to 116 patients from the University of Nebraska (the testing cohort). Binary chromosomes with genes representing network attributes, including the number of nodes in the hidden layers, learning rate and momentum parameters, and the presence or absence of implicit within-layer connectivity using a competition algorithm, were operated on by various combinations of crossover, mutation, and probabilistic selection based on network mean-square error (MSE), and separately on average cross entropy (ENT). Predictive accuracy was measured as the area under a receiver-operating characteristic (ROC) curve.\n\n\nRESULTS\nOver 50 generations, the baseline genetic algorithm evolved an optimized ANN with nine nodes in the first hidden layer, zero nodes in the second hidden layer, learning rate and momentum parameters of 0.5, and no within-layer competition connectivity. This ANN had an ROC area in the training cohort of 0.872 and in the testing cohort of 0.934 (P-value for difference, 0.181). Algorithms based on cross-generational selection, Gray coding of genes prior to mutation, and crossover recombination at different genetic levels, evolved optimized ANN identical to the baseline genetic strategy. 
Algorithms based on other strategies, including elite selection within generations (training ROC area 0.819), and inversions of genetic material during recombination (training ROC area 0.812), evolved less accurate ANN.\n\n\nCONCLUSION\nANN optimized by genetic algorithms accurately discriminated pneumonia within a training cohort, and within a testing cohort consisting of cases on which the networks had not been trained. Genetic algorithms can be used to implement efficient search strategies for optimal ANN to predict pneumonia." }, { "pmid": "26613551", "title": "Antipsychotic reexposure and recurrent pneumonia in schizophrenia: a nested case-control study.", "abstract": "OBJECTIVE\nFew studies have used systematic datasets to assess the safety of antipsychotic rechallenge after an adverse event. This nested case-control study estimated the risk for recurrent pneumonia after reexposure to antipsychotic treatment.\n\n\nMETHOD\nIn a nationwide schizophrenia (ICD-9-CM code 295) cohort (derived from the National Health Insurance Research Database in Taiwan) who were hospitalized for pneumonia (ICD-9-CM codes 480-486, 507) between 2000 and 2008 (N = 2,201), we identified 494 subjects that developed recurrent pneumonia after a baseline pneumonia episode. Based on risk-set sampling in a 1:3 ratio, 1,438 matched controls were selected from the cohort. Exposures to antipsychotics were categorized by type, duration, and defined daily dose. Using propensity score-adjusted analysis, we assessed individual antipsychotics for the risk of recurrent pneumonia; we furthermore assessed the effect of reexposure to these antipsychotics on the risk of recurrent pneumonia.\n\n\nRESULTS\nOf the antipsychotics studied, current use of clozapine was the only one associated with a clear dose-dependent increase in the risk for recurrent pneumonia (adjusted risk ratio = 1.40, P = .024). Intriguingly, patients reexposed to clozapine had a higher risk for recurrent pneumonia (adjusted risk ratio = 1.99, P = .023) than those receiving clozapine only prior to the baseline pneumonia, and this risk was associated with gender. Women reexposed to clozapine were more susceptible to recurrent pneumonia (adjusted risk ratio = 4.93, P = .050).\n\n\nCONCLUSIONS\nIn patients experiencing pneumonia while undergoing clozapine treatment, physicians should carefully consider the increased risk of pneumonia recurrence when clozapine is reintroduced. Future studies should try to quantify the risk of other medical conditions associated with clozapine reexposure." }, { "pmid": "20368647", "title": "Association of community-acquired pneumonia with antipsychotic drug use in elderly patients: a nested case-control study.", "abstract": "BACKGROUND\nAccording to safety alerts from the U.S. Food and Drug Administration, pneumonia is one of the most frequently reported causes of death in elderly patients with dementia who are treated with antipsychotic drugs. However, epidemiologic evidence of the association between antipsychotic drug use and pneumonia is limited.\n\n\nOBJECTIVE\nTo evaluate whether typical or atypical antipsychotic use is associated with fatal or nonfatal pneumonia in elderly persons.\n\n\nDESIGN\nPopulation-based, nested case-control study.\n\n\nSETTING\nDutch Integrated Primary Care Information database.\n\n\nPATIENTS\nCohort of persons who used an antipsychotic drug, were 65 years or older, and were registered in the IPCI database from 1996 to 2006. Case patients were all persons with incident community-acquired pneumonia. 
Up to 20 control participants were matched to each case patient on the basis of age, sex, and date of onset.\n\n\nMEASUREMENTS\nRisk for fatal or nonfatal community-acquired pneumonia with atypical and typical antipsychotic use. Antipsychotic exposure was categorized by type, timing, and daily dose, and the association with pneumonia was assessed by using conditional logistic regression.\n\n\nRESULTS\n258 case patients with incident pneumonia were matched to 1686 control participants. Sixty-five (25%) of the case patients died in 30 days, and their disease was considered fatal. Current use of either atypical (odds ratio [OR], 2.61 [95% CI, 1.48 to 4.61]) or typical (OR, 1.76 [CI, 1.22 to 2.53]) antipsychotic drugs was associated with a dose-dependent increase in the risk for pneumonia compared with past use of antipsychotic drugs. Only atypical antipsychotic drugs were associated with an increase in the risk for fatal pneumonia (OR, 5.97 [CI, 1.49 to 23.98]).\n\n\nLIMITATIONS\nAntipsychotic exposure was based on prescription files. Residual confounding due to unmeasured covariates or severity of disease was possible.\n\n\nCONCLUSION\nThe use of either atypical or typical antipsychotic drugs in elderly patients is associated in a dose-dependent manner with risk for community-acquired pneumonia." }, { "pmid": "8792945", "title": "Cardiomyopathy associated with clozapine.", "abstract": "OBJECTIVE\nTo report a case of symptomatic cardiomyopathy induced during treatment with clozapine, an antipsychotic of the dibenzodiazepine class.\n\n\nCASE SUMMARY\nA patient with a long psychiatric history significant for schizophrenia and no prior cardiac history developed dyspnea, malaise, and edema with a low ventricular ejection fraction during clozapine therapy.\n\n\nDISCUSSION\nThe literature concerning cardiorespiratory complications with clozapine therapy is reviewed.\n\n\nCONCLUSIONS\nCardiorespiratory complications associated with clozapine use are rare. Caution may be warranted in patients treated with other medications, such as benzodiazepines, and in patients with underlying cardiac disease." }, { "pmid": "11926934", "title": "Abnormalities in glucose regulation during antipsychotic treatment of schizophrenia.", "abstract": "BACKGROUND\nHyperglycemia and type 2 diabetes mellitus are more common in schizophrenia than in the general population. Glucoregulatory abnormalities have also been associated with the use of antipsychotic medications themselves. While antipsychotics may increase adiposity, which can decrease insulin sensitivity, disease- and medication-related differences in glucose regulation might also occur independent of differences in adiposity.\n\n\nMETHODS\nModified oral glucose tolerance tests were performed in schizophrenic patients (n = 48) receiving clozapine, olanzapine, risperidone, or typical antipsychotics, and untreated healthy control subjects (n = 31), excluding subjects with diabetes and matching groups for adiposity and age. Plasma was sampled at 0 (fasting), 15, 45, and 75 minutes after glucose load.\n\n\nRESULTS\nSignificant time x treatment group interactions were detected for plasma glucose (F(12,222) = 4.89, P<.001) and insulin (F(12,171) = 2.10, P =.02) levels, with significant effects of treatment group on plasma glucose level at all time points. Olanzapine-treated patients had significant (1.0-1.5 SDs) glucose elevations at all time points, in comparison with patients receiving typical antipsychotics as well as untreated healthy control subjects. 
Clozapine-treated patients had significant (1.0-1.5 SDs) glucose elevations at fasting and 75 minutes after load, again in comparison with patients receiving typical antipsychotics and untreated control subjects. Risperidone-treated patients had elevations in fasting and postload glucose levels, but only in comparison with untreated healthy control subjects. No differences in mean plasma glucose level were detected when comparing risperidone-treated vs typical antipsychotic-treated patients and when comparing typical antipsychotic-treated patients vs untreated control subjects.\n\n\nCONCLUSION\nAntipsychotic treatment of nondiabetic patients with schizophrenia can be associated with adverse effects on glucose regulation, which can vary in severity independent of adiposity and potentially increase long-term cardiovascular risk." }, { "pmid": "11376542", "title": "A comparison of classification algorithms to automatically identify chest X-ray reports that support pneumonia.", "abstract": "We compared the performance of expert-crafted rules, a Bayesian network, and a decision tree at automatically identifying chest X-ray reports that support acute bacterial pneumonia. We randomly selected 292 chest X-ray reports, 75 (25%) of which were from patients with a hospital discharge diagnosis of bacterial pneumonia. The reports were encoded by our natural language processor and then manually corrected for mistakes. The encoded observations were analyzed by three expert systems to determine whether the reports supported pneumonia. The reference standard for radiologic support of pneumonia was the majority vote of three physicians. We compared (a) the performance of the expert systems against each other and (b) the performance of the expert systems against that of four physicians who were not part of the gold standard. Output from the expert systems and the physicians was transformed so that comparisons could be made with both binary and probabilistic output. Metrics of comparison for binary output were sensitivity (sens), precision (prec), and specificity (spec). The metric of comparison for probabilistic output was the area under the receiver operator characteristic (ROC) curve. We used McNemar's test to determine statistical significance for binary output and univariate z-tests for probabilistic output. Measures of performance of the expert systems for binary (probabilistic) output were as follows: Rules--sens, 0.92; prec, 0.80; spec, 0.86 (Az, 0.960); Bayesian network--sens, 0.90; prec, 0.72; spec, 0.78 (Az, 0.945); decision tree--sens, 0.86; prec, 0.85; spec, 0.91 (Az, 0.940). Comparisons of the expert systems against each other using binary output showed a significant difference between the rules and the Bayesian network and between the decision tree and the Bayesian network. Comparisons of expert systems using probabilistic output showed no significant differences. Comparisons of binary output against physicians showed differences between the Bayesian network and two physicians. Comparisons of probabilistic output against physicians showed a difference between the decision tree and one physician. The expert systems performed similarly for the probabilistic output but differed in measures of sensitivity, precision, and specificity produced by the binary output. All three expert systems performed similarly to physicians." 
}, { "pmid": "14759964", "title": "Advanced statistics: understanding medical record review (MRR) studies.", "abstract": "Medical record review (MRR) studies have been reported to make up 25% of all scientific studies published in emergency medical (EM) journals. However, unlike other study designs, there are no standards for reporting MRRs and very little literature on the methodology for conducting them. The purpose of this article is to provide the reader with methodological guidance regarding the strengths and weaknesses of these types of studies." }, { "pmid": "25160603", "title": "Using decision trees to manage hospital readmission risk for acute myocardial infarction, heart failure, and pneumonia.", "abstract": "To improve healthcare quality and reduce costs, the Affordable Care Act places hospitals at financial risk for excessive readmissions associated with acute myocardial infarction (AMI), heart failure (HF), and pneumonia (PN). Although predictive analytics is increasingly looked to as a means for measuring, comparing, and managing this risk, many modeling tools require data inputs that are not readily available and/or additional resources to yield actionable information. This article demonstrates how hospitals and clinicians can use their own structured discharge data to create decision trees that produce highly transparent, clinically relevant decision rules for better managing readmission risk associated with AMI, HF, and PN. For illustrative purposes, basic decision trees are trained and tested using publically available data from the California State Inpatient Databases and an open-source statistical package. As expected, these simple models perform less well than other more sophisticated tools, with areas under the receiver operating characteristic (ROC) curve (or AUC) of 0.612, 0.583, and 0.650, respectively, but achieve a lift of at least 1.5 or greater for higher-risk patients with any of the three conditions. More importantly, they are shown to offer substantial advantages in terms of transparency and interpretability, comprehensiveness, and adaptability. By enabling hospitals and clinicians to identify important factors associated with readmissions, target subgroups of patients at both high and low risk, and design and implement interventions that are appropriate to the risk levels observed, decision trees serve as an ideal application for addressing the challenge of reducing hospital readmissions." }, { "pmid": "843571", "title": "The measurement of observer agreement for categorical data.", "abstract": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature." 
}, { "pmid": "27732102", "title": "Guide to the Management of Clozapine-Related Tolerability and Safety Concerns.", "abstract": "Clozapine is a highly effective antipsychotic medication, which provides a range of significant benefits for patients with schizophrenia, and is the standard of care for treatment-resistant schizophrenia as well as for reducing the risk of suicidal behaviors in schizophrenia and schizoaffective disorder. However, clozapine is widely underutilized, largely because prescribing clinicians lack experience in prescribing it and managing its adverse events (AEs). Clozapine is associated with three uncommon but immediately dangerous AEs-agranulocytosis, myocarditis/cardiomyopathy, and seizures-as well as AEs that may become dangerous if neglected, including weight gain, metabolic syndrome and constipation, and others that are annoying or distressing such as sedation, nighttime enuresis and hypersalivation. Because of the risk of agranulocytosis, clozapine formulations are available only through restricted distribution via a patient registry, with mandatory, systematized monitoring for absolute neutrophil count using a specific algorithm. We identified articles on managing clozapine-associated AEs by searching PubMed using appropriate key words and search techniques for each topic. A review of the prevalence and clinical characteristics of clozapine-associated AEs shows that these risks can be managed efficiently and effectively. The absolute risks for both agranulocytosis and myocarditis/cardiomyopathy are low, diminish after the first six months, and are further reduced with appropriate monitoring. Weight gain/metabolic disorders and constipation, which develop more gradually, can be mitigated with regular monitoring and timely interventions. Sedation, hypersalivation, and enuresis are common but manageable with ameliorative measures and/or medications." } ]
International Journal of Multimedia Information Retrieval
30956928
PMC6417456
10.1007/s13735-017-0145-8
A faceted approach to reachability analysis of graph modelled collections
Nowadays, there is a proliferation of available information sources from different modalities: text, images, audio, video and more. Information objects are no longer isolated; they are frequently connected via metadata, semantic links, etc. This leads to various challenges in graph-based information retrieval. This paper is concerned with the reachability analysis of multimodal graph-modelled collections. We use our framework to leverage the combination of features of different modalities through our formulation of faceted search. This study highlights the effect of different facets and link types in improving the reachability of relevant information objects. The experiments are performed on the ImageCLEF 2011 Wikipedia collection with about 400,000 documents and images. The results demonstrate that the combination of different facets is conducive to higher reachability. We obtain a 373% recall gain for very hard topics by using our graph model of the collection. Further, by adding semantic links to the collection, we gain a 10% increase in overall recall.
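Reachability in the sense used above can be made concrete as a graph-traversal measurement: starting from the objects returned by an initial ranked retrieval, how many of the relevant objects can be reached by following links? The short plain-Python sketch below is only an illustration of this idea; the adjacency structure, seed list, relevance set, and hop limit are invented for the example and do not reflect the paper's actual framework or collection.

```python
from collections import deque

def reachable(graph, seeds, max_hops):
    """Breadth-first traversal: objects reachable from the seed results
    within max_hops link-following steps."""
    visited = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for neighbour in graph.get(node, ()):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return visited

# Toy collection: documents/images linked via metadata or similarity (assumed).
graph = {"d1": ["i1", "d2"], "i1": ["d3"], "d2": ["i2"], "i2": [], "d3": []}
seeds = ["d1"]                      # top-ranked results of an initial query
relevant = {"d1", "d3", "i2"}       # ground-truth relevant objects (assumed)

found = reachable(graph, seeds, max_hops=2)
recall = len(found & relevant) / len(relevant)
print(f"reachability-based recall: {recall:.2f}")
```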
Related work
We divide the related work on MMIR into the two categories of non-structured and structured IR. Traditional IR is based on retrieval from independent documents. We refer to this type of IR as non-structured IR, in which explicit relations between information objects are not considered. We provide a brief overview of this part of the related work in Sect. 2.1. Leveraging a graph of relations between information objects leads to structured MMIR; we review the related work in this area in Sect. 2.2.
Non-structured MMIR
Most popular search engines like Google, Yahoo and Bing build upon text search techniques, using e.g. user-provided tags or related text for images or videos. These approaches have limited access to the data, as they completely ignore the visual content and their indexes do not contain multimodal information [7, 12, 17, 18, 24]. Another reason is that the surrounding text is usually noisy, which decreases the performance of text-based multimodal search engines. Content-based image retrieval (CBIR) is one of the earliest methods to consider image content in the retrieval process. Many systems considered similarity measures based only on the content of the images; these are called pure CBIR systems [6, 26, 33, 42]. One of the earliest approaches in this category utilizes low-level feature representations of image content and calculates the distance to query examples to capture similarity; the top documents in the result list are the ones that are visually similar to the query example. On the other hand, systems that combine text and image content and offer a flexible query interface are considered composite CBIR systems [16, 21], which comply with the definition of MMIR. This category is suitable for web image retrieval, as most images are surrounded by tags, hyperlinks and other relevant metadata. In a search engine, for example, the text results can be reranked according to their similarity to a given image example in the query. Lazaridis et al. [23] leverage different modalities, such as audio and image, to perform multimodal search. Their I-Search project is a multimodal search engine in which multimodality relations are defined between the different modalities of an information object, for example, between an image of a dog, its barking sound and its 3D representation. They define a neighbourhood relation between two multimodal objects which are similar in at least one of their modalities. In I-Search, however, neither semantic relations between information objects (e.g. a dog and a cat object) nor the importance of these relations in answering a user's query are considered.
In addition to using features from different modalities, the interaction process with users is considered in multimodal retrieval as well. Cheng et al. [8] suggest two interactive retrieval procedures for image retrieval: the first incorporates a relevance feedback mechanism based on textual information, while the second combines textual and image information to help users find a target image. Hwang and Grauman [19] have also explored ranking object importance in static images, learning what people mention first from human-annotated tags.
Sometimes in the literature, MMIR is simply called reranking. For instance, the definition of reranking given by Mei et al. [29] (leveraging different modalities, like image or video, rather than only text to find a better ranking) is compatible with what we define as MMIR.
They thoroughly survey reranking methods in MMIR and categorize the related work into four groups: (1) self-reranking: mining knowledge from the initial ranked list; (2) example-based reranking: leveraging the few query examples that the user provides along with the textual query; (3) crowd-reranking: utilizing knowledge obtained from the crowd as user-labelled data to perform meta-search in a supervised manner; and (4) interactive reranking: reranking that involves user interactions. Based on this categorization, graph-based methods belong to the self-reranking category. Most methods in this category are inspired by PageRank techniques and create a graph from the top-ranked results. Structured IR is similar to this graph-based approach, with the difference that the nodes in the graph are not necessarily taken from the top-ranked list.
Structured MMIR
By structured IR, we denote those approaches that consider explicit relations between information objects. Usually in such models a graph is created, which may be based on similarity or semantic relations between information objects; nodes and edges may hold different definitions in each model. For example, in the graph-based model of Liu et al. [25], the video search reranking problem is formulated in a PageRank fashion using random walks: the video sequences are the nodes, and the multimodal (textual and visual) similarities are the hyperlinks. The relevance score is propagated per topic through these hyperlinks.
Jing and Baluja [20] cast the image-ranking problem as the task of identifying "authority" nodes on a visual similarity graph and propose VisualRank to analyse the visual link structure among images. Schinas et al. [40] present a framework to provide summaries of social posts related to an event; they leverage different modalities (such as posts and pictures of the event) to maximize the relevancy of posts to a topic. Clements et al. [9] propose the use of a personalized random walk on a tripartite graph, which connects users to tags and tags to items; the stationary distribution gives the probability of relevance of the items.
Targeting RDF data, Elbassuoni and Blanco [14] select subgraphs that match the query and rank them with statistical language models. As a desktop search engine, Beagle++ utilizes a combination of indexed and structured search [30]. Wang et al. [48] propose a graph-based learning approach with multiple features for web image search: they create one graph per feature, with links based on the similarity of the images with respect to that feature, and the weights of the different features and the relevance scores are learned jointly through this model.
Related research on the ImageCLEF 2011 Wikipedia collection is generally based on a combination of text and image retrieval [47]. To the best of our knowledge, no approach has modelled the collection as a graph structure, and therefore none has leveraged the explicit links between information objects and between information objects and their features.
Astera belongs to the structured MMIR category, as it uses a graph of information objects. It differs, however, in the way it creates the graph. Firstly, we include all information objects in the collection, not only the top-ranked results; the top results are used as starting points for traversing the graph. Secondly, different modalities such as image, text, audio and video can be searched in the same graph, whereas related work mostly considers a graph of a single modality (e.g. video or image). Finally, we utilize different types of relations between the information objects of a collection: related work uses only similarity links or only semantic links, whereas we consider both link types in the same graph and further add "part-of" and facet links. This creates a much richer representation of the complex relationships between information objects.
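To make the notion of typed links concrete, the toy sketch below shows how a collection graph with similarity, semantic, part-of and facet edges could be represented, and how a traversal can be restricted to a chosen subset of link types. The node names, link types, and filtering policy are illustrative assumptions, not Astera's actual data model.

```python
from collections import deque

# Edges are (source, target, link_type); a real collection would hold millions.
edges = [
    ("img_dog", "txt_dog_article", "semantic"),
    ("img_dog", "img_puppy", "similarity"),
    ("txt_dog_article", "txt_dog_section", "part-of"),
    ("img_puppy", "txt_breeds", "facet"),
]

def build_adjacency(edges, allowed_types):
    """Adjacency restricted to the selected link types (a facet combination)."""
    adj = {}
    for u, v, t in edges:
        if t in allowed_types:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)   # links treated as undirected here
    return adj

def bfs(adj, seed):
    seen, queue = {seed}, deque([seed])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

only_similarity = bfs(build_adjacency(edges, {"similarity"}), "img_dog")
combined = bfs(build_adjacency(edges, {"similarity", "semantic",
                                        "part-of", "facet"}), "img_dog")
print(len(only_similarity), "objects reachable via similarity links alone")
print(len(combined), "objects reachable when all link types are combined")
```

In this toy example, combining all four link types reaches more objects than similarity links alone, which mirrors the paper's observation that combining facets and link types improves reachability.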
[ "21659029", "18787237", "22829401" ]
[ { "pmid": "21659029", "title": "Improving Web image search by bag-based reranking.", "abstract": "Given a textual query in traditional text-based image retrieval (TBIR), relevant images are to be reranked using visual features after the initial text-based search. In this paper, we propose a new bag-based reranking framework for large-scale TBIR. Specifically, we first cluster relevant images using both textual and visual features. By treating each cluster as a \"bag\" and the images in the bag as \"instances,\" we formulate this problem as a multi-instance (MI) learning problem. MI learning methods such as mi-SVM can be readily incorporated into our bag-based reranking framework. Observing that at least a certain portion of a positive bag is of positive instances while a negative bag might also contain positive instances, we further use a more suitable generalized MI (GMI) setting for this application. To address the ambiguities on the instance labels in the positive and negative bags under this GMI setting, we develop a new method referred to as GMI-SVM to enhance retrieval performance by propagating the labels from the bag level to the instance level. To acquire bag annotations for (G)MI learning, we propose a bag ranking method to rank all the bags according to the defined bag ranking score. The top ranked bags are used as pseudopositive training bags, while pseudonegative training bags can be obtained by randomly sampling a few irrelevant images that are not associated with the textual query. Comprehensive experiments on the challenging real-world data set NUS-WIDE demonstrate our framework with automatic bag annotation can achieve the best performances compared with existing image reranking methods. Our experiments also demonstrate that GMI-SVM can achieve better performances when using the manually labeled training bags obtained from relevance feedback." }, { "pmid": "18787237", "title": "VisualRank: applying PageRank to large-scale image search.", "abstract": "Because of the relative ease in understanding and processing text, commercial image-search systems often rely on techniques that are largely indistinguishable from text-search. Recently, academic studies have demonstrated the effectiveness of employing image-based features to provide alternative or additional signals. However, it remains uncertain whether such techniques will generalize to a large number of popular web queries, and whether the potential improvement to search quality warrants the additional computational cost. In this work, we cast the image-ranking problem into the task of identifying \"authority\" nodes on an inferred visual similarity graph and propose VisualRank to analyze the visual link structures among images. The images found to be \"authorities\" are chosen as those that answer the image-queries well. To understand the performance of such an approach in a real system, we conducted a series of large-scale experiments based on the task of retrieving images for 2000 of the most popular products queries. Our experimental results show significant improvement, in terms of user satisfaction and relevancy, in comparison to the most recent Google Image Search results. Maintaining modest computational cost is vital to ensuring that this procedure can be used in practice; we describe the techniques required to make this system practical for large scale deployment in commercial search engines." 
}, { "pmid": "22829401", "title": "Multimodal graph-based reranking for web image search.", "abstract": "This paper introduces a web image search reranking approach that explores multiple modalities in a graph-based learning scheme. Different from the conventional methods that usually adopt a single modality or integrate multiple modalities into a long feature vector, our approach can effectively integrate the learning of relevance scores, weights of modalities, and the distance metric and its scaling for each modality into a unified scheme. In this way, the effects of different modalities can be adaptively modulated and better reranking performance can be achieved. We conduct experiments on a large dataset that contains more than 1000 queries and 1 million images to evaluate our approach. Experimental results demonstrate that the proposed reranking approach is more robust than using each individual modality, and it also performs better than many existing methods." } ]
Journal of Cheminformatics
30874969
PMC6419809
10.1186/s13321-019-0342-y
KMR: knowledge-oriented medicine representation learning for drug–drug interaction and similarity computation
Efficient representations of drugs provide important support for healthcare analytics, such as drug–drug interaction (DDI) prediction and drug–drug similarity (DDS) computation. However, incompletely annotated data and drug feature sparseness create substantial barriers to drug representation learning, making it difficult to accurately identify new drug properties prior to public release. To alleviate these deficiencies, we propose KMR, a knowledge-oriented, feature-driven method which can learn drug-related knowledge with an accurate representation. We conduct a series of experiments on real-world medical datasets to demonstrate that KMR is capable of drug representation learning. KMR supports the discovery of meaningful DDIs with an accuracy of 92.19%, demonstrating that the techniques developed in KMR significantly improve prediction quality for new drugs not seen during training. Experimental results also indicate that KMR can identify DDS with an accuracy of 88.7% by leveraging drug knowledge, outperforming existing state-of-the-art drug similarity measures.
Related work

Technically, the work in this paper relates to the representation learning of words, knowledge bases, and textual information. Practically, our work is mainly related to the representation learning of drugs. Related works are reviewed as follows.

Representation learning of words. Learning pre-trained word embeddings is a fundamental step in various NLP tasks. A word embedding is a distributed word representation which is typically induced using neural language models [12]. Several methods, e.g., continuous bag-of-words (CBOW) and Skip-Gram [13], have been proposed for word embedding training and have shown their power in NLP tasks. There are many methods for learning word representations based on term-document, word-context, and pair-pattern matrices. For example, Turney et al. [14] presented a frequency-based method that follows the distributional hypothesis to conduct word representation via context learning. Mikolov et al. [15] learned high-quality distributed vector representations by predicting word occurrences in a given context.

Representation learning of knowledge bases. Representation learning of knowledge bases aims to embed both entities and relations into a low-dimensional space. Translation-based methods, including TransE [16], TransH [17], and TransR [18], have achieved state-of-the-art performance by converting entities and relations into vectors and regarding each relation as a translation from the head entity to the tail entity. On the other hand, many studies have tried to use network embedding methods, e.g., the Path Ranking Algorithm (PRA) [19], DeepWalk [20], and node2vec [21], to reason over entities and relationships in knowledge bases. Network embedding methods achieve state-of-the-art representation learning performance for knowledge bases, especially for large-scale and sparse ones.

Representation learning of textual information. Many studies have tried to automatically learn information from text using neural network models. For example, Socher et al. [22] introduced a recursive neural network (RNN) model to learn compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Wang et al. [23] combined convolutional neural networks (CNN) with unsupervised feature learning to train highly accurate text detector and character recognizer modules. Attention mechanisms are particularly powerful here; many researchers have applied them to areas such as machine translation [24], memory addressing [25], and image captioning [26].

Instead of learning the representations of different information sources separately, we develop a knowledge-oriented interactive learning architecture, which exploits the interactive information from input texts and knowledge bases to supervise the representation learning of words and of structural and textual knowledge.

Representation learning of drugs. Recently, some notable efforts have been made to design databases for drug representation learning and discovery. One well-known example is DrugBank [27], a comprehensive resource that combines detailed drug (i.e., chemical) data with comprehensive drug target (i.e., protein) information.
In addition to DrugBank, a number of databases have also been released, including the Therapeutic Target Database (TTD), the Pharmacogenomics Knowledgebase (PharmGKB), the Kyoto Encyclopedia of Genes and Genomes (KEGG), Chemical Entities of Biological Interest (ChEBI), and PubChem. On-line pharmaceutical encyclopedias such as RxList tend to offer detailed clinical information about many drugs, but they were not designed to contain structural, chemical, or physico-chemical information.

Many studies have demonstrated that it is possible to learn efficient representations of medical concepts that improve the performance of medical predictive or classification models [28]. For example, Minarro et al. [29] learned representations of medical terms by applying the word2vec deep learning toolkit to medical corpora to test its potential for improving the accessibility of medical knowledge. De Vine et al. [30] explored a variation of neural language models that can learn from concepts taken from structured ontologies and extracted from free text, for the task of measuring semantic similarity between medical concepts. Despite this progress, learning efficient representations of drug concepts remains relatively new territory and is under-explored.
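The translation-based knowledge-base embedding idea discussed above (TransE and its variants) can be illustrated with a minimal sketch. The code below is not from the KMR paper: the embeddings are randomly initialized rather than trained, and the drug and relation names are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50

# Illustrative vocabulary; the entity and relation names are hypothetical.
entities = {name: i for i, name in enumerate(["aspirin", "warfarin", "bleeding"])}
relations = {name: i for i, name in enumerate(["interacts_with", "causes"])}

# Randomly initialized embeddings (TransE would normally train these with SGD).
E = rng.normal(scale=0.1, size=(len(entities), DIM))
R = rng.normal(scale=0.1, size=(len(relations), DIM))

def score(h, r, t):
    """TransE score: distance between (head + relation) and tail; lower is better."""
    return np.linalg.norm(E[entities[h]] + R[relations[r]] - E[entities[t]])

def margin_loss(pos, neg, margin=1.0):
    """Margin-based ranking loss over one positive and one corrupted triple."""
    return max(0.0, margin + score(*pos) - score(*neg))

# Usage: compare a plausible triple against a corrupted one.
print(score("warfarin", "causes", "bleeding"))
print(margin_loss(("warfarin", "causes", "bleeding"),
                  ("aspirin", "causes", "aspirin")))
```

Training would repeatedly minimize this margin loss over observed triples and randomly corrupted ones; the sketch only shows the scoring and loss computation that the translation assumption implies.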
[ "25224081", "16381955", "25160253", "26481350", "27867422", "23638278", "26196247", "16875881" ]
[ { "pmid": "25224081", "title": "Drug/nondrug classification using Support Vector Machines with various feature selection strategies.", "abstract": "In conjunction with the advance in computer technology, virtual screening of small molecules has been started to use in drug discovery. Since there are thousands of compounds in early-phase of drug discovery, a fast classification method, which can distinguish between active and inactive molecules, can be used for screening large compound collections. In this study, we used Support Vector Machines (SVM) for this type of classification task. SVM is a powerful classification tool that is becoming increasingly popular in various machine-learning applications. The data sets consist of 631 compounds for training set and 216 compounds for a separate test set. In data pre-processing step, the Pearson's correlation coefficient used as a filter to eliminate redundant features. After application of the correlation filter, a single SVM has been applied to this reduced data set. Moreover, we have investigated the performance of SVM with different feature selection strategies, including SVM-Recursive Feature Elimination, Wrapper Method and Subset Selection. All feature selection methods generally represent better performance than a single SVM while Subset Selection outperforms other feature selection methods. We have tested SVM as a classification tool in a real-life drug discovery problem and our results revealed that it could be a useful method for classification task in early-phase of drug discovery." }, { "pmid": "16381955", "title": "DrugBank: a comprehensive resource for in silico drug discovery and exploration.", "abstract": "DrugBank is a unique bioinformatics/cheminformatics resource that combines detailed drug (i.e. chemical) data with comprehensive drug target (i.e. protein) information. The database contains >4100 drug entries including >800 FDA approved small molecule and biotech drugs as well as >3200 experimental drugs. Additionally, >14,000 protein or drug target sequences are linked to these drug entries. Each DrugCard entry contains >80 data fields with half of the information being devoted to drug/chemical data and the other half devoted to drug target or protein data. Many data fields are hyperlinked to other databases (KEGG, PubChem, ChEBI, PDB, Swiss-Prot and GenBank) and a variety of structure viewing applets. The database is fully searchable supporting extensive text, sequence, chemical structure and relational query searches. Potential applications of DrugBank include in silico drug target discovery, drug design, drug docking or screening, drug metabolism prediction, drug interaction prediction and general pharmaceutical education. DrugBank is available at http://redpoll.pharmacy.ualberta.ca/drugbank/." }, { "pmid": "25160253", "title": "Exploring the application of deep learning techniques on medical text corpora.", "abstract": "With the rapidly growing amount of biomedical literature it becomes increasingly difficult to find relevant information quickly and reliably. In this study we applied the word2vec deep learning toolkit to medical corpora to test its potential for improving the accessibility of medical knowledge. We evaluated the efficiency of word2vec in identifying properties of pharmaceuticals based on mid-sized, unstructured medical text corpora without any additional background knowledge. Properties included relationships to diseases ('may treat') or physiological processes ('has physiological effect'). 
We evaluated the relationships identified by word2vec through comparison with the National Drug File - Reference Terminology (NDF-RT) ontology. The results of our first evaluation were mixed, but helped us identify further avenues for employing deep learning technologies in medical information retrieval, as well as using them to complement curated knowledge captured in ontologies and taxonomies." }, { "pmid": "26481350", "title": "The SIDER database of drugs and side effects.", "abstract": "Unwanted side effects of drugs are a burden on patients and a severe impediment in the development of new drugs. At the same time, adverse drug reactions (ADRs) recorded during clinical trials are an important source of human phenotypic data. It is therefore essential to combine data on drugs, targets and side effects into a more complete picture of the therapeutic mechanism of actions of drugs and the ways in which they cause adverse reactions. To this end, we have created the SIDER ('Side Effect Resource', http://sideeffects.embl.de) database of drugs and ADRs. The current release, SIDER 4, contains data on 1430 drugs, 5880 ADRs and 140 064 drug-ADR pairs, which is an increase of 40% compared to the previous version. For more fine-grained analyses, we extracted the frequency with which side effects occur from the package inserts. This information is available for 39% of drug-ADR pairs, 19% of which can be compared to the frequency under placebo treatment. SIDER furthermore contains a data set of drug indications, extracted from the package inserts using Natural Language Processing. These drug indications are used to reduce the rate of false positives by identifying medical terms that do not correspond to ADRs." }, { "pmid": "27867422", "title": "ClassyFire: automated chemical classification with a comprehensive, computable taxonomy.", "abstract": "BACKGROUND\nScientists have long been driven by the desire to describe, organize, classify, and compare objects using taxonomies and/or ontologies. In contrast to biology, geology, and many other scientific disciplines, the world of chemistry still lacks a standardized chemical ontology or taxonomy. Several attempts at chemical classification have been made; but they have mostly been limited to either manual, or semi-automated proof-of-principle applications. This is regrettable as comprehensive chemical classification and description tools could not only improve our understanding of chemistry but also improve the linkage between chemistry and many other fields. For instance, the chemical classification of a compound could help predict its metabolic fate in humans, its druggability or potential hazards associated with it, among others. However, the sheer number (tens of millions of compounds) and complexity of chemical structures is such that any manual classification effort would prove to be near impossible.\n\n\nRESULTS\nWe have developed a comprehensive, flexible, and computable, purely structure-based chemical taxonomy (ChemOnt), along with a computer program (ClassyFire) that uses only chemical structures and structural features to automatically assign all known chemical compounds to a taxonomy consisting of >4800 different categories. This new chemical taxonomy consists of up to 11 different levels (Kingdom, SuperClass, Class, SubClass, etc.) with each of the categories defined by unambiguous, computable structural rules. 
Furthermore each category is named using a consensus-based nomenclature and described (in English) based on the characteristic common structural properties of the compounds it contains. The ClassyFire webserver is freely accessible at http://classyfire.wishartlab.com/. Moreover, a Ruby API version is available at https://bitbucket.org/wishartlab/classyfire_api, which provides programmatic access to the ClassyFire server and database. ClassyFire has been used to annotate over 77 million compounds and has already been integrated into other software packages to automatically generate textual descriptions for, and/or infer biological properties of over 100,000 compounds. Additional examples and applications are provided in this paper.\n\n\nCONCLUSION\nClassyFire, in combination with ChemOnt (ClassyFire's comprehensive chemical taxonomy), now allows chemists and cheminformaticians to perform large-scale, rapid and automated chemical classification. Moreover, a freely accessible API allows easy access to more than 77 million \"ClassyFire\" classified compounds. The results can be used to help annotate well studied, as well as lesser-known compounds. In addition, these chemical classifications can be used as input for data integration, and many other cheminformatics-related tasks." }, { "pmid": "23638278", "title": "Statistics corner: A guide to appropriate use of correlation coefficient in medical research.", "abstract": "Correlation is a statistical method used to assess a possible linear association between two continuous variables. It is simple both to calculate and to interpret. However, misuse of correlation is so common among researchers that some statisticians have wished that the method had never been devised at all. The aim of this article is to provide a guide to appropriate use of correlation in medical research and to highlight some misuse. Examples of the applications of the correlation coefficient have been provided using data from statistical simulations as well as real data. Rule of thumb for interpreting size of a correlation coefficient has been provided." }, { "pmid": "26196247", "title": "Label Propagation Prediction of Drug-Drug Interactions Based on Clinical Side Effects.", "abstract": "Drug-drug interaction (DDI) is an important topic for public health, and thus attracts attention from both academia and industry. Here we hypothesize that clinical side effects (SEs) provide a human phenotypic profile and can be translated into the development of computational models for predicting adverse DDIs. We propose an integrative label propagation framework to predict DDIs by integrating SEs extracted from package inserts of prescription drugs, SEs extracted from FDA Adverse Event Reporting System, and chemical structures from PubChem. Experimental results based on hold-out validation demonstrated the effectiveness of the proposed algorithm. In addition, the new algorithm also ranked drug information sources based on their contributions to the prediction, thus not only confirming that SEs are important features for DDI prediction but also paving the way for building more reliable DDI prediction models by prioritizing multiple data sources. By applying the proposed algorithm to 1,626 small-molecule drugs which have one or more SE profiles, we obtained 145,068 predicted DDIs. 
The predicted DDIs will help clinicians to avoid hazardous drug interactions in their prescriptions and will aid pharmaceutical companies to design large-scale clinical trial by assessing potentially hazardous drug combinations. All data sets and predicted DDIs are available at http://astro.temple.edu/~tua87106/ddi.html." }, { "pmid": "16875881", "title": "Measures of semantic similarity and relatedness in the biomedical domain.", "abstract": "Measures of semantic similarity between concepts are widely used in Natural Language Processing. In this article, we show how six existing domain-independent measures can be adapted to the biomedical domain. These measures were originally based on WordNet, an English lexical database of concepts and relations. In this research, we adapt these measures to the SNOMED-CT ontology of medical concepts. The measures include two path-based measures, and three measures that augment path-based measures with information content statistics from corpora. We also derive a context vector measure based on medical corpora that can be used as a measure of semantic relatedness. These six measures are evaluated against a newly created test bed of 30 medical concept pairs scored by three physicians and nine medical coders. We find that the medical coders and physicians differ in their ratings, and that the context vector measure correlates most closely with the physicians, while the path-based measures and one of the information content measures correlates most closely with the medical coders. We conclude that there is a role both for more flexible measures of relatedness based on information derived from corpora, as well as for measures that rely on existing ontological structures." } ]
Frontiers in Neuroscience
30930729
PMC6427904
10.3389/fnins.2019.00144
Supervised Brain Tumor Segmentation Based on Gradient and Context-Sensitive Features
Gliomas have the highest mortality rate and prevalence among primary brain tumors. In this study, we propose a supervised brain tumor segmentation method which detects diverse tumoral structures of both high grade and low grade gliomas in magnetic resonance imaging (MRI) images based on two types of features: gradient features and context-sensitive features. Two-dimensional and three-dimensional gradient information is fully utilized to capture gradient changes. Furthermore, we propose a circular context-sensitive feature which captures context information effectively. These 62 features are compressed and optimized using an mRMR algorithm, and a random forest is used to classify voxels based on the compact feature set. To overcome the class-imbalance problem of MRI data, our model is trained on a class-balanced region-of-interest dataset. We evaluated the proposed method on the 2015 Brain Tumor Segmentation Challenge database, and the experimental results show competitive performance.
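As a rough illustration of the pipeline this abstract describes (mRMR-style feature selection followed by a random forest voxel classifier), the sketch below uses synthetic data and a simple greedy mutual-information/correlation approximation of mRMR with scikit-learn. It is not the authors' implementation; the feature matrix is only a stand-in for the 62 MRI features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split

# Synthetic, class-balanced stand-in for the 62 voxel features; not real MRI data.
X, y = make_classification(n_samples=2000, n_features=62, n_informative=10,
                           weights=[0.5, 0.5], random_state=0)

def mrmr(X, y, k=10):
    """Greedy mRMR approximation: maximize relevance (MI with the label)
    minus redundancy (mean absolute correlation with already selected features)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

features = mrmr(X, y, k=10)
X_tr, X_te, y_tr, y_te = train_test_split(X[:, features], y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("selected features:", features)
print("held-out accuracy:", clf.score(X_te, y_te))
```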
2. Related Work

Numerous methods of brain tumor detection and segmentation, including semi-automatic and fully automatic techniques, have been proposed (Tang et al., 2017). These segmentation techniques can be roughly divided into four categories: threshold-based techniques, region-based techniques, model-based techniques, and pixel/voxel classification techniques. The threshold-based, region-based, and pixel classification techniques are commonly used for two-dimensional image segmentation (Vijayakumar and Gharpure, 2011). Model-based techniques and voxel classification methods are usually used for three-dimensional image segmentation. We review the four types of methods in the following subsections.

2.1. Threshold-Based Techniques

Threshold-based methods are a simple and computationally efficient approach to segmenting brain tumors because only intensity values need to be considered. The objects in the image are classified by comparing their intensities with one or more intensity threshold values (Gordillo et al., 2013). The Otsu algorithm (Otsu, 1979), Bernsen algorithm (Bernsen, 1986), and Niblack algorithm (Niblack, 1986) are simple and commonly used algorithms. Gibbs et al. proposed an unsupervised approach using a global threshold to segment the ROI for the tumor extraction task from MRI images (Gibbs et al., 1996). Stadlbauer et al. used the Gaussian distribution of intensity values as the threshold to segment tumors in brain T2-weighted MRI (Stadlbauer et al., 2004). However, if the information in the image is too complex, threshold-based algorithms are not suitable, and they are limited to extracting enhanced tumor areas.

2.2. Region-Based Techniques

Region-based methods divide an image into several regions that have homogeneous properties according to a predefined criterion (Adams and Bischof, 1994). Region growing and watershed methods are the most commonly used region-based methods for brain tumor segmentation. Ho et al. proposed a region competition method which modulates the propagation term with a signed local statistical force to reach a stable state (Ho et al., 2002). Salman et al. compared seeded region growing and active contours against experts' manual segmentations (Salman et al., 2012). Sato et al. proposed a Sobel-gradient-magnitude-based region growing algorithm which solves the partial volume effect problem (Sato et al., 2000). Deng proposed a region growing method based on the gradients and variances along and inside the boundary curve (Deng et al., 2010). Letteboer et al. and Dam et al. described multi-scale watershed segmentation (Letteboer et al., 2001; Dam et al., 2004). Letteboer et al. proposed a semi-automatic multi-scale watershed algorithm for brain tumor segmentation in MR images (Letteboer et al., 2001). Region-based techniques are commonly used in brain tumor segmentation. However, region-based segmentation suffers from over-segmentation, and marker extraction is considerably difficult when using marker-based watershed segmentation. Li and Wan addressed these problems by proposing an improved watershed segmentation method with an optimal scale based on ordered dither halftone and mutual information (Li and Wan, 2010).

2.3. Model-Based Techniques

Model-based segmentation techniques can be divided into parametric deformable and geometric deformable approaches.
There are a number of studies on image segmentation based on active contours, a popular parametric deformable method (Boscolo et al., 2002; Amini et al., 2004). The snake is one of the most commonly used geometric deformable algorithms for brain tumor segmentation. Luo et al. proposed a deformable model to segment brain tumors (Luo et al., 2003); this method combined an adaptive balloon force and the gradient vector flow (GVF) force to increase the GVF snake's capture range and convergence speed. Ho et al. proposed a new region competition method for automatic 3D brain tumor segmentation based on level-set snakes, which overcomes the difficulty in initialization and the missing boundary problem by modulating the propagation term with a signed local statistical force (Ho et al., 2002).

2.4. Pixel/Voxel Classification Techniques

Voxel-based classification usually uses attributes of each voxel in the image, such as gray level and color information. In brain tumor segmentation, voxel-based techniques use either unsupervised or supervised classifiers to cluster each voxel in the feature space (Gordillo et al., 2013). Juang and Wu proposed a color-converted segmentation approach for MRI with the K-means clustering technique, which converts the input gray-level MRI image into a color space image and labels the image by cluster indices (Juang and Wu, 2010). Selvakumar et al. implemented a voxel classification method which combined K-means clustering and fuzzy C-means (FCM) segmentation (Selvakumar et al., 2012). Vasuda and Satheesh improved the conventional FCM by implementing data compression, including quantization and aggregation, to significantly reduce the dimensionality of the input (Vasuda and Satheesh, 2010); compared to the conventional FCM, the modified FCM has a higher convergence rate. Ji et al. proposed a modified possibilistic FCM clustering of MRI utilizing local contextual information to impose local spatial continuity, thereby reducing noise and resolving classification ambiguity (Ji et al., 2011). Autoencoders were used in the work of Vaidhya et al. and Zeng et al. for brain tumor segmentation and other imaging tasks (Vaidhya et al., 2015; Zeng et al., 2018b). Zhang et al. proposed a hidden Markov random field model and the expectation-maximization algorithm for brain segmentation on MRI (Zhang et al., 2001).

For voxel-classification MRI processing techniques, a proper representation of voxels is required as a criterion to accurately classify each voxel. In previous studies, Zulpe et al. used gray-level co-occurrence matrix (GLCM) textural features to detect brain tumors (Zulpe and Pawar, 2012), and context-sensitive features were used in Meier et al.'s study to classify tumor and non-tumor voxels (Meier et al., 2014). Meanwhile, the feature selection algorithm must also be well designed to select a compact set of features and reduce the computational cost (Zou et al., 2016a,b; Su et al., 2018), considering the huge size of MRI data. In our study, a set of informative features and an efficient feature selection algorithm are proposed. The experimental results demonstrate that promising brain tumor segmentation performance can be achieved using the proposed method.
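A minimal sketch of the threshold-based segmentation idea reviewed in Section 2.1 (Otsu's method: choose the intensity cut that maximizes the between-class variance of the resulting two groups), implemented directly in NumPy on a synthetic slice. It is illustrative only and not taken from any of the cited papers.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Global Otsu threshold: pick the intensity cut that maximizes
    the between-class variance of the two resulting classes."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                  # class-0 probability up to each cut
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)        # cumulative (class-0 mean * w0)
    mu_t = mu[-1]                      # global mean
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

# Usage on a synthetic bimodal "slice": a bright blob on a darker background.
rng = np.random.default_rng(0)
slice_ = rng.normal(60, 10, size=(128, 128))
slice_[40:80, 40:80] += 100            # hypothetical hyperintense region
t = otsu_threshold(slice_)
mask = slice_ > t
print("threshold:", round(float(t), 1), "foreground voxels:", int(mask.sum()))
```

Real threshold-based pipelines would, of course, operate on skull-stripped, bias-corrected MRI volumes rather than random noise, but the variance-maximizing criterion is the same.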
[ "15132506", "23743802", "21869435", "11896232", "11150363", "8938037", "26259241", "23790354", "21256710", "15093927", "25494501", "16119262", "9617910", "15488395", "29028927", "21897560", "11293691", "29040911", "28155714" ]
[ { "pmid": "15132506", "title": "Automatic segmentation of thalamus from brain MRI integrating fuzzy clustering and dynamic contours.", "abstract": "Thalamus is an important neuro-anatomic structure in the brain. In this paper, an automated method is presented to segment thalamus from magnetic resonance images (MRI). The method is based on a discrete dynamic contour model that consists of vertices and edges connecting adjacent vertices. The model starts from an initial contour and deforms by external and internal forces. Internal forces are calculated from local geometry of the model and external forces are estimated from desired image features such as edges. However, thalamus has low contrast and discontinues edges on MRI, making external force estimation a challenge. The problem is solved using a new algorithm based on fuzzy C-means (FCM) unsupervised clustering, Prewitt edge-finding filter, and morphological operators. In addition, manual definition of the initial contour for the model makes the final segmentation operator-dependent. To eliminate this dependency, new methods are developed for generating the initial contour automatically. The proposed approaches are evaluated and validated by comparing automatic and radiologist's segmentation results and illustrating their agreement." }, { "pmid": "23743802", "title": "A survey of MRI-based medical image analysis for brain tumor studies.", "abstract": "MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines." }, { "pmid": "21869435", "title": "Edge focusing.", "abstract": "Edge detection in a gray-scale image at a fine resolution typically yields noise and unnecessary detail, whereas edge detection at a coarse resolution distorts edge contours. We show that ``edge focusing'', i.e., a coarse-to-fine tracking in a continuous manner, combines high positional accuracy with good noise-reduction. This is of vital interest in several applications. Junctions of different kinds are in this way restored with high precision, which is a basic requirement when performing (projective) geometric analysis of an image for the purpose of restoring the three-dimensional scene. Segmentation of a scene using geometric clues like parallelism, etc., is also facilitated by the algorithm, since unnecessary detail has been filtered away. 
There are indications that an extension of the focusing algorithm can classify edges, to some extent, into the categories diffuse and nondiffuse (for example diffuse illumination edges). The edge focusing algorithm contains two parameters, namely the coarseness of the resolution in the blurred image from where we start the focusing procedure, and a threshold on the gradient magnitude at this coarse level. The latter parameter seems less critical for the behavior of the algorithm and is not present in the focusing part, i.e., at finer resolutions. The step length of the scale parameter in the focusing scheme has been chosen so that edge elements do not move more than one pixel per focusing step." }, { "pmid": "11896232", "title": "Medical image segmentation with knowledge-guided robust active contours.", "abstract": "Medical image segmentation techniques typically require some form of expert human supervision to provide accurate and consistent identification of anatomic structures of interest. A novel segmentation technique was developed that combines a knowledge-based segmentation system with a sophisticated active contour model. This approach exploits the guidance of a higher-level process to robustly perform the segmentation of various anatomic structures. The user need not provide initial contour placement, and the high-level process carries out the required parameter optimization automatically. Knowledge about the anatomic structures to be segmented is defined statistically in terms of probability density functions of parameters such as location, size, and image intensity (eg, computed tomographic [CT] attenuation value). Preliminary results suggest that the performance of the algorithm at chest and abdominal CT is comparable to that of more traditional segmentation techniques like region growing and morphologic operators. In some cases, the active contour-based technique may outperform standard segmentation methods due to its capacity to fully enforce the available a priori knowledge concerning the anatomic structure of interest. The active contour algorithm is particularly suitable for integration with high-level image understanding frameworks, providing a robust and easily controlled low-level segmentation tool. Further study is required to determine whether the proposed algorithm is indeed capable of providing consistently superior segmentation." }, { "pmid": "8938037", "title": "Tumour volume determination from MR images by morphological segmentation.", "abstract": "Accurate tumour volume measurement from MR images requires some form of objective image segmentation, and therefore a certain degree of automation. Manual methods of separating data according to the various tissue types which they are thought to represent are inherently prone to operator subjectivity and can be very time consuming. A segmentation procedure based on morphological edge detection and region growing has been implemented and tested on a phantom of known adjustable volume. Comparisons have been made with a traditional data thresholding procedure for the determination of tumour volumes on a set of patients with intracerebral glioma. The two methods are shown to give similar results, with the morphological segmentation procedure having the advantages of being automated and faster." 
}, { "pmid": "26259241", "title": "DALSA: Domain Adaptation for Supervised Learning From Sparsely Annotated MR Images.", "abstract": "We propose a new method that employs transfer learning techniques to effectively correct sampling selection errors introduced by sparse annotations during supervised learning for automated tumor segmentation. The practicality of current learning-based automated tissue classification approaches is severely impeded by their dependency on manually segmented training databases that need to be recreated for each scenario of application, site, or acquisition setup. The comprehensive annotation of reference datasets can be highly labor-intensive, complex, and error-prone. The proposed method derives high-quality classifiers for the different tissue classes from sparse and unambiguous annotations and employs domain adaptation techniques for effectively correcting sampling selection errors introduced by the sparse sampling. The new approach is validated on labeled, multi-modal MR images of 19 patients with malignant gliomas and by comparative analysis on the BraTS 2013 challenge data sets. Compared to training on fully labeled data, we reduced the time for labeling and training by a factor greater than 70 and 180 respectively without sacrificing accuracy. This dramatically eases the establishment and constant extension of large annotated databases in various scenarios and imaging setups and thus represents an important step towards practical applicability of learning-based approaches in tissue classification." }, { "pmid": "23790354", "title": "State of the art survey on MRI brain tumor segmentation.", "abstract": "Brain tumor segmentation consists of separating the different tumor tissues (solid or active tumor, edema, and necrosis) from normal brain tissues: gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). In brain tumor studies, the existence of abnormal tissues may be easily detectable most of the time. However, accurate and reproducible segmentation and characterization of abnormalities are not straightforward. In the past, many researchers in the field of medical imaging and soft computing have made significant survey in the field of brain tumor segmentation. Both semiautomatic and fully automatic methods have been proposed. Clinical acceptance of segmentation techniques has depended on the simplicity of the segmentation, and the degree of user supervision. Interactive or semiautomatic methods are likely to remain dominant in practice for some time, especially in these applications where erroneous interpretations are unacceptable. This article presents an overview of the most relevant brain tumor segmentation methods, conducted after the acquisition of the image. Given the advantages of magnetic resonance imaging over other diagnostic imaging, this survey is focused on MRI brain tumor segmentation. Semiautomatic and fully automatic techniques are emphasized." }, { "pmid": "21256710", "title": "A modified possibilistic fuzzy c-means clustering algorithm for bias field estimation and segmentation of brain MR image.", "abstract": "A modified possibilistic fuzzy c-means clustering algorithm is presented for fuzzy segmentation of magnetic resonance (MR) images that have been corrupted by intensity inhomogeneities and noise. 
By introducing a novel adaptive method to compute the weights of local spatial in the objective function, the new adaptive fuzzy clustering algorithm is capable of utilizing local contextual information to impose local spatial continuity, thus allowing the suppression of noise and helping to resolve classification ambiguity. To estimate the intensity inhomogeneity, the global intensity is introduced into the coherent local intensity clustering algorithm and takes the local and global intensity information into account. The segmentation target therefore is driven by two forces to smooth the derived optimal bias field and improve the accuracy of the segmentation task. The proposed method has been successfully applied to 3 T, 7 T, synthetic and real MR images with desirable results. Comparisons with other approaches demonstrate the superior performance of the proposed algorithm. Moreover, the proposed algorithm is robust to initialization, thereby allowing fully automatic applications." }, { "pmid": "15093927", "title": "Brain tumor target volume determination for radiation treatment planning through automated MRI segmentation.", "abstract": "PURPOSE\nTo assess the effectiveness of two automated magnetic resonance imaging (MRI) segmentation methods in determining the gross tumor volume (GTV) of brain tumors for use in radiation therapy treatment planning.\n\n\nMETHODS AND MATERIALS\nTwo automated MRI tumor segmentation methods (supervised k-nearest neighbors [kNN] and automatic knowledge-guided [KG]) were evaluated for their potential as \"cyber colleagues.\" This required an initial determination of the accuracy and variability of radiation oncologists engaged in the manual definition of the GTV in MRI registered with computed tomography images for 11 glioma patients. Three sets of contours were defined for each of these patients by three radiation oncologists. These outlines were compared directly to establish inter- and intraoperator variability among the radiation oncologists. A novel, probabilistic measurement of accuracy was introduced to compare the level of agreement among the automated MRI segmentations. The accuracy was determined by comparing the volumes obtained by the automated segmentation methods with the weighted average volumes prepared by the radiation oncologists.\n\n\nRESULTS\nIntra- and inter-operator variability in outlining was found to be an average of 20% +/- 15% and 28% +/- 12%, respectively. Lowest intraoperator variability was found for the physician who spent the most time producing the contours. The average accuracy of the kNN segmentation method was 56% +/- 6% for all 11 cases, whereas that of the KG method was 52% +/- 7% for 7 of the 11 cases when compared with the physician contours. For the areas of the contours where the oncologists were in substantial agreement (i.e., the center of the tumor volume), the accuracy of kNN and KG was 75% and 72%, respectively. The automated segmentation methods were found to be least accurate in outlining at the edges of the tumor volume.\n\n\nCONCLUSIONS\nThe kNN method was able to segment all cases, whereas the KG method was limited to enhancing tumors and gliomas with clear enhancing edges and no cystic formation. Both methods undersegment the tumor volume when compared with the radiation oncologists and performed within the variability of the contouring performed by experienced radiation oncologists based on the same data." 
}, { "pmid": "25494501", "title": "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).", "abstract": "In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource." }, { "pmid": "16119262", "title": "Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy.", "abstract": "Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a compact set of superior features at very low cost. We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminate analysis) and four different data sets (handwritten digits, arrhythmia, NCI cancer cell lines, and lymphoma tissues). The results confirm that mRMR leads to promising improvement on feature selection and classification accuracy." }, { "pmid": "9617910", "title": "A nonparametric method for automatic correction of intensity nonuniformity in MRI data.", "abstract": "A novel approach to correcting for intensity nonuniformity in magnetic resonance (MR) data is described that achieves high performance without requiring a model of the tissue classes present. The method has the advantage that it can be applied at an early stage in an automated data analysis, before a tissue model is available. Described as nonparametric nonuniform intensity normalization (N3), the method is independent of pulse sequence and insensitive to pathological data that might otherwise violate model assumptions. To eliminate the dependence of the field estimate on anatomy, an iterative approach is employed to estimate both the multiplicative bias field and the distribution of the true tissue intensities. The performance of this method is evaluated using both real and simulated MR data." 
}, { "pmid": "15488395", "title": "Improved delineation of brain tumors: an automated method for segmentation based on pathologic changes of 1H-MRSI metabolites in gliomas.", "abstract": "In this study, we developed a method to improve the delineation of intrinsic brain tumors based on the changes in metabolism due to tumor infiltration. Proton magnetic resonance spectroscopic imaging ((1)H-MRSI) with a nominal voxel size of 0.45 cm(3) was used to investigate the spatial distribution of choline-containing compounds (Cho), creatine (Cr) and N-acetyl-aspartate (NAA) in brain tumors and normal brain. Ten patients with untreated gliomas were examined on a 1.5 T clinical scanner using a MRSI sequence with PRESS volume preselection. Metabolic maps of Cho, Cr, NAA and Cho/NAA ratios were calculated. Tumors were automatically segmented in the Cho/NAA images based on the assumption of Gaussian distribution of Cho/NAA values in normal brain using a limit for normal brain tissue of the mean + three times the standard deviation. Based on this threshold, an area was calculated which was delineated as pathologic tissue. This area was then compared to areas of hyperintense signal caused by the tumor in T2-weighted MRI, which were determined by a region growing algorithm in combination with visual inspection by two experienced clinicians. The area that was abnormal on (1)H-MRSI exceeded the area delineated via T2 signal changes in the tumor (mean difference 24%) in all cases. For verification of higher sensitivity of our spectroscopic imaging strategy we developed a method for coregistration of MRI and MRSI data sets. Integration of the biochemical information into a frameless stereotactic system allowed biopsy sampling from the brain areas that showed normal T2-weighted signal but abnormal (1)H-MRSI changes. The histological findings showed tumor infiltration ranging from about 4-17% in areas differentiated from normal tissue by (1)H-MRSI only. We conclude that high spatial resolution (1)H-MRSI (nominal voxel size = 0.45 cm(3)) in combination with our segmentation algorithm can improve delineation of tumor borders compared to routine MRI tumor diagnosis." }, { "pmid": "29028927", "title": "Tumor origin detection with tissue-specific miRNA and DNA methylation markers.", "abstract": "Motivation\nA clear identification of the primary site of tumor is of great importance to the next targeted site-specific treatments and could efficiently improve patient's overall survival. Even though many classifiers based on gene expression had been proposed to predict the tumor primary, only a few studies focus on using DNA methylation (DNAm) profiles to develop classifiers, and none of them compares the performance of classifiers based on different profiles.\n\n\nResults\nWe introduced novel selection strategies to identify highly tissue-specific CpG sites and then used the random forest approach to construct the classifiers to predict the origin of tumors. We also compared the prediction performance by applying similar strategy on miRNA expression profiles. Our analysis indicated that these classifiers had an accuracy of 96.05% (Maximum-Relevance-Maximum-Distance: 90.02-99.99%) or 95.31% (principal component analysis: 79.82-99.91%) on independent DNAm datasets, and an overall accuracy of 91.30% (range 79.33-98.74%) on independent miRNA test sets for predicting tumor origin. 
This suggests that our feature selection methods are very effective to identify tissue-specific biomarkers and the classifiers we developed can efficiently predict the origin of tumors. We also developed a user-friendly webserver that helps users to predict the tumor origin by uploading miRNA expression or DNAm profile of their interests.\n\n\nAvailability and implementation\nThe webserver, and relative data, code are accessible at http://server.malab.cn/MMCOP/.\n\n\nContact\[email protected] or [email protected].\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online." }, { "pmid": "21897560", "title": "Development of image-processing software for automatic segmentation of brain tumors in MR images.", "abstract": "Most of the commercially available software for brain tumor segmentation have limited functionality and frequently lack the careful validation that is required for clinical studies. We have developed an image-analysis software package called 'Prometheus,' which performs neural system-based segmentation operations on MR images using pre-trained information. The software also has the capability to improve its segmentation performance by using the training module of the neural system. The aim of this article is to present the design and modules of this software. The segmentation module of Prometheus can be used primarily for image analysis in MR images. Prometheus was validated against manual segmentation by a radiologist and its mean sensitivity and specificity was found to be 85.71±4.89% and 93.2±2.87%, respectively. Similarly, the mean segmentation accuracy and mean correspondence ratio was found to be 92.35±3.37% and 0.78±0.046, respectively." }, { "pmid": "11293691", "title": "Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm.", "abstract": "The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limitation--no spatial information is taken into account. This causes the FM model to work only on well-defined images with low levels of noise; unfortunately, this is often not the the case due to artifacts such as partial volume effect and bias field distortion. Under these conditions, FM model-based methods produce unreliable results. In this paper, we propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by a MRF whose state sequence cannot be observed directly but which can be indirectly estimated through observations. Mathematically, it can be shown that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. Although MRF modeling has been employed in MR image segmentation by other researchers, most reported methods are limited to using MRF as a general prior in an FM model-based approach. To fit the HMRF model, an EM algorithm is used. We show that by incorporating both the HMRF model and the EM algorithm into a HMRF-EM framework, an accurate and robust segmentation can be achieved. More importantly, the HMRF-EM framework can easily be combined with other techniques. 
As an example, we show how the bias field correction algorithm of Guillemaud and Brady (1997) can be incorporated into this framework to achieve a three-dimensional fully automated approach for brain MR image segmentation." }, { "pmid": "29040911", "title": "A deep learning model integrating FCNNs and CRFs for brain tumor segmentation.", "abstract": "Accurate and reliable brain tumor segmentation is a critical component in cancer diagnosis, treatment planning, and treatment outcome evaluation. Build upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in following steps: 1) training FCNNs using image patches; 2) training CRFs as Recurrent Neural Networks (CRF-RNN) using image slices with parameters of FCNNs fixed; and 3) fine-tuning the FCNNs and the CRF-RNN using image slices. Particularly, we train 3 segmentation models using 2D image patches and slices obtained in axial, coronal and sagittal views respectively, and combine them to segment brain tumors using a voting based fusion strategy. Our method could segment brain images slice-by-slice, much faster than those based on image patches. We have evaluated our method based on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge (BRATS) 2013, BRATS 2015 and BRATS 2016. The experimental results have demonstrated that our method could build a segmentation model with Flair, T1c, and T2 scans and achieve competitive performance as those built with Flair, T1, T1c, and T2 scans." }, { "pmid": "28155714", "title": "Pretata: predicting TATA binding proteins with novel features and dimensionality reduction strategy.", "abstract": "BACKGROUND\nIt is necessary and essential to discovery protein function from the novel primary sequences. Wet lab experimental procedures are not only time-consuming, but also costly, so predicting protein structure and function reliably based only on amino acid sequence has significant value. TATA-binding protein (TBP) is a kind of DNA binding protein, which plays a key role in the transcription regulation. Our study proposed an automatic approach for identifying TATA-binding proteins efficiently, accurately, and conveniently. This method would guide for the special protein identification with computational intelligence strategies.\n\n\nRESULTS\nFirstly, we proposed novel fingerprint features for TBP based on pseudo amino acid composition, physicochemical properties, and secondary structure. Secondly, hierarchical features dimensionality reduction strategies were employed to improve the performance furthermore. Currently, Pretata achieves 92.92% TATA-binding protein prediction accuracy, which is better than all other existing methods.\n\n\nCONCLUSIONS\nThe experiments demonstrate that our method could greatly improve the prediction accuracy and speed, thus allowing large-scale NGS data prediction to be practical. A web server is developed to facilitate the other researchers, which can be accessed at http://server.malab.cn/preTata/ ." } ]
Frontiers in Psychology
30936845
PMC6431661
10.3389/fpsyg.2019.00513
Text-Based Detection of the Risk of Depression
This study examines the relationship between language use and psychological characteristics of the communicator. The aim of the study was to find models predicting the depressivity of the writer based on the computational linguistic markers of his/her written text. Respondents’ linguistic fingerprints were traced in four texts of different genres. Depressivity was measured using the Depression, Anxiety and Stress Scale (DASS-21). The research sample (N = 172, 83 men, 89 women) was created by quota sampling an adult Czech population. Morphological variables of the texts showing differences (M-W test) between the non-depressive and depressive groups were incorporated into predictive models. Results: Across all participants, the data best fit predictive models of depressivity using morphological characteristics from the informal text “letter from holidays” (Nagelkerke r2 = 0.526 for men and 0.670 for women). For men, models for the formal texts “cover letter” and “complaint” showed moderate fit with the data (r2 = 0.479 and 0.435). The constructed models show weak to substantial recall (0.235 – 0.800) and moderate to substantial precision (0.571 – 0.889). Morphological variables appearing in the final models vary. There are no key morphological characteristics suitable for all models or for all genres. The resulting models’ properties demonstrate that they should be suitable for screening individuals at risk of depression and the most suitable genre is informal text (“letter from holidays”).
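A minimal sketch of the kind of analysis pipeline this abstract describes: screening morphological variables with a Mann-Whitney U test, fitting a logistic regression on the retained variables, and reporting Nagelkerke R2, precision, and recall. It uses synthetic data and standard SciPy/scikit-learn calls; it is not the authors' code, and the variables are hypothetical.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(1)

# Synthetic stand-in: 172 writers, 20 hypothetical morphological rates per text.
n, p = 172, 20
X = rng.normal(size=(n, p))
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(size=n) > 0).astype(int)  # "depressive" group

# Step 1: keep variables that differ between groups (Mann-Whitney U, p < 0.05).
keep = [j for j in range(p)
        if mannwhitneyu(X[y == 1, j], X[y == 0, j]).pvalue < 0.05]

# Step 2: logistic regression on the retained variables.
model = LogisticRegression().fit(X[:, keep], y)
pred = model.predict(X[:, keep])
prob = model.predict_proba(X[:, keep])[:, 1]

# Step 3: Nagelkerke pseudo-R2 from the fitted vs. intercept-only log-likelihoods.
ll_model = np.sum(y * np.log(prob) + (1 - y) * np.log(1 - prob))
base = y.mean()
ll_null = np.sum(y * np.log(base) + (1 - y) * np.log(1 - base))
cox_snell = 1 - np.exp(2 * (ll_null - ll_model) / n)
nagelkerke = cox_snell / (1 - np.exp(2 * ll_null / n))

print("retained variables:", keep)
print("Nagelkerke R2:", round(nagelkerke, 3),
      "precision:", round(precision_score(y, pred), 3),
      "recall:", round(recall_score(y, pred), 3))
```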
Related Works

A frequent type of study focusing on the relationship between text variables and mood disorders (e.g., depression) is the case study. Case studies analyze texts written spontaneously by authors (individuals) suffering from depression. An important example of this approach is Demjén's (2014) analysis of the diaries and works of the American writer Sylvia Plath, who suffered from lifelong depression and committed suicide in 1963 at the age of 30. In her study, Demjén focused on an analysis of metaphors used by people suffering from depression (e.g., metaphors of separation or loss of control). She found that Sylvia Plath used the second person singular when writing about experiences of conflict or separation. A quantitative analysis of whole texts (not only of metaphors) showed that writers suffering from depression tended to use negative words and expressions containing extreme quantifiers (e.g., "everything," "nothing," "always," "never"; Demjén, 2014). Similar results were found in an analysis of the texts of the traveler and surveyor Henry Hellyer, who committed suicide at the age of 42. This analysis showed that the pronoun used most frequently was the first person singular, while use of the first person plural was much scarcer. Hellyer, like Sylvia Plath, also tended to use negative words more often (Baddeley et al., 2011). Pennebaker and Chung (2007) analyzed spontaneous texts of two prominent representatives of Al-Qaeda (Zawahiri and Bin Laden), showing a surprising shift in the use of pronouns closely related to social status, individual and group identity, insecurity, and changes in depression.

The automation of linguistic data processing has recently enabled the use of extensive research strategies. For example, Rude et al. (2004) asked 124 female students attending psychology seminars to write an essay about their deepest thoughts and feelings about college. The students also completed the Beck Depression Inventory, according to which they were divided into groups of currently depressed, formerly depressed, and never-depressed people. The authors discovered a positive correlation between degree of depression and use of the word "I" (i.e., the first person singular pronoun) and a significantly scarcer use of second and third person pronouns. Interestingly, other first person singular forms ("me," "my," and "mine") do not show this correlation. A study based on LIWC analyses conducted by Lieberman and Goldstein (2006) found that women with breast cancer who used more anger words improved in their health and quality of life, whereas women who used more anxiety words experienced increased depression. Ramírez-Esparza et al. (2008) compared the linguistic markers used by people who write about their depression in internet depression forums with linguistic markers used by people with breast cancer on breast cancer forums, in English and in Spanish. It was found that online depressed writers used significantly more first person singular pronouns and fewer first person plural pronouns in both the English and Spanish forums. Women from depression forums used fewer positive emotion words and more negative emotion words than women from breast cancer forums in English and Spanish. Sonnenschein et al.
In their LIWC study, Sonnenschein et al. (2018) provide evidence that the texts of people with mood disorders, both depressed and anxious, contain more first-person singular pronouns, but that the groups differ in semantic terms (depressed patients used more words related to sadness). Van der Zanden et al. (2014) found that depression improvement during web-based psychological treatments based on textual communication was predicted by increasing use of ‘discrepancy words’ during treatment (e.g., would, should, which correspond to the conditional in the Czech language). Self-referencing verbal behavior appears to have specific interpersonal implications beyond general interpersonal distress and depressive symptoms (Zimmerman et al., 2013). A meta-analysis (k = 21, N = 3758) of correlations between first person singular pronoun use and individual differences in depression (a relationship examined in a number of studies on our topic) was conducted by Edwards and Holtzman (2017), who found that depression is linked to the use of first person singular pronouns (r = 0.13), that this effect is not moderated by demographic factors such as gender, and that there is little to no evidence of publication bias in this literature.
Several studies (e.g., Mairesse et al., 2007; Litvinova et al., 2016b) show that indexes combining several of the studied markers are also important. For example, a reliable predictor of self-destructive behavior (of which depression is one characteristic) is the pronominalisation index: the ratio of pronouns to nouns (Litvinova et al., 2016b).
Existing studies show not only relationships between the way a text is written and mood disorders (e.g., depression and associated symptoms), but also a reciprocal healing effect of writing certain types of texts. For example, Sayer et al. (2015) conducted an experiment to test the effects of different writing styles using a sample of 1,292 Afghanistan and Iraq war veterans with self-reported reintegration difficulty. In their experiment, veterans who were instructed to write expressively experienced greater reductions in physical complaints, anger, and distress than veterans who were instructed to write factually; moreover, both writing groups showed reductions in PTSD symptoms and reintegration difficulty compared to veterans who did not write at all. The correlation between word occurrence and successful intervention was also documented by Alvarez-Conrad et al. (2001).
Studies in clinical psychology clearly show that research on the relationship between the user of a language (e.g., speaker or writer) and their text is meaningful and has potential for the future. A worldwide and rapidly developing approach is the detection of authors’ personality from their texts, involving the design of predictive models based on correlations between quantifiable text parameters and individual psychological traits (Mairesse et al., 2007; Litvinova et al., 2016b). The present study was designed to add to this body of research.
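Two of the markers discussed above, first person singular pronoun rate and the pronominalisation index (the ratio of pronouns to nouns), are straightforward to compute once a text is part-of-speech tagged. The sketch below is a rough illustration with a hypothetical tag set and token list; the cited studies rely on dedicated tools such as LIWC rather than this simplified counting.

```python
# Rough illustration of two markers discussed above: first person singular pronoun
# rate and the pronominalisation index (pronouns / nouns). The tagged tokens and the
# tag labels are hypothetical; real studies use a proper POS tagger or LIWC categories.
from collections import Counter

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}   # English illustration only

def linguistic_markers(tagged_tokens):
    """tagged_tokens: list of (word, pos) pairs, with pos labels like 'PRON', 'NOUN'."""
    pos_counts = Counter(pos for _, pos in tagged_tokens)
    fps = sum(1 for word, pos in tagged_tokens
              if pos == "PRON" and word.lower() in FIRST_PERSON_SINGULAR)
    total = len(tagged_tokens) or 1
    nouns = pos_counts["NOUN"] or 1
    return {
        "fps_rate": fps / total,                          # first person singular pronouns per token
        "pronominalisation": pos_counts["PRON"] / nouns,  # ratio of pronouns to nouns
    }

# Hypothetical tagged sentence:
sample = [("I", "PRON"), ("feel", "VERB"), ("nothing", "PRON"),
          ("about", "ADP"), ("my", "PRON"), ("holiday", "NOUN")]
print(linguistic_markers(sample))
```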
[ "12451460", "21940249", "9925185", "26402963", "25070409", "28025257", "15880627", "7726811", "11927204", "17878497", "10883707", "11102321", "17009190", "23744982", "18000452", "26467326", "8564312", "17403826", "26813211", "2788995", "29345528", "12813119", "12534439", "24709016" ]
[ { "pmid": "12451460", "title": "Gender differences in depression. Epidemiological findings from the European DEPRES I and II studies.", "abstract": "BACKGROUND\nWhile there is ample evidence that the prevalence rates for major depressive disorder (MDD) in the general population are higher in women than in men, there is little data on gender differences as regard to symptoms, causal attribution, help-seeking, coping, or the consequences of depression.\n\n\nMETHOD\nThe large DEPRES Study dataset covering representative population samples of six European countries (wave I: 38,434 men and 40,024 women; wave II: 563 men and 1321 women treated for depression) was analyzed for gender differences.\n\n\nRESULTS\nIn wave I marked gender differences were found in the six-month prevalence rate for major depression but less so for minor depression; the gender differences for major depression persisted across all age groups. Even after stratification by clinically significant impairment and paid employment status, men reported fewer symptoms than women; as a consequence, men reached the diagnostic threshold less often. In wave II there were clear gender differences in causal attribution and in coping. Men coped by increasing their sports activity and consumption of alcohol and women through emotional release and religion. Women felt the effects of depression in their quality of sleep and general health, whereas men felt it more in their ability to work.\n\n\nLIMITATIONS\nThe second wave of the study comprises treated depressives only and may be less representative than the first wave." }, { "pmid": "21940249", "title": "How Henry Hellyer's use of language foretold his suicide.", "abstract": "UNLABELLED\nHenry Hellyer was an accomplished surveyor and explorer in Australia in the early 1800s whose apparent suicide at the age of 42 has puzzled historians for generations. He left behind several written works, including letters, journals, and reports.\n\n\nAIMS\nThe current study assessed changes in the ways Hellyer used words in his various written documents during the last 7 years of his life.\n\n\nMETHODS\nHellyer's writings were analyzed using the Linguistic Inquiry and Word Count program.\n\n\nRESULTS\nHellyer showed increases in first-person singular pronoun use, decreases in first-person plural pronoun use, and increases in negative emotion word use. As this is a single, uncontrolled case study, caution is recommended in generalizing from the current results.\n\n\nCONCLUSIONS\nResults suggest Hellyer's increasing self-focused attention, social isolation, and negative emotion. Findings are consistent with increasing depression and suicidal ideation. Implications for using computerized text analysis to decode people's psychological states from their written records are discussed." }, { "pmid": "9925185", "title": "Prevalence of symptoms of depression in a nationwide sample of Korean adults.", "abstract": "The prevalence and correlates of symptoms of depression in a nationwide sample of Korean adults, collected during the National Health and Health Behavior Examination Survey, were examined. A probability sample of 3,711 respondents (a response rate of 81.3%) completed the Center for Epidemiologic Studies Depression Scale (CES-D) and a variety of sociodemographic questions. 
In this sample 23.1% of males and 27.4% of females had scores above the cutoff point of 16 (probable depression) on the CES-D scale, and 6.8% of males and 10.4% of females were above the cutoff point of 25 (severe, definite depression). Apart from a few reports describing Afro-American and Puerto-Rican samples, these rates were somewhat higher than those found in the US and Western countries. In this report, female gender, fewer than 13 years of education, and disrupted marriage (widowed/divorced/separated) proved to be statistically significant predictors of severe, definite symptoms of depression." }, { "pmid": "26402963", "title": "Drowning in negativism, self-hate, doubt, madness: Linguistic insights into Sylvia Plath's experience of depression.", "abstract": "This paper demonstrates how a range of linguistic methods can be harnessed in pursuit of a deeper understanding of the 'lived experience' of psychological disorders. It argues that such methods should be applied more in medical contexts, especially in medical humanities. Key extracts from The Unabridged Journals of Sylvia Plath are examined, as a case study of the experience of depression. Combinations of qualitative and quantitative linguistic methods, and inter- and intra-textual comparisons are used to consider distinctive patterns in the use of metaphor, personal pronouns and (the semantics of) verbs, as well as other relevant aspects of language. Qualitative techniques provide in-depth insights, while quantitative corpus methods make the analyses more robust and ensure the breadth necessary to gain insights into the individual experience. Depression emerges as a highly complex and sometimes potentially contradictory experience for Plath, involving both a sense of apathy and inner turmoil. It involves a sense of a split self, trapped in a state that one cannot overcome, and intense self-focus, a turning in on oneself and a view of the world that is both more negative and more polarized than the norm. It is argued that a linguistic approach is useful beyond this specific case." }, { "pmid": "25070409", "title": "[Gender differences in depression].", "abstract": "Depression is one of the most prevalent and debilitating diseases. In recent years there has been increased awareness of sex- and gender-specific issues in depression. This narrative review presents and discusses differences in prevalence, symptom profile, age at onset and course, comorbidity, biological and psychosocial factors, the impact of sexual stereotyping, help-seeking, emotion regulation and doctor-patient communication. Typically, women are diagnosed with depression twice as often as men, and their disease follows a more chronic course. Comorbid anxiety is more prevalent in women, whereas comorbid alcohol abuse is a major concern in men. Sucide rates for men are between three and five times higher compared with women. Although there are different symptom profiles in men and women, it is difficult to define a gender-specific symptom profile. Socially mediated gender roles have a significant impact on psychosocial factors associated with risk, sickness behavior and coping strategies. In general, too little attention has been paid to the definition and handling of depression and the gender-related requirements it makes on the healthcare system." 
}, { "pmid": "28025257", "title": "An introduction to multiplicity issues in clinical trials: the what, why, when and how.", "abstract": "In clinical trials it is not uncommon to face a multiple testing problem which can have an impact on both type I and type II error rates, leading to inappropriate interpretation of trial results. Multiplicity issues may need to be considered at the design, analysis and interpretation stages of a trial. The proportion of trial reports not adequately correcting for multiple testing remains substantial. The purpose of this article is to provide an introduction to multiple testing issues in clinical trials, and to reduce confusion around the need for multiplicity adjustments. We use a tutorial, question-and-answer approach to address the key issues of why, when and how to consider multiplicity adjustments in trials. We summarize the relevant circumstances under which multiplicity adjustments ought to be considered, as well as options for carrying out multiplicity adjustments in terms of trial design factors including Population, Intervention/Comparison, Outcome, Time frame and Analysis (PICOTA). Results are presented in an easy-to-use table and flow diagrams. Confusion about multiplicity issues can be reduced or avoided by considering the potential impact of multiplicity on type I and II errors and, if necessary pre-specifying statistical approaches to either avoid or adjust for multiplicity in the trial protocol or analysis plan." }, { "pmid": "15880627", "title": "Not all negative emotions are equal: the role of emotional expression in online support groups for women with breast cancer.", "abstract": "The repression/suppression of negative emotions has long been considered detrimental for breast cancer (BC) patients, leading to poor coping, progression of symptoms, and general lower quality of life. Therapies have focused on encouraging the expression of negative emotions. While group therapies have proven to be successful for BC patients, no study has looked at the role of expressing negative emotions during the therapeutic interaction. We examined written expressed emotions by women participating in a common form of psychosocial support, Internet based bulletin boards (BBs). Fifty-two new members to BC BBs were studied. They completed measures of quality of life and depression. After 6 months the measures were again assessed and messages during that time were collected and analyzed for emotional content. For the 52 women, results showed that greater expression of anger was associated with higher quality of life and lower depression, while the expression of fear and anxiety was associated with lower quality of life and higher depression. The expression of sadness was unrelated to change scores. Our results serve to challenge the commonly held belief that the expression of all negative emotions are beneficial for BC patients. Instead, expressing specific negative emotions are beneficial, while others are not." }, { "pmid": "7726811", "title": "The structure of negative emotional states: comparison of the Depression Anxiety Stress Scales (DASS) with the Beck Depression and Anxiety Inventories.", "abstract": "The psychometric properties of the Depression Anxiety Stress Scales (DASS) were evaluated in a normal sample of N = 717 who were also administered the Beck Depression Inventory (BDI) and the Beck Anxiety Inventory (BAI). 
The DASS was shown to possess satisfactory psychometric properties, and the factor structure was substantiated both by exploratory and confirmatory factor analysis. In comparison to the BDI and BAI, the DASS scales showed greater separation in factor loadings. The DASS Anxiety scale correlated 0.81 with the BAI, and the DASS Depression scale correlated 0.74 with the BDI. Factor analyses suggested that the BDI differs from the DASS Depression scale primarily in that the BDI includes items such as weight loss, insomnia, somatic preoccupation and irritability, which fail to discriminate between depression and other affective states. The factor structure of the combined BDI and BAI items was virtually identical to that reported by Beck for a sample of diagnosed depressed and anxious patients, supporting the view that these clinical states are more severe expressions of the same states that may be discerned in normals. Implications of the results for the conceptualisation of depression, anxiety and tension/stress are considered, and the utility of the DASS scales in discriminating between these constructs is discussed." }, { "pmid": "11927204", "title": "Self-rated health, chronic diseases, and symptoms among middle-aged and elderly men and women.", "abstract": "The objective was to study the association between chronic diseases, symptoms, and poor self-rated health among men and women and in different age groups, and to assess the contribution of chronic diseases and symptoms to the burden of poor self-rated health in the general population. Self-rated health and self-reported diseases and symptoms were investigated in a population sample of 6,061 men and women aged 35-79 years in Värmland County in Sweden. Odds ratios (OR) and population attributable risks (PAR) were calculated to quantify the contribution of chronic diseases and symptoms to poor self-rated health. Depression, neurological disease, rheumatoid arthritis, and tiredness/weakness had the largest contributions to poor self-rated health in individuals. Among the elderly (65-79 years), neurological disease and cancer had the largest contribution to self-rated health in men, and renal disease, rheumatoid arthritis, and cancer in women. Among the middle-aged (35-64 years), depression and tiredness/weakness were also important, especially in women. From a population perspective, tiredness/weakness explained the largest part of poor self-rated health due to its high prevalence in the population. Depression and musculoskeletal pains were also more important than other chronic diseases and symptoms at the population level. Even though many chronic diseases (such as neurological disease, rheumatoid arthritis, and cancer) are strongly associated with poor self-rated health in the individual, common symptoms (such as tiredness/weakness and musculoskeletal pains) as well as depression contribute more to the total burden of poor self-rated health in the population. More preventive measures should therefore be directed against these conditions, especially when they are not consequences of other diseases." }, { "pmid": "17878497", "title": "Gender differences in depression and chronic pain conditions in a national epidemiologic survey.", "abstract": "The authors explored gender differences in the prevalence of depression in four chronic pain conditions and pain severity indices in a national database. In 131,535 adults, the prevalence of depression in women (9.1%) was almost twice that of men (5%). 
One-third (32.8%) had a chronic pain condition (fibromyalgia, arthritis/rheumatism, back problems, and migraine headaches). The prevalence of depression in individuals with chronic pain conditions was 11.3%, versus 5.3% in those without. Women reported higher rates of chronic pain conditions and depression and higher pain severity than men. Depression and chronic pain conditions represent significant sources of disability, especially for women." }, { "pmid": "10883707", "title": "Incidence of depression in the Stirling County Study: historical and comparative perspectives.", "abstract": "BACKGROUND\nThe Stirling County Study provides a 40-year perspective on the epidemiology of psychiatric disorders in an adult population in Atlantic Canada. Across samples selected in 1952, 1970 and 1992 current prevalence of depression was stable. This paper concerns time trends in annual incidence as assessed through cohorts selected from the first two samples.\n\n\nMETHODS\nConsistent interview data were analysed by a computerized diagnostic algorithm. The cohorts consisted of subjects at risk for a first depression: Cohort-1 (N = 575) was followed 1952-1970; Cohort-2 (N = 639) was followed 1970-1992. Life-table methods were used to calculate incidence rates and proportional hazards procedures were used for statistical assessment.\n\n\nRESULTS\nAverage annual incidence of depression was 4.5 per 1000 for Cohort-1 and 3.7 for Cohort-2. Differences by gender, age and time were not statistically significant. The stability of incidence and the similarity of distribution by gender and age in these two cohorts corresponds to findings about the two early samples. In contrast, current prevalence in the recent sample was distributed differently and showed an increase among women under 45 years.\n\n\nCONCLUSIONS\nThe stability of the incidence of depression emphasizes the distinctive characteristics of current prevalence in the recent sample and suggests that the dominance of women in rates of depression may have occurred among those born after the Second World War. The results offer partial support for the interpretation of an increase in depression based on retrospective data in other recent studies but they indicate that the increase is specific to women." }, { "pmid": "11102321", "title": "Gender differences in depression. Critical review.", "abstract": "BACKGROUND\nWith few exceptions, the prevalence, incidence and morbidity risk of depressive disorders are higher in females than in males, beginning at mid-puberty and persisting through adult life.\n\n\nAIMS\nTo review putative risk factors leading to gender differences in depressive disorders.\n\n\nMETHOD\nA critical review of the literature, dealing separately with artefactual and genuine determinants of gender differences in depressive disorders.\n\n\nRESULTS\nAlthough artefactual determinants may enhance a female preponderance to some extent, gender differences in depressive disorders are genuine. At present, adverse experiences in childhood, depression and anxiety disorders in childhood and adolescence, sociocultural roles with related adverse experiences, and psychological attributes related to vulnerability to life events and coping skills are likely to be involved. 
Genetic and biological factors and poor social support, however, have few or no effects in the emergence of gender differences.\n\n\nCONCLUSIONS\nDeterminants of gender differences in depressive disorders are far from being established and their combination into integrated aetiological models continues to be lacking." }, { "pmid": "17009190", "title": "Rural-urban differences in depression prevalence: implications for family medicine.", "abstract": "BACKGROUND AND OBJECTIVES\nRural populations experience more adverse living circumstances than urban populations, but the evidence regarding the prevalence of mental health disorders in rural areas is contradictory. We examined the prevalence of depression in rural versus urban areas.\n\n\nMETHODS\nWe performed a cross-sectional study using the 1999 National Health Interview Survey (NHIS). In face-to-face interviews, the NHIS administered the Composite International Diagnostic Interview Short Form (CIDI-SF) depression scale to a nationally representative sample of 30,801 adults, ages 18 and over.\n\n\nRESULTS\nAn estimated 2.6 million rural adults suffer from depression. The unadjusted prevalence of depression was significantly higher among rural than urban populations (6.1% versus 5.2% ). After adjusting for rural/urban population characteristics, however, the odds of depression did not differ by residence. Depression risk was higher among persons likely to be encountered in a primary care setting: those with fair or poor self-reported health, hypertension, with limitations in daily activities, or whose health status changed during the previous year.\n\n\nCONCLUSIONS\nThe prevalence of depression is slightly but significantly higher in residents of rural areas compared to urban areas, possibly due to differing population characteristics." }, { "pmid": "23744982", "title": "Development of scales to assess mental health literacy relating to recognition of and interventions for depression, anxiety disorders and schizophrenia/psychosis.", "abstract": "OBJECTIVE\nThe aim of this study was to develop scales to assess mental health literacy relating to affective disorders, anxiety disorders and schizophrenia/psychosis.\n\n\nMETHOD\nScales were created to assess mental health literacy in relation to depression, depression with suicidal thoughts, early schizophrenia, chronic schizophrenia, social phobia and post-traumatic stress disorder using data from a survey of 1536 health professionals (general practitioners, clinical psychologists and psychiatrists), assessing recognition of these disorders and beliefs about the helpfulness of interventions. This was done by using the consensus of experts about the helpfulness and harmfulness of treatments for each disorder as a criterion. Data from a general population survey of 6019 Australians aged ≥ 15 was used to examine associations between scale scores, exposure to mental disorders and sociodemographic variables, to assess scale validity.\n\n\nRESULTS\nThose with a close friend or family member with a mental disorder had significantly higher mean scores on all mental health literacy scales, providing support for scale validity. Personal experience of the problem and working with people with a similar problem was linked to higher scores on some scales. Male sex, a lower level of education and age > 60 were linked to lower levels of mental health literacy. 
Higher scores were also linked to a greater belief that people with mental disorders are sick rather than weak.\n\n\nCONCLUSIONS\nThe scales developed in this study allow for the assessment of mental health literacy in relation to depression, depression with suicidal thoughts, early schizophrenia, chronic schizophrenia, social phobia and PTSD. Those with exposure to mental disorders had higher scores on the scales, and analyses of the links between scale scores and sociodemographic variables of age, gender and level of education were in line with those seen in other studies, providing support for scale validity." }, { "pmid": "18000452", "title": "Gender differences in the symptoms of major depressive disorder.", "abstract": "Data from the Canadian Community Health Survey 1.2 were used for a gender analysis of individual symptoms and overall rates of depression in the preceding 12 months. Major depressive disorder was assessed using the Composite International Diagnostic Interview in this national, cross-sectional survey. The female to male ratio of major depressive disorder prevalence was 1.64:1, with n = 1766 having experienced depression (men 668, women 1098). Women reported statistically more depressive symptoms than men (p < 0.001). Depressed women were more likely to report \"increased appetite\" (15.5% vs. 10.7%), being \"often in tears\" (82.6% vs. 44.0%), \"loss of interest\" (86.9% vs. 81.1%), and \"thoughts of death\" (70.3% vs. 63.4%). No significant gender differences were found for the remaining symptoms. The data are interpreted against women's greater tendency to cry and to restrict food intake when not depressed. The question is raised whether these items preferentially bias assessment of gender differences in depression, particularly in nonclinic samples." }, { "pmid": "26467326", "title": "Randomized Controlled Trial of Online Expressive Writing to Address Readjustment Difficulties Among U.S. Afghanistan and Iraq War Veterans.", "abstract": "We examined the efficacy of a brief, accessible, nonstigmatizing online intervention-writing expressively about transitioning to civilian life. U.S. Afghanistan and Iraq war veterans with self-reported reintegration difficulty (N = 1,292, 39.3% female, M = 36.87, SD = 9.78 years) were randomly assigned to expressive writing (n = 508), factual control writing (n = 507), or no writing (n = 277). Using intention to treat, generalized linear mixed models demonstrated that 6-months postintervention, veterans who wrote expressively experienced greater reductions in physical complaints, anger, and distress compared with veterans who wrote factually (ds = 0.13 to 0.20; ps < .05) and greater reductions in PTSD symptoms, distress, anger, physical complaints, and reintegration difficulty compared with veterans who did not write at all (ds = 0.22 to 0.35; ps ≤ .001). Veterans who wrote expressively also experienced greater improvement in social support compared to those who did not write (d = 0.17). Relative to both control conditions, expressive writing did not lead to improved life satisfaction. Secondary analyses also found beneficial effects of expressive writing on clinically significant distress, PTSD screening, and employment status. Online expressive writing holds promise for improving health and functioning among veterans experiencing reintegration difficulty, albeit with small effect sizes." 
}, { "pmid": "8564312", "title": "Psychotherapy for bipolar disorder.", "abstract": "BACKGROUND\nPsychosocial factors may contribute 25-30% to the outcome variance in bipolar disorders. Sufferers have identified benefits from psychotherapy, but biological models and treatments dominate the research agenda. The author reviews research on psychosocial issues and interventions in this disorder.\n\n\nMETHOD\nResearch on adjustment to the disorder, interpersonal stressors and obstacles to treatment compliance were located by computerised searches and the author's knowledge of the literature. All published outcome studies of psychosocial interventions in bipolar disorder are reviewed.\n\n\nRESULTS\nThere is an inadequate database on psychosocial factors associated with onset and maintenance of bipolar disorder. While the outcome studies available are methodologically inadequate, the accumulated evidence suggests that psychosocial interventions may have significant benefits for bipolar sufferers and their families.\n\n\nCONCLUSIONS\nGiven the significant associated morbidity and mortality, there is a clear need for more systematic clinical management that addresses psychosocial as well as biological aspects of bipolar disorder. The author identifies appropriate research strategies to improve knowledge of effective psychosocial interventions." }, { "pmid": "17403826", "title": "Association between parental depression and children's health care use.", "abstract": "OBJECTIVE\nThe objective of this study was to determine the association between parental depression and pediatric health care use patterns.\n\n\nMETHODS\nWe selected all children who were 0 to 17 years of age, enrolled in Kaiser Permanente of Colorado during the study period July 1997 to December 2002, and linked to at least 1 parent/subscriber who was enrolled for at least 6 months during that period. Unexposed children were selected from a pool of children whose parents did not have a depression diagnosis. Outcome measures were derived from the child's payment files and electronic medical charts and included 5 categories of use: well-child-care visits, sick visits to primary care departments, specialty clinic visits, emergency department visits, and inpatient visits. We compared the rate of use per enrollment month for these 5 categories between exposed and unexposed children within each of the 5 age strata.\n\n\nRESULTS\nOur study population had 24,391 exposed and 45,274 age-matched, unexposed children. For the outcome of well-child-care visits, teenagers showed decreased rates of visits among exposed children. The rate of specialty department visits was higher in exposed children in the 4 oldest age groups. The rates of both emergency department visits and sick visits to primary care departments were higher for exposed children across all 5 age categories. The rate of inpatient visits was higher among exposed children in 2 of the 5 age groups.\n\n\nCONCLUSIONS\nOverall, having at least 1 depressed parent is associated with greater rate of emergency department and sick visits across all age groups, greater use of inpatient and specialty services in some age groups, and a lower rate of well-child-care visits among 13- to 17-year-olds. This pattern of increased use of expensive resources and decreased use of preventive services represents one of the hidden costs of adult depression." 
}, { "pmid": "26813211", "title": "Screening for Depression in Adults: US Preventive Services Task Force Recommendation Statement.", "abstract": "DESCRIPTION\nUpdate of the 2009 US Preventive Services Task Force (USPSTF) recommendation on screening for depression in adults.\n\n\nMETHODS\nThe USPSTF reviewed the evidence on the benefits and harms of screening for depression in adult populations, including older adults and pregnant and postpartum women; the accuracy of depression screening instruments; and the benefits and harms of depression treatment in these populations.\n\n\nPOPULATION\nThis recommendation applies to adults 18 years and older.\n\n\nRECOMMENDATION\nThe USPSTF recommends screening for depression in the general adult population, including pregnant and postpartum women. Screening should be implemented with adequate systems in place to ensure accurate diagnosis, effective treatment, and appropriate follow-up. (B recommendation)." }, { "pmid": "2788995", "title": "The prevalence of major depression in black and white adults in five United States communities.", "abstract": "There have been inconsistent findings on race differences in the rates and nature of depression, which are probably due to methodological differences between studies. Data are presented on the prevalence of major depression in white and black adults from the Epidemiologic Catchment Area Study, which examined a large community sample of five United States sites using diagnostic criteria based on the American Psychiatric Association Diagnostic and Statistical Manual, Third Edition. A total of 16,436 adults living in New Haven (Connecticut), Baltimore (Maryland), St. Louis (Missouri), the Piedmont area of North Carolina, and Los Angeles (California) were surveyed in 1980-1983. In the five sites, age-adjusted analyses by site and sex did not show any consistent black excess in lifetime prevalence or six-month prevalence; white men as compared with black men in particular tended to have slightly higher prevalence of major depression. At all sites, in the 18-24 years age group, black women as compared with white women showed a trend for higher six-month prevalence. White men in the 18-24 years age group showed a trend for higher six-month prevalence than black men. In New Haven, Baltimore, and the Piedmont area of North Carolina, logistic regression analyses of lifetime prevalence (by site and sex) showed no significant or consistent interaction of race with household income or age. Controlling for age and household income, whites tended to have higher lifetime prevalence than black at each of these three sites, regardless of sex." }, { "pmid": "29345528", "title": "Linguistic analysis of patients with mood and anxiety disorders during cognitive behavioral therapy.", "abstract": "We analyzed the verbal behavior of patients with mood or/and anxiety disorders during psychotherapy. Investigating the words people used, we expected differences due to cognitive and emotional foci in patients with depression vs. anxiety. Transcripts of therapy sessions from 85 outpatients treated with cognitive behavioral therapy were analyzed using the software program Linguistic Inquiry and Word Count. Multivariate group comparisons were carried out investigating the LIWC-categories first-person-singular pronouns, sad, anxiety and fillers. Differences between the three diagnostic groups were found in verbal utterances related to sadness (p = .05). No differences were found for first-person-singular pronouns and content-free fillers. 
Comparing the distinct groups \"depression\" and \"anxiety\", depressed patients used more words related to sadness (p = .01). Mood and anxiety disorders differ in the experience of emotions, but only slightly in self-focused attention. This points to differences in language use for different diagnostic groups and may help to improve diagnostic procedures or language-driven interventions which enhance therapists' attention to patients' verbal behavior." }, { "pmid": "12813119", "title": "Cost of lost productive work time among US workers with depression.", "abstract": "CONTEXT\nEvidence consistently indicates that depression has adversely affected work productivity. Estimates of the cost impact in lost labor time in the US workforce, however, are scarce and dated.\n\n\nOBJECTIVE\nTo estimate the impact of depression on labor costs (ie, work absence and reduced performance while at work) in the US workforce.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nAll employed individuals who participated in the American Productivity Audit (conducted August 1, 2001-July 31, 2002) between May 20 and July 11, 2002, were eligible for the Depressive Disorders Study. Those who responded affirmatively to 2 depression-screening questions (n = 692), as well as a 1:4 stratified random sample of those responding in the negative (n = 435), were recruited for and completed a supplemental interview using the Primary Care Evaluation of Mental Disorders Mood Module for depression, the Somatic Symptom Inventory, and a medical and treatment history for depression. Excess lost productive time (LPT) costs from depression were derived as the difference in LPT among individuals with depression minus the expected LPT in the absence of depression projected to the US workforce.\n\n\nMAIN OUTCOME MEASURE\nEstimated LPT and associated labor costs (work absence and reduced performance while at work) due to depression.\n\n\nRESULTS\nWorkers with depression reported significantly more total health-related LPT than those without depression (mean, 5.6 h/wk vs an expected 1.5 h/wk, respectively). Eighty-one percent of the LPT costs are explained by reduced performance while at work. Major depression accounts for 48% of the LPT among those with depression, again with a majority of the cost explained by reduced performance while at work. Self-reported use of antidepressants in the previous 12 months among those with depression was low (<33%) and the mean reported treatment effectiveness was only moderate. Extrapolation of these survey results and self-reported annual incomes to the population of US workers suggests that US workers with depression employed in the previous week cost employers an estimated 44 billion dollars per year in LPT, an excess of 31 billion dollars per year compared with peers without depression. This estimate does not include labor costs associated with short- and long-term disability.\n\n\nCONCLUSIONS\nA majority of the LPT costs that employers face from employee depression is invisible and explained by reduced performance while at work. Use of treatments for depression appears to be relatively low. The combined LPT burden among those with depression and the low level of treatment suggests that there may be cost-effective opportunities for improving depression-related outcomes in the US workforce." 
}, { "pmid": "12534439", "title": "The association between age and depression in the general population: a multivariate examination.", "abstract": "OBJECTIVE\nIn a large general population study we found a close to linear rise with age in the mean score and prevalence of self-reported symptoms of depression. The aim of this study was to examine if this linear relation prevailed when controlled for multiple variables and to examine factors that eventually explained the association.\n\n\nMETHOD\nAmong individuals aged 20-89 years living in Nord-Trøndelag County of Norway, 60 869 filled in valid ratings of the Hospital Anxiety and Depression Scale as well as many other variables. Covariates were grouped into a multivariate model with six blocks. Logistic regression was used to model the blocks and variables with caseness of depression as the dependent variable.\n\n\nRESULTS\nThe model explains a considerable part of the age-related pattern on depression. The pattern became less distinct in the age groups above 50 years. Variables within the blocks of somatic diagnoses and symptoms, as well as impairment, had most explanatory power.\n\n\nCONCLUSION\nBecause of our large sample we were able to control for more relevant variables than earlier studies. In contrast to most other studies, we found that an age-related increase of the prevalence of depression persisted after control for multiple variables." }, { "pmid": "24709016", "title": "Web-based depression treatment: associations of clients' word use with adherence and outcome.", "abstract": "BACKGROUND\nThe growing number of web-based psychological treatments, based on textual communication, generates a wealth of data that can contribute to knowledge of online and face-to-face treatments. We investigated whether clients' language use predicted treatment outcomes and adherence in Master Your Mood (MYM), an online group course for young adults with depressive symptoms.\n\n\nMETHODS\nAmong 234 participants from a randomised controlled trial of MYM, we tested whether their word use on course application forms predicted baseline levels of depression, anxiety and mastery, or subsequent treatment adherence. We then analysed chat session transcripts of course completers (n=67) to investigate whether word use changes predicted changes in treatment outcomes.\n\n\nRESULTS\nDepression improvement was predicted by increasing use of 'discrepancy words' during treatment (e.g. should). At baseline, more discrepancy words predicted higher mastery level. Adherence was predicted by more words used at application, more social words and fewer discrepancy words.\n\n\nLIMITATIONS\nMany variables were included, increasing the chance of coincidental results. This risk was constrained by examining only those word categories that have been investigated in relation to depression or adherence.\n\n\nCONCLUSIONS\nThis is the first study to link word use during treatment to outcomes of treatment that has proven to be effective in an RCT. The results suggest that paying attention to the length of problem articulation at application and to 'discrepancy words' may be wise, as these seem to be psychological markers. To expand knowledge of word use as psychological marker, research on web-based treatment should include text analysis." } ]
Frontiers in Neuroscience
30941003
PMC6434391
10.3389/fnins.2019.00189
ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing
In this work, we propose ReStoCNet, a residual stochastic multilayer convolutional Spiking Neural Network (SNN) composed of binary kernels, to reduce the synaptic memory footprint and enhance the computational efficiency of SNNs for complex pattern recognition tasks. ReStoCNet consists of an input layer followed by stacked convolutional layers for hierarchical input feature extraction, pooling layers for dimensionality reduction, and a fully-connected layer for inference. In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs. We propose a Spike Timing Dependent Plasticity (STDP) based probabilistic learning algorithm, referred to as Hybrid-STDP (HB-STDP), incorporating Hebbian and anti-Hebbian learning mechanisms, to train the binary kernels forming ReStoCNet in a layer-wise unsupervised manner. We demonstrate the efficacy of ReStoCNet and the presented HB-STDP based unsupervised training methodology on the MNIST and CIFAR-10 datasets. We show that residual connections enable the deeper convolutional layers to self-learn useful high-level input features and mitigate the accuracy loss observed in deep SNNs devoid of residual connections. The proposed ReStoCNet offers >20 × kernel memory compression compared to a full-precision (32-bit) SNN while yielding sufficiently high classification accuracy on the chosen pattern recognition tasks.
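The abstract states that the binary kernels are trained by probabilistically switching weights based on spike timing, combining Hebbian and anti-Hebbian mechanisms. The sketch below illustrates the general idea of spike-timing-dependent probabilistic switching of a binary weight; the exponential probability window, its parameters, and the update direction are assumptions for illustration and are not the paper's exact HB-STDP rule.

```python
# Illustrative sketch of probabilistic switching of a binary synaptic weight (-1/+1)
# driven by pre/post spike timing. The exponential probability window and its
# parameters are assumptions for illustration, not the exact HB-STDP rule of the paper.
import numpy as np

rng = np.random.default_rng(0)

def stdp_switch(weight, t_pre, t_post, p_max=0.1, tau=20.0):
    """Return the (possibly flipped) binary weight after one pre/post spike pair."""
    dt = t_post - t_pre                      # positive if the pre-spike precedes the post-spike
    p = p_max * np.exp(-abs(dt) / tau)       # switching probability decays with |dt|
    if rng.random() < p:
        # Hebbian-like: pre before post potentiates toward +1; otherwise depress toward -1.
        return 1 if dt >= 0 else -1
    return weight

# Hypothetical usage: update a 3x3 binary kernel for one pre/post spike pair per synapse.
kernel = rng.choice([-1, 1], size=(3, 3))
t_pre = rng.integers(0, 50, size=(3, 3))
t_post = 30
updated = np.array([[stdp_switch(kernel[i, j], t_pre[i, j], t_post)
                     for j in range(3)] for i in range(3)])
print(updated)
```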
4.1. Comparison With Related Works
We compare ReStoCNet with convolutional SNNs that employ unsupervised training methodology for the convolutional layers and supervised training algorithms like error backpropagation for the fully-connected layer, using classification accuracy (on the test set) and kernel memory compression as the evaluation metrics. The memory compression offered by ReStoCNet as a result of using binary kernels in the convolutional layers, referred to as kernel memory compression, is computed as specified by

(8)  kernel memory compression = \frac{N_{baseline} \times ksize_{baseline}^{2} \times nbits_{full\_precision}}{N_{ReStoCNet} \times ksize_{ReStoCNet}^{2} \times nbits_{binary}}

where N_{ReStoCNet} (N_{baseline}) and ksize_{ReStoCNet} (ksize_{baseline}) are the number of kernels and the kernel size, respectively, in ReStoCNet (the baseline convolutional SNN used for comparison), and nbits_{binary} and nbits_{full\_precision} are the hardware bit-precisions required for storing the binary and full-precision kernels, which are set to 2 bits and 32 bits, respectively. Note that the binary kernels in ReStoCNet require a storage capacity of 2 bits per synaptic weight since they are constrained to the binary states −1 and +1. Table 5 shows that the classification accuracy offered by ReStoCNet for MNIST digit recognition is comparable to that reported for convolutional SNNs composed of full-precision kernels trained using unsupervised learning methodologies. Specifically, a 36C3-2P-128FC-10FC ReStoCNet offers 98.54% accuracy on the MNIST test set, which compares favorably with that (98.36%) provided by the convolutional SNN presented in Tavanaei and Maida (2017), composed of a single convolutional layer with 32 maps and 5 × 5 full-precision kernels trained using STDP. The proposed ReStoCNet offers 39.5 × kernel memory compression by virtue of using smaller 3 × 3 binary kernels under iso-accuracy conditions for MNIST digit recognition. In contrast, very few works have benchmarked convolutional SNNs trained using unsupervised learning algorithms on the CIFAR-10 dataset. Panda and Roy (2016) proposed spike-based convolutional Auto-Encoders, where the kernels in every convolutional layer are trained in an unsupervised manner using error backpropagation to regenerate the input spike patterns. Ferré et al. (2018) presented a convolutional SNN (without residual connections) where the kernels are trained using a simple Hebbian STDP learning rule. Table 6 shows that ReStoCNet provides 4–5% lower accuracy than that reported in both of these related works. In particular, a 256C3-2P-1024FC-10FC ReStoCNet yields 4.97% lower accuracy than that provided by the 64C7-8P-512FC-512FC-10FC convolutional SNN (Ferré et al., 2018) while offering 21.7 × kernel memory compression. Note that the convolutional SNN presented in Ferré et al. (2018) is simulated by single-step forward propagation using input rates, while ReStoCNet is simulated using input spike trains over multiple time-steps.
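The compression figures quoted above follow directly from Equation (8) with the kernel counts, kernel sizes, and bit-widths given in the text; a quick check:

```python
# Worked check of the kernel memory compression ratios quoted above (Equation 8).
def kernel_memory_compression(n_base, k_base, bits_base, n_resto, k_resto, bits_resto):
    return (n_base * k_base**2 * bits_base) / (n_resto * k_resto**2 * bits_resto)

# MNIST: 32 kernels of 5x5 at 32-bit (Tavanaei and Maida, 2017) vs 36 binary 3x3 kernels at 2-bit.
print(kernel_memory_compression(32, 5, 32, 36, 3, 2))   # ~39.5x
# CIFAR-10: 64 kernels of 7x7 at 32-bit (Ferré et al., 2018) vs 256 binary 3x3 kernels at 2-bit.
print(kernel_memory_compression(64, 7, 32, 256, 3, 2))  # ~21.8x (reported as 21.7x)
```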
Table 5. Classification accuracy of SNN models, which use unsupervised training methodology for the hidden/convolutional layers and supervised training algorithm for the output (classification) layer, on the MNIST test set.

Model | Size | Training methodology | Accuracy (%)
FC_SNN (Yousefzadeh et al., 2018) | 6400FC-10FC | Probabilistic STDP + ANN backpropagation | 95.70
ConvSNN (Panda and Roy, 2016) | 12C5-2P-64C5-2P-10FC | SNN backpropagation | 99.08
ConvSNN (Stromatias et al., 2017) | 18C7-2P-10FC | Fixed Gabor kernels + ANN backpropagation | 98.20
ConvSNN (Lee et al., 2018b) | 16C3-16C3-2P-10FC | STDP | 91.10
ConvSNN (Ferré et al., 2018) | 8C5-2P-16C5-2P-120FC-60FC-10FC | STDP + ANN backpropagation | 98.49
ConvSNN (Kheradpisheh et al., 2018) | 30C5-2P-100C5-2P-10FC | STDP + Support Vector Machine | 98.40
ConvSNN (Tavanaei et al., 2018) | 64C5-2P-1500FC-10FC | STDP | 98.61
ConvSNN (Mozafari et al., 2018) | 30C5-2P-250C3-3P-200C5-5P | Reward-modulated STDP | 97.20
ConvSNN (Tavanaei and Maida, 2017) | 32C5-2P-128FC-10FC | STDP + Support Vector Machine | 98.36
ReStoCNet (our work) | 36C3-2P-128FC-10FC | Probabilistic eHB-STDP + ANN backpropagation | 98.54

Table 6. Classification accuracy of SNN models, which use unsupervised training methodology for the hidden/convolutional layers and supervised training algorithm for the output (classification) layer, on the CIFAR-10 test set.

Model | Size | Training methodology | Accuracy (%)
ConvSNN (Panda and Roy, 2016) | 32C5-2P-32C5-2P-64C4-10FC | SNN backpropagation | 70.16
ConvSNN (Ferré et al., 2018) | 64C7-8P-512FC-512FC-10FC | STDP + ANN backpropagation | 71.20
ReStoCNet (our work) | 256C3-2P-1024FC-10FC | Probabilistic e/iHB-STDP + ANN backpropagation | 66.23

Finally, we note that deep learning Binary Neural Networks (BNNs) (Courbariaux et al., 2015; Rastegari et al., 2016; Hubara et al., 2017), which use binary weights and binary activations for the neurons in every layer except the input and output layers, have been demonstrated to yield higher classification accuracy than that provided by ReStoCNet. Nevertheless, ReStoCNet offers the following advantages over BNNs. First, ReStoCNet is inherently suited for processing spatiotemporal spike trains from event-based audio and vision sensors, as shown by Stromatias et al. (2017) for convolutional SNNs with full-precision weights, since it computes with static image pixels mapped to spike trains. BNNs, on the contrary, use real-valued pixel intensities for the input layer. Second, ReStoCNet is amenable to efficient implementation in event-driven asynchronous neuromorphic hardware platforms like IBM TrueNorth (Merolla et al., 2014) and Intel Loihi (Davies et al., 2018) since it uses {0, 1} for the outputs of the spiking neurons in every convolutional layer. The weighted sum of the input spikes with the synaptic weights in the convolutional layers needs to be computed only in the event of a spike fired by the corresponding input neurons. In addition, only the sparse spiking events need to be transmitted between the layers. The event-driven computing capability offered by ReStoCNet can be exploited to achieve higher energy efficiency in neuromorphic hardware implementations by minimizing the computation and communication energy in the absence of spiking events.
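The event-driven argument can be made concrete with a small sketch: with {0, 1} spikes, the weighted input sum only has to visit the synapses whose inputs actually fired, whereas a dense BNN-style evaluation touches every synapse at every step. This is an illustrative comparison only, not the paper's hardware mapping.

```python
# Illustrative contrast between event-driven accumulation ({0,1} spikes, visit only
# active inputs) and a dense evaluation that touches every synapse each time-step.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.choice([-1, 1], size=(128, 1024))       # binary weights, 128 neurons x 1024 inputs
spikes = (rng.random(1024) < 0.05).astype(np.int8)    # sparse {0,1} input spikes (~5% active)

# Dense evaluation: every synapse contributes, regardless of input activity.
dense_sum = weights @ spikes

# Event-driven evaluation: accumulate only the columns of the inputs that spiked.
active = np.flatnonzero(spikes)
event_sum = weights[:, active].sum(axis=1)

assert np.array_equal(dense_sum, event_sum)
print(f"{active.size} of {spikes.size} inputs active -> "
      f"{active.size / spikes.size:.1%} of synaptic reads needed per neuron")
```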
BNNs, on the other hand, use {1, −1} for the neuronal activations and either {1, −1} (Courbariaux et al., 2015) or {α, −α} (Rastegari et al., 2016), where α is a layer-wise scaling factor, for the weights in order to achieve good accuracy and stable training convergence (Pfeiffer and Pfeil, 2018). Hence, the computation of the weighted input sum and the communication of the binarized neuronal activations need to be carried out for all the neurons in every layer in a synchronous manner, which is in contrast to the event-based asynchronous computing capability provided by ReStoCNet. Last, ReStoCNet offers a memory-efficient solution for enabling on-chip intelligence in resource-constrained battery-powered Internet of Things (IoT) edge devices since the binary kernels are trained using a probabilistic-STDP based local learning rule that can be efficiently implemented on-chip. Learning is achieved by probabilistically switching the binary kernel weights between the allowed states based on spike timing, which precludes the need for storing the full-precision weights and enhances the memory efficiency during training. BNNs, on the other hand, are trained using error backpropagation algorithms that update the full-precision weights based on the backpropagated error gradients and binarize the modified weights for forward propagation and for computing the error gradients. Thus, ReStoCNet provides a promising alternative for energy- and memory-efficient computing during both training and inference in IoT edge devices, for instance, surveillance cameras, which produce large volumes of real-time data. It is inefficient for these devices to continuously offload raw/compressed data to the cloud for training. This is because the sheer volume of generated data could exceed the bandwidth available for transmitting them to the cloud. Alternatively, there could be connectivity issues restricting communication between the edge and the cloud. In addition, there are also security and data privacy issues that need to be addressed while sending (receiving) data to (from) the cloud. Hence, it is highly desirable to equip the edge devices with on-chip intelligence so that they can learn from real-time input data and invoke the cloud occasionally to update the on-chip trained weights using more complex algorithms. The proposed approach is also suited for building intelligent autonomous systems like robots and self-flying drones. For example, it is beneficial to embed on-chip learning in autonomous robots used for disaster relief operations, enabling them to navigate obstacles and scour the disaster site for survivors. In the case of self-flying drones used for reconnaissance operations, on-chip intelligence can enable them to effectively navigate enemy territory and improve the chances of a successful mission.
The classification accuracy of ReStoCNet for complex applications could be improved by augmenting the layer-wise unsupervised training methodology with a global supervised training mechanism. Recent works have proposed error backpropagation algorithms for the supervised training of SNNs (Lee et al., 2016, 2018a; Panda and Roy, 2016; Jin et al., 2018; Mostafa, 2018; Wu et al., 2018). However, the backpropagation algorithms for SNNs, some of which backpropagate errors at multiple time-steps, are computationally prohibitive and prone to unstable convergence behaviors (Lee et al., 2018a). In this regard, Neftci et al.
(2017) proposed event-driven random backpropagation, which obviates the need for calculating and backpropagating precise error gradients. Future work could explore a hybrid unsupervised (local) and supervised (global) training methodology for ReStoCNet to obtain favorable trade-offs between classification accuracy and training effort, as was shown by Lee et al. (2018a) for full-precision convolutional SNNs without residual connections. Such a hybrid approach would also preclude the need for using the pooled spiking activations of all the convolutional layers for inference, thereby enhancing the scalability of deep ReStoCNets.
[ "9852584", "26941637", "29674961", "19115011", "20192230", "29328958", "30123103", "27877107", "1372754", "17305422", "25104385", "28783639", "28680387", "30410432", "27443913", "30899212", "10966623", "27405788", "28701911", "29962943", "27183057", "25879967", "29875621", "30374283" ]
[ { "pmid": "9852584", "title": "Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type.", "abstract": "In cultures of dissociated rat hippocampal neurons, persistent potentiation and depression of glutamatergic synapses were induced by correlated spiking of presynaptic and postsynaptic neurons. The relative timing between the presynaptic and postsynaptic spiking determined the direction and the extent of synaptic changes. Repetitive postsynaptic spiking within a time window of 20 msec after presynaptic activation resulted in long-term potentiation (LTP), whereas postsynaptic spiking within a window of 20 msec before the repetitive presynaptic activation led to long-term depression (LTD). Significant LTP occurred only at synapses with relatively low initial strength, whereas the extent of LTD did not show obvious dependence on the initial synaptic strength. Both LTP and LTD depended on the activation of NMDA receptors and were absent in cases in which the postsynaptic neurons were GABAergic in nature. Blockade of L-type calcium channels with nimodipine abolished the induction of LTD and reduced the extent of LTP. These results underscore the importance of precise spike timing, synaptic strength, and postsynaptic cell type in the activity-induced modification of central synapses and suggest that Hebb's rule may need to incorporate a quantitative consideration of spike timing that reflects the narrow and asymmetric window for the induction of synaptic modification." }, { "pmid": "26941637", "title": "Unsupervised learning of digit recognition using spike-timing-dependent plasticity.", "abstract": "In order to understand how the mammalian neocortex is performing computations, two things are necessary; we need to have a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Therefore, in recent years there is an increasing interest in how spiking neural networks (SNN) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs which use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to a SNN. We present a SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks." 
}, { "pmid": "29674961", "title": "Unsupervised Feature Learning With Winner-Takes-All Based STDP.", "abstract": "We present a novel strategy for unsupervised feature learning in image applications inspired by the Spike-Timing-Dependent-Plasticity (STDP) biological learning rule. We show equivalence between rank order coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass using GPU hardware. Next we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented : a Winner-Takes-All (WTA) framework which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state of the art unsupervised learning methods." }, { "pmid": "19115011", "title": "Brian: a simulator for spiking neural networks in python.", "abstract": "\"Brian\" is a new simulator for spiking neural networks, written in Python (http://brian. di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience." }, { "pmid": "20192230", "title": "Nanoscale memristor device as synapse in neuromorphic systems.", "abstract": "A memristor is a two-terminal electronic device whose conductance can be precisely modulated by charge or flux through it. Here we experimentally demonstrate a nanoscale silicon-based memristor device and show that a hybrid system composed of complementary metal-oxide semiconductor neurons and memristor synapses can support important synaptic functions such as spike timing dependent plasticity. Using memristors as synapses in neuromorphic circuits can potentially offer both high connectivity and high density required for efficient computing." }, { "pmid": "29328958", "title": "STDP-based spiking deep convolutional neural networks for object recognition.", "abstract": "Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. 
We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousands spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundreds of such neurons contained robust category information, as demonstrated using a classifier on Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions." }, { "pmid": "30123103", "title": "Training Deep Spiking Convolutional Neural Networks With STDP-Based Unsupervised Pre-training Followed by Supervised Fine-Tuning.", "abstract": "Spiking Neural Networks (SNNs) are fast becoming a promising candidate for brain-inspired neuromorphic computing because of their inherent power efficiency and impressive inference accuracy across several cognitive tasks such as image classification and speech recognition. The recent efforts in SNNs have been focused on implementing deeper networks with multiple hidden layers to incorporate exponentially more difficult functional representations. In this paper, we propose a pre-training scheme using biologically plausible unsupervised learning, namely Spike-Timing-Dependent-Plasticity (STDP), in order to better initialize the parameters in multi-layer systems prior to supervised optimization. The multi-layer SNN is comprised of alternating convolutional and pooling layers followed by fully-connected layers, which are populated with leaky integrate-and-fire spiking neurons. We train the deep SNNs in two phases wherein, first, convolutional kernels are pre-trained in a layer-wise manner with unsupervised learning followed by fine-tuning the synaptic weights with spike-based supervised gradient descent backpropagation. Our experiments on digit recognition demonstrate that the STDP-based pre-training with gradient-based optimization provides improved robustness, faster (~2.5 ×) training time and better generalization compared with purely gradient-based training without pre-training." }, { "pmid": "27877107", "title": "Training Deep Spiking Neural Networks Using Backpropagation.", "abstract": "Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. 
In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations." }, { "pmid": "1372754", "title": "Selection of intrinsic horizontal connections in the visual cortex by correlated neuronal activity.", "abstract": "In the visual cortex of the brain, long-ranging tangentially oriented axon collaterals interconnect regularly spaced clusters of cells. These connections develop after birth and attain their specificity by pruning. To test whether there is selective stabilization of connections between those cells that exhibit correlated activity, kittens were raised with artificially induced strabismus (eye deviation) to eliminate the correlation between signals from the two eyes. In area 17, cell clusters were driven almost exclusively from either the right or the left eye and tangential intracortical fibers preferentially connected cell groups activated by the same eye. Thus, circuit selection depends on visual experience, and the selection criterion is the correlation of activity." }, { "pmid": "17305422", "title": "Unsupervised learning of visual features through spike timing dependent plasticity.", "abstract": "Spike timing dependent plasticity (STDP) is a learning rule that modifies synaptic strength as a function of the relative timing of pre- and postsynaptic spikes. When a neuron is repeatedly presented with similar inputs, STDP is known to have the effect of concentrating high synaptic weights on afferents that systematically fire early, while postsynaptic spike latencies decrease. Here we use this learning rule in an asynchronous feedforward spiking neural network that mimics the ventral visual pathway and shows that when the network is presented with natural images, selectivity to intermediate-complexity visual features emerges. Those features, which correspond to prototypical patterns that are both salient and consistently present in the images, are highly informative and enable robust object recognition, as demonstrated on various classification tasks. Taken together, these results show that temporal codes may be a key to understanding the phenomenal processing speed achieved by the visual system and that STDP can lead to fast and selective responses." }, { "pmid": "25104385", "title": "Artificial brains. 
A million spiking-neuron integrated circuit with a scalable communication network and interface.", "abstract": "Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts." }, { "pmid": "28783639", "title": "Supervised Learning Based on Temporal Coding in Spiking Neural Networks.", "abstract": "Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information." }, { "pmid": "28680387", "title": "Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.", "abstract": "An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Gradient Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. 
Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning." }, { "pmid": "30410432", "title": "Deep Learning With Spiking Neurons: Opportunities and Challenges.", "abstract": "Spiking neural networks (SNNs) are inspired by information processing in biology, where sparse and asynchronous binary signals are communicated and processed in a massively parallel fashion. SNNs on neuromorphic hardware exhibit favorable properties such as low power consumption, fast inference, and event-driven information processing. This makes them interesting candidates for the efficient implementation of deep neural networks, the method of choice for many machine learning tasks. In this review, we address the opportunities that deep spiking networks offer and investigate in detail the challenges associated with training SNNs in a way that makes them competitive with conventional deep learning, but simultaneously allows for efficient mapping to hardware. A wide range of training methods for SNNs is presented, ranging from the conversion of conventional deep networks into SNNs, constrained training before conversion, spiking variants of backpropagation, and biologically motivated variants of STDP. The goal of our review is to define a categorization of SNN training methods, and summarize their advantages and drawbacks. We further discuss relationships between SNNs and binary networks, which are becoming popular for efficient digital hardware implementation. Neuromorphic hardware platforms have great potential to enable deep spiking networks in real-world applications. We compare the suitability of various neuromorphic systems that have been developed over the past years, and investigate potential use cases. Neuromorphic approaches and conventional machine learning should not be considered simply two solutions to the same classes of problems, instead it is possible to identify and exploit their task-specific advantages. Deep SNNs offer great opportunities to work with new types of event-based sensors, exploit temporal codes and local on-chip learning, and we have so far just scratched the surface of realizing these advantages in practical applications." }, { "pmid": "27443913", "title": "Magnetic Tunnel Junction Mimics Stochastic Cortical Spiking Neurons.", "abstract": "Brain-inspired computing architectures attempt to mimic the computations performed in the neurons and the synapses in the human brain in order to achieve its efficiency in learning and cognitive tasks. In this work, we demonstrate the mapping of the probabilistic spiking nature of pyramidal neurons in the cortex to the stochastic switching behavior of a Magnetic Tunnel Junction in presence of thermal noise. We present results to illustrate the efficiency of neuromorphic systems based on such probabilistic neurons for pattern recognition tasks in presence of lateral inhibition and homeostasis. Such stochastic MTJ neurons can also potentially provide a direct mapping to the probabilistic computing elements in Belief Networks for performing regenerative tasks." 
}, { "pmid": "30899212", "title": "Going Deeper in Spiking Neural Networks: VGG and Residual Architectures.", "abstract": "Over the past few years, Spiking Neural Networks (SNNs) have become popular as a possible pathway to enable low-power event-driven neuromorphic hardware. However, their application in machine learning have largely been limited to very shallow neural network architectures for simple problems. In this paper, we propose a novel algorithmic technique for generating an SNN with a deep architecture, and demonstrate its effectiveness on complex visual recognition problems such as CIFAR-10 and ImageNet. Our technique applies to both VGG and Residual network architectures, with significantly better accuracy than the state-of-the-art. Finally, we present analysis of the sparse event-driven computations to demonstrate reduced hardware overhead when operating in the spiking domain." }, { "pmid": "10966623", "title": "Competitive Hebbian learning through spike-timing-dependent synaptic plasticity.", "abstract": "Hebbian models of development and learning require both activity-dependent synaptic plasticity and a mechanism that induces competition between different synapses. One form of experimentally observed long-term synaptic plasticity, which we call spike-timing-dependent plasticity (STDP), depends on the relative timing of pre- and postsynaptic action potentials. In modeling studies, we find that this form of synaptic modification can automatically balance synaptic strengths to make postsynaptic firing irregular but more sensitive to presynaptic spike timing. It has been argued that neurons in vivo operate in such a balanced regime. Synapses modifiable by STDP compete for control of the timing of postsynaptic action potentials. Inputs that fire the postsynaptic neuron with short latency or that act in correlated groups are able to compete most successfully and develop strong synapses, while synapses of longer-latency or less-effective inputs are weakened." }, { "pmid": "27405788", "title": "Magnetic Tunnel Junction Based Long-Term Short-Term Stochastic Synapse for a Spiking Neural Network with On-Chip STDP Learning.", "abstract": "Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, the general purpose computing platforms and the custom hardware architectures implemented using standard CMOS technology, have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device to system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses." 
}, { "pmid": "28701911", "title": "An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data.", "abstract": "This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips." }, { "pmid": "29962943", "title": "Event-Based, Timescale Invariant Unsupervised Online Deep Learning With STDP.", "abstract": "Learning of hierarchical features with spiking neurons has mostly been investigated in the database framework of standard deep learning systems. However, the properties of neuromorphic systems could be particularly interesting for learning from continuous sensor data in real-world settings. In this work, we introduce a deep spiking convolutional neural network of integrate-and-fire (IF) neurons which performs unsupervised online deep learning with spike-timing dependent plasticity (STDP) from a stream of asynchronous and continuous event-based data. In contrast to previous approaches to unsupervised deep learning with spikes, where layers were trained successively, we introduce a mechanism to train all layers of the network simultaneously. This allows approximate online inference already during the learning process and makes our architecture suitable for online learning and inference. We show that it is possible to train the network without providing implicit information about the database, such as the number of classes and the duration of stimuli presentation. By designing an STDP learning rule which depends only on relative spike timings, we make our network fully event-driven and able to operate without defining an absolute timescale of its dynamics. 
Our architecture requires only a small number of generic mechanisms and therefore enforces few constraints on a possible neuromorphic hardware implementation. These characteristics make our network one of the few neuromorphic architecture which could directly learn features and perform inference from an event-based vision sensor." }, { "pmid": "27183057", "title": "Stochastic phase-change neurons.", "abstract": "Artificial neuromorphic systems based on populations of spiking neurons are an indispensable tool in understanding the human brain and in constructing neuromimetic computational systems. To reach areal and power efficiencies comparable to those seen in biological systems, electroionics-based and phase-change-based memristive devices have been explored as nanoscale counterparts of synapses. However, progress on scalable realizations of neurons has so far been limited. Here, we show that chalcogenide-based phase-change materials can be used to create an artificial neuron in which the membrane potential is represented by the phase configuration of the nanoscale phase-change device. By exploiting the physics of reversible amorphous-to-crystal phase transitions, we show that the temporal integration of postsynaptic potentials can be achieved on a nanosecond timescale. Moreover, we show that this is inherently stochastic because of the melt-quench-induced reconfiguration of the atomic structure occurring when the neuron is reset. We demonstrate the use of these phase-change neurons, and their populations, in the detection of temporal correlations in parallel data streams and in sub-Nyquist representation of high-bandwidth signals." }, { "pmid": "25879967", "title": "Spin-transfer torque magnetic memory as a stochastic memristive synapse for neuromorphic systems.", "abstract": "Spin-transfer torque magnetic memory (STT-MRAM) is currently under intense academic and industrial development, since it features non-volatility, high write and read speed and high endurance. In this work, we show that when used in a non-conventional regime, it can additionally act as a stochastic memristive device, appropriate to implement a \"synaptic\" function. We introduce basic concepts relating to spin-transfer torque magnetic tunnel junction (STT-MTJ, the STT-MRAM cell) behavior and its possible use to implement learning-capable synapses. Three programming regimes (low, intermediate and high current) are identified and compared. System-level simulations on a task of vehicle counting highlight the potential of the technology for learning systems. Monte Carlo simulations show its robustness to device variations. The simulations also allow comparing system operation when the different programming regimes of STT-MTJs are used. In comparison to the high and low current regimes, the intermediate current regime allows minimization of energy consumption, while retaining a high robustness to device variations. These results open the way for unexplored applications of STT-MTJs in robust, low power, cognitive-type systems." }, { "pmid": "29875621", "title": "Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks.", "abstract": "Spiking neural networks (SNNs) are promising in ascertaining brain-like behaviors since spikes are capable of encoding spatio-temporal information. Recent schemes, e.g., pre-training from artificial neural networks (ANNs) or direct training based on backpropagation (BP), make the high-performance supervised training of SNNs possible. 
However, these methods primarily fasten more attention on its spatial domain information, and the dynamics in temporal domain are attached less significance. Consequently, this might lead to the performance bottleneck, and scores of training techniques shall be additionally required. Another underlying problem is that the spike activity is naturally non-differentiable, raising more difficulties in supervised training of SNNs. In this paper, we propose a spatio-temporal backpropagation (STBP) algorithm for training high-performance SNNs. In order to solve the non-differentiable problem of SNNs, an approximated derivative for spike activity is proposed, being appropriate for gradient descent training. The STBP algorithm combines the layer-by-layer spatial domain (SD) and the timing-dependent temporal domain (TD), and does not require any additional complicated skill. We evaluate this method through adopting both the fully connected and convolutional architecture on the static MNIST dataset, a custom object detection dataset, and the dynamic N-MNIST dataset. Results bespeak that our approach achieves the best accuracy compared with existing state-of-the-art algorithms on spiking networks. This work provides a new perspective to investigate the high-performance SNNs for future brain-like computing paradigm with rich spatio-temporal dynamics." }, { "pmid": "30374283", "title": "On Practical Issues for Stochastic STDP Hardware With 1-bit Synaptic Weights.", "abstract": "In computational neuroscience, synaptic plasticity learning rules are typically studied using the full 64-bit floating point precision computers provide. However, for dedicated hardware implementations, the precision used not only penalizes directly the required memory resources, but also the computing, communication, and energy resources. When it comes to hardware engineering, a key question is always to find the minimum number of necessary bits to keep the neurocomputational system working satisfactorily. Here we present some techniques and results obtained when limiting synaptic weights to 1-bit precision, applied to a Spike-Timing-Dependent-Plasticity (STDP) learning rule in Spiking Neural Networks (SNN). We first illustrate the 1-bit synapses STDP operation by replicating a classical biological experiment on visual orientation tuning, using a simple four neuron setup. After this, we apply 1-bit STDP learning to the hidden feature extraction layer of a 2-layer system, where for the second (and output) layer we use already reported SNN classifiers. The systems are tested on two spiking datasets: a Dynamic Vision Sensor (DVS) recorded poker card symbols dataset and a Poisson-distributed spike representation MNIST dataset version. Tests are performed using the in-house MegaSim event-driven behavioral simulator and by implementing the systems on FPGA (Field Programmable Gate Array) hardware." } ]
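Several of the reference abstracts above describe pair-based spike-timing-dependent plasticity, where a presynaptic spike arriving shortly before a postsynaptic spike strengthens the synapse and the reverse order weakens it. The following minimal Python sketch illustrates that generic rule with exponential timing windows; the parameter values (a_plus, a_minus, the 20 ms time constants) and the weight-clipping range are illustrative assumptions and are not taken from any of the cited papers.

```python
import numpy as np

def stdp_update(w, t_pre, t_post,
                a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike (dt > 0), depress when it follows (dt < 0)."""
    dt = t_post - t_pre  # spike-time difference in ms
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau_plus)    # LTP branch
    elif dt < 0:
        dw = -a_minus * np.exp(dt / tau_minus)  # LTD branch
    else:
        dw = 0.0
    return float(np.clip(w + dw, w_min, w_max))

# Example: pre fires 5 ms before post -> potentiation; 5 ms after -> depression
print(stdp_update(0.5, t_pre=100.0, t_post=105.0))  # > 0.5
print(stdp_update(0.5, t_pre=105.0, t_post=100.0))  # < 0.5
```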
Frontiers in Neuroscience
30949018
PMC6436577
10.3389/fnins.2019.00210
A New Pulse Coupled Neural Network (PCNN) for Brain Medical Image Fusion Empowered by Shuffled Frog Leaping Algorithm
Recent research has reported the application of image fusion technologies to medical images in a wide range of settings, such as the diagnosis of brain diseases, the detection of glioma, and the diagnosis of Alzheimer's disease. In our study, a new fusion method based on the combination of the shuffled frog leaping algorithm (SFLA) and the pulse coupled neural network (PCNN) is proposed for the fusion of SPECT and CT images to improve the quality of fused brain images. First, the intensity-hue-saturation (IHS) components of the SPECT and CT images are decomposed independently using a non-subsampled contourlet transform (NSCT), which yields both low-frequency and high-frequency sub-band images. The combined SFLA and PCNN is then used to fuse the high-frequency sub-band images and the low-frequency images, with the SFLA employed to optimize the PCNN network parameters. Finally, the fused image is produced by the inverse NSCT and inverse IHS transforms. We evaluated our algorithm in terms of standard deviation (SD), mean gradient (Ḡ), spatial frequency (SF) and information entropy (E) on three different sets of brain images. The experimental results demonstrate that the proposed fusion method significantly improves both precision and spatial resolution.
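For reference, the pulse coupled neural network named in the record above is commonly written as the following discrete iteration over pixel positions (i, j); this is the standard PCNN formulation rather than the article's exact variant (papers often simplify it, and conventions differ on whether the firing test uses the previous or the current threshold). Here S_ij denotes the external stimulus (e.g., a normalized coefficient value), M and W are the feeding and linking kernels, and the decay constants α_F, α_L, α_θ, the linking strength β, and the amplitudes V_F, V_L, V_θ are the kind of parameters an optimizer such as SFLA would tune.

```latex
\begin{aligned}
F_{ij}[n] &= e^{-\alpha_F}\,F_{ij}[n-1] + V_F \sum_{k,l} M_{ijkl}\,Y_{kl}[n-1] + S_{ij},\\
L_{ij}[n] &= e^{-\alpha_L}\,L_{ij}[n-1] + V_L \sum_{k,l} W_{ijkl}\,Y_{kl}[n-1],\\
U_{ij}[n] &= F_{ij}[n]\,\bigl(1 + \beta\,L_{ij}[n]\bigr),\\
Y_{ij}[n] &= \begin{cases} 1, & U_{ij}[n] > \theta_{ij}[n-1],\\ 0, & \text{otherwise}, \end{cases}\\
\theta_{ij}[n] &= e^{-\alpha_\theta}\,\theta_{ij}[n-1] + V_\theta\,Y_{ij}[n].
\end{aligned}
```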
Related Works
Image fusion involves a wide range of disciplines and can be classified as a form of information fusion, for which a variety of methods have been presented. A novel fusion method for multi-scale images was presented by Zhang X. et al. (2017) using the Empirical Wavelet Transform (EWT). In that method, simultaneous empirical wavelet transforms (SEWT) were applied to one-dimensional and two-dimensional signals to obtain wavelets adapted to the processed signals. A series of experiments showed satisfying visual perception, and objective evaluations demonstrated that the method was superior to other traditional algorithms. However, the time consumption of the method is high, mainly during image decomposition, which makes it difficult to apply in a real-time system. Noisy images, which may affect the generation of the adapted wavelets, should also be considered in future work (Zeng et al., 2016b; Zhang X. et al., 2017). Aishwarya and Thangammal (2017) also proposed a fusion method based on a supervised dictionary learning approach. During dictionary training, gradient information was first obtained for every patch in the training set in order to reduce the number of input patches. Second, both the information content and the edge strength were measured for each gradient patch. Finally, the patches with stronger focus features were selected by a selection rule to train the over-complete dictionary. In the fusion process, the globally learned dictionary was used to achieve better visual quality. Nevertheless, this approach also incurs a high computational cost during sparse coding, and the final fusion performance may be affected by high-frequency noise (Zeng et al., 2016a; Aishwarya and Thangammal, 2017). Moreover, Kanmani et al. introduced an algorithm for the fusion of thermal and visible images in order to obtain a single comprehensive fused image. A novel method called self-tuning particle swarm optimization (STPSO) was presented to calculate the optimal weights, and a weighted averaging fusion rule was used to fuse the low-frequency and high-frequency coefficients obtained through the Dual-Tree Discrete Wavelet Transform (DT-DWT) (Kanmani and Narasimhan, 2017; Zeng et al., 2017a). Ji and Zhang proposed a new fusion algorithm based on an adaptive weighting method combined with fuzzy theory. In this algorithm, a membership function with fuzzy logic variables was designed so that coefficients at different levels are transformed with different weights. Experimental results indicated that the proposed algorithm outperformed existing algorithms in terms of visual quality and objective measures (Ji and Zhang, 2017; Zeng et al., 2017b).
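As a concrete, transform-agnostic illustration of the coefficient-level fusion rules surveyed above, the short Python sketch below fuses two decompositions with a weighted average for the low-frequency sub-band and a maximum-absolute-value rule for the high-frequency sub-bands. The equal 0.5/0.5 weights and the specific rule choices are illustrative assumptions, not the rules used in the cited papers.

```python
import numpy as np

def fuse_subbands(low_a, low_b, highs_a, highs_b, w_a=0.5):
    """Generic coefficient-level fusion:
    - low-frequency sub-band: weighted average (approximation content),
    - high-frequency sub-bands: keep the coefficient with larger magnitude
      (detail content such as edges)."""
    fused_low = w_a * low_a + (1.0 - w_a) * low_b
    fused_highs = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                   for ha, hb in zip(highs_a, highs_b)]
    return fused_low, fused_highs

# Usage with coefficients from any multi-scale decomposition (DWT, NSCT, ...):
# low_f, highs_f = fuse_subbands(low_ct, low_spect, highs_ct, highs_spect)
```

Keeping the larger-magnitude high-frequency coefficient is a common baseline because detail sub-bands carry edge information, while averaging the approximation sub-band avoids biasing overall intensity toward either modality.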
[ "27778089", "15384540", "19054734", "22562768", "25879835", "24892057", "24770917" ]
[ { "pmid": "27778089", "title": "Comparison of initial and tertiary centre second opinion reads of multiparametric magnetic resonance imaging of the prostate prior to repeat biopsy.", "abstract": "OBJECTIVES\nTo investigate the value of second-opinion evaluation of multiparametric prostate magnetic resonance imaging (MRI) by subspecialised uroradiologists at a tertiary centre for the detection of significant cancer in transperineal fusion prostate biopsy.\n\n\nMETHODS\nEvaluation of prospectively acquired initial and second-opinion radiology reports of 158 patients who underwent MRI at regional hospitals prior to transperineal MR/untrasound fusion biopsy at a tertiary referral centre over a 3-year period. Gleason score (GS) 7-10 cancer, positive predictive value (PPV) and negative (NPV) predictive value (±95 % confidence intervals) were calculated and compared by Fisher's exact test.\n\n\nRESULTS\nDisagreement between initial and tertiary centre second-opinion reports was observed in 54 % of cases (86/158). MRIs had a higher NPV for GS 7-10 in tertiary centre reads compared to initial reports (0.89 ± 0.08 vs 0.72 ± 0.16; p = 0.04), and a higher PPV in the target area for all cancer (0.61 ± 0.12 vs 0.28 ± 0.10; p = 0.01) and GS 7-10 cancer (0.43 ± 0.12 vs 0.2 3 ± 0.09; p = 0.02). For equivocal suspicion, the PPV for GS 7-10 was 0.12 ± 0.11 for tertiary centre and 0.11 ± 0.09 for initial reads; p = 1.00.\n\n\nCONCLUSIONS\nSecond readings of prostate MRI by subspecialised uroradiologists at a tertiary centre significantly improved both NPV and PPV. Reporter experience may help to reduce overcalling and avoid overtargeting of lesions.\n\n\nKEY POINTS\n• Multiparametric MRIs were more often called negative in subspecialist reads (41 % vs 20 %). • Second readings of prostate mpMRIs by subspecialist uroradiologists significantly improved NPV and PPV. • Reporter experience may reduce overcalling and avoid overtargeting of lesions. • Greater education and training of radiologists in prostate MRI interpretation is advised." }, { "pmid": "15384540", "title": "A constructive approach for finding arbitrary roots of polynomials by neural networks.", "abstract": "This paper proposes a constructive approach for finding arbitrary (real or complex) roots of arbitrary (real or complex) polynomials by multilayer perceptron network (MLPN) using constrained learning algorithm (CLA), which encodes the a priori information of constraint relations between root moments and coefficients of a polynomial into the usual BP algorithm (BPA). Moreover, the root moment method (RMM) is also simplified into a recursive version so that the computational complexity can be further decreased, which leads the roots of those higher order polynomials to be readily found. In addition, an adaptive learning parameter with the CLA is also proposed in this paper; an initial weight selection method is also given. Finally, several experimental results show that our proposed neural connectionism approaches, with respect to the nonneural ones, are more efficient and feasible in finding the arbitrary roots of arbitrary polynomials." }, { "pmid": "19054734", "title": "A constructive hybrid structure optimization methodology for radial basis probabilistic neural networks.", "abstract": "In this paper, a novel heuristic structure optimization methodology for radial basis probabilistic neural networks (RBPNNs) is proposed. 
First, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to select the initial hidden-layer centers of the RBPNN, and then the recursive orthogonal least square algorithm (ROLSA) combined with the particle swarm optimization (PSO) algorithm is adopted to further optimize the initial structure of the RBPNN. The proposed algorithms are evaluated through eight benchmark classification problems and two real-world application problems, a plant species identification task involving 50 plant species and a palmprint recognition task. Experimental results show that our proposed algorithm is feasible and efficient for the structure optimization of the RBPNN. The RBPNN achieves higher recognition rates and better classification efficiency than multilayer perceptron networks (MLPNs) and radial basis function neural networks (RBFNNs) in both tasks. Moreover, the experimental results illustrated that the generalization performance of the optimized RBPNN in the plant species identification task was markedly better than that of the optimized RBFNN." }, { "pmid": "22562768", "title": "A general CPL-AdS methodology for fixing dynamic parameters in dual environments.", "abstract": "The algorithm of Continuous Point Location with Adaptive d-ary Search (CPL-AdS) strategy exhibits its efficiency in solving stochastic point location (SPL) problems. However, there is one bottleneck for this CPL-AdS strategy which is that, when the dimension of the feature, or the number of divided subintervals for each iteration, d is large, the decision table for elimination process is almost unavailable. On the other hand, the larger dimension of the features d can generally make this CPL-AdS strategy avoid oscillation and converge faster. This paper presents a generalized universal decision formula to solve this bottleneck problem. As a matter of fact, this decision formula has a wider usage beyond handling out this SPL problems, such as dealing with deterministic point location problems and searching data in Single Instruction Stream-Multiple Data Stream based on Concurrent Read and Exclusive Write parallel computer model. Meanwhile, we generalized the CPL-AdS strategy with an extending formula, which is capable of tracking an unknown dynamic parameter λ in both informative and deceptive environments. Furthermore, we employed different learning automata in the generalized CPL-AdS method to find out if faster learning algorithm will lead to better realization of the generalized CPL-AdS method. All of these aforementioned contributions are vitally important whether in theory or in practical applications. Finally, extensive experiments show that our proposed approaches are efficient and feasible." }, { "pmid": "25879835", "title": "Left-Ventricle Segmentation of SPECT Images of Rats.", "abstract": "Single-photon emission computed tomography (SPECT) imaging of the heart is helpful to quantify the left-ventricular ejection fraction and study myocardial perfusion scans. However, these evaluations require a 3-D segmentation of the left-ventricular wall on each phase of the cardiac cycle. This paper presents a fast and interactive graph cut method for 3-D segmentation of the left ventricle (LV) of rats in SPECT images. The method is carried out in three steps. First, 3-D sampling of the LV cavity is made in a spherical-cylindrical coordinate system. Then, a graph-cut-based energy minimization procedure provides delineation of the myocardium centerline surface. 
From there, it is possible to outline the epicardial and endocardial boundaries by considering the second statistical moment of the SPECT images. An important aspect of our method is to always produce anatomically coherent U-shape results. It also relies on only two intuitive parameters regulating the smoothness and the thickness of the segmentation result. Results show not only that our method is statistically as accurate as human experts, but it is one order of magnitude faster than a state-of-the-art method with a processing time of at most 2 s on a 4-D cardiac image after having determined the LV orientation." }, { "pmid": "24892057", "title": "Hybrid algorithms for fuzzy reverse supply chain network design.", "abstract": "In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods." }, { "pmid": "24770917", "title": "Image-based quantitative analysis of gold immunochromatographic strip via cellular neural network approach.", "abstract": "Gold immunochromatographic strip assay provides a rapid, simple, single-copy and on-site way to detect the presence or absence of the target analyte. This paper aims to develop a method for accurately segmenting the test line and control line of the gold immunochromatographic strip (GICS) image for quantitatively determining the trace concentrations in the specimen, which can lead to more functional information than the traditional qualitative or semi-quantitative strip assay. The canny operator as well as the mathematical morphology method is used to detect and extract the GICS reading-window. Then, the test line and control line of the GICS reading-window are segmented by the cellular neural network (CNN) algorithm, where the template parameters of the CNN are designed by the switching particle swarm optimization (SPSO) algorithm for improving the performance of the CNN. It is shown that the SPSO-based CNN offers a robust method for accurately segmenting the test and control lines, and therefore serves as a novel image methodology for the interpretation of GICS. Furthermore, quantitative comparison is carried out among four algorithms in terms of the peak signal-to-noise ratio. It is concluded that the proposed CNN algorithm gives higher accuracy and the CNN is capable of parallelism and analog very-large-scale integration implementation within a remarkably efficient time." } ]
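The record above evaluates fused images with four no-reference measures: standard deviation (SD), mean gradient (Ḡ), spatial frequency (SF), and information entropy (E). The sketch below gives one common formulation of each; normalization conventions vary slightly between papers, so treat it as an approximation rather than the article's exact definitions.

```python
import numpy as np

def fusion_metrics(img):
    """img: 2-D float array (grayscale fused image)."""
    img = img.astype(np.float64)

    sd = img.std()  # standard deviation (contrast)

    # mean gradient: average magnitude of horizontal/vertical differences
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    mean_grad = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

    # spatial frequency: row and column frequencies combined
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    sf = np.sqrt(rf ** 2 + cf ** 2)

    # information entropy over a 256-bin intensity histogram
    hist, _ = np.histogram(img, bins=256)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))

    return {"SD": sd, "G": mean_grad, "SF": sf, "E": entropy}
```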