Dataset fields (name: type, value range):
journal-title: string (191 distinct values)
pmid: string, length 8
pmc: string, length 10-11
doi: string, length 12-31
article-title: string, length 11-423
abstract: string, length 18-3.69k
related-work: string, length 12-84k
references: sequence, length 0-206
reference_info: list, length 0-192
Scientific Reports
27578529
PMC5006166
10.1038/srep32404
Large-Scale Discovery of Disease-Disease and Disease-Gene Associations
Data-driven phenotype analyses of Electronic Health Record (EHR) data have recently brought benefits to many areas of clinical practice, uncovering new links in the medical sciences that can potentially affect the well-being of millions of patients. In this paper, EHR data are used to discover novel relationships between diseases by studying their comorbidities (co-occurrences in patients). A novel embedding model is designed to extract knowledge from disease comorbidities by learning from a large-scale EHR database comprising more than 35 million inpatient cases spanning nearly a decade, yielding significant improvements in disease phenotyping over current computational approaches. In addition, the proposed methodology is extended to discover novel disease-gene associations by incorporating valuable domain knowledge from genome-wide association studies. To evaluate our approach, its effectiveness is compared against a held-out set, where it again yields very compelling results. For selected diseases, we further identify candidate gene lists for which disease-gene associations have not been studied previously. Our approach thus provides biomedical researchers with new tools to filter genes of interest, reducing the need for costly lab studies.
Background and related work
In the treatment of ailments, the focus of medical practitioners can be roughly divided between two complementary approaches: 1) treating the symptoms of already sick patients (reactive medicine); and 2) understanding disease etiology in order to prevent manifestation and further spread of the disease (preventative medicine). In the first approach, the disease symptoms are part of a broader phenotype profile of an individual, with phenotype being defined as the presence of a specific observable characteristic in an organism, such as blood type, response to administered medication, or the presence of a disease [13]. The process of identifying useful, meaningful medical characteristics and insights for the purposes of medical treatment is referred to as phenotyping [14]. In the second approach, researchers identify the genetic basis of disease by discovering the relationship between exhibited phenotypes and the patient’s genetic makeup in a process referred to as genotyping [15]. Establishing a relationship between a phenotype and its associated genes is a major component of gene discovery and allows biomedical scientists to gain a deeper understanding of the condition and a potential cure at its very origin [16]. Gene discovery is a central problem in a number of published disease-gene association studies, and its prevalence in the scientific community is increasing steadily as novel discoveries lead to improved medical care. For example, results in the existing literature show that gene discovery allows clinicians to better understand the severity of patients’ symptoms [17], to anticipate the onset and path of disease progression (particularly important for cancer patients in later stages [18]), and to better understand disease processes at the molecular level, enabling the development of better treatments [19]. As suggested in previous studies [20], such knowledge may be hidden in vast EHR databases that are yet to be exploited to their fullest potential. Clearly, both phenotyping and gene discovery are important steps in the fight for global health, and advancing tools for these tasks is a critical part of this battle. The emerging use of gene editing techniques to precisely target disease genes [21] will require such computational tools to be at precision medicine’s disposal. EHRs, containing abundant information on patients’ phenotypes generated from actual clinical observations and physician-patient interactions, present an unprecedented resource and testbed for applying novel phenotyping approaches. Moreover, the data are complemented by large amounts of gene-disease associations derived from readily available genome-wide association studies. However, current approaches for phenotyping and gene discovery using EHR data rely on highly supervised rule-based or heuristic-based methods, which require manual labor and often a consensus of medical experts [22]. This severely limits the scalability and effectiveness of the process [3]. Some researchers have proposed to combat this issue by employing active learning approaches to obtain a limited number of expert labels for use by supervised methods [23, 24]. Nevertheless, the state of the art is far from optimal, as the labeling process can still be tedious, and models require large numbers of labels to achieve satisfactory performance on noisy EHR data [3].
Therefore, we approach solving this problem in an unsupervised manner. Early work on exploiting EHR databases to understand human disease focused on graphical representations of diseases, genes, and proteins. Disease networks were proposed by Goh et al. [25], where certain genes were shown to play a central role in the human disease interactome, defined as all interactions (connections) among diseases, genes, and proteins discovered in humans. Follow-up studies by Hidalgo et al. [26] proposed human phenotypic networks (commonly referred to as comorbidity networks), derived from EHR datasets, to map onto disease networks; these were shown to successfully associate higher connectivity of diseases with higher mortality. Based on these advances, a body of work linked predictions of disease-disease and disease-gene networks [6, 27], even though only a moderate degree of correlation (~40%, also confirmed on the data used in this study) was detected between disease and gene networks, indicating potential causality between them. Such studies provided important evidence that modeling disease and human interactome networks can uncover associated phenotypes. Recently, network studies of the human interactome have focused on uncovering patterns [28] and, as the human interactome is incomplete, on discovering novel relationships [5]. However, it has been suggested that network-based approaches to phenotyping and to discovering meaningful concepts in medicine have yet to be fully exploited and tested [29]. This study offers a novel approach to representing diseases and genes that utilizes the same sources of data as network approaches, but in a different manner, as discussed in greater detail in the section below. In addition, to create more scalable, effective tools, recent approaches distinct from networks have focused on the development of data-driven phenotyping with minimal manual input and rigorous evaluation procedures [3, 30, 31]. The emerging field of computational phenotyping includes the methods of Zhou et al. [32], who formulate EHRs as temporal matrices of medical events for each patient and propose an optimization-based technique for discovering temporal patterns of medical events as phenotypes. Further, Ho et al. [33] formulated patient EHRs as tensors, where each dimension represents a different medical event, and used non-negative tensor factorization to identify phenotypes. Deep learning has also been applied to the task of phenotyping [30], as have graph mining [31] and clustering [34], used to identify patient subgroups based on individual clinical markers. Finally, Žitnik et al. [35] conducted a study of non-negative matrix factorization techniques for fusing various molecular data to uncover disease-disease associations and showed that available domain knowledge can help reconstruct known associations and obtain novel ones. Nonetheless, the need for a comprehensive procedure to obtain manually labeled samples remains one of the main limitations of modern phenotyping tools [14]. Although state-of-the-art machine learning methods have been utilized to automate the process, current approaches still suffer degraded performance when the availability of labeled samples manually annotated by medical experts is limited [36]. In this paper, we compare representatives of the above approaches against our proposed approach in a fair setup and, overall, demonstrate the benefits of our neural embedding approach (described below) on several tasks in a quantifiable manner.
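The embedding model above is described only at a high level here, so as an illustrative aside: one minimal, assumption-laden way to derive disease vectors from comorbidity data is to count pairwise co-occurrences of diagnosis codes across inpatient cases, reweight them with positive pointwise mutual information (PPMI), and factorize the result with a truncated SVD; cosine similarity between the resulting vectors then scores disease-disease association. The toy records, ICD-10 codes, and embedding dimension below are hypothetical, and this sketch is not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's model): disease embeddings
# from comorbidity co-occurrence via PPMI weighting + truncated SVD.
from itertools import combinations

import numpy as np

# Each inpatient case reduced to a set of diagnosis codes (toy data).
patients = [
    {"E11", "I10", "N18"},   # diabetes, hypertension, chronic kidney disease
    {"E11", "I10"},
    {"I10", "I50"},          # hypertension, heart failure
    {"E11", "N18", "I50"},
]

diseases = sorted(set.union(*patients))
idx = {d: i for i, d in enumerate(diseases)}

# Symmetric co-occurrence counts over unordered disease pairs per patient.
cooc = np.zeros((len(diseases), len(diseases)))
for case in patients:
    for a, b in combinations(sorted(case), 2):
        cooc[idx[a], idx[b]] += 1
        cooc[idx[b], idx[a]] += 1

# Positive pointwise mutual information reweighting of the raw counts.
total = cooc.sum()
margins = cooc.sum(axis=1, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((cooc * total) / (margins @ margins.T))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# Low-rank disease embeddings; cosine similarity against a query disease
# scores disease-disease association strength.
u, s, _ = np.linalg.svd(ppmi)
k = 2                                   # embedding dimension (illustrative)
emb = u[:, :k] * np.sqrt(s[:k])
emb /= np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12
print(dict(zip(diseases, np.round(emb @ emb[idx["E11"]], 3))))
```

The same vectors could, in principle, be compared with gene-level features to screen candidate disease-gene pairs, which is the spirit (though not the mechanics) of the extension described in the abstract.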
[ "21587298", "22955496", "24383880", "10874050", "26506899", "11313775", "21941284", "23287718", "21269473", "17502601", "25038555", "22127105", "24097178", "16723398", "25841328", "2579841" ]
[ { "pmid": "21587298", "title": "Using electronic health records to drive discovery in disease genomics.", "abstract": "If genomic studies are to be a clinically relevant and timely reflection of the relationship between genetics and health status--whether for common or rare variants--cost-effective ways must be found to measure both the genetic variation and the phenotypic characteristics of large populations, including the comprehensive and up-to-date record of their medical treatment. The adoption of electronic health records, used by clinicians to document clinical care, is becoming widespread and recent studies demonstrate that they can be effectively employed for genetic studies using the informational and biological 'by-products' of health-care delivery while maintaining patient privacy." }, { "pmid": "22955496", "title": "Next-generation phenotyping of electronic health records.", "abstract": "The national adoption of electronic health records (EHR) promises to make an unprecedented amount of data available for clinical research, but the data are complex, inaccurate, and frequently missing, and the record reflects complex processes aside from the patient's physiological state. We believe that the path forward requires studying the EHR as an object of interest in itself, and that new models, learning from data, and collaboration will lead to efficient use of the valuable information currently locked in health records." }, { "pmid": "24383880", "title": "New mini- zincin structures provide a minimal scaffold for members of this metallopeptidase superfamily.", "abstract": "BACKGROUND\nThe Acel_2062 protein from Acidothermus cellulolyticus is a protein of unknown function. Initial sequence analysis predicted that it was a metallopeptidase from the presence of a motif conserved amongst the Asp-zincins, which are peptidases that contain a single, catalytic zinc ion ligated by the histidines and aspartic acid within the motif (HEXXHXXGXXD). The Acel_2062 protein was chosen by the Joint Center for Structural Genomics for crystal structure determination to explore novel protein sequence space and structure-based function annotation.\n\n\nRESULTS\nThe crystal structure confirmed that the Acel_2062 protein consisted of a single, zincin-like metallopeptidase-like domain. The Met-turn, a structural feature thought to be important for a Met-zincin because it stabilizes the active site, is absent, and its stabilizing role may have been conferred to the C-terminal Tyr113. In our crystallographic model there are two molecules in the asymmetric unit and from size-exclusion chromatography, the protein dimerizes in solution. A water molecule is present in the putative zinc-binding site in one monomer, which is replaced by one of two observed conformations of His95 in the other.\n\n\nCONCLUSIONS\nThe Acel_2062 protein is structurally related to the zincins. It contains the minimum structural features of a member of this protein superfamily, and can be described as a \"mini- zincin\". There is a striking parallel with the structure of a mini-Glu-zincin, which represents the minimum structure of a Glu-zincin (a metallopeptidase in which the third zinc ligand is a glutamic acid). Rather than being an ancestral state, phylogenetic analysis suggests that the mini-zincins are derived from larger proteins." 
}, { "pmid": "10874050", "title": "Impact of genomics on drug discovery and clinical medicine.", "abstract": "Genomics, particularly high-throughput sequencing and characterization of expressed human genes, has created new opportunities for drug discovery. Knowledge of all the human genes and their functions may allow effective preventive measures, and change drug research strategy and drug discovery development processes. Pharmacogenomics is the application of genomic technologies such as gene sequencing, statistical genetics, and gene expression analysis to drugs in clinical development and on the market. It applies the large-scale systematic approaches of genomics to speed the discovery of drug response markers, whether they act at the level of the drug target, drug metabolism, or disease pathways. The potential implication of genomics and pharmacogenomics in clinical research and clinical medicine is that disease could be treated according to genetic and specific individual markers, selecting medications and dosages that are optimized for individual patients. The possibility of defining patient populations genetically may improve outcomes by predicting individual responses to drugs, and could improve safety and efficacy in therapeutic areas such as neuropsychiatry, cardiovascular medicine, endocrinology (diabetes and obesity) and oncology. Ethical questions need to be addressed and guidelines established for the use of genomics in clinical research and clinical medicine. Significant achievements are possible with an interdisciplinary approach that includes genetic, technological and therapeutic measures." }, { "pmid": "26506899", "title": "Standardized phenotyping enhances Mendelian disease gene identification.", "abstract": "Whole-exome sequencing has revolutionized the identification of genes with dominant disease-associated variants for rare clinically and genetically heterogeneous disorders, but the identification of genes with recessive disease-associated variants has been less successful. A new study now provides a framework integrating Mendelian variant filtering with statistical assessments of patients' genotypes and phenotypes, thereby catalyzing the discovery of novel mutations associated with recessive disease." }, { "pmid": "11313775", "title": "The family based association test method: strategies for studying general genotype--phenotype associations.", "abstract": "With possibly incomplete nuclear families, the family based association test (FBAT) method allows one to evaluate any test statistic that can be expressed as the sum of products (covariance) between an arbitrary function of an offspring's genotype with an arbitrary function of the offspring's phenotype. We derive expressions needed to calculate the mean and variance of these test statistics under the null hypothesis of no linkage. To give some guidance on using the FBAT method, we present three simple data analysis strategies for different phenotypes: dichotomous (affection status), quantitative and censored (eg, the age of onset). We illustrate the approach by applying it to candidate gene data of the NIMH Alzheimer Disease Initiative. We show that the RC-TDT is equivalent to a special case of the FBAT method. This result allows us to generalise the RC-TDT to dominant, recessive and multi-allelic marker codings. Simulations compare the resulting FBAT tests to the RC-TDT and the S-TDT. The FBAT software is freely available." 
}, { "pmid": "21941284", "title": "A decade of exploring the cancer epigenome - biological and translational implications.", "abstract": "The past decade has highlighted the central role of epigenetic processes in cancer causation, progression and treatment. Next-generation sequencing is providing a window for visualizing the human epigenome and how it is altered in cancer. This view provides many surprises, including linking epigenetic abnormalities to mutations in genes that control DNA methylation, the packaging and the function of DNA in chromatin, and metabolism. Epigenetic alterations are leading candidates for the development of specific markers for cancer detection, diagnosis and prognosis. The enzymatic processes that control the epigenome present new opportunities for deriving therapeutic strategies designed to reverse transcriptional abnormalities that are inherent to the cancer epigenome." }, { "pmid": "23287718", "title": "Multiplex genome engineering using CRISPR/Cas systems.", "abstract": "Functional elucidation of causal genetic variants and elements requires precise genome editing technologies. The type II prokaryotic CRISPR (clustered regularly interspaced short palindromic repeats)/Cas adaptive immune system has been shown to facilitate RNA-guided site-specific DNA cleavage. We engineered two different type II CRISPR/Cas systems and demonstrate that Cas9 nucleases can be directed by short RNAs to induce precise cleavage at endogenous genomic loci in human and mouse cells. Cas9 can also be converted into a nicking enzyme to facilitate homology-directed repair with minimal mutagenic activity. Lastly, multiple guide sequences can be encoded into a single CRISPR array to enable simultaneous editing of several sites within the mammalian genome, demonstrating easy programmability and wide applicability of the RNA-guided nuclease technology." }, { "pmid": "21269473", "title": "The eMERGE Network: a consortium of biorepositories linked to electronic medical records data for conducting genomic studies.", "abstract": "INTRODUCTION\nThe eMERGE (electronic MEdical Records and GEnomics) Network is an NHGRI-supported consortium of five institutions to explore the utility of DNA repositories coupled to Electronic Medical Record (EMR) systems for advancing discovery in genome science. eMERGE also includes a special emphasis on the ethical, legal and social issues related to these endeavors.\n\n\nORGANIZATION\nThe five sites are supported by an Administrative Coordinating Center. Setting of network goals is initiated by working groups: (1) Genomics, (2) Informatics, and (3) Consent & Community Consultation, which also includes active participation by investigators outside the eMERGE funded sites, and (4) Return of Results Oversight Committee. The Steering Committee, comprised of site PIs and representatives and NHGRI staff, meet three times per year, once per year with the External Scientific Panel.\n\n\nCURRENT PROGRESS\nThe primary site-specific phenotypes for which samples have undergone genome-wide association study (GWAS) genotyping are cataract and HDL, dementia, electrocardiographic QRS duration, peripheral arterial disease, and type 2 diabetes. A GWAS is also being undertaken for resistant hypertension in ≈ 2,000 additional samples identified across the network sites, to be added to data available for samples already genotyped. 
Funded by ARRA supplements, secondary phenotypes have been added at all sites to leverage the genotyping data, and hypothyroidism is being analyzed as a cross-network phenotype. Results are being posted in dbGaP. Other key eMERGE activities include evaluation of the issues associated with cross-site deployment of common algorithms to identify cases and controls in EMRs, data privacy of genomic and clinically-derived data, developing approaches for large-scale meta-analysis of GWAS data across five sites, and a community consultation and consent initiative at each site.\n\n\nFUTURE ACTIVITIES\nPlans are underway to expand the network in diversity of populations and incorporation of GWAS findings into clinical care.\n\n\nSUMMARY\nBy combining advanced clinical informatics, genome science, and community consultation, eMERGE represents a first step in the development of data-driven approaches to incorporate genomic information into routine healthcare delivery." }, { "pmid": "17502601", "title": "The human disease network.", "abstract": "A network of disorders and disease genes linked by known disorder-gene associations offers a platform to explore in a single graph-theoretic framework all known phenotype and disease gene associations, indicating the common genetic origin of many diseases. Genes associated with similar disorders show both higher likelihood of physical interactions between their products and higher expression profiling similarity for their transcripts, supporting the existence of distinct disease-specific functional modules. We find that essential human genes are likely to encode hub proteins and are expressed widely in most tissues. This suggests that disease genes also would play a central role in the human interactome. In contrast, we find that the vast majority of disease genes are nonessential and show no tendency to encode hub proteins, and their expression pattern indicates that they are localized in the functional periphery of the network. A selection-based model explains the observed difference between essential and disease genes and also suggests that diseases caused by somatic mutations should not be peripheral, a prediction we confirm for cancer genes." }, { "pmid": "25038555", "title": "Limestone: high-throughput candidate phenotype generation via tensor factorization.", "abstract": "The rapidly increasing availability of electronic health records (EHRs) from multiple heterogeneous sources has spearheaded the adoption of data-driven approaches for improved clinical research, decision making, prognosis, and patient management. Unfortunately, EHR data do not always directly and reliably map to medical concepts that clinical researchers need or use. Some recent studies have focused on EHR-derived phenotyping, which aims at mapping the EHR data to specific medical concepts; however, most of these approaches require labor intensive supervision from experienced clinical professionals. Furthermore, existing approaches are often disease-centric and specialized to the idiosyncrasies of the information technology and/or business practices of a single healthcare organization. In this paper, we propose Limestone, a nonnegative tensor factorization method to derive phenotype candidates with virtually no human supervision. Limestone represents the data source interactions naturally using tensors (a generalization of matrices). In particular, we investigate the interaction of diagnoses and medications among patients. 
The resulting tensor factors are reported as phenotype candidates that automatically reveal patient clusters on specific diagnoses and medications. Using the proposed method, multiple phenotypes can be identified simultaneously from data. We demonstrate the capability of Limestone on a cohort of 31,815 patient records from the Geisinger Health System. The dataset spans 7 years of longitudinal patient records and was initially constructed for a heart failure onset prediction study. Our experiments demonstrate the robustness, stability, and the conciseness of Limestone-derived phenotypes. Our results show that using only 40 phenotypes, we can outperform the original 640 features (169 diagnosis categories and 471 medication types) to achieve an area under the receiver operator characteristic curve (AUC) of 0.720 (95% CI 0.715 to 0.725). Moreover, in consultation with a medical expert, we confirmed 82% of the top 50 candidates automatically extracted by Limestone are clinically meaningful." }, { "pmid": "22127105", "title": "Applying active learning to assertion classification of concepts in clinical text.", "abstract": "Supervised machine learning methods for clinical natural language processing (NLP) research require a large number of annotated samples, which are very expensive to build because of the involvement of physicians. Active learning, an approach that actively samples from a large pool, provides an alternative solution. Its major goal in classification is to reduce the annotation effort while maintaining the quality of the predictive model. However, few studies have investigated its uses in clinical NLP. This paper reports an application of active learning to a clinical text classification task: to determine the assertion status of clinical concepts. The annotated corpus for the assertion classification task in the 2010 i2b2/VA Clinical NLP Challenge was used in this study. We implemented several existing and newly developed active learning algorithms and assessed their uses. The outcome is reported in the global ALC score, based on the Area under the average Learning Curve of the AUC (Area Under the Curve) score. Results showed that when the same number of annotated samples was used, active learning strategies could generate better classification models (best ALC-0.7715) than the passive learning method (random sampling) (ALC-0.7411). Moreover, to achieve the same classification performance, active learning strategies required fewer samples than the random sampling method. For example, to achieve an AUC of 0.79, the random sampling method used 32 samples, while our best active learning algorithm required only 12 samples, a reduction of 62.5% in manual annotation effort." }, { "pmid": "16723398", "title": "Modularity and community structure in networks.", "abstract": "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as \"modularity\" over the possible divisions of a network.
Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets." }, { "pmid": "25841328", "title": "Building bridges across electronic health record systems through inferred phenotypic topics.", "abstract": "OBJECTIVE\nData in electronic health records (EHRs) is being increasingly leveraged for secondary uses, ranging from biomedical association studies to comparative effectiveness. To perform studies at scale and transfer knowledge from one institution to another in a meaningful way, we need to harmonize the phenotypes in such systems. Traditionally, this has been accomplished through expert specification of phenotypes via standardized terminologies, such as billing codes. However, this approach may be biased by the experience and expectations of the experts, as well as the vocabulary used to describe such patients. The goal of this work is to develop a data-driven strategy to (1) infer phenotypic topics within patient populations and (2) assess the degree to which such topics facilitate a mapping across populations in disparate healthcare systems.\n\n\nMETHODS\nWe adapt a generative topic modeling strategy, based on latent Dirichlet allocation, to infer phenotypic topics. We utilize a variance analysis to assess the projection of a patient population from one healthcare system onto the topics learned from another system. The consistency of learned phenotypic topics was evaluated using (1) the similarity of topics, (2) the stability of a patient population across topics, and (3) the transferability of a topic across sites. We evaluated our approaches using four months of inpatient data from two geographically distinct healthcare systems: (1) Northwestern Memorial Hospital (NMH) and (2) Vanderbilt University Medical Center (VUMC).\n\n\nRESULTS\nThe method learned 25 phenotypic topics from each healthcare system. The average cosine similarity between matched topics across the two sites was 0.39, a remarkably high value given the very high dimensionality of the feature space. The average stability of VUMC and NMH patients across the topics of two sites was 0.988 and 0.812, respectively, as measured by the Pearson correlation coefficient. Also the VUMC and NMH topics have smaller variance of characterizing patient population of two sites than standard clinical terminologies (e.g., ICD9), suggesting they may be more reliably transferred across hospital systems.\n\n\nCONCLUSIONS\nPhenotypic topics learned from EHR data can be more stable and transferable than billing codes for characterizing the general status of a patient population. This suggests that EHR-based research may be able to leverage such phenotypic topics as variables when pooling patient populations in predictive models." }, { "pmid": "2579841", "title": "Platelet hyperfunction in patients with chronic airways obstruction.", "abstract": "Platelet aggregation (PA) and plasma beta-thromboglobulin (beta TG) values were evaluated in 40 patients affected by chronic airway obstruction (CAO). PA and beta TG were significantly higher than those observed in normal subjects. Beta TG plasma levels were inversely correlated with PaO2, directly with PaCO2 and [H+]. 
Two h after a venesection of 300-400 ml, no change of beta TG and PA was seen in 10 healthy subjects, while a significant increase of beta TG and PA values was observed in 29 patients. The investigation suggests that in patients with CAO in vivo platelet activation is present." } ]
Frontiers in Psychology
27721800
PMC5033969
10.3389/fpsyg.2016.01429
Referential Choice: Predictability and Its Limits
We report a study of referential choice in discourse production, understood as the choice between various types of referential devices, such as pronouns and full noun phrases. Our goal is to predict referential choice, and to explore to what extent such prediction is possible. Our approach to referential choice includes a cognitively informed theoretical component, corpus analysis, machine learning methods and experimentation with human participants. Machine learning algorithms make use of 25 factors, including the referent’s properties (such as animacy and protagonism), the distance between a referential expression and its antecedent, the antecedent’s syntactic role, and so on. Having found the predictions of our algorithm to coincide with the original almost 90% of the time, we hypothesized that fully accurate prediction is not possible because, in many situations, more than one referential option is available. This hypothesis was supported by an experimental study, in which participants answered questions about either the original text in the corpus or a text modified in accordance with the algorithm’s prediction. Proportions of correct answers to these questions, as well as participants’ ratings of the questions’ difficulty, suggested that divergences between the algorithm’s prediction and the original referential device in the corpus occur overwhelmingly in situations where the referential choice is not categorical.
Related Work
As was discussed in Section “Discussion: Referential Choice Is Not Always Categorical”, referential variation and non-categoricity are clearly gaining attention in the modern linguistic, computational, and psycholinguistic literature. Referential variation may be due to the interlocutors’ perspective taking and their efforts to coordinate cognitive processes; see, e.g., Koolen et al. (2011), Heller et al. (2012), and Baumann et al. (2014). A number of corpus-based and psycholinguistic studies have explored various factors involved in the phenomenon of overspecification, which occurs regularly in natural language (e.g., Kaiser et al., 2011; Hendriks, 2014; Vogels et al., 2014; Fukumura and van Gompel, 2015). Kibrik (2011, pp. 56–60) proposed to differentiate between three kinds of speakers’ referential strategies, differing in the extent to which the speaker takes the addressee’s actual cognitive state into account: egocentric, optimal, and overprotective. A series of recent studies addresses other aspects of referential variation, e.g., as a function of individual differences (Nieuwland and van Berkum, 2006), age (Hughes and Allen, 2013; Hendriks et al., 2014) or gender (Arnold, 2015), under high cognitive load (van Rij et al., 2011; Vogels et al., 2014), and even under left prefrontal cortex stimulation (Arnold et al., 2014). These studies, on both the production and the comprehension of referential expressions, open up a whole new field in the exploration of reference. We discuss a more general kind of referential variation, probably associated with an intermediate level of referent activation. This kind of variation may occur in any discourse type. In order to test the non-categorical character of referential choice, we previously conducted two experiments based on the materials of our text corpus. Both of these experiments were somewhat similar to the experiment from Kibrik (1999), described in Section “Discussion: Referential Choice Is Not Always Categorical” above. In a comprehension experiment, Khudyakova (2012) tested the human ability to understand texts in which the predicted referential device diverged from the original text. Nine texts from the corpus were randomly selected, such that they contained a predicted pronoun instead of an original full NP; text length did not exceed 250 words. In addition to the nine original texts, nine modified texts were created in which the original referential device (proper name) was replaced by the one predicted by the algorithm (pronoun). Two experimental lists were formed, each containing nine texts (four texts in the original version and five in the modified one, or vice versa), so that original and modified texts alternated between the two lists. The experiment was run online on the Virtual Experiments platform with 60 participants with expert-level command of English. Each participant was asked to read all nine texts, one at a time, and to answer a set of three questions after each text. Each text appeared in full on the screen, and disappeared when the participant was presented with three multiple-choice questions about referents in the text, beginning with a WH-word. Two of those were control questions, related to referents that did not create divergences. The third question was experimental: it concerned the referent in question, that is, the one for which the algorithm’s prediction differed from the original text. Questions were presented in a random order.
Each participant thus answered 18 control questions and nine experimental questions. In the putative instances of non-categorical referential choice, allowing both a full NP and a pronoun, experimental questions about proper names (original) and about pronouns (predicted) were expected to be answered with a comparable level of accuracy. The accuracy of the answers to the experimental questions about proper names, as well as to the control questions, was found to be 84%. In seven out of nine texts, experimental questions about pronouns were answered with a comparable accuracy of 80%. We propose that these seven instances are precisely cases of non-categorical referential choice, probably associated with an intermediate level of referent activation. The two remaining instances may result from the algorithm’s errors. The processes of discourse production and comprehension are related but distinct, so we also conducted an editing experiment (Khudyakova et al., 2014), imitating referential choice as performed by a language speaker/writer. In the editing experiment, 47 participants with expert-level command of English were asked to read several texts from the corpus and choose all possible referential options for a referent at a certain point in discourse. Twenty-seven texts from the corpus were selected for that study. The texts contained 31 critical points at which the algorithm’s choice diverged from that in the original text. At each critical point, as well as at two other points per text (control points), a choice was offered between a description, a proper name (where appropriate), and a pronoun. Neither the critical nor the control points included syntactically determined pronouns. The participants edited from 5 to 9 texts each, depending on the texts’ length. The task was to choose all appropriate options (possibly more than one). We found that in all texts at least two referential options were proposed for each point in question, both critical and control. The experiments on comprehension and editing demonstrated the variability of referential choice characteristic of the corpus texts. However, a methodological problem with these experiments was that each predicted referential expression was treated independently, whereas in real language use each referential expression depends on the previous context and creates a context for the subsequent referential expressions in the chain. In order to create texts that are more amenable to human evaluation, in the present study we introduce a flexible prediction script.
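As an illustrative aside on the prediction side of the study (the algorithm that chooses between a pronoun and a full NP from factors such as animacy, protagonism, distance to the antecedent, and the antecedent's syntactic role): a minimal sketch of such a factor-based classifier is given below. The feature names, toy data, and choice of logistic regression are assumptions for illustration, not the algorithm actually used; per-class probabilities are shown only because near-equal probabilities are one natural way to think about the non-categorical choice points discussed above.

```python
# Minimal sketch (assumptions, not the study's algorithm): predicting the
# referential device (pronoun vs. full NP) from a few illustrative factors.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each mention described by referent/antecedent properties (toy data).
mentions = [
    {"animate": 1, "protagonist": 1, "distance_clauses": 1, "antecedent_role": "subject"},
    {"animate": 1, "protagonist": 0, "distance_clauses": 4, "antecedent_role": "object"},
    {"animate": 0, "protagonist": 0, "distance_clauses": 6, "antecedent_role": "oblique"},
    {"animate": 1, "protagonist": 1, "distance_clauses": 2, "antecedent_role": "subject"},
]
labels = ["pronoun", "full_np", "full_np", "pronoun"]   # original device in the corpus

vec = DictVectorizer(sparse=False)       # one-hot categoricals, pass-through numerics
X = vec.fit_transform(mentions)
clf = LogisticRegression().fit(X, labels)

new_mention = {"animate": 1, "protagonist": 1, "distance_clauses": 3,
               "antecedent_role": "subject"}
probs = clf.predict_proba(vec.transform([new_mention]))[0]
# Near-equal probabilities at a point can be read as a non-categorical
# choice, where both a pronoun and a full NP are acceptable.
print(dict(zip(clf.classes_, probs.round(2))))
```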
[ "18449327", "11239812", "16324792", "22389109", "23356244", "25068852", "22389129", "22389094", "22496107", "16956594", "3450848", "25911154", "22389170", "25471259" ]
[ { "pmid": "18449327", "title": "The effect of additional characters on choice of referring expression: Everyone counts.", "abstract": "Two story-telling experiments examine the process of choosing between pronouns and proper names in speaking. Such choices are traditionally attributed to speakers striving to make referring expressions maximally interpretable to addressees. The experiments revealed a novel effect: even when a pronoun would not be ambiguous, the presence of another character in the discourse decreased pronoun use and increased latencies to refer to the most prominent character in the discourse. In other words, speakers were more likely to call Minnie Minnie than shewhen Donald was also present. Even when the referent character appeared alone in the stimulus picture, the presence of another character in the preceding discourse reduced pronouns. Furthermore, pronoun use varied with features associated with the speaker's degree of focus on the preceding discourse (e.g., narrative style and disfluency). We attribute this effect to competition for attentional resources in the speaker's representation of the discourse." }, { "pmid": "11239812", "title": "Overlapping mechanisms of attention and spatial working memory.", "abstract": "Spatial selective attention and spatial working memory have largely been studied in isolation. Studies of spatial attention have provided clear evidence that observers can bias visual processing towards specific locations, enabling faster and better processing of information at those locations than at unattended locations. We present evidence supporting the view that this process of visual selection is a key component of rehearsal in spatial working memory. Thus, although working memory has sometimes been depicted as a storage system that emerges 'downstream' of early sensory processing, current evidence suggests that spatial rehearsal recruits top-down processes that modulate the earliest stages of visual analysis." }, { "pmid": "16324792", "title": "Interactions between attention and working memory.", "abstract": "Studies of attention and working memory address the fundamental limits in our ability to encode and maintain behaviorally relevant information, processes that are critical for goal-driven processing. Here we review our current understanding of the interactions between these processes, with a focus on how each construct encompasses a variety of dissociable phenomena. Attention facilitates target processing during both perceptual and postperceptual stages of processing, and functionally dissociated processes have been implicated in the maintenance of different kinds of information in working memory. Thus, although it is clear that these processes are closely intertwined, the nature of these interactions depends upon the specific variety of attention or working memory that is considered." }, { "pmid": "22389109", "title": "The interplay between gesture and speech in the production of referring expressions: investigating the tradeoff hypothesis.", "abstract": "The tradeoff hypothesis in the speech-gesture relationship claims that (a) when gesturing gets harder, speakers will rely relatively more on speech, and (b) when speaking gets harder, speakers will rely relatively more on gestures. We tested the second part of this hypothesis in an experimental collaborative referring paradigm where pairs of participants (directors and matchers) identified targets to each other from an array visible to both of them. 
We manipulated two factors known to affect the difficulty of speaking to assess their effects on the gesture rate per 100 words. The first factor, codability, is the ease with which targets can be described. The second factor, repetition, is whether the targets are old or new (having been already described once or twice). We also manipulated a third factor, mutual visibility, because it is known to affect the rate and type of gesture produced. None of the manipulations systematically affected the gesture rate. Our data are thus mostly inconsistent with the tradeoff hypothesis. However, the gesture rate was sensitive to concurrent features of referring expressions, suggesting that gesture parallels aspects of speech. We argue that the redundancy between speech and gesture is communicatively motivated." }, { "pmid": "23356244", "title": "Gender affects semantic competition: the effect of gender in a non-gender-marking language.", "abstract": "English speakers tend to produce fewer pronouns when a referential competitor has the same gender as the referent than otherwise. Traditionally, this gender congruence effect has been explained in terms of ambiguity avoidance (e.g., Arnold, Eisenband, Brown-Schmidt, & Trueswell, 2000; Fukumura, Van Gompel, & Pickering, 2010). However, an alternative hypothesis is that the competitor's gender congruence affects semantic competition, making the referent less accessible relative to when the competitor has a different gender (Arnold & Griffin, 2007). Experiment 1 found that even in Finnish, which is a nongendered language, the competitor's gender congruence results in fewer pronouns, supporting the semantic competition account. In Experiment 2, Finnish native speakers took part in an English version of the same experiment. The effect of gender congruence was larger in Experiment 2 than in Experiment 1, suggesting that the presence of a same-gender competitor resulted in a larger reduction in pronoun use in English than in Finnish. In contrast, other nonlinguistic similarity had similar effects in both experiments. This indicates that the effect of gender congruence in English is not entirely driven by semantic competition: Speakers also avoid gender-ambiguous pronouns." }, { "pmid": "25068852", "title": "Effects of order of mention and grammatical role on anaphor resolution.", "abstract": "A controversial issue in anaphoric processing has been whether processing preferences of anaphoric expressions are affected by the antecedent's grammatical role or surface position. Using eye tracking, Experiment 1 examined the comprehension of pronouns during reading, which revealed shorter reading times in the pronoun region and later regions when the antecedent was the subject than when it was the prepositional object. There was no effect of antecedent position. Experiment 2 showed that the choice between pronouns and repeated names during language production is also primarily affected by the antecedent's grammatical role. Experiment 3 examined the comprehension of repeated names, showing a clear effect of antecedent position. Reading times in the name region and in later regions were longer when the antecedent was 1st mentioned than 2nd mentioned, whereas the antecedent's grammatical role only affected regression measures in the name region, showing more processing difficulty with a subject than prepositional-object antecedent. 
Thus, the processing of pronouns is primarily driven by antecedent grammatical role rather than position, whereas the processing of repeated names is most strongly affected by position, suggesting that different representations and processing constraints underlie the processing of pronouns and names." }, { "pmid": "22389129", "title": "Underspecification of cognitive status in reference production: some empirical predictions.", "abstract": "Within the Givenness Hierarchy framework of Gundel, Hedberg, and Zacharski (1993), lexical items included in referring forms are assumed to conventionally encode two kinds of information: conceptual information about the speaker's intended referent and procedural information about the assumed cognitive status of that referent in the mind of the addressee, the latter encoded by various determiners and pronouns. This article focuses on effects of underspecification of cognitive status, establishing that, although salience and accessibility play an important role in reference processing, the Givenness Hierarchy itself is not a hierarchy of degrees of salience/accessibility, contrary to what has often been assumed. We thus show that the framework is able to account for a number of experimental results in the literature without making additional assumptions about form-specific constraints associated with different referring forms." }, { "pmid": "22389094", "title": "To name or to describe: shared knowledge affects referential form.", "abstract": "The notion of common ground is important for the production of referring expressions: In order for a referring expression to be felicitous, it has to be based on shared information. But determining what information is shared and what information is privileged may require gathering information from multiple sources, and constantly coordinating and updating them, which might be computationally too intensive to affect the earliest moments of production. Previous work has found that speakers produce overinformative referring expressions, which include privileged names, violating Grice's Maxims, and concluded that this is because they do not mark the distinction between shared and privileged information. We demonstrate that speakers are in fact quite effective in marking this distinction in the form of their utterances. Nonetheless, under certain circumstances, speakers choose to overspecify privileged names." }, { "pmid": "22496107", "title": "Managing ambiguity in reference generation: the role of surface structure.", "abstract": "This article explores the role of surface ambiguities in referring expressions, and how the risk of such ambiguities should be taken into account by an algorithm that generates referring expressions, if these expressions are to be optimally effective for a hearer. We focus on the ambiguities that arise when adjectives occur in coordinated structures. The central idea is to use statistical information about lexical co-occurrence to estimate which interpretation of a phrase is most likely for human readers, and to avoid generating phrases where misunderstandings are likely. Various aspects of the problem were explored in three experiments in which responses by human participants provided evidence about which reading was most likely for certain phrases, which phrases were deemed most suitable for particular referents, and the speed at which various phrases were read. 
We found a preference for ''clear'' expressions to ''unclear'' ones, but if several of the expressions are ''clear,'' then brief expressions are preferred over non-brief ones even though the brief ones are syntactically ambiguous and the non-brief ones are not; the notion of clarity was made precise using Kilgarriff's Word Sketches. We outline an implemented algorithm that generates noun phrases conforming to our hypotheses." }, { "pmid": "16956594", "title": "Individual differences and contextual bias in pronoun resolution: evidence from ERPs.", "abstract": "Although we usually have no trouble finding the right antecedent for a pronoun, the co-reference relations between pronouns and antecedents in everyday language are often 'formally' ambiguous. But a pronoun is only really ambiguous if a reader or listener indeed perceives it to be ambiguous. Whether this is the case may depend on at least two factors: the language processing skills of an individual reader, and the contextual bias towards one particular referential interpretation. In the current study, we used event related brain potentials (ERPs) to explore how both these factors affect the resolution of referentially ambiguous pronouns. We compared ERPs elicited by formally ambiguous and non-ambiguous pronouns that were embedded in simple sentences (e.g., \"Jennifer Lopez told Madonna that she had too much money.\"). Individual differences in language processing skills were assessed with the Reading Span task, while the contextual bias of each sentence (up to the critical pronoun) had been assessed in a referential cloze pretest. In line with earlier research, ambiguous pronouns elicited a sustained, frontal negative shift relative to non-ambiguous pronouns at the group-level. The size of this effect was correlated with Reading Span score, as well as with contextual bias. These results suggest that whether a reader perceives a formally ambiguous pronoun to be ambiguous is subtly co-determined by both individual language processing skills and contextual bias." }, { "pmid": "3450848", "title": "A comparison of the two one-sided tests procedure and the power approach for assessing the equivalence of average bioavailability.", "abstract": "The statistical test of hypothesis of no difference between the average bioavailabilities of two drug formulations, usually supplemented by an assessment of what the power of the statistical test would have been if the true averages had been inequivalent, continues to be used in the statistical analysis of bioavailability/bioequivalence studies. In the present article, this Power Approach (which in practice usually consists of testing the hypothesis of no difference at level 0.05 and requiring an estimated power of 0.80) is compared to another statistical approach, the Two One-Sided Tests Procedure, which leads to the same conclusion as the approach proposed by Westlake based on the usual (shortest) 1-2 alpha confidence interval for the true average difference. It is found that for the specific choice of alpha = 0.05 as the nominal level of the one-sided tests, the two one-sided tests procedure has uniformly superior properties to the power approach in most cases. The only cases where the power approach has superior properties when the true averages are equivalent correspond to cases where the chance of concluding equivalence with the power approach when the true averages are not equivalent exceeds 0.05. 
With appropriate choice of the nominal level of significance of the one-sided tests, the two one-sided tests procedure always has uniformly superior properties to the power approach. The two one-sided tests procedure is compared to the procedure proposed by Hauck and Anderson." }, { "pmid": "25911154", "title": "Working memory capacity and the scope and control of attention.", "abstract": "Complex span and visual arrays are two common measures of working memory capacity that are respectively treated as measures of attention control and storage capacity. A recent analysis of these tasks concluded that (1) complex span performance has a relatively stronger relationship to fluid intelligence and (2) this is due to the requirement that people engage control processes while performing this task. The present study examines the validity of these conclusions by examining two large data sets that include a more diverse set of visual arrays tasks and several measures of attention control. We conclude that complex span and visual arrays account for similar amounts of variance in fluid intelligence. The disparity relative to the earlier analysis is attributed to the present study involving a more complete measure of the latent ability underlying the performance of visual arrays. Moreover, we find that both types of working memory task have strong relationships to attention control. This indicates that the ability to engage attention in a controlled manner is a critical aspect of working memory capacity, regardless of the type of task that is used to measure this construct." }, { "pmid": "22389170", "title": "Toward a computational psycholinguistics of reference production.", "abstract": "This article introduces the topic ''Production of Referring Expressions: Bridging the Gap between Computational and Empirical Approaches to Reference'' of the journal Topics in Cognitive Science. We argue that computational and psycholinguistic approaches to reference production can benefit from closer interaction, and that this is likely to result in the construction of algorithms that differ markedly from the ones currently known in the computational literature. We focus particularly on determinism, the feature of existing algorithms that is perhaps most clearly at odds with psycholinguistic results, discussing how future algorithms might include non-determinism, and how new psycholinguistic experiments could inform the development of such algorithms." }, { "pmid": "25471259", "title": "How Cognitive Load Influences Speakers' Choice of Referring Expressions.", "abstract": "We report on two experiments investigating the effect of an increased cognitive load for speakers on the choice of referring expressions. Speakers produced story continuations to addressees, in which they referred to characters that were either salient or non-salient in the discourse. In Experiment 1, referents that were salient for the speaker were non-salient for the addressee, and vice versa. In Experiment 2, all discourse information was shared between speaker and addressee. Cognitive load was manipulated by the presence or absence of a secondary task for the speaker. The results show that speakers under load are more likely to produce pronouns, at least when referring to less salient referents. We take this finding as evidence that speakers under load have more difficulties taking discourse salience into account, resulting in the use of expressions that are more economical for themselves." } ]
Journal of Cheminformatics
28316646
PMC5034616
10.1186/s13321-016-0164-0
An ensemble model of QSAR tools for regulatory risk assessment
Quantitative structure activity relationships (QSARs) are theoretical models that relate a quantitative measure of chemical structure to a physical property or a biological effect. QSAR predictions can be used for chemical risk assessment for protection of human and environmental health, which makes them interesting to regulators, especially in the absence of experimental data. For compatibility with regulatory use, QSAR models should be transparent, reproducible and optimized to minimize the number of false negatives. In silico QSAR tools are gaining wide acceptance as a faster alternative to otherwise time-consuming clinical and animal testing methods. However, different QSAR tools often make conflicting predictions for a given chemical and may also vary in their predictive performance across different chemical datasets. In a regulatory context, conflicting predictions raise interpretation, validation and adequacy concerns. To address these concerns, ensemble learning techniques in the machine learning paradigm can be used to integrate predictions from multiple tools. By leveraging various underlying QSAR algorithms and training datasets, the resulting consensus prediction should yield better overall predictive ability. We present a novel ensemble QSAR model using Bayesian classification. The model allows a cut-off parameter to be varied so that the desired trade-off between model sensitivity and specificity can be selected. The predictive performance of the ensemble model is compared with four in silico tools (Toxtree, Lazar, OECD Toolbox, and Danish QSAR) to predict carcinogenicity for a dataset of air toxins (332 chemicals) and a subset of the Gold carcinogenic potency database (480 chemicals). Leave-one-out cross validation results show that the ensemble model achieves the best trade-off between sensitivity and specificity (accuracy: 83.8 % and 80.4 %, and balanced accuracy: 80.6 % and 80.8 %) and the highest inter-rater agreement [kappa (κ): 0.63 and 0.62] for both datasets. The ROC curves demonstrate the utility of the cut-off feature in the predictive ability of the ensemble model. This feature provides an additional control to the regulators in grading a chemical based on the severity of the toxic endpoint under study. Electronic supplementary material: The online version of this article (doi:10.1186/s13321-016-0164-0) contains supplementary material, which is available to authorized users.
Related work: Several studies have investigated methods for combining predictions from multiple QSAR tools to gain better predictive performance for various toxic endpoints: (1) Several QSAR models were developed and compared using different modeling methods (multiple linear regression, radial basis function neural network and support vector machines) to develop hybrid models for bioconcentration factor (BCF) prediction [17]; (2) QSAR models implementing cut-off rules were used to determine a reliable and conservative consensus prediction from two models implemented in VEGA [18] for BCF prediction [19]; (3) The predictive performance of four QSAR tools (Derek [20, 21], Leadscope [22], MultiCASE [23] and Toxtree [24]) was evaluated and compared to the standard Ames assay [25] for mutagenicity prediction, and pairwise hybrid models were then developed using AND combinations (accepting positive results when both tools predict a positive) and OR combinations (accepting positive results when either one of the tools predicts a positive) [25–27]; (4) A similar AND/OR approach was implemented for the validation and construction of a hybrid QSAR model using the MultiCASE and MDL-QSAR [28] tools for carcinogenicity prediction in rodents [29]. This work was later extended with additional tools (BioEpisteme [30], Leadscope PDM, and Derek) to construct hybrid models using majority consensus predictions in addition to AND/OR combinations [31]. The results of these studies demonstrate that: (1) none of the QSAR tools performs significantly better than the others, and their predictive performance varies with the toxic endpoint and the chemical dataset under investigation; (2) hybrid models have better overall predictive performance than individual QSAR tools; and (3) consensus-positive predictions from more than one QSAR tool improve the identification of true positives. The underlying idea is that each QSAR model captures a different aspect of the complexity of the modeled biological system, and combining them can improve classification accuracy. However, consensus-positive methods tend to be conservative, risking the rejection of a potentially non-toxic chemical on the basis of a false-positive prediction. Therefore, we propose an ensemble learning approach for combining predictions from multiple QSAR tools that addresses the drawbacks of consensus-positive predictions [32, 33]. Hybrid QSAR models using ensemble approaches have been developed for various biological endpoints such as cancer classification and prediction of ADMET properties [34–36], but not for toxic endpoints. In this study, a Bayesian ensemble approach is investigated for carcinogenicity prediction, which is discussed in more detail in the next section.
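To make the combination strategies reviewed above concrete, the sketch below implements the AND and OR rules together with a simple naive-Bayes-style weighting of binary tool outputs under a tunable cut-off. It is only an illustration of the general idea, not the paper's actual Bayesian model: the tool name "ToolX" and all sensitivity/specificity figures are invented for the example.

```python
import numpy as np

# Hypothetical (sensitivity, specificity) of each tool on a validation set.
# These numbers are illustrative only, not taken from the paper.
tool_stats = {"Toxtree": (0.70, 0.75), "Lazar": (0.65, 0.80), "ToolX": (0.72, 0.68)}

def and_rule(calls):
    """Positive only if every tool calls the chemical positive."""
    return int(all(calls.values()))

def or_rule(calls):
    """Positive if at least one tool calls the chemical positive."""
    return int(any(calls.values()))

def bayes_log_odds(calls, prior_pos=0.5):
    """Naive-Bayes-style log-odds of carcinogenicity, treating the binary
    tool calls as conditionally independent given the true class."""
    log_odds = np.log(prior_pos / (1.0 - prior_pos))
    for tool, call in calls.items():
        se, sp = tool_stats[tool]
        if call:   # P(call=1 | toxic) = se,  P(call=1 | non-toxic) = 1 - sp
            log_odds += np.log(se / (1.0 - sp))
        else:      # P(call=0 | toxic) = 1 - se,  P(call=0 | non-toxic) = sp
            log_odds += np.log((1.0 - se) / sp)
    return log_odds

calls = {"Toxtree": 1, "Lazar": 0, "ToolX": 1}
cutoff = 0.0   # lowering the cut-off favours sensitivity over specificity
print(and_rule(calls), or_rule(calls), int(bayes_log_odds(calls) > cutoff))
```

Unlike the hard AND/OR rules, a weighted score degrades gracefully when tools disagree, which is the practical motivation for ensemble approaches of this kind.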
[ "17643090", "22771339", "15226221", "18405842", "13677480", "12896862", "15170526", "21504870", "8564854", "12896859", "22316153", "18954891", "23624006", "1679649", "11128088", "768755", "3418743", "21534561", "17703860", "20020914", "15921468", "21509786", "23343412", "15883903", "17283280" ]
[ { "pmid": "17643090", "title": "The application of discovery toxicology and pathology towards the design of safer pharmaceutical lead candidates.", "abstract": "Toxicity is a leading cause of attrition at all stages of the drug development process. The majority of safety-related attrition occurs preclinically, suggesting that approaches to identify 'predictable' preclinical safety liabilities earlier in the drug development process could lead to the design and/or selection of better drug candidates that have increased probabilities of becoming marketed drugs. In this Review, we discuss how the early application of preclinical safety assessment--both new molecular technologies as well as more established approaches such as standard repeat-dose rodent toxicology studies--can identify predictable safety issues earlier in the testing paradigm. The earlier identification of dose-limiting toxicities will provide chemists and toxicologists the opportunity to characterize the dose-limiting toxicities, determine structure-toxicity relationships and minimize or circumvent adverse safety liabilities." }, { "pmid": "22771339", "title": "Toxicokinetics as a key to the integrated toxicity risk assessment based primarily on non-animal approaches.", "abstract": "Toxicokinetics (TK) is the endpoint that informs about the penetration into and fate within the body of a toxic substance, including the possible emergence of metabolites. Traditionally, the data needed to understand those phenomena have been obtained in vivo. Currently, with a drive towards non-animal testing approaches, TK has been identified as a key element to integrate the results from in silico, in vitro and already available in vivo studies. TK is needed to estimate the range of target organ doses that can be expected from realistic human external exposure scenarios. This information is crucial for determining the dose/concentration range that should be used for in vitro testing. Vice versa, TK is necessary to convert the in vitro results, generated at tissue/cell or sub-cellular level, into dose response or potency information relating to the entire target organism, i.e. the human body (in vitro-in vivo extrapolation, IVIVE). Physiologically based toxicokinetic modelling (PBTK) is currently regarded as the most adequate approach to simulate human TK and extrapolate between in vitro and in vivo contexts. The fact that PBTK models are mechanism-based which allows them to be 'generic' to a certain extent (various extrapolations possible) has been critical for their success so far. The need for high-quality in vitro and in silico data on absorption, distribution, metabolism as well as excretion (ADME) as input for PBTK models to predict human dose-response curves is currently a bottleneck for integrative risk assessment." }, { "pmid": "18405842", "title": "Computational toxicology in drug development.", "abstract": "Computational tools for predicting toxicity have been envisaged for their potential to considerably impact the attrition rate of compounds in drug discovery and development. In silico techniques like knowledge-based expert systems (quantitative) structure activity relationship tools and modeling approaches may therefore help to significantly reduce drug development costs by succeeding in predicting adverse drug reactions in preclinical studies. It has been shown that commercial as well as proprietary systems can be successfully applied in the pharmaceutical industry. 
As the prediction has been exhaustively optimized for early safety-relevant endpoints like genotoxicity, future activities will now be directed to prevent the occurrence of undesired toxicity in patients by making these tools more relevant to human disease." }, { "pmid": "13677480", "title": "In silico prediction of drug toxicity.", "abstract": "It is essential, in order to minimise expensive drug failures due to toxicity being found in late development or even in clinical trials, to determine potential toxicity problems as early as possible. In view of the large libraries of compounds now being handled by combinatorial chemistry and high-throughput screening, identification of putative toxicity is advisable even before synthesis. Thus the use of predictive toxicology is called for. A number of in silico approaches to toxicity prediction are discussed. Quantitative structure-activity relationships (QSARs), relating mostly to specific chemical classes, have long been used for this purpose, and exist for a wide range of toxicity endpoints. However, QSARs also exist for the prediction of toxicity of very diverse libraries, although often such QSARs are of the classification type; that is, they predict simply whether or not a compound is toxic, and do not give an indication of the level of toxicity. Examples are given of all of these. A number of expert systems are available for toxicity prediction, most of them covering a range of toxicity endpoints. Those discussed include TOPKAT, CASE, DEREK, HazardExpert, OncoLogic and COMPACT. Comparative tests of the ability of these systems to predict carcinogenicity show that improvement is still needed. The consensus approach is recommended, whereby the results from several prediction systems are pooled." }, { "pmid": "12896862", "title": "Use of QSARs in international decision-making frameworks to predict health effects of chemical substances.", "abstract": "This article is a review of the use of quantitative (and qualitative) structure-activity relationships (QSARs and SARs) by regulatory agencies and authorities to predict acute toxicity, mutagenicity, carcinogenicity, and other health effects. A number of SAR and QSAR applications, by regulatory agencies and authorities, are reviewed. These include the use of simple QSAR analyses, as well as the use of multivariate QSARs, and a number of different expert system approaches." }, { "pmid": "15170526", "title": "Animal testing and alternative approaches for the human health risk assessment under the proposed new European chemicals regulation.", "abstract": "During the past 20 years the EU legislation for the notification of chemicals has focussed on new chemicals and at the same time failed to cover the evaluation of existing chemicals in Europe. Therefore, in a new EU chemicals policy (REACH, Registration, Evaluation and Authorization of Chemicals) the European Commission proposes to evaluate 30,000 chemicals within a period of 15 years. We are providing estimates of the testing requirements based on our personal experiences during the past 20 years. 
A realistic scenario based on an in-depth discussion of potential toxicological developments and an optimised \"tailor-made\" testing strategy shows that to meet the goals of the REACH policy, animal numbers may be significantly reduced below 10 million if industry would use in-house data from toxicity testing, which are confidential, if non-animal tests would be used, and if information from quantitative structure activity relationships (QSARs) would be applied in substance-tailored testing schemes. The procedures for evaluating the reproductive toxicity of chemicals have the strongest impact on the total number of animals bred for testing under REACH. We are assuming both an active collaboration with our colleagues in industry and substantial funding of the development and validation of advanced non-animal methods by the EU Commission, specifically in reproductive and developmental toxicity." }, { "pmid": "21504870", "title": "In silico toxicology models and databases as FDA Critical Path Initiative toolkits.", "abstract": "In silico toxicology methods are practical, evidence-based and high throughput, with varying accuracy. In silico approaches are of keen interest, not only to scientists in the private sector and to academic researchers worldwide, but also to the public. They are being increasingly evaluated and applied by regulators. Although there are foreseeable beneficial aspects--including maximising use of prior test data and the potential for minimising animal use for future toxicity testing--the primary use of in silico toxicology methods in the pharmaceutical sciences are as decision support information. It is possible for in silico toxicology methods to complement and strengthen the evidence for certain regulatory review processes, and to enhance risk management by supporting a more informed decision regarding priority setting for additional toxicological testing in research and product development. There are also several challenges with these continually evolving methods which clearly must be considered. This mini-review describes in silico methods that have been researched as Critical Path Initiative toolkits for predicting toxicities early in drug development based on prior knowledge derived from preclinical and clinical data at the US Food and Drug Administration, Center for Drug Evaluation and Research." }, { "pmid": "8564854", "title": "U.S. EPA regulatory perspectives on the use of QSAR for new and existing chemical evaluations.", "abstract": "As testing is not required, ecotoxicity or fate data are available for approximately 5% of the approximately 2,300 new chemicals/year (26,000 + total) submitted to the US-EPA. The EPA's Office of Pollution Prevention and Toxics (OPPT) regulatory program was forced to develop and rely upon QSARs to estimate the ecotoxicity and fate of most of the new chemicals evaluated for hazard and risk assessment. QSAR methods routinely result in ecotoxicity estimations of acute and chronic toxicity to fish, aquatic invertebrates, and algae, and in fate estimations of physical/chemical properties, degradation, and bioconcentration. The EPA's Toxic Substances Control Act (TSCA) Inventory of existing chemicals currently lists over 72,000 chemicals. Most existing chemicals also appear to have little or no ecotoxicity or fate data available and the OPPT new chemical QSAR methods now provide predictions and cross-checks of test data for the regulation of existing chemicals. 
Examples include the Toxics Release Inventory (TRI), the Design for the Environment (DfE), and the OECD/SIDS/HPV Programs. QSAR screening of the TSCA Inventory has prioritized thousands of existing chemicals for possible regulatory testing of: 1) persistent bioaccumulative chemicals, and 2) the high ecotoxicity of specific discrete organic chemicals." }, { "pmid": "12896859", "title": "Summary of a workshop on regulatory acceptance of (Q)SARs for human health and environmental endpoints.", "abstract": "The \"Workshop on Regulatory Use of (Q)SARs for Human Health and Environmental Endpoints,\" organized by the European Chemical Industry Council and the International Council of Chemical Associations, gathered more than 60 human health and environmental experts from industry, academia, and regulatory agencies from around the world. They agreed, especially industry and regulatory authorities, that the workshop initiated great potential for the further development and use of predictive models, that is, quantitative structure-activity relationships [(Q)SARs], for chemicals management in a much broader scope than is currently the case. To increase confidence in (Q)SAR predictions and minimization of their misuse, the workshop aimed to develop proposals for guidance and acceptability criteria. The workshop also described the broad outline of a system that would apply that guidance and acceptability criteria to a (Q)SAR when used for chemical management purposes, including priority setting, risk assessment, and classification and labeling." }, { "pmid": "22316153", "title": "The challenges involved in modeling toxicity data in silico: a review.", "abstract": "The percentage of failures in late pharmaceutical development due to toxicity has increased dramatically over the last decade or so, resulting in increased demand for new methods to rapidly and reliably predict the toxicity of compounds. In this review we discuss the challenges involved in both the building of in silico models on toxicology endpoints and their practical use in decision making. In particular, we will reflect upon the predictive strength of a number of different in silico models for a range of different endpoints, different approaches used to generate the models or rules, and limitations of the methods and the data used in model generation. Given that there exists no unique definition of a 'good' model, we will furthermore highlight the need to balance model complexity/interpretability with predictability, particularly in light of OECD/REACH guidelines. Special emphasis is put on the data and methods used to generate the in silico toxicology models, and their strengths and weaknesses are discussed. Switching to the applied side, we next review a number of toxicity endpoints, discussing the methods available to predict them and their general level of predictability (which very much depends on the endpoint considered). We conclude that, while in silico toxicology is a valuable tool to drug discovery scientists, much still needs to be done to, firstly, understand more completely the biological mechanisms for toxicity and, secondly, to generate more rapid in vitro models to screen compounds. With this biological understanding, and additional data available, our ability to generate more predictive in silico models should significantly improve in the future." 
}, { "pmid": "18954891", "title": "A new hybrid system of QSAR models for predicting bioconcentration factors (BCF).", "abstract": "The aim was to develop a reliable and practical quantitative structure-activity relationship (QSAR) model validated by strict conditions for predicting bioconcentration factors (BCF). We built up several QSAR models starting from a large data set of 473 heterogeneous chemicals, based on multiple linear regression (MLR), radial basis function neural network (RBFNN) and support vector machine (SVM) methods. To improve the results, we also applied a hybrid model, which gave better prediction than single models. All models were statistically analysed using strict criteria, including an external test set. The outliers were also examined to understand better in which cases large errors were to be expected and to improve the predictive models. The models offer more robust tools for regulatory purposes, on the basis of the statistical results and the quality check on the input data." }, { "pmid": "23624006", "title": "Integration of QSAR models for bioconcentration suitable for REACH.", "abstract": "QSAR (Quantitative Structure Activity Relationship) models can be a valuable alternative method to replace or reduce animal test required by REACH. In particular, some endpoints such as bioconcentration factor (BCF) are easier to predict and many useful models have been already developed. In this paper we describe how to integrate two popular BCF models to obtain more reliable predictions. In particular, the herein presented integrated model relies on the predictions of two among the most used BCF models (CAESAR and Meylan), together with the Applicability Domain Index (ADI) provided by the software VEGA. Using a set of simple rules, the integrated model selects the most reliable and conservative predictions and discards possible outliers. In this way, for the prediction of the 851 compounds included in the ANTARES BCF dataset, the integrated model discloses a R(2) (coefficient of determination) of 0.80, a RMSE (Root Mean Square Error) of 0.61 log units and a sensitivity of 76%, with a considerable improvement in respect to the CAESAR (R(2)=0.63; RMSE=0.84 log units; sensitivity 55%) and Meylan (R(2)=0.66; RMSE=0.77 log units; sensitivity 65%) without discarding too many predictions (118 out of 851). Importantly, considering solely the compounds within the new integrated ADI, the R(2) increased to 0.92, and the sensitivity to 85%, with a RMSE of 0.44 log units. Finally, the use of properly set safety thresholds applied for monitoring the so called \"suspicious\" compounds, which are those chemicals predicted in proximity of the border normally accepted to discern non-bioaccumulative from bioaccumulative substances, permitted to obtain an integrated model with sensitivity equal to 100%." }, { "pmid": "1679649", "title": "Computer prediction of possible toxic action from chemical structure; the DEREK system.", "abstract": "1. The development of DEREK, a computer-based expert system (derived from the LHASA chemical synthesis design program) for the qualitative prediction of possible toxic action of compounds on the basis of their chemical structure is described. 2. The system is able to perceive chemical sub-structures within molecules and relate these to a rulebase linking the sub-structures with likely types of toxicity. 3. Structures can be drawn in directly at a computer graphics terminal or retrieved automatically from a suitable in-house database. 4. 
The system is intended to aid the selection of compounds based on toxicological considerations, or separately to indicate specific toxicological properties to be tested for early in the evaluation of a compound, so saving time, money and some laboratory animals and resources." }, { "pmid": "11128088", "title": "LeadScope: software for exploring large sets of screening data.", "abstract": "Modern approaches to drug discovery have dramatically increased the speed and quantity of compounds that are made and tested for potential potency. The task of collecting, organizing, and assimilating this information is a major bottleneck in the discovery of new drugs. We have developed LeadScope a novel, interactive computer program for visualizing, browsing, and interpreting chemical and biological screening data that can assist pharmaceutical scientists in finding promising drug candidates. The software organizes the chemical data by structural features familiar to medicinal chemists. Graphs are used to summarize the data, and structural classes are highlighted that are statistically correlated with biological activity." }, { "pmid": "3418743", "title": "Computer-assisted analysis of interlaboratory Ames test variability.", "abstract": "The interlaboratory Ames test variability of the Salmonella/microsome assay was studied by comparing 12 sets of results generated in the frame of the International Program for the Evaluation of Short-Term Tests for Carcinogens (IPESTTC). The strategy for the simultaneous analysis of test performance similarities over the whole range of chemicals involved the use of multivariate data analysis methods. The various sets of Ames test data were contrasted both against each other and against a selection of other IPESTTC tests. These tests were chosen as representing a wide range of different patterns of response to the chemicals. This approach allowed us both to estimate the absolute extent of the interlaboratory variability of the Ames test, and to contrast its range of variability with the overall spread of test responses. Ten of the 12 laboratories showed a high degree of experimental reproducibility; two laboratories generated clearly differentiated results, probably related to differences in the protocol of metabolic activation. The analysis also indicated that assays such as Escherichia coli WP2 and chromosomal aberrations in Chinese hamster ovary cells generated sets of results within the variability range of Salmonella; in this sense they were not complementary to Salmonella." }, { "pmid": "21534561", "title": "Comparative evaluation of in silico systems for ames test mutagenicity prediction: scope and limitations.", "abstract": "The predictive power of four commonly used in silico tools for mutagenicity prediction (DEREK, Toxtree, MC4PC, and Leadscope MA) was evaluated in a comparative manner using a large, high-quality data set, comprising both public and proprietary data (F. Hoffmann-La Roche) from 9,681 compounds tested in the Ames assay. Satisfactory performance statistics were observed on public data (accuracy, 66.4-75.4%; sensitivity, 65.2-85.2%; specificity, 53.1-82.9%), whereas a significant deterioration of sensitivity was observed in the Roche data (accuracy, 73.1-85.5%; sensitivity, 17.4-43.4%; specificity, 77.5-93.9%). As a general tendency, expert systems showed higher sensitivity and lower specificity when compared to QSAR-based tools, which displayed the opposite behavior. 
Possible reasons for the performance differences between the public and Roche data, relating to the experimentally inactive to active compound ratio and the different coverage of chemical space, are thoroughly discussed. Examples of peculiar chemical classes enriched in false negative or false positive predictions are given, and the results of the combined use of the prediction systems are described." }, { "pmid": "17703860", "title": "Comparison of MC4PC and MDL-QSAR rodent carcinogenicity predictions and the enhancement of predictive performance by combining QSAR models.", "abstract": "This report presents a comparison of the predictive performance of MC4PC and MDL-QSAR software as well as a method for combining the predictions from both programs to increase overall accuracy. The conclusions are based on 10 x 10% leave-many-out internal cross-validation studies using 1540 training set compounds with 2-year rodent carcinogenicity findings. The models were generated using the same weight of evidence scoring method previously developed [Matthews, E.J., Contrera, J.F., 1998. A new highly specific method for predicting the carcinogenic potential of pharmaceuticals in rodents using enhanced MCASE QSAR-ES software. Regul. Toxicol. Pharmacol. 28, 242-264.]. Although MC4PC and MDL-QSAR use different algorithms, their overall predictive performance was remarkably similar. Respectively, the sensitivity of MC4PC and MDL-QSAR was 61 and 63%, specificity was 71 and 75%, and concordance was 66 and 69%. Coverage for both programs was over 95% and receiver operator characteristic (ROC) intercept statistic values were above 2.00. The software programs had complimentary coverage with none of the 1540 compounds being uncovered by both MC4PC and MDL-QSAR. Merging MC4PC and MDL-QSAR predictions improved the overall predictive performance. Consensus sensitivity increased to 67%, specificity to 84%, concordance to 76%, and ROC to 4.31. Consensus rules can be tuned to reflect the priorities of the user, so that greater emphasis may be placed on predictions with high sensitivity/low false negative rates or high specificity/low false positive rates. Sensitivity was optimized to 75% by reclassifying all compounds predicted to be positive in MC4PC or MDL-QSAR as positive, and specificity was optimized to 89% by reclassifying all compounds predicted negative in MC4PC or MDL-QSAR as negative." }, { "pmid": "20020914", "title": "Combined Use of MC4PC, MDL-QSAR, BioEpisteme, Leadscope PDM, and Derek for Windows Software to Achieve High-Performance, High-Confidence, Mode of Action-Based Predictions of Chemical Carcinogenesis in Rodents.", "abstract": "ABSTRACT This report describes a coordinated use of four quantitative structure-activity relationship (QSAR) programs and an expert knowledge base system to predict the occurrence and the mode of action of chemical carcinogenesis in rodents. QSAR models were based upon a weight-of-evidence paradigm of carcinogenic activity that was linked to chemical structures (n = 1,572). Identical training data sets were configured for four QSAR programs (MC4PC, MDL-QSAR, BioEpisteme, and Leadscope PDM), and QSAR models were constructed for the male rat, female rat, composite rat, male mouse, female mouse, composite mouse, and rodent composite endpoints. Model predictions were adjusted to favor high specificity (>80%). 
Performance was shown to be affected by the method used to score carcinogenicity study findings and the ratio of the number of active to inactive chemicals in the QSAR training data set. Results demonstrated that the four QSAR programs were complementary, each detecting different profiles of carcinogens. Accepting any positive prediction from two programs showed better overall performance than either of the single programs alone; specificity, sensitivity, and Chi-square values were 72.9%, 65.9%, and 223, respectively, compared to 84.5%, 45.8%, and 151. Accepting only consensus-positive predictions using any two programs had the best overall performance and higher confidence; specificity, sensitivity, and Chi-square values were 85.3%, 57.5%, and 287, respectively. Specific examples are provided to demonstrate that consensus-positive predictions of carcinogenicity by two QSAR programs identified both genotoxic and nongenotoxic carcinogens and that they detected 98.7% of the carcinogens linked in this study to Derek for Windows defined modes of action." }, { "pmid": "15921468", "title": "Boosting: an ensemble learning tool for compound classification and QSAR modeling.", "abstract": "A classification and regression tool, J. H. Friedman's Stochastic Gradient Boosting (SGB), is applied to predicting a compound's quantitative or categorical biological activity based on a quantitative description of the compound's molecular structure. Stochastic Gradient Boosting is a procedure for building a sequence of models, for instance regression trees (as in this paper), whose outputs are combined to form a predicted quantity, either an estimate of the biological activity, or a class label to which a molecule belongs. In particular, the SGB procedure builds a model in a stage-wise manner by fitting each tree to the gradient of a loss function: e.g., squared error for regression and binomial log-likelihood for classification. The values of the gradient are computed for each sample in the training set, but only a random sample of these gradients is used at each stage. (Friedman showed that the well-known boosting algorithm, AdaBoost of Freund and Schapire, could be considered as a particular case of SGB.) The SGB method is used to analyze 10 cheminformatics data sets, most of which are publicly available. The results show that SGB's performance is comparable to that of Random Forest, another ensemble learning method, and are generally competitive with or superior to those of other QSAR methods. The use of SGB's variable importance with partial dependence plots for model interpretation is also illustrated." }, { "pmid": "21509786", "title": "Ensemble QSAR: a QSAR method based on conformational ensembles and metric descriptors.", "abstract": "Quantitative structure-activity relationship (QSAR) is the most versatile tool in computer-assisted molecular design. One conceptual drawback seen in QSAR approaches is the \"one chemical-one structure-one parameter value\" dogma where the model development is based on physicochemical description for a single molecular conformation, while ignoring the rest of the conformational space. It is well known that molecules have several low-energy conformations populated at physiological temperature, and each conformer makes a significant impact on associated properties such as biological activity. At the level of molecular interaction, the dynamics around the molecular structure is of prime essence rather than the average structure. 
As a step toward understanding the role of these discrete microscopic states in biological activity, we have put together a theoretically rigorous and computationally tractable formalism coined as eQSAR. In this approach, the biological activity is modeled as a function of physicochemical description for a selected set of low-energy conformers, rather than that's for a single lowest energy conformation. Eigenvalues derived from the \"Physicochemical property integrated distance matrices\" (PD-matrices) that encompass both 3D structure and physicochemical properties, have been used as descriptors; is a novel addition. eQSAR is validated on three peptide datasets and explicitly elaborated for bradykinin-potentiating peptides. The conformational ensembles were generated by a simple molecular dynamics and consensus dynamics approaches. The eQSAR models are statistically significant and possess the ability to select the most biologically relevant conformation(s) with the relevant physicochemical attributes that have the greatest meaning for description of the biological activity." }, { "pmid": "23343412", "title": "Interpretable, probability-based confidence metric for continuous quantitative structure-activity relationship models.", "abstract": "A great deal of research has gone into the development of robust confidence in prediction and applicability domain (AD) measures for quantitative structure-activity relationship (QSAR) models in recent years. Much of the attention has historically focused on structural similarity, which can be defined in many forms and flavors. A concept that is frequently overlooked in the realm of the QSAR applicability domain is how the local activity landscape plays a role in how accurate a prediction is or is not. In this work, we describe an approach that pairs information about both the chemical similarity and activity landscape of a test compound's neighborhood into a single calculated confidence value. We also present an approach for converting this value into an interpretable confidence metric that has a simple and informative meaning across data sets. The approach will be introduced to the reader in the context of models built upon four diverse literature data sets. The steps we will outline include the definition of similarity used to determine nearest neighbors (NN), how we incorporate the NN activity landscape with a similarity-weighted root-mean-square distance (wRMSD) value, and how that value is then calibrated to generate an intuitive confidence metric for prospective application. Finally, we will illustrate the prospective performance of the approach on five proprietary models whose predictions and confidence metrics have been tracked for more than a year." }, { "pmid": "15883903", "title": "Understanding interobserver agreement: the kappa statistic.", "abstract": "Items such as physical exam findings, radiographic interpretations, or other diagnostic tests often rely on some degree of subjective interpretation by observers. Studies that measure the agreement between two or more observers should include a statistic that takes into account the fact that observers will sometimes agree or disagree simply by chance. The kappa statistic (or kappa coefficient) is the most commonly used statistic for this purpose. A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement equivalent to chance. A limitation of kappa is that it is affected by the prevalence of the finding under observation. 
Methods to overcome this limitation have been described." } ]
JMIR Medical Education
27731840
PMC5041364
10.2196/mededu.4789
A Conceptual Analytics Model for an Outcome-Driven Quality Management Framework as Part of Professional Healthcare Education
Background: Preparing the future health care professional workforce in a changing world is a significant undertaking. Educators and other decision makers look to evidence-based knowledge to improve the quality of education. Analytics, the use of data to generate insights and support decisions, has been applied successfully across numerous application domains. Health care professional education is one area where great potential is yet to be realized. Previous research on Academic and Learning analytics has mainly focused on technical issues. The focus of this study relates to its practical implementation in the setting of health care education. Objective: The aim of this study is to create a conceptual model for a deeper understanding of the process of synthesizing and transforming data into information to support educators’ decision making. Methods: A deductive case study approach was applied to develop the conceptual model. Results: The analytics loop works both in theory and in practice. The conceptual model encompasses the underlying data, the quality indicators, and decision support for educators. Conclusions: The model illustrates how a theory can be applied to a traditional data-driven analytics approach, as well as alongside a context- or need-driven analytics approach.
Related Work: Educational Informatics is a multidisciplinary research area that uses Information and Communication Technology (ICT) in education. It has many sub-disciplines, a number of which focus on learning or teaching (eg, simulation), and others that focus on administration of educational programs (eg, curriculum mapping and analytics). Within the area of analytics, it is possible to identify work focusing on the technical challenges (eg, educational data mining), the educational challenges (eg, Learning analytics), or the administrative challenges (eg, Academic- and Action analytics) [8]. The Academic- and Learning analytics fields emerged in early 2005. The major factors driving their development are technological, educational, and political. Development of the necessary techniques for data-driven analytics and decision support began in the early 20th century. Higher education institutions are collecting more data than ever before. However, most of these data are not used at all, or they are used for purposes other than addressing strategic questions. Educational institutions face bigger challenges than ever before, including increasing requirements for excellence, internationalization, the emergence of new sciences, new markets, and new educational forms. The potential benefits of analytics for applications such as resource optimization and automation of multiple administrative functions (alerts, reports, and recommendations) have been described in the literature [9,10].
[ "2294449", "11141156", "15523387", "20054502", "25160372" ]
[ { "pmid": "20054502", "title": "Recommendations of the International Medical Informatics Association (IMIA) on Education in Biomedical and Health Informatics. First Revision.", "abstract": "Objective: The International Medical Informatics Association (IMIA) agreed on revising the existing international recommendations in health informatics/medical informatics education. These should help to establish courses, course tracks or even complete programs in this field, to further develop existing educational activities in the various nations and to support international initiatives concerning education in biomedical and health informatics (BMHI), particularly international activities in educating BMHI specialists and the sharing of courseware. Method: An IMIA task force, nominated in 2006, worked on updating the recommendations' first version. These updates have been broadly discussed and refined by members of IMIA's National Member Societies, IMIA's Academic Institutional Members and by members of IMIA's Working Group on Health and Medical Informatics Education. Results and Conclusions: The IMIA recommendations center on educational needs for health care professionals to acquire knowledge and skills in information processing and information and communication technology. The educational needs are described as a three-dimensional framework. The dimensions are: 1) professionals in health care (e.g. physicians, nurses, BMHI professionals), 2) type of specialization in BMHI (IT users, BMHI specialists), and 3) stage of career progression (bachelor, master, doctorate). Learning outcomes are defined in terms of knowledge and practical skills for health care professionals in their role a) as IT user and b) as BMHI specialist. Recommendations are given for courses/course tracks in BMHI as part of educational programs in medicine, nursing, health care management, dentistry, pharmacy, public health, health record administration, and informatics/computer science as well as for dedicated programs in BMHI (with bachelor, master or doctor degree). To support education in BMHI, IMIA offers to award a certificate for high-quality BMHI education. It supports information exchange on programs and courses in BMHI through its Working Group on Health and Medical Informatics Education." }, { "pmid": "25160372", "title": "Big data in medical informatics: improving education through visual analytics.", "abstract": "A continuous effort to improve healthcare education today is currently driven from the need to create competent health professionals able to meet healthcare demands. Limited research reporting how educational data manipulation can help in healthcare education improvement. The emerging research field of visual analytics has the advantage to combine big data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognise visual patterns. The aim of this study was therefore to explore novel ways of representing curriculum and educational data using visual analytics. Three approaches of visualization and representation of educational data were presented. Five competencies at undergraduate medical program level addressed in courses were identified to inaccurately correspond to higher education board competencies. Different visual representations seem to have a potential in impacting on the ability to perceive entities and connections in the curriculum data." } ]
Scientific Reports
27686748
PMC5043229
10.1038/srep34181
Accuracy Improvement for Predicting Parkinson’s Disease Progression
Parkinson’s disease (PD) is a member of a larger group of neuromotor diseases marked by the progressive death of dopamine-producing cells in the brain. Computational tools that exploit medical data are highly desirable for helping people discover their risk of the disease at an early stage, when symptoms can still be alleviated. This paper proposes a new hybrid intelligent system for the prediction of PD progression using noise removal, clustering and prediction methods. Principal Component Analysis (PCA) and Expectation Maximization (EM) are employed, respectively, to address the multi-collinearity problems in the experimental datasets and to cluster the data. We then apply an Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR) for prediction of PD progression. Experimental results on public Parkinson’s datasets show that the proposed method remarkably improves the accuracy of predicting PD progression. The hybrid intelligent system can assist medical practitioners in healthcare practice with the early detection of Parkinson’s disease.
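As a rough illustration of the noise-removal step mentioned in this abstract, the sketch below applies PCA to a synthetic, deliberately collinear feature matrix; everything here (data shapes, the 95% variance threshold) is an assumption for demonstration and not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for correlated voice measurements:
# 200 recordings x 16 features driven by only 4 latent factors.
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 16))

# Standardize, then keep the principal components explaining 95% of the
# variance; the retained components are uncorrelated, which removes the
# multi-collinearity that destabilizes downstream prediction models.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95)
X_pc = pca.fit_transform(X_std)

print(X_pc.shape, pca.explained_variance_ratio_.round(3))
```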
Related Work: For effective diagnosis of Parkinson's Disease (PD), different types of classification methods were examined by Das30. The performance scores of the classifiers were computed using various evaluation methods, and according to these scores the Neural Network (NN) classifier obtained the best result, with an accuracy of 92.9%. Bhattacharya and Bhatia31 used the data mining tool Weka to pre-process the dataset, on which they then used a Support Vector Machine (SVM) to distinguish people with PD from healthy people. They applied LIBSVM to find the best possible accuracy with different kernel values for the experimental dataset, and measured model accuracy using Receiver Operating Characteristic (ROC) curve variation. Chen et al.13 presented a PD diagnosis system using the Fuzzy K-Nearest Neighbor (FKNN) method. They compared the results of the developed FKNN-based system with those of SVM-based approaches, and also employed PCA to further improve the PD diagnosis accuracy. Using 10-fold cross-validation, the experimental results demonstrated that the FKNN-based system significantly improves the classification accuracy (96.07%) and outperforms SVM-based approaches and other methods in the literature. Ozcift32 developed a classification method based on SVM and obtained about 97% accuracy for the prediction of PD progression. Polat29 examined Fuzzy C-Means (FCM) Clustering-based Feature Weighting (FCMFW) for the detection of PD, using a K-NN classifier applied to the experimental dataset with different values of k. Åström and Koker33 proposed a prediction system based on parallel NNs, in which the output of each NN is evaluated by a rule-based system for the final decision. Their experiments showed that a set of nine parallel NNs yielded an improvement of 8.4% on the prediction of PD compared to a single network. Li et al.34 proposed a fuzzy-based non-linear transformation method to extend classification-related information from the original data attribute values for a small data set. Based on the new transformed data set, they applied Principal Component Analysis (PCA) to extract the optimal subset of features and SVM for predicting PD. Guo et al.35 developed a hybrid system using Expectation Maximization (EM) and Genetic Programming (GP) to construct learning feature functions from voice features in the PD context. Using projection-based learning for a meta-cognitive Radial Basis Function Network (PBL-McRBFN), Babu and Suresh (2013) implemented a gene expression-based method for the prediction of PD progression. The capabilities of the Random Forest algorithm were tested by Peterek et al.36 for the prediction of PD progression. A hybrid intelligent system was proposed by Hariharan et al.24 using clustering (Gaussian mixture model), feature reduction and classification methods. Froelich et al.23 investigated the diagnosis of PD on the basis of characteristic features of a person’s voice, classifying individual voice samples as belonging to a sick or a healthy person using decision trees. A threshold-based method was then used for the final diagnosis of a person from the previously classified voice samples, where the value of the threshold determines the minimal number of individual voice samples (indicating the disease) required for the reliable diagnosis of a sick person. Using real-world data, they achieved a classification accuracy of 90%.
Eskidere et al.25 studied the performance of SVM, Least Square SVM (LS-SVM), Multilayer Perceptron NN (MLPNN), and General Regression NN (GRNN) regression methods for remote tracking of PD progression. Their results demonstrated that LS-SVM obtains the best accuracy of the four methods and outperforms the latest regression methods published in the literature. In a study by Guo et al.10 in the Central South of Mainland China, sixteen Single-Nucleotide Polymorphisms (SNPs) located in 8 genes and/or loci (SNCA, LRRK2, MAPT, GBA, HLA-DR, BST1, PARK16, and PARK17) were analysed in a cohort of 1061 PD patients and 1066 normal healthy participants. The study established that Rep1, rs356165, and rs11931074 in the SNCA gene, G2385R in the LRRK2 gene, rs4698412 in the BST1 gene, rs1564282 in PARK17, and L444P in the GBA gene have independent and combined significant effects on PD, and it reported that SNPs in these 4 genes have a more pronounced effect on PD. From the literature on the prediction of PD progression, we found that no existing approach combines Principal Component Analysis (PCA), a Gaussian mixture model with Expectation Maximization (EM), and prediction methods for PD diagnosis. This research accordingly aims to develop an intelligent system for PD diagnosis based on these approaches. Hence, in this paper, we incorporate robust machine learning techniques and propose a new hybrid intelligent system using PCA, a Gaussian mixture model with EM, and prediction methods. Overall, in comparison with research efforts found in the literature, in this research: (1) a comparative study is conducted between two robust supervised prediction techniques, Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR); (2) EM is used for data clustering. The clustering problem has been addressed in many disease diagnosis systems1337, which reflects its broad appeal and usefulness as one of the steps in exploratory health data analysis. In this study, EM clustering is used as an unsupervised classification method to cluster the experimental dataset into similar groups; (3) ANFIS and SVR are used for prediction of PD progression; (4) PCA is used for dimensionality reduction and for dealing with the multi-collinearity problem in the experimental data. This technique has been used in many disease diagnosis systems to eliminate redundant information in the original health data272829; (5) a hybrid intelligent system combining EM, PCA and the prediction methods ANFIS and SVR is proposed for prediction of PD progression.
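As an illustration of the proposed hybrid workflow (PCA for dimensionality reduction, Gaussian-mixture/EM clustering, then a separate predictor per cluster), the sketch below is a minimal Python approximation under stated assumptions: it uses synthetic data, substitutes scikit-learn's SVR for both the ANFIS and SVR predictors, and is not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 16))                        # hypothetical voice features
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=300)   # hypothetical UPDRS-like score

# 1) PCA removes multi-collinearity / noise from the standardized features.
X_pc = PCA(n_components=0.95).fit_transform(StandardScaler().fit_transform(X))

# 2) EM clustering via a Gaussian mixture model groups similar cases.
gmm = GaussianMixture(n_components=3, random_state=1).fit(X_pc)
labels = gmm.predict(X_pc)

# 3) A separate regressor per cluster (SVR stands in for the ANFIS/SVR pair).
models = {c: SVR(kernel="rbf").fit(X_pc[labels == c], y[labels == c])
          for c in np.unique(labels)}

def predict(sample_2d):
    """Route a (1, n_components) sample to its cluster's regressor."""
    c = gmm.predict(sample_2d)[0]
    return float(models[c].predict(sample_2d)[0])

print(round(predict(X_pc[:1]), 3))
```

Clustering before regression lets each local model fit a more homogeneous subset of cases, which is the intuition behind combining steps (2) to (5) above.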
[ "20082967", "23711400", "27184740", "22387368", "23154271", "21556377", "26618044", "25623333", "22387592", "25064009", "12777365", "22656184", "22733427", "22502984", "23182747", "24485390", "21547504", "21493051", "26019610", "26828106" ]
[ { "pmid": "20082967", "title": "Predicting Parkinson's disease - why, when, and how?", "abstract": "Parkinson's disease (PD) is a progressive disorder with a presymptomatic interval; that is, there is a period during which the pathologic process has begun, but motor signs required for the clinical diagnosis are absent. There is considerable interest in discovering markers to diagnose this preclinical stage. Current predictive marker development stems mainly from two principles; first, that pathologic processes occur in lower brainstem regions before substantia nigra involvement and second, that redundancy and compensatory responses cause symptoms to emerge only after advanced degeneration. Decreased olfaction has recently been demonstrated to predict PD in prospective pathologic studies, although the lead time may be relatively short and the positive predictive value and specificity are low. Screening patients for depression and personality changes, autonomic symptoms, subtle motor dysfunction on quantitative testing, sleepiness and insomnia are other potential simple markers. More invasive measures such as detailed autonomic testing, cardiac MIBG-scintigraphy, transcranial ultrasound, and dopaminergic functional imaging may be especially useful in those at high risk or for further defining risk in those identified through primary screening. Despite intriguing leads, direct testing of preclinical markers has been limited, mainly because there is no reliable way to identify preclinical disease. Idiopathic RBD is characterized by loss of normal atonia with REM sleep. Approximately 50% of affected individuals will develop PD or dementia within 10 years. This provides an unprecedented opportunity to test potential predictive markers before clinical disease onset. The results of marker testing in idiopathic RBD with its implications for disease prediction will be detailed." }, { "pmid": "23711400", "title": "Unveiling relevant non-motor Parkinson's disease severity symptoms using a machine learning approach.", "abstract": "OBJECTIVE\nIs it possible to predict the severity staging of a Parkinson's disease (PD) patient using scores of non-motor symptoms? This is the kickoff question for a machine learning approach to classify two widely known PD severity indexes using individual tests from a broad set of non-motor PD clinical scales only.\n\n\nMETHODS\nThe Hoehn & Yahr index and clinical impression of severity index are global measures of PD severity. They constitute the labels to be assigned in two supervised classification problems using only non-motor symptom tests as predictor variables. Such predictors come from a wide range of PD symptoms, such as cognitive impairment, psychiatric complications, autonomic dysfunction or sleep disturbance. The classification was coupled with a feature subset selection task using an advanced evolutionary algorithm, namely an estimation of distribution algorithm.\n\n\nRESULTS\nResults show how five different classification paradigms using a wrapper feature selection scheme are capable of predicting each of the class variables with estimated accuracy in the range of 72-92%. In addition, classification into the main three severity categories (mild, moderate and severe) was split into dichotomic problems where binary classifiers perform better and select different subsets of non-motor symptoms. 
The number of jointly selected symptoms throughout the whole process was low, suggesting a link between the selected non-motor symptoms and the general severity of the disease.\n\n\nCONCLUSION\nQuantitative results are discussed from a medical point of view, reflecting a clear translation to the clinical manifestations of PD. Moreover, results include a brief panel of non-motor symptoms that could help clinical practitioners to identify patients who are at different stages of the disease from a limited set of symptoms, such as hallucinations, fainting, inability to control body sphincters or believing in unlikely facts." }, { "pmid": "27184740", "title": "Modified serpinA1 as risk marker for Parkinson's disease dementia: Analysis of baseline data.", "abstract": "Early detection of dementia in Parkinson disease is a prerequisite for preventive therapeutic approaches. Modified serpinA1 in cerebrospinal fluid (CSF) was suggested as an early biomarker for differentiation between Parkinson patients with (PDD) or without dementia (PD). Within this study we aimed to further explore the diagnostic value of serpinA1. We applied a newly developed nanoscale method for the detection of serpinA1 based on automated capillary isoelectric focusing (CIEF). A clinical sample of 102 subjects including neurologically healthy controls (CON), PD and PDD patients was investigated. Seven serpinA1 isoforms of different charge were detected in CSF from all three diagnostic groups. The mean CSF signals of the most acidic serpinA1 isoform differed significantly (p < 0.01) between PDD (n = 29) and PD (n = 37) or CON (n = 36). Patients above the cut-off of 6.4 have a more than six times higher risk for an association with dementia compared to patients below the cut off. We propose this serpinA1 CIEF-immunoassay as a novel tool in predicting cognitive impairment in PD patients and therefore for patient stratification in therapeutic trials." }, { "pmid": "22387368", "title": "Neurotransmitter receptors and cognitive dysfunction in Alzheimer's disease and Parkinson's disease.", "abstract": "Cognitive dysfunction is one of the most typical characteristics in various neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease (advanced stage). Although several mechanisms like neuronal apoptosis and inflammatory responses have been recognized to be involved in the pathogenesis of cognitive dysfunction in these diseases, recent studies on neurodegeneration and cognitive dysfunction have demonstrated a significant impact of receptor modulation on cognitive changes. The pathological alterations in various receptors appear to contribute to cognitive impairment and/or deterioration with correlation to diversified mechanisms. This article recapitulates the present understandings and concepts underlying the modulation of different receptors in human beings and various experimental models of Alzheimer's disease and Parkinson's disease as well as a conceptual update on the underlying mechanisms. Specific roles of serotonin, adrenaline, acetylcholine, dopamine receptors, and N-methyl-D-aspartate receptors in Alzheimer's disease and Parkinson's disease will be interactively discussed. Complex mechanisms involved in their signaling pathways in the cognitive dysfunction associated with the neurodegenerative diseases will also be addressed. 
Substantial evidence has suggested that those receptors are crucial neuroregulators contributing to cognitive pathology and complicated correlations exist between those receptors and the expression of cognitive capacities. The pathological alterations in the receptors would, therefore, contribute to cognitive impairments and/or deterioration in Alzheimer's disease and Parkinson's disease. Future research may shed light on new clues for the treatment of cognitive dysfunction in neurodegenerative diseases by targeting specific alterations in these receptors and their signal transduction pathways in the frontal-striatal, fronto-striato-thalamic, and mesolimbic circuitries." }, { "pmid": "23154271", "title": "Serum uric acid in patients with Parkinson's disease and vascular parkinsonism: a cross-sectional study.", "abstract": "BACKGROUND\nElevation of serum uric acid (UA) is correlated with a decreased risk of Parkinson's disease (PD); however, the association and clinical relevance of serum UA levels in patients with PD and vascular parkinsonism (VP) are unknown.\n\n\nOBJECTIVE\nWe performed a cross-sectional study of 160 Chinese patients with PD and VP to determine whether UA levels in patients could predict the outcomes.\n\n\nMETHODS\nSerum UA levels were divided into quartiles and the association between UA and the severity of PD or VP was investigated in each quartile.\n\n\nRESULTS\nThe serum levels of UA in PD were significantly lower than those in normal subjects and VP. The serum UA levels in PD patients were significantly correlated with some clinical parameters. Strong correlations were observed in male PD patients, but significant correlations were observed only between UA and the non-motor symptoms (NMS) of burden of sleep/fatigue and mood in female PD patients. PD patients in the lowest quartile of serum UA levels had significant correlations between UA and the unified Parkinson's disease rating scale, the modified Hoehn and Yahr staging scale and NMS burden for attention/memory.\n\n\nCONCLUSION\nOur findings support the hypothesis that subjects with low serum UA levels may be more prone to developing PD and indicate that the inverse relationship between UA and severity of PD was robust for men but weak for women. Our results strongly imply that either low serum UA level is a deteriorative predictor or that serum UA level serves as an indirect biomarker of prediction in PD but not in VP patients." }, { "pmid": "21556377", "title": "The combination of homocysteine and C-reactive protein predicts the outcomes of Chinese patients with Parkinson's disease and vascular parkinsonism.", "abstract": "BACKGROUND\nThe elevation of plasma homocysteine (Hcy) and C-reactive protein (CRP) has been correlated to an increased risk of Parkinson's disease (PD) or vascular diseases. The association and clinical relevance of a combined assessment of Hcy and CRP levels in patients with PD and vascular parkinsonism (VP) are unknown.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe performed a cross-sectional study of 88 Chinese patients with PD and VP using a clinical interview and the measurement of plasma Hcy and CRP to determine if Hcy and CRP levels in patients may predict the outcomes of the motor status, non-motor symptoms (NMS), disease severity, and cognitive declines. 
Each patient's NMS, cognitive deficit, disease severity, and motor status were assessed by the Nonmotor Symptoms Scale (NMSS), Mini-Mental State Examination (MMSE), the modified Hoehn and Yahr staging scale (H&Y), and the unified Parkinson's disease rating scale part III (UPDRS III), respectively. We found that 100% of patients with PD and VP presented with NMS. The UPDRS III significantly correlated with CRP (P = 0.011) and NMSS (P = 0.042) in PD patients. The H&Y was also correlated with Hcy (P = 0.002), CRP (P = 0.000), and NMSS (P = 0.023) in PD patients. In VP patients, the UPDRS III and H&Y were not significantly associated with NMSS, Hcy, CRP, or MMSE. Strong correlations were observed between Hcy and NMSS as well as between CRP and NMSS in PD and VP.\n\n\nCONCLUSIONS/SIGNIFICANCE\nOur findings support the hypothesis that Hcy and CRP play important roles in the pathogenesis of PD. The combination of Hcy and CRP may be used to assess the progression of PD and VP. Whether or not anti-inflammatory medication could be used in the management of PD and VP will produce an interesting topic for further research." }, { "pmid": "26618044", "title": "Low Cerebral Glucose Metabolism: A Potential Predictor for the Severity of Vascular Parkinsonism and Parkinson's Disease.", "abstract": "This study explored the association between cerebral metabolic rates of glucose (CMRGlc) and the severity of Vascular Parkinsonism (VP) and Parkinson's disease (PD). A cross-sectional study was performed to compare CMRGlc in normal subjects vs. VP and PD patients. Twelve normal subjects, 22 VP, and 11 PD patients were evaluated with the H&Y and MMSE, and underwent 18F-FDG measurements. Pearson's correlations were used to identify potential associations between the severity of VP/PD and CMRGlc. A pronounced reduction of CMRGlc in the frontal lobe and caudate putamen was detected in patients with VP and PD when compared with normal subjects. The VP patients displayed a slight CMRGlc decrease in the caudate putamen and frontal lobe in comparison with PD patients. These decreases in CMRGlc in the frontal lobe and caudate putamen were significantly correlated with the VP patients' H&Y, UPDRS II, UPDRS III, MMSE, cardiovascular, and attention/memory scores. Similarly, significant correlations were observed in patients with PD. This is the first clinical study finding strong evidence for an association between low cerebral glucose metabolism and the severity of VP and PD. Our findings suggest that these changes in glucose metabolism in the frontal lobe and caudate putamen may underlie the pathophysiological mechanisms of VP and PD. As the scramble to find imaging biomarkers or predictors of the disease intensifies, a better understanding of the roles of cerebral glucose metabolism may give us insight into the pathogenesis of VP and PD." }, { "pmid": "25623333", "title": "Polygenic determinants of Parkinson's disease in a Chinese population.", "abstract": "It has been reported that some single-nucleotide polymorphisms (SNPs) are associated with the risk of Parkinson's disease (PD), but whether a combination of these SNPs would have a stronger association with PD than any individual SNP is unknown. Sixteen SNPs located in the 8 genes and/or loci (SNCA, LRRK2, MAPT, GBA, HLA-DR, BST1, PARK16, and PARK17) were analyzed in a Chinese cohort consisting of 1061 well-characterized PD patients and 1066 control subjects from Central South of Mainland China. 
We found that Rep1, rs356165, and rs11931074 in SNCA gene; G2385R in LRRK2 gene; rs4698412 in BST1 gene; rs1564282 in PARK17; and L444P in GBA gene were associated with PD with adjustment of sex and age (p < 0.05) in the analysis of 16 variants. PD risk increased when Rep1 and rs11931074, G2385R, rs1564282, rs4698412; rs11931074 and G2385R, rs1564282, rs4698412; G2385R and rs1564282, rs4698412; and rs1564282 and rs4698412 were combined for the association analysis. In addition, PD risk increased cumulatively with the increasing number of variants (odds ratio for carrying 3 variants, 3.494). In summary, we confirmed that Rep1, rs356165, and rs11931074 in SNCA gene, G2385R in LRRK2 gene, rs4698412 in BST1 gene, rs1564282 in PARK17, and L444P in GBA gene have an independent and combined significant association with PD. SNPs in these 4 genes have a cumulative effect with PD." }, { "pmid": "22387592", "title": "Speech impairment in a large sample of patients with Parkinson's disease.", "abstract": "This study classified speech impairment in 200 patients with Parkinson's disease (PD) into five levels of overall severity and described the corresponding type (voice, articulation, fluency) and extent (rated on a five-point scale) of impairment for each level. From two-minute conversational speech samples, parameters of voice, fluency and articulation were assessed by two trained-raters. Voice was found to be the leading deficit, most frequently affected and impaired to a greater extent than other features in the initial stages. Articulatory and fluency deficits manifested later, articulatory impairment matching voice impairment in frequency and extent at the `Severe' stage. At the final stage of `Profound' impairment, articulation was the most frequently impaired feature at the lowest level of performance. This study illustrates the prominence of voice and articulatory speech motor control deficits, and draws parallels with deficits of motor set and motor set instability in skeletal controls of gait and handwriting." }, { "pmid": "25064009", "title": "Large-scale meta-analysis of genome-wide association data identifies six new risk loci for Parkinson's disease.", "abstract": "We conducted a meta-analysis of Parkinson's disease genome-wide association studies using a common set of 7,893,274 variants across 13,708 cases and 95,282 controls. Twenty-six loci were identified as having genome-wide significant association; these and 6 additional previously reported loci were then tested in an independent set of 5,353 cases and 5,551 controls. Of the 32 tested SNPs, 24 replicated, including 6 newly identified loci. Conditional analyses within loci showed that four loci, including GBA, GAK-DGKQ, SNCA and the HLA region, contain a secondary independent risk variant. In total, we identified and replicated 28 independent risk variants for Parkinson's disease across 24 loci. Although the effect of each individual locus was small, risk profile analysis showed substantial cumulative risk in a comparison of the highest and lowest quintiles of genetic risk (odds ratio (OR) = 3.31, 95% confidence interval (CI) = 2.55-4.30; P = 2 × 10(-16)). We also show six risk loci associated with proximal gene expression or DNA methylation." }, { "pmid": "12777365", "title": "Incidence of Parkinson's disease: variation by age, gender, and race/ethnicity.", "abstract": "The goal of this study was to estimate the incidence of Parkinson's disease by age, gender, and ethnicity. 
Newly diagnosed Parkinson's disease cases in 1994-1995 were identified among members of the Kaiser Permanente Medical Care Program of Northern California, a large health maintenance organization. Each case met modified standardized criteria/Hughes diagnostic criteria as applied by a movement disorder specialist. Incidence rates per 100,000 person-years were calculated using the Kaiser Permanente membership information as the denominator and adjusted for age and/or gender using the direct method of standardization. A total of 588 newly diagnosed (incident) cases of Parkinson's disease were identified, which gave an overall annualized age- and gender-adjusted incidence rate of 13.4 per 100,000 (95% confidence interval (CI): 11.4, 15.5). The incidence rapidly increased over the age of 60 years, with only 4% of the cases being under the age of 50 years. The rate for men (19.0 per 100,000, 95% CI: 16.1, 21.8) was 91% higher than that for women (9.9 per 100,000, 95% CI: 7.6, 12.2). The age- and gender-adjusted rate per 100,000 was highest among Hispanics (16.6, 95% CI: 12.0, 21.3), followed by non-Hispanic Whites (13.6, 95% CI: 11.5, 15.7), Asians (11.3, 95% CI: 7.2, 15.3), and Blacks (10.2, 95% CI: 6.4, 14.0). These data suggest that the incidence of Parkinson's disease varies by race/ethnicity." }, { "pmid": "22656184", "title": "Musculoskeletal problems as an initial manifestation of Parkinson's disease: a retrospective study.", "abstract": "OBJECTIVE\nThe purpose of this study was to review the prevalence of musculoskeletal pain in the prodromal phase of PD, before the PD diagnosis is made.\n\n\nMETHODS\nA retrospective review of 82 PD patients was performed. Hospital inpatient notes and outpatient clinic admission notes were reviewed. The initial complaints prompting patients to seek medical attention were noted, as were the initial diagnoses. The symptoms were considered retrospectively to be associated with PD.\n\n\nRESULTS\nMusculoskeletal pain was present as a prodromal PD symptom in 27 (33%) cases initially diagnosed with osteoarthritis, degenerative spinal disease, and frozen shoulder. The mean time from the initial symptom appearance to dopaminergic treatment was 6.6 years in the musculoskeletal pain group and 2.3 years in the group with typical PD signs. Significant improvement of musculoskeletal pain after the initiation of dopaminergic treatment was present in 23 (85%) cases.\n\n\nCONCLUSIONS\nOf the PD patients who went on to develop motor features of PD, one third manifested musculoskeletal pain as the initial symptom. A good response to L-DOPA therapy was seen in 85% of cases presenting with musculoskeletal pain. Our findings suggest that musculoskeletal pain may be a significant feature in earlier PD stages." }, { "pmid": "22733427", "title": "Rapid eye movement sleep behavior disorder and subtypes of Parkinson's disease.", "abstract": "Numerous studies have explored the potential relationship between rapid eye movement sleep behavior disorder (RBD) and manifestations of PD. Our aim was to perform an expanded extensive assessment of motor and nonmotor manifestations in PD to identify whether RBD was associated with differences in the nature and severity of these manifestations. PD patients underwent polysomnography (PSG) to diagnose the presence of RBD. Participants then underwent an extensive evaluation by a movement disorders specialist blinded to PSG results. 
Measures of disease severity, quantitative motor indices, motor subtypes, therapy complications, and autonomic, psychiatric, visual, and olfactory dysfunction were assessed and compared using regression analysis, adjusting for disease duration, age, and sex. Of 98 included patients, 54 had RBD and 44 did not. PD patients with RBD were older (P = 0.034) and were more likely to be male (P < 0.001). On regression analysis, the most consistent links between RBD and PD were a higher systolic blood pressure (BP) change while standing (-23.9 ± 13.9 versus -3.5 ± 10.9; P < 0.001), a higher orthostatic symptom score (0.89 ± 0.82 versus 0.44 ± 0.66; P = 0.003), and a higher frequency of freezing (43% versus 14%; P = 0.011). A systolic BP drop >10 could identify PD patients with RBD with 81% sensitivity and 86% specificity. In addition, there was a probable relationship between RBD and nontremor predominant subtype of PD (P = 0.04), increased frequency of falls (P = 0.009), and depression (P = 0.009). Our results support previous findings that RBD is a multifaceted phenomenon in PD. Patients with PD who have RBD tend to have specific motor and nonmotor manifestations, especially orthostatic hypotension." }, { "pmid": "22502984", "title": "A PC-based system for predicting movement from deep brain signals in Parkinson's disease.", "abstract": "There is much current interest in deep brain stimulation (DBS) of the subthalamic nucleus (STN) for the treatment of Parkinson's disease (PD). This type of surgery has enabled unprecedented access to deep brain signals in the awake human. In this paper we present an easy-to-use computer based system for recording, displaying, archiving, and processing electrophysiological signals from the STN. The system was developed for predicting self-paced hand-movements in real-time via the online processing of the electrophysiological activity of the STN. It is hoped that such a computerised system might have clinical and experimental applications. For example, those sites within the STN most relevant to the processing of voluntary movement could be identified through the predictive value of their activities with respect to the timing of future movement." }, { "pmid": "23182747", "title": "Accurate telemonitoring of Parkinson's disease diagnosis using robust inference system.", "abstract": "This work presents more precise computational methods for improving the diagnosis of Parkinson's disease based on the detection of dysphonia. New methods are presented for enhanced evaluation and recognize Parkinson's disease affected patients at early stage. Analysis is performed with significant level of error tolerance rate and established our results with corrected T-test. Here new ensembles and other machine learning methods consisting of multinomial logistic regression classifier with Haar wavelets transformation as projection filter that outperform logistic regression is used. Finally a novel and reliable inference system is presented for early recognition of people affected by this disease and presents a new measure of the severity of the disease. Feature selection method is based on Support Vector Machines and ranker search method. Performance analysis of each model is compared to the existing methods and examines the main advancements and concludes with propitious results. 
Reliable methods are proposed for treating Parkinson's disease that includes sparse multinomial logistic regression, Bayesian network, Support Vector Machines, Artificial Neural Networks, Boosting methods and their ensembles. The study aim at improving the quality of Parkinson's disease treatment by tracking them and reinforce the viability of cost effective, regular and precise telemonitoring application." }, { "pmid": "24485390", "title": "A new hybrid intelligent system for accurate detection of Parkinson's disease.", "abstract": "Elderly people are commonly affected by Parkinson's disease (PD) which is one of the most common neurodegenerative disorders due to the loss of dopamine-producing brain cells. People with PD's (PWP) may have difficulty in walking, talking or completing other simple tasks. Variety of medications is available to treat PD. Recently, researchers have found that voice signals recorded from the PWP is becoming a useful tool to differentiate them from healthy controls. Several dysphonia features, feature reduction/selection techniques and classification algorithms were proposed by researchers in the literature to detect PD. In this paper, hybrid intelligent system is proposed which includes feature pre-processing using Model-based clustering (Gaussian mixture model), feature reduction/selection using principal component analysis (PCA), linear discriminant analysis (LDA), sequential forward selection (SFS) and sequential backward selection (SBS), and classification using three supervised classifiers such as least-square support vector machine (LS-SVM), probabilistic neural network (PNN) and general regression neural network (GRNN). PD dataset was used from University of California-Irvine (UCI) machine learning database. The strength of the proposed method has been evaluated through several performance measures. The experimental results show that the combination of feature pre-processing, feature reduction/selection methods and classification gives a maximum classification accuracy of 100% for the Parkinson's dataset." }, { "pmid": "21547504", "title": "SVM feature selection based rotation forest ensemble classifiers to improve computer-aided diagnosis of Parkinson disease.", "abstract": "Parkinson disease (PD) is an age-related deterioration of certain nerve systems, which affects movement, balance, and muscle control of clients. PD is one of the common diseases which affect 1% of people older than 60 years. A new classification scheme based on support vector machine (SVM) selected features to train rotation forest (RF) ensemble classifiers is presented for improving diagnosis of PD. The dataset contains records of voice measurements from 31 people, 23 with PD and each record in the dataset is defined with 22 features. The diagnosis model first makes use of a linear SVM to select ten most relevant features from 22. As a second step of the classification model, six different classifiers are trained with the subset of features. Subsequently, at the third step, the accuracies of classifiers are improved by the utilization of RF ensemble classification strategy. The results of the experiments are evaluated using three metrics; classification accuracy (ACC), Kappa Error (KE) and Area under the Receiver Operating Characteristic (ROC) Curve (AUC). Performance measures of two base classifiers, i.e. KStar and IBk, demonstrated an apparent increase in PD diagnosis accuracy compared to similar studies in literature. 
After all, application of RF ensemble classification scheme improved PD diagnosis in 5 of 6 classifiers significantly. We, numerically, obtained about 97% accuracy in RF ensemble of IBk (a K-Nearest Neighbor variant) algorithm, which is a quite high performance for Parkinson disease diagnosis." }, { "pmid": "21493051", "title": "A fuzzy-based data transformation for feature extraction to increase classification performance with small medical data sets.", "abstract": "OBJECTIVE\nMedical data sets are usually small and have very high dimensionality. Too many attributes will make the analysis less efficient and will not necessarily increase accuracy, while too few data will decrease the modeling stability. Consequently, the main objective of this study is to extract the optimal subset of features to increase analytical performance when the data set is small.\n\n\nMETHODS\nThis paper proposes a fuzzy-based non-linear transformation method to extend classification related information from the original data attribute values for a small data set. Based on the new transformed data set, this study applies principal component analysis (PCA) to extract the optimal subset of features. Finally, we use the transformed data with these optimal features as the input data for a learning tool, a support vector machine (SVM). Six medical data sets: Pima Indians' diabetes, Wisconsin diagnostic breast cancer, Parkinson disease, echocardiogram, BUPA liver disorders dataset, and bladder cancer cases in Taiwan, are employed to illustrate the approach presented in this paper.\n\n\nRESULTS\nThis research uses the t-test to evaluate the classification accuracy for a single data set; and uses the Friedman test to show the proposed method is better than other methods over the multiple data sets. The experiment results indicate that the proposed method has better classification performance than either PCA or kernel principal component analysis (KPCA) when the data set is small, and suggest creating new purpose-related information to improve the analysis performance.\n\n\nCONCLUSION\nThis paper has shown that feature extraction is important as a function of feature selection for efficient data analysis. When the data set is small, using the fuzzy-based transformation method presented in this work to increase the information available produces better results than the PCA and KPCA approaches." }, { "pmid": "26019610", "title": "Clustering performance comparison using K-means and expectation maximization algorithms.", "abstract": "Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results." } ]
Scientific Reports
27694950
PMC5046183
10.1038/srep33985
Multi-Pass Adaptive Voting for Nuclei Detection in Histopathological Images
Nuclei detection is often a critical initial step in the development of computer-aided diagnosis and prognosis schemes in the context of digital pathology images. While a number of nuclei detection methods have been proposed over the last few years, most of these approaches make idealistic assumptions about the staining quality of the tissue. In this paper, we present a new Multi-Pass Adaptive Voting (MPAV) scheme for nuclei detection which is specifically geared towards images with poor-quality staining and noise on account of tissue preparation artifacts. The MPAV utilizes the symmetric property of the nuclear boundary and adaptively selects gradients from edge fragments to vote for potential nucleus locations. The MPAV was evaluated on three cohorts with different staining methods: Hematoxylin & Eosin, CD31 & Hematoxylin, and Ki-67, where most of the nuclei were unevenly and imprecisely stained. Across a total of 47 images and nearly 17,700 manually labeled nuclei serving as the ground truth, MPAV achieved superior performance, with an area under the precision-recall curve (AUC) of 0.73. Additionally, MPAV outperformed three state-of-the-art nuclei detection methods: a single pass voting method, a multi-pass voting method, and a deep learning based method.
Previous Related Work and Novel Contributions

Table 1 enumerates some recent techniques for nuclei detection. Most approaches tend to use image-derived cues such as color/intensity25,28,29,30,31, edges19,21,24,32,33,34, texture35, self-learned features13,36, and symmetry22,24,27,37.

The color- and texture-based methods require a consistent color/texture appearance of the individual nuclei in order to work optimally. The method presented in ref. 31 applied the Laplacian of Gaussian (LoG) filter to detect the initial seed points representing nuclei; however, due to the uneven distribution of nuclear stain, the response of the LoG filter may not reflect the true nuclear center. Filipczuk et al. applied the circular Hough transform to detect the nuclear center34, but the circular Hough transform assumes that the shape of the underlying region of interest can be represented by a parametric function, i.e., a circle or an ellipse. In poorly stained tissue images, the circular Hough transform is therefore likely to fail due to the great variation in the appearance of nuclear edges and the presence of clusters of edge fragments.

Recently, there has been substantial interest in developing and employing deep learning (DL) based methods for nuclei detection in histology images13,36. DL methods are supervised classification methods that typically employ multiple layers of neural networks for object detection and recognition, and they can be easily extended to multiple different classification tasks. A number of DL based approaches have been proposed for image analysis and classification applications in digital pathology13,36. For instance, Xu et al. proposed a stacked sparse autoencoder (SSAE) to detect nuclei in breast cancer tissue images and showed that the DL scheme was able to outperform hand-crafted features on multi-site/stain histology images. However, DL methods require a large number of dedicated training samples, since the learning process involves a large number of parameters. These approaches therefore tend to be heavily biased and sensitive to the choice of the training set.

The key idea behind voting-based techniques is to cluster circular symmetries along the radial line/inverse gradient direction on an object's contour in order to infer the center of the object of interest. An illustrative example is shown in Fig. 2(a,b). Figure 2(a) shows a synthetic phantom nucleus with the foreground in grey and the background in white. A few sample pixels/points on the nuclear contour, together with their inverse gradient directions, are shown as blue arrows in Fig. 2. Figure 2(b) illustrates the voting procedure with three selected pixels on the contour. Note that for each pixel, a dotted triangle is used to represent an active voting area. The region where the three voting areas converge can be thought of as a region with a high likelihood of containing a nuclear center.

Several effective symmetric voting-based techniques have been developed employing variants of the same principle. Parvin et al.27 proposed a multi-pass voting (MPV) method to calculate the centroids of overlapping nuclei. Qi et al.22 proposed a single-pass voting (SPV) technique followed by a mean-shift procedure to calculate the seed points of overlapping nuclei. To further improve efficiency, Xu et al.24 proposed a technique based on an elliptic descriptor and improved single-pass voting for nuclei via a seed point detection scheme.
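To make the symmetric voting principle described above concrete, the short Python sketch below accumulates votes along the inverse-gradient direction of each edge pixel, in the spirit of a single-pass voting scheme. This is our own minimal illustration, not code from any of the cited methods: the function name and the parameters sigma, r_min, r_max, and cone_half_angle are assumptions, and the sketch presumes dark nuclei on a brighter background in an RGB image.

```python
# Minimal, illustrative sketch of gradient-based symmetric voting for nuclear
# seed detection (single-pass style); assumptions noted in the text above.
import numpy as np
from skimage import color, feature, filters

def single_pass_voting(rgb_image, sigma=2.0, r_min=5, r_max=15,
                       cone_half_angle=np.pi / 6):
    """Accumulate votes along the inverse-gradient direction of edge pixels."""
    smooth = filters.gaussian(color.rgb2gray(rgb_image), sigma)
    edges = feature.canny(smooth)          # edge fragments (nuclear contours)
    gy, gx = np.gradient(smooth)           # intensity gradient of the smoothed image
    votes = np.zeros_like(smooth)

    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        norm = np.hypot(gx[y, x], gy[y, x])
        if norm < 1e-6:
            continue
        # Nuclei are assumed darker than the background, so the inverse gradient
        # at an outer-contour pixel points towards the nuclear interior.
        dy, dx = -gy[y, x] / norm, -gx[y, x] / norm
        base = np.arctan2(dy, dx)
        # Cast votes inside a cone (the "active voting area") along that direction.
        for r in range(r_min, r_max + 1):
            for a in np.linspace(-cone_half_angle, cone_half_angle, 7):
                vy = int(round(y + r * np.sin(base + a)))
                vx = int(round(x + r * np.cos(base + a)))
                if 0 <= vy < smooth.shape[0] and 0 <= vx < smooth.shape[1]:
                    votes[vy, vx] += 1.0
    return votes  # local maxima of this map approximate candidate nuclear centers
```

A mean-shift or local-maxima step on the returned voting map would then yield seed points, as in the SPV pipelines described above.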
In Xu et al.'s approach, this initial nuclear detection step was followed by a marker-controlled watershed algorithm to segment nuclei in H&E stained histology images. In practice, the MPV procedure tends to yield more accurate nuclei detection results than the SPV procedure. The SPV procedure may help improve the overall efficiency of nuclear detection24; however, it needs an additional mean-shift clustering step to identify the local maxima in the voting map. This additional clustering step requires estimating additional parameters and increases overall model complexity.

Since existing voting-based techniques typically utilize edge features, nuclei with hollow interiors can result in incorrect voting and hence in spurious detection results. One example is shown in Fig. 2(c), where we can see a color image, its corresponding edge map, and one of the nuclei, denoted as A. Nucleus A has a hollow interior, so it has two contours, an inner and an outer contour, which result in two edge fragments in the edge map (see the second row of Fig. 2(c)). For the outer nuclear contour, the inverse gradients point inwards, whereas for the inner nuclear contour, the inverse gradients point outwards. As one may expect, the inverse gradients obtained from the inner contour contribute minimally towards identifying the nuclear centroid (because the active voting area falls outside the nucleus, while the nuclear center should be within the nucleus). Another synthetic example of a nucleus with a hollow interior is shown in Fig. 2(c), and a few inverse gradient directions are drawn on the inner contour. In most cases, those inverse gradients from the inner contour will lead to spurious results in regions of clustered nuclei. In Fig. 2(e), three synthetic nuclei with hollow regions are shown. Due to the proximity of these three nuclei, the highlighted red circle region receives a large number of votes and thus could lead to a potential false positive detection. Later in the paper, we show that in real histopathologic images, existing voting-based techniques tend to generate many false positive detections.

In this paper, we present a Multi-Pass Adaptive Voting (MPAV) method. The MPAV is a voting-based technique which adaptively selects and refines the gradient information from the image to infer the locations of nuclear centroids. The schematic for the MPAV is illustrated in Fig. 3. The MPAV consists of three modules: gradient field generation, refinement of the gradient field, and multi-pass voting. Given a color image, a gradient field is generated using image smoothing and edge detection. In the second module, the gradient field is refined: gradients whose direction leads away from the center of a nucleus are removed or corrected. The refined gradient field is then utilized in a multi-pass voting module to guide each edge pixel in generating the nuclear voting map. Finally, a global threshold is applied to the voting map to obtain candidate nuclear centroids. The details of each module are discussed in the next section, and the notations and symbols used in this paper are summarized in Table 2.
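As a rough illustration of how the three modules described above could fit together, the sketch below shows one simple way the gradient-field refinement step might be realized: edge pixels whose inverse gradient leads away from a darker (nucleus-interior) region, such as pixels on the inner contour of a hollow nucleus, are discarded before voting. This is a simplified interpretation under our own assumptions (the probe-based test and the probe parameter are hypothetical), not the authors' implementation.

```python
# Illustrative sketch of gradient field generation plus a simple refinement rule;
# this is an assumption-laden stand-in for the adaptive selection in the paper.
import numpy as np
from skimage import color, feature, filters

def refine_gradient_field(rgb_image, sigma=2.0, probe=3):
    """Build a gradient field, then keep only edge pixels whose inverse
    gradient points towards the (darker) nuclear interior."""
    gray = filters.gaussian(color.rgb2gray(rgb_image), sigma)   # image smoothing
    edges = feature.canny(gray)                                 # edge detection
    gy, gx = np.gradient(gray)

    keep = np.zeros_like(edges, dtype=bool)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        n = np.hypot(gx[y, x], gy[y, x])
        if n < 1e-6:
            continue
        # Step `probe` pixels along the inverse gradient. For an outer-contour
        # pixel of a dark nucleus this lands on a darker, interior pixel; for an
        # inner (hollow) contour it lands on a brighter pixel and is dropped.
        py = int(round(y - probe * gy[y, x] / n))
        px = int(round(x - probe * gx[y, x] / n))
        inside = (0 <= py < gray.shape[0]) and (0 <= px < gray.shape[1])
        keep[y, x] = inside and gray[py, px] < gray[y, x]
    return edges & keep, gx, gy   # refined edge/gradient field, ready for voting
```

The refined field could then drive repeated voting passes, with the radial range and cone angle narrowed at each pass, and a single global threshold applied to the final voting map to yield candidate nuclear centroids, mirroring the module layout described for Fig. 3.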
[ "26186772", "26167385", "24505786", "24145650", "23392336", "23157334", "21333490", "20491597", "26208307", "22614727", "25203987", "22498689", "21383925", "20172780", "22167559", "25192578", "24608059", "23221815", "20359767", "19884070", "20656653", "21947866", "23912498", "18288618", "24557687", "25958195", "21869365", "24704158" ]
[ { "pmid": "26186772", "title": "Feature Importance in Nonlinear Embeddings (FINE): Applications in Digital Pathology.", "abstract": "Quantitative histomorphometry (QH) refers to the process of computationally modeling disease appearance on digital pathology images by extracting hundreds of image features and using them to predict disease presence or outcome. Since constructing a robust and interpretable classifier is challenging in a high dimensional feature space, dimensionality reduction (DR) is often implemented prior to classifier construction. However, when DR is performed it can be challenging to quantify the contribution of each of the original features to the final classification result. We have previously presented a method for scoring features based on their importance for classification on an embedding derived via principal components analysis (PCA). However, nonlinear DR involves the eigen-decomposition of a kernel matrix rather than the data itself, compounding the issue of classifier interpretability. In this paper we present feature importance in nonlinear embeddings (FINE), an extension of our PCA-based feature scoring method to kernel PCA (KPCA), as well as several NLDR algorithms that can be cast as variants of KPCA. FINE is applied to four digital pathology datasets to identify key QH features for predicting the risk of breast and prostate cancer recurrence. Measures of nuclear and glandular architecture and clusteredness were found to play an important role in predicting the likelihood of recurrence of both breast and prostate cancers. Compared to the t-test, Fisher score, and Gini index, FINE was able to identify a stable set of features that provide good classification accuracy on four publicly available datasets from the NIPS 2003 Feature Selection Challenge." }, { "pmid": "26167385", "title": "Content-based image retrieval of digitized histopathology in boosted spectrally embedded spaces.", "abstract": "CONTEXT\nContent-based image retrieval (CBIR) systems allow for retrieval of images from within a database that are similar in visual content to a query image. This is useful for digital pathology, where text-based descriptors alone might be inadequate to accurately describe image content. By representing images via a set of quantitative image descriptors, the similarity between a query image with respect to archived, annotated images in a database can be computed and the most similar images retrieved. Recently, non-linear dimensionality reduction methods have become popular for embedding high-dimensional data into a reduced-dimensional space while preserving local object adjacencies, thereby allowing for object similarity to be determined more accurately in the reduced-dimensional space. 
However, most dimensionality reduction methods implicitly assume, in computing the reduced-dimensional representation, that all features are equally important.\n\n\nAIMS\nIn this paper we present boosted spectral embedding(BoSE), which utilizes a boosted distance metric to selectively weight individual features (based on training data) to subsequently map the data into a reduced-dimensional space.\n\n\nSETTINGS AND DESIGN\nBoSE is evaluated against spectral embedding (SE) (which employs equal feature weighting) in the context of CBIR of digitized prostate and breast cancer histopathology images.\n\n\nMATERIALS AND METHODS\nThe following datasets, which were comprised of a total of 154 hematoxylin and eosin stained histopathology images, were used: (1) Prostate cancer histopathology (benign vs. malignant), (2) estrogen receptor (ER) + breast cancer histopathology (low vs. high grade), and (3) HER2+ breast cancer histopathology (low vs. high levels of lymphocytic infiltration).\n\n\nSTATISTICAL ANALYSIS USED\nWe plotted and calculated the area under precision-recall curves (AUPRC) and calculated classification accuracy using the Random Forest classifier.\n\n\nRESULTS\nBoSE outperformed SE both in terms of CBIR-based (area under the precision-recall curve) and classifier-based (classification accuracy) on average across all of the dimensions tested for all three datasets: (1) Prostate cancer histopathology (AUPRC: BoSE = 0.79, SE = 0.63; Accuracy: BoSE = 0.93, SE = 0.80), (2) ER + breast cancer histopathology (AUPRC: BoSE = 0.79, SE = 0.68; Accuracy: BoSE = 0.96, SE = 0.96), and (3) HER2+ breast cancer histopathology (AUPRC: BoSE = 0.54, SE = 0.44; Accuracy: BoSE = 0.93, SE = 0.91).\n\n\nCONCLUSION\nOur results suggest that BoSE could serve as an important tool for CBIR and classification of high-dimensional biomedical data." }, { "pmid": "24505786", "title": "Cell orientation entropy (COrE): predicting biochemical recurrence from prostate cancer tissue microarrays.", "abstract": "We introduce a novel feature descriptor to describe cancer cells called Cell Orientation Entropy (COrE). The main objective of this work is to employ COrE to quantitatively model disorder of cell/nuclear orientation within local neighborhoods and evaluate whether these measurements of directional disorder are correlated with biochemical recurrence (BCR) in prostate cancer (CaP) patients. COrE has a number of novel attributes that are unique to digital pathology image analysis. Firstly, it is the first rigorous attempt to quantitatively model cell/nuclear orientation. Secondly, it provides for modeling of local cell networks via construction of subgraphs. Thirdly, it allows for quantifying the disorder in local cell orientation via second order statistical features. We evaluated the ability of 39 COrE features to capture the characteristics of cell orientation in CaP tissue microarray (TMA) images in order to predict 10 year BCR in men with CaP following radical prostatectomy. Randomized 3-fold cross-validation via a random forest classifier evaluated on a combination of COrE and other nuclear features achieved an accuracy of 82.7 +/- 3.1% on a dataset of 19 BCR and 20 non-recurrence patients. Our results suggest that COrE features could be extended to characterize disease states in other histological cancer images in addition to prostate cancer." 
}, { "pmid": "24145650", "title": "A quantitative histomorphometric classifier (QuHbIC) identifies aggressive versus indolent p16-positive oropharyngeal squamous cell carcinoma.", "abstract": "Human papillomavirus-related (p16-positive) oropharyngeal squamous cell carcinoma patients develop recurrent disease, mostly distant metastasis, in approximately 10% of cases, and the remaining patients, despite cure, can have major morbidity from treatment. Identifying patients with aggressive versus indolent tumors is critical. Hematoxylin and eosin-stained slides of a microarray cohort of p16-positive oropharyngeal squamous cell carcinoma cases were digitally scanned. A novel cluster cell graph was constructed using the nuclei as vertices to characterize and measure spatial distribution and cell clustering. A series of topological features defined on each node of the subgraph were analyzed, and a random forest decision tree classifier was developed. The classifier (QuHbIC) was validated over 25 runs of 3-fold cross-validation using case subsets for independent training and testing. Nineteen (11.9%) of the 160 patients on the array developed recurrence. QuHbIC correctly predicted outcomes in 140 patients (87.5% accuracy). There were 23 positive patients, of whom 11 developed recurrence (47.8% positive predictive value), and 137 negative patients, of whom only 8 developed recurrence (94.2% negative predictive value). The best other predictive features were stage T4 (18 patients; 83.1% accuracy) and N3 nodal disease (10 patients; 88.6% accuracy). QuHbIC-positive patients had poorer overall, disease-free, and disease-specific survival (P<0.001 for each). In multivariate analysis, QuHbIC-positive patients still showed significantly poorer disease-free and disease-specific survival, independent of all other variables. In summary, using just tiny hematoxylin and eosin punches, a computer-aided histomorphometric classifier (QuHbIC) can strongly predict recurrence risk. With prospective validation, this testing may be useful to stratify patients into different treatment groups." }, { "pmid": "23392336", "title": "Multi-field-of-view framework for distinguishing tumor grade in ER+ breast cancer from entire histopathology slides.", "abstract": "Modified Bloom-Richardson (mBR) grading is known to have prognostic value in breast cancer (BCa), yet its use in clinical practice has been limited by intra- and interobserver variability. The development of a computerized system to distinguish mBR grade from entire estrogen receptor-positive (ER+) BCa histopathology slides will help clinicians identify grading discrepancies and improve overall confidence in the diagnostic result. In this paper, we isolate salient image features characterizing tumor morphology and texture to differentiate entire hematoxylin and eosin (H and E) stained histopathology slides based on mBR grade. The features are used in conjunction with a novel multi-field-of-view (multi-FOV) classifier--a whole-slide classifier that extracts features from a multitude of FOVs of varying sizes--to identify important image features at different FOV sizes. Image features utilized include those related to the spatial arrangement of cancer nuclei (i.e., nuclear architecture) and the textural patterns within nuclei (i.e., nuclear texture). 
Using slides from 126 ER+ patients (46 low, 60 intermediate, and 20 high mBR grade), our grading system was able to distinguish low versus high, low versus intermediate, and intermediate versus high grade patients with area under curve values of 0.93, 0.72, and 0.74, respectively. Our results suggest that the multi-FOV classifier is able to 1) successfully discriminate low, medium, and high mBR grade and 2) identify specific image features at different FOV sizes that are important for distinguishing mBR grade in H and E stained ER+ BCa histology slides." }, { "pmid": "23157334", "title": "Digital imaging in pathology: whole-slide imaging and beyond.", "abstract": "Digital imaging in pathology has undergone an exponential period of growth and expansion catalyzed by changes in imaging hardware and gains in computational processing. Today, digitization of entire glass slides at near the optical resolution limits of light can occur in 60 s. Whole slides can be imaged in fluorescence or by use of multispectral imaging systems. Computational algorithms have been developed for cytometric analysis of cells and proteins in subcellular locations by use of multiplexed antibody staining protocols. Digital imaging is unlocking the potential to integrate primary image features into high-dimensional genomic assays by moving microscopic analysis into the digital age. This review highlights the emerging field of digital pathology and explores the methods and analytic approaches being developed for the application and use of these methods in clinical care and research settings." }, { "pmid": "21333490", "title": "Computer-aided prognosis: predicting patient and disease outcome via quantitative fusion of multi-scale, multi-modal data.", "abstract": "Computer-aided prognosis (CAP) is a new and exciting complement to the field of computer-aided diagnosis (CAD) and involves developing and applying computerized image analysis and multi-modal data fusion algorithms to digitized patient data (e.g. imaging, tissue, genomic) for helping physicians predict disease outcome and patient survival. While a number of data channels, ranging from the macro (e.g. MRI) to the nano-scales (proteins, genes) are now being routinely acquired for disease characterization, one of the challenges in predicting patient outcome and treatment response has been in our inability to quantitatively fuse these disparate, heterogeneous data sources. At the Laboratory for Computational Imaging and Bioinformatics (LCIB)(1) at Rutgers University, our team has been developing computerized algorithms for high dimensional data and image analysis for predicting disease outcome from multiple modalities including MRI, digital pathology, and protein expression. Additionally, we have been developing novel data fusion algorithms based on non-linear dimensionality reduction methods (such as Graph Embedding) to quantitatively integrate information from multiple data sources and modalities with the overarching goal of optimizing meta-classifiers for making prognostic predictions. In this paper, we briefly describe 4 representative and ongoing CAP projects at LCIB. 
These projects include (1) an Image-based Risk Score (IbRiS) algorithm for predicting outcome of Estrogen receptor positive breast cancer patients based on quantitative image analysis of digitized breast cancer biopsy specimens alone, (2) segmenting and determining extent of lymphocytic infiltration (identified as a possible prognostic marker for outcome in human epidermal growth factor amplified breast cancers) from digitized histopathology, (3) distinguishing patients with different Gleason grades of prostate cancer (grade being known to be correlated to outcome) from digitized needle biopsy specimens, and (4) integrating protein expression measurements obtained from mass spectrometry with quantitative image features derived from digitized histopathology for distinguishing between prostate cancer patients at low and high risk of disease recurrence following radical prostatectomy." }, { "pmid": "20491597", "title": "Integrated diagnostics: a conceptual framework with examples.", "abstract": "With the advent of digital pathology, imaging scientists have begun to develop computerized image analysis algorithms for making diagnostic (disease presence), prognostic (outcome prediction), and theragnostic (choice of therapy) predictions from high resolution images of digitized histopathology. One of the caveats to developing image analysis algorithms for digitized histopathology is the ability to deal with highly dense, information rich datasets; datasets that would overwhelm most computer vision and image processing algorithms. Over the last decade, manifold learning and non-linear dimensionality reduction schemes have emerged as popular and powerful machine learning tools for pattern recognition problems. However, these techniques have thus far been applied primarily to classification and analysis of computer vision problems (e.g., face detection). In this paper, we discuss recent work by a few groups in the application of manifold learning methods to problems in computer aided diagnosis, prognosis, and theragnosis of digitized histopathology. In addition, we discuss some exciting recent developments in the application of these methods for multi-modal data fusion and classification; specifically the building of meta-classifiers by fusion of histological image and proteomic signatures for prostate cancer outcome prediction." }, { "pmid": "26208307", "title": "Stacked Sparse Autoencoder (SSAE) for Nuclei Detection on Breast Cancer Histopathology Images.", "abstract": "Automated nuclear detection is a critical step for a number of computer assisted pathology related image analysis algorithms such as for automated grading of breast cancer tissue specimens. The Nottingham Histologic Score system is highly correlated with the shape and appearance of breast cancer nuclei in histopathological images. However, automated nucleus detection is complicated by 1) the large number of nuclei and the size of high resolution digitized pathology images, and 2) the variability in size, shape, appearance, and texture of the individual nuclei. Recently there has been interest in the application of \"Deep Learning\" strategies for classification and analysis of big image data. Histopathology, given its size and complexity, represents an excellent use case for application of deep learning strategies. In this paper, a Stacked Sparse Autoencoder (SSAE), an instance of a deep learning strategy, is presented for efficient nuclei detection on high-resolution histopathological images of breast cancer. 
The SSAE learns high-level features from just pixel intensities alone in order to identify distinguishing features of nuclei. A sliding window operation is applied to each image in order to represent image patches via high-level features obtained via the auto-encoder, which are then subsequently fed to a classifier which categorizes each image patch as nuclear or non-nuclear. Across a cohort of 500 histopathological images (2200 × 2200) and approximately 3500 manually segmented individual nuclei serving as the groundtruth, SSAE was shown to have an improved F-measure 84.49% and an average area under Precision-Recall curve (AveP) 78.83%. The SSAE approach also out-performed nine other state of the art nuclear detection strategies." }, { "pmid": "22614727", "title": "Automated segmentation of the melanocytes in skin histopathological images.", "abstract": "In the diagnosis of skin melanoma by analyzing histopathological images, the detection of the melanocytes in the epidermis area is an important step. However, the detection of melanocytes in the epidermis area is dicult because other keratinocytes that are very similar to the melanocytes are also present. This paper proposes a novel computer-aided technique for segmentation of the melanocytes in the skin histopathological images. In order to reduce the local intensity variant, a mean-shift algorithm is applied for the initial segmentation of the image. A local region recursive segmentation algorithm is then proposed to filter out the candidate nuclei regions based on the domain prior knowledge. To distinguish the melanocytes from other keratinocytes in the epidermis area, a novel descriptor, named local double ellipse descriptor (LDED), is proposed to measure the local features of the candidate regions. The LDED uses two parameters: region ellipticity and local pattern characteristics to distinguish the melanocytes from the candidate nuclei regions. Experimental results on 28 dierent histopathological images of skin tissue with dierent zooming factors show that the proposed technique provides a superior performance." }, { "pmid": "25203987", "title": "Supervised multi-view canonical correlation analysis (sMVCCA): integrating histologic and proteomic features for predicting recurrent prostate cancer.", "abstract": "In this work, we present a new methodology to facilitate prediction of recurrent prostate cancer (CaP) following radical prostatectomy (RP) via the integration of quantitative image features and protein expression in the excised prostate. Creating a fused predictor from high-dimensional data streams is challenging because the classifier must 1) account for the \"curse of dimensionality\" problem, which hinders classifier performance when the number of features exceeds the number of patient studies and 2) balance potential mismatches in the number of features across different channels to avoid classifier bias towards channels with more features. Our new data integration methodology, supervised Multi-view Canonical Correlation Analysis (sMVCCA), aims to integrate infinite views of highdimensional data to provide more amenable data representations for disease classification. Additionally, we demonstrate sMVCCA using Spearman's rank correlation which, unlike Pearson's correlation, can account for nonlinear correlations and outliers. Forty CaP patients with pathological Gleason scores 6-8 were considered for this study. 21 of these men revealed biochemical recurrence (BCR) following RP, while 19 did not. 
For each patient, 189 quantitative histomorphometric attributes and 650 protein expression levels were extracted from the primary tumor nodule. The fused histomorphometric/proteomic representation via sMVCCA combined with a random forest classifier predicted BCR with a mean AUC of 0.74 and a maximum AUC of 0.9286. We found sMVCCA to perform statistically significantly (p < 0.05) better than comparative state-of-the-art data fusion strategies for predicting BCR. Furthermore, Kaplan-Meier analysis demonstrated improved BCR-free survival prediction for the sMVCCA-fused classifier as compared to histology or proteomic features alone." }, { "pmid": "22498689", "title": "An integrated region-, boundary-, shape-based active contour for multiple object overlap resolution in histological imagery.", "abstract": "Active contours and active shape models (ASM) have been widely employed in image segmentation. A major limitation of active contours, however, is in their 1) inability to resolve boundaries of intersecting objects and to 2) handle occlusion. Multiple overlapping objects are typically segmented out as a single object. On the other hand, ASMs are limited by point correspondence issues since object landmarks need to be identified across multiple objects for initial object alignment. ASMs are also are constrained in that they can usually only segment a single object in an image. In this paper, we present a novel synergistic boundary and region-based active contour model that incorporates shape priors in a level set formulation with automated initialization based on watershed. We demonstrate an application of these synergistic active contour models using multiple level sets to segment nuclear and glandular structures on digitized histopathology images of breast and prostate biopsy specimens. Unlike previous related approaches, our model is able to resolve object overlap and separate occluded boundaries of multiple objects simultaneously. The energy functional of the active contour is comprised of three terms. The first term is the prior shape term, modeled on the object of interest, thereby constraining the deformation achievable by the active contour. The second term, a boundary-based term detects object boundaries from image gradients. The third term drives the shape prior and the contour towards the object boundary based on region statistics. The results of qualitative and quantitative evaluation on 100 prostate and 14 breast cancer histology images for the task of detecting and segmenting nuclei and lymphocytes reveals that the model easily outperforms two state of the art segmentation schemes (geodesic active contour and Rousson shape-based model) and on average is able to resolve up to 91% of overlapping/occluded structures in the images." }, { "pmid": "21383925", "title": "Barriers and facilitators to adoption of soft copy interpretation from the user perspective: Lessons learned from filmless radiology for slideless pathology.", "abstract": "BACKGROUND\nAdoption of digital images for pathological specimens has been slower than adoption of digital images in radiology, despite a number of anticipated advantages for digital images in pathology. 
In this paper, we explore the factors that might explain this slower rate of adoption.\n\n\nMATERIALS AND METHOD\nSemi-structured interviews on barriers and facilitators to the adoption of digital images were conducted with two radiologists, three pathologists, and one pathologist's assistant.\n\n\nRESULTS\nBarriers and facilitators to adoption of digital images were reported in the areas of performance, workflow-efficiency, infrastructure, integration with other software, and exposure to digital images. The primary difference between the settings was that performance with the use of digital images as compared to the traditional method was perceived to be higher in radiology and lower in pathology. Additionally, exposure to digital images was higher in radiology than pathology, with some radiologists exclusively having been trained and/or practicing with digital images. The integration of digital images both improved and reduced efficiency in routine and non-routine workflow patterns in both settings, and was variable across the different organizations. A comparison of these findings with prior research on adoption of other health information technologies suggests that the barriers to adoption of digital images in pathology are relatively tractable.\n\n\nCONCLUSIONS\nImproving performance using digital images in pathology would likely accelerate adoption of innovative technologies that are facilitated by the use of digital images, such as electronic imaging databases, electronic health records, double reading for challenging cases, and computer-aided diagnostic systems." }, { "pmid": "20172780", "title": "Expectation-maximization-driven geodesic active contour with overlap resolution (EMaGACOR): application to lymphocyte segmentation on breast cancer histopathology.", "abstract": "The presence of lymphocytic infiltration (LI) has been correlated with nodal metastasis and tumor recurrence in HER2+ breast cancer (BC). The ability to automatically detect and quantify extent of LI on histopathology imagery could potentially result in the development of an image based prognostic tool for human epidermal growth factor receptor-2 (HER2+) BC patients. Lymphocyte segmentation in hematoxylin and eosin (H&E) stained BC histopathology images is complicated by the similarity in appearance between lymphocyte nuclei and other structures (e.g., cancer nuclei) in the image. Additional challenges include biological variability, histological artifacts, and high prevalence of overlapping objects. Although active contours are widely employed in image segmentation, they are limited in their ability to segment overlapping objects and are sensitive to initialization. In this paper, we present a new segmentation scheme, expectation-maximization (EM) driven geodesic active contour with overlap resolution (EMaGACOR), which we apply to automatically detecting and segmenting lymphocytes on HER2+ BC histopathology images. EMaGACOR utilizes the expectation-maximization algorithm for automatically initializing a geodesic active contour (GAC) and includes a novel scheme based on heuristic splitting of contours via identification of high concavity points for resolving overlapping structures. EMaGACOR was evaluated on a total of 100 HER2+ breast biopsy histology images and was found to have a detection sensitivity of over 86% and a positive predictive value of over 64%. By comparison, the EMaGAC model (without overlap resolution) and GAC model yielded corresponding detection sensitivities of 42% and 19%, respectively. 
Furthermore, EMaGACOR was able to correctly resolve over 90% of overlaps between intersecting lymphocytes. Hausdorff distance (HD) and mean absolute distance (MAD) for EMaGACOR were found to be 2.1 and 0.9 pixels, respectively, and significantly better compared to the corresponding performance of the EMaGAC and GAC models. EMaGACOR is an efficient, robust, reproducible, and accurate segmentation technique that could potentially be applied to other biomedical image analysis problems." }, { "pmid": "22167559", "title": "Robust segmentation of overlapping cells in histopathology specimens using parallel seed detection and repulsive level set.", "abstract": "Automated image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of breast cancer. Automated segmentation of the cells comprising imaged tissue microarrays (TMAs) is a prerequisite for any subsequent quantitative analysis. Unfortunately, crowding and overlapping of cells present significant challenges for most traditional segmentation algorithms. In this paper, we propose a novel algorithm that can reliably separate touching cells in hematoxylin-stained breast TMA specimens that have been acquired using a standard RGB camera. The algorithm is composed of two steps. It begins with a fast, reliable object center localization approach that utilizes single-path voting followed by mean-shift clustering. Next, the contour of each cell is obtained using a level set algorithm based on an interactive model. We compared the experimental results with those reported in the most current literature. Finally, performance was evaluated by comparing the pixel-wise accuracy provided by human experts with that produced by the new automated segmentation algorithm. The method was systematically tested on 234 image patches exhibiting dense overlap and containing more than 2200 cells. It was also tested on whole slide images including blood smears and TMAs containing thousands of cells. Since the voting step of the seed detection algorithm is well suited for parallelization, a parallel version of the algorithm was implemented using graphic processing units (GPU) that resulted in significant speedup over the C/C++ implementation." }, { "pmid": "25192578", "title": "An efficient technique for nuclei segmentation based on ellipse descriptor analysis and improved seed detection algorithm.", "abstract": "In this paper, we propose an efficient method for segmenting cell nuclei in the skin histopathological images. The proposed technique consists of four modules. First, it separates the nuclei regions from the background with an adaptive threshold technique. Next, an elliptical descriptor is used to detect the isolated nuclei with elliptical shapes. This descriptor classifies the nuclei regions based on two ellipticity parameters. Nuclei clumps and nuclei with irregular shapes are then localized by an improved seed detection technique based on voting in the eroded nuclei regions. Finally, undivided nuclei regions are segmented by a marked watershed algorithm. Experimental results on 114 different image patches indicate that the proposed technique provides a superior performance in nuclei detection and segmentation." }, { "pmid": "24608059", "title": "Toward automatic mitotic cell detection and segmentation in multispectral histopathological images.", "abstract": "The count of mitotic cells is a critical factor in most cancer grading systems. 
Extracting the mitotic cell from the histopathological image is a very challenging task. In this paper, we propose an efficient technique for detecting and segmenting the mitotic cells in the high-resolution multispectral image. The proposed technique consists of three main modules: discriminative image generation, mitotic cell candidate detection and segmentation, and mitotic cell candidate classification. In the first module, a discriminative image is obtained by linear discriminant analysis using ten different spectral band images. A set of mitotic cell candidate regions is then detected and segmented by the Bayesian modeling and local-region threshold method. In the third module, a 226 dimension feature is extracted from the mitotic cell candidates and their surrounding regions. An imbalanced classification framework is then applied to perform the classification for the mitotic cell candidates in order to detect the real mitotic cells. The proposed technique has been evaluated on a publicly available dataset of 35 × 10 multispectral images, in which 224 mitotic cells are manually labeled by experts. The proposed technique is able to provide superior performance compared to the existing technique, 81.5% sensitivity rate and 33.9% precision rate in terms of detection performance, and 89.3% sensitivity rate and 87.5% precision rate in terms of segmentation performance." }, { "pmid": "23221815", "title": "Invariant delineation of nuclear architecture in glioblastoma multiforme for clinical and molecular association.", "abstract": "Automated analysis of whole mount tissue sections can provide insights into tumor subtypes and the underlying molecular basis of neoplasm. However, since tumor sections are collected from different laboratories, inherent technical and biological variations impede analysis for very large datasets such as The Cancer Genome Atlas (TCGA). Our objective is to characterize tumor histopathology, through the delineation of the nuclear regions, from hematoxylin and eosin (H&E) stained tissue sections. Such a representation can then be mined for intrinsic subtypes across a large dataset for prediction and molecular association. Furthermore, nuclear segmentation is formulated within a multi-reference graph framework with geodesic constraints, which enables computation of multidimensional representations, on a cell-by-cell basis, for functional enrichment and bioinformatics analysis. Here, we present a novel method, multi-reference graph cut (MRGC), for nuclear segmentation that overcomes technical variations associated with sample preparation by incorporating prior knowledge from manually annotated reference images and local image features. The proposed approach has been validated on manually annotated samples and then applied to a dataset of 377 Glioblastoma Multiforme (GBM) whole slide images from 146 patients. For the GBM cohort, multidimensional representation of the nuclear features and their organization have identified 1) statistically significant subtypes based on several morphometric indexes, 2) whether each subtype can be predictive or not, and 3) that the molecular correlates of predictive subtypes are consistent with the literature. Data and intermediaries for a number of tumor types (GBM, low grade glial, and kidney renal clear carcinoma) are available at: http://tcga.lbl.gov for correlation with TCGA molecular data. 
The website also provides an interface for panning and zooming of whole mount tissue sections with/without overlaid segmentation results for quality control." }, { "pmid": "20359767", "title": "Automated segmentation of tissue images for computerized IHC analysis.", "abstract": "This paper presents two automated methods for the segmentation of immunohistochemical tissue images that overcome the limitations of the manual approach as well as of the existing computerized techniques. The first independent method, based on unsupervised color clustering, recognizes automatically the target cancerous areas in the specimen and disregards the stroma; the second method, based on colors separation and morphological processing, exploits automated segmentation of the nuclear membranes of the cancerous cells. Extensive experimental results on real tissue images demonstrate the accuracy of our techniques compared to manual segmentations; additional experiments show that our techniques are more effective in immunohistochemical images than popular approaches based on supervised learning or active contours. The proposed procedure can be exploited for any applications that require tissues and cells exploration and to perform reliable and standardized measures of the activity of specific proteins involved in multi-factorial genetic pathologies." }, { "pmid": "19884070", "title": "Improved automatic detection and segmentation of cell nuclei in histopathology images.", "abstract": "Automatic segmentation of cell nuclei is an essential step in image cytometry and histometry. Despite substantial progress, there is a need to improve accuracy, speed, level of automation, and adaptability to new applications. This paper presents a robust and accurate novel method for segmenting cell nuclei using a combination of ideas. The image foreground is extracted automatically using a graph-cuts-based binarization. Next, nuclear seed points are detected by a novel method combining multiscale Laplacian-of-Gaussian filtering constrained by distance-map-based adaptive scale selection. These points are used to perform an initial segmentation that is refined using a second graph-cuts-based algorithm incorporating the method of alpha expansions and graph coloring to reduce computational complexity. Nuclear segmentation results were manually validated over 25 representative images (15 in vitro images and 10 in vivo images, containing more than 7400 nuclei) drawn from diverse cancer histopathology studies, and four types of segmentation errors were investigated. The overall accuracy of the proposed segmentation algorithm exceeded 86%. The accuracy was found to exceed 94% when only over- and undersegmentation errors were considered. The confounding image characteristics that led to most detection/segmentation errors were high cell density, high degree of clustering, poor image contrast and noisy background, damaged/irregular nuclei, and poor edge information. We present an efficient semiautomated approach to editing automated segmentation results that requires two mouse clicks per operation." }, { "pmid": "20656653", "title": "Segmenting clustered nuclei using H-minima transform-based marker extraction and contour parameterization.", "abstract": "In this letter, we present a novel watershed-based method for segmentation of cervical and breast cell images. We formulate the segmentation of clustered nuclei as an optimization problem. 
A hypothesis concerning the nuclei, which involves a priori knowledge with respect to the shape of nuclei, is tested to solve the optimization problem. We first apply the distance transform to the clustered nuclei. A marker extraction scheme based on the H-minima transform is introduced to obtain the optimal segmentation result from the distance map. In order to estimate the optimal h-value, a size-invariant segmentation distortion evaluation function is defined based on the fitting residuals between the segmented region boundaries and fitted models. Ellipsoidal modeling of contours is introduced to adjust nuclei contours for more effective analysis. Experiments on a variety of real microscopic cell images show that the proposed method yields more accurate segmentation results than the state-of-the-art watershed-based methods." }, { "pmid": "21947866", "title": "Machine vision-based localization of nucleic and cytoplasmic injection sites on low-contrast adherent cells.", "abstract": "Automated robotic bio-micromanipulation can improve the throughput and efficiency of single-cell experiments. Adherent cells, such as fibroblasts, include a wide range of mammalian cells and are usually very thin with highly irregular morphologies. Automated micromanipulation of these cells is a beneficial yet challenging task, where the machine vision sub-task is addressed in this article. The necessary but neglected problem of localizing injection sites on the nucleus and the cytoplasm is defined and a novel two-stage model-based algorithm is proposed. In Stage I, the gradient information associated with the nucleic regions is extracted and used in a mathematical morphology clustering framework to roughly localize the nucleus. Next, this preliminary segmentation information is used to estimate an ellipsoidal model for the nucleic region, which is then used as an attention window in a k-means clustering-based iterative search algorithm for fine localization of the nucleus and nucleic injection site (NIS). In Stage II, a geometrical model is built on each localized nucleus and employed in a new texture-based region-growing technique called Growing Circles Algorithm to localize the cytoplasmic injection site (CIS). The proposed algorithm has been tested on 405 images containing more than 1,000 NIH/3T3 fibroblast cells, and yielded the precision rates of 0.918, 0.943, and 0.866 for the NIS, CIS, and combined NIS-CIS localizations, respectively." }, { "pmid": "23912498", "title": "Computer-Aided Breast Cancer Diagnosis Based on the Analysis of Cytological Images of Fine Needle Biopsies.", "abstract": "The effectiveness of the treatment of breast cancer depends on its timely detection. An early step in the diagnosis is the cytological examination of breast material obtained directly from the tumor. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies to characterize these biopsies as either benign or malignant. Instead of relying on the accurate segmentation of cell nuclei, the nuclei are estimated by circles using the circular Hough transform. The resulting circles are then filtered to keep only high-quality estimations for further analysis by a support vector machine which classifies detected circles as correct or incorrect on the basis of texture features and the percentage of nuclei pixels according to a nuclei mask obtained using Otsu's thresholding method. 
A set of 25 features of the nuclei is used in the classification of the biopsies by four different classifiers. The complete diagnostic procedure was tested on 737 microscopic images of fine needle biopsies obtained from patients and achieved 98.51% effectiveness. The results presented in this paper demonstrate that a computerized medical diagnosis system based on our method would be effective, providing valuable, accurate diagnostic information." }, { "pmid": "18288618", "title": "An automated method for cell detection in zebrafish.", "abstract": "Quantification of cells is a critical step towards the assessment of cell fate in neurological disease or developmental models. Here, we present a novel cell detection method for the automatic quantification of zebrafish neuronal cells, including primary motor neurons, Rohon-Beard neurons, and retinal cells. Our method consists of four steps. First, a diffused gradient vector field is produced. Subsequently, the orientations and magnitude information of diffused gradients are accumulated, and a response image is computed. In the third step, we perform non-maximum suppression on the response image and identify the detection candidates. In the fourth and final step the detected objects are grouped into clusters based on their color information. Using five different datasets depicting zebrafish cells, we show that our method consistently displays high sensitivity and specificity of over 95%. Our results demonstrate the general applicability of this method to different data samples, including nuclear staining, immunohistochemistry, and cell death detection." }, { "pmid": "24557687", "title": "Automatic Ki-67 counting using robust cell detection and online dictionary learning.", "abstract": "Ki-67 proliferation index is a valid and important biomarker to gauge neuroendocrine tumor (NET) cell progression within the gastrointestinal tract and pancreas. Automatic Ki-67 assessment is very challenging due to complex variations of cell characteristics. In this paper, we propose an integrated learning-based framework for accurate automatic Ki-67 counting for NET. The main contributions of our method are: 1) A robust cell counting and boundary delineation algorithm that is designed to localize both tumor and nontumor cells. 2) A novel online sparse dictionary learning method to select a set of representative training samples. 3) An automated framework that is used to differentiate tumor from nontumor cells (such as lymphocytes) and immunopositive from immunonegative tumor cells for the assessment of Ki-67 proliferation index. The proposed method has been extensively tested using 46 NET cases. The performance is compared with pathologists' manual annotations. The automatic Ki-67 counting is quite accurate compared with pathologists' manual annotations. This is much more accurate than existing methods." }, { "pmid": "25958195", "title": "Sparse Non-negative Matrix Factorization (SNMF) based color unmixing for breast histopathological image analysis.", "abstract": "Color deconvolution has emerged as a popular method for color unmixing as a pre-processing step for image analysis of digital pathology images. One deficiency of this approach is that the stain matrix is pre-defined which requires specific knowledge of the data. This paper presents an unsupervised Sparse Non-negative Matrix Factorization (SNMF) based approach for color unmixing. We evaluate this approach for color unmixing of breast pathology images. 
Compared to Non-negative Matrix Factorization (NMF), the sparseness constraint imposed on coefficient matrix aims to use more meaningful representation of color components for separating stained colors. In this work SNMF is leveraged for decomposing pure stained color in both Immunohistochemistry (IHC) and Hematoxylin and Eosin (H&E) images. SNMF is compared with Principle Component Analysis (PCA), Independent Component Analysis (ICA), Color Deconvolution (CD), and Non-negative Matrix Factorization (NMF) based approaches. SNMF demonstrated improved performance in decomposing brown diaminobenzidine (DAB) component from 36 IHC images as well as accurately segmenting about 1400 nuclei and 500 lymphocytes from H & E images." }, { "pmid": "21869365", "title": "A computational approach to edge detection.", "abstract": "This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge." }, { "pmid": "24704158", "title": "Automated quantification of MART1-verified Ki-67 indices: useful diagnostic aid in melanocytic lesions.", "abstract": "The MART1-verified Ki-67 proliferation index is a valuable aid to distinguish melanomas from nevi. Because such indices are quantifiable by image analysis, they may provide a novel automated diagnostic aid. This study aimed to validate the diagnostic performance of automated dermal Ki-67 indices and to explore the diagnostic capability of epidermal Ki-67 in lesions both with and without a dermal component. In addition, we investigated the automated indices' ability to predict sentinel lymph node (SLN) status. Paraffin-embedded tissues from 84 primary cutaneous melanomas (35 with SLN biopsy), 22 melanoma in situ, and 270 nevi were included consecutively. Whole slide images were captured from Ki-67/MART1 double stains, and image analysis computed Ki-67 indices for epidermis and dermis. In lesions with a dermal component, the area under the receiver operating characteristic (ROC) curve was 0.79 (95% confidence interval [CI], 0.72-0.86) for dermal indices. 
By excluding lesions with few melanocytic cells, this area increased to 0.93 (95% CI, 0.88-0.98). A simultaneous analysis of epidermis and dermis yielded an ROC area of 0.94 (95% CI, 0.91-0.96) for lesions with a dermal component and 0.98 (95% CI, 0.97-1.0) for lesions with a considerable dermal component. For all lesions, the ROC area of the simultaneous analysis was 0.89 (95% CI, 0.85-0.92). SLN-positive patients generally had a higher index than SLN-negative patients (P ≤ .003). Conclusively, an automated diagnostic aid seems feasible in melanocytic pathology. The dermal Ki-67 index was inferior to a combined epidermal and dermal index in diagnosis but valuable for predicting the SLN status of our melanoma patients." } ]
Scientific Reports
27703256
PMC5050509
10.1038/srep34759
Feature Subset Selection for Cancer Classification Using Weight Local Modularity
Microarrays have recently become an important tool for profiling the global gene expression patterns of tissues. Gene selection is a popular technique for cancer classification that aims to identify, from thousands of candidates, a small number of informative genes that may contribute to the occurrence of cancer, in order to obtain high predictive accuracy. This problem has been extensively studied in recent years. This study develops a novel feature selection (FS) method for gene subset selection, called WLMGS, which utilizes the Weight Local Modularity (WLM) of a complex network. In the proposed method, the discriminative power of a gene subset is evaluated using the weight local modularity of a weighted sample graph built in that subset, in which the intra-class distances are small and the inter-class distances are large. A higher local modularity of the gene subset corresponds to greater discriminative power of the subset. With a forward search strategy, a gene subset that is more informative as a group can be selected for the classification process. Computational experiments show that the proposed algorithm can select a small subset of predictive genes as a group while preserving classification accuracy.
Related Work
Owing to the importance of gene selection in the analysis of microarray datasets and the diagnosis of cancer, various techniques for the gene selection problem have been proposed.
Because of the high dimensionality of most microarray analyses, fast and efficient gene selection techniques such as univariate filter methods8910 have gained considerable attention. Most filter methods treat FS as a ranking problem: the top-scoring features/genes are selected while the rest are discarded. Scoring functions represent the core of ranking methods and are used to assign a relevance index to each feature/gene. Common scoring functions include the Z-score11 and Welch t-test12 from the t-test family, the Bayesian t-test13 from the Bayesian scoring family, and the Info gain14 method from the theory-based scoring family. However, filter-ranking methods ignore the correlations within a gene subset, so the selected subset may contain redundant information. Multivariate filter techniques have therefore been proposed to capture the correlations between genes, including correlation-based feature selection (CFS)15, the Markov blanket filter method16, and mutual information (MI) based methods such as mRMR17, MIFS18, MIFS_U19, and CMIM20.
In recent years, metaheuristic techniques, a type of wrapper method, have gained extensive attention and have proven to be among the best-performing techniques for solving gene selection problems2122. Genetic algorithms (GAs) are generally used as the search engine for feature subsets in combination with classification methods; examples include the estimation of distribution algorithm (EDA) with SVM232425, the genetic algorithm support vector machine (GA-SVM)26, and the K nearest neighbors/genetic algorithm (KNN/GA)27.
However, most existing methods, such as the mutual information based methods17181920, select only genes that are individually strong predictors of the target class and ignore ‘weak’ genes that possess strong discriminatory power as a group but are weak as individuals3.
Over the past few decades, complex network theory has been applied in areas as diverse as biological, social, technological, and information networks. In the present study, a novel method is proposed to search for such ‘weak’ genes using a sequential forward search strategy. The method introduces an efficient criterion for evaluating the discriminative power of a gene subset as a group, based on the weight local modularity (WLM) of a complex network. It exploits the community structure that most networks exhibit: within a weight local module (community), nodes are separated by locally small distances, while distances between communities are relatively large28. By constructing a weighted sample graph (WSG) in a gene subset, a large weight local modularity value means that the samples are locally well separated in that subset, and hence that the subset is more informative for classification. The proposed method therefore has the capability to select an optimal gene subset with stronger discriminative power as a group. Its effectiveness is validated by experiments on several publicly available microarray datasets, on which it performs well in terms of both gene selection and cancer classification accuracy.
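To make the group-wise evaluation idea concrete, the following is a minimal sketch of a sequential forward search driven by a group-level separability score. It uses a simple intra/inter-class distance ratio as a stand-in for the weight local modularity criterion described above; the function names, the synthetic data, and the scoring formula are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: greedy forward gene selection with a group-wise separability score.
import numpy as np

def group_score(X, y):
    """Score a candidate gene subset: small intra-class distances and
    large inter-class distances give a higher score."""
    intra, inter = [], []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)   # distances to all samples
        same = (y == y[i])
        same[i] = False                        # exclude the sample itself
        diff = (y != y[i])
        intra.append(d[same].mean())
        inter.append(d[diff].mean())
    return np.mean(inter) / (np.mean(intra) + 1e-12)

def forward_select(X, y, k):
    """Greedy forward search: add the gene that most improves the
    group-wise score until k genes are selected."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best_gene, best_score = None, -np.inf
        for g in remaining:
            s = group_score(X[:, selected + [g]], y)
            if s > best_score:
                best_gene, best_score = g, s
        selected.append(best_gene)
        remaining.remove(best_gene)
    return selected

# Example on synthetic data: 40 samples, 100 genes, 2 classes.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 100))
X[y == 1, :3] += 2.0            # make the first three genes informative
print(forward_select(X, y, 3))
```

The design point mirrored here is that each candidate gene is scored jointly with the genes already selected, so a gene that is weak on its own can still be added if it improves the separability of the group.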
[ "23124059", "17720704", "16790051", "11435405", "15327980", "11435405", "22149632", "15680584", "16119262", "18515276", "15087314", "20975711" ]
[ { "pmid": "23124059", "title": "Selection of interdependent genes via dynamic relevance analysis for cancer diagnosis.", "abstract": "Microarray analysis is widely accepted for human cancer diagnosis and classification. However the high dimensionality of microarray data poses a great challenge to classification. Gene selection plays a key role in identifying salient genes from thousands of genes in microarray data that can directly contribute to the symptom of disease. Although various excellent selection methods are currently available, one common problem of these methods is that genes which have strong discriminatory power as a group but are weak as individuals will be discarded. In this paper, a new gene selection method is proposed for cancer diagnosis and classification by retaining useful intrinsic groups of interdependent genes. The primary characteristic of this method is that the relevance between each gene and target will be dynamically updated when a new gene is selected. The effectiveness of our method is validated by experiments on six publicly available microarray data sets. Experimental results show that the classification performance and enrichment score achieved by our proposed method is better than those of other selection methods." }, { "pmid": "17720704", "title": "A review of feature selection techniques in bioinformatics.", "abstract": "Feature selection techniques have become an apparent need in many bioinformatics applications. In addition to the large pool of techniques that have already been developed in the machine learning and data mining fields, specific applications in bioinformatics have led to a wealth of newly proposed techniques. In this article, we make the interested reader aware of the possibilities of feature selection, providing a basic taxonomy of feature selection techniques, and discussing their use, variety and potential in a number of both common as well as upcoming bioinformatics applications." }, { "pmid": "16790051", "title": "An assessment of recently published gene expression data analyses: reporting experimental design and statistical factors.", "abstract": "BACKGROUND\nThe analysis of large-scale gene expression data is a fundamental approach to functional genomics and the identification of potential drug targets. Results derived from such studies cannot be trusted unless they are adequately designed and reported. The purpose of this study is to assess current practices on the reporting of experimental design and statistical analyses in gene expression-based studies.\n\n\nMETHODS\nWe reviewed hundreds of MEDLINE-indexed papers involving gene expression data analysis, which were published between 2003 and 2005. These papers were examined on the basis of their reporting of several factors, such as sample size, statistical power and software availability.\n\n\nRESULTS\nAmong the examined papers, we concentrated on 293 papers consisting of applications and new methodologies. These papers did not report approaches to sample size and statistical power estimation. Explicit statements on data transformation and descriptions of the normalisation techniques applied prior to data analyses (e.g. classification) were not reported in 57 (37.5%) and 104 (68.4%) of the methodology papers respectively. With regard to papers presenting biomedical-relevant applications, 41(29.1 %) of these papers did not report on data normalisation and 83 (58.9%) did not describe the normalisation technique applied. 
Clustering-based analysis, the t-test and ANOVA represent the most widely applied techniques in microarray data analysis. But remarkably, only 5 (3.5%) of the application papers included statements or references to assumption about variance homogeneity for the application of the t-test and ANOVA. There is still a need to promote the reporting of software packages applied or their availability.\n\n\nCONCLUSION\nRecently-published gene expression data analysis studies may lack key information required for properly assessing their design quality and potential impact. There is a need for more rigorous reporting of important experimental factors such as statistical power and sample size, as well as the correct description and justification of statistical methods applied. This paper highlights the importance of defining a minimum set of information required for reporting on statistical design and analysis of expression data. By improving practices of statistical analysis reporting, the scientific community can facilitate quality assurance and peer-review processes, as well as the reproducibility of results." }, { "pmid": "11435405", "title": "An efficient and robust statistical modeling approach to discover differentially expressed genes using genomic expression profiles.", "abstract": "We have developed a statistical regression modeling approach to discover genes that are differentially expressed between two predefined sample groups in DNA microarray experiments. Our model is based on well-defined assumptions, uses rigorous and well-characterized statistical measures, and accounts for the heterogeneity and genomic complexity of the data. In contrast to cluster analysis, which attempts to define groups of genes and/or samples that share common overall expression profiles, our modeling approach uses known sample group membership to focus on expression profiles of individual genes in a sensitive and robust manner. Further, this approach can be used to test statistical hypotheses about gene expression. To demonstrate this methodology, we compared the expression profiles of 11 acute myeloid leukemia (AML) and 27 acute lymphoblastic leukemia (ALL) samples from a previous study (Golub et al. 1999) and found 141 genes differentially expressed between AML and ALL with a 1% significance at the genomic level. Using this modeling approach to compare different sample groups within the AML samples, we identified a group of genes whose expression profiles correlated with that of thrombopoietin and found that genes whose expression associated with AML treatment outcome lie in recurrent chromosomal locations. Our results are compared with those obtained using t-tests or Wilcoxon rank sum statistics." }, { "pmid": "15327980", "title": "Rank products: a simple, yet powerful, new method to detect differentially regulated genes in replicated microarray experiments.", "abstract": "One of the main objectives in the analysis of microarray experiments is the identification of genes that are differentially expressed under two experimental conditions. This task is complicated by the noisiness of the data and the large number of genes that are examined simultaneously. Here, we present a novel technique for identifying differentially expressed genes that does not originate from a sophisticated statistical model but rather from an analysis of biological reasoning. The new technique, which is based on calculating rank products (RP) from replicate experiments, is fast and simple. 
At the same time, it provides a straightforward and statistically stringent way to determine the significance level for each gene and allows for the flexible control of the false-detection rate and familywise error rate in the multiple testing situation of a microarray experiment. We use the RP technique on three biological data sets and show that in each case it performs more reliably and consistently than the non-parametric t-test variant implemented in Tusher et al.'s significance analysis of microarrays (SAM). We also show that the RP results are reliable in highly noisy data. An analysis of the physiological function of the identified genes indicates that the RP approach is powerful for identifying biologically relevant expression changes. In addition, using RP can lead to a sharp reduction in the number of replicate experiments needed to obtain reproducible results." }, { "pmid": "11435405", "title": "An efficient and robust statistical modeling approach to discover differentially expressed genes using genomic expression profiles.", "abstract": "We have developed a statistical regression modeling approach to discover genes that are differentially expressed between two predefined sample groups in DNA microarray experiments. Our model is based on well-defined assumptions, uses rigorous and well-characterized statistical measures, and accounts for the heterogeneity and genomic complexity of the data. In contrast to cluster analysis, which attempts to define groups of genes and/or samples that share common overall expression profiles, our modeling approach uses known sample group membership to focus on expression profiles of individual genes in a sensitive and robust manner. Further, this approach can be used to test statistical hypotheses about gene expression. To demonstrate this methodology, we compared the expression profiles of 11 acute myeloid leukemia (AML) and 27 acute lymphoblastic leukemia (ALL) samples from a previous study (Golub et al. 1999) and found 141 genes differentially expressed between AML and ALL with a 1% significance at the genomic level. Using this modeling approach to compare different sample groups within the AML samples, we identified a group of genes whose expression profiles correlated with that of thrombopoietin and found that genes whose expression associated with AML treatment outcome lie in recurrent chromosomal locations. Our results are compared with those obtained using t-tests or Wilcoxon rank sum statistics." }, { "pmid": "22149632", "title": "Separating significant matches from spurious matches in DNA sequences.", "abstract": "Word matches are widely used to compare genomic sequences. Complete genome alignment methods often rely on the use of matches as anchors for building their alignments, and various alignment-free approaches that characterize similarities between large sequences are based on word matches. Among matches that are retrieved from the comparison of two genomic sequences, a part of them may correspond to spurious matches (SMs), which are matches obtained by chance rather than by homologous relationships. The number of SMs depends on the minimal match length (ℓ) that has to be set in the algorithm used to retrieve them. Indeed, if ℓ is too small, a lot of matches are recovered but most of them are SMs. Conversely, if ℓ is too large, fewer matches are retrieved but many smaller significant matches are certainly ignored. To date, the choice of ℓ mostly depends on empirical threshold values rather than robust statistical methods. 
To overcome this problem, we propose a statistical approach based on the use of a mixture model of geometric distributions to characterize the distribution of the length of matches obtained from the comparison of two genomic sequences." }, { "pmid": "15680584", "title": "Gene selection from microarray data for cancer classification--a machine learning approach.", "abstract": "A DNA microarray can track the expression levels of thousands of genes simultaneously. Previous research has demonstrated that this technology can be useful in the classification of cancers. Cancer microarray data normally contains a small number of samples which have a large number of gene expression levels as features. To select relevant genes involved in different types of cancer remains a challenge. In order to extract useful gene information from cancer microarray data and reduce dimensionality, feature selection algorithms were systematically investigated in this study. Using a correlation-based feature selector combined with machine learning algorithms such as decision trees, naïve Bayes and support vector machines, we show that classification performance at least as good as published results can be obtained on acute leukemia and diffuse large B-cell lymphoma microarray data sets. We also demonstrate that a combined use of different classification and feature selection approaches makes it possible to select relevant genes with high confidence. This is also the first paper which discusses both computational and biological evidence for the involvement of zyxin in leukaemogenesis." }, { "pmid": "16119262", "title": "Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy.", "abstract": "Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a compact set of superior features at very low cost. We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminate analysis) and four different data sets (handwritten digits, arrhythmia, NCI cancer cell lines, and lymphoma tissues). The results confirm that mRMR leads to promising improvement on feature selection and classification accuracy." }, { "pmid": "18515276", "title": "Sparse kernel methods for high-dimensional survival data.", "abstract": "UNLABELLED\nSparse kernel methods like support vector machines (SVM) have been applied with great success to classification and (standard) regression settings. Existing support vector classification and regression techniques however are not suitable for partly censored survival data, which are typically analysed using Cox's proportional hazards model. As the partial likelihood of the proportional hazards model only depends on the covariates through inner products, it can be 'kernelized'. The kernelized proportional hazards model however yields a solution that is dense, i.e. the solution depends on all observations. 
One of the key features of an SVM is that it yields a sparse solution, depending only on a small fraction of the training data. We propose two methods. One is based on a geometric idea, where-akin to support vector classification-the margin between the failed observation and the observations currently at risk is maximised. The other approach is based on obtaining a sparse model by adding observations one after another akin to the Import Vector Machine (IVM). Data examples studied suggest that both methods can outperform competing approaches.\n\n\nAVAILABILITY\nSoftware is available under the GNU Public License as an R package and can be obtained from the first author's website http://www.maths.bris.ac.uk/~maxle/software.html." }, { "pmid": "15087314", "title": "A comparative study of feature selection and multiclass classification methods for tissue classification based on gene expression.", "abstract": "This paper studies the problem of building multiclass classifiers for tissue classification based on gene expression. The recent development of microarray technologies has enabled biologists to quantify gene expression of tens of thousands of genes in a single experiment. Biologists have begun collecting gene expression for a large number of samples. One of the urgent issues in the use of microarray data is to develop methods for characterizing samples based on their gene expression. The most basic step in the research direction is binary sample classification, which has been studied extensively over the past few years. This paper investigates the next step-multiclass classification of samples based on gene expression. The characteristics of expression data (e.g. large number of genes with small sample size) makes the classification problem more challenging. The process of building multiclass classifiers is divided into two components: (i) selection of the features (i.e. genes) to be used for training and testing and (ii) selection of the classification method. This paper compares various feature selection methods as well as various state-of-the-art classification methods on various multiclass gene expression datasets. Our study indicates that multiclass classification problem is much more difficult than the binary one for the gene expression datasets. The difficulty lies in the fact that the data are of high dimensionality and that the sample size is small. The classification accuracy appears to degrade very rapidly as the number of classes increases. In particular, the accuracy was very low regardless of the choices of the methods for large-class datasets (e.g. NCI60 and GCM). While increasing the number of samples is a plausible solution to the problem of accuracy degradation, it is important to develop algorithms that are able to analyze effectively multiple-class expression data for these special datasets." }, { "pmid": "20975711", "title": "Identification of high-quality cancer prognostic markers and metastasis network modules.", "abstract": "Cancer patients are often overtreated because of a failure to identify low-risk cancer patients. Thus far, no algorithm has been able to successfully generate cancer prognostic gene signatures with high accuracy and robustness in order to identify these patients. In this paper, we developed an algorithm that identifies prognostic markers using tumour gene microarrays focusing on metastasis-driving gene expression signals. 
Application of the algorithm to breast cancer samples identified prognostic gene signature sets for both estrogen receptor (ER) negative (-) and positive (+) subtypes. A combinatorial use of the signatures allowed the stratification of patients into low-, intermediate- and high-risk groups in both the training set and in eight independent testing sets containing 1,375 samples. The predictive accuracy for the low-risk group reached 87-100%. Integrative network analysis identified modules in which each module contained the genes of a signature and their direct interacting partners that are cancer driver-mutating genes. These modules are recurrent in many breast tumours and contribute to metastasis." } ]
Frontiers in Neuroscience
27774048
PMC5054006
10.3389/fnins.2016.00454
Design and Evaluation of Fusion Approach for Combining Brain and Gaze Inputs for Target Selection
Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human–computer interaction. In this paper, we investigate the combination of gaze and BCIs. We propose a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models of each input in order to better estimate the intent of the user. We evaluated its performance against existing gaze-based and brain–computer interaction techniques. Twelve participants took part in our study, in which they had to search for and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than previous gaze and BCI hybrid interaction techniques for 10 of the 12 participants, while being 29% faster on average. However, similarly to what has been observed in hybrid gaze-and-speech interaction, the gaze-only interaction technique still provided the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, in order to design better hybrid interfaces.
2. Related work
This section presents the most relevant studies related to the scope of this paper. We focus on target selection tasks, in particular on existing gaze- and SSVEP-based methods for target selection.
2.1. Target selection
According to Foley et al. (1984), any interaction task can be decomposed into a small set of basic interaction tasks. Foley proposed six types of interaction tasks for human–computer interaction: select, position, orient, path, quantify, and text. Depending on the interaction context, other basic interaction tasks have been proposed since then. The select interaction task is described as: “The user makes a selection from a set of alternatives” (Foley et al., 1984). This set can be a group of commands, or a “collection of displayed entities that form part of the application information presentation.” In human–computer interaction, selection is often performed with a point-and-click paradigm, generally driven by a computer mouse. The performance of an interaction technique for selection is usually measured with Fitts' law, a descriptive model of human movement which predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target. This model is well suited to measuring pointing speed, and has thus been widely used for point-and-click selection methods where the “pointing” is critical, while the “clicking” is not.
In the specific context of hands-free interaction, other input devices need to be used. Among them, gaze tracking has shown promising results (Velichkovsky et al., 1997; Zhu and Yang, 2002). Speech recognition and BCIs are other alternatives for hands-free interaction (Gürkök et al., 2011). Hands-free interaction methods can rely on a point-and-click paradigm, but in this specific context the “clicking” is often as problematic as the “pointing” (Velichkovsky et al., 1997; Zander et al., 2010). Gaze tracking, speech recognition, and BCIs all share the particularity of presenting a relatively high error rate compared to a keyboard or a mouse, for example.
2.2. Gaze-based interaction
In order to improve dwell-based techniques, several methods have been proposed, such as Fish-eye methods (Ashmore et al., 2005). Fish-eye methods magnify (zoom in on) the area around the gaze position, thus decreasing the required selection precision, but without addressing the Midas touch problem. However, the omnipresence of the visual deformation can degrade the exploration of the graphical interface. A potential solution is to zoom in only when potential targets are available (Ashmore et al., 2005; Istance et al., 2008). Another solution relies on designing user interfaces specifically suited for gaze-based selection, such as hierarchical menus (Kammerer et al., 2008).
2.3. SSVEP-based BCIs
When the human eye is stimulated by a flickering stimulus, a brain response can be observed in the cortical visual areas, in the form of activity at the stimulation frequency as well as at its harmonics. This response is known as the Steady-State Visually Evoked Potential (SSVEP). SSVEP interfaces are frequently used for brain–computer interaction (Legeny et al., 2013; Quan et al., 2013), as SSVEP-based BCIs have a high precision and information transfer rate compared to other BCIs (Wang et al., 2010).
The classical usage of SSVEP-based BCIs is target selection (Quan et al., 2013; Shyu et al., 2013).
In order to select a target, the user has to focus on the flickering target she wants to select, each visible target being associated with a stimulation at a different frequency. The SSVEP response is detected in the brain activity of the user through analysis of the EEG data, and the corresponding target is selected. Most of the time, SSVEP-based interfaces are limited to a small number of targets (commonly three), although some attempts have been successful at using more targets in a synchronous context (Wang et al., 2010; Manyakov et al., 2013; Chen et al., 2014).
2.4. Gaze and EEG based hybrid interaction
The concept of a hybrid BCI was originally introduced in Pfurtscheller et al. (2010), where it was defined as a system “composed of two BCIs, or at least one BCI and another system” that fulfills four criteria: “(i) the device must rely on signals recorded directly from the brain; (ii) there must be at least one recordable brain signal that the user can intentionally modulate to effect goal-directed behavior; (iii) real time processing; and (iv) the user must obtain feedback.”
In the past few years, it has been proposed to combine BCIs with a keyboard (Nijholt and Tan, 2008), a computer mouse (Mercier-Ganady et al., 2013), or a joystick (Leeb et al., 2013). Several types of BCIs can also be used at the same time (Li et al., 2010; Fruitet et al., 2011). In Gürkök et al. (2011), participants could switch at will between an SSVEP-based BCI and a speech recognition system. For a more complete review of hybrid BCIs, the interested reader can refer to Pfurtscheller et al. (2010).
All these contributions can be broadly classified into two categories: sequential or simultaneous processing (Pfurtscheller et al., 2010). Hybrid BCIs based on sequential processing use two or more inputs to accomplish two or more interaction tasks, each input being responsible for one task. Hybrid BCIs based on simultaneous processing can fuse several inputs in order to achieve a single interaction task (Müller-Putz et al., 2011).
2.4.1. Gaze and BCI-based hybrid interaction
Although the idea of combining BCIs and gaze tracking has already been proposed, it has been only marginally explored. Existing works have mainly focused on P300 (Choi et al., 2013) and motor imagery (Zander et al., 2010) BCIs. Regarding P300 paradigms, Choi et al. (2013) combined gaze tracking with a P300-based BCI for a spelling application; compared to a standard P300 speller, the number of accessible characters and the detection accuracy were improved. In contrast, Zander et al. (2010) proposed to control a 2D cursor with the gaze and to emulate a mouse “click” with a motor-imagery based brain switch. They found that interaction using only gaze tracking was slightly faster, but that a BCI-based click is a reasonable alternative to dwell time.
Later, Kos'Myna and Tarpin-Bernard (2013) proposed to use both gaze tracking and an SSVEP-based BCI for a selection task in the context of a video game. Gaze tracking was used for a first selection task (selecting an object), followed by a BCI-based selection for a second task (selecting a transformation to apply to the previously selected object). The findings of this study indicate that selection based only on gaze was faster and more intuitive.
So far, attempts at creating hybrid interfaces using EEG and gaze tracking inputs for target selection have focused on sequential methods and have proposed ways to separate the selection into secondary tasks. Zander et al. (2010) separated the selection task into pointing and clicking, while both Choi et al. (2013) and Kos'Myna and Tarpin-Bernard (2013) used a two-step selection. In this paper, we propose a novel hybrid interaction technique that simultaneously fuses information from gaze tracking and an SSVEP-based BCI at a low level of abstraction.
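As an illustration of what fusing the two inputs at a low level of abstraction can look like, the sketch below combines a gaze-based probability over on-screen targets with per-target SSVEP scores by a normalized product. The Gaussian gaze likelihood, the softmax over SSVEP scores, and all parameter values are illustrative assumptions rather than the exact probabilistic models evaluated in this paper.

```python
# Sketch: naive Bayes style fusion of gaze and SSVEP evidence for target selection.
import numpy as np

def gaze_probs(gaze_xy, target_xy, sigma=50.0):
    """P(target | gaze): Gaussian likelihood of the gaze point around each
    target centre (in pixels), normalised over targets."""
    d2 = np.sum((target_xy - gaze_xy) ** 2, axis=1)
    lik = np.exp(-d2 / (2 * sigma ** 2))
    return lik / lik.sum()

def ssvep_probs(scores):
    """P(target | EEG): softmax over per-frequency SSVEP detection scores
    (e.g., one correlation value per flickering target)."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def fuse(p_gaze, p_ssvep):
    """Combine the two distributions by a normalised product, assuming
    conditional independence of the two inputs given the intended target."""
    p = p_gaze * p_ssvep
    return p / p.sum()

# Three on-screen targets, one gaze sample, and three SSVEP scores.
targets = np.array([[100.0, 100.0], [400.0, 100.0], [250.0, 300.0]])
p_g = gaze_probs(np.array([110.0, 95.0]), targets)
p_s = ssvep_probs(np.array([0.42, 0.18, 0.25]))
p = fuse(p_g, p_s)
print("selected target:", int(np.argmax(p)), "posterior:", p.round(3))
```

Because both inputs contribute a full probability distribution over the same set of targets, an ambiguous gaze estimate can be compensated by confident SSVEP evidence and vice versa, which is the main motivation for fusion over sequential use of the two inputs.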
[ "23486216", "23594762", "22589242", "20582271", "8361834", "16933428" ]
[ { "pmid": "23486216", "title": "Enhanced perception of user intention by combining EEG and gaze-tracking for brain-computer interfaces (BCIs).", "abstract": "Speller UI systems tend to be less accurate because of individual variation and the noise of EEG signals. Therefore, we propose a new method to combine the EEG signals and gaze-tracking. This research is novel in the following four aspects. First, two wearable devices are combined to simultaneously measure both the EEG signal and the gaze position. Second, the speller UI system usually has a 6 × 6 matrix of alphanumeric characters, which has disadvantage in that the number of characters is limited to 36. Thus, a 12 × 12 matrix that includes 144 characters is used. Third, in order to reduce the highlighting time of each of the 12 × 12 rows and columns, only the three rows and three columns (which are determined on the basis of the 3 × 3 area centered on the user's gaze position) are highlighted. Fourth, by analyzing the P300 EEG signal that is obtained only when each of the 3 × 3 rows and columns is highlighted, the accuracy of selecting the correct character is enhanced. The experimental results showed that the accuracy of proposed method was higher than the other methods." }, { "pmid": "23594762", "title": "Sampled sinusoidal stimulation profile and multichannel fuzzy logic classification for monitor-based phase-coded SSVEP brain-computer interfacing.", "abstract": "OBJECTIVE\nThe performance and usability of brain-computer interfaces (BCIs) can be improved by new paradigms, stimulation methods, decoding strategies, sensor technology etc. In this study we introduce new stimulation and decoding methods for electroencephalogram (EEG)-based BCIs that have targets flickering at the same frequency but with different phases.\n\n\nAPPROACH\nThe phase information is estimated from the EEG data, and used for target command decoding. All visual stimulation is done on a conventional (60-Hz) LCD screen. Instead of the 'on/off' visual stimulation, commonly used in phase-coded BCI, we propose one based on a sampled sinusoidal intensity profile. In order to fully exploit the circular nature of the evoked phase response, we introduce a filter feature selection procedure based on circular statistics and propose a fuzzy logic classifier designed to cope with circular information from multiple channels jointly.\n\n\nMAIN RESULTS\nWe show that the proposed visual stimulation enables us not only to encode more commands under the same conditions, but also to obtain EEG responses with a more stable phase. We also demonstrate that the proposed decoding approach outperforms existing ones, especially for the short time windows used.\n\n\nSIGNIFICANCE\nThe work presented here shows how to overcome some of the limitations of screen-based visual stimulation. The superiority of the proposed decoding approach demonstrates the importance of preserving the circularity of the data during the decoding stage." }, { "pmid": "22589242", "title": "Stimulus specificity of a steady-state visual-evoked potential-based brain-computer interface.", "abstract": "The mechanisms of neural excitation and inhibition when given a visual stimulus are well studied. It has been established that changing stimulus specificity such as luminance contrast or spatial frequency can alter the neuronal activity and thus modulate the visual-evoked response. 
In this paper, we study the effect that stimulus specificity has on the classification performance of a steady-state visual-evoked potential-based brain-computer interface (SSVEP-BCI). For example, we investigate how closely two visual stimuli can be placed before they compete for neural representation in the cortex and thus influence BCI classification accuracy. We characterize stimulus specificity using the four stimulus parameters commonly encountered in SSVEP-BCI design: temporal frequency, spatial size, number of simultaneously displayed stimuli and their spatial proximity. By varying these quantities and measuring the SSVEP-BCI classification accuracy, we are able to determine the parameters that provide optimal performance. Our results show that superior SSVEP-BCI accuracy is attained when stimuli are placed spatially more than 5° apart, with size that subtends at least 2° of visual angle, when using a tagging frequency of between high alpha and beta band. These findings may assist in deciding the stimulus parameters for optimal SSVEP-BCI design." }, { "pmid": "20582271", "title": "The hybrid BCI.", "abstract": "Nowadays, everybody knows what a hybrid car is. A hybrid car normally has two engines to enhance energy efficiency and reduce CO2 output. Similarly, a hybrid brain-computer interface (BCI) is composed of two BCIs, or at least one BCI and another system. A hybrid BCI, like any BCI, must fulfill the following four criteria: (i) the device must rely on signals recorded directly from the brain; (ii) there must be at least one recordable brain signal that the user can intentionally modulate to effect goal-directed behaviour; (iii) real time processing; and (iv) the user must obtain feedback. This paper introduces hybrid BCIs that have already been published or are in development. We also introduce concepts for future work. We describe BCIs that classify two EEG patterns: one is the event-related (de)synchronisation (ERD, ERS) of sensorimotor rhythms, and the other is the steady-state visual evoked potential (SSVEP). Hybrid BCIs can either process their inputs simultaneously, or operate two systems sequentially, where the first system can act as a \"brain switch\". For example, we describe a hybrid BCI that simultaneously combines ERD and SSVEP BCIs. We also describe a sequential hybrid BCI, in which subjects could use a brain switch to control an SSVEP-based hand orthosis. Subjects who used this hybrid BCI exhibited about half the false positives encountered while using the SSVEP BCI alone. A brain switch can also rely on hemodynamic changes measured through near-infrared spectroscopy (NIRS). Hybrid BCIs can also use one brain signal and a different type of input. This additional input can be an electrophysiological signal such as the heart rate, or a signal from an external device such as an eye tracking system." }, { "pmid": "8361834", "title": "Response selection, sensitivity, and taste-test performance.", "abstract": "Tasters selected the odd stimulus from among sets of three samples of party dip. Two samples came from one batch, and one sample came from another batch. The physicochemical difference between the batches consisted of the presence or absence of added salt. Two different tests of discriminability were undertaken by the same subjects with the same stimuli: the triangle test and the three-alternative forced-choice (3-AFC) method. 
Although different numbers of correct selections were obtained in the two tasks, an index of discriminability, d', had the same value when the data were analyzed in accordance with the Thurstone-Ura and signal-detection models, respectively. The average data support Frijters's (1979b) contention that different models of the discrimination process are appropriate to the results of the triangular and the 3-AFC procedures. Further analysis of the data revealed that discrimination was poorer for trios containing one physicochemically weak stimulus and two stronger stimuli than it was for trios containing one stronger stimulus and two weak stimuli. A two-signal 3-AFC task was undertaken by some subjects, and d' estimates from this task were lower than expected on the basis of performance in the other tasks." }, { "pmid": "16933428", "title": "Measures of sensitivity based on a single hit rate and false alarm rate: the accuracy, precision, and robustness of d', Az, and A'.", "abstract": "Signal detection theory offers several indexes of sensitivity (d', Az, and A') that are appropriate for two-choice discrimination when data consist of one hit rate and one false alarm rate per condition. These measures require simplifying assumptions about how target and lure evidence is distributed. We examine three statistical properties of these indexes: accuracy (good agreement between the parameter and the sampling distribution mean), precision (small variance of the sampling distribution), and robustness (small influence of violated assumptions on accuracy). We draw several conclusions from the results. First, a variety of parameters (sample size, degree of discriminability, and magnitude of hits and false alarms) influence statistical bias in these indexes. Comparing conditions that differ in these parameters entails discrepancies that can be reduced by increasing N. Second, unequal variance of the evidence distributions produces significant bias that cannot be reduced by increasing N-a serious drawback to the use of these sensitivity indexes when variance is unknown. Finally, their relative statistical performances suggest that Az is preferable to A'." } ]
JMIR Medical Informatics
27658571
PMC5054236
10.2196/medinform.5353
Characterizing the (Perceived) Newsworthiness of Health Science Articles: A Data-Driven Approach
BackgroundHealth science findings are primarily disseminated through manuscript publications. Information subsidies are used to communicate newsworthy findings to journalists in an effort to earn mass media coverage and further disseminate health science research to mass audiences. Journal editors and news journalists then select which news stories receive coverage and thus public attention.ObjectiveThis study aims to identify attributes of published health science articles that correlate with (1) journal editor issuance of press releases and (2) mainstream media coverage.MethodsWe constructed four novel datasets to identify factors that correlate with press release issuance and media coverage. These corpora include thousands of published articles, subsets of which received press release or mainstream media coverage. We used statistical machine learning methods to identify correlations between words in the science abstracts and press release issuance and media coverage. Further, we used a topic modeling-based machine learning approach to uncover latent topics predictive of the perceived newsworthiness of science articles.ResultsBoth press release issuance for, and media coverage of, health science articles are predictable from corresponding journal article content. For the former task, we achieved average areas under the curve (AUCs) of 0.666 (SD 0.019) and 0.882 (SD 0.018) on two separate datasets, comprising 3024 and 10,760 articles, respectively. For the latter task, models realized mean AUCs of 0.591 (SD 0.044) and 0.783 (SD 0.022) on two datasets—in this case containing 422 and 28,910 pairs, respectively. We reported most-predictive words and topics for press release or news coverage.ConclusionsWe have presented a novel data-driven characterization of content that renders health science “newsworthy.” The analysis provides new insights into the news coverage selection process. For example, it appears epidemiological papers concerning common behaviors (eg, alcohol consumption) tend to receive media attention.
Motivation and Related WorkThe news media are powerful conduits by which to disseminate important information to the public [8]. There is a chasm between the constant demand for up-to-date information and shrinking budgets and staff at newspapers around the globe. Information subsidies such as press releases are often looked to as a way to fill this widening gap. As a standard of industry practice, public relations professionals generate packaged information to promote their organization and to communicate aspects of interest to target the public [9]. Agenda setting has been used to explain the impact of the news media in the formation of public opinion [10]. The theory posits that the decisions made by news gatekeepers (eg, editors and journalists) in choosing and reporting news play an important part in shaping the public’s reality. Information subsidies are tools for public relations practitioners to use to participate in the building process of the news media agenda [11,12]. In the area of health, journalists rely more heavily on sources and experts because of the technical nature of the information [12,13]. Tanner [14] found that television health-news journalists reported relying most heavily on public relations practitioners for story ideas. Another study of science journalists at large newspapers revealed that they work through public relations practitioners and also rely on scientific journals for news of medical discoveries [15]. Viswanath and colleagues [4] found that health and medical reporters and editors from small media organizations were less likely to use government websites or scientific journals as resources, but were more likely to use press releases. In other studies, factors such as newspaper circulation, publication frequency, and community size were shown to influence publication of health information subsidies [16-18]. This study focuses on media coverage of developments in health science and scientific findings. Previous research has highlighted factors that might promote press release generation for, and news coverage of, health science articles. This work has relied predominantly on qualitative approaches. For instance, Woloshin and Schwartz [19] studied the press release process by interviewing journal editors about the process of selecting articles for which to generate press releases. They also analyzed the fraction of press releases that reported study limitations and related characteristics. Tsfati et al. [20] argued through content analysis that scholars’ beliefs in the influence of media increase their motivation and efforts to obtain media coverage, in turn influencing the actual amount of media coverage of their research. In this study, we present a complementary approach using data-driven, quantitative methods to uncover the topical content that correlates with both news release generation and mainstream media coverage. Our hypothesis is that there exist specific topics—for which words and phrases are proxies—that are more likely to be considered “newsworthy.” Identifying such topics will illuminate latent biases in the journalistic process of selecting scientific articles for media coverage.
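To make the kind of content-based prediction described in the abstract above concrete, here is a minimal, generic sketch of scoring press-release issuance from article text with a bag-of-words classifier evaluated by AUC. It is not the authors' pipeline; the tiny toy corpus, the TF-IDF/logistic-regression choice, and all parameter settings are illustrative assumptions only.

```python
# Minimal sketch (not the authors' pipeline): score how well article text predicts
# press-release issuance using a TF-IDF bag-of-words logistic regression and AUC.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-in data; a real experiment would use thousands of labeled articles.
abstracts = [
    "randomized trial shows coffee reduces risk of heart disease",
    "mouse model reveals novel protein interaction in cell signalling",
    "survey links alcohol consumption to increased cancer risk in adults",
    "mathematical analysis of enzyme kinetics under equilibrium assumptions",
    "large cohort study finds exercise lowers mortality in older adults",
    "crystal structure of a bacterial membrane transporter determined",
    "diet and obesity trends in a national sample of teenagers",
    "algorithm improvements for sequence alignment benchmarks",
]
got_press_release = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 if a press release was issued

model = make_pipeline(
    TfidfVectorizer(min_df=1, stop_words="english"),
    LogisticRegression(max_iter=1000),
)
aucs = cross_val_score(model, abstracts, got_press_release, scoring="roc_auc", cv=3)
print("mean AUC: %.3f (SD %.3f)" % (aucs.mean(), aucs.std()))

# Words most associated with press-release issuance (largest positive coefficients).
model.fit(abstracts, got_press_release)
vec = model.named_steps["tfidfvectorizer"]
clf = model.named_steps["logisticregression"]
top = np.argsort(clf.coef_[0])[-10:][::-1]
print(vec.get_feature_names_out()[top])
```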
[ "15249264", "16641081", "19051112", "22546317", "15253997", "12038933", "25498121" ]
[ { "pmid": "15249264", "title": "Health attitudes, health cognitions, and health behaviors among Internet health information seekers: population-based survey.", "abstract": "BACKGROUND\nUsing a functional theory of media use, this paper examines the process of health-information seeking in different domains of Internet use.\n\n\nOBJECTIVE\nBased on an analysis of the 1999 HealthStyles data, this study was designed to demonstrate that people who gather information on the Internet are more health-oriented than non-users of Internet health information.\n\n\nMETHODS\nThe Porter Novelli HealthStyles database, collected annually since 1995, is based on the results of nationally representative postal mail surveys. In 1999, 2636 respondents provided usable data for the HealthStyles database. Independent sample t-tests and logistic regression analyses were conducted.\n\n\nRESULTS\nThe results showed that individuals who searched for health information on the Internet were indeed more likely to be health-oriented than those who did not. Consumers who sought out medical information on the Internet reported higher levels of health-information orientation and healthy activities, as well as stronger health beliefs than those who did not search for medical news on the Internet. It was observed that those who reported searching for information about drugs and medications on the Internet held stronger health beliefs than the non-searchers. Comparison of individuals who reported seeking out information about specific diseases on the Internet with individuals who did not showed those who sought out disease-specific information on the Internet to be more health-oriented. Finally, consumers who sought out healthy lifestyle information on the Internet were more health conscious and more health-information oriented than those who did not. They were also more likely to hold stronger health-oriented beliefs and to engage in healthy activities.\n\n\nCONCLUSIONS\nThe results support the functional theory of Internet use. Internet searchers who used the Internet for a wide range of health purposes were typically more health oriented than non-searchers." }, { "pmid": "16641081", "title": "Cancer information scanning and seeking behavior is associated with knowledge, lifestyle choices, and screening.", "abstract": "Previous research on cancer information focused on active seeking, neglecting information gathered through routine media use or conversation (\"scanning\"). It is hypothesized that both scanning and active seeking influence knowledge, prevention, and screening decisions. This study uses Health Information National Trends Survey (HINTS, 2003) data to describe cancer-related scanning and seeking behavior (SSB) and assess its relationship with knowledge, lifestyle behavior, and screening. Scanning was operationalized as the amount of attention paid to health topics, and seeking was defined as looking for cancer information in the past year. The resulting typology included 41% low-scan/no-seekers; 30% high-scan/no-seekers; 10% low-scan/seekers, and 19% high-scan/seekers. Both scanning and seeking were significantly associated with knowledge about cancer (B=.36; B=.34) and lifestyle choices that may prevent cancer (B=.15; B=.16) in multivariate analyses. Both scanning and seeking were associated with colonoscopy (OR = 1.38, for scanning and OR=1.44, for seeking) and with prostate cancer screening (OR=4.53, scanning; OR=10.01, seeking). 
Scanning was significantly associated with recent mammography (OR=1.46), but seeking was not. Individuals who scan or seek cancer information are those who acquire knowledge, adopt healthy lifestyle behaviors, and get screened for cancer. Causal claims about these associations await further research." }, { "pmid": "19051112", "title": "Occupational practices and the making of health news: a national survey of US Health and medical science journalists.", "abstract": "News media coverage of health topics can frame and heighten the salience of health-related issues, thus influencing the public's beliefs, attitudes, and behaviors. Through their routine coverage of scientific developments, news media are a critical intermediary in translating research for the public, patients, practitioners, and policymakers. Until now, little was known about how health and medical science reporters and editors initiate, prioritize, and develop news stories related to health and medicine. We surveyed 468 reporters and editors representing 463 local and national broadcast and print media outlets to characterize individual characteristics and occupational practices leading to the development of health and medical science news. Our survey revealed that 70% of respondents had bachelor's degrees; 8% were life sciences majors in college. Minorities are underrepresented in health journalism; 97% of respondents were non-Hispanic and 93% were White. Overall, initial ideas for stories come from a \"news source\" followed by press conferences or press releases. Regarding newsworthiness criteria, the \"potential for public impact\" and \"new information or development\" are the major criteria cited, followed by \"ability to provide a human angle\" and \"ability to provide a local angle.\" Significant differences were seen between responses from reporters vs. editors and print vs. broadcast outlets." }, { "pmid": "22546317", "title": "Evaluating the Ozioma cancer news service: a community randomized trial in 24 U.S. cities.", "abstract": "OBJECTIVE\nThis community randomized trial evaluated effects of the Ozioma News Service on the amount and quality of cancer coverage in Black weekly newspapers in 24 U.S. cities.\n\n\nMETHOD\nWe created and operated Ozioma, the first cancer information news service specifically for Black newspapers. Over 21 months, Ozioma developed community- and race-specific cancer news releases for each of 12 Black weekly newspapers in intervention communities. Cancer coverage in these papers was tracked before and during the intervention and compared to 12 Black newspapers in control communities.\n\n\nRESULTS\nFrom 2004 to 2007, we coded 9257 health and cancer stories from 3178 newspaper issues. Intervention newspapers published approximately 4 times the expected number of cancer stories compared to control newspapers (p(12,21 mo)<.01), and also saw an increase in graphics (p(12,21 mo)<.01), local relevance (p(12 mo)=.01), and personal mobilization (p(12 mo)<.10). However, this increased coverage supplanted other health topics and had smaller graphics (NS), had less community mobilization (p(21 mo)=.01), and is less likely to be from a local source (NS).\n\n\nCONCLUSION\nProviding news releases with localized and race-specific features to minority-serving media outlets can increase the quantity of cancer coverage. Results are mixed for the journalistic and public health quality of this increased cancer coverage in Black newspapers." 
}, { "pmid": "15253997", "title": "Building a health promotion agenda in local newspapers.", "abstract": "This is an analysis of newspaper coverage of breast cancer topics during a community-based health promotion campaign. The 4-year campaign, called the Breast Cancer Screening Campaign (BCSC), was devoted to promoting mammography screening in a Midwestern state. The BCSC included both paid advertising and volunteer-led community interventions that were intended, in part, to increase the flow of information about breast cancer and mammography screening in the local mass media. Findings showed that intervention was positively associated with local newspaper content about breast cancer, but the effects were confined to communities served by weekly newspapers. We discuss the implications of this study for future community-based health promotion campaigns." }, { "pmid": "12038933", "title": "Press releases: translating research into news.", "abstract": "CONTEXT\nWhile medical journals strive to ensure accuracy and the acknowledgment of limitations in articles, press releases may not reflect these efforts.\n\n\nMETHODS\nTelephone interviews conducted in January 2001 with press officers at 9 prominent medical journals and analysis of press releases (n = 127) about research articles for the 6 issues of each journal preceding the interviews.\n\n\nRESULTS\nSeven of the 9 journals routinely issue releases; in each case, the editor with the press office selects articles based on perceived newsworthiness and releases are written by press officers trained in communications. Journals have general guidelines (eg, length) but no standards for acknowledging limitations or for data presentation. Editorial input varies from none to intense. Of the 127 releases analyzed, 29 (23%) noted study limitations and 83 (65%) reported main effects using numbers; 58 reported differences between study groups and of these, 26 (55%) provided the corresponding base rate, the format least prone to exaggeration. Industry funding was noted in only 22% of 23 studies receiving such funding.\n\n\nCONCLUSIONS\nPress releases do not routinely highlight study limitations or the role of industry funding. Data are often presented using formats that may exaggerate the perceived importance of findings." }, { "pmid": "25498121", "title": "The association between exaggeration in health related science news and academic press releases: retrospective observational study.", "abstract": "OBJECTIVE\nTo identify the source (press releases or news) of distortions, exaggerations, or changes to the main conclusions drawn from research that could potentially influence a reader's health related behaviour.\n\n\nDESIGN\nRetrospective quantitative content analysis.\n\n\nSETTING\nJournal articles, press releases, and related news, with accompanying simulations.\n\n\nSAMPLE\nPress releases (n = 462) on biomedical and health related science issued by 20 leading UK universities in 2011, alongside their associated peer reviewed research papers and news stories (n = 668).\n\n\nMAIN OUTCOME MEASURES\nAdvice to readers to change behaviour, causal statements drawn from correlational research, and inference to humans from animal research that went beyond those in the associated peer reviewed papers.\n\n\nRESULTS\n40% (95% confidence interval 33% to 46%) of the press releases contained exaggerated advice, 33% (26% to 40%) contained exaggerated causal claims, and 36% (28% to 46%) contained exaggerated inference to humans from animal research. 
When press releases contained such exaggeration, 58% (95% confidence interval 48% to 68%), 81% (70% to 93%), and 86% (77% to 95%) of news stories, respectively, contained similar exaggeration, compared with exaggeration rates of 17% (10% to 24%), 18% (9% to 27%), and 10% (0% to 19%) in news when the press releases were not exaggerated. Odds ratios for each category of analysis were 6.5 (95% confidence interval 3.5 to 12), 20 (7.6 to 51), and 56 (15 to 211). At the same time, there was little evidence that exaggeration in press releases increased the uptake of news.\n\n\nCONCLUSIONS\nExaggeration in news is strongly associated with exaggeration in press releases. Improving the accuracy of academic press releases could represent a key opportunity for reducing misleading health related news." } ]
BioData Mining
27777627
PMC5057496
10.1186/s13040-016-0110-8
FEDRR: fast, exhaustive detection of redundant hierarchical relations for quality improvement of large biomedical ontologies
BackgroundRedundant hierarchical relations refer to such patterns as two paths from one concept to another, one with length one (direct) and the other with length greater than one (indirect). Each redundant relation represents a possibly unintended defect that needs to be corrected in the ontology quality assurance process. Detecting and eliminating redundant relations would help improve the results of all methods relying on the relevant ontological systems as a knowledge source, such as the computation of semantic distance between concepts and ontology matching and alignment.ResultsThis paper introduces a novel and scalable approach, called FEDRR – Fast, Exhaustive Detection of Redundant Relations – for quality assurance work during ontological evolution. FEDRR combines the algorithmic ideas of Dynamic Programming and Topological Sort for exhaustive mining of all redundant hierarchical relations in ontological hierarchies, in O(c·|V|+|E|) time, where |V| is the number of concepts, |E| is the number of relations, and c is a constant in practice. Using FEDRR, we performed an exhaustive search for all redundant is-a relations in two of the largest ontological systems in biomedicine: SNOMED CT and the Gene Ontology (GO). 372 and 1609 redundant is-a relations were found in the 2015-09-01 version of SNOMED CT and the 2015-05-01 version of GO, respectively. We have also run FEDRR on over 190 source vocabularies in the UMLS, a large integrated repository of biomedical ontologies, and identified six sources containing redundant is-a relations. Randomly generated ontologies have also been used to further validate the efficiency of FEDRR.ConclusionsFEDRR provides a generally applicable, effective tool for systematically detecting redundant relations in large ontological systems for quality improvement.
Related workThere has been related work on exploring redundant relations in biomedical ontologies or terminologies [24–28]. Bodenreider [24] investigated the redundancy of hierarchical relations across biomedical terminologies in the UMLS. Unlike Bodenreider's work, FEDRR focuses on developing a fast and scalable approach to detecting redundant hierarchical relations within a single ontology. Gu et al. [25] investigated five categories of possibly incorrect relationship assignments, including redundant relations, in the FMA. The redundant relations were detected based on the interplay between the is_a and other structural relationships (part_of, tributary_of, branch_of). A review of 20 samples from possible redundant part_of relations validated 14 errors, a 70 % correctness rate. FEDRR differs from this work in two ways. Firstly, FEDRR aims to provide an efficient algorithm that identifies redundant hierarchical relations in large ontologies with 100 % accuracy. Secondly, FEDRR can be used for detecting redundant relations in all DAGs with the transitivity property. Mougin [26] studied redundant relations as well as missing relations in GO. The identification of redundant relations was based on the combination of relationships including is_a and is_a, is_a and part_of, part_of and part_of, and is_a and positively_regulates. FEDRR's main focus is to provide a generalizable and efficient approach to detecting redundant hierarchical relations in any ontology, which has been illustrated by applying it to all the UMLS source vocabularies. Moreover, the redundant hierarchical relations detected by FEDRR were evaluated by human experts, whereas [26] reported only the number of redundant relations, without validation by human annotators. Mougin et al. [27] exhaustively examined multiply-related concepts within the UMLS, where multiply-related concepts mean concepts associated through multiple relations. They explored whether such multiply-related concepts were inherited from source vocabularies or introduced by the UMLS integration. About three quarters of multiply-related concepts in the UMLS were found to be caused by the UMLS integration. Additionally, Gu et al. [28] studied questionable relationship triples in the UMLS following four cases: conflicting hierarchical relationships, redundant hierarchical relationships, mixed hierarchical/lateral relationships, and multiple lateral relationships. It was reported in [28] that many examples indicated that questionable triples arose from the UMLS integration process. Bodenreider [29], Mougin and Bodenreider [30], and Halper et al. [31] studied various approaches to removing cyclic hierarchical relations in the UMLS. Although no cycles have been detected in the current UMLS in terms of the AUI, such approaches ([29–31]) to detecting and removing cyclic relations are needed before FEDRR can be applied, because FEDRR is based on the topological sorting of a graph, which requires that the graph contain no cycles.
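For a concrete picture of the core task, the following is a small, naive sketch of redundant is-a detection in a DAG: a direct edge (u, v) is flagged when v is also reachable from u through a longer path. This is not the FEDRR implementation; FEDRR's dynamic-programming bookkeeping over a topological order attains the stated O(c·|V|+|E|) bound, whereas this version materializes full reachability sets and does not.

```python
# Naive sketch only (not FEDRR): flag a direct is-a edge (u, v) as redundant when
# v is also reachable from u via a path of length greater than one.
from graphlib import TopologicalSorter

def redundant_edges(edges):
    """edges: list of (u, v) pairs meaning a direct is-a link u -> v in a DAG."""
    parents = {}
    nodes = set()
    for u, v in edges:
        parents.setdefault(u, set()).add(v)
        nodes.update((u, v))
    for n in nodes:
        parents.setdefault(n, set())

    # Process each node after all of its parents, so reach[p] is complete
    # before it is folded into reach[n].
    order = TopologicalSorter(parents).static_order()
    reach = {}  # node -> every node reachable via one or more is-a links
    for n in order:
        r = set()
        for p in parents[n]:
            r.add(p)
            r |= reach[p]
        reach[n] = r

    redundant = []
    for u, v in edges:
        # (u, v) is redundant if v is reachable from some *other* parent of u.
        if any(v in reach[p] for p in parents[u] if p != v):
            redundant.append((u, v))
    return redundant

# Tiny example: A -> B -> C plus the redundant direct link A -> C.
print(redundant_edges([("A", "B"), ("B", "C"), ("A", "C")]))  # [('A', 'C')]
```

Because the topological sort raises an error on cyclic input, the sketch also reflects the requirement noted above that cycles be removed before detection can run.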
[ "17095826", "26306232", "22580476", "18952949", "16929044", "19475727", "25991129", "23911553" ]
[ { "pmid": "17095826", "title": "SNOMED-CT: The advanced terminology and coding system for eHealth.", "abstract": "A clinical terminology is essential for Electronic Health records. It represents clinical information input into clinical IT systems by clinicians in a machine-readable manner. Use of a Clinical Terminology, implemented within a clinical information system, will enable the delivery of many patient health benefits including electronic clinical decision support, disease screening and enhanced patient safety. For example, it will help reduce medication-prescribing errors, which are currently known to kill or injure many citizens. It will also reduce clinical administration effort and the overall costs of healthcare." }, { "pmid": "26306232", "title": "Mining Relation Reversals in the Evolution of SNOMED CT Using MapReduce.", "abstract": "Relation reversals in ontological systems refer to such patterns as a path from concept A to concept B in one version becoming a path with the position of A and B switched in another version. We present a scalable approach, using cloud computing, to systematically extract all hierarchical relation reversals among 8 SNOMED CT versions from 2009 to 2014. Taking advantage of our MapReduce algorithms for computing transitive closure and large-scale set operations, 48 reversals were found through 28 pairwise comparison of the 8 versions in 18 minutes using a 30-node local cloud, to completely cover all possible scenarios. Except for one, all such reversals occurred in three sub-hierarchies: Body Structure, Clinical Finding, and Procedure. Two (2) reversal pairs involved an uncoupling of the pair before the is-a coupling is reversed. Twelve (12) reversal pairs involved paths of length-two, and none (0) involved paths beyond length-two. Such reversals not only represent areas of potential need for additional modeling work, but also are important for identifying and handling cycles for comparative visualization of ontological evolution." }, { "pmid": "22580476", "title": "COnto-Diff: generation of complex evolution mappings for life science ontologies.", "abstract": "Life science ontologies evolve frequently to meet new requirements or to better reflect the current domain knowledge. The development and adaptation of large and complex ontologies is typically performed collaboratively by several curators. To effectively manage the evolution of ontologies it is essential to identify the difference (Diff) between ontology versions. Such a Diff supports the synchronization of changes in collaborative curation, the adaptation of dependent data such as annotations, and ontology version management. We propose a novel approach COnto-Diff to determine an expressive and invertible diff evolution mapping between given versions of an ontology. Our approach first matches the ontology versions and determines an initial evolution mapping consisting of basic change operations (insert/update/delete). To semantically enrich the evolution mapping we adopt a rule-based approach to transform the basic change operations into a smaller set of more complex change operations, such as merge, split, or changes of entire subgraphs. The proposed algorithm is customizable in different ways to meet the requirements of diverse ontologies and application scenarios. We evaluate the proposed approach for large life science ontologies including the Gene Ontology and the NCI Thesaurus and compare it with PromptDiff. 
We further show how the Diff results can be used for version management and annotation migration in collaborative curation." }, { "pmid": "18952949", "title": "Auditing the semantic completeness of SNOMED CT using formal concept analysis.", "abstract": "OBJECTIVE\nThis study sought to develop and evaluate an approach for auditing the semantic completeness of the SNOMED CT contents using a formal concept analysis (FCA)-based model.\n\n\nDESIGN\nWe developed a model for formalizing the normal forms of SNOMED CT expressions using FCA. Anonymous nodes, identified through the analyses, were retrieved from the model for evaluation. Two quasi-Poisson regression models were developed to test whether anonymous nodes can evaluate the semantic completeness of SNOMED CT contents (Model 1), and for testing whether such completeness differs between 2 clinical domains (Model 2). The data were randomly sampled from all the contexts that could be formed in the 2 largest domains: Procedure and Clinical Finding. Case studies (n = 4) were performed on randomly selected anonymous node samples for validation.\n\n\nMEASUREMENTS\nIn Model 1, the outcome variable is the number of fully defined concepts within a context, while the explanatory variables are the number of lattice nodes and the number of anonymous nodes. In Model 2, the outcome variable is the number of anonymous nodes and the explanatory variables are the number of lattice nodes and a binary category for domain (Procedure/Clinical Finding).\n\n\nRESULTS\nA total of 5,450 contexts from the 2 domains were collected for analyses. Our findings revealed that the number of anonymous nodes had a significant negative correlation with the number of fully defined concepts within a context (p < 0.001). Further, the Clinical Finding domain had fewer anonymous nodes than the Procedure domain (p < 0.001). Case studies demonstrated that the anonymous nodes are an effective index for auditing SNOMED CT.\n\n\nCONCLUSION\nThe anonymous nodes retrieved from FCA-based analyses are a candidate proxy for the semantic completeness of the SNOMED CT contents. Our novel FCA-based approach can be useful for auditing the semantic completeness of SNOMED CT contents, or any large ontology, within or across domains." }, { "pmid": "16929044", "title": "Auditing as part of the terminology design life cycle.", "abstract": "OBJECTIVE\nTo develop and test an auditing methodology for detecting errors in medical terminologies satisfying systematic inheritance. This methodology is based on various abstraction taxonomies that provide high-level views of a terminology and highlight potentially erroneous concepts.\n\n\nDESIGN\nOur auditing methodology is based on dividing concepts of a terminology into smaller, more manageable units. First, we divide the terminology's concepts into areas according to their relationships/roles. Then each multi-rooted area is further divided into partial-areas (p-areas) that are singly-rooted. Each p-area contains a set of structurally and semantically uniform concepts. Two kinds of abstraction networks, called the area taxonomy and p-area taxonomy, are derived. These taxonomies form the basis for the auditing approach. Taxonomies tend to highlight potentially erroneous concepts in areas and p-areas. 
Human reviewers can focus their auditing efforts on the limited number of problematic concepts following two hypotheses on the probable concentration of errors.\n\n\nRESULTS\nA sample of the area taxonomy and p-area taxonomy for the Biological Process (BP) hierarchy of the National Cancer Institute Thesaurus (NCIT) was derived from the application of our methodology to its concepts. These views led to the detection of a number of different kinds of errors that are reported, and to confirmation of the hypotheses on error concentration in this hierarchy.\n\n\nCONCLUSION\nOur auditing methodology based on area and p-area taxonomies is an efficient tool for detecting errors in terminologies satisfying systematic inheritance of roles, and thus facilitates their maintenance. This methodology concentrates a domain expert's manual review on portions of the concepts with a high likelihood of errors." }, { "pmid": "19475727", "title": "Relationship auditing of the FMA ontology.", "abstract": "The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examine their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts to focus on concepts with high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future similar methods might be integrated with data entry processes to offer real-time error detection." }, { "pmid": "25991129", "title": "Identifying redundant and missing relations in the gene ontology.", "abstract": "Significant efforts have been undertaken for providing the Gene Ontology (GO) in a computable format as well as for enriching it with logical definitions. Automated approaches can thus be applied to GO for assisting its maintenance and for checking its internal coherence. However, inconsistencies may still remain within GO. In this frame, the objective of this work was to audit GO relationships. First, reasoning over relationships was exploited for detecting redundant relations existing between GO concepts. Missing necessary and sufficient conditions were then identified based on the compositional structure of the preferred names of GO concepts. More than one thousand redundant relations and 500 missing necessary and sufficient conditions were found. The proposed approach was thus successful for detecting inconsistencies within GO relations. The application of lexical approaches as well as the exploitation of synonyms and textual definitions could be useful for identifying additional necessary and sufficient conditions. 
Multiple necessary and sufficient conditions for a given GO concept may be indicative of inconsistencies." }, { "pmid": "23911553", "title": "Development of a HIPAA-compliant environment for translational research data and analytics.", "abstract": "High-performance computing centers (HPC) traditionally have far less restrictive privacy management policies than those encountered in healthcare. We show how an HPC can be re-engineered to accommodate clinical data while retaining its utility in computationally intensive tasks such as data mining, machine learning, and statistics. We also discuss deploying protected virtual machines. A critical planning step was to engage the university's information security operations and the information security and privacy office. Access to the environment requires a double authentication mechanism. The first level of authentication requires access to the university's virtual private network and the second requires that the users be listed in the HPC network information service directory. The physical hardware resides in a data center with controlled room access. All employees of the HPC and its users take the university's local Health Insurance Portability and Accountability Act training series. In the first 3 years, researcher count has increased from 6 to 58." } ]
BMC Medical Informatics and Decision Making
27756371
PMC5070096
10.1186/s12911-016-0371-7
Predicting influenza with dynamical methods
BackgroundPrediction of influenza weeks in advance can be a useful tool in the management of cases and in the early recognition of pandemic influenza seasons.MethodsThis study explores the prediction of influenza-like-illness incidence using both epidemiological and climate data. It uses Lorenz’s well-known Method of Analogues, but with two novel improvements. Firstly, it determines internal parameters using the implicit near-neighbor distances in the data, and secondly, it employs climate data (mean dew point) to screen analogue near-neighbors and capture the hidden dynamics of disease spread.ResultsThese improvements result in the ability to forecast, four weeks in advance, the total number of cases and the incidence at the peak with increased accuracy. In most locations the total number of cases per year and the incidence at the peak are forecast with less than 15 % root-mean-square (RMS) Error, and in some locations with less than 10 % RMS Error.ConclusionsThe use of additional variables that contribute to the dynamics of influenza spread can greatly improve prediction accuracy.
Related workA survey of influenza forecasting methods [3] yielded 35 publications organized into categories based on the epidemiological application – population-based, medical facility-based, and forecasting regionally or globally. Within these categories, the forecasting methods varied along with the types of data used to make the forecast. Roughly half of the publications used statistical approaches without explicit mechanistic models and the other half used epidemiological models. Three of these models used meteorological predictors. In this study, we model directly from the data (time series consisting of weekly incidence geographically aligned with multiple facilities) and use meteorological data to enrich the model. None of the models surveyed in [3] used both the Method of Analogues and meteorological data to forecast influenza in a population. Typically, data on the current number of influenza cases reported by the Centers for Disease Control ([4]; one of the more accurate geographically tagged data sets) has a one-week lag. In order to predict 4 weeks ahead of the current date, one uses data up to one week before the current date. This translates, in reality, to a 5-week prediction horizon for a prediction 4 weeks in the future. For the remainder of the paper we will refer to this as a 4-week prediction. Similarly, most climate data for the current date is not available in a format for which acquisition can be automated immediately; for most sources there is a lag of about one week. Our goal is to predict influenza incidence (number of influenza cases/total number of health-care visits) 4 weeks ahead of the current date, using only data available up to the current time, that is, using both incidence and climate data from the week before. This study was part of a team effort to predict the height of the peak, the timing of the peak, and the total cases in an influenza season. This paper addresses the height of the peak and the total cases in a season. Another paper (see [5]) uses machine-learning methods to predict the timing of the peak.
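To illustrate the forecasting setup described above, the sketch below implements a bare-bones analogue forecast: it finds past weeks whose recent incidence trajectory is closest to the latest one, screens those analogues by how far their dew point lies from the current value, and averages the values observed a fixed number of weeks after each analogue. This is only a generic rendering of the Method of Analogues with climate screening; the window length, neighbor count, and dew-point tolerance are arbitrary choices, not the internal parameters the authors derive from near-neighbor distances.

```python
# Illustrative sketch of an analogue forecast with climate screening (not the
# authors' code); parameter values below are arbitrary, not those of the paper.
import numpy as np

def analogue_forecast(incidence, dew_point, horizon=4, window=4,
                      n_neighbors=5, dew_tol=5.0):
    """Forecast incidence `horizon` weeks after the last observed week."""
    incidence = np.asarray(incidence, dtype=float)
    dew_point = np.asarray(dew_point, dtype=float)
    query = incidence[-window:]        # the most recent observed trajectory
    current_dew = dew_point[-1]

    candidates = []
    # An analogue window ends at week `end`; its follow-up value lies `horizon`
    # weeks later, so `end` must leave room for that follow-up in the history.
    for end in range(window - 1, len(incidence) - horizon):
        if abs(dew_point[end] - current_dew) > dew_tol:
            continue                   # screen analogues by climate similarity
        segment = incidence[end - window + 1:end + 1]
        dist = np.linalg.norm(segment - query)
        candidates.append((dist, incidence[end + horizon]))

    if not candidates:
        return float(incidence[-1])    # fall back to persistence
    candidates.sort(key=lambda c: c[0])
    return float(np.mean([value for _, value in candidates[:n_neighbors]]))

# Toy usage with synthetic weekly data (two noisy seasons of history).
weeks = np.arange(104)
ili = 5 + 4 * np.sin(2 * np.pi * weeks / 52) + np.random.default_rng(0).normal(0, 0.3, 104)
dew = 40 + 15 * np.sin(2 * np.pi * (weeks - 8) / 52)
print(analogue_forecast(ili, dew))
```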
[ "27127415" ]
[ { "pmid": "27127415", "title": "Prediction of Peaks of Seasonal Influenza in Military Health-Care Data.", "abstract": "Influenza is a highly contagious disease that causes seasonal epidemics with significant morbidity and mortality. The ability to predict influenza peak several weeks in advance would allow for timely preventive public health planning and interventions to be used to mitigate these outbreaks. Because influenza may also impact the operational readiness of active duty personnel, the US military places a high priority on surveillance and preparedness for seasonal outbreaks. A method for creating models for predicting peak influenza visits per total health-care visits (ie, activity) weeks in advance has been developed using advanced data mining techniques on disparate epidemiological and environmental data. The model results are presented and compared with those of other popular data mining classifiers. By rigorously testing the model on data not used in its development, it is shown that this technique can predict the week of highest influenza activity for a specific region with overall better accuracy than other methods examined in this article." } ]
BioData Mining
27785153
PMC5073928
10.1186/s13040-016-0113-5
Developing a modular architecture for creation of rule-based clinical diagnostic criteria
BackgroundWith recent advances in computerized patient record systems, there is an urgent need for producing computable and standards-based clinical diagnostic criteria. Notably, constructing rule-based clinical diagnostic criteria has become one of the goals of the International Classification of Diseases (ICD)-11 revision. However, few studies have addressed building a unified architecture to support the need for diagnostic criteria computerization. In this study, we present a modular architecture for enabling the creation of rule-based clinical diagnostic criteria leveraging Semantic Web technologies.Methods and resultsThe architecture consists of two modules: an authoring module that utilizes a standards-based information model and a translation module that leverages the Semantic Web Rule Language (SWRL). In a prototype implementation, we created a diagnostic criteria upper ontology (DCUO) that integrates the ICD-11 content model with the Quality Data Model (QDM). Using the DCUO, we developed a transformation tool that converts QDM-based diagnostic criteria into an SWRL representation. We evaluated the domain coverage of the upper ontology model using randomly selected diagnostic criteria from broad domains (n = 20). We also tested the transformation algorithms using 6 QDM templates for ontology population and 15 QDM-based criteria data for rule generation. As a result, the first draft of the DCUO contains 14 root classes, 21 subclasses, 6 object properties and 1 data property. Investigation Findings and Signs and Symptoms are the two most commonly used element types. All 6 HQMF templates were successfully parsed and populated into their corresponding domain-specific ontologies, and 14 rules (93.3 %) passed the rule validation.ConclusionOur efforts in developing and prototyping a modular architecture provide useful insight into how to build a scalable solution to support diagnostic criteria representation and computerization.Electronic supplementary materialThe online version of this article (doi:10.1186/s13040-016-0113-5) contains supplementary material, which is available to authorized users.
Related workPrevious studies have addressed the integration and formal expression of diagnostic rules from different perspectives. These rules are usually extracted from free-text-based clinical guidelines or diagnostic criteria, and integrated into computerized decision support systems to improve clinical performance and patient outcomes [12, 13]. The related studies fall into two main areas. (1) Clinical guideline computerization and Computer-Interpretable Guideline (CIG) systems. Various computerized clinical guidelines and decision support systems that incorporate clinical guidelines have been developed. Researchers have tried different approaches to the computerization of clinical practice guidelines [12, 14–18]. Since guidelines cover many complex medical procedures, the application of computerized guidelines in real-world practice is still very limited. However, the methods used to computerize guidelines are valuable in tackling the issues in diagnostic criteria computerization. (2) Formalization method studies on clinical research data. Previous studies investigated the eligibility criteria in clinical trial protocols and developed approaches for eligibility criteria extraction and semantic representation, and used hierarchical clustering for dynamic categorization of such criteria [19]. For example, EliXR provided a corpus-based knowledge acquisition framework that used the Unified Medical Language System (UMLS) to standardize eligibility-concept encoding and to enrich eligibility-concept relations for clinical research eligibility criteria from text [20]. QDM-based phenotyping methods used for the identification of patient cohorts from EHR data also provide a valuable reference for our work [21]. However, few studies are directly related to building a unified architecture to support the goal of diagnostic criteria formalization. In particular, the lack of a standards-based information model has been recognized as a major barrier to achieving computable diagnostic criteria [22]. Fortunately, current efforts in the development of internationally recommended standard models in clinical domains provide valuable references for modeling and representing computable diagnostic criteria. Notable examples include the ICD-11 content model [5, 23] and the National Quality Forum (NQF) QDM [21, 24, 25].
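As a toy illustration of the criteria-to-rule translation idea discussed above, and not the paper's DCUO, QDM templates, or transformation tool, the sketch below renders a diagnostic criterion expressed as simple element/attribute constraints into an SWRL-style rule string. All class and property names in it are invented for the example.

```python
# Toy illustration of rule generation from structured criteria (not the paper's
# transformation tool): every class and property name here is made up.
def criterion_to_swrl(disease, constraints):
    """constraints: list of (element_class, attribute, swrl_builtin, value) tuples."""
    atoms = ["Patient(?p)"]
    for i, (element, attribute, builtin, value) in enumerate(constraints):
        var = f"?e{i}"
        atoms.append(f"{element}({var})")
        atoms.append(f"hasFinding(?p, {var})")
        atoms.append(f"{attribute}({var}, ?v{i})")
        atoms.append(f"swrlb:{builtin}(?v{i}, {value})")
    body = " ^ ".join(atoms)
    return f"{body} -> hasDiagnosis(?p, {disease})"

# Example: a fasting plasma glucose result of at least 7.0 mmol/L suggests diabetes.
rule = criterion_to_swrl(
    "DiabetesMellitus",
    [("LaboratoryTestResult", "hasQuantitativeResult", "greaterThanOrEqual", 7.0)],
)
print(rule)
```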
[ "24500457", "23523876", "22874162", "22462194", "24975859", "12509357", "23806274", "18639485", "15196480", "15182844", "21689783", "21807647", "23304325", "17712081", "23601451", "23304366" ]
[ { "pmid": "23523876", "title": "An ontology-driven, diagnostic modeling system.", "abstract": "OBJECTIVES\nTo present a system that uses knowledge stored in a medical ontology to automate the development of diagnostic decision support systems. To illustrate its function through an example focused on the development of a tool for diagnosing pneumonia.\n\n\nMATERIALS AND METHODS\nWe developed a system that automates the creation of diagnostic decision-support applications. It relies on a medical ontology to direct the acquisition of clinic data from a clinical data warehouse and uses an automated analytic system to apply a sequence of machine learning algorithms that create applications for diagnostic screening. We refer to this system as the ontology-driven diagnostic modeling system (ODMS). We tested this system using samples of patient data collected in Salt Lake City emergency rooms and stored in Intermountain Healthcare's enterprise data warehouse.\n\n\nRESULTS\nThe system was used in the preliminary development steps of a tool to identify patients with pneumonia in the emergency department. This tool was compared with a manually created diagnostic tool derived from a curated dataset. The manually created tool is currently in clinical use. The automatically created tool had an area under the receiver operating characteristic curve of 0.920 (95% CI 0.916 to 0.924), compared with 0.944 (95% CI 0.942 to 0.947) for the manually created tool.\n\n\nDISCUSSION\nInitial testing of the ODMS demonstrates promising accuracy for the highly automated results and illustrates the route to model improvement.\n\n\nCONCLUSIONS\nThe use of medical knowledge, embedded in ontologies, to direct the initial development of diagnostic computing systems appears feasible." }, { "pmid": "22874162", "title": "Ontology driven decision support systems for medical diagnosis - an interactive form for consultation in patients with plasma cell disease.", "abstract": "Multiple myeloma (MM) is a malignant disorder characterized by the monoclonal proliferation of B cell derived plasma cells in the bone marrow. The diagnosis depends on the identification of abnormal monoclonal marrow plasma cells, monoclonal protein in the serum or urine, evidence of end-organ damage, and a clinical picture consistent with MM. The distinction between MM stages- monoclonal gammopathy of undetermined significance or indolent myeloma-is critical in guiding therapy. This paper describes how to produce ontology-driven semiological rules base (SRB) and a consultation form to aid in the diagnosis of plasma cells diseases. We have extracted the MM sub-ontology from the NCI Thesaurus. Using Protégé 3.4.2 and owl1, criteria in the literature for the diagnosis and staging of MM have been added to the ontology. All quantitative parameters have been transformed to a qualitative format. A formal description of MM variants and stages has been given. The obtained ontology has been checked by a reasoner and instantiated to obtain a SRB. The form created has been tested and evaluated utilizing 63 clinical medical reports. The likelihood for a disease being the correct diagnosis is determined by computing a ratio. The resulting tool is relevant for MM diagnosis and staging." 
}, { "pmid": "22462194", "title": "Ontology and medical diagnosis.", "abstract": "Ontology and associated generic tools are appropriate for knowledge modeling and reasoning, but most of the time, disease definitions in existing description logic (DL) ontology are not sufficient to classify patient's characteristics under a particular disease because they do not formalize operational definitions of diseases (association of signs and symptoms=diagnostic criteria). The main objective of this study is to propose an ontological representation which takes into account the diagnostic criteria on which specific patient conditions may be classified under a specific disease. This method needs as a prerequisite a clear list of necessary and sufficient diagnostic criteria as defined for lots of diseases by learned societies. It does not include probability/uncertainty which Web Ontology Language (OWL 2.0) cannot handle. We illustrate it with spondyloarthritis (SpA). Ontology has been designed in Protégé 4.1 OWL-DL2.0. Several kinds of criteria were formalized: (1) mandatory criteria, (2) picking two criteria among several diagnostic criteria, (3) numeric criteria. Thirty real patient cases were successfully classified with the reasoner. This study shows that it is possible to represent operational definitions of diseases with OWL and successfully classify real patient cases. Representing diagnostic criteria as descriptive knowledge (instead of rules in Semantic Web Rule Language or Prolog) allows us to take advantage of tools already available for OWL. While we focused on Assessment of SpondyloArthritis international Society SpA criteria, we believe that many of the representation issues addressed here are relevant to using OWL-DL for operational definition of other diseases in ontology." }, { "pmid": "24975859", "title": "Evaluation and construction of diagnostic criteria for inclusion body myositis.", "abstract": "OBJECTIVE\nTo use patient data to evaluate and construct diagnostic criteria for inclusion body myositis (IBM), a progressive disease of skeletal muscle.\n\n\nMETHODS\nThe literature was reviewed to identify all previously proposed IBM diagnostic criteria. These criteria were applied through medical records review to 200 patients diagnosed as having IBM and 171 patients diagnosed as having a muscle disease other than IBM by neuromuscular specialists at 2 institutions, and to a validating set of 66 additional patients with IBM from 2 other institutions. Machine learning techniques were used for unbiased construction of diagnostic criteria.\n\n\nRESULTS\nTwenty-four previously proposed IBM diagnostic categories were identified. Twelve categories all performed with high (≥97%) specificity but varied substantially in their sensitivities (11%-84%). The best performing category was European Neuromuscular Centre 2013 probable (sensitivity of 84%). Specialized pathologic features and newly introduced strength criteria (comparative knee extension/hip flexion strength) performed poorly. Unbiased data-directed analysis of 20 features in 371 patients resulted in construction of higher-performing data-derived diagnostic criteria (90% sensitivity and 96% specificity).\n\n\nCONCLUSIONS\nPublished expert consensus-derived IBM diagnostic categories have uniformly high specificity but wide-ranging sensitivities. 
High-performing IBM diagnostic category criteria can be developed directly from principled unbiased analysis of patient data.\n\n\nCLASSIFICATION OF EVIDENCE\nThis study provides Class II evidence that published expert consensus-derived IBM diagnostic categories accurately distinguish IBM from other muscle disease with high specificity but wide-ranging sensitivities." }, { "pmid": "12509357", "title": "Comparing computer-interpretable guideline models: a case-study approach.", "abstract": "OBJECTIVES\nMany groups are developing computer-interpretable clinical guidelines (CIGs) for use during clinical encounters. CIGs use \"Task-Network Models\" for representation but differ in their approaches to addressing particular modeling challenges. We have studied similarities and differences between CIGs in order to identify issues that must be resolved before a consensus on a set of common components can be developed.\n\n\nDESIGN\nWe compared six models: Asbru, EON, GLIF, GUIDE, PRODIGY, and PROforma. Collaborators from groups that created these models represented, in their own formalisms, portions of two guidelines: American College of Chest Physicians cough guidelines [correction] and the Sixth Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure.\n\n\nMEASUREMENTS\nWe compared the models according to eight components that capture the structure of CIGs. The components enable modelers to encode guidelines as plans that organize decision and action tasks in networks. They also enable the encoded guidelines to be linked with patient data-a key requirement for enabling patient-specific decision support.\n\n\nRESULTS\nWe found consensus on many components, including plan organization, expression language, conceptual medical record model, medical concept model, and data abstractions. Differences were most apparent in underlying decision models, goal representation, use of scenarios, and structured medical actions.\n\n\nCONCLUSION\nWe identified guideline components that the CIG community could adopt as standards. Some of the participants are pursuing standardization of these components under the auspices of HL7." }, { "pmid": "23806274", "title": "Computer-interpretable clinical guidelines: a methodological review.", "abstract": "Clinical practice guidelines (CPGs) aim to improve the quality of care, reduce unjustified practice variations and reduce healthcare costs. In order for them to be effective, clinical guidelines need to be integrated with the care flow and provide patient-specific advice when and where needed. Hence, their formalization as computer-interpretable guidelines (CIGs) makes it possible to develop CIG-based decision-support systems (DSSs), which have a better chance of impacting clinician behavior than narrative guidelines. This paper reviews the literature on CIG-related methodologies since the inception of CIGs, while focusing and drawing themes for classifying CIG research from CIG-related publications in the Journal of Biomedical Informatics (JBI). 
The themes span the entire life-cycle of CIG development and include: knowledge acquisition and specification for improved CIG design, including (1) CIG modeling languages and (2) CIG acquisition and specification methodologies, (3) integration of CIGs with electronic health records (EHRs) and organizational workflow, (4) CIG validation and verification, (5) CIG execution engines and supportive tools, (6) exception handling in CIGs, (7) CIG maintenance, including analyzing clinician's compliance to CIG recommendations and CIG versioning and evolution, and finally (8) CIG sharing. I examine the temporal trends in CIG-related research and discuss additional themes that were not identified in JBI papers, including existing themes such as overcoming implementation barriers, modeling clinical goals, and temporal expressions, as well as futuristic themes, such as patient-centric CIGs and distributed CIGs." }, { "pmid": "18639485", "title": "Computer-based execution of clinical guidelines: a review.", "abstract": "PURPOSE\nClinical guidelines are useful tools to standardize and improve health care. The automation of the guideline execution process is a basic step towards its widespread use in medical centres. This paper presents an analysis and a comparison of eight systems that allow the enactment of clinical guidelines in a (semi) automatic fashion.\n\n\nMETHODS\nThis paper presents a review of the literature (2000-2007) collected from medical databases as well as international conferences in the medical informatics area.\n\n\nRESULTS\nEight systems containing a guideline execution engine were selected. The language used to represent the guidelines as well as the architecture of these systems were compared. Different aspects have been assessed for each system, such as the integration with external elements or the coordination mechanisms used in the execution of clinical guidelines. Security and terminology issues complement the above study.\n\n\nCONCLUSIONS\nAlthough these systems could be beneficial for clinicians and patients, it is an ongoing research area, and they are not yet fully implemented and integrated into existing careflow management systems and hence used in daily practice in health care institutions." }, { "pmid": "15196480", "title": "GLIF3: a representation format for sharable computer-interpretable clinical practice guidelines.", "abstract": "The Guideline Interchange Format (GLIF) is a model for representation of sharable computer-interpretable guidelines. The current version of GLIF (GLIF3) is a substantial update and enhancement of the model since the previous version (GLIF2). GLIF3 enables encoding of a guideline at three levels: a conceptual flowchart, a computable specification that can be verified for logical consistency and completeness, and an implementable specification that is intended to be incorporated into particular institutional information systems. The representation has been tested on a wide variety of guidelines that are typical of the range of guidelines in clinical use. It builds upon GLIF2 by adding several constructs that enable interpretation of encoded guidelines in computer-based decision-support systems. GLIF3 leverages standards being developed in Health Level 7 in order to allow integration of guidelines with clinical information systems. The GLIF3 specification consists of an extensible object-oriented model and a structured syntax based on the resource description framework (RDF). 
Empirical validation of the ability to generate appropriate recommendations using GLIF3 has been tested by executing encoded guidelines against actual patient data. GLIF3 is accordingly ready for broader experimentation and prototype use by organizations that wish to evaluate its ability to capture the logic of clinical guidelines, to implement them in clinical systems, and thereby to provide integrated decision support to assist clinicians." }, { "pmid": "15182844", "title": "Approaches for creating computer-interpretable guidelines that facilitate decision support.", "abstract": "During the last decade, studies have shown the benefits of using clinical guidelines in the practice of medicine. Although the importance of these guidelines is widely recognized, health care organizations typically pay more attention to guideline development than to guideline implementation for routine use in daily care. However, studies have shown that clinicians are often not familiar with written guidelines and do not apply them appropriately during the actual care process. Implementing guidelines in computer-based decision support systems promises to improve the acceptance and application of guidelines in daily practice because the actions and observations of health care workers are monitored and advice is generated whenever a guideline is not followed. Such implementations are increasingly applied in diverse areas such as policy development, utilization management, education, clinical trials, and workflow facilitation. Many parties are developing computer-based guidelines as well as decision support systems that incorporate these guidelines. This paper reviews generic approaches for developing and implementing computer-based guidelines that facilitate decision support. It addresses guideline representation, acquisition, verification and execution aspects. The paper describes five approaches (the Arden Syntax, GuideLine Interchange Format (GLIF), PROforma, Asbru and EON), after the approaches are compared and discussed." }, { "pmid": "21689783", "title": "Dynamic categorization of clinical research eligibility criteria by hierarchical clustering.", "abstract": "OBJECTIVE\nTo semi-automatically induce semantic categories of eligibility criteria from text and to automatically classify eligibility criteria based on their semantic similarity.\n\n\nDESIGN\nThe UMLS semantic types and a set of previously developed semantic preference rules were utilized to create an unambiguous semantic feature representation to induce eligibility criteria categories through hierarchical clustering and to train supervised classifiers.\n\n\nMEASUREMENTS\nWe induced 27 categories and measured the prevalence of the categories in 27,278 eligibility criteria from 1578 clinical trials and compared the classification performance (i.e., precision, recall, and F1-score) between the UMLS-based feature representation and the \"bag of words\" feature representation among five common classifiers in Weka, including J48, Bayesian Network, Naïve Bayesian, Nearest Neighbor, and instance-based learning classifier.\n\n\nRESULTS\nThe UMLS semantic feature representation outperforms the \"bag of words\" feature representation in 89% of the criteria categories. Using the semantically induced categories, machine-learning classifiers required only 2000 instances to stabilize classification performance. 
The J48 classifier yielded the best F1-score and the Bayesian Network classifier achieved the best learning efficiency.\n\n\nCONCLUSION\nThe UMLS is an effective knowledge source and can enable an efficient feature representation for semi-automated semantic category induction and automatic categorization for clinical research eligibility criteria and possibly other clinical text." }, { "pmid": "21807647", "title": "EliXR: an approach to eligibility criteria extraction and representation.", "abstract": "OBJECTIVE\nTo develop a semantic representation for clinical research eligibility criteria to automate semistructured information extraction from eligibility criteria text.\n\n\nMATERIALS AND METHODS\nAn analysis pipeline called eligibility criteria extraction and representation (EliXR) was developed that integrates syntactic parsing and tree pattern mining to discover common semantic patterns in 1000 eligibility criteria randomly selected from http://ClinicalTrials.gov. The semantic patterns were aggregated and enriched with unified medical language systems semantic knowledge to form a semantic representation for clinical research eligibility criteria.\n\n\nRESULTS\nThe authors arrived at 175 semantic patterns, which form 12 semantic role labels connected by their frequent semantic relations in a semantic network.\n\n\nEVALUATION\nThree raters independently annotated all the sentence segments (N=396) for 79 test eligibility criteria using the 12 top-level semantic role labels. Eight-six per cent (339) of the sentence segments were unanimously labelled correctly and 13.8% (55) were correctly labelled by two raters. The Fleiss' κ was 0.88, indicating a nearly perfect interrater agreement.\n\n\nCONCLUSION\nThis study present a semi-automated data-driven approach to developing a semantic network that aligns well with the top-level information structure in clinical research eligibility criteria text and demonstrates the feasibility of using the resulting semantic role labels to generate semistructured eligibility criteria with nearly perfect interrater reliability." }, { "pmid": "23304325", "title": "Modeling and executing electronic health records driven phenotyping algorithms using the NQF Quality Data Model and JBoss® Drools Engine.", "abstract": "With increasing adoption of electronic health records (EHRs), the need for formal representations for EHR-driven phenotyping algorithms has been recognized for some time. The recently proposed Quality Data Model from the National Quality Forum (NQF) provides an information model and a grammar that is intended to represent data collected during routine clinical care in EHRs as well as the basic logic required to represent the algorithmic criteria for phenotype definitions. The QDM is further aligned with Meaningful Use standards to ensure that the clinical data and algorithmic criteria are represented in a consistent, unambiguous and reproducible manner. However, phenotype definitions represented in QDM, while structured, cannot be executed readily on existing EHRs. Rather, human interpretation, and subsequent implementation is a required step for this process. To address this need, the current study investigates open-source JBoss® Drools rules engine for automatic translation of QDM criteria into rules for execution over EHR data. 
In particular, using Apache Foundation's Unstructured Information Management Architecture (UIMA) platform, we developed a translator tool for converting QDM defined phenotyping algorithm criteria into executable Drools rules scripts, and demonstrated their execution on real patient data from Mayo Clinic to identify cases for Coronary Artery Disease and Diabetes. To the best of our knowledge, this is the first study illustrating a framework and an approach for executing phenotyping criteria modeled in QDM using the Drools business rules management system." }, { "pmid": "17712081", "title": "Data standards in clinical research: gaps, overlaps, challenges and future directions.", "abstract": "Current efforts to define and implement health data standards are driven by issues related to the quality, cost and continuity of care, patient safety concerns, and desires to speed clinical research findings to the bedside. The President's goal for national adoption of electronic medical records in the next decade, coupled with the current emphasis on translational research, underscore the urgent need for data standards in clinical research. This paper reviews the motivations and requirements for standardized clinical research data, and the current state of standards development and adoption--including gaps and overlaps--in relevant areas. Unresolved issues and informatics challenges related to the adoption of clinical research data and terminology standards are mentioned, as are the collaborations and activities the authors perceive as most likely to address them." }, { "pmid": "23601451", "title": "Using Semantic Web technology to support icd-11 textual definitions authoring.", "abstract": "The beta phase of the 11th revision of International Classification of Diseases (ICD-11) intends to accept public input through a distributed model of authoring. One of the core use cases is to create textual definitions for the ICD categories. The objective of the present study is to design, develop, and evaluate approaches to support ICD-11 textual definitions authoring using Semantic Web technology. We investigated a number of heterogeneous resources related to the definitions of diseases, including the linked open data (LOD) from DBpedia, the textual definitions from the Unified Medical Language System (UMLS) and the formal definitions of the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT). We integrated them in a Semantic Web framework (i.e., the Linked Data in a Resource Description Framework [RDF] triple store), which is being proposed as a backend in a prototype platform for collaborative authoring of ICD-11 beta. We performed a preliminary evaluation on the usefulness of our approaches and discussed the potential challenges from both technical and clinical perspectives." }, { "pmid": "23304366", "title": "An evaluation of the NQF Quality Data Model for representing Electronic Health Record driven phenotyping algorithms.", "abstract": "The development of Electronic Health Record (EHR)-based phenotype selection algorithms is a non-trivial and highly iterative process involving domain experts and informaticians. To make it easier to port algorithms across institutions, it is desirable to represent them using an unambiguous formal specification language. For this purpose we evaluated the recently developed National Quality Forum (NQF) information model designed for EHR-based quality measures: the Quality Data Model (QDM). 
We selected 9 phenotyping algorithms that had been previously developed as part of the eMERGE consortium and translated them into QDM format. Our study concluded that the QDM contains several core elements that make it a promising format for EHR-driven phenotyping algorithms for clinical research. However, we also found areas in which the QDM could be usefully extended, such as representing information extracted from clinical text, and the ability to handle algorithms that do not consist of Boolean combinations of criteria." } ]
Frontiers in Neuroscience
27833526
PMC5081358
10.3389/fnins.2016.00479
Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network
Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. Based on their neuronal anatomy, they are able to detect amplitudes on the order of the size of an atom and locate acoustic stimuli to within 13°. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature.
Related work

Prior work on bioinspired acoustic surveillance units (ASUs) inspired by flies, such as that of Cauwenberghs et al., used spatial and temporal derivatives of the field over a sensor array of MEMS microphones, power series expansion, and Independent Component Analysis (ICA) to localize and separate mixtures of delayed sound sources (Cauwenberghs et al., 2001). This work showed that the number of sources that can be extracted depends strongly on the number of resolvable terms in the series.

Similar work was done by Sawada et al., who used ICA to estimate the number of sound sources (Sawada et al., 2005a) and to localize multiple sound sources (Sawada et al., 2003, 2005b).

Julian et al. compared four different algorithms for sound localization using MEMS microphones and signals recorded in a natural environment (Julian et al., 2004). The spatial-gradient algorithm (SGA) showed the best accuracy. Its implementation requires a sampled-data analog architecture able to adaptively solve a standard least-mean-squares (LMS) problem. The performance of the system, with a low-power CMOS VLSI design, is on the order of a 1° error margin with a similar standard deviation for the bearing-angle estimate (Cauwenberghs et al., 2005; Julian et al., 2006; Pirchio et al., 2006). A very low-power implementation for interaural time delay (ITD) estimation without delay lines, using the same ASU, is reported by Chacon-Rodriguez et al., with an estimation error in the low single-digit range (Chacon-Rodriguez et al., 2009).

Masson et al. used a data fusion algorithm to estimate the source position from measurements of five nodes, each with four MEMS microphones (Masson et al., 2005). The measurements were made with a single fixed source emitting a 1 kHz signal.

Zhang and Andreou used cross-correlation of the received signals and a zero-crossing point to estimate the bearing angle of a moving vehicle (Zhang and Andreou, 2008). The hardware was an ASU with four MEMS microphones.
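To make the cross-correlation and time-delay ideas referenced above concrete, the sketch below estimates the inter-microphone time delay from the peak of the cross-correlation of two signals and converts it to a bearing angle under a far-field assumption. It is a minimal, hypothetical illustration rather than code from the spiking-neural-network system described in this paper or from any of the cited implementations; the sampling rate, microphone spacing, and speed of sound are assumed values chosen only for the example.

```python
import numpy as np

def estimate_bearing(sig_a, sig_b, fs=48000.0, mic_spacing=0.05, c=343.0):
    """Estimate a source bearing from two microphone signals.

    Minimal sketch: the inter-microphone delay is taken as the lag that
    maximizes the cross-correlation, then converted to an angle via the
    far-field relation delay = spacing * sin(angle) / c. All parameters
    (fs, spacing, speed of sound) are illustrative assumptions.
    """
    sig_a = sig_a - np.mean(sig_a)
    sig_b = sig_b - np.mean(sig_b)
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # lag in samples
    delay = lag / fs                            # lag in seconds
    # Clip to the physically admissible range before inverting the geometry;
    # the sign of the result indicates on which side of broadside the source lies.
    sin_theta = np.clip(delay * c / mic_spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

if __name__ == "__main__":
    # Synthetic test: a 1 kHz tone reaching microphone B ~3 samples later.
    fs = 48000.0
    t = np.arange(0, 0.02, 1.0 / fs)
    tone = np.sin(2 * np.pi * 1000.0 * t)
    delayed = np.roll(tone, 3)
    print("estimated bearing (deg):", estimate_bearing(tone, delayed, fs=fs))
```

For a pure tone, as in this toy example, the correlation peak repeats every period; in practice broadband signals or an envelope-based peak search would be used to avoid that ambiguity.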
[ "19115011", "10991021" ]
[ { "pmid": "19115011", "title": "Brian: a simulator for spiking neural networks in python.", "abstract": "\"Brian\" is a new simulator for spiking neural networks, written in Python (http://brian. di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience." }, { "pmid": "10991021", "title": "Theory of arachnid prey localization.", "abstract": "Sand scorpions and many other arachnids locate their prey through highly sensitive slit sensilla at the tips (tarsi) of their eight legs. This sensor array responds to vibrations with stimulus-locked action potentials encoding the target direction. We present a neuronal model to account for stimulus angle determination using a population of second-order neurons, each receiving excitatory input from one tarsus and inhibition from a triad opposite to it. The input opens a time window whose width determines a neuron's firing probability. Stochastic optimization is realized through tuning the balance between excitation and inhibition. The agreement with experiments on the sand scorpion is excellent." } ]
Frontiers in Neuroinformatics
27867355
PMC5095137
10.3389/fninf.2016.00048
Methods for Specifying Scientific Data Standards and Modeling Relationships with Applications to Neuroscience
Neuroscience continues to experience a tremendous growth in data; in terms of the volume and variety of data, the velocity at which data is acquired, and in turn the veracity of data. These challenges are a serious impediment to sharing of data, analyses, and tools within and across labs. Here, we introduce BRAINformat, a novel data standardization framework for the design and management of scientific data formats. The BRAINformat library defines application-independent design concepts and modules that together create a general framework for standardization of scientific data. We describe the formal specification of scientific data standards, which facilitates sharing and verification of data and formats. We introduce the concept of Managed Objects, enabling semantic components of data formats to be specified as self-contained units, supporting modular and reusable design of data format components and file storage. We also introduce the novel concept of Relationship Attributes for modeling and use of semantic relationships between data objects. Based on these concepts we demonstrate the application of our framework to design and implement a standard format for electrophysiology data and show how data standardization and relationship-modeling facilitate data analysis and sharing. The format uses HDF5, enabling portable, scalable, and self-describing data storage and integration with modern high-performance computing for data-driven discovery. The BRAINformat library is open source, easy-to-use, and provides detailed user and developer documentation and is freely available at: https://bitbucket.org/oruebel/brainformat.
2. Background and related work

The scientific community utilizes a broad range of data formats. Basic formats explicitly specify how data is laid out and formatted in binary or text data files (e.g., CSV, BOF, etc). While such basic formats are common, they generally suffer from a lack of portability, scalability and a rigorous specification. For text-based files, languages and formats, such as the Extensible Markup Language (XML) (Bray et al., 2008) or the JavaScript Object Notation (JSON) (JSON, 2015), have become popular means to standardize documents for data exchange. XML, JSON and other text-based standards (in combination with character-encoding schema, e.g., ASCII or Unicode) play a critical role in practice in the exchange of usually relatively small, structured documents but are impractical for storage and exchange of large scientific data arrays.

For storage of large scientific data, HDF5 (The HDF Group, 2015) and NetCDF (Rew and Davis, 1990) among others, have gained wide popularity. HDF5 is a data model, library, and file format for storing and managing large and complex data. HDF5 supports groups, datasets, and attributes as core data object primitives, which in combination provide the foundation for data organization and storage. HDF5 is portable, scalable, self-describing, and extensible and is widely supported across programming languages and systems, e.g., R, Matlab, Python, C, Fortran, VisIt, or ParaView. The HDF5 technology suite includes tools for managing, manipulating, viewing, and analyzing HDF5 files. HDF5 has been adopted as a base format across a broad range of application sciences, ranging from physics to bio-sciences and beyond (Habermann et al., 2014). Self-describing formats address the critical need for standardized storage and exchange of complex and large scientific data.

Self-describing formats like HDF5 provide general capabilities for organizing data, but they do not prescribe a data organization. The structure, layout, names, and descriptions of storage objects, hence, often still differ greatly between applications and experiments. This diversity makes the development of common and reusable tools challenging. VizSchema (Shasharina et al., 2009) and XDMF (Clarke and Mark, 2007) among others, propose to bridge this gap between general-purpose, self-describing formats and the need for standardized tools via additional lightweight, low-level schema (often based on XML) to further standardize the description of the low-level data organization to facilitate data exchange and tool development.

Application-oriented formats then generally focus on specifying the organization of data in a semantically meaningful fashion, including but not limited to the specification of storage object names, locations, and descriptions. Many application formats build on existing self-describing formats, e.g., NeXus (Klosowski et al., 1997) (neutron, x-ray, and muon data), OpenMSI (mass spectrometry imaging) (Rübel et al., 2013), CXIDB (Maia, 2012) (coherent x-ray imaging), or NetCDF (Rew and Davis, 1990) in combination with CF and COARDS metadata conventions for climate data, and many others. Application formats are commonly described by documents specifying the location and names of data items and often provide application-programmer interfaces (API) to facilitate reading and writing of format files. Some formats are further governed by formal, computer-readable, and verifiable specifications.
For example, NeXus uses the NXDL (NeXus International Advisory Committee, 2016) XML-based format and schema to define the nomenclature and arrangement of information in a NeXus data file. On the level of HDF5 groups, NeXus also uses the notion of Classes to define the fields that a group should contain in a reusable and extensible fashion.

The critical need for data standards in neuroscience has been recognized by several efforts over the course of the last several years (e.g., Sommer et al., 2016); however, much work remains. Here, our goal is to contribute to this discussion by providing much-needed methods and tools for the effective design of sustainable neuroscience data standards and by demonstrating the methods in practice through the design and implementation of a usable and extensible format with an initial focus on electrophysiology data.

The developers of the Klustakwik suite (Kadir et al., 2013, 2015) have proposed an HDF5-based data format for storage of spike sorting data. Orca (also called BORG) (Keith Godfrey, 2014) is an HDF5-based format developed by the Allen Institute for Brain Science designed to store electrophysiology and optophysiology data. The NIX (Stoewer et al., 2014) project has developed a set of standardized methods and models for storing electrophysiology and other neuroscience data together with their metadata in one common file format based on HDF5. Rather than an application-specific format, NIX defines highly generic models for data as well as for metadata that can be linked to terminologies (defined via odML) to provide a domain-specific context for elements. The open metadata Markup Language odML (Grewe et al., 2011) is a metadata markup language based on XML with the goal of defining and establishing an open and flexible format to transport neuroscience metadata. NeuroML (Gleeson et al., 2010) is also an XML-based format with a particular focus on defining and exchanging descriptions of neuronal cell and network models. The Neurodata Without Borders (NWB) (Teeters et al., 2015) initiative is a recent project with the specific goal "[…] to produce a unified data format for cellular-based neurophysiology data based on representative use cases initially from four laboratories—the Buzsaki group at NYU, the Svoboda group at Janelia Farm, the Meister group at Caltech, and the Allen Institute for Brain Science in Seattle." Members of the NIX, KWIK, Orca, BRAINformat, and other development teams were invited to and have contributed to the NWB effort. NWB has adopted concepts and methods from a range of these formats, including the here-described BRAINformat.
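As a concrete illustration of the HDF5 primitives (groups, datasets, and attributes) and of the self-describing storage discussed above, the following sketch writes and reads a toy electrophysiology file with the h5py library. The group names, attribute keys, and array shapes are illustrative assumptions only; they do not follow the BRAINformat, NWB, NIX, or any other specification mentioned here.

```python
import h5py
import numpy as np

# Create a small, self-describing HDF5 file using the three core primitives
# named above: groups, datasets, and attributes. All names and metadata
# fields are hypothetical, chosen only for this example.
with h5py.File("example_ephys.h5", "w") as f:
    session = f.create_group("session_01")
    session.attrs["subject"] = "demo_subject"
    session.attrs["sampling_rate_hz"] = 30000.0

    # A toy 4-channel, 1-second raw voltage trace.
    raw = session.create_dataset(
        "raw_voltage",
        data=np.random.randn(4, 30000).astype(np.float32),
        compression="gzip",  # chunked, compressed storage
    )
    raw.attrs["unit"] = "microvolts"
    raw.attrs["dims"] = "channel x sample"

# Reading the file back: the layout and metadata are discoverable without
# external documentation, which is what "self-describing" refers to.
with h5py.File("example_ephys.h5", "r") as f:
    dset = f["session_01/raw_voltage"]
    print(dset.shape, dset.attrs["unit"],
          f["session_01"].attrs["sampling_rate_hz"])
```

Because names, shapes, units, and other metadata travel inside the file itself, generic tools can inspect such a file without consulting external documentation; this is the property that the application-oriented formats surveyed above build on by additionally standardizing which groups, datasets, and attributes must be present.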
[ "20585541", "21941477", "25149694", "22543711", "22936162", "24087878", "26590340" ]
[ { "pmid": "20585541", "title": "NeuroML: a language for describing data driven models of neurons and networks with a high degree of biological detail.", "abstract": "Biologically detailed single neuron and network models are important for understanding how ion channels, synapses and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of these data-driven neuronal models, the specialized languages they employ are generally not interoperable, limiting model accessibility and preventing reuse of model components and cross-simulator validation. To overcome these problems we have used an Open Source software approach to develop NeuroML, a neuronal model description language based on XML (Extensible Markup Language). This enables these detailed models and their components to be defined in a standalone form, allowing them to be used across multiple simulators and archived in a standardized format. Here we describe the structure of NeuroML and demonstrate its scope by converting into NeuroML models of a number of different voltage- and ligand-gated conductances, models of electrical coupling, synaptic transmission and short-term plasticity, together with morphologically detailed models of individual neurons. We have also used these NeuroML-based components to develop an highly detailed cortical network model. NeuroML-based model descriptions were validated by demonstrating similar model behavior across five independently developed simulators. Although our results confirm that simulations run on different simulators converge, they reveal limits to model interoperability, by showing that for some models convergence only occurs at high levels of spatial and temporal discretisation, when the computational overhead is high. Our development of NeuroML as a common description language for biophysically detailed neuronal and network models enables interoperability across multiple simulation environments, thereby improving model transparency, accessibility and reuse in computational neuroscience." }, { "pmid": "21941477", "title": "A Bottom-up Approach to Data Annotation in Neurophysiology.", "abstract": "Metadata providing information about the stimulus, data acquisition, and experimental conditions are indispensable for the analysis and management of experimental data within a lab. However, only rarely are metadata available in a structured, comprehensive, and machine-readable form. This poses a severe problem for finding and retrieving data, both in the laboratory and on the various emerging public data bases. Here, we propose a simple format, the \"open metaData Markup Language\" (odML), for collecting and exchanging metadata in an automated, computer-based fashion. In odML arbitrary metadata information is stored as extended key-value pairs in a hierarchical structure. Central to odML is a clear separation of format and content, i.e., neither keys nor values are defined by the format. This makes odML flexible enough for storing all available metadata instantly without the necessity to submit new keys to an ontology or controlled terminology. Common standard keys can be defined in odML-terminologies for guaranteeing interoperability. We started to define such terminologies for neurophysiological data, but aim at a community driven extension and refinement of the proposed definitions. 
By customized terminologies that map to these standard terminologies, metadata can be named and organized as required or preferred without softening the standard. Together with the respective libraries provided for common programming languages, the odML format can be integrated into the laboratory workflow, facilitating automated collection of metadata information where it becomes available. The flexibility of odML also encourages a community driven collection and definition of terms used for annotating data in the neurosciences." }, { "pmid": "25149694", "title": "High-dimensional cluster analysis with the masked EM algorithm.", "abstract": "Cluster analysis faces two problems in high dimensions: the \"curse of dimensionality\" that can lead to overfitting and poor generalization performance and the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of spike sorting for next-generation, high-channel-count neural probes. In this problem, only a small subset of features provides information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a \"masked EM\" algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data and to real-world high-channel-count spike sorting data." }, { "pmid": "22543711", "title": "Resolving brain regions using nanostructure initiator mass spectrometry imaging of phospholipids.", "abstract": "In a variety of neurological diseases, pathological progression is cell type and region specific. Previous reports suggest that mass spectrometry imaging has the potential to differentiate between brain regions enriched in specific cell types. Here, we utilized a matrix-free surface mass spectrometry approach, nanostructure initiator mass spectrometry (NIMS), to show that spatial distributions of multiple lipids can be used as a 'fingerprint' to discriminate between neuronal- and glial- enriched brain regions. In addition, glial cells from different brain regions can be distinguished based on unique lipid profiles. NIMS images were generated from sagittal brain sections and were matched with immunostained serial sections to define glial cell enriched areas. Tandem mass spectrometry (LC-MS/MS QTOF) on whole brain extracts was used to identify 18 phospholipids. Multivariate statistical analysis (Nonnegative Matrix Factorization) enhanced differentiation of brain regions and cell populations compared to single ion imaging methods. This analysis resolved brain regions that are difficult to distinguish using conventional stains but are known to have distinct physiological functions. This method accurately distinguished the frontal (or somatomotor) and dorsal (or retrosplenial) regions of the cortex from each other and from the pons region." }, { "pmid": "24087878", "title": "OpenMSI: a high-performance web-based platform for mass spectrometry imaging.", "abstract": "Mass spectrometry imaging (MSI) enables researchers to directly probe endogenous molecules directly within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. 
Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements and are critical enablers of rapid data I/O. The OpenMSI file format has shown to provide >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely to facilitate high-performance data analysis and enable implementation of Web based data sharing, visualization, and analysis." }, { "pmid": "26590340", "title": "Neurodata Without Borders: Creating a Common Data Format for Neurophysiology.", "abstract": "The Neurodata Without Borders (NWB) initiative promotes data standardization in neuroscience to increase research reproducibility and opportunities. In the first NWB pilot project, neurophysiologists and software developers produced a common data format for recordings and metadata of cellular electrophysiology and optical imaging experiments. The format specification, application programming interfaces, and sample datasets have been released." } ]
JMIR Public Health and Surveillance
27765731
PMC5095368
10.2196/publichealth.5901
Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: A Comparative Analysis
Background: Traditional influenza surveillance relies on influenza-like illness (ILI) syndrome that is reported by health care providers. It primarily captures individuals who seek medical care and misses those who do not. Recently, Web-based data sources have been studied for application to public health surveillance, as there is a growing number of people who search, post, and tweet about their illnesses before seeking medical care. Existing research has shown some promise of using data from Google, Twitter, and Wikipedia to complement traditional surveillance for ILI. However, past studies have evaluated these Web-based sources individually or dually without comparing all 3 of them, and it would be beneficial to know which of the Web-based sources performs best and therefore merits consideration as a complement to traditional methods.

Objective: The objective of this study is to comparatively analyze Google, Twitter, and Wikipedia by examining which best corresponds with Centers for Disease Control and Prevention (CDC) ILI data. It was hypothesized that Wikipedia would best correspond with CDC ILI data, as previous research found it to be least influenced by high media coverage in comparison with Google and Twitter.

Methods: Publicly available, deidentified data were collected from the CDC, Google Flu Trends, HealthTweets, and Wikipedia for the 2012-2015 influenza seasons. Bayesian change point analysis was used to detect seasonal changes, or change points, in each of the data sources. Change points in Google, Twitter, and Wikipedia data that occurred during the same week as, the week before, or the week after the CDC's change points were counted as matches, with the CDC data serving as the gold standard. All analyses were conducted using the R package "bcp" version 4.0.0 in RStudio version 0.99.484 (RStudio Inc). In addition, sensitivity and positive predictive values (PPV) were calculated for Google, Twitter, and Wikipedia.

Results: During the 2012-2015 influenza seasons, a high sensitivity of 92% was found for Google, whereas the PPV for Google was 85%. A low sensitivity of 50% was calculated for Twitter; a low PPV of 43% was found for Twitter also. Wikipedia had the lowest sensitivity of 33% and lowest PPV of 40%.

Conclusions: Of the 3 Web-based sources, Google had the best combination of sensitivity and PPV in detecting Bayesian change points in influenza-related data streams. Findings demonstrated that change points in Google, Twitter, and Wikipedia data occasionally aligned well with change points captured in CDC ILI data, yet these sources did not detect all changes in CDC data and should be further studied and developed.
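The matching of change points within a one-week tolerance and the resulting sensitivity and PPV calculations described in the Methods can be sketched as follows. This is a schematic re-implementation for illustration only: the study itself detected change points with the R package "bcp", the definitions below are one plausible reading of the sensitivity/PPV computation, and the week indices are hypothetical rather than the study's data.

```python
def match_change_points(source_weeks, cdc_weeks, tolerance=1):
    """Match Web-source change points to CDC change points within +/- `tolerance`
    weeks. Assumed definitions: sensitivity = fraction of CDC change points with
    a matching source change point; PPV = fraction of source change points that
    match a CDC change point. Calendar wrap-around (week 52 -> 1) is ignored
    here for simplicity.
    """
    matched_cdc = set()
    true_pos = 0
    for w in source_weeks:
        hit = next((c for c in cdc_weeks
                    if abs(w - c) <= tolerance and c not in matched_cdc), None)
        if hit is not None:
            matched_cdc.add(hit)
            true_pos += 1
    sensitivity = len(matched_cdc) / len(cdc_weeks) if cdc_weeks else 0.0
    ppv = true_pos / len(source_weeks) if source_weeks else 0.0
    return sensitivity, ppv

if __name__ == "__main__":
    # Hypothetical week indices of detected change points (not the study's data).
    cdc = [48, 2, 10, 20]      # CDC ILI change points (week numbers)
    google = [47, 2, 11, 25]   # Web-source change points
    sens, ppv = match_change_points(google, cdc)
    print(f"sensitivity={sens:.2f}, PPV={ppv:.2f}")
```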
Related Work

As the number of Internet users has increased [11], researchers have identified the use of Google, Twitter, and Wikipedia as novel surveillance approaches to complement traditional methods. Google Flu Trends, which monitors Google users' searches for information related to influenza, has shown correlation with CDC influenza data, while delivering estimates 1 to 2 weeks ahead of CDC reports [8,12]. Although initially successful, the system has not been without its issues in more recent years. Google Flu Trends overestimated influenza activity during the 2012-2013 influenza season and underestimated it during the 2009 H1N1 influenza pandemic [13-16]. One study found that both the original (2008) and revised (2009) algorithms for Google Flu Trends were not reliable on city, regional, and national scales, particularly in instances of varying intensity in influenza seasons and media coverage [16]. Due to issues with its proprietary algorithm, Google Flu Trends was discontinued in August 2015 [17].

Influenza-related posts on Twitter, a social networking platform for disseminating short messages (tweets), have shown high correlation with reported ILI activity in ILINet [18,19]. Studies have found that Twitter data highly correlate with national- and city-level ILI counts [20]. Signorini et al (2011) also demonstrated that tweets could be used to estimate ILI activity at regional and national levels within a reasonable margin of error [21]. Moreover, studies have found that Twitter data perform better than Google data. Nagar et al (2014) conducted a study showing that tweets better reflected city-level ILI incidence in comparison with Google search queries [22]. Aramaki et al discovered that a Twitter-based model outperformed a Google-based model during periods of normal news coverage, although the Twitter model performed less optimally during the periods of excessive media coverage [23]. Moreover, geographic granularity can affect the performance of Twitter data. Broniatowski et al (2015) found that city-level Twitter data performed better than state- and national-level Twitter data, although Google Flu Trends data performed better at each level [24].

Wikipedia page view data have proven valuable for tracking trending topics as well as disease monitoring and forecasting [25,26]. McIver and Brownstein (2014) reported that increases in the quantity of visits to influenza-related Wikipedia articles allowed for the estimation of influenza activity up to 2 weeks before ILINet, outperforming Google Flu Trends estimates during abnormal influenza seasons and periods of high media reporting [27]. One study found that Wikipedia page view data have suitable forecasting value up until the peak of the influenza seasons [26], whereas another study also reported that Wikipedia page view data are suitable for forecasting using a 28-day analysis as well as for nowcasting, or monitoring current disease incidence [25]. However, as a disadvantage, the signal-to-noise ratio of Wikipedia data can be problematic [25] as Wikipedia has become a preferred source for seeking health information whether an individual is ill or not [28,29]. In addition, unlike the granularity flexibility of Google and Twitter data, Wikipedia does not have such capability of evaluating influenza activity at local or regional levels because it only provides counts of page views and no accompanying location or user information in its publicly available data.
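Many of the studies summarized above report their results as Pearson correlations between a weekly Web-based signal and reported ILI rates. The snippet below shows that computation on two short, entirely hypothetical weekly series; it is not based on any of the cited datasets and is included only to make the reported correlation metric concrete.

```python
import numpy as np

# Hypothetical weekly series: a Web-source activity signal (e.g., normalized
# query or tweet volume) and reported ILI rates over the same weeks.
web_signal = np.array([0.8, 1.1, 1.9, 3.2, 4.5, 3.8, 2.4, 1.3])
ili_rate = np.array([1.0, 1.2, 2.1, 3.0, 4.2, 4.0, 2.6, 1.5])

# Pearson correlation coefficient, the metric the studies above report as r.
r = np.corrcoef(web_signal, ili_rate)[0, 1]
print(f"Pearson r = {r:.2f}")
```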
[ "25835538", "20798667", "15714620", "23896182", "19329408", "22844241", "19020500", "21886802", "23407515", "24626916", "24146603", "25406040", "24349542", "21573238", "25331122", "27014744", "25392913", "25974758", "24743682", "19390105", "21827326", "23760189", "24898165", "26042650", "18667443", "21199944", "22759619", "23037553", "26392678", "25374786", "26420469", "26226068", "26082941", "26513245" ]
[ { "pmid": "25835538", "title": "Estimating influenza attack rates in the United States using a participatory cohort.", "abstract": "We considered how participatory syndromic surveillance data can be used to estimate influenza attack rates during the 2012-2013 and 2013-2014 seasons in the United States. Our inference is based on assessing the difference in the rates of self-reported influenza-like illness (ILI, defined as presence of fever and cough/sore throat) among the survey participants during periods of active vs. low influenza circulation as well as estimating the probability of self-reported ILI for influenza cases. Here, we combined Flu Near You data with additional sources (Hong Kong household studies of symptoms of influenza cases and the U.S. Centers for Disease Control and Prevention estimates of vaccine coverage and effectiveness) to estimate influenza attack rates. The estimated influenza attack rate for the early vaccinated Flu Near You members (vaccination reported by week 45) aged 20-64 between calendar weeks 47-12 was 14.7%(95% CI(5.9%,24.1%)) for the 2012-2013 season and 3.6%(-3.3%,10.3%) for the 2013-2014 season. The corresponding rates for the US population aged 20-64 were 30.5% (4.4%, 49.3%) in 2012-2013 and 7.1%(-5.1%, 32.5%) in 2013-2014. The attack rates in women and men were similar each season. Our findings demonstrate that participatory syndromic surveillance data can be used to gauge influenza attack rates during future influenza seasons." }, { "pmid": "20798667", "title": "Estimates of deaths associated with seasonal influenza --- United States, 1976-2007.", "abstract": "Influenza infections are associated with thousands of deaths every year in the United States, with the majority of deaths from seasonal influenza occurring among adults aged >or=65 years. For several decades, CDC has made annual estimates of influenza-associated deaths, which have been used in influenza research and to develop influenza control and prevention policy. To update previously published estimates of the numbers and rates of influenza-associated deaths during 1976-2003 by adding four influenza seasons through 2006-07, CDC used statistical models with data from death certificate reports. National mortality data for two categories of underlying cause of death codes, pneumonia and influenza causes and respiratory and circulatory causes, were used in regression models to estimate lower and upper bounds for the number of influenza-associated deaths. Estimates by seasonal influenza virus type and subtype were examined to determine any association between virus type and subtype and the number of deaths in a season. This report summarizes the results of these analyses, which found that, during 1976-2007, estimates of annual influenza-associated deaths from respiratory and circulatory causes (including pneumonia and influenza causes) ranged from 3,349 in 1986-87 to 48,614 in 2003-04. The annual rate of influenza-associated death in the United States overall during this period ranged from 1.4 to 16.7 deaths per 100,000 persons. The findings also indicated the wide variation in the estimated number of deaths from season to season was closely related to the particular influenza virus types and subtypes in circulation." }, { "pmid": "15714620", "title": "What is syndromic surveillance?", "abstract": "Innovative electronic surveillance systems are being developed to improve early detection of outbreaks attributable to biologic terrorism or other causes. 
A review of the rationale, goals, definitions, and realistic expectations for these surveillance systems is a crucial first step toward establishing a framework for further research and development in this area. This commentary provides such a review for current syndromic surveillance systems. Syndromic surveillance has been used for early detection of outbreaks, to follow the size, spread, and tempo of outbreaks, to monitor disease trends, and to provide reassurance that an outbreak has not occurred. Syndromic surveillance systems seek to use existing health data in real time to provide immediate analysis and feedback to those charged with investigation and follow-up of potential outbreaks. Optimal syndrome definitions for continuous monitoring and specific data sources best suited to outbreak surveillance for specific diseases have not been determined. Broadly applicable signal-detection methodologies and response protocols that would maximize detection while preserving scant resources are being sought. Stakeholders need to understand the advantages and limitations of syndromic surveillance systems. Syndromic surveillance systems might enhance collaboration among public health agencies, health-care providers, information-system professionals, academic investigators, and industry. However, syndromic surveillance does not replace traditional public health surveillance, nor does it substitute for direct physician reporting of unusual or suspect cases of public health importance." }, { "pmid": "23896182", "title": "Scoping review on search queries and social media for disease surveillance: a chronology of innovation.", "abstract": "BACKGROUND\nThe threat of a global pandemic posed by outbreaks of influenza H5N1 (1997) and Severe Acute Respiratory Syndrome (SARS, 2002), both diseases of zoonotic origin, provoked interest in improving early warning systems and reinforced the need for combining data from different sources. It led to the use of search query data from search engines such as Google and Yahoo! as an indicator of when and where influenza was occurring. This methodology has subsequently been extended to other diseases and has led to experimentation with new types of social media for disease surveillance.\n\n\nOBJECTIVE\nThe objective of this scoping review was to formally assess the current state of knowledge regarding the use of search queries and social media for disease surveillance in order to inform future work on early detection and more effective mitigation of the effects of foodborne illness.\n\n\nMETHODS\nStructured scoping review methods were used to identify, characterize, and evaluate all published primary research, expert review, and commentary articles regarding the use of social media in surveillance of infectious diseases from 2002-2011.\n\n\nRESULTS\nThirty-two primary research articles and 19 reviews and case studies were identified as relevant. Most relevant citations were peer-reviewed journal articles (29/32, 91%) published in 2010-11 (28/32, 88%) and reported use of a Google program for surveillance of influenza. Only four primary research articles investigated social media in the context of foodborne disease or gastroenteritis. Most authors (21/32 articles, 66%) reported that social media-based surveillance had comparable performance when compared to an existing surveillance program. The most commonly reported strengths of social media surveillance programs included their effectiveness (21/32, 66%) and rapid detection of disease (21/32, 66%). 
The most commonly reported weaknesses were the potential for false positive (16/32, 50%) and false negative (11/32, 34%) results. Most authors (24/32, 75%) recommended that social media programs should primarily be used to support existing surveillance programs.\n\n\nCONCLUSIONS\nThe use of search queries and social media for disease surveillance are relatively recent phenomena (first reported in 2006). Both the tools themselves and the methodologies for exploiting them are evolving over time. While their accuracy, speed, and cost compare favorably with existing surveillance systems, the primary challenge is to refine the data signal by reducing surrounding noise. Further developments in digital disease surveillance have the potential to improve sensitivity and specificity, passively through advances in machine learning and actively through engagement of users. Adoption, even as supporting systems for existing surveillance, will entail a high level of familiarity with the tools and collaboration across jurisdictions." }, { "pmid": "19329408", "title": "Infodemiology and infoveillance: framework for an emerging set of public health informatics methods to analyze search, communication and publication behavior on the Internet.", "abstract": "Infodemiology can be defined as the science of distribution and determinants of information in an electronic medium, specifically the Internet, or in a population, with the ultimate aim to inform public health and public policy. Infodemiology data can be collected and analyzed in near real time. Examples for infodemiology applications include the analysis of queries from Internet search engines to predict disease outbreaks (eg. influenza), monitoring peoples' status updates on microblogs such as Twitter for syndromic surveillance, detecting and quantifying disparities in health information availability, identifying and monitoring of public health relevant publications on the Internet (eg. anti-vaccination sites, but also news articles or expert-curated outbreak reports), automated tools to measure information diffusion and knowledge translation, and tracking the effectiveness of health marketing campaigns. Moreover, analyzing how people search and navigate the Internet for health-related information, as well as how they communicate and share this information, can provide valuable insights into health-related behavior of populations. Seven years after the infodemiology concept was first introduced, this paper revisits the emerging fields of infodemiology and infoveillance and proposes an expanded framework, introducing some basic metrics such as information prevalence, concept occurrence ratios, and information incidence. The framework distinguishes supply-based applications (analyzing what is being published on the Internet, eg. on Web sites, newsgroups, blogs, microblogs and social media) from demand-based methods (search and navigation behavior), and further distinguishes passive from active infoveillance methods. Infodemiology metrics follow population health relevant events or predict them. Thus, these metrics and methods are potentially useful for public health practice and research, and should be further developed and standardized." }, { "pmid": "22844241", "title": "Digital epidemiology.", "abstract": "Mobile, social, real-time: the ongoing revolution in the way people communicate has given rise to a new kind of epidemiology. 
Digital data sources, when harnessed appropriately, can provide local and timely information about disease and health dynamics in populations around the world. The rapid, unprecedented increase in the availability of relevant data from various digital sources creates considerable technical and computational challenges." }, { "pmid": "19020500", "title": "Detecting influenza epidemics using search engine query data.", "abstract": "Seasonal influenza epidemics are a major public health concern, causing tens of millions of respiratory illnesses and 250,000 to 500,000 deaths worldwide each year. In addition to seasonal influenza, a new strain of influenza virus against which no previous immunity exists and that demonstrates human-to-human transmission could result in a pandemic with millions of fatalities. Early detection of disease activity, when followed by a rapid response, can reduce the impact of both seasonal and pandemic influenza. One way to improve early detection is to monitor health-seeking behaviour in the form of queries to online search engines, which are submitted by millions of users around the world each day. Here we present a method of analysing large numbers of Google search queries to track influenza-like illness in a population. Because the relative frequency of certain queries is highly correlated with the percentage of physician visits in which a patient presents with influenza-like symptoms, we can accurately estimate the current level of weekly influenza activity in each region of the United States, with a reporting lag of about one day. This approach may make it possible to use search queries to detect influenza epidemics in areas with a large population of web search users." }, { "pmid": "21886802", "title": "Assessing Google flu trends performance in the United States during the 2009 influenza virus A (H1N1) pandemic.", "abstract": "BACKGROUND\nGoogle Flu Trends (GFT) uses anonymized, aggregated internet search activity to provide near-real time estimates of influenza activity. GFT estimates have shown a strong correlation with official influenza surveillance data. The 2009 influenza virus A (H1N1) pandemic [pH1N1] provided the first opportunity to evaluate GFT during a non-seasonal influenza outbreak. In September 2009, an updated United States GFT model was developed using data from the beginning of pH1N1.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe evaluated the accuracy of each U.S. GFT model by comparing weekly estimates of ILI (influenza-like illness) activity with the U.S. Outpatient Influenza-like Illness Surveillance Network (ILINet). For each GFT model we calculated the correlation and RMSE (root mean square error) between model estimates and ILINet for four time periods: pre-H1N1, Summer H1N1, Winter H1N1, and H1N1 overall (Mar 2009-Dec 2009). We also compared the number of queries, query volume, and types of queries (e.g., influenza symptoms, influenza complications) in each model. Both models' estimates were highly correlated with ILINet pre-H1N1 and over the entire surveillance period, although the original model underestimated the magnitude of ILI activity during pH1N1. The updated model was more correlated with ILINet than the original model during Summer H1N1 (r = 0.95 and 0.29, respectively). 
The updated model included more search query terms than the original model, with more queries directly related to influenza infection, whereas the original model contained more queries related to influenza complications.\n\n\nCONCLUSIONS\nInternet search behavior changed during pH1N1, particularly in the categories \"influenza complications\" and \"term for influenza.\" The complications associated with pH1N1, the fact that pH1N1 began in the summer rather than winter, and changes in health-seeking behavior each may have played a part. Both GFT models performed well prior to and during pH1N1, although the updated model performed better during pH1N1, especially during the summer months." }, { "pmid": "24146603", "title": "Reassessing Google Flu Trends data for detection of seasonal and pandemic influenza: a comparative epidemiological study at three geographic scales.", "abstract": "The goal of influenza-like illness (ILI) surveillance is to determine the timing, location and magnitude of outbreaks by monitoring the frequency and progression of clinical case incidence. Advances in computational and information technology have allowed for automated collection of higher volumes of electronic data and more timely analyses than previously possible. Novel surveillance systems, including those based on internet search query data like Google Flu Trends (GFT), are being used as surrogates for clinically-based reporting of influenza-like-illness (ILI). We investigated the reliability of GFT during the last decade (2003 to 2013), and compared weekly public health surveillance with search query data to characterize the timing and intensity of seasonal and pandemic influenza at the national (United States), regional (Mid-Atlantic) and local (New York City) levels. We identified substantial flaws in the original and updated GFT models at all three geographic scales, including completely missing the first wave of the 2009 influenza A/H1N1 pandemic, and greatly overestimating the intensity of the A/H3N2 epidemic during the 2012/2013 season. These results were obtained for both the original (2008) and the updated (2009) GFT algorithms. The performance of both models was problematic, perhaps because of changes in internet search behavior and differences in the seasonality, geographical heterogeneity and age-distribution of the epidemics between the periods of GFT model-fitting and prospective use. We conclude that GFT data may not provide reliable surveillance for seasonal or pandemic influenza and should be interpreted with caution until the algorithm can be improved and evaluated. Current internet search query data are no substitute for timely local clinical and laboratory surveillance, or national surveillance based on local data collection. New generation surveillance systems such as GFT should incorporate the use of near-real time electronic health data and computational methods for continued model-fitting and ongoing evaluation and improvement." }, { "pmid": "25406040", "title": "The reliability of tweets as a supplementary method of seasonal influenza surveillance.", "abstract": "BACKGROUND\nExisting influenza surveillance in the United States is focused on the collection of data from sentinel physicians and hospitals; however, the compilation and distribution of reports are usually delayed by up to 2 weeks. With the popularity of social media growing, the Internet is a source for syndromic surveillance due to the availability of large amounts of data. 
Frontiers in Neuroscience
27877107
PMC5099523
10.3389/fnins.2016.00508
Training Deep Spiking Neural Networks Using Backpropagation
Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique that treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are treated as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to that of conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
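To make the central idea of the abstract concrete, the following is a minimal Python/NumPy sketch of one way to treat the membrane potential as the differentiable quantity while regarding the spike/reset discontinuity as noise. The single-layer setup, the boxcar pseudo-derivative, and all constants are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): one integrate-and-fire layer plus an
# approximate gradient that ignores the reset discontinuity and differentiates
# through the membrane potential near threshold. All names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def if_forward(spikes_in, w, v, threshold=1.0):
    """One time step of a non-leaky integrate-and-fire layer."""
    v_pre = v + w @ spikes_in                # membrane potential before reset
    spikes_out = (v_pre >= threshold).astype(float)
    v_post = v_pre - spikes_out * threshold  # reset by subtraction after a spike
    return v_post, spikes_out, v_pre

def if_backward(grad_out, spikes_in, v_pre, threshold=1.0):
    """Approximate gradient: treat the reset discontinuity as noise and pass
    gradients through the membrane potential only near threshold
    (a simple boxcar pseudo-derivative)."""
    surrogate = (np.abs(v_pre - threshold) < 0.5 * threshold).astype(float)
    grad_v = grad_out * surrogate            # dL/dv for each output neuron
    return np.outer(grad_v, spikes_in)       # dL/dw via the accumulated input

# Toy usage: push a Poisson-like spike train through the layer and accumulate
# a weight gradient that nudges the output spike rate toward a target rate.
n_in, n_out, T = 20, 5, 50
w = rng.normal(0.0, 0.3, size=(n_out, n_in))
v = np.zeros(n_out)
target_rate = 0.2
grad_w_total = np.zeros_like(w)

for _ in range(T):
    x = (rng.random(n_in) < 0.3).astype(float)   # Poisson-like input spikes
    v, s, v_pre = if_forward(x, w, v)
    grad_out = s - target_rate                   # simple rate-matching error
    grad_w_total += if_backward(grad_out, x, v_pre)

w -= 0.01 * grad_w_total / T                     # one gradient step
print("mean weight update magnitude:", np.abs(grad_w_total / T).mean())
```

In a full network, the same per-step gradients would be propagated backwards through layers (and, with a suitable loss, through time), which is what allows the method to operate directly on spike signals and membrane potentials rather than on rate approximations.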
1.1. Related work
Gradient descent methods for SNNs have not been deeply investigated because both spike trains and the underlying membrane potentials are not differentiable at the time of spikes. The most successful approaches to date have used indirect methods, such as training a network in the continuous rate domain and converting it into a spiking version. O'Connor et al. (2013) pioneered this area by training a spiking deep belief network based on the Siegert event-rate approximation model. However, on the MNIST handwritten digit classification task (LeCun et al., 1998), which is nowadays almost perfectly solved by ANNs (0.21% error rate in Wan et al., 2013), their approach only reached an accuracy of around 94.09%. Hunsberger and Eliasmith (2015) used the softened rate model, in which the hard threshold in the response function of a leaky integrate-and-fire (LIF) neuron is replaced with a continuous differentiable function to make it amenable to backpropagation. After training an ANN with the rate model, they converted it into an SNN consisting of LIF neurons. With the help of pre-training based on denoising autoencoders, they achieved 98.6% in the permutation-invariant (PI) MNIST task (see Section 3.1). Diehl et al. (2015) trained deep neural networks with conventional deep learning techniques and additional constraints necessary for conversion to SNNs. After training, the ANN units were converted into non-leaky spiking neurons and the performance was optimized by normalizing the weight parameters. This approach resulted in the current state-of-the-art SNN accuracy of 98.64% in the PI MNIST task. Esser et al. (2015) used a differentiable probabilistic spiking neuron model for training and statistically sampled the trained network for deployment. In all of these methods, training was performed indirectly using continuous signals, which may not capture important statistics of the spikes generated by real sensors used during processing. Even though SNNs are well-suited for processing signals from event-based sensors such as the Dynamic Vision Sensor (DVS) (Lichtsteiner et al., 2008), these previous SNN training models require removing time information and generating image frames from the event streams. Instead, in this article we use the same signal format for training and processing deep SNNs, and can thus train SNNs directly on spatio-temporal event streams while accounting for non-ideal factors such as pixel variation in sensors. This is demonstrated on the neuromorphic N-MNIST benchmark dataset (Orchard et al., 2015), achieving higher accuracy with fewer neurons than all previous attempts that ignored spike timing by using event-rate approximation models for training.
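As a rough illustration of the indirect, conversion-based route discussed above (rate-domain training followed by mapping ReLU units onto integrate-and-fire neurons with data-based weight normalization, in the spirit of the conversion approaches cited), here is a small Python/NumPy sketch. The function names, the single-layer setting, and the Poisson input encoding are assumptions made for illustration, not code from any of the cited works.

```python
# Hedged sketch of ANN-to-SNN conversion: a trained ReLU layer is rescaled by
# the maximum activation seen on training data, then run as non-leaky
# integrate-and-fire neurons driven by Poisson-encoded inputs.
import numpy as np

def relu_layer(x, w):
    return np.maximum(0.0, w @ x)

def normalize_weights(w, train_inputs):
    """Scale weights so the largest ReLU activation over the training set is 1,
    keeping converted IF neurons from needing more than one spike per step."""
    max_act = max(relu_layer(x, w).max() for x in train_inputs)
    return w / max_act if max_act > 0 else w

def run_converted_snn(x_rates, w, T=200, threshold=1.0):
    """Drive converted IF neurons with Poisson spikes at rates x_rates and
    return output spike rates, which approximate the ReLU activations."""
    rng = np.random.default_rng(1)
    v = np.zeros(w.shape[0])
    spike_count = np.zeros(w.shape[0])
    for _ in range(T):
        s_in = (rng.random(x_rates.shape) < x_rates).astype(float)
        v += w @ s_in
        s_out = (v >= threshold).astype(float)
        v -= s_out * threshold              # reset by subtraction
        spike_count += s_out
    return spike_count / T

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=(4, 10))      # stand-in for a trained ReLU layer
train_inputs = [rng.random(10) for _ in range(100)]
w_norm = normalize_weights(w, train_inputs)

x = rng.random(10) * 0.5                    # inputs treated as firing probabilities
print("ReLU activations :", np.round(relu_layer(x, w_norm), 3))
print("SNN spike rates  :", np.round(run_converted_snn(x, w_norm), 3))
```

With the normalization in place, the output spike rates of the converted layer approximate the ReLU activations of the trained layer; this is what makes conversion accurate, but it also ties the resulting SNN to rate statistics rather than to the precise spike timing that event-based sensors provide, which is the limitation the direct training approach described in this article aims to remove.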
[ "25910252", "26617512", "27199646", "26941637", "27651489", "26177908", "25333112", "26017442", "17305422", "25104385", "24574952", "24115919", "26217169", "26635513", "19548795", "18439138", "27683554" ]
[ { "pmid": "25910252", "title": "Turn Down That Noise: Synaptic Encoding of Afferent SNR in a Single Spiking Neuron.", "abstract": "We have added a simplified neuromorphic model of Spike Time Dependent Plasticity (STDP) to the previously described Synapto-dendritic Kernel Adapting Neuron (SKAN), a hardware efficient neuron model capable of learning spatio-temporal spike patterns. The resulting neuron model is the first to perform synaptic encoding of afferent signal-to-noise ratio in addition to the unsupervised learning of spatio-temporal spike patterns. The neuron model is particularly suitable for implementation in digital neuromorphic hardware as it does not use any complex mathematical operations and uses a novel shift-based normalization approach to achieve synaptic homeostasis. The neuron's noise compensation properties are characterized and tested on random spatio-temporal spike patterns as well as a noise corrupted subset of the zero images of the MNIST handwritten digit dataset. Results show the simultaneously learning common patterns in its input data while dynamically weighing individual afferents based on their signal to noise ratio. Despite its simplicity the interesting behaviors of the neuron model and the resulting computational power may also offer insights into biological systems." }, { "pmid": "26617512", "title": "Models of Metaplasticity: A Review of Concepts.", "abstract": "Part of hippocampal and cortical plasticity is characterized by synaptic modifications that depend on the joint activity of the pre- and post-synaptic neurons. To which extent those changes are determined by the exact timing and the average firing rates is still a matter of debate; this may vary from brain area to brain area, as well as across neuron types. However, it has been robustly observed both in vitro and in vivo that plasticity itself slowly adapts as a function of the dynamical context, a phenomena commonly referred to as metaplasticity. An alternative concept considers the regulation of groups of synapses with an objective at the neuronal level, for example, maintaining a given average firing rate. In that case, the change in the strength of a particular synapse of the group (e.g., due to Hebbian learning) affects others' strengths, which has been coined as heterosynaptic plasticity. Classically, Hebbian synaptic plasticity is paired in neuron network models with such mechanisms in order to stabilize the activity and/or the weight structure. Here, we present an oriented review that brings together various concepts from heterosynaptic plasticity to metaplasticity, and show how they interact with Hebbian-type learning. We focus on approaches that are nowadays used to incorporate those mechanisms to state-of-the-art models of spiking plasticity inspired by experimental observations in the hippocampus and cortex. Making the point that metaplasticity is an ubiquitous mechanism acting on top of classical Hebbian learning and promoting the stability of neural function over multiple timescales, we stress the need for incorporating it as a key element in the framework of plasticity models. Bridging theoretical and experimental results suggests a more functional role for metaplasticity mechanisms than simply stabilizing neural activity." 
}, { "pmid": "27199646", "title": "Skimming Digits: Neuromorphic Classification of Spike-Encoded Images.", "abstract": "The growing demands placed upon the field of computer vision have renewed the focus on alternative visual scene representations and processing paradigms. Silicon retinea provide an alternative means of imaging the visual environment, and produce frame-free spatio-temporal data. This paper presents an investigation into event-based digit classification using N-MNIST, a neuromorphic dataset created with a silicon retina, and the Synaptic Kernel Inverse Method (SKIM), a learning method based on principles of dendritic computation. As this work represents the first large-scale and multi-class classification task performed using the SKIM network, it explores different training patterns and output determination methods necessary to extend the original SKIM method to support multi-class problems. Making use of SKIM networks applied to real-world datasets, implementing the largest hidden layer sizes and simultaneously training the largest number of output neurons, the classification system achieved a best-case accuracy of 92.87% for a network containing 10,000 hidden layer neurons. These results represent the highest accuracies achieved against the dataset to date and serve to validate the application of the SKIM method to event-based visual classification tasks. Additionally, the study found that using a square pulse as the supervisory training signal produced the highest accuracy for most output determination methods, but the results also demonstrate that an exponential pattern is better suited to hardware implementations as it makes use of the simplest output determination method based on the maximum value." }, { "pmid": "26941637", "title": "Unsupervised learning of digit recognition using spike-timing-dependent plasticity.", "abstract": "In order to understand how the mammalian neocortex is performing computations, two things are necessary; we need to have a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Therefore, in recent years there is an increasing interest in how spiking neural networks (SNN) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs which use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to a SNN. We present a SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. 
Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks." }, { "pmid": "27651489", "title": "Convolutional networks for fast, energy-efficient neuromorphic computing.", "abstract": "Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer." }, { "pmid": "26177908", "title": "An Improved Method for Predicting Linear B-cell Epitope Using Deep Maxout Networks.", "abstract": "To establish a relation between an protein amino acid sequence and its tendencies to generate antibody response, and to investigate an improved in silico method for linear B-cell epitope (LBE) prediction. We present a sequence-based LBE predictor developed using deep maxout network (DMN) with dropout training techniques. A graphics processing unit (GPU) was used to reduce the training time of the model. A 10-fold cross-validation test on a large, non-redundant and experimentally verified dataset (Lbtope_Fixed_ non_redundant) was performed to evaluate the performance. DMN-LBE achieved an accuracy of 68.33% and an area under the receiver operating characteristic curve (AUC) of 0.743, outperforming other prediction methods in the field. A web server, DMN-LBE, of the improved prediction model has been provided for public free use. We anticipate that DMN-LBE will be beneficial to vaccine development, antibody production, disease diagnosis, and therapy." }, { "pmid": "25333112", "title": "Improved reconstruction of 4D-MR images by motion predictions.", "abstract": "The reconstruction of 4D images from 2D navigator and data slices requires sufficient observations per motion state to avoid blurred images and motion artifacts between slices. Especially images from rare motion states, like deep inhalations during free-breathing, suffer from too few observations. To address this problem, we propose to actively generate more suitable images instead of only selecting from the available images. The method is based on learning the relationship between navigator and data-slice motion by linear regression after dimensionality reduction. This can then be used to predict new data slices for a given navigator by warping existing data slices by their predicted displacement field. 
The method was evaluated for 4D-MRIs of the liver under free-breathing, where sliding boundaries pose an additional challenge for image registration. Leave-one-out tests for five short sequences of ten volunteers showed that the proposed prediction method improved on average the residual mean (95%) motion between the ground truth and predicted data slice from 0.9mm (1.9mm) to 0.8mm (1.6mm) in comparison to the best selection method. The approach was particularly suited for unusual motion states, where the mean error was reduced by 40% (2.2mm vs. 1.3mm)." }, { "pmid": "26017442", "title": "Deep learning.", "abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech." }, { "pmid": "17305422", "title": "Unsupervised learning of visual features through spike timing dependent plasticity.", "abstract": "Spike timing dependent plasticity (STDP) is a learning rule that modifies synaptic strength as a function of the relative timing of pre- and postsynaptic spikes. When a neuron is repeatedly presented with similar inputs, STDP is known to have the effect of concentrating high synaptic weights on afferents that systematically fire early, while postsynaptic spike latencies decrease. Here we use this learning rule in an asynchronous feedforward spiking neural network that mimics the ventral visual pathway and shows that when the network is presented with natural images, selectivity to intermediate-complexity visual features emerges. Those features, which correspond to prototypical patterns that are both salient and consistently present in the images, are highly informative and enable robust object recognition, as demonstrated on various classification tasks. Taken together, these results show that temporal codes may be a key to understanding the phenomenal processing speed achieved by the visual system and that STDP can lead to fast and selective responses." }, { "pmid": "25104385", "title": "Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.", "abstract": "Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. 
With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts." }, { "pmid": "24574952", "title": "Event-driven contrastive divergence for spiking neuromorphic systems.", "abstract": "Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train a RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality." }, { "pmid": "24115919", "title": "Real-time classification and sensor fusion with a spiking deep belief network.", "abstract": "Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. 
By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input." }, { "pmid": "26217169", "title": "Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.", "abstract": "Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time." }, { "pmid": "26635513", "title": "Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.", "abstract": "Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches." 
}, { "pmid": "19548795", "title": "Computation with spikes in a winner-take-all network.", "abstract": "The winner-take-all (WTA) computation in networks of recurrently connected neurons is an important decision element of many models of cortical processing. However, analytical studies of the WTA performance in recurrent networks have generally addressed rate-based models. Very few have addressed networks of spiking neurons, which are relevant for understanding the biological networks themselves and also for the development of neuromorphic electronic neurons that commmunicate by action potential like address-events. Here, we make steps in that direction by using a simplified Markov model of the spiking network to examine analytically the ability of a spike-based WTA network to discriminate the statistics of inputs ranging from stationary regular to nonstationary Poisson events. Our work extends previous theoretical results showing that a WTA recurrent network receiving regular spike inputs can select the correct winner within one interspike interval. We show first for the case of spike rate inputs that input discrimination and the effects of self-excitation and inhibition on this discrimination are consistent with results obtained from the standard rate-based WTA models. We also extend this discrimination analysis of spiking WTAs to nonstationary inputs with time-varying spike rates resembling statistics of real-world sensory stimuli. We conclude that spiking WTAs are consistent with their continuous counterparts for steady-state inputs, but they also exhibit high discrimination performance with nonstationary inputs." }, { "pmid": "18439138", "title": "Sparse coding via thresholding and local competition in neural circuits.", "abstract": "While evidence indicates that neural systems may be employing sparse approximations to represent sensed stimuli, the mechanisms underlying this ability are not understood. We describe a locally competitive algorithm (LCA) that solves a collection of sparse coding principles minimizing a weighted combination of mean-squared error and a coefficient cost function. LCAs are designed to be implemented in a dynamical system composed of many neuron-like elements operating in parallel. These algorithms use thresholding functions to induce local (usually one-way) inhibitory competitions between nodes to produce sparse representations. LCAs produce coefficients with sparsity levels comparable to the most popular centralized sparse coding algorithms while being readily suited for neural implementation. Additionally, LCA coefficients for video sequences demonstrate inertial properties that are both qualitatively and quantitatively more regular (i.e., smoother and more predictable) than the coefficients produced by greedy algorithms." }, { "pmid": "27683554", "title": "Toward an Integration of Deep Learning and Neuroscience.", "abstract": "Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. 
First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses." } ]
PLoS Computational Biology
27835647
PMC5105998
10.1371/journal.pcbi.1005113
Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons
Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1–7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
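To make the contrast described in the abstract concrete, the standard spectrotemporal receptive field corresponds to a single linear filter followed by an output nonlinearity, whereas the network receptive field is a small feedforward network of such filter-plus-sigmoid units. The schematic below uses our own illustrative notation (spectrogram s, filters k_j, sigmoidal nonlinearities \sigma, J hidden units); the exact parameterization used by the authors may differ.

LN/STRF model:
\[ \hat{r}(t) = F\!\left(\sum_{f,\tau} k(f,\tau)\, s(f,\, t-\tau)\right) \]

Network receptive field (schematic):
\[ \hat{r}(t) = \sigma\!\left( c + \sum_{j=1}^{J} w_j\, \sigma\!\left( b_j + \sum_{f,\tau} k_j(f,\tau)\, s(f,\, t-\tau) \right) \right), \qquad 1 \le J \le 7, \]

where s(f, t) is the sound spectrogram, each k_j is a spectrotemporal sub-receptive field, and \sigma denotes a sigmoidal (monotonically increasing, saturating) nonlinearity.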
Related work

A number of methods have been used previously to examine the spectrotemporal sensitivity of auditory cortical neurons. Previous studies have attempted to extend the application of the LN model to auditory cortical data, mostly using maximum-likelihood methods. Indeed, several studies have used approaches that have fundamental similarities to the one we explore here, in that they combine or cascade several linear filters in a nonlinear manner. One such body of work that improved predictions over the LN model is based on finding the maximally-informative dimensions (MID) [20,21,34,43–46] that drove the response of auditory cortical neurons. This method usually involves finding one or two maximally informative linear features that interact through a flexible 1D or 2D nonlinearity, and is equivalent to fitting a form of LN model under assumptions of a Poisson model of spiking variability [46–48]. When this method was applied to neurons in primary auditory cortex, it was found that the neurons’ response properties are typically better described using two features rather than one [20,34], in contrast to midbrain neurons, which are well fitted using a single feature [43]. That result thus seems consistent with ours, in that we found that NRFs fitted to cortical responses most commonly evolved to have two effective HUs (or input features). Another approach, which has been found to improve predictions of auditory cortical responses, is to apply a multi-linear model over the dimensions of frequency, sound level, and time lag, and for the extended multi-linear model also over dimensions involved in multiplicative contextual effects [21]. However, the above studies in auditory cortex [20,34,43] did not use natural stimuli, and hence might not have been in the right stimulus space to observe some complexities, as STRFs measured with natural stimuli can be quite different from those measured with artificial stimuli [49]. An advantage of the NRF model is that its architecture is entirely that of traditional feedforward models of sensory pathways, in which activations of lower level features simply converge onto model neurons with sigmoidal input-firing rate functions. NRFs can therefore be interpreted in a context that is perhaps simpler and more familiar than that of, for example, maximally informative dimension models [20,44].

Other developments on the standard LN model have included model components that can be interpreted as intraneuronal rather than network properties, such as including a post-spike filter [22] or synaptic depression [23], and have also been shown to improve predictions. Pillow and colleagues [50,51] applied a generalized linear model (GLM) to the problem of receptive field modelling. Their approach is similar to the basic LN model in that it involves a linear function of stimulus history combined with an output nonlinearity. However, unlike in LN models, the response of their GLM also depends on the spike history (using a post-spike filter). This post-spike filter may reflect intrinsic refractory characteristics of neurons, but could also represent network filter effects. A GLM has been applied to avian forebrain neurons [22], where it has been shown to significantly improve predictions of neural responses over a linear model, but not over an LN model.

Although they haven’t yet been applied to auditory cortical responses, it is worth mentioning two extensions to GLMs.
First, GLMs can be extended so that model responses depend on the history of many recorded neurons [50], representing interconnections between recorded neurons. While this approach is thus also aimed at modeling network properties, it is quite different from our NRF model, where we infer the characteristics of hidden units. Second, the extension of the GLM approach investigated by Park and colleagues [52] included sensitivity to more than one stimulus feature. Thus, like our NRF or the multi-feature MID approach, this “generalized quadratic model” (GQM) has an input stage comprising several filters which are nonlinearly combined, in this case using a quadratic function. One might argue that our choice for the HUs of a sigmoidal nonlinearity following a linear filter stage, and the same form for the OU, is perhaps more similar to what occurs in the brain, where dendritic currents might be thought of as combining linearly according to Kirchhoff’s laws as they converge on neurons that often have sigmoidal current-firing rate functions. However, we do not wish to overstate either the physiological realism of our model (which is very rudimentary compared to the known complexity of real neurons) or the conceptual difference with GQMs or multi-feature MIDs. A summation of sigmoidal unit outputs may perhaps be better motivated physiologically than a quadratic function, but given the diversity of nonlinearity in the brain this is a debatable point.

Another extension to GLMs, a generalized nonlinear model (GNM), does, however, employ input units with monotonically increasing nonlinearities, and unlike multi-neuron GLMs or GQMs, GNMs have been applied to auditory neurons by Schinkel-Bielefeld and colleagues [24]. Their GNM comprises a very simple feedforward network based on the weighted sum of an excitatory and an inhibitory unit, along with a post-spike filter. The architecture of that model is thus not dissimilar from our NRFs, except that the number of HUs is fixed at two, and their inhibitory and excitatory influences are fixed in advance. It has been applied to mammalian (ferret) cortical neural responses, uncovering non-monotonic sound intensity tuning and onset/offset selectivity.

For neurons in the avian auditory forebrain, although not for mammalian auditory cortex, GNMs have also been extended by McFarland and colleagues to include the sum of more than two input units with monotonically increasing nonlinearities [53]. Of the previously described models, this cascaded LN-LN ‘Nonlinear Input Model (NIM)’ bears perhaps the greatest similarity to our NRF model. Just like our NRF, it comprises a collection of nonlinear units feeding into a nonlinear unit. The main differences between their model and ours thus pertain not to model architecture, but to the methods of fitting the models and the extent to which the models have been characterized. The NIM has been applied to a single zebra finch auditory forebrain neuron, separating out its excitatory and inhibitory receptive fields in a manner similar to what we observe in the bi-feature neurons described above.

One advantage of the NRF over the NIM is that the fitting algorithm automatically determines the number of features that parsimoniously explain each neuron's response, obviating the need to laboriously compare the cross-validated model performance for each possible number of hidden units.
Another difference is that the NRF is simpler while still maintaining the capacity to capture complex nonlinear network properties of neural responses; for example, the NIM [53] had potentially large numbers of hyperparameters (four for each hidden unit or “feature”) that were manually tuned, something that would be very difficult to do if the model needed to be fitted to datasets comprising large numbers of neurons. In contrast, the NRF has only one hyperparameter for the entire network, which can easily be tuned in an automated parameter search with cross-validation. Consequently, we have been able to use the NRF to characterize a sizeable population of recorded neurons, but so far no systematic examination of the capacity of the NIM to explain the responses of many neurons has been performed.

Another recent avian forebrain study [54] used a maximum noise entropy (MNE) approach to uncover multiple receptive fields sensitive to second-order aspects of the stimulus. Unlike the above two GNM [24,53] approaches, this model does not have hidden units with sigmoidal nonlinearities, but finds multiple quadratic features. The MNE predicted neural responses better than a linear model, although still poorly, with an average CCraw of 0.24, and it was not determined whether it could out-predict an LN model. Note, however, that the CCraw values reported in that study do not distinguish stimulus-driven response variability from neural “noise”. Consequently, it is unclear whether the relatively modest CCraw values reported there might reflect shortcomings of the model or whether they are a consequence of differences in the species, brain regions and stimuli under study. Finally, perhaps the most relevant study in the avian forebrain used a time-delay feedforward neural network to predict responses of zebra finch nucleus ovoidalis neurons to birdsong [55]. These authors reported that the network predicted neural responses better than a linear model, but performed no quantitative comparisons to support this.

Advances on the LN model have also been made in other brain regions, notably primary visual cortex, and of particular relevance are the few cases where neural networks have been used to predict neural responses. Visual cortical responses to certain artificial stimuli (randomly varying bar patterns and related stimuli) have been fitted using a single hidden layer neural network, resulting in improvements in prediction over linear models for complex but not simple cells in one study [56] and over LN-like models in another study [57]. However, the challenge we tackle here is to predict the responses to natural stimuli. In this respect we are aware of only one similar study by Prenger and colleagues [58], which used a single hidden layer neural network to predict responses to series of still images of natural scenes. The network model in this study gave better predictions than an LN model with a simple rectifying nonlinearity. However, the improvements had limited consistency, predicting significantly better in only 16/34 neurons, and it did worse than an LN model applied to the power spectra of the images. Additionally, the CCraw of the model predictions with the neural data was somewhat small (0.24). This appears to contrast with the seemingly better performance we obtained with our NRF model.

These apparent differences in model performance may, however, not all be attributable to differences in model design or fitting.
In addition to the fact, noted above, that low CCraw values might be diagnostic of very noisy neurons rather than shortcomings of the model, we also need to be cognizant of the differences in the types of data that are being modeled: we applied our model to responses of auditory cortical neurons to natural sound recordings, whereas Prenger and colleagues [58] applied theirs to visual cortical neuron responses to random sequences of photographs of natural scenes. Furthermore, the neural responses to our stimuli were averaged over several repeats, whereas the above study did not use repeated stimuli, which may limit how predictable their neural responses are. However, there are also notable structural differences between their model and ours. For example, the activation function on the OU in the Prenger et al. study [58] was linear (as with [56] but not [57]), whereas the OU of our NRF has a nonlinear activation function, which enables our NRF to model observed neuronal thresholds explicitly. Furthermore, we used a notably powerful optimization algorithm, the sum-of-function optimizer [26], which has been shown to find substantially lower values of neural network cost functions than the forms of gradient descent used in the above neural network studies. Finally, the L1-norm regularization that we used has the advantage of finding a parsimonious network quickly and simply, as compared with the more laborious and often more complex methods of the above three studies: L2-norm-based regularization methods and hidden unit pruning [58], early stopping and post-fit pruning [56], or no regularization and a comparison of different numbers of hidden units [57].
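The fitting strategy sketched in the preceding paragraphs (a single sparsity hyperparameter, an L1 penalty that prunes unneeded hidden units, and cross-validated selection of that hyperparameter) can be illustrated with a minimal, self-contained sketch. Everything below is illustrative rather than taken from the paper: the function names (fit_nrf, cv_select_lambda), the mean-squared-error cost, the logistic sigmoids, and the use of plain gradient descent in place of the sum-of-function optimizer are all our assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nrf_predict(params, X):
    """Single-hidden-layer 'network receptive field' sketch: sigmoidal hidden
    units (each a linear spectrotemporal filter plus bias) feed a sigmoidal
    output unit. X holds one time-lagged spectrogram snippet per row."""
    W, b, w_out, c = params
    H = sigmoid(X @ W + b)            # hidden-unit activations, shape (T, J)
    return sigmoid(H @ w_out + c)     # predicted firing rate, shape (T,)

def fit_nrf(X, r, n_hidden=8, lam=1e-3, lr=0.05, n_iter=3000, seed=0):
    """Gradient descent on mean-squared error plus an L1 penalty on the input
    filters; the L1 term drives unneeded hidden units toward zero weights, so
    the effective number of sub-receptive fields is selected automatically."""
    rng = np.random.default_rng(seed)
    T, D = X.shape
    W = 0.01 * rng.standard_normal((D, n_hidden))
    b = np.zeros(n_hidden)
    w_out = 0.01 * rng.standard_normal(n_hidden)
    c = 0.0
    for _ in range(n_iter):
        H = sigmoid(X @ W + b)
        pred = sigmoid(H @ w_out + c)
        err = pred - r
        d_out = err * pred * (1.0 - pred)            # through output sigmoid
        g_wout = H.T @ d_out / T
        g_c = d_out.mean()
        d_hid = np.outer(d_out, w_out) * H * (1.0 - H)
        g_W = X.T @ d_hid / T + lam * np.sign(W)     # L1 subgradient on filters
        g_b = d_hid.mean(axis=0)
        W -= lr * g_W; b -= lr * g_b; w_out -= lr * g_wout; c -= lr * g_c
    return (W, b, w_out, c)

def cv_select_lambda(X, r, lambdas, n_folds=5):
    """Choose the single sparsity hyperparameter by cross-validated correlation
    between predicted and held-out responses."""
    folds = np.array_split(np.arange(len(r)), n_folds)
    scores = []
    for lam in lambdas:
        cc = []
        for k in range(n_folds):
            test = folds[k]
            train = np.setdiff1d(np.arange(len(r)), test)
            params = fit_nrf(X[train], r[train], lam=lam)
            pred = nrf_predict(params, X[test])
            cc.append(np.corrcoef(pred, r[test])[0, 1])
        scores.append(np.nanmean(cc))
    return lambdas[int(np.argmax(scores))], scores

if __name__ == "__main__":
    # Toy data only: X is random "spectrogram history", r a simulated rate
    # generated by a single underlying filter, so the L1 penalty should prune
    # most of the 8 candidate hidden units.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((2000, 30))
    true_filter = np.zeros(30)
    true_filter[5:10] = 1.0
    r = sigmoid(X @ true_filter - 2.0)
    best_lam, _ = cv_select_lambda(X, r, lambdas=[1e-4, 1e-3, 1e-2])
    W, b, w_out, c = fit_nrf(X, r, lam=best_lam)
    active = np.sum(np.abs(W).max(axis=0) > 1e-2)
    print("selected lambda:", best_lam, "| active hidden units:", active)
```

After fitting, hidden units whose input filters have been driven to (near) zero by the L1 penalty can be discarded, which mirrors the way the procedure described above arrives at a small number of effective sub-receptive fields per neuron without a manual search over network sizes.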
[ "3973762", "6976799", "18184787", "19295144", "9603734", "12019330", "14583754", "16633939", "18854580", "10946994", "12815016", "11784767", "3479811", "11700557", "10704507", "14762127", "15969914", "21689603", "18579084", "18287509", "21264310", "24305812", "22457454", "15748852", "19675288", "22807665", "19918079", "23966693", "12427863", "9334433", "16226582", "16286934", "24265596", "22895711", "9356395", "22323634", "15006095", "21704508", "23841838", "19568981", "25831448", "23209771", "18650810", "23874185", "26787894", "12060706", "1527596", "15288891", "26903851", "15703254", "20371828", "15152018", "20808728", "26683490" ]
[ { "pmid": "3973762", "title": "Spatiotemporal energy models for the perception of motion.", "abstract": "A motion sequence may be represented as a single pattern in x-y-t space; a velocity of motion corresponds to a three-dimensional orientation in this space. Motion sinformation can be extracted by a system that responds to the oriented spatiotemporal energy. We discuss a class of models for human motion mechanisms in which the first stage consists of linear filters that are oriented in space-time and tuned in spatial frequency. The outputs of quadrature pairs of such filters are squared and summed to give a measure of motion energy. These responses are then fed into an opponent stage. Energy models can be built from elements that are consistent with known physiology and psychophysics, and they permit a qualitative understanding of a variety of motion phenomena." }, { "pmid": "6976799", "title": "A comparison of the spectro-temporal sensitivity of auditory neurons to tonal and natural stimuli.", "abstract": "The spectro-temporal sensitivity of auditory neurons has been investigated experimentally by averaging the spectrograms of stimuli preceding the occurrence of action potentials or neural events ( the APES : Aertsen et al., 1980, 1981). The properties of the stimulus ensemble are contained in this measure of neural selectivity. The spectro-temporal receptive field (STRF) has been proposed as a theoretical concept which should give a stimulus-invariant representation of the second order characteristics of the neuron's system function (Aertsen and Johannesma, 1981). The present paper investigates the relation between the experimental and the theoretical description of the neuron's spectro-temporal sensitivity for sound. The aim is to derive a formally based stimulus-normalization procedure for the results of the experimental averaging procedure. Under particular assumptions, regarding both the neuron and the stimulus ensemble, an integral equation connecting the APES and the STRF is derived. This integral expression enables to calculate the APES from the STRF by taking into account the stimulus spectral composition and the characteristics of the spectrogram analysis. The inverse relation, i.e. starting from the experimental results and by application of a formal normalization procedure arriving at the theoretical STRF, is effectively hindered by the nature of the spectrogram analysis. An approximative \"normalization\" procedure, based on intuitive manipulation of the integral equation, has been applied to a number of single unit recordings from the grassfrog's auditory midbrain area to tonal and natural stimulus ensembles. The results indicate tha spectrogram analysis, while being a useful real-time tool in investigating the spectro-temporal transfer properties of auditory neurons, shows fundamental shortcomings for a theoretical treatment of the questions of interest." }, { "pmid": "18184787", "title": "The consequences of response nonlinearities for interpretation of spectrotemporal receptive fields.", "abstract": "Neurons in the central auditory system are often described by the spectrotemporal receptive field (STRF), conventionally defined as the best linear fit between the spectrogram of a sound and the spike rate it evokes. An STRF is often assumed to provide an estimate of the receptive field of a neuron, i.e., the spectral and temporal range of stimuli that affect the response. 
However, when the true stimulus-response function is nonlinear, the STRF will be stimulus dependent, and changes in the stimulus properties can alter estimates of the sign and spectrotemporal extent of receptive field components. We demonstrate analytically and in simulations that, even when uncorrelated stimuli are used, interactions between simple neuronal nonlinearities and higher-order structure in the stimulus can produce STRFs that show contributions from time-frequency combinations to which the neuron is actually insensitive. Only when spectrotemporally independent stimuli are used does the STRF reliably indicate features of the underlying receptive field, and even then it provides only a conservative estimate. One consequence of these observations, illustrated using natural stimuli, is that a stimulus-induced change in an STRF could arise from a consistent but nonlinear neuronal response to stimulus ensembles with differing higher-order dependencies. Thus, although the responses of higher auditory neurons may well involve adaptation to the statistics of different stimulus ensembles, stimulus dependence of STRFs alone, or indeed of any overly constrained stimulus-response mapping, cannot demonstrate the nature or magnitude of such effects." }, { "pmid": "19295144", "title": "Rapid synaptic depression explains nonlinear modulation of spectro-temporal tuning in primary auditory cortex by natural stimuli.", "abstract": "In this study, we explored ways to account more accurately for responses of neurons in primary auditory cortex (A1) to natural sounds. The auditory cortex has evolved to extract behaviorally relevant information from complex natural sounds, but most of our understanding of its function is derived from experiments using simple synthetic stimuli. Previous neurophysiological studies have found that existing models, such as the linear spectro-temporal receptive field (STRF), fail to capture the entire functional relationship between natural stimuli and neural responses. To study this problem, we compared STRFs for A1 neurons estimated using a natural stimulus, continuous speech, with STRFs estimated using synthetic ripple noise. For about one-third of the neurons, we found significant differences between STRFs, usually in the temporal dynamics of inhibition and/or overall gain. This shift in tuning resulted primarily from differences in the coarse temporal structure of the speech and noise stimuli. Using simulations, we found that the stimulus dependence of spectro-temporal tuning can be explained by a model in which synaptic inputs to A1 neurons are susceptible to rapid nonlinear depression. This dynamic reshaping of spectro-temporal tuning suggests that synaptic depression may enable efficient encoding of natural auditory stimuli." }, { "pmid": "9603734", "title": "Optimizing sound features for cortical neurons.", "abstract": "The brain's cerebral cortex decomposes visual images into information about oriented edges, direction and velocity information, and color. How does the cortex decompose perceived sounds? A reverse correlation technique demonstrates that neurons in the primary auditory cortex of the awake primate have complex patterns of sound-feature selectivity that indicate sensitivity to stimulus edges in frequency or in time, stimulus transitions in frequency or intensity, and feature conjunctions. This allows the creation of classes of stimuli matched to the processing characteristics of auditory cortical neurons. 
Stimuli designed for a particular neuron's preferred feature pattern can drive that neuron with higher sustained firing rates than have typically been recorded with simple stimuli. These data suggest that the cortex decomposes an auditory scene into component parts using a feature-processing system reminiscent of that used for the cortical decomposition of visual images." }, { "pmid": "12019330", "title": "Nonlinear spectrotemporal sound analysis by neurons in the auditory midbrain.", "abstract": "The auditory system of humans and animals must process information from sounds that dynamically vary along multiple stimulus dimensions, including time, frequency, and intensity. Therefore, to understand neuronal mechanisms underlying acoustic processing in the central auditory pathway, it is essential to characterize how spectral and temporal acoustic dimensions are jointly processed by the brain. We use acoustic signals with a structurally rich time-varying spectrum to study linear and nonlinear spectrotemporal interactions in the central nucleus of the inferior colliculus (ICC). Our stimuli, the dynamic moving ripple (DMR) and ripple noise (RN), allow us to systematically characterize response attributes with the spectrotemporal receptive field (STRF) methods to a rich and dynamic stimulus ensemble. Theoretically, we expect that STRFs derived with DMR and RN would be identical for a linear integrating neuron, and we find that approximately 60% of ICC neurons meet this basic requirement. We find that the remaining neurons are distinctly nonlinear; these could either respond selectively to DMR or produce no STRFs despite selective activation to spectrotemporal acoustic attributes. Our findings delineate rules for spectrotemporal integration in the ICC that cannot be accounted for by conventional linear-energy integration models." }, { "pmid": "14583754", "title": "Rapid task-related plasticity of spectrotemporal receptive fields in primary auditory cortex.", "abstract": "We investigated the hypothesis that task performance can rapidly and adaptively reshape cortical receptive field properties in accord with specific task demands and salient sensory cues. We recorded neuronal responses in the primary auditory cortex of behaving ferrets that were trained to detect a target tone of any frequency. Cortical plasticity was quantified by measuring focal changes in each cell's spectrotemporal response field (STRF) in a series of passive and active behavioral conditions. STRF measurements were made simultaneously with task performance, providing multiple snapshots of the dynamic STRF during ongoing behavior. Attending to a specific target frequency during the detection task consistently induced localized facilitative changes in STRF shape, which were swift in onset. Such modulatory changes may enhance overall cortical responsiveness to the target tone and increase the likelihood of 'capturing' the attended target during the detection task. Some receptive field changes persisted for hours after the task was over and hence may contribute to long-term sensory memory." }, { "pmid": "16633939", "title": "Sound representation methods for spectro-temporal receptive field estimation.", "abstract": "The spectro-temporal receptive field (STRF) of an auditory neuron describes the linear relationship between the sound stimulus in a time-frequency representation and the neural response. 
Time-frequency representations of a sound in turn require a nonlinear operation on the sound pressure waveform and many different forms for this non-linear transformation are possible. Here, we systematically investigated the effects of four factors in the non-linear step in the STRF model: the choice of logarithmic or linear filter frequency spacing, the time-frequency scale, stimulus amplitude compression and adaptive gain control. We quantified the goodness of fit of these different STRF models on data obtained from auditory neurons in the songbird midbrain and forebrain. We found that adaptive gain control and the correct stimulus amplitude compression scheme are paramount to correctly modelling neurons. The time-frequency scale and frequency spacing also affected the goodness of fit of the model but to a lesser extent and the optimal values were stimulus dependent." }, { "pmid": "18854580", "title": "Spectrotemporal receptive fields in anesthetized cat primary auditory cortex are context dependent.", "abstract": "In order to investigate how the auditory scene is analyzed and perceived, auditory spectrotemporal receptive fields (STRFs) are generally used as a convenient way to describe how frequency and temporal sound information is encoded. However, using broadband sounds to estimate STRFs imperfectly reflects the way neurons process complex stimuli like conspecific vocalizations insofar as natural sounds often show limited bandwidth. Using recordings in the primary auditory cortex of anesthetized cats, we show that presentation of narrowband stimuli not including the best frequency of neurons provokes the appearance of residual peaks and increased firing rate at some specific spectral edges of stimuli compared with classical STRFs obtained from broadband stimuli. This result is the same for STRFs obtained from both spikes and local field potentials. Potential mechanisms likely involve release from inhibition. We thus emphasize some aspects of context dependency of STRFs, that is, how the balance of inhibitory and excitatory inputs is able to shape the neural response from the spectral content of stimuli." }, { "pmid": "10946994", "title": "Robust spectrotemporal reverse correlation for the auditory system: optimizing stimulus design.", "abstract": "The spectrotemporal receptive field (STRF) is a functional descriptor of the linear processing of time-varying acoustic spectra by the auditory system. By cross-correlating sustained neuronal activity with the dynamic spectrum of a spectrotemporally rich stimulus ensemble, one obtains an estimate of the STRF. In this article, the relationship between the spectrotemporal structure of any given stimulus and the quality of the STRF estimate is explored and exploited. Invoking the Fourier theorem, arbitrary dynamic spectra are described as sums of basic sinusoidal components--that is, moving ripples. Accurate estimation is found to be especially reliant on the prominence of components whose spectral and temporal characteristics are of relevance to the auditory locus under study and is sensitive to the phase relationships between components with identical temporal signatures. These and other observations have guided the development and use of stimuli with deterministic dynamic spectra composed of the superposition of many temporally orthogonal moving ripples having a restricted, relevant range of spectral scales and temporal rates. 
The method, termed sum-of-ripples, is similar in spirit to the white-noise approach but enjoys the same practical advantages--which equate to faster and more accurate estimation--attributable to the time-domain sum-of-sinusoids method previously employed in vision research. Application of the method is exemplified with both modeled data and experimental data from ferret primary auditory cortex (AI)." }, { "pmid": "12815016", "title": "Spectrotemporal structure of receptive fields in areas AI and AAF of mouse auditory cortex.", "abstract": "The mouse is a promising model system for auditory cortex research because of the powerful genetic tools available for manipulating its neural circuitry. Previous studies have identified two tonotopic auditory areas in the mouse-primary auditory cortex (AI) and anterior auditory field (AAF)- but auditory receptive fields in these areas have not yet been described. To establish a foundation for investigating auditory cortical circuitry and plasticity in the mouse, we characterized receptive-field structure in AI and AAF of anesthetized mice using spectrally complex and temporally dynamic stimuli as well as simple tonal stimuli. Spectrotemporal receptive fields (STRFs) were derived from extracellularly recorded responses to complex stimuli, and frequency-intensity tuning curves were constructed from responses to simple tonal stimuli. Both analyses revealed temporal differences between AI and AAF responses: peak latencies and receptive-field durations for STRFs and first-spike latencies for responses to tone bursts were significantly longer in AI than in AAF. Spectral properties of AI and AAF receptive fields were more similar, although STRF bandwidths were slightly broader in AI than in AAF. Finally, in both AI and AAF, a substantial minority of STRFs were spectrotemporally inseparable. The spectrotemporal interaction typically appeared in the form of clearly disjoint excitatory and inhibitory subfields or an obvious spectrotemporal slant in the STRF. These data provide the first detailed description of auditory receptive fields in the mouse and suggest that although neurons in areas AI and AAF share many response characteristics, area AAF may be specialized for faster temporal processing." }, { "pmid": "11784767", "title": "Spectrotemporal receptive fields in the lemniscal auditory thalamus and cortex.", "abstract": "Receptive fields have been characterized independently in the lemniscal auditory thalamus and cortex, usually with spectrotemporally simple sounds tailored to a specific task. No studies have employed naturalistic stimuli to investigate the thalamocortical transformation in temporal, spectral, and aural domains simultaneously and under identical conditions. We recorded simultaneously in the ventral division of the medial geniculate body (MGBv) and in primary auditory cortex (AI) of the ketamine-anesthetized cat. Spectrotemporal receptive fields (STRFs) of single units (n = 387) were derived by reverse-correlation with a broadband and dynamically varying stimulus, the dynamic ripple. Spectral integration, as measured by excitatory bandwidth and spectral modulation preference, was similar across both stations (mean Q(1/e) thalamus = 5.8, cortex = 5.4; upper cutoff of spectral modulation transfer function, thalamus = 1.30 cycles/octave, cortex = 1.37 cycles/octave). 
Temporal modulation rates slowed by a factor of two from thalamus to cortex (mean preferred rate, thalamus = 32.4 Hz, cortex = 16.6 Hz; upper cutoff of temporal modulation transfer function, thalamus = 62.9 Hz, cortex = 37.4 Hz). We found no correlation between spectral and temporal integration properties, suggesting that the excitatory-inhibitory interactions underlying preference in each domain are largely independent. A small number of neurons in each station had highly asymmetric STRFs, evidence of frequency sweep selectivity, but the population showed no directional bias. Binaural preferences differed in their relative proportions, most notably an increased prevalence of excitatory contralateral-only cells in cortex (40%) versus thalamus (23%), indicating a reorganization of this parameter. By comparing simultaneously along multiple stimulus dimensions in both stations, these observations establish the global characteristics of the thalamocortical receptive field transformation." }, { "pmid": "3479811", "title": "Linear mechanisms of directional selectivity in simple cells of cat striate cortex.", "abstract": "The role of linear spatial summation in the directional selectivity of simple cells in cat striate cortex was investigated. The experimental paradigm consisted of comparing the response to drifting grating stimuli with linear predictions based on the response to stationary contrast-reversing gratings. The spatial phase dependence of the response to contrast-reversing gratings was consistent with a high degree of linearity of spatial summation within the receptive fields. Furthermore, the preferred direction predicted from the response to stationary gratings generally agreed with the measurements made with drifting gratings. The amount of directional selectivity predicted was, on average, about half the measured value, indicating that nonlinear mechanisms act in concert with linear mechanisms in determining the overall directional selectivity." }, { "pmid": "11700557", "title": "Linear processing of spatial cues in primary auditory cortex.", "abstract": "To determine the direction of a sound source in space, animals must process a variety of auditory spatial cues, including interaural level and time differences, as well as changes in the sound spectrum caused by the direction-dependent filtering of sound by the outer ear. Behavioural deficits observed when primary auditory cortex (A1) is damaged have led to the widespread view that A1 may have an essential role in this complex computational task. Here we show, however, that the spatial selectivity exhibited by the large majority of A1 neurons is well predicted by a simple linear model, which assumes that neurons additively integrate sound levels in each frequency band and ear. The success of this linear model is surprising, given that computing sound source direction is a necessarily nonlinear operation. However, because linear operations preserve information, our results are consistent with the hypothesis that A1 may also form a gateway to higher, more specialized cortical areas." }, { "pmid": "10704507", "title": "Spectral-temporal receptive fields of nonlinear auditory neurons obtained using natural sounds.", "abstract": "The stimulus-response function of many visual and auditory neurons has been described by a spatial-temporal receptive field (STRF), a linear model that for mathematical reasons has until recently been estimated with the reverse correlation method, using simple stimulus ensembles such as white noise. 
Such stimuli, however, often do not effectively activate high-level sensory neurons, which may be optimized to analyze natural sounds and images. We show that it is possible to overcome the simple-stimulus limitation and then use this approach to calculate the STRFs of avian auditory forebrain neurons from an ensemble of birdsongs. We find that in many cases the STRFs derived using natural sounds are strikingly different from the STRFs that we obtained using an ensemble of random tone pips. When we compare these two models by assessing their predictions of neural response to the actual data, we find that the STRFs obtained from natural sounds are superior. Our results show that the STRF model is an incomplete description of response properties of nonlinear auditory neurons, but that linear receptive fields are still useful models for understanding higher level sensory processing, as long as the STRFs are estimated from the responses to relevant complex stimuli." }, { "pmid": "14762127", "title": "Linearity of cortical receptive fields measured with natural sounds.", "abstract": "How do cortical neurons represent the acoustic environment? This question is often addressed by probing with simple stimuli such as clicks or tone pips. Such stimuli have the advantage of yielding easily interpreted answers, but have the disadvantage that they may fail to uncover complex or higher-order neuronal response properties. Here, we adopt an alternative approach, probing neuronal responses with complex acoustic stimuli, including animal vocalizations. We used in vivo whole-cell methods in the rat auditory cortex to record subthreshold membrane potential fluctuations elicited by these stimuli. Most neurons responded robustly and reliably to the complex stimuli in our ensemble. Using regularization techniques, we estimated the linear component, the spectrotemporal receptive field (STRF), of the transformation from the sound (as represented by its time-varying spectrogram) to the membrane potential of the neuron. We find that the STRF has a rich dynamical structure, including excitatory regions positioned in general accord with the prediction of the classical tuning curve. However, whereas the STRF successfully predicts the responses to some of the natural stimuli, it surprisingly fails completely to predict the responses to others; on average, only 11% of the response power could be predicted by the STRF. Therefore, most of the response of the neuron cannot be predicted by the linear component, although the response is deterministically related to the stimulus. Analysis of the systematic errors of the STRF model shows that this failure cannot be attributed to simple nonlinearities such as adaptation to mean intensity, rectification, or saturation. Rather, the highly nonlinear response properties of auditory cortical neurons must be attributable to nonlinear interactions between sound frequencies and time-varying properties of the neural encoder." }, { "pmid": "15969914", "title": "How close are we to understanding v1?", "abstract": "A wide variety of papers have reviewed what is known about the function of primary visual cortex. In this review, rather than stating what is known, we attempt to estimate how much is still unknown about V1 function. In particular, we identify five problems with the current view of V1 that stem largely from experimental and theoretical biases, in addition to the contributions of nonlinearities in the cortex that are not well understood. 
Our purpose is to open the door to new theories, a number of which we describe, along with some proposals for testing them." }, { "pmid": "21689603", "title": "Contrast gain control in auditory cortex.", "abstract": "The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds." }, { "pmid": "18579084", "title": "Cooperative nonlinearities in auditory cortical neurons.", "abstract": "Cortical receptive fields represent the signal preferences of sensory neurons. Receptive fields are thought to provide a representation of sensory experience from which the cerebral cortex may make interpretations. While it is essential to determine a neuron's receptive field, it remains unclear which features of the acoustic environment are specifically represented by neurons in the primary auditory cortex (AI). We characterized cat AI spectrotemporal receptive fields (STRFs) by finding both the spike-triggered average (STA) and stimulus dimensions that maximized the mutual information between response and stimulus. We derived a nonlinearity relating spiking to stimulus projection onto two maximally informative dimensions (MIDs). The STA was highly correlated with the first MID. Generally, the nonlinearity for the first MID was asymmetric and often monotonic in shape, while the second MID nonlinearity was symmetric and nonmonotonic. The joint nonlinearity for both MIDs revealed that most first and second MIDs were synergistic and thus should be considered conjointly. The difference between the nonlinearities suggests different possible roles for the MIDs in auditory processing." }, { "pmid": "18287509", "title": "Nonlinearities and contextual influences in auditory cortical responses modeled with multilinear spectrotemporal methods.", "abstract": "The relationship between a sound and its neural representation in the auditory cortex remains elusive. Simple measures such as the frequency response area or frequency tuning curve provide little insight into the function of the auditory cortex in complex sound environments. Spectrotemporal receptive field (STRF) models, despite their descriptive potential, perform poorly when used to predict auditory cortical responses, showing that nonlinear features of cortical response functions, which are not captured by STRFs, are functionally important. We introduce a new approach to the description of auditory cortical responses, using multilinear modeling methods. 
Frontiers in Psychology
27899905
PMC5110545
10.3389/fpsyg.2016.01793
Effects of Individual Differences in Working Memory on Plan Presentational Choices
This paper addresses research questions that are central to the area of visualization interfaces for decision support: (RQ1) whether individual user differences in working memory should be considered when choosing how to present visualizations; (RQ2) how to present the visualization to support effective decision making and processing; and (RQ3) how to evaluate the effectiveness of presentational choices. These questions are addressed in the context of presenting plans, or sequences of actions, to users. The experiments are conducted in several domains, and the findings are relevant to applications such as semi-autonomous systems in logistics. That is, scenarios that require the attention of humans who are likely to be interrupted, and require good performance but are not time critical. Following a literature review of different types of individual differences in users that have been found to affect the effectiveness of presentational choices, we consider specifically the influence of individuals' working memory (RQ1). The review also considers metrics used to evaluate presentational choices, and types of presentational choices considered. As for presentational choices (RQ2), we consider a number of variants including interactivity, aggregation, layout, and emphasis. Finally, to evaluate the effectiveness of plan presentational choices (RQ3) we adopt a layered-evaluation approach and measure performance in a dual task paradigm, involving both task interleaving and evaluation of situational awareness. This novel methodology for evaluating visualizations is employed in a series of experiments investigating presentational choices for a plan. A key finding is that emphasizing steps (by highlighting borders) can improve effectiveness on a primary task, but only when controlling for individual variation in working memory.
2. Related work
This section discusses related work addressing how visual presentational choices have been applied and evaluated in the past.
2.1. RQ1: whether individual user differences in working memory should be considered
Anecdotal evidence about individual differences has motivated research on presenting the same information in different visualization views (Wang Baldonado et al., 2000). While our work looks specifically at working memory, we contextualize our choice with findings of measurable variation between individuals based on a number of factors such as cognitive abilities (Velez et al., 2005; Toker et al., 2012) (including working memory), personality (Ziemkiewicz et al., 2011), and degree of expertise or knowledge (Lewandowsky and Spence, 1989; Kobsa, 2001). In addition, gender and culture may be factors to consider for visual presentational choices, given known gender differences in processing spatial information and cultural differences in the spatial density of information (Hubona, 2004; Velez et al., 2005; Fraternali and Tisi, 2008).
2.1.1. Personality
Studies have evaluated whether personality traits affect individuals' abilities to interpret visualizations. In trait theory, a trait is defined as "an enduring personal characteristic that reveals itself in a particular pattern of behavior in different situations" (Carlson et al., 2004, p. 583). One study found an interaction of the personality trait of locus of control with the ability to understand nested visualizations (Ziemkiewicz et al., 2011). Another study evaluated the general effect of personality on completion times and number of insights, but did not study the interaction between information presentational choices and personality (Green and Fisher, 2010). This study also looked at locus of control and two of the "Big Five" personality traits: extraversion and neuroticism. Participants who had an internal locus of control, or who scored higher on extraversion and neuroticism, were found to complete tasks faster. In contrast, participants who had an external locus of control, or who scored lower on extraversion and neuroticism, gained more insights.
2.1.2. Expertise
Another trait that has been considered is the level of expertise of the user. Küpper and Kobsa (2001) proposed adapting plan presentation to a model of a user's knowledge and capabilities with regard to plan concepts, e.g., knowledge of the steps and the relationships between them. Others have formally evaluated the effect of familiarity with the presented data and individuals' graphical literacy on the ability to make inferences (from both bar and line charts) (Shah and Freedman, 2011). Individual expertise in using each particular type of graph (radar graphs and bar graphs) also influenced the usability of the respective type of graph (Toker et al., 2012).
2.1.3. Cognitive traits
Individual cognitive traits have consistently been shown to influence the understandability of visualizations. Studies have considered a number of related cognitive abilities. Previous studies have found significant effects of cognitive factors such as perceptual speed, verbal working memory, and visual working memory on task performance (Toker et al., 2012; Conati et al., 2014). Other studies have also found an effect of individual perceptual speed and visual working memory capacity on which visualizations were most effective (Velez et al., 2005; Conati et al., 2014). For example, participants with low visual working memory were found to perform better with a horizontal layout (Conati et al., 2014). These findings suggest that cognitive traits are particularly promising factors to which presentational choices can be personalized in domains with high cognitive load. We further motivate the trait we chose to study, working memory, in Section 3.
2.2. RQ2: how to present the plan
This section describes some of the choices that can be made about how to present a plan: modality, layout, degree of interactivity, aggregation, and emphasis.
2.2.1. Modality
Plans can be presented in textual form (Mellish and Evans, 1989; Biundo et al., 2011; Bercher et al., 2014) and as visualizations (Küpper and Kobsa, 2003; Butler et al., 2007; Brown and Paik, 2009; McCurdy, 2009; Billman et al., 2011; de Leoni et al., 2012), with some variants somewhere on a continuum. Given that users vary in their verbal working memory (Toker et al., 2012; Conati et al., 2014), the choice of modality is a candidate design choice. Figure 1 shows a simple plan visualization in which nodes describe actions and edges are transitions to other actions.
Figure 1. Example of a simple plan visualization.
2.2.2. Layout
The way plans are laid out can help or hinder their presentational complexity. For example, the visual layout can use a mapping most suitable for the domain. The mapping used has differed between planning domains, for example mapping to a location resource in the domain of logistics (de Leoni et al., 2012), or horizontal alignment according to time for tasks that are constrained by time. Conati et al. (2014) found that users with low visual working memory answered more questions correctly with a horizontal layout than with a vertical layout for complex visualizations (ValueCharts). Other work has compared the same information presented as a bar chart vs. a radar graph (Toker et al., 2012).
2.2.3. Degrees of interactivity
As plans get large it may be necessary to occlude parts of a plan to support an overview. The idea of fading (Hothi et al., 2000) and hiding parts (e.g., using stretchtext; Boyle and Encarnacion, 1994) of an information presentation (primarily text) has previously been explored in the area of hypertext. Research on stretchtext has investigated the effectiveness of choosing which information is shown (i.e., "stretched") and which is not, but remains available via selection (i.e., "shrunk"). In the area of graphs, Henry (1992) looked at filtering graphs by content, and Sarkar and Brown (1992) applied fish-eye views to grow or shrink parts of a graph. Other work has supported zooming to manage the visualization of larger plans (Billman et al., 2011; de Leoni et al., 2012).
2.2.4. Aggregation
By aggregation, we mean the gathering of several things together into one. For example, making dough can include several composite steps such as adding flour or mixing the ingredients, but it can also be aggregated into a single step of "making dough." In other words, an alternative method for dealing with large plans is to support the cognitive mechanism of chunking by representing several steps with higher-order concepts. For example, Eugenio et al. (2005) found that describing instructions in natural language using aggregated concepts (such as the concept of an engine rather than listing all of its composite parts) led to greater learning outcomes.
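To make concrete how such a cognitive trait might drive a presentational choice, the short Python sketch below selects a layout for an individual user from a measured visual working memory score. This is purely illustrative and not taken from any of the cited systems: the function name, the score scale, and the median-split threshold are our own assumptions, loosely in the spirit of the Conati et al. (2014) finding mentioned above.

def choose_layout(vwm_score: float, group_median: float = 0.5) -> str:
    """Pick a layout variant from a participant's visual working memory score.

    Participants below the group median get the horizontal layout, which
    low-VWM users handled better in the finding discussed above (Conati
    et al., 2014); all others keep the default vertical layout.
    """
    return "horizontal" if vwm_score < group_median else "vertical"


if __name__ == "__main__":
    # Hypothetical normalized scores in [0, 1] from a span-style memory test.
    for participant, score in [("p01", 0.31), ("p02", 0.74)]:
        print(participant, "->", choose_layout(score))

Whether such a simple rule-based adaptation is appropriate, and where the threshold should sit, are exactly the kinds of questions that motivate RQ1.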
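To illustrate the node-and-edge view of a plan and the aggregation (chunking) idea just described, the following sketch collapses a group of composite steps into a single higher-order step. It is a hedged, minimal example: the dictionary-based plan representation, the step names, and the collapse function are our own assumptions rather than the representation used by any of the cited systems.

# Plan as a directed graph: each action maps to the actions it enables next.
# Step names and structure are illustrative assumptions only.
plan = {
    "add flour": ["add water"],
    "add water": ["mix ingredients"],
    "mix ingredients": ["bake"],
    "bake": [],
}

# One aggregate ("chunk") grouping three composite steps into a single step.
aggregates = {"make dough": ["add flour", "add water", "mix ingredients"]}


def collapse(plan, aggregates):
    """Return a coarser plan in which each aggregate replaces its sub-steps."""
    member_of = {s: name for name, steps in aggregates.items() for s in steps}
    coarse = {}
    for step, successors in plan.items():
        src = member_of.get(step, step)
        coarse.setdefault(src, set())
        for nxt in successors:
            dst = member_of.get(nxt, nxt)
            if dst != src:
                coarse[src].add(dst)
    return {step: sorted(nexts) for step, nexts in coarse.items()}


print(collapse(plan, aggregates))  # {'make dough': ['bake'], 'bake': []}

An interactive presentation could then toggle between the collapsed and expanded views on demand, which is essentially what stretchtext and zooming do for text and for larger plans.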
Aggregation can also be combined with interactivity, using methods such as stretchtext (mentioned above) to contract or expand relevant portions of a plan. Aggregation would also benefit from considering a user's expertise or experience with a particular task (Kobsa, 2001). Several of the surveyed planning systems support concepts similar to main tasks and sub-tasks, where a main task can consist of several sub-tasks (cf. Billman et al., 2011). In contrast, several levels of aggregation could make the presentation more complex; e.g., Gruhn and Laue (2007) claim that a greater nesting depth in a model increases its complexity. Users have also been found to differ in how well they perform tasks using visualizations with nesting or aggregation; users who scored low on the personality trait of internal locus of control performed worse with nested visualizations compared with users who scored highly on the trait (Ziemkiewicz et al., 2011).
2.2.5. Emphasis
Both text and graphics can be visually annotated to indicate importance, for example by changing their size or color to indicate relevance (Brusilovsky et al., 1996). Conversely, dimming and hiding have been used to de-emphasize information (Brusilovsky et al., 1996). Color is a particularly good candidate for emphasis; work in visual processing has established that color is processed much more quickly by the visual system than other highly salient visual features such as shapes (Nowell et al., 2002). This fact has implicitly been taken into consideration in interactive learning environments (Freyne et al., 2007; Jae-Kyung et al., 2008). Color highlighting specifically is a recognized technique for adapting hypertext and hypermedia (Brusilovsky et al., 1996; Bra et al., 1999; Jae-Kyung et al., 2008; Knutov et al., 2009), and is possibly the most commonly used type of emphasis (Butler et al., 2007; Brown and Paik, 2009; McCurdy, 2009; Billman et al., 2011; de Leoni et al., 2012); however, systems have also used other visual encodings to distinguish between different types of information, such as relative differences in size and shape (cf. de Leoni et al., 2012).
2.3. RQ3: how to evaluate the effectiveness of presentational choices
The aims of visualization evaluations have varied (Lam et al., 2012). First of all, it is worth distinguishing between what one is evaluating (e.g., data abstraction design vs. presentational encoding) and how one is evaluating it (e.g., lab studies, ethnographic studies) (Munzner, 2009). We follow most closely the sort of evaluations that could be classed under Lam et al.'s header of "evaluating human performance," which study the effects of an interactive or visual aspect of a tool on people in isolation. To enable this, we supply an overview of previously applied evaluation criteria, broadening our view from visualizations to include earlier work on information presented as hypertext or hypermedia.
2.3.1. Efficiency
Broadly speaking, efficiency can be defined as "helping users to perform their tasks faster." In previous studies, efficiency has been measured as the time to complete a single task or a set of tasks, or as the number of tasks per hour (Campagnoni and Ehrlich, 1989; Egan et al., 1989; McDonald and Stevenson, 1996). Alternative measures, such as the number or types of interactions, have also been used (Kay and Lum, 2004).
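As a minimal sketch of how these efficiency measures might be computed from interaction logs (the log format and variable names are assumptions for illustration only):

# Each entry: (task id, start time, end time) in seconds; format is assumed.
task_log = [
    ("t1", 0.0, 42.5),
    ("t2", 50.0, 118.0),
    ("t3", 125.0, 160.0),
]

durations = [end - start for _, start, end in task_log]
mean_time_per_task = sum(durations) / len(durations)  # seconds per task
tasks_per_hour = len(durations) / (sum(durations) / 3600.0)

print(f"mean time per task: {mean_time_per_task:.1f} s")
print(f"tasks per hour: {tasks_per_hour:.1f}")

Interaction-based alternatives, such as counting clicks or view switches, can be derived from the same kind of log.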
Efficiency can be affected by the choice of visualization, but also depends on the task and user characteristics (Toker et al., 2012). For example, previous work found that perceptual speed influenced task completion times with bar charts and radar charts. In addition, there was an interaction between visualization type and perceptual speed: the difference in time performance between bar and radar charts decreases as a user's perceptual speed increases. A similar study evaluating a more complex visualization, called ValueCharts, measured the interaction between task type and various cognitive traits (Conati et al., 2014). It found that level of user expertise, verbal working memory, visual working memory, and perceptual speed all interacted with task type (high level or low level). That is, the influence of individual traits on speed of performance also depended on the type of task being performed. Another study measured the effect of the personality trait of locus of control on the time spent on correct responses (Ziemkiewicz et al., 2011). Overall, participants who scored low on locus of control were slower at answering questions correctly. There was also an interaction between the trait and question type: for some questions (search tasks), participants who scored low on the trait were as fast as participants who scored highly.
2.3.2. Effectiveness
In the most general sense, a system can be said to be effective if it helps a user to produce a desired outcome, i.e., "helps users to perform their tasks well." The nature of the tasks naturally varies from system to system. For example, in a decision support context such as recommender systems, effectiveness has been defined as "help users make good decisions" with regard to whether to try or buy an item (Tintarev and Masthoff, 2015). For plans, this means successfully completing a task that requires several actions. This task may require completing the actions originally suggested by the system, or a revised sequence resulting from a user scrutinizing and modifying the suggested plan. The most common method for evaluating the effectiveness of information visualizations has been to ask participants to answer questions based on the information presented (Campagnoni and Ehrlich, 1989), although this could be said to measure understandability rather than effectiveness (Section 2.3.7). For example, Conati et al. (2014) found that users with low visual working memory answered more questions correctly with a horizontal layout (compared to a vertical layout). The complexity of the questions has varied from simple tasks (e.g., searching for objects with a given property, specifying attributes of an object, or performing count tasks; Stasko et al., 2000; Conati et al., 2014) to more complex ones (e.g., questions covering three or more attributes; Kobsa, 2001; Verbert et al., 2013). Consequently, both visualizations (Stasko et al., 2000; Kobsa, 2001; Indratmo and Gutwin, 2008) and hypertext (Campagnoni and Ehrlich, 1989; Chen and Rada, 1996) have been evaluated in terms of error rates or correct responses. Effectiveness has also been measured as how frequently a task was successful, for a task such as bookmarking at least one interesting item (Verbert et al., 2013). Other measures, such as coverage (the proportion of the material visited), have also been used (Egan et al., 1989; Instone et al., 1993).
2.3.3. Perceived measures of effectiveness and efficiency vs. actual measures
One way to supplement measures of effectiveness and efficiency is to ask participants to self-report.
Self-reported measures have been found to be reliable and sensitive to small differences in cognitive load in some cases (Paas et al., 2003). That is, if participants perform better but take longer, a self-reported measure of high mental effort could confirm that cognitive load was high. There are, however, justified questions about the reliability of self-reported measures; for example, people are known to be worse at correctly judging how long a task takes when under heavy cognitive load (Block et al., 2010). This suggests that self-reported measures may be a good supplement to, but not a replacement for, the actual measures. Visualizations have also been evaluated in terms of perceived effort when performing a task (Waldner and Vassileva, 2014). One commonly used measure of subjective workload is the NASA TLX, which uses six rating scales: mental demand, physical demand, temporal demand, performance, effort, and frustration (Hart, 2006).2.3.4. Task resumption lagOne criterion that is underrepresented in evaluations of visualizations is task resumption lag. This metric is particularly relevant in real-world applications where interruptions are likely, and where the user is likely to be under heavy cognitive load. Task interleaving is a phenomenon that occurs in many multi-tasking applications, and is not limited to the evaluation of plans. This interleaving takes time and imposes cognitive effort on users. One concrete way the cost of task interleaving has been evaluated is the time it takes to resume a previously interrupted task, or resumption lag (Iqbal and Bailey, 2006). Previous studies have identified a number of factors that influence resumption lag, including how frequent (or repeated) interruptions are, task representation complexity, whether the primary task is visible, whether interleaving happens at task boundaries, and how similar concurrent tasks are to each other (Salvucci et al., 2009).Some see task interleaving as a continuum from few interruptions to concurrent multi-tasking (where joint attention is required) (Salvucci et al., 2009). A dual-task paradigm is a procedure in experimental psychology that requires an individual to perform two tasks simultaneously, in order to compare performance with single-task conditions (Knowles, 1963; Turner and Engle, 1989; Damos, 1991, p. 221). When performance on one or both tasks is lower when they are done simultaneously than when they are done separately, the two tasks interfere with each other, and it is assumed that both tasks compete for the same class of information processing resources in the brain. Examples of settings where it may be important to measure task resumption lag include piloting, driving, and radar operation. For example, a pilot executing a procedure may be interrupted by the control center.A dual-task methodology has previously been used to evaluate information visualization, assessing visual embellishments (e.g., icons and graphics) in terms of memorability (both short- and long-term memory), search efficiency, and concept grasping (Borgo et al., 2012).2.3.5. SatisfactionSatisfaction gives a measure of how much users like a system or its output (i.e., the presented plans). The most common approach used is a questionnaire evaluating subjective user perceptions on a numeric scale, such as how participants perceived a system (Bercher et al., 2014). 
It is also possible to focus on satisfaction with specific aspects of the interface, such as how relevant users find different functionalities (Apted et al., 2003; Bakalov et al., 2010), and how well each functionality was implemented (Bakalov et al., 2010). Another approach is to compare satisfaction with different variants or views of a system (Mabbott and Bull, 2004). Individual expertise in using particular types of graphs (radar graphs and bar graphs) has been found to influence which type of graph people perceive as easier to use (Toker et al., 2012).2.3.6. MemorabilityMemorability is the extent to which somebody can remember a plan. This can be tested through recall (can the user reconstruct the plan) and through recognition (can the user recognize which plan is the one that they saw previously). Dixon et al. (1988, 1993) showed that the memorability of plans can be affected by the representations used (in their case, the sentence forms used). Kliegel et al. (2000) found that participants' working memory and plan complexity influence plan memorability (which they called plan retention) and plan execution. Recall of section headers (as a measure of incidental learning) has also been used (Hendry et al., 1990). In measuring memorability, it may be important to filter participants on general memory ability, e.g., excluding participants with exceptionally good or poor memory. In domains where users repeat a task, it may also be valuable to measure performance after a certain training period, as performance on memorability has been found to stabilize after training (Schmidt and Bjork, 1992). Measurements of memorability are likely to show improved performance with rehearsal. Previous evaluations of information visualizations have considered both short-term and long-term memorability (Borgo et al., 2012).2.3.7. UnderstandabilityUnderstandability (also known as comprehensibility; Bateman et al., 2010) is the extent to which the presented information is understood by participants. Understandability of information can be measured by asking people to summarize its contents (Bloom et al., 1956), answer questions about its contents (called Correctness of Understanding by Aranda et al., 2007), or by using a subjective self-reporting measure of how easy it is to understand (Hunter et al., 2012). For the latter, a distinction is sometimes made between confidence, the subjective confidence people display regarding their own understanding of the representation, and perceived difficulty, the subjective judgement of people regarding how easy it is to obtain information through the representation (Aranda et al., 2007). Biundo et al. (2011) and Bercher et al. (2014) evaluated a natural language presentation and explanation of a plan. The evaluation task was to connect multiple home appliances. The main evaluation criterion was perceived certainty of correct completion (the confidence aspect of understandability), but they also measured overall perceptions of the system (satisfaction). A study evaluating perceived ease-of-use found an effect of verbal working memory on ease-of-use for bar charts (Toker et al., 2012).As plan complexity impacts understandability, there is also research on measuring understandability by analysing this complexity, for example in business process models (Ghani et al., 2008) and workflows (Pfaff et al., 2014). Aranda et al. 
(2007) propose a framework for the empirical evaluation of model comprehensibility that highlights four variables affecting comprehensibility, namely language expertise (previous expertise with the notation/representation being studied), domain expertise (previous expertise with the domain being modeled), problem size (the size of the domain), and the type of task (for example, whether readers need to search for information, or integrate information in their mental model). One of the tasks mentioned by Aranda et al. (2007), namely information retention, is covered by our Memorability metric.2.3.8. Situational awarenessSituational awareness is the users' perception of environmental elements with respect to time and space, the comprehension of their meaning, and the projection of their status after some variable has changed (Endsley and Jones, 2012). It is often classified on three levels (Endsley and Jones, 2012): Level 1, the ability to correctly perceive information; Level 2, the ability to comprehend the situation; and Level 3, the ability to project the situation into the future. The ability to make decisions in complex, dynamic domains is therefore closely tied to errors in situational awareness. In particular, Level 3 extends situational awareness beyond the regular scope of understandability (cf. Section 2.3.7). Adagha et al. (2015) make a case that standard usability metrics are inadequate for evaluating the effectiveness of visual analytics tools. In a systematic review of 470 papers on decision support in visual analytics, they identify attributes of visual analytics tools and how they were evaluated. Their findings imply a limited emphasis on the incorporation of situational awareness as a key attribute in the design of visual analytics decision support tools, in particular with regard to supporting future scenario projections. Situational awareness is strongly linked to what Borgo et al. (2012) call concept grasping and define as: “more complex cognitive processes of information gathering, concept understanding and semantic reasoning.”2.3.9. Trade-offs between metricsThe metrics mentioned above provide a useful way of thinking about how to evaluate visualizations. However, it is unlikely that any choice about how to present a plan will improve performance on all of these metrics. For example, effectiveness and efficiency do not always correlate: high spatial ability has been found to be correlated with accuracy on three-dimensional visualization tasks, but not with time (Velez et al., 2005). One method that has been used is to record the time for successful trials only (Ziemkiewicz et al., 2011). Similarly, a meta-review of effectiveness and efficiency in hypertext found that the overall performance of hypertext users tended to be more effective than that of non-hypertext users, but that hypertext users were also less efficient than non-hypertext users (Chen and Rada, 1996). This is also reflected in the psychology literature, where a single combined measure of effectiveness and efficiency has been found to have very limited use (Bruyer and Brysbaert, 2011).Another useful distinction is between memorability at first exposure and long-term effectiveness. In some applications, it may be important for a user to quickly learn and remember the contents and plans. 
In others, the task may be repeated many times, and it is more important that effectiveness stabilizes at an acceptable level after a degree of training. Two other metrics that are known to conflict with regard to information presentation are effectiveness and satisfaction. For example, in one study, while participants subjectively preferred a visual representation (Satisfaction), they made better decisions (Effectiveness) using a textual representation of the same information (Law et al., 2005).
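To make the reviewed criteria concrete, the sketch below shows how a handful of them could be computed from a per-trial experiment log: effectiveness as the proportion of correct responses, efficiency as time on task (optionally restricted to successful trials only, as in Ziemkiewicz et al., 2011), an unweighted NASA TLX average, a combined speed/accuracy score of the kind critiqued by Bruyer and Brysbaert (2011), and resumption lag. This is only a minimal illustration; the log format, field names, and numbers are hypothetical assumptions and are not taken from any of the surveyed studies.

```python
from statistics import mean

# Hypothetical per-trial records; field names and values are illustrative only.
trials = [
    {"pid": "p01", "cond": "graph", "correct": True,  "seconds": 42.0},
    {"pid": "p01", "cond": "text",  "correct": True,  "seconds": 35.5},
    {"pid": "p02", "cond": "graph", "correct": False, "seconds": 51.0},
    {"pid": "p02", "cond": "text",  "correct": True,  "seconds": 39.0},
]

def effectiveness(trials, cond):
    """Proportion of correct responses in a condition (1 - error rate)."""
    sub = [t for t in trials if t["cond"] == cond]
    return sum(t["correct"] for t in sub) / len(sub)

def efficiency(trials, cond, correct_only=False):
    """Mean time on task; optionally over successful trials only."""
    sub = [t for t in trials
           if t["cond"] == cond and (t["correct"] or not correct_only)]
    return mean(t["seconds"] for t in sub)

def combined_score(trials, cond):
    """A single combined speed/accuracy score (mean correct time / accuracy);
    such combined measures have been found to be of limited use."""
    return efficiency(trials, cond, correct_only=True) / effectiveness(trials, cond)

def raw_tlx(ratings):
    """Unweighted NASA TLX: the average of the six subscale ratings (0-100)."""
    return mean(ratings)

def resumption_lag(interruption_end, first_primary_action):
    """Time from the end of an interruption to the first primary-task action."""
    return first_primary_action - interruption_end

for cond in ("graph", "text"):
    print(cond,
          round(effectiveness(trials, cond), 2),
          round(efficiency(trials, cond, correct_only=True), 1))
```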
[ "26357185", "18522052", "8364535", "23068882", "19884961", "11105530", "14042898", "22144529", "16244840", "15676313", "13310704", "19834155", "25164403" ]
[ { "pmid": "26357185", "title": "An Empirical Study on Using Visual Embellishments in Visualization.", "abstract": "In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces \"divided attention\", and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization." }, { "pmid": "18522052", "title": "Comparing online and lab methods in a problem-solving experiment.", "abstract": "Online experiments have recently become very popular, and--in comparison with traditional lab experiments--they may have several advantages, such as reduced demand characteristics, automation, and generalizability of results to wider populations (Birnbaum, 2004; Reips, 2000, 2002a, 2002b). We replicated Dandurand, Bowen, and Shultz's (2004) lab-based problem-solving experiment as an Internet experiment. Consistent with previous results, we found that participants who watched demonstrations of successful problem-solving sessions or who read instructions outperformed those who were told only that they solved problems correctly or not. Online participants were less accurate than lab participants, but there was no interaction with learning condition. Thus, we conclude that online and Internet results are consistent. Disadvantages included high dropout rate for online participants; however, combining the online experiment with the department subject pool worked well." }, { "pmid": "8364535", "title": "Effects of sentence form on the construction of mental plans from procedural discourse.", "abstract": "Memory for procedural discourse was examined in two experiments. In Experiment 1, memory was assessed using recall; in Experiment 2, a recognition test was used. In both experiments, the memorability of three types of action statements were compared: a transitive verb form, in which the action was described by a main clause; a verbal adjective form, in which the action was indicated by an adjective derived from a verb; and an implicit action form, in which the action was only implied. Information associated with transitive verbs and verbal adjectives was more likely to be recalled than information associated with implicit actions. Although a manipulation of prior knowledge affected overall recall performance, it did not interact with sentence form. In addition, recognition accuracy was affected by neither sentence form nor prior knowledge. 
To account for these results, it was proposed that transitive verbs and verbal adjectives generate a semantic representation that includes features of the action, whereas implicit actions do not. This difference in semantic representation leads to structural differences in a mental plan for the task. The obtained effects on recall reflect these differences in plan structure." }, { "pmid": "23068882", "title": "Automatic generation of natural language nursing shift summaries in neonatal intensive care: BT-Nurse.", "abstract": "INTRODUCTION\nOur objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU).\n\n\nMETHODS\nA system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in the NICU at the Royal Infirmary of Edinburgh, under supervision.\n\n\nRESULTS\nIn an on-ward evaluation, a substantial majority of the summaries was found by outgoing and incoming nurses to be understandable (90%), and a majority was found to be accurate (70%), and helpful (59%). The evaluation also served to identify some outstanding issues, especially with regard to extra content the nurses wanted to see in the computer-generated summaries.\n\n\nCONCLUSIONS\nIt is technically possible automatically to generate limited natural language NICU shift summaries from an electronic patient record. However, it proved difficult to handle electronic data that was intended primarily for display to the medical staff, and considerable engineering effort would be required to create a deployable system from our proof-of-concept software." }, { "pmid": "19884961", "title": "Categorical Data Analysis: Away from ANOVAs (transformation or not) and towards Logit Mixed Models.", "abstract": "This paper identifies several serious problems with the widespread use of ANOVAs for the analysis of categorical outcome variables such as forced-choice variables, question-answer accuracy, choice in production (e.g. in syntactic priming research), et cetera. I show that even after applying the arcsine-square-root transformation to proportional data, ANOVA can yield spurious results. I discuss conceptual issues underlying these problems and alternatives provided by modern statistics. Specifically, I introduce ordinary logit models (i.e. logistic regression), which are well-suited to analyze categorical data and offer many advantages over ANOVA. Unfortunately, ordinary logit models do not include random effect modeling. To address this issue, I describe mixed logit models (Generalized Linear Mixed Models for binomially distributed outcomes, Breslow & Clayton, 1993), which combine the advantages of ordinary logit models with the ability to account for random subject and item effects in one step of analysis. Throughout the paper, I use a psycholinguistic data set to compare the different statistical methods." }, { "pmid": "11105530", "title": "Plan formation, retention, and execution in prospective memory: a new approach and age-related effects.", "abstract": "Existing laboratory paradigms of prospective memory instruct subjects to remember to perform a single, isolated act at an appropriate point in the experiment. 
These paradigms do not completely capture many everyday complex prospective memory situations in which a series or set of delayed actions is planned to be executed in some subsequent period of time. We adapted a laboratory paradigm within which to study these prospective memory processes, and we investigated age-related influences on these prospective memory processes. Age-related declines were found in the planning, initiation, and execution of the set of tasks. In contrast, there were no age differences in plan retention or in the fidelity with which the plan was performed." }, { "pmid": "22144529", "title": "Empirical Studies in Information Visualization: Seven Scenarios.", "abstract": "We take a new, scenario-based look at evaluation in information visualization. Our seven scenarios, evaluating visual data analysis and reasoning, evaluating user performance, evaluating user experience, evaluating environments and work practices, evaluating communication through visualization, evaluating visualization algorithms, and evaluating collaborative data analysis were derived through an extensive literature review of over 800 visualization publications. These scenarios distinguish different study goals and types of research questions and are illustrated through example studies. Through this broad survey and the distillation of these scenarios, we make two contributions. One, we encapsulate the current practices in the information visualization research community and, two, we provide a different approach to reaching decisions about what might be the most effective evaluation of a given information visualization. Scenarios can be used to choose appropriate research questions and goals and the provided examples can be consulted for guidance on how to design one's own study." }, { "pmid": "16244840", "title": "A comparison of graphical and textual presentations of time series data to support medical decision making in the neonatal intensive care unit.", "abstract": "OBJECTIVE\nTo compare expert-generated textual summaries of physiological data with trend graphs, in terms of their ability to support neonatal Intensive Care Unit (ICU) staff in making decisions when presented with medical scenarios.\n\n\nMETHODS\nForty neonatal ICU staff were recruited for the experiment, eight from each of five groups--junior, intermediate and senior nurses, junior and senior doctors. The participants were presented with medical scenarios on a computer screen, and asked to choose from a list of 18 possible actions those they thought were appropriate. Half of the scenarios were presented as trend graphs, while the other half were presented as passages of text. The textual summaries had been generated by two human experts and were intended to describe the physiological state of the patient over a short period of time (around 40 minutes) but not to interpret it.\n\n\nRESULTS\nIn terms of the content of responses there was a clear advantage for the Text condition, with participants tending to choose more of the appropriate actions when the information was presented as text rather than as graphs. In terms of the speed of response there was no difference between the Graphs and Text conditions. There was no significant difference between the staff groups in terms of speed or content of responses. 
In contrast to the objective measures of performance, the majority of participants reported a subjective preference for the Graphs condition.\n\n\nCONCLUSIONS\nIn this experimental task, participants performed better when presented with a textual summary of the medical scenario than when it was presented as a set of trend graphs. If the necessary algorithms could be developed that would allow computers automatically to generate descriptive summaries of physiological data, this could potentially be a useful feature of decision support tools in the intensive care unit." }, { "pmid": "15676313", "title": "Disorientation in hypertext: the effects of three text structures on navigation performance.", "abstract": "A study is described which examines the effects of two hypertext topologies (hierarchy and non-linear) on navigation performance compared to a linear version of the same document. Subjects used the document to answer 10 questions. After a distraction period, subjects returned to the document to locate five specified nodes. Speed and accuracy measures were taken, and the subjects' own evaluation of their performance was assessed using a questionnaire. The results showed that subjects performed better with the linear text than with the non-linear text, while performance on the hierarchical document fell between these two extremes. Analysis of the questionnaire data confirmed these differences. The results are discussed in terms of their implications for computer-assisted learning systems." }, { "pmid": "19834155", "title": "A nested model for visualization design and validation.", "abstract": "We present a nested model for the visualization design process with four layers: characterize the problem domain, abstract into operations on data types, design visual encoding and interaction techniques, and create algorithms to execute techniques efficiently. The output from a level above is input to the level below, bringing attention to the design challenge that an upstream error inevitably cascades to all downstream levels. This model provides prescriptive guidance for determining appropriate evaluation approaches by identifying threats to validity unique to each level. We call attention to specific steps in the design and evaluation process that are often given short shrift. We also provide three recommendations motivated by this model:authors should distinguish between these levels when claiming contributions at more than one of them, authors should explicitly state upstream assumptions at levels above the focus of a paper, and visualization venues should accept more papers on domain characterization." }, { "pmid": "25164403", "title": "Bar and line graph comprehension: an interaction of top-down and bottom-up processes.", "abstract": "This experiment investigated the effect of format (line vs. bar), viewers' familiarity with variables, and viewers' graphicacy (graphical literacy) skills on the comprehension of multivariate (three variable) data presented in graphs. Fifty-five undergraduates provided written descriptions of data for a set of 14 line or bar graphs, half of which depicted variables familiar to the population and half of which depicted variables unfamiliar to the population. Participants then took a test of graphicacy skills. As predicted, the format influenced viewers' interpretations of data. 
Specifically, viewers were more likely to describe x-y interactions when viewing line graphs than when viewing bar graphs, and they were more likely to describe main effects and \"z-y\" (the variable in the legend) interactions when viewing bar graphs than when viewing line graphs. Familiarity of data presented and individuals' graphicacy skills interacted with the influence of graph format. Specifically, viewers were most likely to generate inferences only when they had high graphicacy skills, the data were familiar and thus the information inferred was expected, and the format supported those inferences. Implications for multivariate data display are discussed." } ]
BMC Medical Research Methodology
27875988
PMC5118882
10.1186/s12874-016-0259-3
Common data elements for secondary use of electronic health record data for clinical trial execution and serious adverse event reporting
BackgroundData capture is one of the most expensive phases during the conduct of a clinical trial and the increasing use of electronic health records (EHR) offers significant savings to clinical research. To facilitate these secondary uses of routinely collected patient data, it is beneficial to know what data elements are captured in clinical trials. Therefore our aim here is to determine the most commonly used data elements in clinical trials and their availability in hospital EHR systems.MethodsCase report forms for 23 clinical trials in differing disease areas were analyzed. Through an iterative and consensus-based process of medical informatics professionals from academia and trial experts from the European pharmaceutical industry, data elements were compiled for all disease areas and with special focus on the reporting of adverse events. Afterwards, data elements were identified and statistics acquired from hospital sites providing data to the EHR4CR project.ResultsThe analysis identified 133 unique data elements. Fifty elements were congruent with a published data inventory for patient recruitment and 83 new elements were identified for clinical trial execution, including adverse event reporting. Demographic and laboratory elements lead the list of available elements in hospitals EHR systems. For the reporting of serious adverse events only very few elements could be identified in the patient records.ConclusionsCommon data elements in clinical trials have been identified and their availability in hospital systems elucidated. Several elements, often those related to reimbursement, are frequently available whereas more specialized elements are ranked at the bottom of the data inventory list. Hospitals that want to obtain the benefits of reusing data for research from their EHR are now able to prioritize their efforts based on this common data element list.
Related workIn the EHR4CR project, data inventories for ‘protocol feasibility’ [24] and ‘patient identification and recruitment’ [23] have been compiled by Doods et al. There, 75 data elements were identified for feasibility assessment and 150 data elements for patient identification and recruitment. Despite the differing scenarios, a comparison with the current inventory for clinical trial execution and SAE reporting shows that 50 data elements had already been identified and 83 are new data elements.CDISC, C-Path, NCI-EVS and CFAST have introduced an initiative on ‘Clinical Data Standards’ to create industry-wide common standards for data capture in clinical trials and to support the exchange of clinical research data and metadata [32]. This initiative defines common data elements for different therapeutic areas. Currently, traumatic brain injury, breast cancer, COPD, diabetes, tuberculosis, etc. are covered. In addition, the CDISC SDTM implementation guideline contains a set of standardized and structured data elements for each form domain. The aim of this initiative is similar to ours with respect to identifying the most frequently used data elements for clinical trials. Nevertheless, the focus of our work is different and goes beyond this initiative in that we also determine the availability and quality of data within EHR systems.Köpcke et al. analyzed eligibility criteria from 15 clinical trials and determined their presence and completeness within the partners’ EHR systems [33]. Botsis et al. examined the incompleteness of diagnoses in pathology reports and found a rate of 48.2% (1479 of 3068 patients missing) [25]. Both publications show that re-use of EHR data relies on the availability of (1) data fields and (2) captured patient values.
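The availability and completeness statistics referred to above can be illustrated with a minimal sketch: given a flat extract of an EHR system (one row per patient, one column per candidate data element), the share of documented values can be computed per element and the elements ranked by availability. The table layout, element names, and values below are hypothetical assumptions for illustration and do not reflect the EHR4CR platform or any specific hospital system.

```python
import pandas as pd

# Hypothetical EHR extract: one row per patient, one column per candidate
# common data element (names and values are illustrative only).
ehr = pd.DataFrame({
    "patient_id":  ["a", "b", "c", "d"],
    "birth_date":  ["1950-01-01", "1962-07-12", None, "1981-03-30"],
    "hemoglobin":  [13.2, None, None, 14.1],
    "sae_outcome": [None, None, None, None],  # rarely documented element
})

def completeness(df, element):
    """Share of patients with a documented value for the given data element."""
    return df[element].notna().mean()

# Rank candidate data elements by availability, mirroring the kind of
# availability statistics gathered from hospital sites.
elements = ["birth_date", "hemoglobin", "sae_outcome"]
ranking = sorted(((e, completeness(ehr, e)) for e in elements),
                 key=lambda pair: pair[1], reverse=True)
for element, share in ranking:
    print(f"{element}: {share:.0%} documented")
```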
[ "20798168", "20803669", "19151888", "25220487", "21888989", "23266063", "23690362", "22733976", "19828572", "25123746", "21324182", "19151883", "25463966", "17149500", "26250061", "22308239", "25991199", "24410735", "21347133", "23514203", "27302260", "16160223", "22326800", "25377309", "1857252" ]
[ { "pmid": "20798168", "title": "A progress report on electronic health records in U.S. hospitals.", "abstract": "Given the substantial federal financial incentives soon to be available to providers who make \"meaningful use\" of electronic health records, tracking the progress of this health care technology conversion is a policy priority. Using a recent survey of U.S. hospitals, we found that the share of hospitals that had adopted either basic or comprehensive electronic records has risen modestly, from 8.7 percent in 2008 to 11.9 percent in 2009. Small, public, and rural hospitals were less likely to embrace electronic records than their larger, private, and urban counterparts. Only 2 percent of U.S. hospitals reported having electronic health records that would allow them to meet the federal government's \"meaningful use\" criteria. These findings underscore the fact that the transition to a digital health care system is likely to be a long one." }, { "pmid": "20803669", "title": "Where did the day go?--a time-motion study of hospitalists.", "abstract": "BACKGROUND\nWithin the last decade hospitalists have become an integral part of inpatient care in the United States and now care for about half of all Medicare patients requiring hospitalization. However, little data exists describing hospitalist workflow and their activities in daily patient care.\n\n\nOBJECTIVE\nTo clarify how hospitalists spend their time and how patient volumes affect their workflow.\n\n\nDESIGN\nObservers continuously shadowed each of 24 hospitalists for two complete shifts. Observations were recorded using a handheld computer device with customized data collection software.\n\n\nSETTING\nUrban, tertiary care, academic medical center.\n\n\nRESULTS\n: Hospitalists spent 17% of their time on direct patient contact, and 64% on indirect patient care. For 16% of all time recorded, more than one activity was occurring simultaneously (i.e., multitasking). Professional development, personal time, and travel each accounted for about 6% of their time. Communication and electronic medical record (EMR) use, two components of indirect care, occupied 25% and 34% of recorded time respectively. Hospitalists with above average patient loads spent less time per patient communicating with others and working with the EMR than those hospitalists with below average patient loads, but reported delaying documentation until later in the evening or next day. Patient load did not change the amount of time hospitalists spent with each patient.\n\n\nCONCLUSIONS\nHospitalists spend more time reviewing the EMR and documenting in it, than directly with the patient. Multi-tasking occurred frequently and occupied a significant portion of each shift." }, { "pmid": "19151888", "title": "The time needed for clinical documentation versus direct patient care. A work-sampling analysis of physicians' activities.", "abstract": "OBJECTIVES\nHealth care professionals seem to be confronted with an increasing need for high-quality, timely, patient-oriented documentation. However, a steady increase in documentation tasks has been shown to be associated with increased time pressure and low physician job satisfaction. Our objective was to examine the time physicians spend on clinical and administrative documentation tasks. 
We analyzed the time needed for clinical and administrative documentation, and compared it to other tasks, such as direct patient care.\n\n\nMETHODS\nDuring a 2-month period (December 2006 to January 2007) a trained investigator completed 40 hours of 2-minute work-sampling analysis from eight participating physicians on two internal medicine wards of a 200-bed hospital in Austria. A 37-item classification system was applied to categorize tasks into five categories (direct patient care, communication, clinical documentation, administrative documentation, other).\n\n\nRESULTS\nFrom the 5555 observation points, physicians spent 26.6% of their daily working time for documentation tasks, 27.5% for direct patient care, 36.2% for communication tasks, and 9.7% for other tasks. The documentation that is typically seen as administrative takes only approx. 16% of the total documentation time.\n\n\nCONCLUSIONS\nNearly as much time is being spent for documentation as is spent on direct patient care. Computer-based tools and, in some areas, documentation assistants may help to reduce the clinical and administrative documentation efforts." }, { "pmid": "25220487", "title": "Does single-source create an added value? Evaluating the impact of introducing x4T into the clinical routine on workflow modifications, data quality and cost-benefit.", "abstract": "OBJECTIVES\nThe first objective of this study is to evaluate the impact of integrating a single-source system into the routine patient care documentation workflow with respect to process modifications, data quality and execution times in patient care as well as research documentation. The second one is to evaluate whether it is cost-efficient using a single-source system in terms of achieved savings in documentation expenditures.\n\n\nMETHODS\nWe analyzed the documentation workflow of routine patient care and research documentation in the medical field of pruritus to identify redundant and error-prone process steps. Based on this, we established a novel documentation workflow including the x4T (exchange for Trials) system to connect hospital information systems with electronic data capture systems for the exchange of study data. To evaluate the workflow modifications, we performed a before/after analysis as well as a time-motion study. Data quality was assessed by measuring completeness, correctness and concordance of previously and newly collected data. A cost-benefit analysis was conducted to estimate the savings using x4T per collected data element and the additional costs for introducing x4T.\n\n\nRESULTS\nThe documentation workflow of patient care as well as clinical research was modified due to the introduction of the x4T system. After x4T implementation and workflow modifications, half of the redundant and error-prone process steps were eliminated. The generic x4T system allows direct transfer of routinely collected health care data into the x4T research database and avoids manual transcription steps. Since x4T has been introduced in March 2012, the number of included patients has increased by about 1000 per year. The average entire documentation time per patient visit has been significantly decreased by 70.1% (from 1116±185 to 334±83 s). After the introduction of the x4T system and associated workflow changes, the completeness of mandatory data elements raised from 82.2% to 100%. In case of the pruritus research study, the additional costs for introducing the x4T system are €434.01 and the savings are 0.48ct per collected data element. 
So, with the assumption of a 5-year runtime and 82 collected data elements per patient, the amount of documented patients has to be higher than 1102 to create a benefit.\n\n\nCONCLUSION\nIntroduction of the x4T system into the clinical and research documentation workflow can optimize the data collection workflow in both areas. Redundant and cumbersome process steps can be eliminated in the research documentation, with the result of reduced documentation times as well as increased data quality. The usage of the x4T system is especially worthwhile in a study with a large amount of collected data or a high number of included patients." }, { "pmid": "21888989", "title": "Integrating clinical research with the Healthcare Enterprise: from the RE-USE project to the EHR4CR platform.", "abstract": "BACKGROUND\nThere are different approaches for repurposing clinical data collected in the Electronic Healthcare Record (EHR) for use in clinical research. Semantic integration of \"siloed\" applications across domain boundaries is the raison d'être of the standards-based profiles developed by the Integrating the Healthcare Enterprise (IHE) initiative - an initiative by healthcare professionals and industry promoting the coordinated use of established standards such as DICOM and HL7 to address specific clinical needs in support of optimal patient care. In particular, the combination of two IHE profiles - the integration profile \"Retrieve Form for Data Capture\" (RFD), and the IHE content profile \"Clinical Research Document\" (CRD) - offers a straightforward approach to repurposing EHR data by enabling the pre-population of the case report forms (eCRF) used for clinical research data capture by Clinical Data Management Systems (CDMS) with previously collected EHR data.\n\n\nOBJECTIVE\nImplement an alternative solution of the RFD-CRD integration profile centered around two approaches: (i) Use of the EHR as the single-source data-entry and persistence point in order to ensure that all the clinical data for a given patient could be found in a single source irrespective of the data collection context, i.e. patient care or clinical research; and (ii) Maximize the automatic pre-population process through the use of a semantic interoperability services that identify duplicate or semantically-equivalent eCRF/EHR data elements as they were collected in the EHR context.\n\n\nMETHODS\nThe RE-USE architecture and associated profiles are focused on defining a set of scalable, standards-based, IHE-compliant profiles that can enable single-source data collection/entry and cross-system data reuse through semantic integration. Specifically, data reuse is realized through the semantic mapping of data collection fields in electronic Case Report Forms (eCRFs) to data elements previously defined as part of patient care-centric templates in the EHR context. The approach was evaluated in the context of a multi-center clinical trial conducted in a large, multi-disciplinary hospital with an installed EHR.\n\n\nRESULTS\nData elements of seven eCRFs used in a multi-center clinical trial were mapped to data elements of patient care-centric templates in use in the EHR at the George Pompidou hospital. 13.4% of the data elements of the eCRFs were found to be represented in EHR templates and were therefore candidate for pre-population. 
During the execution phase of the clinical study, the semantic mapping architecture enabled data persisted in the EHR context as part of clinical care to be used to pre-populate eCRFS for use without secondary data entry. To ensure that the pre-populated data is viable for use in the clinical research context, all pre-populated eCRF data needs to be first approved by a trial investigator prior to being persisted in a research data store within a CDMS.\n\n\nCONCLUSION\nSingle-source data entry in the clinical care context for use in the clinical research context - a process enabled through the use of the EHR as single point of data entry, can - if demonstrated to be a viable strategy - not only significantly reduce data collection efforts while simultaneously increasing data collection accuracy secondary to elimination of transcription or double-entry errors between the two contexts but also ensure that all the clinical data for a given patient, irrespective of the data collection context, are available in the EHR for decision support and treatment planning. The RE-USE approach used mapping algorithms to identify semantic coherence between clinical care and clinical research data elements and pre-populate eCRFs. The RE-USE project utilized SNOMED International v.3.5 as its \"pivot reference terminology\" to support EHR-to-eCRF mapping, a decision that likely enhanced the \"recall\" of the mapping algorithms. The RE-USE results demonstrate the difficult challenges involved in semantic integration between the clinical care and clinical research contexts." }, { "pmid": "23266063", "title": "Secondary use of routinely collected patient data in a clinical trial: an evaluation of the effects on patient recruitment and data acquisition.", "abstract": "PURPOSE\nClinical trials are time-consuming and require constant focus on data quality. Finding sufficient time for a trial is a challenging task for involved physicians, especially when it is conducted in parallel to patient care. From the point of view of medical informatics, the growing amount of electronically available patient data allows to support two key activities: the recruitment of patients into the study and the documentation of trial data.\n\n\nMETHODS\nThe project was carried out at one site of a European multicenter study. The study protocol required eligibility assessment for 510 patients in one week and the documentation of 46-186 data elements per patient. A database query based on routine data from patient care was set up to identify eligible patients and its results were compared to those of manual recruitment. Additionally, routine data was used to pre-populate the paper-based case report forms and the time necessary to fill in the remaining data elements was compared to completely manual data collection.\n\n\nRESULTS\nEven though manual recruitment of 327 patients already achieved high sensitivity (88%) and specificity (87%), the subsequent electronic report helped to include 42 (14%) additional patients and identified 21 (7%) patients, who were incorrectly included. Pre-populating the case report forms decreased the time required for documentation from a median of 255 to 30s.\n\n\nCONCLUSIONS\nReuse of routine data can help to improve the quality of patient recruitment and may reduce the time needed for data acquisition. These benefits can exceed the efforts required for development and implementation of the corresponding electronic support systems." 
}, { "pmid": "23690362", "title": "Using electronic dental record data for research: a data-mapping study.", "abstract": "Anecdotal evidence suggests that, during the clinical care process, many dental practices record some data that are also collected in dental practice based research network (PBRN) studies. Since the use of existing, electronically stored data for research has multiple benefits, we investigated the overlap between research data fields used in dental PBRN studies and clinical data fields typically found in general dental records. We mapped 734 unique data elements from the Dental Information Model (DIM) to 2,487 Common Data Elements (CDE) curated by the NIDCR's PBRNs in the Cancer Data Standards Registry and Repository (caDSR). Thirty-three percent of the DIM data elements matched at least one CDE completely and 9% partially, translating to about 9% and 2%, respectively, of all data elements used in PBRN studies. The most frequently used CDEs found in the DIM included data about dental anatomy, medications, and items such as oral biopsy and caries. Our study shows that a non-trivial number of data elements in general dental records can be mapped either completely or partially to data fields in research studies. Further studies should investigate the feasibility of electronic clinical data for research purposes." }, { "pmid": "22733976", "title": "Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research.", "abstract": "OBJECTIVE\nTo review the methods and dimensions of data quality assessment in the context of electronic health record (EHR) data reuse for research.\n\n\nMATERIALS AND METHODS\nA review of the clinical research literature discussing data quality assessment methodology for EHR data was performed. Using an iterative process, the aspects of data quality being measured were abstracted and categorized, as well as the methods of assessment used.\n\n\nRESULTS\nFive dimensions of data quality were identified, which are completeness, correctness, concordance, plausibility, and currency, and seven broad categories of data quality assessment methods: comparison with gold standards, data element agreement, data source agreement, distribution comparison, validity checks, log review, and element presence.\n\n\nDISCUSSION\nExamination of the methods by which clinical researchers have investigated the quality and suitability of EHR data for research shows that there are fundamental features of data quality, which may be difficult to measure, as well as proxy dimensions. Researchers interested in the reuse of EHR data for clinical research are recommended to consider the adoption of a consistent taxonomy of EHR data quality, to remain aware of the task-dependence of data quality, to integrate work on data quality assessment from other fields, and to adopt systematic, empirically driven, statistically based methods of data quality assessment.\n\n\nCONCLUSION\nThere is currently little consistency or potential generalizability in the methods used to assess EHR data quality. If the reuse of EHR data for clinical research is to become accepted, researchers should adopt validated, systematic methods of EHR data quality assessment." }, { "pmid": "19828572", "title": "Using your electronic medical record for research: a primer for avoiding pitfalls.", "abstract": "In Canada, use of electronic medical records (EMRs) among primary health care (PHC) providers is relatively low. 
However, it appears that EMRs will eventually become more ubiquitous in PHC. This represents an important development in the use of health care information technology as well as a potential new source of PHC data for research. However, care in the use of EMR data is required. Four years ago, researchers at the Centre for Studies in Family Medicine, The University of Western Ontario created an EMR-based research project, called Deliver Primary Health Care Information. Implementing this project led us to two conclusions about using PHC EMR data for research: first, additional time is required for providers to undertake EMR training and to standardize the way data are entered into the EMR and second, EMRs are designed for clinical care, not research. Based on these experiences, we offer our thoughts about how EMRs may, nonetheless, be used for research. Family physician researchers who intend to use EMR data to answer timely questions relevant to practice should evaluate the possible impact of the four questions raised by this paper: (i) why are EMR data different?; (ii) how do you extract data from an EMR?; (iii) where are the data stored? and (iv) what is the data quality? In addition, consideration needs to be given to the complexity of the research question since this can have an impact on how easily issues of using EMR data for research can be overcome." }, { "pmid": "25123746", "title": "Clinical research informatics and electronic health record data.", "abstract": "OBJECTIVES\nThe goal of this survey is to discuss the impact of the growing availability of electronic health record (EHR) data on the evolving field of Clinical Research Informatics (CRI), which is the union of biomedical research and informatics.\n\n\nRESULTS\nMajor challenges for the use of EHR-derived data for research include the lack of standard methods for ensuring that data quality, completeness, and provenance are sufficient to assess the appropriateness of its use for research. Areas that need continued emphasis include methods for integrating data from heterogeneous sources, guidelines (including explicit phenotype definitions) for using these data in both pragmatic clinical trials and observational investigations, strong data governance to better understand and control quality of enterprise data, and promotion of national standards for representing and using clinical data.\n\n\nCONCLUSIONS\nThe use of EHR data has become a priority in CRI. Awareness of underlying clinical data collection processes will be essential in order to leverage these data for clinical research and patient care, and will require multi-disciplinary teams representing clinical research, informatics, and healthcare operations. Considerations for the use of EHR data provide a starting point for practical applications and a CRI research agenda, which will be facilitated by CRI's key role in the infrastructure of a learning healthcare system." }, { "pmid": "21324182", "title": "HIS-based Kaplan-Meier plots--a single source approach for documenting and reusing routine survival information.", "abstract": "BACKGROUND\nSurvival or outcome information is important for clinical routine as well as for clinical research and should be collected completely, timely and precisely. This information is relevant for multiple usages including quality control, clinical trials, observational studies and epidemiological registries. 
However, the local hospital information system (HIS) does not support this documentation and therefore this data has to generated by paper based or spreadsheet methods which can result in redundantly documented data. Therefore we investigated, whether integrating the follow-up documentation of different departments in the HIS and reusing it for survival analysis can enable the physician to obtain survival curves in a timely manner and to avoid redundant documentation.\n\n\nMETHODS\nWe analysed the current follow-up process of oncological patients in two departments (urology, haematology) with respect to different documentation forms. We developed a concept for comprehensive survival documentation based on a generic data model and implemented a follow-up form within the HIS of the University Hospital Muenster which is suitable for a secondary use of these data. We designed a query to extract the relevant data from the HIS and implemented Kaplan-Meier plots based on these data. To re-use this data sufficient data quality is needed. We measured completeness of forms with respect to all tumour cases in the clinic and completeness of documented items per form as incomplete information can bias results of the survival analysis.\n\n\nRESULTS\nBased on the form analysis we discovered differences and concordances between both departments. We identified 52 attributes from which 13 were common (e.g. procedures and diagnosis dates) and were used for the generic data model. The electronic follow-up form was integrated in the clinical workflow. Survival data was also retrospectively entered in order to perform survival and quality analyses on a comprehensive data set. Physicians are now able to generate timely Kaplan-Meier plots on current data. We analysed 1029 follow-up forms of 965 patients with survival information between 1992 and 2010. Completeness of forms was 60.2%, completeness of items ranges between 94.3% and 98.5%. Median overall survival time was 16.4 years; median event-free survival time was 7.7 years.\n\n\nCONCLUSION\nIt is feasible to integrate survival information into routine HIS documentation such that Kaplan-Meier plots can be generated directly and in a timely manner." }, { "pmid": "19151883", "title": "Future developments of medical informatics from the viewpoint of networked clinical research. Interoperability and integration.", "abstract": "OBJECTIVES\nTo be prepared for future developments, such as enabling support of rapid innovation transfer and personalized medicine concepts, interoperability of basic research, clinical research and medical care is essential. It is the objective of our paper to give an overview of developments, indicate problem areas and to specify future requirements.\n\n\nMETHODS\nIn this paper recent and ongoing large-scaled activities related to interoperability and integration of networked clinical research are described and evaluated. The following main topics are covered: necessity for general IT-conception, open source/open community approach, acceptance of eSource in clinical research, interoperability of the electronic health record and electronic data capture and harmonization and bridging of standards for technical and semantic interoperability.\n\n\nRESULTS\nNational infrastructures and programmes have been set up to provide general IT-conceptions to guide planning and development of software tools (e.g. TMF, caBIG, NIHR). 
The concept of open research described by transparency achieved through open access, open data, open communication and open source software is becoming more and more important in clinical research infrastructure development (e.g. caBIG, ePCRN). Meanwhile visions and rules for using eSource in clinical research are available, with the potential to improve interoperability between the electronic health record and electronic data capture (e.g. CDISC eSDI, eClinical Forum/PhRMA EDC/eSource Taskforce). Several groups have formulated user requirements, use cases and technical frameworks to advance these issues (e.g. NHIN Slipstream-project, EHR/CR-project, IHE). In order to achieve technical and semantic interoperability, existing standards (e.g. CDISC) have to be harmonized and bridged. Major consortia have been formed to provide semantical interoperability (e.g. HL7 RCRIM under joint leadership of HL7, CDISC and FDA, or BRIDG covering CDISC, HL7, FDA, NCI) and to provide core sets of data collection fields (CDASH).\n\n\nCONCLUSIONS\nThe essential tasks for medical informatics within the next ten years will now be the development and implementation of encompassing IT conceptions, strong support of the open community and open source approach, the acceptance of eSource in clinical research, the uncompromising continuity of standardization and bridging of technical standards and the widespread use of electronic health record systems." }, { "pmid": "25463966", "title": "Using electronic health records for clinical research: the case of the EHR4CR project.", "abstract": "OBJECTIVES\nTo describe the IMI EHR4CR project which is designing and developing, and aims to demonstrate, a scalable, widely acceptable and efficient approach to interoperability between EHR systems and clinical research systems.\n\n\nMETHODS\nThe IMI EHR4CR project is combining and extending several previously isolated state-of-the-art technical components through a new approach to develop a platform for reusing EHR data to support medical research. This will be achieved through multiple but unified initiatives across different major disease areas (e.g. cardiovascular, cancer) and clinical research use cases (protocol feasibility, patient identification and recruitment, clinical trial execution and serious adverse event reporting), with various local and national stakeholders across several countries and therefore under various legal frameworks.\n\n\nRESULTS\nAn initial instance of the platform has been built, providing communication, security and terminology services to the eleven participating hospitals and ten pharmaceutical companies located in seven European countries. Proof-of-concept demonstrators have been built and evaluated for the protocol feasibility and patient recruitment scenarios. 
The specifications of the clinical trial execution and the adverse event reporting scenarios have been documented and reviewed.\n\n\nCONCLUSIONS\nThrough a combination of a consortium that brings collectively many years of experience from previous relevant EU projects and of the global conduct of clinical trials, of an approach to ethics that engages many important stakeholders across Europe to ensure acceptability, of a robust iterative design methodology for the platform services that is anchored on requirements of an underlying Service Oriented Architecture that has been designed to be scalable and adaptable, EHR4CR could be well placed to deliver a sound, useful and well accepted pan-European solution for the reuse of hospital EHR data to support clinical research studies." }, { "pmid": "17149500", "title": "The Common Data Elements for cancer research: remarks on functions and structure.", "abstract": "OBJECTIVES\nThe National Cancer Institute (NCI) has developed the Common Data Elements (CDE) to serve as a controlled vocabulary of data descriptors for cancer research, to facilitate data interchange and inter-operability between cancer research centers. We evaluated CDE's structure to see whether it could represent the elements necessary to support its intended purpose, and whether it could prevent errors and inconsistencies from being accidentally introduced. We also performed automated checks for certain types of content errors that provided a rough measure of curation quality.\n\n\nMETHODS\nEvaluation was performed on CDE content downloaded via the NCI's CDE Browser, and transformed into relational database form. Evaluation was performed under three categories: 1) compatibility with the ISO/IEC 11179 metadata model, on which CDE structure is based, 2) features necessary for controlled vocabulary support, and 3) support for a stated NCI goal, set up of data collection forms for cancer research.\n\n\nRESULTS\nVarious limitations were identified both with respect to content (inconsistency, insufficient definition of elements, redundancy) as well as structure--particularly the need for term and relationship support, as well as the need for metadata supporting the explicit representation of electronic forms that utilize sets of common data elements.\n\n\nCONCLUSIONS\nWhile there are numerous positive aspects to the CDE effort, there is considerable opportunity for improvement. Our recommendations include review of existing content by diverse experts in the cancer community; integration with the NCI thesaurus to take advantage of the latter's links to nationally used controlled vocabularies, and various schema enhancements required for electronic form support." }, { "pmid": "26250061", "title": "Advancing Symptom Science Through Use of Common Data Elements.", "abstract": "BACKGROUND\nUse of common data elements (CDEs), conceptually defined as variables that are operationalized and measured in identical ways across studies, enables comparison of data across studies in ways that would otherwise be impossible. Although healthcare researchers are increasingly using CDEs, there has been little systematic use of CDEs for symptom science. 
CDEs are especially important in symptom science because people experience common symptoms across a broad range of health and developmental states, and symptom management interventions may have common outcomes across populations.\n\n\nPURPOSES\nThe purposes of this article are to (a) recommend best practices for the use of CDEs for symptom science within and across centers; (b) evaluate the benefits and challenges associated with the use of CDEs for symptom science; (c) propose CDEs to be used in symptom science to serve as the basis for this emerging science; and (d) suggest implications and recommendations for future research and dissemination of CDEs for symptom science.\n\n\nDESIGN\nThe National Institute of Nursing Research (NINR)-supported P20 and P30 Center directors applied published best practices, expert advice, and the literature to identify CDEs to be used across the centers to measure pain, sleep, fatigue, and affective and cognitive symptoms.\n\n\nFINDINGS\nWe generated a minimum set of CDEs to measure symptoms.\n\n\nCONCLUSIONS\nThe CDEs identified through this process will be used across the NINR Centers and will facilitate comparison of symptoms across studies. We expect that additional symptom CDEs will be added and the list will be refined in future work.\n\n\nCLINICAL RELEVANCE\nSymptoms are an important focus of nursing care. Use of CDEs will facilitate research that will lead to better ways to assist people to manage their symptoms." }, { "pmid": "22308239", "title": "Standardizing the structure of stroke clinical and epidemiologic research data: the National Institute of Neurological Disorders and Stroke (NINDS) Stroke Common Data Element (CDE) project.", "abstract": "BACKGROUND AND PURPOSE\nThe National Institute of Neurological Disorders and Stroke initiated development of stroke-specific Common Data Elements (CDEs) as part of a project to develop data standards for funded clinical research in all fields of neuroscience. Standardizing data elements in translational, clinical, and population research in cerebrovascular disease could decrease study start-up time, facilitate data sharing, and promote well-informed clinical practice guidelines.\n\n\nMETHODS\nA working group of diverse experts in cerebrovascular clinical trials, epidemiology, and biostatistics met regularly to develop a set of stroke CDEs, selecting among, refining, and adding to existing, field-tested data elements from national registries and funded trials and studies. Candidate elements were revised on the basis of comments from leading national and international neurovascular research organizations and the public.\n\n\nRESULTS\nThe first iteration of the National Institute of Neurological Disorders and Stroke (NINDS) stroke-specific CDEs comprises 980 data elements spanning 9 content areas: (1) biospecimens and biomarkers; (2) hospital course and acute therapies; (3) imaging; (4) laboratory tests and vital signs; (5) long-term therapies; (6) medical history and prior health status; (7) outcomes and end points; (8) stroke presentation; and (9) stroke types and subtypes. A CDE website provides uniform names and structures for each element, a data dictionary, and template case report forms, using the CDEs.\n\n\nCONCLUSIONS\nStroke-specific CDEs are now available as standardized, scientifically vetted, variable structures to facilitate data collection and data sharing in cerebrovascular patient-oriented research. 
The CDEs are an evolving resource that will be iteratively improved based on investigator use, new technologies, and emerging concepts and research findings." }, { "pmid": "25991199", "title": "A European inventory of data elements for patient recruitment.", "abstract": "INTRODUCTION\nIn the last few years much work has been conducted in creating systems that support clinical trials for example by utilizing electronic health record data. One of these endeavours is the Electronic Health Record for Clinical Research project (EHR4CR). An unanswered question that the project aims to answer is which data elements are most commonly required for patient recruitment.\n\n\nMETHODS\nFree text eligibility criteria from 40 studies were analysed, simplified and elements were extracted. These elements where then added to an existing inventory of data elements for protocol feasibility.\n\n\nRESULTS\nWe simplified and extracted data elements from 40 trials, which resulted in 1170 elements. From these we created an inventory of 150 unique data elements relevant for patient identification and recruitment with definitions and referenced codes to standard terminologies.\n\n\nDISCUSSION\nOur list was created with expertise from pharmaceutical companies. Comparisons with related work shows that identified concepts are similar. An evaluation of the availability of these elements in electronic health records is still ongoing. Hospitals that want to engage in re-use of electronic health record data for research purposes, for example by joining networks like EHR4CR, can now prioritize their effort based on this list." }, { "pmid": "24410735", "title": "A European inventory of common electronic health record data elements for clinical trial feasibility.", "abstract": "BACKGROUND\nClinical studies are a necessity for new medications and therapies. Many studies, however, struggle to meet their recruitment numbers in time or have problems in meeting them at all. With increasing numbers of electronic health records (EHRs) in hospitals, huge databanks emerge that could be utilized to support research. The Innovative Medicine Initiative (IMI) funded project 'Electronic Health Records for Clinical Research' (EHR4CR) created a standardized and homogenous inventory of data elements to support research by utilizing EHRs. Our aim was to develop a Data Inventory that contains elements required for site feasibility analysis.\n\n\nMETHODS\nThe Data Inventory was created in an iterative, consensus driven approach, by a group of up to 30 people consisting of pharmaceutical experts and informatics specialists. An initial list was subsequently expanded by data elements of simplified eligibility criteria from clinical trial protocols. Each element was manually reviewed by pharmaceutical experts and standard definitions were identified and added. To verify their availability, data exports of the source systems at eleven university hospitals throughout Europe were conducted and evaluated.\n\n\nRESULTS\nThe Data Inventory consists of 75 data elements that, on the one hand are frequently used in clinical studies, and on the other hand are available in European EHR systems. Rankings of data elements were created from the results of the data exports. In addition a sub-list was created with 21 data elements that were separated from the Data Inventory because of their low usage in routine documentation.\n\n\nCONCLUSION\nThe data elements in the Data Inventory were identified with the knowledge of domain experts from pharmaceutical companies. 
Currently, not all information that is frequently used in site feasibility is documented in routine patient care." }, { "pmid": "21347133", "title": "Secondary Use of EHR: Data Quality Issues and Informatics Opportunities.", "abstract": "Given the large-scale deployment of Electronic Health Records (EHR), secondary use of EHR data will be increasingly needed in all kinds of health services or clinical research. This paper reports some data quality issues we encountered in a survival analysis of pancreatic cancer patients. Using the clinical data warehouse at Columbia University Medical Center in the City of New York, we mined EHR data elements collected between 1999 and 2009 for a cohort of pancreatic cancer patients. Of the 3068 patients who had ICD-9-CM diagnoses for pancreatic cancer, only 1589 had corresponding disease documentation in pathology reports. Incompleteness was the leading data quality issue; many study variables had missing values to various degrees. Inaccuracy and inconsistency were the next common problems. In this paper, we present the manifestations of these data quality issues and discuss some strategies for using emerging informatics technologies to solve these problems." }, { "pmid": "23514203", "title": "Evaluation of data completeness in the electronic health record for the purpose of patient recruitment into clinical trials: a retrospective analysis of element presence.", "abstract": "BACKGROUND\nComputerized clinical trial recruitment support is one promising field for the application of routine care data for clinical research. The primary task here is to compare the eligibility criteria defined in trial protocols with patient data contained in the electronic health record (EHR). To avoid the implementation of different patient definitions in multi-site trials, all participating research sites should use similar patient data from the EHR. Knowledge of the EHR data elements which are commonly available from most EHRs is required to be able to define a common set of criteria. The objective of this research is to determine for five tertiary care providers the extent of available data compared with the eligibility criteria of randomly selected clinical trials.\n\n\nMETHODS\nEach participating study site selected three clinical trials at random. All eligibility criteria sentences were broken up into independent patient characteristics, which were then assigned to one of the 27 semantic categories for eligibility criteria developed by Luo et al. We report on the fraction of patient characteristics with corresponding structured data elements in the EHR and on the fraction of patients with available data for these elements. The completeness of EHR data for the purpose of patient recruitment is calculated for each semantic group.\n\n\nRESULTS\n351 eligibility criteria from 15 clinical trials contained 706 patient characteristics. In average, 55% of these characteristics could be documented in the EHR. Clinical data was available for 64% of all patients, if corresponding data elements were available. The total completeness of EHR data for recruitment purposes is 35%. The best performing semantic groups were 'age' (89%), 'gender' (89%), 'addictive behaviour' (74%), 'disease, symptom and sign' (64%) and 'organ or tissue status' (61%). No data was available for 6 semantic groups.\n\n\nCONCLUSIONS\nThere exists a significant gap in structure and content between data documented during patient care and data required for patient eligibility assessment. 
Nevertheless, EHR data on age and gender of the patient, as well as selected information on his disease can be complete enough to allow for an effective support of the manual screening process with an intelligent preselection of patients and patient data." }, { "pmid": "16160223", "title": "The openEHR Foundation.", "abstract": "The openEHR Foundation is an independent, not-for-profit organisation and community, facilitating the creation and sharing of health records by consumers and clinicians via open-source, standards-based implementations. It was formed as a union of ten-year international R&D efforts in specifying the requirements, information models and implementation of comprehensive and ethico-legally sound electronic health record systems. Between 2000 and 2004 it has grown to having an on-line membership of over 300, published a wide range of EHR information viewpoint specifications. Several groups have now begun collaborative software development, within an open source framework. This chapter summarises the formation of openEHR, its research underpinning, practical demonstrators, the principle design concepts, and the roles openEHR members are playing in international standards." }, { "pmid": "22326800", "title": "Building a robust, scalable and standards-driven infrastructure for secondary use of EHR data: the SHARPn project.", "abstract": "The Strategic Health IT Advanced Research Projects (SHARP) Program, established by the Office of the National Coordinator for Health Information Technology in 2010 supports research findings that remove barriers for increased adoption of health IT. The improvements envisioned by the SHARP Area 4 Consortium (SHARPn) will enable the use of the electronic health record (EHR) for secondary purposes, such as care process and outcomes improvement, biomedical research and epidemiologic monitoring of the nation's health. One of the primary informatics problem areas in this endeavor is the standardization of disparate health data from the nation's many health care organizations and providers. The SHARPn team is developing open source services and components to support the ubiquitous exchange, sharing and reuse or 'liquidity' of operational clinical data stored in electronic health records. One year into the design and development of the SHARPn framework, we demonstrated end to end data flow and a prototype SHARPn platform, using thousands of patient electronic records sourced from two large healthcare organizations: Mayo Clinic and Intermountain Healthcare. The platform was deployed to (1) receive source EHR data in several formats, (2) generate structured data from EHR narrative text, and (3) normalize the EHR data using common detailed clinical models and Consolidated Health Informatics standard terminologies, which were (4) accessed by a phenotyping service using normalized data specifications. The architecture of this prototype SHARPn platform is presented. The EHR data throughput demonstration showed success in normalizing native EHR data, both structured and narrative, from two independent organizations and EHR systems. Based on the demonstration, observed challenges for standardization of EHR data for interoperable secondary use are discussed." }, { "pmid": "25377309", "title": "Secure Secondary Use of Clinical Data with Cloud-based NLP Services. 
Towards a Highly Scalable Research Infrastructure.", "abstract": "OBJECTIVES\nThe secondary use of clinical data provides large opportunities for clinical and translational research as well as quality assurance projects. For such purposes, it is necessary to provide a flexible and scalable infrastructure that is compliant with privacy requirements. The major goals of the cloud4health project are to define such an architecture, to implement a technical prototype that fulfills these requirements and to evaluate it with three use cases.\n\n\nMETHODS\nThe architecture provides components for multiple data provider sites such as hospitals to extract free text as well as structured data from local sources and de-identify such data for further anonymous or pseudonymous processing. Free text documentation is analyzed and transformed into structured information by text-mining services, which are provided within a cloud-computing environment. Thus, newly gained annotations can be integrated along with the already available structured data items and the resulting data sets can be uploaded to a central study portal for further analysis.\n\n\nRESULTS\nBased on the architecture design, a prototype has been implemented and is under evaluation in three clinical use cases. Data from several hundred patients provided by a University Hospital and a private hospital chain have already been processed.\n\n\nCONCLUSIONS\nCloud4health has shown how existing components for secondary use of structured data can be complemented with text-mining in a privacy compliant manner. The cloud-computing paradigm allows a flexible and dynamically adaptable service provision that facilitates the adoption of services by data providers without own investments in respective hardware resources and software tools." } ]
Scientific Reports
27876847
PMC5120294
10.1038/srep37470
A simplified computational memory model from information processing
This paper proposes a computational model of memory from an information-processing point of view. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent a neuron or brain cortex on the basis of biology and graph theory; an intra-modular network is then developed with a modeling algorithm that maps nodes and edges, and the bi-modular network is delineated through intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and the simulation results show that the model is consistent with memory phenomena from an information-processing view.
Related Work

In traditional memory studies, memory has been accepted as a network10, and visual modeling has been used at levels from the psychological to the neural, physiological, anatomical and computational12345678910, centered on neurons, cortices, physical signals, chemical signals and information processing910111213. Various memory networks have been modeled from structural or functional characteristics such as analogy, dimensionality reduction12 and classification14 in order to study association16, free recall15, coding16, retrieval efficiency13 and so on, and many different memory-network structures have been obtained, including modular6, hierarchical57, tree17 and small-world networks101819. These models reproduce the structure, the function or the behavior of memory, partly or as a whole, and some of them are hybrid.

Aimed at structural modeling, Renart and Rolls6 reported a multi-modular neural network model in 1999 that simulates associative memory properties from the neuron perspective; they used a tri-modular architecture to represent multiple cortical modules and developed a bi-modular recurrent associative network, at the neuron and cortical levels, for brain memory functions. In that network the local functional features were implemented by intra-modular connections, and the global modular information features were implemented by inter-modular connections. Ten years later, Rolls9 continued the computational theory of episodic memory formation based on the hippocampus and proposed the architecture of an auto-associative or attractor neural network; modularity is a remarkable character of those results. Meunier20 made a specialized study of modularity for memory. Hierarchy5 is another remarkable character of memory networks. Cartling7 put forward a series of memory models and discussed the dynamics of the hierarchical associative process at the neuron level; he emphasized the storage mode and information retrieval and applied graph theory to memory networks. In addition, Fiebig21 put forward a three-stage neural network model based on autonomous reinstatement dynamics to discuss memory consolidation.

Aimed at functional modeling, Lo22 used a mathematical model to simulate biological neural networks and proposed a functional temporal hierarchical probabilistic associative memory network; he simulated Hebb's rule with a recurrent multilayer network, used a dendritic tree structure to model the neuron's information input, and moreover discussed multiple and/or hierarchical architectures. Polyn15 discussed the free recall process and reported on retrieval maintenance. Bahland10 delineated an efficient associative memory model with local connections and obtained a small-world network structure with high retrieval performance.

Of course, structural and functional models are not independent of each other; each is an integration of structural, functional and behavioral aspects, and the results are suitable for computational modeling. Xie12 selected neurons with higher firing rates to set up a low-dimensional network and discussed the functional connections with graph theory. Lu19 obtained a small-world neuronal network from multi-electrode recordings related to working memory tasks in the rat. Xu23 presented a simplified memory network aimed at pattern formation; they used two loops of coupled neurons to analyze the process from short-term to long-term memory theoretically.

In particular, memory simulation has attracted close attention with respect to information representation, from structure to function, such as retrieval and its efficiency.
As Mizraji1 said, "Cognitive functions rely on the extensive use of information stored in the brain, and the searching for the relevant information for solving some problem is a very complex task"; they proposed a multi-modular network that processes context-dependent memory as the basic model, developed from the perspective of information queries driven by brain dynamics. Miyamoto24 reviewed memory encoding and retrieval from neuroimaging in primates. Tsukada11 used real neurons as the basic structure to set up an associative memory model that can realize successive retrieval. Bednar13 combined cortical structure and self-organizing functions to discuss the computing efficiency of memory networks. Sacramento17 used a hierarchical structure to connect neurons to improve retrieval efficiency and set up a tree-structured associative memory network. Anishchenko18 pointed out that the metric structure of synaptic connections is vital for a network's capacity to retrieve memories. Rendeiro14 improved memory efficiency by classification, but the model had a hierarchical tree structure. Snaider25 set up an extended sparse distributed memory network using large word vectors as the data structure, and obtained a highly efficient auto-associative storage method with a tree as the data structure.

From the models above, we find that memory networks have many remarkable structures and characters, such as modularity, hierarchy, small-worldness and so on2; with these characters information processing can be highly efficient. It is ideal but difficult to model memory thoroughly from the neuron level91124. Flavell4 proposed meta-memory in 1971 and assumed that meta-memory is a special type of memory representing the memory of memory, i.e., the reorganization, evaluation and monitoring processes of one's own memory. We introduced a logical conception of meta-memory2 to avoid the restriction of the micro scale. Meta-memory is an abstract definition: it comprises a memory unit in a memory task, and it reflects the function of a neuron rather than a neuron itself. In our definition, meta-memory represents an independent memory task unit or an integrated unit of information storage. A meta-memory node can be defined at different scales; for example, a meta-memory can be the memory of a number, a letter or a picture, so a neuron or a cortical area can serve as a meta-memory node in the memory network.

We previously put forward an initial memory model with small-world characters based on meta-memory2. In that model the clustering coefficient was discussed in detail, but the algorithms were immature and the memory functions were ambiguous. In this paper we improve our retrieval algorithm and clarify the correspondence between biological structure and information processing, taking word memory as an example, in order to refine the model so that it matches memory functions such as forgetting and association precisely, which improves the mapping and understanding of the model.
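The small-world character invoked above (high clustering together with short path lengths) can be illustrated with a minimal sketch that treats meta-memory units as abstract graph vertices. This is not the SMIRN modeling or retrieval algorithm of the paper; it is a generic Watts-Strogatz-style construction, and the node count, neighbourhood size and rewiring probability are illustrative assumptions.

```python
# Minimal sketch (not the SMIRN algorithm): treat meta-memory units as abstract
# graph nodes, start from an ordered ring lattice and rewire a small fraction of
# edges at random, then check the two small-world indicators discussed above:
# the clustering coefficient stays high while the mean path length drops.
# The parameters n, k and p are illustrative assumptions.
import networkx as nx

def build_meta_memory_graph(n=100, k=6, p=0.1, seed=42):
    """n meta-memory nodes, each initially linked to its k nearest ring
    neighbours; every edge is rewired to a random target with probability p."""
    return nx.connected_watts_strogatz_graph(n, k, p, tries=100, seed=seed)

if __name__ == "__main__":
    for name, p in [("ordered lattice", 0.0), ("small-world", 0.1)]:
        g = build_meta_memory_graph(p=p)
        c = nx.average_clustering(g)             # stays high after mild rewiring
        l = nx.average_shortest_path_length(g)   # drops sharply after rewiring
        print(f"{name:16s} clustering={c:.3f}  mean path length={l:.2f}")
```

Rewiring only a small fraction of edges keeps the clustering coefficient close to that of the ordered lattice while sharply reducing the mean path length, which is the property the memory-network models cited above exploit for efficient information processing and retrieval.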
[ "19496023", "11305890", "3676355", "8573654", "19190637", "20307583", "24427215", "25150041", "19159151", "20970304", "17320359", "24662576", "22132040", "25192634", "9623998" ]
[ { "pmid": "19496023", "title": "Dynamic searching in the brain.", "abstract": "Cognitive functions rely on the extensive use of information stored in the brain, and the searching for the relevant information for solving some problem is a very complex task. Human cognition largely uses biological search engines, and we assume that to study cognitive function we need to understand the way these brain search engines work. The approach we favor is to study multi-modular network models, able to solve particular problems that involve searching for information. The building blocks of these multimodular networks are the context dependent memory models we have been using for almost 20 years. These models work by associating an output to the Kronecker product of an input and a context. Input, context and output are vectors that represent cognitive variables. Our models constitute a natural extension of the traditional linear associator. We show that coding the information in vectors that are processed through association matrices, allows for a direct contact between these memory models and some procedures that are now classical in the Information Retrieval field. One essential feature of context-dependent models is that they are based on the thematic packing of information, whereby each context points to a particular set of related concepts. The thematic packing can be extended to multimodular networks involving input-output contexts, in order to accomplish more complex tasks. Contexts act as passwords that elicit the appropriate memory to deal with a query. We also show toy versions of several 'neuromimetic' devices that solve cognitive tasks as diverse as decision making or word sense disambiguation. The functioning of these multimodular networks can be described as dynamical systems at the level of cognitive variables." }, { "pmid": "11305890", "title": "Cortical responses to single mechanoreceptive afferent microstimulation revealed with fMRI.", "abstract": "The technique of intraneural microneurography/microstimulation has been used extensively to study contributions of single, physiologically characterized mechanoreceptive afferents (MRAs) to properties of somatosensory experience in awake human subjects. Its power as a tool for sensory neurophysiology can be greatly enhanced, however, by combining it with functional neuroimaging techniques that permit simultaneous measurement of the associated CNS responses. Here we report its successful adaptation to the environment of a high-field MR scanner. Eight median-nerve MRAs were isolated and characterized in three subjects and microstimulated in conjunction with fMRI at 3.0 T. Hemodynamic responses were observed in every case, and these responses were robust, focal, and physiologically orderly. The combination of fMRI with microstimulation will enable more detailed studies of the representation of the body surface in human somatosensory cortex and further studies of the relationship of that organization to short-term plasticity in the human SI cortical response to natural tactile stimuli. It can also be used to study many additional topics in sensory neurophysiology, such as CNS responses to additional classes of afferents and the effects of stimulus patterning and unimodal/crossmodal attentional manipulations. 
Finally, it presents unique opportunities to investigate the basic physiology of the BOLD effect and to compare the operating characteristics of fMRI and EEG as human functional neuroimaging modalities in an unusually specific and well-characterized neurophysiological setting." }, { "pmid": "3676355", "title": "A hierarchical neural-network model for control and learning of voluntary movement.", "abstract": "In order to control voluntary movements, the central nervous system (CNS) must solve the following three computational problems at different levels: the determination of a desired trajectory in the visual coordinates, the transformation of its coordinates to the body coordinates and the generation of motor command. Based on physiological knowledge and previous models, we propose a hierarchical neural network model which accounts for the generation of motor command. In our model the association cortex provides the motor cortex with the desired trajectory in the body coordinates, where the motor command is then calculated by means of long-loop sensory feedback. Within the spinocerebellum--magnocellular red nucleus system, an internal neural model of the dynamics of the musculoskeletal system is acquired with practice, because of the heterosynaptic plasticity, while monitoring the motor command and the results of movement. Internal feedback control with this dynamical model updates the motor command by predicting a possible error of movement. Within the cerebrocerebellum--parvocellular red nucleus system, an internal neural model of the inverse-dynamics of the musculo-skeletal system is acquired while monitoring the desired trajectory and the motor command. The inverse-dynamics model substitutes for other brain regions in the complex computation of the motor command. The dynamics and the inverse-dynamics models are realized by a parallel distributed neural network, which comprises many sub-systems computing various nonlinear transformations of input signals and a neuron with heterosynaptic plasticity (that is, changes of synaptic weights are assumed proportional to a product of two kinds of synaptic inputs). Control and learning performance of the model was investigated by computer simulation, in which a robotic manipulator was used as a controlled system, with the following results: (1) Both the dynamics and the inverse-dynamics models were acquired during control of movements. (2) As motor learning proceeded, the inverse-dynamics model gradually took the place of external feedback as the main controller. Concomitantly, overall control performance became much better. (3) Once the neural network model learned to control some movement, it could control quite different and faster movements. (4) The neural network model worked well even when only very limited information about the fundamental dynamical structure of the controlled system was available.(ABSTRACT TRUNCATED AT 400 WORDS)" }, { "pmid": "8573654", "title": "Dynamics control of semantic processes in a hierarchical associative memory.", "abstract": "A neural mechanism for control of dynamics and function of associative processes in a hierarchical memory system is demonstrated. For the representation and processing of abstract knowledge, the semantic declarative memory system of the human brain is considered. The dynamics control mechanism is based on the influence of neuronal adaptation on the complexity of neural network dynamics. 
Different dynamical modes correspond to different levels of the ultrametric structure of the hierarchical memory being invoked during an associative process. The mechanism is deterministic but may also underlie free associative thought processes. The formulation of an abstract neural network model of hierarchical associative memory utilizes a recent approach to incorporate neuronal adaptation. It includes a generalized neuronal activation function recently derived by a Hodgkin-Huxley-type model. It is shown that the extent to which a hierarchically organized memory structure is searched is controlled by the neuronal adaptability, i.e. the strength of coupling between neuronal activity and excitability. In the brain, the concentration of various neuromodulators in turn can regulate the adaptability. An autonomously controlled sequence of bifurcations, from an initial exploratory to a final retrieval phase, of an associative process is shown to result from an activity-dependent release of neuromodulators. The dynamics control mechanism may be important in the context of various disorders of the brain and may also extend the range of applications of artificial neural networks." }, { "pmid": "19190637", "title": "Complex brain networks: graph theoretical analysis of structural and functional systems.", "abstract": "Recent developments in the quantitative analysis of complex networks, based largely on graph theory, have been rapidly translated to studies of brain network organization. The brain's structural and functional systems have features of complex networks--such as small-world topology, highly connected hubs and modularity--both at the whole-brain scale of human neuroimaging and at a cellular scale in non-human animals. In this article, we review studies investigating complex brain networks in diverse experimental modalities (including structural and functional MRI, diffusion tensor imaging, magnetoencephalography and electroencephalography in humans) and provide an accessible introduction to the basic principles of graph theory. We also highlight some of the technical challenges and key questions to be addressed by future developments in this rapidly moving field." }, { "pmid": "20307583", "title": "A computational theory of episodic memory formation in the hippocampus.", "abstract": "A quantitative computational theory of the operation of the hippocampus as an episodic memory system is described. The CA3 system operates as a single attractor or autoassociation network to enable rapid, one-trial associations between any spatial location (place in rodents or spatial view in primates) and an object or reward and to provide for completion of the whole memory during recall from any part. The theory is extended to associations between time and object or reward to implement temporal order memory, also important in episodic memory. The dentate gyrus performs pattern separation by competitive learning to produce sparse representations, producing for example neurons with place-like fields from entorhinal cortex grid cells. The dentate granule cells produce by the very small number of mossy fibre connections to CA3 a randomizing pattern separation effect important during learning but not recall that separates out the patterns represented by CA3 firing to be very different from each other, which is optimal for an unstructured episodic memory system in which each memory must be kept distinct from other memories. 
The direct perforant path input to CA3 is quantitatively appropriate to provide the cue for recall in CA3, but not for learning. The CA1 recodes information from CA3 to set up associatively learned backprojections to neocortex to allow subsequent retrieval of information to neocortex, providing a quantitative account of the large number of hippocampo-neocortical and neocortical-neocortical backprojections. Tests of the theory including hippocampal subregion analyses and hippocampal NMDA receptor knockouts are described and support the theory." }, { "pmid": "24427215", "title": "Transitory memory retrieval in a biologically plausible neural network model.", "abstract": "A number of memory models have been proposed. These all have the basic structure that excitatory neurons are reciprocally connected by recurrent connections together with the connections with inhibitory neurons, which yields associative memory (i.e., pattern completion) and successive retrieval of memory. In most of the models, a simple mathematical model for a neuron in the form of a discrete map is adopted. It has not, however, been clarified whether behaviors like associative memory and successive retrieval of memory appear when a biologically plausible neuron model is used. In this paper, we propose a network model for associative memory and successive retrieval of memory based on Pinsky-Rinzel neurons. The state of pattern completion in associative memory can be observed with an appropriate balance of excitatory and inhibitory connection strengths. Increasing of the connection strength of inhibitory interneurons changes the state of memory retrieval from associative memory to successive retrieval of memory. We investigate this transition." }, { "pmid": "25150041", "title": "Functional connectivity among spike trains in neural assemblies during rat working memory task.", "abstract": "Working memory refers to a brain system that provides temporary storage to manipulate information for complex cognitive tasks. As the brain is a more complex, dynamic and interwoven network of connections and interactions, the questions raised here: how to investigate the mechanism of working memory from the view of functional connectivity in brain network? How to present most characteristic features of functional connectivity in a low-dimensional network? To address these questions, we recorded the spike trains in prefrontal cortex with multi-electrodes when rats performed a working memory task in Y-maze. The functional connectivity matrix among spike trains was calculated via maximum likelihood estimation (MLE). The average connectivity value Cc, mean of the matrix, was calculated and used to describe connectivity strength quantitatively. The spike network was constructed by the functional connectivity matrix. The information transfer efficiency Eglob was calculated and used to present the features of the network. In order to establish a low-dimensional spike network, the active neurons with higher firing rates than average rate were selected based on sparse coding. The results show that the connectivity Cc and the network transfer efficiency Eglob vaired with time during the task. The maximum values of Cc and Eglob were prior to the working memory behavior reference point. Comparing with the results in the original network, the feature network could present more characteristic features of functional connectivity." 
}, { "pmid": "19159151", "title": "A context maintenance and retrieval model of organizational processes in free recall.", "abstract": "The authors present the context maintenance and retrieval (CMR) model of memory search, a generalized version of the temporal context model of M. W. Howard and M. J. Kahana (2002a), which proposes that memory search is driven by an internally maintained context representation composed of stimulus-related and source-related features. In the CMR model, organizational effects (the tendency for related items to cluster during the recall sequence) arise as a consequence of associations between active context elements and features of the studied material. Semantic clustering is due to longstanding context-to-item associations, whereas temporal clustering and source clustering are both due to associations formed during the study episode. A behavioral investigation of the three forms of organization provides data to constrain the CMR model, revealing interactions between the organizational factors. Finally, the authors discuss the implications of CMR for their understanding of a broad class of episodic memory phenomena and suggest ways in which this theory may guide exploration of the neural correlates of memory search." }, { "pmid": "20970304", "title": "Tree-like hierarchical associative memory structures.", "abstract": "In this letter we explore an alternative structural representation for Steinbuch-type binary associative memories. These networks offer very generous storage capacities (both asymptotic and finite) at the expense of sparse coding. However, the original retrieval prescription performs a complete search on a fully-connected network, whereas only a small fraction of units will eventually contain desired results due to the sparse coding requirement. Instead of modelling the network as a single layer of neurons we suggest a hierarchical organization where the information content of each memory is a successive approximation of one another. With such a structure it is possible to enhance retrieval performance using a progressively deepening procedure. To backup our intuition we provide collected experimental evidence alongside comments on eventual biological plausibility." }, { "pmid": "17320359", "title": "Autoassociative memory retrieval and spontaneous activity bumps in small-world networks of integrate-and-fire neurons.", "abstract": "The metric structure of synaptic connections is obviously an important factor in shaping the properties of neural networks, in particular the capacity to retrieve memories, with which are endowed autoassociative nets operating via attractor dynamics. Qualitatively, some real networks in the brain could be characterized as 'small worlds', in the sense that the structure of their connections is intermediate between the extremes of an orderly geometric arrangement and of a geometry-independent random mesh. Small worlds can be defined more precisely in terms of their mean path length and clustering coefficient; but is such a precise description useful for a better understanding of how the type of connectivity affects memory retrieval? We have simulated an autoassociative memory network of integrate-and-fire units, positioned on a ring, with the network connectivity varied parametrically between ordered and random. 
We find that the network retrieves previously stored memory patterns when the connectivity is close to random, and displays the characteristic behavior of ordered nets (localized 'bumps' of activity) when the connectivity is close to ordered. Recent analytical work shows that these two behaviors can coexist in a network of simple threshold-linear units, leading to localized retrieval states. We find that they tend to be mutually exclusive behaviors, however, with our integrate-and-fire units. Moreover, the transition between the two occurs for values of the connectivity parameter which are not simply related to the notion of small worlds." }, { "pmid": "24662576", "title": "Modular structure of functional networks in olfactory memory.", "abstract": "Graph theory enables the study of systems by describing those systems as a set of nodes and edges. Graph theory has been widely applied to characterize the overall structure of data sets in the social, technological, and biological sciences, including neuroscience. Modular structure decomposition enables the definition of sub-networks whose components are gathered in the same module and work together closely, while working weakly with components from other modules. This processing is of interest for studying memory, a cognitive process that is widely distributed. We propose a new method to identify modular structure in task-related functional magnetic resonance imaging (fMRI) networks. The modular structure was obtained directly from correlation coefficients and thus retained information about both signs and weights. The method was applied to functional data acquired during a yes-no odor recognition memory task performed by young and elderly adults. Four response categories were explored: correct (Hit) and incorrect (False alarm, FA) recognition and correct and incorrect rejection. We extracted time series data for 36 areas as a function of response categories and age groups and calculated condition-based weighted correlation matrices. Overall, condition-based modular partitions were more homogeneous in young than elderly subjects. Using partition similarity-based statistics and a posteriori statistical analyses, we demonstrated that several areas, including the hippocampus, caudate nucleus, and anterior cingulate gyrus, belonged to the same module more frequently during Hit than during all other conditions. Modularity values were negatively correlated with memory scores in the Hit condition and positively correlated with bias scores (liberal/conservative attitude) in the Hit and FA conditions. We further demonstrated that the proportion of positive and negative links between areas of different modules (i.e., the proportion of correlated and anti-correlated areas) accounted for most of the observed differences in signed modularity. Taken together, our results provided some evidence that the neural networks involved in odor recognition memory are organized into modules and that these modular partitions are linked to behavioral performance and individual strategies." }, { "pmid": "22132040", "title": "Functional model of biological neural networks.", "abstract": "A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. 
THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past informations generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximate) of biological neural networks." }, { "pmid": "25192634", "title": "Remapping of memory encoding and retrieval networks: insights from neuroimaging in primates.", "abstract": "Advancements in neuroimaging techniques have allowed for the investigation of the neural correlates of memory functions in the whole human brain. Thus, the involvement of various cortical regions, including the medial temporal lobe (MTL) and posterior parietal cortex (PPC), has been repeatedly reported in the human memory processes of encoding and retrieval. However, the functional roles of these sites could be more fully characterized utilizing nonhuman primate models, which afford the potential for well-controlled, finer-scale experimental procedures that are inapplicable to humans, including electrophysiology, histology, genetics, and lesion approaches. Yet, the presence and localization of the functional counterparts of these human memory-related sites in the macaque monkey MTL or PPC were previously unknown. Therefore, to bridge the inter-species gap, experiments were required in monkeys using functional magnetic resonance imaging (fMRI), the same methodology adopted in human studies. Here, we briefly review the history of experimentation on memory systems using a nonhuman primate model and our recent fMRI studies examining memory processing in monkeys performing recognition memory tasks. We will discuss the memory systems common to monkeys and humans and future directions of finer cell-level characterization of memory-related processes using electrophysiological recording and genetic manipulation approaches." }, { "pmid": "9623998", "title": "Collective dynamics of 'small-world' networks.", "abstract": "Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. 
We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation. The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices." } ]
Scientific Reports
27905523
PMC5131304
10.1038/srep38185
Test of quantum thermalization in the two-dimensional transverse-field Ising model
We study the quantum relaxation of the two-dimensional transverse-field Ising model after global quenches with a real-time variational Monte Carlo method and address the question whether this non-integrable, two-dimensional system thermalizes or not. We consider both interaction quenches in the paramagnetic phase and field quenches in the ferromagnetic phase and compare the time-averaged probability distributions of non-conserved quantities like magnetization and correlation functions to the thermal distributions according to the canonical Gibbs ensemble obtained with quantum Monte Carlo simulations at temperatures defined by the excess energy in the system. We find that the occurrence of thermalization crucially depends on the quench parameters: While after the interaction quenches in the paramagnetic phase thermalization can be observed, our results for the field quenches in the ferromagnetic phase show clear deviations from the thermal system. These deviations increase with the quench strength and become especially clear comparing the shape of the thermal and the time-averaged distributions, the latter ones indicating that the system does not completely lose the memory of its initial state even for strong quenches. We discuss our results with respect to a recently formulated theorem on generalized thermalization in quantum systems.
Related work

An exact theorem on generalized thermalization in D-dimensional quantum systems in the thermodynamic limit has recently been formulated55. The theorem states that generalized thermalization can be observed if the state of the system is algebraically sizably clustering; it also holds for exponentially sizably clustering states. In that case the stationary state of the system can be described by a GGE that takes into account all local and quasilocal charges of the system. For non-integrable systems, for which the total energy is the only conserved quantity, generalized thermalization reduces to thermalization with a stationary state according to the CGE.

We now discuss the exact theorem with respect to the 2D-TFIM, for which one has to distinguish between the ferromagnetic and the paramagnetic phase. In the paramagnetic phase the 2D-TFIM is gapped and there is no symmetry breaking, both for finite system sizes and in the thermodynamic limit. Since the ground state of gapped quantum systems is sizably exponentially clustering8384, the exact theorem can be applied to the 2D-TFIM in the thermodynamic limit after the interaction quenches in the paramagnetic phase, and it predicts thermalization. In our numerical studies we have indeed observed very good agreement between the time-averaged observables and their thermal counterparts, as well as between the distributions, for small quenches, i.e. large ratios h/J, also for the finite system sizes that we can simulate. For larger quenches closer to the phase transition we have found deviations between the time averages and the thermal values, but the finite-size scaling shows that they decrease with the system size. Our results for the interaction quenches in the paramagnetic phase are thus in agreement with the exact theorem. In the ferromagnetic phase, on the other hand, the Hamiltonian of the system is not gapped in the thermodynamic limit. The spin-flip symmetry is spontaneously broken and long-range order exists, so that all spins of the system are correlated with each other and the correlations do not cluster. An analytic expression for the shape of the decay of the correlations has not been found yet, so it cannot be decided whether the exact theorem can be applied in the ferromagnetic phase or not.
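To make the comparison described above concrete, the following is a minimal sketch of the same thermalization test on a tiny cluster. It uses exact diagonalization of a 2x3 open-boundary transverse-field Ising lattice rather than the real-time variational Monte Carlo and quantum Monte Carlo machinery of the paper, and the lattice size, the quench values h0 -> h1 and the choice of the squared magnetization as observable are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the method of the paper): exact
# diagonalization of a tiny 2x3 transverse-field Ising cluster with open
# boundaries.  Steps: (i) quench the field h0 -> h1 starting from the ground
# state of H(h0); (ii) evaluate the infinite-time average of an observable in
# the diagonal ensemble (assumes a non-degenerate post-quench spectrum);
# (iii) compare with the canonical Gibbs average at the effective temperature
# fixed by the quench energy <psi0|H(h1)|psi0>.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import brentq

def tfim_hamiltonian(Lx, Ly, J, h):
    """H = -J sum_<ij> sz_i sz_j - h sum_i sx_i on an Lx x Ly open lattice."""
    n = Lx * Ly
    dim = 2 ** n
    states = np.arange(dim)
    bits = (states[:, None] >> np.arange(n)) & 1     # spin configurations
    szs = 1.0 - 2.0 * bits                           # sigma^z eigenvalues
    H = np.zeros((dim, dim))
    diag = np.zeros(dim)                             # Ising part (diagonal)
    for x in range(Lx):
        for y in range(Ly):
            i = x * Ly + y
            if x + 1 < Lx:
                diag -= J * szs[:, i] * szs[:, (x + 1) * Ly + y]
            if y + 1 < Ly:
                diag -= J * szs[:, i] * szs[:, x * Ly + (y + 1)]
    H[np.diag_indices(dim)] = diag
    for i in range(n):                               # transverse field (spin flips)
        H[states ^ (1 << i), states] -= h
    return H, szs

def gibbs_energy(beta, E):
    """Canonical mean energy at inverse temperature beta (shifted for stability)."""
    w = np.exp(-beta * (E - E.min()))
    return np.sum(w * E) / np.sum(w)

if __name__ == "__main__":
    Lx, Ly, J = 2, 3, 1.0
    h0, h1 = 4.0, 2.0                                # quench in the paramagnetic phase
    H0, _ = tfim_hamiltonian(Lx, Ly, J, h0)
    H1, szs = tfim_hamiltonian(Lx, Ly, J, h1)
    psi0 = eigh(H0)[1][:, 0]                         # initial (pre-quench) ground state
    E1, V1 = eigh(H1)                                # post-quench eigenbasis
    c2 = (V1.T @ psi0) ** 2                          # overlaps |<n|psi0>|^2

    # observable: squared magnetization per site, diagonal in the z basis
    mz2 = szs.mean(axis=1) ** 2
    O_nn = np.einsum('an,a,an->n', V1, mz2, V1)      # <n|m_z^2|n>
    O_time_avg = np.sum(c2 * O_nn)                   # diagonal-ensemble average

    # effective temperature from the quench energy (positive-T bracket assumed)
    E_quench = psi0 @ H1 @ psi0
    beta = brentq(lambda b: gibbs_energy(b, E1) - E_quench, 1e-4, 50.0)
    w = np.exp(-beta * (E1 - E1.min()))
    O_thermal = np.sum(w * O_nn) / np.sum(w)

    print(f"beta_eff = {beta:.3f}")
    print(f"<m_z^2>  time-averaged: {O_time_avg:.4f}   canonical: {O_thermal:.4f}")
```

For this small quench deep in the paramagnetic phase the two printed values should lie close to each other, mirroring the agreement reported above; the toy cluster cannot, of course, reproduce the finite-size scaling or the distribution-level comparison of the full study.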
[ "17155348", "17358832", "18421349", "19792288", "17677755", "17501552", "21702628", "18518309", "18232957", "19392341", "19392319", "20868062", "11019309", "21405280", "23829756", "23952371", "26551800", "19113246", "18352169", "21231563", "18517843", "19792519", "23581350", "24266486", "21405308", "22463502", "27088565", "21561173", "23215197", "25166634", "25260002", "25260003", "26550747", "27078301", "9905246", "22540449", "27078289", "10041039", "22355756", "15524771" ]
[ { "pmid": "17155348", "title": "Effect of suddenly turning on interactions in the Luttinger model.", "abstract": "The evolution of correlations in the exactly solvable Luttinger model (a model of interacting fermions in one dimension) after a suddenly switched-on interaction is analytically studied. When the model is defined on a finite-size ring, zero-temperature correlations are periodic in time. However, in the thermodynamic limit, the system relaxes algebraically towards a stationary state which is well described, at least for some simple correlation functions, by the generalized Gibbs ensemble recently introduced by Rigol et al. (cond-mat/0604476). The critical exponent that characterizes the decay of the one-particle correlation function is different from the known equilibrium exponents. Experiments for which these results can be relevant are also discussed." }, { "pmid": "17358832", "title": "Relaxation in a completely integrable many-body quantum system: an ab initio study of the dynamics of the highly excited states of 1D lattice hard-core bosons.", "abstract": "In this Letter we pose the question of whether a many-body quantum system with a full set of conserved quantities can relax to an equilibrium state, and, if it can, what the properties of such a state are. We confirm the relaxation hypothesis through an ab initio numerical investigation of the dynamics of hard-core bosons on a one-dimensional lattice. Further, a natural extension of the Gibbs ensemble to integrable systems results in a theory that is able to predict the mean values of physical observables after relaxation. Finally, we show that our generalized equilibrium carries more memory of the initial conditions than the usual thermodynamic one. This effect may have many experimental consequences, some of which have already been observed in the recent experiment on the nonequilibrium dynamics of one-dimensional hard-core bosons in a harmonic potential [T. Kinoshita et al., Nature (London) 440, 900 (2006)10.1038/nature04693]." }, { "pmid": "18421349", "title": "Thermalization and its mechanism for generic isolated quantum systems.", "abstract": "An understanding of the temporal evolution of isolated many-body quantum systems has long been elusive. Recently, meaningful experimental studies of the problem have become possible, stimulating theoretical interest. In generic isolated systems, non-equilibrium dynamics is expected to result in thermalization: a relaxation to states in which the values of macroscopic quantities are stationary, universal with respect to widely differing initial conditions, and predictable using statistical mechanics. However, it is not obvious what feature of many-body quantum mechanics makes quantum thermalization possible in a sense analogous to that in which dynamical chaos makes classical thermalization possible. For example, dynamical chaos itself cannot occur in an isolated quantum system, in which the time evolution is linear and the spectrum is discrete. Some recent studies even suggest that statistical mechanics may give incorrect predictions for the outcomes of relaxation in such systems. Here we demonstrate that a generic isolated quantum many-body system does relax to a state well described by the standard statistical-mechanical prescription. Moreover, we show that time evolution itself plays a merely auxiliary role in relaxation, and that thermalization instead happens at the level of individual eigenstates, as first proposed by Deutsch and Srednicki. 
A striking consequence of this eigenstate-thermalization scenario, confirmed for our system, is that knowledge of a single many-body eigenstate is sufficient to compute thermal averages-any eigenstate in the microcanonical energy window will do, because they all give the same result." }, { "pmid": "19792288", "title": "Breakdown of thermalization in finite one-dimensional systems.", "abstract": "We use quantum quenches to study the dynamics and thermalization of hard core bosons in finite one-dimensional lattices. We perform exact diagonalizations and find that, far away from integrability, few-body observables thermalize. We then study the breakdown of thermalization as one approaches an integrable point. This is found to be a smooth process in which the predictions of standard statistical mechanics continuously worsen as the system moves toward integrability. We establish a direct connection between the presence or absence of thermalization and the validity or failure of the eigenstate thermalization hypothesis, respectively." }, { "pmid": "17677755", "title": "Strongly correlated fermions after a quantum quench.", "abstract": "Using the adaptive time-dependent density-matrix renormalization group method, we study the time evolution of strongly correlated spinless fermions on a one-dimensional lattice after a sudden change of the interaction strength. For certain parameter values, two different initial states (e.g., metallic and insulating) lead to observables which become indistinguishable after relaxation. We find that the resulting quasistationary state is nonthermal. This result holds for both integrable and nonintegrable variants of the system." }, { "pmid": "17501552", "title": "Quench dynamics and nonequilibrium phase diagram of the bose-hubbard model.", "abstract": "We investigate the time evolution of correlations in the Bose-Hubbard model following a quench from the superfluid to the Mott insulator. For large values of the final interaction strength the system approaches a distinctly nonequilibrium steady state that bears strong memory of the initial conditions. In contrast, when the final interaction strength is comparable to the hopping, the correlations are rather well approximated by those at thermal equilibrium. The existence of two distinct nonequilibrium regimes is surprising given the nonintegrability of the Bose-Hubbard model. We relate this phenomenon to the role of quasiparticle interactions in the Mott insulator." }, { "pmid": "21702628", "title": "Quantum quench in the transverse-field Ising chain.", "abstract": "We consider the time evolution of observables in the transverse-field Ising chain after a sudden quench of the magnetic field. We provide exact analytical results for the asymptotic time and distance dependence of one- and two-point correlation functions of the order parameter. We employ two complementary approaches based on asymptotic evaluations of determinants and form-factor sums. We prove that the stationary value of the two-point correlation function is not thermal, but can be described by a generalized Gibbs ensemble (GGE). The approach to the stationary state can also be understood in terms of a GGE. We present a conjecture on how these results generalize to particular quenches in other integrable models." 
}, { "pmid": "18518309", "title": "Interaction quench in the Hubbard model.", "abstract": "Motivated by recent experiments in ultracold atomic gases that explore the nonequilibrium dynamics of interacting quantum many-body systems, we investigate the opposite limit of Landau's Fermi-liquid paradigm: We study a Hubbard model with a sudden interaction quench, that is, the interaction is switched on at time t=0. Using the flow equation method, we are able to study the real time dynamics for weak interaction U in a systematic expansion and find three clearly separated time regimes: (i) An initial buildup of correlations where the quasiparticles are formed. (ii) An intermediate quasi-steady regime resembling a zero temperature Fermi liquid with a nonequilibrium quasiparticle distribution function. (iii) The long-time limit described by a quantum Boltzmann equation leading to thermalization of the momentum distribution function with a temperature T proportional, variantU." }, { "pmid": "18232957", "title": "Exact relaxation in a class of nonequilibrium quantum lattice systems.", "abstract": "A reasonable physical intuition in the study of interacting quantum systems says that, independent of the initial state, the system will tend to equilibrate. In this work we introduce an experimentally accessible setting where relaxation to a steady state is exact, namely, for the Bose-Hubbard model quenched from a Mott quantum phase to the free strong superfluid regime. We rigorously prove that the evolving state locally relaxes to a steady state with maximum entropy constrained by second moments--thus maximizing the entanglement. Remarkably, for this to be true, no time average is necessary. Our argument includes a central limit theorem and exploits the finite speed of information transfer. We also show that for all periodic initial configurations (charge density waves) the system relaxes locally, and identify experimentally accessible signatures in optical lattices as well as implications for the foundations of statistical mechanics." }, { "pmid": "19392341", "title": "Relaxation of antiferromagnetic order in spin-1/2 chains following a quantum quench.", "abstract": "We study the unitary time evolution of antiferromagnetic order in anisotropic Heisenberg chains that are initially prepared in a pure quantum state far from equilibrium. Our analysis indicates that the antiferromagnetic order imprinted in the initial state vanishes exponentially. Depending on the anisotropy parameter, oscillatory or nonoscillatory relaxation dynamics is observed. Furthermore, the corresponding relaxation time exhibits a minimum at the critical point, in contrast to the usual notion of critical slowing down, from which a maximum is expected." }, { "pmid": "19392319", "title": "Effective thermal dynamics following a quantum quench in a spin chain.", "abstract": "We study the nonequilibrium dynamics of the quantum Ising model following an abrupt quench of the transverse field. We focus on the on-site autocorrelation function of the order parameter, and extract the phase-coherence time tau(Q)(phi) from its asymptotic behavior. We show that the initial state determines tau(Q)(phi) only through an effective temperature set by its energy and the final Hamiltonian. Moreover, we observe that the dependence of tau(Q)(phi) on the effective temperature fairly agrees with that obtained in thermal equilibrium as a function of the equilibrium temperature." 
}, { "pmid": "20868062", "title": "Time-dependent mean field theory for quench dynamics in correlated electron systems.", "abstract": "A simple and very flexible variational approach to the out-of-equilibrium quantum dynamics in strongly correlated electron systems is introduced through a time-dependent Gutzwiller wave function. As an application, we study the simple case of a sudden change of the interaction in the fermionic Hubbard model and find at the mean-field level an extremely rich behavior. In particular, a dynamical transition between small and large quantum quench regimes is found to occur at half-filling, in accordance with the analysis of Eckstein, Phys. Rev. Lett. 103, 056403 (2009)10.1103/PhysRevLett.103.056403, obtained by dynamical mean-field theory, that turns into a crossover at any finite doping." }, { "pmid": "11019309", "title": "Long-range correlations in the nonequilibrium quantum relaxation of a spin chain.", "abstract": "We consider the nonstationary quantum relaxation of the Ising spin chain in a transverse field of strength h. Starting from a homogeneously magnetized initial state the system approaches a stationary state by a process possessing quasi-long-range correlations in time and space, independent of the value of h. In particular, the system exhibits aging (or lack of time-translational invariance on intermediate time scales) although no indications of coarsening are present." }, { "pmid": "21405280", "title": "Quantum relaxation after a quench in systems with boundaries.", "abstract": "We study the time dependence of the magnetization profile, m(l)(t), of a large finite open quantum Ising chain after a quench. We observe a cyclic variation, in which starting with an exponentially decreasing period the local magnetization arrives to a quasistationary regime, which is followed by an exponentially fast reconstruction period. The nonthermal behavior observed at near-surface sites turns over to thermal behavior for bulk sites. In addition to the standard time and length scales a nonstandard time scale is identified in the reconstruction period." }, { "pmid": "23829756", "title": "Time evolution of local observables after quenching to an integrable model.", "abstract": "We consider quantum quenches in integrable models. We argue that the behavior of local observables at late times after the quench is given by their expectation values with respect to a single representative Hamiltonian eigenstate. This can be viewed as a generalization of the eigenstate thermalization hypothesis to quantum integrable models. We present a method for constructing this representative state by means of a generalized thermodynamic Bethe ansatz (GTBA). Going further, we introduce a framework for calculating the time dependence of local observables as they evolve towards their stationary values. As an explicit example we consider quantum quenches in the transverse-field Ising chain and show that previously derived results are recovered efficiently within our framework." }, { "pmid": "23952371", "title": "Fluctuation-dissipation theorem in an isolated system of quantum dipolar bosons after a quench.", "abstract": "We examine the validity of fluctuation-dissipation relations in isolated quantum systems taken out of equilibrium by a sudden quench. We focus on the dynamics of trapped hard-core bosons in one-dimensional lattices with dipolar interactions whose strength is changed during the quench. 
We find indications that fluctuation-dissipation relations hold if the system is nonintegrable after the quench, as well as if it is integrable after the quench if the initial state is an equilibrium state of a nonintegrable Hamiltonian. On the other hand, we find indications that they fail if the system is integrable both before and after quenching." }, { "pmid": "26551800", "title": "Scaling and Universality at Dynamical Quantum Phase Transitions.", "abstract": "Dynamical quantum phase transitions (DQPTs) at critical times appear as nonanalyticities during nonequilibrium quantum real-time evolution. Although there is evidence for a close relationship between DQPTs and equilibrium phase transitions, a major challenge is still to connect to fundamental concepts such as scaling and universality. In this work, renormalization group transformations in complex parameter space are formulated for quantum quenches in Ising models showing that the DQPTs are critical points associated with unstable fixed points of equilibrium Ising models. Therefore, these DQPTs obey scaling and universality. On the basis of numerical simulations, signatures of these DQPTs in the dynamical buildup of spin correlations are found with an associated power-law scaling determined solely by the fixed point's universality class. An outlook is given on how to explore this dynamical scaling experimentally in systems of trapped ions." }, { "pmid": "19113246", "title": "Foundation of statistical mechanics under experimentally realistic conditions.", "abstract": "We demonstrate the equilibration of isolated macroscopic quantum systems, prepared in nonequilibrium mixed states with a significant population of many energy levels, and observed by instruments with a reasonably bound working range compared to the resolution limit. Both properties are satisfied under many, if not all, experimentally realistic conditions. At equilibrium, the predictions and limitations of statistical mechanics are recovered." }, { "pmid": "18352169", "title": "Dephasing and the steady state in quantum many-particle systems.", "abstract": "We discuss relaxation in bosonic and fermionic many-particle systems. For integrable systems, time evolution can cause a dephasing effect, leading for finite subsystems to steady states. We explicitly derive those steady subsystem states and devise sufficient prerequisites for the dephasing to occur. We also find simple scenarios, in which dephasing is ineffective and discuss the dependence on dimensionality and criticality. It follows further that, after a quench of system parameters, entanglement entropy will become extensive. This provides a way of creating strong entanglement in a controlled fashion." }, { "pmid": "21231563", "title": "Effect of rare fluctuations on the thermalization of isolated quantum systems.", "abstract": "We consider the question of thermalization for isolated quantum systems after a sudden parameter change, a so-called quantum quench. In particular, we investigate the prerequisites for thermalization, focusing on the statistical properties of the time-averaged density matrix and of the expectation values of observables in the final eigenstates. We find that eigenstates, which are rare compared to the typical ones sampled by the microcanonical distribution, are responsible for the absence of thermalization of some infinite integrable models and play an important role for some nonintegrable systems of finite size, such as the Bose-Hubbard model. 
We stress the importance of finite size effects for the thermalization of isolated quantum systems and discuss two scenarios for thermalization." }, { "pmid": "18517843", "title": "Nonthermal steady states after an interaction quench in the Falicov-Kimball model.", "abstract": "We present the exact solution of the Falicov-Kimball model after a sudden change of its interaction parameter using nonequilibrium dynamical mean-field theory. For different interaction quenches between the homogeneous metallic and insulating phases the system relaxes to a nonthermal steady state on time scales on the order of variant Planck's over 2pi/bandwidth, showing collapse and revival with an approximate period of h/interaction if the interaction is large. We discuss the reasons for this behavior and provide a statistical description of the final steady state by means of generalized Gibbs ensembles." }, { "pmid": "19792519", "title": "Thermalization after an interaction quench in the Hubbard model.", "abstract": "We use nonequilibrium dynamical mean-field theory to study the time evolution of the fermionic Hubbard model after an interaction quench. Both in the weak-coupling and in the strong-coupling regime the system is trapped in quasistationary states on intermediate time scales. These two regimes are separated by a sharp crossover at U(c)dyn=0.8 in units of the bandwidth, where fast thermalization occurs. Our results indicate a dynamical phase transition which should be observable in experiments on trapped fermionic atoms." }, { "pmid": "23581350", "title": "Nonthermal antiferromagnetic order and nonequilibrium criticality in the Hubbard model.", "abstract": "We study dynamical phase transitions from antiferromagnetic to paramagnetic states driven by an interaction quench in the fermionic Hubbard model using the nonequilibrium dynamical mean-field theory. We identify two dynamical transition points where the relaxation behavior qualitatively changes: one corresponds to the thermal phase transition at which the order parameter decays critically slowly in a power law ∝t(-1/2), and the other is connected to the existence of nonthermal antiferromagnetic order in systems with effective temperature above the thermal critical temperature. The frequency of the amplitude mode extrapolates to zero as one approaches the nonthermal (quasi)critical point, and thermalization is significantly delayed by the trapping in the nonthermal state. A slow relaxation of the nonthermal order is followed by a faster thermalization process." }, { "pmid": "24266486", "title": "Prethermalization in a nonintegrable quantum spin chain after a quench.", "abstract": "We study the dynamics of a quantum Ising chain after the sudden introduction of a nonintegrable long-range interaction. Via an exact mapping onto a fully connected lattice of hard-core bosons, we show that a prethermal state emerges and we investigate its features by focusing on a class of physically relevant observables. In order to gain insight into the eventual thermalization, we outline a diagrammatic approach which complements the study of the previous quasistationary state and provides the basis for a self-consistent solution of the kinetic equation. This analysis suggests that both the temporal decay towards the prethermal state and the crossover to the eventual thermal one may occur algebraically." 
}, { "pmid": "21405308", "title": "Absence of thermalization in nonintegrable systems.", "abstract": "We establish a link between unitary relaxation dynamics after a quench in closed many-body systems and the entanglement in the energy eigenbasis. We find that even if reduced states equilibrate, they can have memory on the initial conditions even in certain models that are far from integrable. We show that in such situations the equilibrium states are still described by a maximum entropy or generalized Gibbs ensemble, regardless of whether a model is integrable or not, thereby contributing to a recent debate. In addition, we discuss individual aspects of the thermalization process, comment on the role of Anderson localization, and collect and compare different notions of integrability." }, { "pmid": "22463502", "title": "Thermalization in nature and on a quantum computer.", "abstract": "In this work, we show how Gibbs or thermal states appear dynamically in closed quantum many-body systems, building on the program of dynamical typicality. We introduce a novel perturbation theorem for physically relevant weak system-bath couplings that is applicable even in the thermodynamic limit. We identify conditions under which thermalization happens and discuss the underlying physics. Based on these results, we also present a fully general quantum algorithm for preparing Gibbs states on a quantum computer with a certified runtime and error bound. This complements quantum Metropolis algorithms, which are expected to be efficient but have no known runtime estimates and only work for local Hamiltonians." }, { "pmid": "27088565", "title": "Equilibration, thermalisation, and the emergence of statistical mechanics in closed quantum systems.", "abstract": "We review selected advances in the theoretical understanding of complex quantum many-body systems with regard to emergent notions of quantum statistical mechanics. We cover topics such as equilibration and thermalisation in pure state statistical mechanics, the eigenstate thermalisation hypothesis, the equivalence of ensembles, non-equilibration dynamics following global and local quenches as well as ramps. We also address initial state independence, absence of thermalisation, and many-body localisation. We elucidate the role played by key concepts for these phenomena, such as Lieb-Robinson bounds, entanglement growth, typicality arguments, quantum maximum entropy principles and the generalised Gibbs ensembles, and quantum (non-)integrability. We put emphasis on rigorous approaches and present the most important results in a unified language." }, { "pmid": "21561173", "title": "Generalized thermalization in an integrable lattice system.", "abstract": "After a quench, observables in an integrable system may not relax to the standard thermal values, but can relax to the ones predicted by the generalized Gibbs ensemble (GGE) [M. Rigol et al., Phys. Rev. Lett. 98, 050405 (2007)]. The GGE has been shown to accurately describe observables in various one-dimensional integrable systems, but the origin of its success is not fully understood. Here we introduce a microcanonical version of the GGE and provide a justification of the GGE based on a generalized interpretation of the eigenstate thermalization hypothesis, which was previously introduced to explain thermalization of nonintegrable systems. We study relaxation after a quench of one-dimensional hard-core bosons in an optical lattice. 
Exact numerical calculations for up to 10 particles on 50 lattice sites (≈10(10) eigenstates) validate our approach." }, { "pmid": "23215197", "title": "Constructing the generalized Gibbs ensemble after a quantum quench.", "abstract": "Using a numerical renormalization group based on exploiting an underlying exactly solvable nonrelativistic theory, we study the out-of-equilibrium dynamics of a 1D Bose gas (as described by the Lieb-Liniger model) released from a parabolic trap. Our method allows us to track the postquench dynamics of the gas all the way to infinite time. We also exhibit a general construction, applicable to all integrable models, of the thermodynamic ensemble that has been suggested to govern this dynamics, the generalized Gibbs ensemble. We compare the predictions of equilibration from this ensemble against the long time dynamics observed using our method." }, { "pmid": "25166634", "title": "Infinite-time average of local fields in an integrable quantum field theory after a quantum quench.", "abstract": "The infinite-time average of the expectation values of local fields of any interacting quantum theory after a global quench process are key quantities for matching theoretical and experimental results. For quantum integrable field theories, we show that they can be obtained by an ensemble average that employs a particular limit of the form factors of local fields and quantities extracted by the generalized Bethe ansatz." }, { "pmid": "25260002", "title": "Quenching the anisotropic heisenberg chain: exact solution and generalized Gibbs ensemble predictions.", "abstract": "We study quenches in integrable spin-1/2 chains in which we evolve the ground state of the antiferromagnetic Ising model with the anisotropic Heisenberg Hamiltonian. For this nontrivially interacting situation, an application of the first-principles-based quench-action method allows us to give an exact description of the postquench steady state in the thermodynamic limit. We show that a generalized Gibbs ensemble, implemented using all known local conserved charges, fails to reproduce the exact quench-action steady state and to correctly predict postquench equilibrium expectation values of physical observables. This is supported by numerical linked-cluster calculations within the diagonal ensemble in the thermodynamic limit." }, { "pmid": "25260003", "title": "Correlations after quantum quenches in the XXZ spin chain: failure of the generalized Gibbs ensemble.", "abstract": "We study the nonequilibrium time evolution of the spin-1/2 anisotropic Heisenberg (XXZ) spin chain, with a choice of dimer product and Néel states as initial states. We investigate numerically various short-ranged spin correlators in the long-time limit and find that they deviate significantly from predictions based on the generalized Gibbs ensemble (GGE) hypotheses. By computing the asymptotic spin correlators within the recently proposed quench-action formalism [Phys. Rev. Lett. 110, 257203 (2013)], however, we find excellent agreement with the numerical data. We, therefore, conclude that the GGE cannot give a complete description even of local observables, while the quench-action formalism correctly captures the steady state in this case." 
}, { "pmid": "26550747", "title": "Complete Generalized Gibbs Ensembles in an Interacting Theory.", "abstract": "In integrable many-particle systems, it is widely believed that the stationary state reached at late times after a quantum quench can be described by a generalized Gibbs ensemble (GGE) constructed from their extensive number of conserved charges. A crucial issue is then to identify a complete set of these charges, enabling the GGE to provide exact steady-state predictions. Here we solve this long-standing problem for the case of the spin-1/2 Heisenberg chain by explicitly constructing a GGE which uniquely fixes the macrostate describing the stationary behavior after a general quantum quench. A crucial ingredient in our method, which readily generalizes to other integrable models, are recently discovered quasilocal charges. As a test, we reproduce the exact postquench steady state of the Néel quench problem obtained previously by means of the Quench Action method." }, { "pmid": "27078301", "title": "Generalized Gibbs ensemble in a nonintegrable system with an extensive number of local symmetries.", "abstract": "We numerically study the unitary time evolution of a nonintegrable model of hard-core bosons with an extensive number of local Z(2) symmetries. We find that the expectation values of local observables in the stationary state are described better by the generalized Gibbs ensemble (GGE) than by the canonical ensemble. We also find that the eigenstate thermalization hypothesis fails for the entire spectrum but holds true within each symmetry sector, which justifies the GGE. In contrast, if the model has only one global Z(2) symmetry or a size-independent number of local Z(2) symmetries, we find that the stationary state is described by the canonical ensemble. Thus, the GGE is necessary to describe the stationary state even in a nonintegrable system if it has an extensive number of local symmetries." }, { "pmid": "22540449", "title": "Alternatives to eigenstate thermalization.", "abstract": "An isolated quantum many-body system in an initial pure state will come to thermal equilibrium if it satisfies the eigenstate thermalization hypothesis (ETH). We consider alternatives to ETH that have been proposed. We first show that von Neumann's quantum ergodic theorem relies on an assumption that is essentially equivalent to ETH. We also investigate whether, following a sudden quench, special classes of pure states can lead to thermal behavior in systems that do not obey ETH, namely, integrable systems. We find examples of this, but only for initial states that obeyed ETH before the quench." }, { "pmid": "27078289", "title": "Eigenstate thermalization in the two-dimensional transverse field Ising model.", "abstract": "We study the onset of eigenstate thermalization in the two-dimensional transverse field Ising model (2D-TFIM) in the square lattice. We consider two nonequivalent Hamiltonians: the ferromagnetic 2D-TFIM and the antiferromagnetic 2D-TFIM in the presence of a uniform longitudinal field. We use full exact diagonalization to examine the behavior of quantum chaos indicators and of the diagonal matrix elements of operators of interest in the eigenstates of the Hamiltonian. An analysis of finite size effects reveals that quantum chaos and eigenstate thermalization occur in those systems whenever the fields are nonvanishing and not too large." 
}, { "pmid": "22355756", "title": "Localization and glassy dynamics of many-body quantum systems.", "abstract": "When classical systems fail to explore their entire configurational space, intriguing macroscopic phenomena like aging and glass formation may emerge. Also closed quanto-mechanical systems may stop wandering freely around the whole Hilbert space, even if they are initially prepared into a macroscopically large combination of eigenstates. Here, we report numerical evidences that the dynamics of strongly interacting lattice bosons driven sufficiently far from equilibrium can be trapped into extremely long-lived inhomogeneous metastable states. The slowing down of incoherent density excitations above a threshold energy, much reminiscent of a dynamical arrest on the verge of a glass transition, is identified as the key feature of this phenomenon. We argue that the resulting long-lived inhomogeneities are responsible for the lack of thermalization observed in large systems. Such a rich phenomenology could be experimentally uncovered upon probing the out-of-equilibrium dynamics of conveniently prepared quantum states of trapped cold atoms which we hereby suggest." }, { "pmid": "15524771", "title": "Locality in quantum and Markov dynamics on lattices and networks.", "abstract": "We consider gapped systems governed by either quantum or Markov dynamics, with the low-lying states below the gap being approximately degenerate. For a broad class of dynamics, we prove that ground or stationary state correlation functions can be written as a piece decaying exponentially in space plus a term set by matrix elements between the low-lying states. The key to the proof is a local approximation to the negative energy, or annihilation, part of an operator in a gapped system. Applications to numerical simulation of quantum systems and to networks are discussed." } ]
BioData Mining
27980679
PMC5139023
10.1186/s13040-016-0116-2
MISSEL: a method to identify a large number of small species-specific genomic subsequences and its application to viruses classification
Background: Continuous improvements in next-generation sequencing technologies have led to ever-increasing collections of genomic sequences that are not easily characterized by biologists and whose analysis requires huge computational effort. The classification of species has emerged as one of the main applications of DNA analysis and has been addressed with several approaches, e.g., multiple-alignment-, phylogenetic-tree-, statistical-, and character-based methods. Results: We propose a supervised method based on a genetic algorithm to identify small genomic subsequences that discriminate among different species. The method identifies multiple subsequences of bounded length with the same information power in a given genomic region. The algorithm has been successfully evaluated through its integration into a rule-based classification framework and applied to three different biological data sets: Influenza, Polyoma, and Rhino virus sequences. Conclusions: We discover a large number of small subsequences that can be used to identify each virus type with high accuracy and low computational time, and that moreover help to characterize different genomic regions. Bounding their length to 20, our method found 1164 characterizing subsequences for all the Influenza virus subtypes, 194 for all the Polyoma viruses, and 11 for Rhino viruses. The abundance of small separating subsequences extracted for each genomic region may be an important support for quick and robust virus identification. Finally, useful biological information can be derived from the relative location and abundance of such subsequences along the different regions. Electronic supplementary material: The online version of this article (doi:10.1186/s13040-016-0116-2) contains supplementary material, which is available to authorized users.
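To make the rule-based use of such subsequences concrete, the following minimal Python sketch shows how characterizing subsequences, once found, can act as presence/absence rules for labeling an unknown sequence. It is an illustration only, not the MISSEL implementation: the signature strings, the class names, and the test sequence are hypothetical.

signatures = {
    # hypothetical length-20 characterizing subsequences, one list per class
    "virus_A": ["ATGGCGTTAGCCAATGCTGA"],
    "virus_B": ["TTGACCGGTATCGAACTGGC"],
}

def classify(sequence, signatures):
    """Return every class whose characterizing subsequence occurs in the input."""
    hits = [label for label, subs in signatures.items()
            if any(sub in sequence for sub in subs)]
    return hits or ["unclassified"]

print(classify("CCGATGGCGTTAGCCAATGCTGAGGT", signatures))  # -> ['virus_A']

Because many equivalent subsequences are available for each class, a practical rule base would typically require agreement among several of them, which can make the identification more robust to sequencing errors.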
Related work: The above problem has several complex aspects. The main one is that searching for many subsequences with desirable properties is much more difficult than searching for a single optimal one. In addition, the dimensions of the problem to be solved are typically very large (i.e., DNA sequences with thousands of bases). This complexity discourages a straightforward mathematical optimization model; we therefore adopt a meta-heuristic approach, which is much faster than enumeration while remaining sufficiently precise and time-effective. Meta-heuristics are nature-inspired algorithms that can be customized to solve complex, computationally hard problems and that draw on different principles, such as ant colony optimization [33], genetic algorithms [34], simulated annealing [35], tabu search [36], and particle swarm optimization [37]. Several authors have considered similar problems, although these cannot be recast in the multiple-solutions framework that we adopt here. Recent studies [38–40] focused on problems with multiple objective functions, often used as a tool to counterbalance the measurement bias affecting solutions based on a single objective function, or to mitigate the effect of noise in the data. Deb et al. [41] also approached the issue of identifying gene subsets that achieve reliable classification of available disease samples by modeling it as a multi-objective optimization problem. Furthermore, they proposed a multimodal multi-objective evolutionary algorithm that finds multiple, multimodal, non-dominated solutions [42] in a single run; these are defined as solutions that have identical objective values but differ in their phenotypes. Other works [43, 44] addressed multiple-membership classification, dealing with the fitting of complex statistical models to large data sets. Liu et al. [45] likewise proposed a gene-subset identification approach with multiple objectives but, differently from Deb et al. [41], scalarized the objective vector into a single objective solved with a parallel genetic algorithm in order to avoid an expensive computational cost. Kohavi et al. [46] addressed the problem of searching for optimal gene subsets of the same size, emphasizing the use of wrapper methods for the feature selection step: rather than trying to maximize accuracy, they identified which features were relevant and used only those features during learning. The goal of our work is again different: to extract information on interesting portions of the genomic sequences by taking into account equivalent subsequences. The rest of the paper is organized as follows. In Section “Materials and methods”, we provide a detailed description of the algorithm. In Section “Results and discussion”, we report and discuss the application of our algorithm to three experimental data sets of virus sequences, described at the beginning of that section, extracting equivalent and multiple subsequences and classifying the species of those samples. Finally, in Section “Conclusions”, we draw the conclusions of the work from both the algorithmic and the biological point of view and outline future extensions.
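As an illustration of the meta-heuristic idea sketched above, the toy genetic algorithm below evolves bounded-length subsequences and keeps every candidate that reaches the best discrimination score, mirroring the notion of multiple equivalent solutions. It is written under simplifying assumptions and is not the algorithm proposed in this paper: the training sequences are hypothetical, the subsequence length is 8 rather than 20, and the fitness is a plain presence/absence separation score.

import random

random.seed(0)
ALPHABET = "ACGT"
K = 8  # bounded subsequence length (the method in the paper bounds it to 20)

# Hypothetical labeled training sequences: class label -> list of sequences
data = {
    "A": ["ACGTACGTTTGGGCCCAT", "ACGTACGTAAGGGCCCAT"],
    "B": ["TTGGCCAATACGTTGGCA", "TTGGCCAATGGAACCTGA"],
}

def fitness(kmer, target="A"):
    """Fraction of training sequences the k-mer separates correctly:
    it should occur in every target-class sequence and in no other sequence."""
    pos = sum(kmer in s for s in data[target])
    neg = sum(kmer not in s
              for label, seqs in data.items() if label != target
              for s in seqs)
    total = sum(len(seqs) for seqs in data.values())
    return (pos + neg) / total

def mutate(kmer):
    # replace one random position with a random nucleotide
    i = random.randrange(K)
    return kmer[:i] + random.choice(ALPHABET) + kmer[i + 1:]

def crossover(a, b):
    # single-point crossover between two candidate subsequences
    i = random.randrange(1, K)
    return a[:i] + b[i:]

population = ["".join(random.choice(ALPHABET) for _ in range(K)) for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)  # elitism: keep the 10 best
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

best = max(fitness(k) for k in population)
equivalent = sorted({k for k in population if fitness(k) == best})
print(best, equivalent)  # best score and every distinct subsequence attaining it

The point of the sketch is only the search scheme and the retention of all equally scoring candidates; the actual method, described in Section “Materials and methods”, is considerably more elaborate and is applied to real virus data sets.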
[ "270744", "3447015", "18853361", "9254694", "17060194", "15961439", "25621011", "19857251", "23012628", "22214262", "23340592", "24127561", "12753540", "21852336", "23625169", "17813860", "14642662", "17008640", "3037780", "17623054", "19960211", "19828751", "20668080", "23677786", "21342784", "19458158", "21543442" ]
[ { "pmid": "270744", "title": "Phylogenetic structure of the prokaryotic domain: the primary kingdoms.", "abstract": "A phylogenetic analysis based upon ribosomal RNA sequence characterization reveals that living systems represent one of three aboriginal lines of descent: (i) the eubacteria, comprising all typical bacteria; (ii) the archaebacteria, containing methanogenic bacteria; and (iii) the urkaryotes, now represented in the cytoplasmic component of eukaryotic cells." }, { "pmid": "3447015", "title": "The neighbor-joining method: a new method for reconstructing phylogenetic trees.", "abstract": "A new method called the neighbor-joining method is proposed for reconstructing phylogenetic trees from evolutionary distance data. The principle of this method is to find pairs of operational taxonomic units (OTUs [= neighbors]) that minimize the total branch length at each stage of clustering of OTUs starting with a starlike tree. The branch lengths as well as the topology of a parsimonious tree can quickly be obtained by using this method. Using computer simulation, we studied the efficiency of this method in obtaining the correct unrooted tree in comparison with that of five other tree-making methods: the unweighted pair group method of analysis, Farris's method, Sattath and Tversky's method, Li's method, and Tateno et al.'s modified Farris method. The new, neighbor-joining method and Sattath and Tversky's method are shown to be generally better than the other methods." }, { "pmid": "18853361", "title": "Statistical assignment of DNA sequences using Bayesian phylogenetics.", "abstract": "We provide a new automated statistical method for DNA barcoding based on a Bayesian phylogenetic analysis. The method is based on automated database sequence retrieval, alignment, and phylogenetic analysis using a custom-built program for Bayesian phylogenetic analysis. We show on real data that the method outperforms Blast searches as a measure of confidence and can help eliminate 80% of all false assignment based on best Blast hit. However, the most important advance of the method is that it provides statistically meaningful measures of confidence. We apply the method to a re-analysis of previously published ancient DNA data and show that, with high statistical confidence, most of the published sequences are in fact of Neanderthal origin. However, there are several cases of chimeric sequences that are comprised of a combination of both Neanderthal and modern human DNA." }, { "pmid": "9254694", "title": "Gapped BLAST and PSI-BLAST: a new generation of protein database search programs.", "abstract": "The BLAST programs are widely used tools for searching protein and DNA databases for sequence similarities. For protein comparisons, a variety of definitional, algorithmic and statistical refinements described here permits the execution time of the BLAST programs to be decreased substantially while enhancing their sensitivity to weak similarities. A new criterion for triggering the extension of word hits, combined with a new heuristic for generating gapped alignments, yields a gapped BLAST program that runs at approximately three times the speed of the original. In addition, a method is introduced for automatically combining statistically significant alignments produced by BLAST into a position-specific score matrix, and searching the database using this matrix. 
The resulting Position-Specific Iterated BLAST (PSI-BLAST) program runs at approximately the same speed per iteration as gapped BLAST, but in many cases is much more sensitive to weak but biologically relevant sequence similarities. PSI-BLAST is used to uncover several new and interesting members of the BRCT superfamily." }, { "pmid": "17060194", "title": "DNA barcoding and taxonomy in Diptera: a tale of high intraspecific variability and low identification success.", "abstract": "DNA barcoding and DNA taxonomy have recently been proposed as solutions to the crisis of taxonomy and received significant attention from scientific journals, grant agencies, natural history museums, and mainstream media. Here, we test two key claims of molecular taxonomy using 1333 mitochondrial COI sequences for 449 species of Diptera. We investigate whether sequences can be used for species identification (\"DNA barcoding\") and find a relatively low success rate (< 70%) based on tree-based and newly proposed species identification criteria. Misidentifications are due to wide overlap between intra- and interspecific genetic variability, which causes 6.5% of all query sequences to have allospecific or a mixture of allo- and conspecific (3.6%) best-matching barcodes. Even when two COI sequences are identical, there is a 6% chance that they belong to different species. We also find that 21% of all species lack unique barcodes when consensus sequences of all conspecific sequences are used. Lastly, we test whether DNA sequences yield an unambiguous species-level taxonomy when sequence profiles are assembled based on pairwise distance thresholds. We find many sequence triplets for which two of the three pairwise distances remain below the threshold, whereas the third exceeds it; i.e., it is impossible to consistently delimit species based on pairwise distances. Furthermore, for species profiles based on a 3% threshold, only 47% of all profiles are consistent with currently accepted species limits, 20% contain more than one species, and 33% only some sequences from one species; i.e., adopting such a DNA taxonomy would require the redescription of a large proportion of the known species, thus worsening the taxonomic impediment. We conclude with an outlook on the prospects of obtaining complete barcode databases and the future use of DNA sequences in a modern integrative taxonomy." }, { "pmid": "15961439", "title": "DNA-BAR: distinguisher selection for DNA barcoding.", "abstract": "DNA-BAR is a software package for selecting DNA probes (henceforth referred to as distinguishers) that can be used in genomic-based identification of microorganisms. Given the genomic sequences of the microorganisms, DNA-BAR finds a near-minimum number of distinguishers yielding a distinct hybridization pattern for each microorganism. Selected distinguishers satisfy user specified bounds on length, melting temperature and GC content, as well as redundancy and cross-hybridization constraints." }, { "pmid": "25621011", "title": "Performance of genetic programming optimised Bowtie2 on genome comparison and analytic testing (GCAT) benchmarks.", "abstract": "BACKGROUND\nGenetic studies are increasingly based on short noisy next generation scanners. Typically complete DNA sequences are assembled by matching short NextGen sequences against reference genomes. Despite considerable algorithmic gains since the turn of the millennium, matching both single ended and paired end strings to a reference remains computationally demanding. 
Further tailoring Bioinformatics tools to each new task or scanner remains highly skilled and labour intensive. With this in mind, we recently demonstrated a genetic programming based automated technique which generated a version of the state-of-the-art alignment tool Bowtie2 which was considerably faster on short sequences produced by a scanner at the Broad Institute and released as part of The Thousand Genome Project.\n\n\nRESULTS\nBowtie2 (G P) and the original Bowtie2 release were compared on bioplanet's GCAT synthetic benchmarks. Bowtie2 (G P) enhancements were also applied to the latest Bowtie2 release (2.2.3, 29 May 2014) and retained both the GP and the manually introduced improvements.\n\n\nCONCLUSIONS\nOn both singled ended and paired-end synthetic next generation DNA sequence GCAT benchmarks Bowtie2GP runs up to 45% faster than Bowtie2. The lost in accuracy can be as little as 0.2-0.5% but up to 2.5% for longer sequences." }, { "pmid": "19857251", "title": "Classification of Myoviridae bacteriophages using protein sequence similarity.", "abstract": "BACKGROUND\nWe advocate unifying classical and genomic classification of bacteriophages by integration of proteomic data and physicochemical parameters. Our previous application of this approach to the entirely sequenced members of the Podoviridae fully supported the current phage classification of the International Committee on Taxonomy of Viruses (ICTV). It appears that horizontal gene transfer generally does not totally obliterate evolutionary relationships between phages.\n\n\nRESULTS\nCoreGenes/CoreExtractor proteome comparison techniques applied to 102 Myoviridae suggest the establishment of three subfamilies (Peduovirinae, Teequatrovirinae, the Spounavirinae) and eight new independent genera (Bcep781, BcepMu, FelixO1, HAP1, Bzx1, PB1, phiCD119, and phiKZ-like viruses). The Peduovirinae subfamily, derived from the P2-related phages, is composed of two distinct genera: the \"P2-like viruses\", and the \"HP1-like viruses\". At present, the more complex Teequatrovirinae subfamily has two genera, the \"T4-like\" and \"KVP40-like viruses\". In the genus \"T4-like viruses\" proper, four groups sharing >70% proteins are distinguished: T4-type, 44RR-type, RB43-type, and RB49-type viruses. The Spounavirinae contain the \"SPO1-\"and \"Twort-like viruses.\"\n\n\nCONCLUSION\nThe hierarchical clustering of these groupings provide biologically significant subdivisions, which are consistent with our previous analysis of the Podoviridae." }, { "pmid": "23012628", "title": "PAirwise Sequence Comparison (PASC) and its application in the classification of filoviruses.", "abstract": "PAirwise Sequence Comparison (PASC) is a tool that uses genome sequence similarity to help with virus classification. The PASC tool at NCBI uses two methods: local alignment based on BLAST and global alignment based on Needleman-Wunsch algorithm. It works for complete genomes of viruses of several families/groups, and for the family of Filoviridae, it currently includes 52 complete genomes available in GenBank. It has been shown that BLAST-based alignment approach works better for filoviruses, and therefore is recommended for establishing taxon demarcations criteria. When more genome sequences with high divergence become available, these demarcation will most likely become more precise. The tool can compare new genome sequences of filoviruses with the ones already in the database, and propose their taxonomic classification." 
}, { "pmid": "22214262", "title": "Antiretroviral activity of 5-azacytidine during treatment of a HTLV-1 positive myelodysplastic syndrome with autoimmune manifestations.", "abstract": "Myelodysplastic syndromes (MDS) are often accompanied by autoimmune phenomena. The underlying mechanisms for these associations remain uncertain, although T cell activation seems to be important. Human T-lymphotropic virus (HTLV-1) has been detected in patients with myelodysplastic syndromes, mostly in regions of the world which are endemic for the virus, and where association of HTLV-1 with rheumatological manifestation is not rare. We present here the case of a 58 year old man who presented with cytopenias, leukocytoclastic vasculitis of the skin and glomerulopathy, and was diagnosed as MDS (refractory anemia with excess blasts - RAEB 1). The patient also tested positive for HTLV-1 by PCR. After 8 monthly cycles of 5-azacytidine he achieved a complete hematologic remission. Following treatment, a second PCR for HTLV-1 was carried out and found to be negative. This is the first report in the literature of a HTLV-1-positive MDS with severe autoimmune manifestations, which was treated with the hypomethylating factor 5-azacitidine, achieving cytogenetic remission with concomitant resolution of the autoimmune manifestations, as well as HTLV-1-PCR negativity. HTLV-1-PCR negativity may be due to either immune mediated clearance of the virus, or a potential antiretroviral effect of 5-azacytidine. 5-azacytidine is known for its antiretroviral effects, although there is no proof of its activity against HTLV-1 infection in vivo." }, { "pmid": "24127561", "title": "Sequence analysis of hepatitis C virus from patients with relapse after a sustained virological response: relapse or reinfection?", "abstract": "BACKGROUND\nA sustained virological response (SVR) is the major end point of therapy for chronic hepatitis C virus (HCV) infection. Late relapse of infection is rare and poorly characterized. Three of 103 patients with a SVR treated at the National Institutes of Health had late relapse. We evaluated HCV RNA sequences in serum and liver tissue to distinguish relapse from reinfection.\n\n\nMETHODS\nPer patient, 10-22 clones of amplified 5' untranslated region were evaluated in pretreatment and relapse serum specimens and in liver biopsy specimens obtained during SVR. Genotypes and sequence diversity were evaluated. Four patients whose infection relapsed before they reached a SVR (ie, the early relapse group) were used as a comparison.\n\n\nRESULTS\nResults of tests for detection of serum HCV RNA in all patients with late relapse were repeatedly negative during the first 24 weeks after therapy but became positive 8, 75, and 78 months after SVR. Reinfection risk factors were absent in 2 of 3 patients. In all patients with early or late relapse, apart from minor variations, the original HCV sequence was present before treatment and after relapse. All liver biopsy specimens from patients with late relapse were HCV RNA positive at SVR, with sequences nearly identical to those of specimens obtained at other time points.\n\n\nCONCLUSIONS\nSequence comparisons suggest that reappearance of HCV RNA years after a SVR can be from relapse of the initial viral infection rather than reinfection from a different virus." 
}, { "pmid": "12753540", "title": "Differences in clinical features between influenza A H1N1, A H3N2, and B in adult patients.", "abstract": "OBJECTIVE\nThe differences in clinical features between influenza A H1N1, A H3N2, and B in the past three influenza seasons were examined.\n\n\nMETHODOLOGY\nPatients with respiratory symptoms who consulted Kurume University Medical Center, Department of Internal Medicine, Kurume, Fukuoka, Japan, from January to March in 1999, 2000, and 2001 were included. Based on virological and serological findings, the influenza patients were divided into the above three groups for comparison of symptoms and laboratory data.\n\n\nRESULTS\n\n\n\nPATIENTS\n(n = 196) included 54 with influenza A H1N1, 98 with A H3N2, and 44 with B. Mean ages in the groups were 33 +/- 8.4 years, 41 +/- 15.2 years, and 29 +/- 9.8 years (influenza B patients tended to be younger). Fever was much greater in the A H3N2 group (38.6 +/- 0.46 degrees C) than in the A H1N1 or B groups. This was also true for laboratory indices of viral infection. Gastrointestinal symptoms such as nausea, epigastralgia, and diarrhoea were prominent in influenza B. Myalgia was common in all groups.\n\n\nCONCLUSIONS\nInfluenza A H3N2 infection was more severe than A H1N1 or B in terms of fever, leukopenia, and C-reactive protein. Myalgia and other symptoms such as fever, headache, general malaise and sore throat were equally frequent in influenza A H3N2, A H1N1, and B infections. Gastrointestinal symptoms were more common in influenza B." }, { "pmid": "21852336", "title": "Rhinovirus bronchiolitis and recurrent wheezing: 1-year follow-up.", "abstract": "The association between bronchiolitis and recurrent wheezing remains controversial. In this prospective study, we assessed risk factors for recurrent wheezing during a 12-month follow-up in 313 infants aged <12 months hospitalised for their first episode of bronchiolitis. Demographic, clinical and laboratory data were obtained with a questionnaire and from medical files. A total of 14 respiratory viruses were concurrently assayed in nasal washings. Parents were interviewed 12 months after hospitalisation to check whether their infants experienced recurrent wheezing. The rate of recurrent wheezing was higher in infants with bronchiolitis than in controls (52.7 versus 10.3%; p<0.001). Multivariate analysis identified rhinovirus (RV) infection (OR 3.3, 95% CI 1.0-11.1) followed by a positive family history for asthma (OR 2.5, 95% CI 1.2-4.9) as major independent risk factors for recurrent wheezing. In conclusion, the virus most likely to be associated with recurrent wheezing at 12 months after initial bronchiolitis is RV, a viral agent that could predict infants prone to the development of recurrent wheezing." }, { "pmid": "23625169", "title": "Molecular epidemiology and genetic diversity of human rhinovirus affecting hospitalized children in Rome.", "abstract": "Human rhinoviruses (HRV) have been re-classified into three species (A-C), but the recently discovered HRV-C strains are not fully characterized yet. This study aimed to undertake a molecular and epidemiological characterization of HRV strains infecting children hospitalized over one year in two large research hospitals in Rome. Nasal washings from single HRV infections were retrospectively subjected to phylogenetic analysis on two genomic regions: the central part of the 5'Untranslated Region (5'UTR) and the Viral Protein (VP) 4 gene with the 5' portion of the VP2 gene (VP4/2). 
Forty-five different strains were identified in 73 HRV-positive children: 55 % of the cases were HRV-A, 38 % HRV-C and only 7 % HRV-B. HRV-C cases were less frequent than HRV-A during summer months and more frequent in cases presenting wheezing with respect to HRV-A. Species distribution was similar with respect to patient age, and seasonality differed during summer months with fewer HRV-C than HRV-A cases. On admission, a significantly higher number of HRV-C cases presented with wheezing with respect to HRV-A. The inter- and intra-genotype variability in VP4/2 was higher than in 5'UTR; in particular, HRV-A patient VP4/2 sequences were highly divergent (8-14 %) at the nucleotide level from those of their reference strains, but VP4 amino acid sequence was highly conserved. In HRV-C isolates, the region preceding the initiator AUG, the amino acids involved in VP4 myristoylation, the VP4-VP2 cleavage site and the cis-acting replication element were highly conserved. Differently, VP4 amino acid conservation was significantly lower in HRV-C than in HRV-A strains, especially in the transiently exposed VP4 N-terminus. This study confirmed the high number of different HRV genotypes infecting hospitalized children over one year and reveals a greater than expected variability in HRV-C VP4 protein, potentially suggestive of differences in replication." }, { "pmid": "17813860", "title": "Optimization by simulated annealing.", "abstract": "There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods." }, { "pmid": "14642662", "title": "Reliable classification of two-class cancer data using evolutionary algorithms.", "abstract": "In the area of bioinformatics, the identification of gene subsets responsible for classifying available disease samples to two or more of its variants is an important task. Such problems have been solved in the past by means of unsupervised learning methods (hierarchical clustering, self-organizing maps, k-mean clustering, etc.) and supervised learning methods (weighted voting approach, k-nearest neighbor method, support vector machine method, etc.). Such problems can also be posed as optimization problems of minimizing gene subset size to achieve reliable and accurate classification. The main difficulties in solving the resulting optimization problem are the availability of only a few samples compared to the number of genes in the samples and the exorbitantly large search space of solutions. Although there exist a few applications of evolutionary algorithms (EAs) for this task, here we treat the problem as a multiobjective optimization problem of minimizing the gene subset size and minimizing the number of misclassified samples. Moreover, for a more reliable classification, we consider multiple training sets in evaluating a classifier. 
Contrary to the past studies, the use of a multiobjective EA (NSGA-II) has enabled us to discover a smaller gene subset size (such as four or five) to correctly classify 100% or near 100% samples for three cancer samples (Leukemia, Lymphoma, and Colon). We have also extended the NSGA-II to obtain multiple non-dominated solutions discovering as much as 352 different three-gene combinations providing a 100% correct classification to the Leukemia data. In order to have further confidence in the identification task, we have also introduced a prediction strength threshold for determining a sample's belonging to one class or the other. All simulation results show consistent gene subset identifications on three disease samples and exhibit the flexibilities and efficacies in using a multiobjective EA for the gene subset identification task." }, { "pmid": "17008640", "title": "Chronic rhinoviral infection in lung transplant recipients.", "abstract": "RATIONALE\nLung transplant recipients are particularly at risk of complications from rhinovirus, the most frequent respiratory virus circulating in the community.\n\n\nOBJECTIVES\nTo determine whether lung transplant recipients can be chronically infected by rhinovirus and the potential clinical impact.\n\n\nMETHODS\nWe first identified an index case, in which rhinovirus was isolated repeatedly, and conducted detailed molecular analysis to determine whether this was related to a unique strain or to re-infection episodes. Transbronchial biopsies were used to assess the presence of rhinovirus in the lung parenchyma. The incidence of chronic rhinoviral infections and potential clinical impact was assessed prospectively in a cohort of 68 lung transplant recipients during 19 mo by screening of bronchoalveolar lavages.\n\n\nMEASUREMENTS AND MAIN RESULTS\nWe describe 3 lung transplant recipients with graft dysfunctions in whom rhinovirus was identified by reverse transcriptase-polymerase chain reaction in upper and lower respiratory specimens over a 12-mo period. In two cases, rhinovirus was repeatedly isolated in culture. The persistence of a unique strain in each case was confirmed by sequence analysis of the 5'NCR and VP1 gene. In the index case, rhinovirus was detected in the lower respiratory parenchyma. In the cohort of lung transplant recipients, rhinoviral infections were documented in bronchoalveolar lavage specimens of 10 recipients, and 2 presented with a persistent infection.\n\n\nCONCLUSIONS\nRhinoviral infection can be persistent in lung transplant recipients with graft dysfunction, and the virus can be detected in the lung parenchyma. Given the potential clinical impact, chronic rhinoviral infection needs to be considered in lung transplant recipients." }, { "pmid": "3037780", "title": "A collaborative report: rhinoviruses--extension of the numbering system from 89 to 100.", "abstract": "To define the number of rhinovirus serotypes, cross neutralization tests and characterization studies were completed on 25 candidate prototype rhinoviruses submitted to a third phase of a collaborative program. Based on the results, 11 distinct prototype strains were designated and the numbering system was extended to include 100 rhinoviruses. In addition, recent evidence indicates that over 90% of rhinoviruses isolated in three areas of the country could be typed with antisera for rhinovirus types 1-89." 
}, { "pmid": "17623054", "title": "New complete genome sequences of human rhinoviruses shed light on their phylogeny and genomic features.", "abstract": "BACKGROUND\nHuman rhinoviruses (HRV), the most frequent cause of respiratory infections, include 99 different serotypes segregating into two species, A and B. Rhinoviruses share extensive genomic sequence similarity with enteroviruses and both are part of the picornavirus family. Nevertheless they differ significantly at the phenotypic level. The lack of HRV full-length genome sequences and the absence of analysis comparing picornaviruses at the whole genome level limit our knowledge of the genomic features supporting these differences.\n\n\nRESULTS\nHere we report complete genome sequences of 12 HRV-A and HRV-B serotypes, more than doubling the current number of available HRV sequences. The whole-genome maximum-likelihood phylogenetic analysis suggests that HRV-B and human enteroviruses (HEV) diverged from the last common ancestor after their separation from HRV-A. On the other hand, compared to HEV, HRV-B are more related to HRV-A in the capsid and 3B-C regions. We also identified the presence of a 2C cis-acting replication element (cre) in HRV-B that is not present in HRV-A, and that had been previously characterized only in HEV. In contrast to HEV viruses, HRV-A and HRV-B share also markedly lower GC content along the whole genome length.\n\n\nCONCLUSION\nOur findings provide basis to speculate about both the biological similarities and the differences (e.g. tissue tropism, temperature adaptation or acid lability) of these three groups of viruses." }, { "pmid": "19828751", "title": "Screening respiratory samples for detection of human rhinoviruses (HRVs) and enteroviruses: comprehensive VP4-VP2 typing reveals high incidence and genetic diversity of HRV species C.", "abstract": "Rhinovirus infections are the most common cause of viral illness in humans, and there is increasing evidence of their etiological role in severe acute respiratory tract infections (ARTIs). Human rhinoviruses (HRVs) are classified into two species, species A and B, which contain over 100 serotypes, and a recently discovered genetically heterogeneous third species (HRV species C). To investigate their diversity and population turnover, screening for the detection and the genetic characterization of HRV variants in diagnostic respiratory samples was performed by using nested primers for the efficient amplification of the VP4-VP2 region of HRV (and enterovirus) species and serotype identification. HRV species A, B, and C variants were detected in 14%, 1.8%, and 6.8%, respectively, of 456 diagnostic respiratory samples from 345 subjects (6 samples also contained enteroviruses), predominantly among children under age 10 years. HRV species A and B variants were remarkably heterogeneous, with 22 and 6 different serotypes, respectively, detected among 73 positive samples. Similarly, by using a pairwise distance threshold of 0.1, species C variants occurring worldwide were provisionally assigned to 47 different types, of which 15 were present among samples from Edinburgh, United Kingdom. There was a rapid turnover of variants, with only 5 of 43 serotypes detected during both sampling periods. By using divergence thresholds and phylogenetic analysis, several species A and C variants could provisionally be assigned to new types. 
An initial investigation of the clinical differences between rhinovirus species found HRV species C to be nearly twice as frequently associated with ARTIs than other rhinovirus species, which matches the frequencies of detection of respiratory syncytial virus. The study demonstrates the extraordinary genetic diversity of HRVs, their rapid population turnover, and their extensive involvement in childhood respiratory disease." }, { "pmid": "20668080", "title": "Analysis of genetic diversity and sites of recombination in human rhinovirus species C.", "abstract": "Human rhinoviruses (HRVs) are a highly prevalent and diverse group of respiratory viruses. Although HRV-A and HRV-B are traditionally detected by virus isolation, a series of unculturable HRV variants have recently been described and assigned as a new species (HRV-C) within the picornavirus Enterovirus genus. To investigate their genetic diversity and occurrence of recombination, we have performed comprehensive phylogenetic analysis of sequences from the 5' untranslated region (5' UTR), VP4/VP2, VP1, and 3Dpol regions amplified from 89 HRV-C-positive respiratory samples and available published sequences. Branching orders of VP4/VP2, VP1, and 3Dpol trees were identical, consistent with the absence of intraspecies recombination in the coding regions. However, numerous tree topology changes were apparent in the 5' UTR, where >60% of analyzed HRV-C variants showed recombination with species A sequences. Two recombination hot spots in stem-loop 5 and the polypyrimidine tract in the 5' UTR were mapped using the program GroupingScan. Available HRV-C sequences showed evidence for additional interspecies recombination with HRV-A in the 2A gene, with breakpoints mapping precisely to the boundaries of the C-terminal domain of the encoded proteinase. Pairwise distances between HRV-C variants in VP1 and VP4/VP2 regions fell into two separate distributions, resembling inter- and intraserotype distances of species A and B. These observations suggest that, without serological cross-neutralization data, HRV-C genetic groups may be equivalently classified into types using divergence thresholds derived from distance distributions. The extensive sequence data from multiple genome regions of HRV-C and analyses of recombination in the current study will assist future formulation of consensus criteria for HRV-C type assignment and identification." }, { "pmid": "23677786", "title": "Proposals for the classification of human rhinovirus species A, B and C into genotypically assigned types.", "abstract": "Human rhinoviruses (HRVs) frequently cause mild upper respiratory tract infections and more severe disease manifestations such as bronchiolitis and asthma exacerbations. HRV is classified into three species within the genus Enterovirus of the family Picornaviridae. HRV species A and B contain 75 and 25 serotypes identified by cross-neutralization assays, although the use of such assays for routine HRV typing is hampered by the large number of serotypes, replacement of virus isolation by molecular methods in HRV diagnosis and the poor or absent replication of HRV species C in cell culture. To address these problems, we propose an alternative, genotypic classification of HRV-based genetic relatedness analogous to that used for enteroviruses. 
Nucleotide distances between 384 complete VP1 sequences of currently assigned HRV (sero)types identified divergence thresholds of 13, 12 and 13 % for species A, B and C, respectively, that divided inter- and intra-type comparisons. These were paralleled by 10, 9.5 and 10 % thresholds in the larger dataset of >3800 VP4 region sequences. Assignments based on VP1 sequences led to minor revisions of existing type designations (such as the reclassification of serotype pairs, e.g. A8/A95 and A29/A44, as single serotypes) and the designation of new HRV types A101-106, B101-103 and C34-C51. A protocol for assignment and numbering of new HRV types using VP1 sequences and the restriction of VP4 sequence comparisons to type identification and provisional type assignments is proposed. Genotypic assignment and identification of HRV types will be of considerable value in the future investigation of type-associated differences in disease outcomes, transmission and epidemiology." }, { "pmid": "21342784", "title": "Human rhinovirus C--associated severe pneumonia in a neonate.", "abstract": "We present a case of severe pneumonia, associated with a prolonged infection by a species C rhinovirus (HRV) in a 3-week old neonate. HRV RNA was identified in nasal and nasopharyngeal secretions, bronchoalveolar lavage and bronchial specimens, stool and urine, collected from the patient during a one-month period. No other viral or bacterial agents were detected. Sequence analysis of two regions of the viral genome, amplified directly from the clinical specimens revealed a novel HRV-C variant. These observations highlight the occurrence of severe neonatal infections caused by HRVs and the need of rapid viral diagnostics for their detection." }, { "pmid": "19458158", "title": "MEME SUITE: tools for motif discovery and searching.", "abstract": "The MEME Suite web server provides a unified portal for online discovery and analysis of sequence motifs representing features such as DNA binding sites and protein interaction domains. The popular MEME motif discovery algorithm is now complemented by the GLAM2 algorithm which allows discovery of motifs containing gaps. Three sequence scanning algorithms--MAST, FIMO and GLAM2SCAN--allow scanning numerous DNA and protein sequence databases for motifs discovered by MEME and GLAM2. Transcription factor motifs (including those discovered using MEME) can be compared with motifs in many popular motif databases using the motif database scanning algorithm TOMTOM. Transcription factor motifs can be further analyzed for putative function by association with Gene Ontology (GO) terms using the motif-GO term association tool GOMO. MEME output now contains sequence LOGOS for each discovered motif, as well as buttons to allow motifs to be conveniently submitted to the sequence and motif database scanning algorithms (MAST, FIMO and TOMTOM), or to GOMO, for further analysis. GLAM2 output similarly contains buttons for further analysis using GLAM2SCAN and for rerunning GLAM2 with different parameters. All of the motif-based tools are now implemented as web services via Opal. Source code, binaries and a web server are freely available for noncommercial use at http://meme.nbcr.net." }, { "pmid": "21543442", "title": "DREME: motif discovery in transcription factor ChIP-seq data.", "abstract": "MOTIVATION\nTranscription factor (TF) ChIP-seq datasets have particular characteristics that provide unique challenges and opportunities for motif discovery. 
Most existing motif discovery algorithms do not scale well to such large datasets, or fail to report many motifs associated with cofactors of the ChIP-ed TF.\n\n\nRESULTS\nWe present DREME, a motif discovery algorithm specifically designed to find the short, core DNA-binding motifs of eukaryotic TFs, and optimized to analyze very large ChIP-seq datasets in minutes. Using DREME, we discover the binding motifs of the ChIP-ed TF and many cofactors in mouse ES cell (mESC), mouse erythrocyte and human cell line ChIP-seq datasets. For example, in mESC ChIP-seq data for the TF Esrrb, we discover the binding motifs for eight cofactor TFs important in the maintenance of pluripotency. Several other commonly used algorithms find at most two cofactor motifs in this same dataset. DREME can also perform discriminative motif discovery, and we use this feature to provide evidence that Sox2 and Oct4 do not bind in mES cells as an obligate heterodimer. DREME is much faster than many commonly used algorithms, scales linearly in dataset size, finds multiple, non-redundant motifs and reports a reliable measure of statistical significance for each motif found. DREME is available as part of the MEME Suite of motif-based sequence analysis tools (http://meme.nbcr.net)." } ]
Scientific Reports
27929098
PMC5144062
10.1038/srep38433
EP-DNN: A Deep Neural Network-Based Global Enhancer Prediction Algorithm
We present EP-DNN, a protocol for predicting enhancers based on chromatin features in different cell types. Specifically, we use a deep neural network (DNN)-based architecture to extract enhancer signatures in a representative human embryonic stem cell type (H1) and a differentiated lung cell type (IMR90). We train EP-DNN using p300 binding sites as enhancers, and TSS and random non-DHS sites as non-enhancers. We perform same-cell and cross-cell predictions to quantify the validation rate and compare against two state-of-the-art methods, DEEP-ENCODE and RFECS. We find that EP-DNN has superior accuracy, with a validation rate of 91.6% versus 85.3% for DEEP-ENCODE and 85.5% for RFECS for a given number of enhancer predictions, and that it scales better as the number of enhancer predictions grows. Moreover, our H1 → IMR90 predictions turn out to be more accurate than IMR90 → IMR90, potentially because H1 exhibits a richer signature set and our EP-DNN model is expressive enough to extract these subtleties. Our work shows how to leverage the full expressivity of deep learning models, using multiple hidden layers, while avoiding overfitting on the training data. We also lay the foundation for the exploration of cross-cell enhancer predictions, potentially reducing the need for expensive experimentation.
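To make the classification framing in this abstract concrete (p300-bound sites as positive examples, TSS and random non-DHS sites as negatives, then cross-cell scoring from H1 to IMR90), here is a minimal, hypothetical Python sketch. It substitutes scikit-learn's generic MLPClassifier for the actual EP-DNN architecture; the randomly generated feature matrices, layer sizes, and top-k cutoff are illustrative assumptions rather than details from the paper (the 480-feature width follows the 24 marks × 20 bins binning described in the Related Work below).

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical pre-computed feature matrices: one row per genomic window,
# one column per binned histone-modification feature (24 marks x 20 bins = 480).
rng = np.random.default_rng(1)
X_h1_pos = rng.normal(1.0, 1.0, (500, 480))   # p300-bound sites in H1 -> enhancer label 1
X_h1_neg = rng.normal(0.0, 1.0, (500, 480))   # TSS and random non-DHS sites -> label 0
X_h1 = np.vstack([X_h1_pos, X_h1_neg])
y_h1 = np.r_[np.ones(500), np.zeros(500)]

# A small multilayer network stands in for the deeper EP-DNN architecture.
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
clf.fit(X_h1, y_h1)

# Cross-cell prediction (H1 -> IMR90): score windows from the other cell type
# and rank them; the paper validates top-ranked predictions against independent
# evidence rather than held-out labels, so only the ranking step is mimicked here.
X_imr90 = rng.normal(0.5, 1.0, (1000, 480))
scores = clf.predict_proba(X_imr90)[:, 1]
top_windows = np.argsort(scores)[::-1][:100]   # e.g. the top 100 enhancer calls
print(top_windows[:10])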
Related Work
Several computational methods that use histone modification signatures to identify enhancer regions have been developed. Won et al. proposed the use of Hidden Markov Models (HMMs) to predict enhancers using three primary histone modifications30. Firpi et al. focused on the importance of recognizing histone modification signals through data transformation and employed Time-Delayed Neural Networks (TDNNs) with a set of histone marks selected through simulated annealing31. Fernández et al. used Support Vector Machines (SVMs) on an optimized set of histone modifications found through Genetic Algorithms32. RFECS (Random Forest based Enhancer identification from Chromatin States) addressed the limited number of training samples in previous approaches by using Random Forests (RFs) to determine the optimal set of histone modifications for predicting enhancers33. Table 1 compares several of these recent enhancer prediction protocols: RFECS34, DEEP-ENCODE35, ChromaGenSVM32, CSI-ANN31, and HMM30.

In addition to histone modifications, recent work has also used other input features to classify regulatory sites in DNA. For example, a complementary line of work36 further classifies enhancers as strong or weak. For their input features, the authors use k-mers of DNA nucleotides, while we use histone modification patterns. Their results are not directly comparable to ours because the ultimate classification task is also different: at a finer level of detail, their classification ignores whether an enhancer is poised or active and considers only the simpler, two-way distinction between strong and weak enhancers. Another recent paper shows how to input biological sequences into machine learning algorithms37. The difficulty arises from the fact that ML algorithms need vectors as inputs, and a straightforward conversion of a biological sequence into a vector loses important information, such as the ordering of the basic elements38 (nucleotides for DNA, amino acids for proteins). Prior work developed the idea of generating pseudo components from the sequences that can then be fed into the ML algorithm, and the above-mentioned paper unifies the different approaches for generating pseudo components from DNA, RNA, and protein sequences. This is a powerful and general-purpose method; in our work, however, we do not need this generality. We feed in the 24 different histone modification markers and, by binning, derive features corresponding to adjacent genomic regions for each marker (20 bins per marker). We shift the window gradually, so that contiguous windows overlap, and the DNN extracts the relevant ordering information from this overlap. Further, repDNA39 considers DNA sequences alone. repDNA calculates a total of 15 features that can be fed into ML algorithms; these fall into 3 categories: nucleic acid composition features; autocorrelation features, which describe the level of correlation between two oligonucleotides along a DNA sequence in terms of their specific physicochemical properties; and pseudo nucleotide composition features.
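As an illustration of the binning scheme just described (24 histone modification markers, 20 adjacent genomic bins per marker, and a gradually shifted window so that contiguous windows overlap), the following is a minimal sketch of how per-window feature vectors could be assembled. The bin size, step, input array layout, and function names are assumptions made for this example, not the authors' implementation.

import numpy as np

N_MARKS = 24        # histone modification markers, as stated in the text
BINS_PER_MARK = 20  # adjacent genomic bins kept per marker
BIN_SIZE = 100      # bp per bin (an assumption; not specified in this excerpt)
STEP_BINS = 5       # window shift in bins; the overlap lets the DNN see ordering

def window_features(signal, start_bin):
    # Flatten a (N_MARKS x BINS_PER_MARK) slice of binned signal into one
    # 480-dimensional feature vector for the window starting at start_bin.
    # signal is a (N_MARKS, total_bins) array of pre-binned coverage values
    # for one chromosome (a hypothetical input format).
    window = signal[:, start_bin:start_bin + BINS_PER_MARK]   # shape (24, 20)
    return window.reshape(-1)                                  # shape (480,)

def sliding_windows(signal, step=STEP_BINS):
    # Yield overlapping windows across the chromosome; contiguous windows
    # share bins, which is how ordering information is preserved.
    total_bins = signal.shape[1]
    for start in range(0, total_bins - BINS_PER_MARK + 1, step):
        yield start * BIN_SIZE, window_features(signal, start)

# Toy usage with random counts standing in for real binned ChIP-seq coverage.
rng = np.random.default_rng(0)
chrom_signal = rng.poisson(2.0, size=(N_MARKS, 1000)).astype(float)
positions, feats = zip(*sliding_windows(chrom_signal))
X = np.stack(feats)    # one row of 480 features per candidate window
print(X.shape)         # (197, 480) for this toy chromosome of 1000 bins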
[ "20025863", "18851828", "6277502", "12837695", "21295696", "21106903", "22955616", "25693562", "21097893", "23503198", "11526400", "24679523", "22169023", "21572438", "24614317", "19212405", "22534400", "19854636", "15345045", "11559745", "9445475", "17277777", "19094206", "20453004", "22328731", "23995138", "23526891", "26476782", "25958395", "23328393", "25504848", "23473601", "20526341", "21685454", "21160473", "22215806", "24565409", "23771147", "23019145", "23925113", "17130149" ]
[ { "pmid": "20025863", "title": "Enhancers: the abundance and function of regulatory sequences beyond promoters.", "abstract": "Transcriptional control in mammals and Drosophila is often mediated by regulatory sequences located far from gene promoters. Different classes of such elements - particularly enhancers, but also locus control regions and insulators - have been defined by specific functional assays, although it is not always clear how these assays relate to the function of these elements within their native loci. Recent advances in genomics suggest, however, that such elements are highly abundant within the genome and may represent the primary mechanism by which cell- and developmental-specific gene expression is accomplished. In this review, we discuss the functional parameters of enhancers as defined by specific assays, along with the frequency with which they occur in the genome. In addition, we examine the available evidence for the mechanism by which such elements communicate or interact with the promoters they regulate." }, { "pmid": "18851828", "title": "Chromatin insulators: regulatory mechanisms and epigenetic inheritance.", "abstract": "Enhancer-blocking insulators are DNA elements that disrupt the communication between a regulatory sequence, such as an enhancer or a silencer, and a promoter. Insulators participate in both transcriptional regulation and global nuclear organization, two features of chromatin that are thought to be maintained from one generation to the next through epigenetic mechanisms. Furthermore, there are many regulatory mechanisms in place that enhance or hinder insulator activity. These modes of regulation could be used to establish cell-type-specific insulator activity that is epigenetically inherited along a cell and/or organismal lineage. This review will discuss the evidence for epigenetic inheritance and regulation of insulator function." }, { "pmid": "6277502", "title": "Expression of a beta-globin gene is enhanced by remote SV40 DNA sequences.", "abstract": "We have studied the transient expression of a cloned rabbit hemoglobin beta 1 gene after its introduction into HeLa cells. Two and one-half days after transfection using the calcium phosphate technique we extracted RNA from the entire cell population and analyzed it by the S1 nuclease hybridization assay. Transcripts were barely detectable when beta-globin gene-plasmid recombinants were used. However, 200 times more beta-globin gene transcripts were found when the beta-globin gene recombinants also contained SV40 DNA, and 90% of these transcripts (about 1000 per cell) had the same 5' end as authentic rabbit globin mRNA. In the latter case, abundant production of beta-globin protein was readily detected in a fraction of transfected cells by immunofluorescent staining. Enhancement of globin gene expression was dependent on SV40 sequences acting in cis, but independent of the viral origin of DNA replication. The enhancing activity was associated with the 72 bp repeated sequence element located at the beginning of the viral late gene region. Viral DNA fragments containing the transcriptional enhancer element could act in either orientation at many positions, including 1400 bp upstream or 3300 bp downstream from the transcription initiation site of the rabbit beta-globin gene. These studies define a class of DNA elements with a mode of action that has not been heretofore described. 
The activation of genes by specific enhancer elements seems to be a widespread mechanism that may be used for the regulation of gene expression." }, { "pmid": "12837695", "title": "A long-range Shh enhancer regulates expression in the developing limb and fin and is associated with preaxial polydactyly.", "abstract": "Unequivocal identification of the full composition of a gene is made difficult by the cryptic nature of regulatory elements. Regulatory elements are notoriously difficult to locate and may reside at considerable distances from the transcription units on which they operate and, moreover, may be incorporated into the structure of neighbouring genes. The importance of regulatory mutations as the basis of human abnormalities remains obscure. Here, we show that the chromosome 7q36 associated preaxial polydactyly, a frequently observed congenital limb malformation, results from point mutations in a Shh regulatory element. Shh, normally expressed in the ZPA posteriorly in the limb bud, is expressed in an additional ectopic site at the anterior margin in mouse models of PPD. Our investigations into the basis of the ectopic Shh expression identified the enhancer element that drives normal Shh expression in the ZPA. The regulator, designated ZRS, lies within intron 5 of the Lmbr1 gene 1 Mb from the target gene Shh. The ZRS drives the early spatio-temporal expression pattern in the limb of tetrapods. Despite the morphological differences between limbs and fins, an equivalent regulatory element is found in fish. The ZRS contains point mutations that segregate with polydactyly in four unrelated families with PPD and in the Hx mouse mutant. Thus point mutations residing in long-range regulatory elements are capable of causing congenital abnormalities, and possess the capacity to modify gene activity such that a novel gamut of abnormalities is detected." }, { "pmid": "21295696", "title": "Functional and mechanistic diversity of distal transcription enhancers.", "abstract": "Biological differences among metazoans and between cell types in a given organism arise in large part due to differences in gene expression patterns. Gene-distal enhancers are key contributors to these expression patterns, exhibiting both sequence diversity and cell type specificity. Studies of long-range interactions indicate that enhancers are often important determinants of nuclear organization, contributing to a general model for enhancer function that involves direct enhancer-promoter contact. However, mechanisms for enhancer function are emerging that do not fit solely within such a model, suggesting that enhancers as a class of DNA regulatory element may be functionally and mechanistically diverse." }, { "pmid": "21106903", "title": "High-resolution genome-wide in vivo footprinting of diverse transcription factors in human cells.", "abstract": "Regulation of gene transcription in diverse cell types is determined largely by varied sets of cis-elements where transcription factors bind. Here we demonstrate that data from a single high-throughput DNase I hypersensitivity assay can delineate hundreds of thousands of base-pair resolution in vivo footprints in human cells that precisely mark individual transcription factor-DNA interactions. These annotations provide a unique resource for the investigation of cis-regulatory elements. We find that footprints for specific transcription factors correlate with ChIP-seq enrichment and can accurately identify functional versus nonfunctional transcription factor motifs. 
We also find that footprints reveal a unique evolutionary conservation pattern that differentiates functional footprinted bases from surrounding DNA. Finally, detailed analysis of CTCF footprints suggests multiple modes of binding and a novel DNA binding motif upstream of the primary binding site." }, { "pmid": "22955616", "title": "An integrated encyclopedia of DNA elements in the human genome.", "abstract": "The human genome encodes the blueprint of life, but the function of the vast majority of its nearly three billion bases is unknown. The Encyclopedia of DNA Elements (ENCODE) project has systematically mapped regions of transcription, transcription factor association, chromatin structure and histone modification. These data enabled us to assign biochemical functions for 80% of the genome, in particular outside of the well-studied protein-coding regions. Many discovered candidate regulatory elements are physically associated with one another and with expressed genes, providing new insights into the mechanisms of gene regulation. The newly identified elements also show a statistical correspondence to sequence variants linked to human disease, and can thereby guide interpretation of this variation. Overall, the project provides new insights into the organization and regulation of our genes and genome, and is an expansive resource of functional annotations for biomedical research." }, { "pmid": "21097893", "title": "NCBI GEO: archive for functional genomics data sets--10 years on.", "abstract": "A decade ago, the Gene Expression Omnibus (GEO) database was established at the National Center for Biotechnology Information (NCBI). The original objective of GEO was to serve as a public repository for high-throughput gene expression data generated mostly by microarray technology. However, the research community quickly applied microarrays to non-gene-expression studies, including examination of genome copy number variation and genome-wide profiling of DNA-binding proteins. Because the GEO database was designed with a flexible structure, it was possible to quickly adapt the repository to store these data types. More recently, as the microarray community switches to next-generation sequencing technologies, GEO has again adapted to host these data sets. Today, GEO stores over 20,000 microarray- and sequence-based functional genomics studies, and continues to handle the majority of direct high-throughput data submissions from the research community. Multiple mechanisms are provided to help users effectively search, browse, download and visualize the data at the level of individual genes or entire studies. This paper describes recent database enhancements, including new search and data representation tools, as well as a brief review of how the community uses GEO data. GEO is freely accessible at http://www.ncbi.nlm.nih.gov/geo/." }, { "pmid": "23503198", "title": "Enhancers: five essential questions.", "abstract": "It is estimated that the human genome contains hundreds of thousands of enhancers, so understanding these gene-regulatory elements is a crucial goal. Several fundamental questions need to be addressed about enhancers, such as how do we identify them all, how do they work, and how do they contribute to disease and evolution? Five prominent researchers in this field look at how much we know already and what needs to be done to answer these questions." 
}, { "pmid": "11526400", "title": "Deletion of a coordinate regulator of type 2 cytokine expression in mice.", "abstract": "Mechanisms that underlie the patterning of cytokine expression in T helper (T(H)) cell subsets remain incompletely defined. An evolutionarily conserved approximately 400-bp noncoding sequence in the intergenic region between the genes Il4 and Il13, designated conserved noncoding sequence 1 (CNS-1), was deleted in mice. The capacity to develop T(H)2 cells was compromised in vitro and in vivo in the absence of CNS-1. Despite the profound effect in T cells, mast cells from CNS-1(-/-) mice maintained their capacity to produce interleukin 4. A T cell-specific element critical for the optimal expression of type 2 cytokines may represent the evolution of a regulatory sequence exploited by adaptive immunity." }, { "pmid": "24679523", "title": "Looping back to leap forward: transcription enters a new era.", "abstract": "Comparative genome analyses reveal that organismal complexity scales not with gene number but with gene regulation. Recent efforts indicate that the human genome likely contains hundreds of thousands of enhancers, with a typical gene embedded in a milieu of tens of enhancers. Proliferation of cis-regulatory DNAs is accompanied by increased complexity and functional diversification of transcriptional machineries recognizing distal enhancers and core promoters and by the high-order spatial organization of genetic elements. We review progress in unraveling one of the outstanding mysteries of modern biology: the dynamic communication of remote enhancers with target promoters in the specification of cellular identity." }, { "pmid": "22169023", "title": "Enhancer and promoter interactions-long distance calls.", "abstract": "In metazoans, enhancers of gene transcription must often exert their effects over tens of kilobases of DNA. Over the past decade it has become clear that to do this, enhancers come into close proximity with target promoters with the looping away of intervening sequences. In a few cases proteins that are involved in the establishment or maintenance of these loops have been revealed but how the proper gene target is selected remains mysterious. Chromatin insulators had been appreciated as elements that play a role in enhancer fidelity through their enhancer blocking or barrier activity. However, recent work suggests more direct participation of insulators in enhancer-gene interactions. The emerging view begins to incorporate transcription activation by distant enhancers with large scale nuclear architecture and subnuclear movement." }, { "pmid": "21572438", "title": "Reprogramming transcription by distinct classes of enhancers functionally defined by eRNA.", "abstract": "Mammalian genomes are populated with thousands of transcriptional enhancers that orchestrate cell-type-specific gene expression programs, but how those enhancers are exploited to institute alternative, signal-dependent transcriptional responses remains poorly understood. Here we present evidence that cell-lineage-specific factors, such as FoxA1, can simultaneously facilitate and restrict key regulated transcription factors, exemplified by the androgen receptor (AR), to act on structurally and functionally distinct classes of enhancer. 
Consequently, FoxA1 downregulation, an unfavourable prognostic sign in certain advanced prostate tumours, triggers dramatic reprogramming of the hormonal response by causing a massive switch in AR binding to a distinct cohort of pre-established enhancers. These enhancers are functional, as evidenced by the production of enhancer-templated non-coding RNA (eRNA) based on global nuclear run-on sequencing (GRO-seq) analysis, with a unique class apparently requiring no nucleosome remodelling to induce specific enhancer-promoter looping and gene activation. GRO-seq data also suggest that liganded AR induces both transcription initiation and elongation. Together, these findings reveal a large repository of active enhancers that can be dynamically tuned to elicit alternative gene expression programs, which may underlie many sequential gene expression events in development, cell differentiation and disease progression." }, { "pmid": "24614317", "title": "Transcriptional enhancers: from properties to genome-wide predictions.", "abstract": "Cellular development, morphology and function are governed by precise patterns of gene expression. These are established by the coordinated action of genomic regulatory elements known as enhancers or cis-regulatory modules. More than 30 years after the initial discovery of enhancers, many of their properties have been elucidated; however, despite major efforts, we only have an incomplete picture of enhancers in animal genomes. In this Review, we discuss how properties of enhancer sequences and chromatin are used to predict enhancers in genome-wide studies. We also cover recently developed high-throughput methods that allow the direct testing and identification of enhancers on the basis of their activity. Finally, we discuss recent technological advances and current challenges in the field of regulatory genomics." }, { "pmid": "19212405", "title": "ChIP-seq accurately predicts tissue-specific activity of enhancers.", "abstract": "A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover because they are scattered among the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here we present the results of chromatin immunoprecipitation with the enhancer-associated protein p300 followed by massively parallel sequencing, and map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases demonstrated reproducible enhancer activity in the tissues that were predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities, and suggest that such data sets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale." }, { "pmid": "22534400", "title": "Uncovering cis-regulatory sequence requirements for context-specific transcription factor binding.", "abstract": "The regulation of gene expression is mediated at the transcriptional level by enhancer regions that are bound by sequence-specific transcription factors (TFs). 
Recent studies have shown that the in vivo binding sites of single TFs differ between developmental or cellular contexts. How this context-specific binding is encoded in the cis-regulatory DNA sequence has, however, remained unclear. We computationally dissect context-specific TF binding sites in Drosophila, Caenorhabditis elegans, mouse, and human and find distinct combinations of sequence motifs for partner factors, which are predictive and reveal specific motif requirements of individual binding sites. We predict that TF binding in the early Drosophila embryo depends on motifs for the early zygotic TFs Vielfaltig (also known as Zelda) and Tramtrack. We validate experimentally that the activity of Twist-bound enhancers and Twist binding itself depend on Vielfaltig motifs, suggesting that Vielfaltig is more generally important for early transcription. Our finding that the motif content can predict context-specific binding and that the predictions work across different Drosophila species suggests that characteristic motif combinations are shared between sites, revealing context-specific motif codes (cis-regulatory signatures), which appear to be conserved during evolution. Taken together, this study establishes a novel approach to derive predictive cis-regulatory motif requirements for individual TF binding sites and enhancers. Importantly, the method is generally applicable across different cell types and organisms to elucidate cis-regulatory sequence determinants and the corresponding trans-acting factors from the increasing number of tissue- and cell-type-specific TF binding studies." }, { "pmid": "19854636", "title": "Finding distal regulatory elements in the human genome.", "abstract": "Transcriptional regulation of human genes depends not only on promoters and nearby cis-regulatory elements, but also on distal regulatory elements such as enhancers, insulators, locus control regions, and silencing elements, which are often located far away from the genes they control. Our knowledge of human distal regulatory elements is very limited, but the last several years have seen rapid progress in the development of strategies to identify these long-range regulatory sequences throughout the human genome. Here, we review these advances, focusing on two important classes of distal regulatory sequences-enhancers and insulators." }, { "pmid": "15345045", "title": "Computational identification of developmental enhancers: conservation and function of transcription factor binding-site clusters in Drosophila melanogaster and Drosophila pseudoobscura.", "abstract": "BACKGROUND\nThe identification of sequences that control transcription in metazoans is a major goal of genome analysis. In a previous study, we demonstrated that searching for clusters of predicted transcription factor binding sites could discover active regulatory sequences, and identified 37 regions of the Drosophila melanogaster genome with high densities of predicted binding sites for five transcription factors involved in anterior-posterior embryonic patterning. Nine of these clusters overlapped known enhancers. Here, we report the results of in vivo functional analysis of 27 remaining clusters.\n\n\nRESULTS\nWe generated transgenic flies carrying each cluster attached to a basal promoter and reporter gene, and assayed embryos for reporter gene expression. 
Six clusters are enhancers of adjacent genes: giant, fushi tarazu, odd-skipped, nubbin, squeeze and pdm2; three drive expression in patterns unrelated to those of neighboring genes; the remaining 18 do not appear to have enhancer activity. We used the Drosophila pseudoobscura genome to compare patterns of evolution in and around the 15 positive and 18 false-positive predictions. Although conservation of primary sequence cannot distinguish true from false positives, conservation of binding-site clustering accurately discriminates functional binding-site clusters from those with no function. We incorporated conservation of binding-site clustering into a new genome-wide enhancer screen, and predict several hundred new regulatory sequences, including 85 adjacent to genes with embryonic patterns.\n\n\nCONCLUSIONS\nMeasuring conservation of sequence features closely linked to function--such as binding-site clusterin--makes better use of comparative sequence data than commonly used methods that examine only sequence identity." }, { "pmid": "11559745", "title": "p300/CBP proteins: HATs for transcriptional bridges and scaffolds.", "abstract": "p300/CBP transcriptional co-activator proteins play a central role in co-ordinating and integrating multiple signal-dependent events with the transcription apparatus, allowing the appropriate level of gene activity to occur in response to diverse physiological cues that influence, for example, proliferation, differentiation and apoptosis. p300/CBP activity can be under aberrant control in human disease, particularly in cancer, which may inactivate a p300/CBP tumour-suppressor-like activity. The transcription regulating-properties of p300 and CBP appear to be exerted through multiple mechanisms. They act as protein bridges, thereby connecting different sequence-specific transcription factors to the transcription apparatus. Providing a protein scaffold upon which to build a multicomponent transcriptional regulatory complex is likely to be an important feature of p300/CBP control. Another key property is the presence of histone acetyltransferase (HAT) activity, which endows p300/CBP with the capacity to influence chromatin activity by modulating nucleosomal histones. Other proteins, including the p53 tumour suppressor, are targets for acetylation by p300/CBP. With the current intense level of research activity, p300/CBP will continue to be in the limelight and, we can be confident, yield new and important information on fundamental processes involved in transcriptional control." }, { "pmid": "9445475", "title": "Transcription factor-specific requirements for coactivators and their acetyltransferase functions.", "abstract": "Different classes of mammalian transcription factors-nuclear receptors, cyclic adenosine 3',5'-monophosphate-regulated enhancer binding protein (CREB), and signal transducer and activator of transcription-1 (STAT-1)-functionally require distinct components of the coactivator complex, including CREB-binding protein (CBP/p300), nuclear receptor coactivators (NCoAs), and p300/CBP-associated factor (p/CAF), based on their platform or assembly properties. Retinoic acid receptor, CREB, and STAT-1 also require different histone acetyltransferase (HAT) activities to activate transcription. 
Thus, transcription factor-specific differences in configuration and content of the coactivator complex dictate requirements for specific acetyltransferase activities, providing an explanation, at least in part, for the presence of multiple HAT components of the complex." }, { "pmid": "17277777", "title": "Distinct and predictive chromatin signatures of transcriptional promoters and enhancers in the human genome.", "abstract": "Eukaryotic gene transcription is accompanied by acetylation and methylation of nucleosomes near promoters, but the locations and roles of histone modifications elsewhere in the genome remain unclear. We determined the chromatin modification states in high resolution along 30 Mb of the human genome and found that active promoters are marked by trimethylation of Lys4 of histone H3 (H3K4), whereas enhancers are marked by monomethylation, but not trimethylation, of H3K4. We developed computational algorithms using these distinct chromatin signatures to identify new regulatory elements, predicting over 200 promoters and 400 enhancers within the 30-Mb region. This approach accurately predicted the location and function of independently identified regulatory elements with high sensitivity and specificity and uncovered a novel functional enhancer for the carnitine transporter SLC22A5 (OCTN2). Our results give insight into the connections between chromatin modifications and transcriptional regulatory activity and provide a new tool for the functional annotation of the human genome." }, { "pmid": "19094206", "title": "Prediction of regulatory elements in mammalian genomes using chromatin signatures.", "abstract": "BACKGROUND\nRecent genomic scale survey of epigenetic states in the mammalian genomes has shown that promoters and enhancers are correlated with distinct chromatin signatures, providing a pragmatic way for systematic mapping of these regulatory elements in the genome. With rapid accumulation of chromatin modification profiles in the genome of various organisms and cell types, this chromatin based approach promises to uncover many new regulatory elements, but computational methods to effectively extract information from these datasets are still limited.\n\n\nRESULTS\nWe present here a supervised learning method to predict promoters and enhancers based on their unique chromatin modification signatures. We trained Hidden Markov models (HMMs) on the histone modification data for known promoters and enhancers, and then used the trained HMMs to identify promoter or enhancer like sequences in the human genome. Using a simulated annealing (SA) procedure, we searched for the most informative combination and the optimal window size of histone marks.\n\n\nCONCLUSION\nCompared with the previous methods, the HMM method can capture the complex patterns of histone modifications particularly from the weak signals. Cross validation and scanning the ENCODE regions showed that our method outperforms the previous profile-based method in mapping promoters and enhancers. We also showed that including more histone marks can further boost the performance of our method. This observation suggests that the HMM is robust and is capable of integrating information from multiple histone marks. To further demonstrate the usefulness of our method, we applied it to analyzing genome wide ChIP-Seq data in three mouse cell lines and correctly predicted active and inactive promoters with positive predictive values of more than 80%. 
The software is available at http://http:/nash.ucsd.edu/chromatin.tar.gz." }, { "pmid": "20453004", "title": "Discover regulatory DNA elements using chromatin signatures and artificial neural network.", "abstract": "MOTIVATION\nRecent large-scale chromatin states mapping efforts have revealed characteristic chromatin modification signatures for various types of functional DNA elements. Given the important influence of chromatin states on gene regulation and the rapid accumulation of genome-wide chromatin modification data, there is a pressing need for computational methods to analyze these data in order to identify functional DNA elements. However, existing computational tools do not exploit data transformation and feature extraction as a means to achieve a more accurate prediction.\n\n\nRESULTS\nWe introduce a new computational framework for identifying functional DNA elements using chromatin signatures. The framework consists of a data transformation and a feature extraction step followed by a classification step using time-delay neural network. We implemented our framework in a software tool CSI-ANN (chromatin signature identification by artificial neural network). When applied to predict transcriptional enhancers in the ENCODE region, CSI-ANN achieved a 65.5% sensitivity and 66.3% positive predictive value, a 5.9% and 11.6% improvement, respectively, over the previously best approach.\n\n\nAVAILABILITY AND IMPLEMENTATION\nCSI-ANN is implemented in Matlab. The source code is freely available at http://www.medicine.uiowa.edu/Labs/tan/CSIANNsoft.zip\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary Materials are available at Bioinformatics online." }, { "pmid": "22328731", "title": "Genome-wide enhancer prediction from epigenetic signatures using genetic algorithm-optimized support vector machines.", "abstract": "The chemical modification of histones at specific DNA regulatory elements is linked to the activation, inactivation and poising of genes. A number of tools exist to predict enhancers from chromatin modification maps, but their practical application is limited because they either (i) consider a smaller number of marks than those necessary to define the various enhancer classes or (ii) work with an excessive number of marks, which is experimentally unviable. We have developed a method for chromatin state detection using support vector machines in combination with genetic algorithm optimization, called ChromaGenSVM. ChromaGenSVM selects optimum combinations of specific histone epigenetic marks to predict enhancers. In an independent test, ChromaGenSVM recovered 88% of the experimentally supported enhancers in the pilot ENCODE region of interferon gamma-treated HeLa cells. Furthermore, ChromaGenSVM successfully combined the profiles of only five distinct methylation and acetylation marks from ChIP-seq libraries done in human CD4(+) T cells to predict ∼21,000 experimentally supported enhancers within 1.0 kb regions and with a precision of ∼90%, thereby improving previous predictions on the same dataset by 21%. The combined results indicate that ChromaGenSVM comfortably outperforms previously published methods and that enhancers are best predicted by specific combinations of histone methylation and acetylation marks." 
}, { "pmid": "23995138", "title": "Epigenetic memory at embryonic enhancers identified in DNA methylation maps from adult mouse tissues.", "abstract": "Mammalian development requires cytosine methylation, a heritable epigenetic mark of cellular memory believed to maintain a cell's unique gene expression pattern. However, it remains unclear how dynamic DNA methylation relates to cell type-specific gene expression and animal development. Here, by mapping base-resolution methylomes in 17 adult mouse tissues at shallow coverage, we identify 302,864 tissue-specific differentially methylated regions (tsDMRs) and estimate that >6.7% of the mouse genome is variably methylated. Supporting a prominent role for DNA methylation in gene regulation, most tsDMRs occur at distal cis-regulatory elements. Unexpectedly, some tsDMRs mark enhancers that are dormant in adult tissues but active in embryonic development. These 'vestigial' enhancers are hypomethylated and lack active histone modifications in adult tissues but nevertheless exhibit activity during embryonic development. Our results provide new insights into the role of DNA methylation at tissue-specific enhancers and suggest that epigenetic memory of embryonic development may be retained in adult tissues." }, { "pmid": "23526891", "title": "RFECS: a random-forest based algorithm for enhancer identification from chromatin state.", "abstract": "Transcriptional enhancers play critical roles in regulation of gene expression, but their identification in the eukaryotic genome has been challenging. Recently, it was shown that enhancers in the mammalian genome are associated with characteristic histone modification patterns, which have been increasingly exploited for enhancer identification. However, only a limited number of cell types or chromatin marks have previously been investigated for this purpose, leaving the question unanswered whether there exists an optimal set of histone modifications for enhancer prediction in different cell types. Here, we address this issue by exploring genome-wide profiles of 24 histone modifications in two distinct human cell types, embryonic stem cells and lung fibroblasts. We developed a Random-Forest based algorithm, RFECS (Random Forest based Enhancer identification from Chromatin States) to integrate histone modification profiles for identification of enhancers, and used it to identify enhancers in a number of cell-types. We show that RFECS not only leads to more accurate and precise prediction of enhancers than previous methods, but also helps identify the most informative and robust set of three chromatin marks for enhancer prediction." }, { "pmid": "26476782", "title": "iEnhancer-2L: a two-layer predictor for identifying enhancers and their strength by pseudo k-tuple nucleotide composition.", "abstract": "MOTIVATION\nEnhancers are of short regulatory DNA elements. They can be bound with proteins (activators) to activate transcription of a gene, and hence play a critical role in promoting gene transcription in eukaryotes. With the avalanche of DNA sequences generated in the post-genomic age, it is a challenging task to develop computational methods for timely identifying enhancers from extremely complicated DNA sequences. Although some efforts have been made in this regard, they were limited at only identifying whether a query DNA element being of an enhancer or not. 
According to the distinct levels of biological activities and regulatory effects on target genes, however, enhancers should be further classified into strong and weak ones in strength.\n\n\nRESULTS\nIn view of this, a two-layer predictor called ' IENHANCER-2L: ' was proposed by formulating DNA elements with the 'pseudo k-tuple nucleotide composition', into which the six DNA local parameters were incorporated. To the best of our knowledge, it is the first computational predictor ever established for identifying not only enhancers, but also their strength. Rigorous cross-validation tests have indicated that IENHANCER-2L: holds very high potential to become a useful tool for genome analysis.\n\n\nAVAILABILITY AND IMPLEMENTATION\nFor the convenience of most experimental scientists, a web server for the two-layer predictor was established at http://bioinformatics.hitsz.edu.cn/iEnhancer-2L/, by which users can easily get their desired results without the need to go through the mathematical details.\n\n\nCONTACT\[email protected], [email protected], [email protected], [email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "25958395", "title": "Pse-in-One: a web server for generating various modes of pseudo components of DNA, RNA, and protein sequences.", "abstract": "With the avalanche of biological sequences generated in the post-genomic age, one of the most challenging problems in computational biology is how to effectively formulate the sequence of a biological sample (such as DNA, RNA or protein) with a discrete model or a vector that can effectively reflect its sequence pattern information or capture its key features concerned. Although several web servers and stand-alone tools were developed to address this problem, all these tools, however, can only handle one type of samples. Furthermore, the number of their built-in properties is limited, and hence it is often difficult for users to formulate the biological sequences according to their desired features or properties. In this article, with a much larger number of built-in properties, we are to propose a much more flexible web server called Pse-in-One (http://bioinformatics.hitsz.edu.cn/Pse-in-One/), which can, through its 28 different modes, generate nearly all the possible feature vectors for DNA, RNA and protein sequences. Particularly, it can also generate those feature vectors with the properties defined by users themselves. These feature vectors can be easily combined with machine-learning algorithms to develop computational predictors and analysis methods for various tasks in bioinformatics and system biology. It is anticipated that the Pse-in-One web server will become a very useful tool in computational proteomics, genomics, as well as biological sequence analysis. Moreover, to maximize users' convenience, its stand-alone version can also be downloaded from http://bioinformatics.hitsz.edu.cn/Pse-in-One/download/, and directly run on Windows, Linux, Unix and Mac OS." }, { "pmid": "23328393", "title": "Genome-wide quantitative enhancer activity maps identified by STARR-seq.", "abstract": "Genomic enhancers are important regulators of gene expression, but their identification is a challenge, and methods depend on indirect measures of activity. We developed a method termed STARR-seq to directly and quantitatively assess enhancer activity for millions of candidates from arbitrary sources of DNA, which enables screens across entire genomes. 
When applied to the Drosophila genome, STARR-seq identifies thousands of cell type-specific enhancers across a broad continuum of strengths, links differential gene expression to differences in enhancer activity, and creates a genome-wide quantitative enhancer map. This map reveals the highly complex regulation of transcription, with several independent enhancers for both developmental regulators and ubiquitously expressed genes. STARR-seq can be used to identify and quantify enhancer activity in other eukaryotes, including humans." }, { "pmid": "25504848", "title": "repDNA: a Python package to generate various modes of feature vectors for DNA sequences by incorporating user-defined physicochemical properties and sequence-order effects.", "abstract": "UNLABELLED\nIn order to develop powerful computational predictors for identifying the biological features or attributes of DNAs, one of the most challenging problems is to find a suitable approach to effectively represent the DNA sequences. To facilitate the studies of DNAs and nucleotides, we developed a Python package called representations of DNAs (repDNA) for generating the widely used features reflecting the physicochemical properties and sequence-order effects of DNAs and nucleotides. There are three feature groups composed of 15 features. The first group calculates three nucleic acid composition features describing the local sequence information by means of kmers; the second group calculates six autocorrelation features describing the level of correlation between two oligonucleotides along a DNA sequence in terms of their specific physicochemical properties; the third group calculates six pseudo nucleotide composition features, which can be used to represent a DNA sequence with a discrete model or vector yet still keep considerable sequence-order information via the physicochemical properties of its constituent oligonucleotides. In addition, these features can be easily calculated based on both the built-in and user-defined properties via using repDNA.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe repDNA Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/repDNA/.\n\n\nCONTACT\[email protected] or [email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "23473601", "title": "Modification of enhancer chromatin: what, how, and why?", "abstract": "Emergence of form and function during embryogenesis arises in large part through cell-type- and cell-state-specific variation in gene expression patterns, mediated by specialized cis-regulatory elements called enhancers. Recent large-scale epigenomic mapping revealed unexpected complexity and dynamics of enhancer utilization patterns, with 400,000 putative human enhancers annotated by the ENCODE project alone. These large-scale efforts were largely enabled through the understanding that enhancers share certain stereotypical chromatin features. However, an important question still lingers: what is the functional significance of enhancer chromatin modification? Here we give an overview of enhancer-associated modifications of histones and DNA and discuss enzymatic activities involved in their dynamic deposition and removal. We describe potential downstream effectors of these marks and propose models for exploring functions of chromatin modification in regulating enhancer activity during development." 
}, { "pmid": "20526341", "title": "Transposable elements have rewired the core regulatory network of human embryonic stem cells.", "abstract": "Detection of new genomic control elements is critical in understanding transcriptional regulatory networks in their entirety. We studied the genome-wide binding locations of three key regulatory proteins (POU5F1, also known as OCT4; NANOG; and CTCF) in human and mouse embryonic stem cells. In contrast to CTCF, we found that the binding profiles of OCT4 and NANOG are markedly different, with only approximately 5% of the regions being homologously occupied. We show that transposable elements contributed up to 25% of the bound sites in humans and mice and have wired new genes into the core regulatory network of embryonic stem cells. These data indicate that species-specific transposable elements have substantially altered the transcriptional circuitry of pluripotent stem cells." }, { "pmid": "21685454", "title": "Enhancers in embryonic stem cells are enriched for transposable elements and genetic variations associated with cancers.", "abstract": "Using an enhancer-associated epigenetic signature, we made genome-wide predictions of transcriptional enhancers in human B and T lymphocytes and embryonic stem cells (ES cells). We validated and characterized the predicted enhancers using four types of information, including overlap with other genomic marks for enhancers; association with cell-type-specific genes; enrichment of cell-type-specific transcription factor binding sites; and genetic polymorphisms in predicted enhancers. We find that enhancers from ES cells, but not B or T cells, are significantly enriched for DNA sequences derived from transposable elements. This may be due to the generally relaxed repressive epigenetic state and increased activity of transposable elements in ES cells. We demonstrate that the wealth of new enhancer sequences discerned here provides an invaluable resource for the functional annotation of gene-distal single nucleotide polymorphisms identified through expression quantitative trait loci and genome-wide association studies analyses. Notably, we find GWAS SNPs associated with various cancers are enriched in ES cell enhancers. In comparison, GWAS SNPs associated with diseases due to immune dysregulation are enriched in B and T cell enhancers." }, { "pmid": "21160473", "title": "A unique chromatin signature uncovers early developmental enhancers in humans.", "abstract": "Cell-fate transitions involve the integration of genomic information encoded by regulatory elements, such as enhancers, with the cellular environment. However, identification of genomic sequences that control human embryonic development represents a formidable challenge. Here we show that in human embryonic stem cells (hESCs), unique chromatin signatures identify two distinct classes of genomic elements, both of which are marked by the presence of chromatin regulators p300 and BRG1, monomethylation of histone H3 at lysine 4 (H3K4me1), and low nucleosomal density. In addition, elements of the first class are distinguished by the acetylation of histone H3 at lysine 27 (H3K27ac), overlap with previously characterized hESC enhancers, and are located proximally to genes expressed in hESCs and the epiblast. 
In contrast, elements of the second class, which we term 'poised enhancers', are distinguished by the absence of H3K27ac, enrichment of histone H3 lysine 27 trimethylation (H3K27me3), and are linked to genes inactive in hESCs and instead are involved in orchestrating early steps in embryogenesis, such as gastrulation, mesoderm formation and neurulation. Consistent with the poised identity, during differentiation of hESCs to neuroepithelium, a neuroectoderm-specific subset of poised enhancers acquires a chromatin signature associated with active enhancers. When assayed in zebrafish embryos, poised enhancers are able to direct cell-type and stage-specific expression characteristic of their proximal developmental gene, even in the absence of sequence conservation in the fish genome. Our data demonstrate that early developmental enhancers are epigenetically pre-marked in hESCs and indicate an unappreciated role of H3K27me3 at distal regulatory elements. Moreover, the wealth of new regulatory sequences identified here provides an invaluable resource for studies and isolation of transient, rare cell populations representing early stages of human embryogenesis." }, { "pmid": "22215806", "title": "A decade of 3C technologies: insights into nuclear organization.", "abstract": "Over the past 10 years, the development of chromosome conformation capture (3C) technology and the subsequent genomic variants thereof have enabled the analysis of nuclear organization at an unprecedented resolution and throughput. The technology relies on the original and, in hindsight, remarkably simple idea that digestion and religation of fixed chromatin in cells, followed by the quantification of ligation junctions, allows for the determination of DNA contact frequencies and insight into chromosome topology. Here we evaluate and compare the current 3C-based methods (including 4C [chromosome conformation capture-on-chip], 5C [chromosome conformation capture carbon copy], HiC, and ChIA-PET), summarize their contribution to our current understanding of genome structure, and discuss how shape influences genome function." }, { "pmid": "24565409", "title": "Active enhancer positions can be accurately predicted from chromatin marks and collective sequence motif data.", "abstract": "BACKGROUND\nTranscriptional regulation in multi-cellular organisms is a complex process involving multiple modular regulatory elements for each gene. Building whole-genome models of transcriptional networks requires mapping all relevant enhancers and then linking them to target genes. Previous methods of enhancer identification based either on sequence information or on epigenetic marks have different limitations stemming from incompleteness of each of these datasets taken separately.\n\n\nRESULTS\nIn this work we present a new approach for discovery of regulatory elements based on the combination of sequence motifs and epigenetic marks measured with ChIP-Seq. Our method uses supervised learning approaches to train a model describing the dependence of enhancer activity on sequence features and histone marks. Our results indicate that using combination of features provides superior results to previous approaches based on either one of the datasets. While histone modifications remain the dominant feature for accurate predictions, the models based on sequence motifs have advantages in their general applicability to different tissues. 
Additionally, we assess the relevance of different sequence motifs in prediction accuracy showing that even tissue-specific enhancer activity depends on multiple motifs.\n\n\nCONCLUSIONS\nBased on our results, we conclude that it is worthwhile to include sequence motif data into computational approaches to active enhancer prediction and also that classifiers trained on a specific set of enhancers can generalize with significant accuracy beyond the training set." }, { "pmid": "23771147", "title": "kmer-SVM: a web server for identifying predictive regulatory sequence features in genomic data sets.", "abstract": "Massively parallel sequencing technologies have made the generation of genomic data sets a routine component of many biological investigations. For example, Chromatin immunoprecipitation followed by sequence assays detect genomic regions bound (directly or indirectly) by specific factors, and DNase-seq identifies regions of open chromatin. A major bottleneck in the interpretation of these data is the identification of the underlying DNA sequence code that defines, and ultimately facilitates prediction of, these transcription factor (TF) bound or open chromatin regions. We have recently developed a novel computational methodology, which uses a support vector machine (SVM) with kmer sequence features (kmer-SVM) to identify predictive combinations of short transcription factor-binding sites, which determine the tissue specificity of these genomic assays (Lee, Karchin and Beer, Discriminative prediction of mammalian enhancers from DNA sequence. Genome Res. 2011; 21:2167-80). This regulatory information can (i) give confidence in genomic experiments by recovering previously known binding sites, and (ii) reveal novel sequence features for subsequent experimental testing of cooperative mechanisms. Here, we describe the development and implementation of a web server to allow the broader research community to independently apply our kmer-SVM to analyze and interpret their genomic datasets. We analyze five recently published data sets and demonstrate how this tool identifies accessory factors and repressive sequence elements. kmer-SVM is available at http://kmersvm.beerlab.org." }, { "pmid": "23019145", "title": "Integration of ChIP-seq and machine learning reveals enhancers and a predictive regulatory sequence vocabulary in melanocytes.", "abstract": "We take a comprehensive approach to the study of regulatory control of gene expression in melanocytes that proceeds from large-scale enhancer discovery facilitated by ChIP-seq; to rigorous validation in silico, in vitro, and in vivo; and finally to the use of machine learning to elucidate a regulatory vocabulary with genome-wide predictive power. We identify 2489 putative melanocyte enhancer loci in the mouse genome by ChIP-seq for EP300 and H3K4me1. We demonstrate that these putative enhancers are evolutionarily constrained, enriched for sequence motifs predicted to bind key melanocyte transcription factors, located near genes relevant to melanocyte biology, and capable of driving reporter gene expression in melanocytes in culture (86%; 43/50) and in transgenic zebrafish (70%; 7/10). Next, using the sequences of these putative enhancers as a training set for a supervised machine learning algorithm, we develop a vocabulary of 6-mers predictive of melanocyte enhancer function. Lastly, we demonstrate that this vocabulary has genome-wide predictive power in both the mouse and human genomes. 
This study provides deep insight into the regulation of gene expression in melanocytes and demonstrates a powerful approach to the investigation of regulatory sequences that can be applied to other cell types." }, { "pmid": "23925113", "title": "Charting a dynamic DNA methylation landscape of the human genome.", "abstract": "DNA methylation is a defining feature of mammalian cellular identity and is essential for normal development. Most cell types, except germ cells and pre-implantation embryos, display relatively stable DNA methylation patterns, with 70-80% of all CpGs being methylated. Despite recent advances, we still have a limited understanding of when, where and how many CpGs participate in genomic regulation. Here we report the in-depth analysis of 42 whole-genome bisulphite sequencing data sets across 30 diverse human cell and tissue types. We observe dynamic regulation for only 21.8% of autosomal CpGs within a normal developmental context, most of which are distal to transcription start sites. These dynamic CpGs co-localize with gene regulatory elements, particularly enhancers and transcription-factor-binding sites, which allow identification of key lineage-specific regulators. In addition, differentially methylated regions (DMRs) often contain single nucleotide polymorphisms associated with cell-type-related diseases as determined by genome-wide association studies. The results also highlight the general inefficiency of whole-genome bisulphite sequencing, as 70-80% of the sequencing reads across these data sets provided little or no relevant information about CpG methylation. To demonstrate further the utility of our DMR set, we use it to classify unknown samples and identify representative signature regions that recapitulate major DNA methylation dynamics. In summary, although in theory every CpG can change its methylation state, our results suggest that only a fraction does so as part of coordinated regulatory programs. Therefore, our selected DMRs can serve as a starting point to guide new, more effective reduced representation approaches to capture the most informative fraction of CpGs, as well as further pinpoint putative regulatory elements." }, { "pmid": "17130149", "title": "VISTA Enhancer Browser--a database of tissue-specific human enhancers.", "abstract": "Despite the known existence of distant-acting cis-regulatory elements in the human genome, only a small fraction of these elements has been identified and experimentally characterized in vivo. This paucity of enhancer collections with defined activities has thus hindered computational approaches for the genome-wide prediction of enhancers and their functions. To fill this void, we utilize comparative genome analysis to identify candidate enhancer elements in the human genome coupled with the experimental determination of their in vivo enhancer activity in transgenic mice [L. A. Pennacchio et al. (2006) Nature, in press]. These data are available through the VISTA Enhancer Browser (http://enhancer.lbl.gov). This growing database currently contains over 250 experimentally tested DNA fragments, of which more than 100 have been validated as tissue-specific enhancers. For each positive enhancer, we provide digital images of whole-mount embryo staining at embryonic day 11.5 and an anatomical description of the reporter gene expression pattern. 
Users can retrieve elements near single genes of interest, search for enhancers that target reporter gene expression to a particular tissue, or download entire collections of enhancers with a defined tissue specificity or conservation depth. These experimentally validated training sets are expected to provide a basis for a wide range of downstream computational and functional studies of enhancer function." } ]
Frontiers in Psychology
28018271
PMC5149550
10.3389/fpsyg.2016.01936
Comparing a Perceptual and an Automated Vision-Based Method for Lie Detection in Younger Children
The present study investigates how easily it can be detected whether a child is being truthful or not in a game situation, and it explores the cue validity of bodily movements for such type of classification. To achieve this, we introduce an innovative methodology – the combination of perception studies (in which eye-tracking technology is being used) and automated movement analysis. Film fragments from truthful and deceptive children were shown to human judges who were given the task to decide whether the recorded child was being truthful or not. Results reveal that judges are able to accurately distinguish truthful clips from lying clips in both perception studies. Even though the automated movement analysis for overall and specific body regions did not yield significant results between the experimental conditions, we did find a positive correlation between the amount of movement in a child and the perception of lies, i.e., the more movement the children exhibited during a clip, the higher the chance that the clip was perceived as a lie. The eye-tracking study revealed that, even when there is movement happening in different body regions, judges tend to focus their attention mainly on the face region. This is the first study that compares a perceptual and an automated method for the detection of deceptive behavior in children whose data have been elicited through an ecologically valid paradigm.
Related Work

Children’s Lying Behavior

Previous research suggests that children between 3 and 7 years old are quite good manipulators of their non-verbal behavior when lying, which makes discriminating between truth-tellers and lie-tellers very difficult (Lewis et al., 1989; Talwar and Lee, 2002a; Talwar et al., 2007). Most studies report that the detection of children’s lies is around or slightly above chance level, comparable to what has been claimed for adults (Bond and Depaulo, 2006; Edelstein et al., 2006). Yet, the extent to which children display non-verbal cues could be related to the kind of lie and to the circumstances under which it is told. There is evidence that children start lying from a very young age, as early as 2 1/2 years old, and lie-tellers between 3 and 7 years old are almost indistinguishable from truth-tellers (Newton et al., 2000; Talwar and Lee, 2002a). Around 3 years old, children are already able to tell “white lies”; before that, they mainly lie for self-serving purposes, such as avoiding punishment or winning a prize (Talwar and Lee, 2002b). Nevertheless, some research suggests that lie-tellers tend to exhibit slightly more positive non-verbal behaviors, such as smiles, relaxed and confident facial expressions, and a positive tone of voice (Lewis et al., 1989). However, other research suggests that children have poor control of their non-verbal behavior, which points in directions that conflict with what has previously been reported (Vrij et al., 2004; McCarthy and Lee, 2009). For instance, one study reported that children between 7 and 9 years old make less eye contact when lying than when telling the truth, whereas older children maintain longer eye contact, similar to what adults exhibit when lying (McCarthy and Lee, 2009). Another study suggests a decrease of movement during a lie-tell, particularly in the hands and fingers (Vrij et al., 2004). Furthermore, it has been reported that children tend to leak more cues to deception when they are more aware of their deceptive attempt: for example, children’s second attempts to lie (after having been told to repeat a previous lie) reveal more non-verbal cues in their facial expressions than their first attempts (Swerts, 2012; Swerts et al., 2013). These findings, according to the authors, might be explained by the ironic effect of lying, which states that lying becomes more difficult, and most likely less successful, if a person becomes more conscious of his or her behavior when trying to intentionally produce a deceiving message.

Non-verbal Cues to Lying

Because people are often highly skilled deceivers, accurate lie detection is in general very difficult for human judges, and lie detection accuracy is usually around or slightly above chance level (Bond and Depaulo, 2006; Porter and Ten Brinke, 2008; ten Brinke et al., 2012; Serras Pereira et al., 2014). However, most researchers in this field share the idea that there are certain verbal and non-verbal cues that may reveal whether a person is lying or not, and that deception detection accuracy is higher if both non-verbal and verbal cues are taken into account (Vrij et al., 2004). One line of research has focused on finding these cues by manipulating levels of cognitive load during a lie-tell, which makes lying more difficult and probably facilitates the emergence of deception cues (Vrij et al., 2006, 2008).
Other studies have focused on specific non-verbal cues that can disclose signals related to deception, such as stress and anxiety (DePaulo, 1988; Bond, 2012). In addition, one can sometimes distinguish truth-tellers from liars on the basis of particular micro-expressions, such as minor cues in the mouth or eye region (Ekman, 2009; Swerts, 2012), like pressed lips and certain types and frequencies of smiles (DePaulo et al., 2003). However, by their very nature, such micro-expressions are so subtle, and last only a few milliseconds, that they might escape a person’s attention, so deception detection tends to be a very difficult task. Another study suggests that emotional leakage is stronger in masked high-intensity expressions than in low-intensity ones, in both the upper and lower face (Porter et al., 2012). Furthermore, the greatest emotional leakage occurs during fear, whereas happiness shows the smallest leakage. Despite the effort to find deception cues in the face, results from many studies are frequently discrepant, and the supposed cues are often very subtle in nature (Feldman et al., 1979). Additionally, it has been argued that eye gaze can be a cue to deception, although the results from different studies are contradictory (Mann et al., 2002, 2004, 2013). According to one study, liars deliberately showed more eye contact than truth-tellers, whereas gaze aversion did not differ between truth-tellers and lie-tellers (Mann et al., 2013). In another study, deception was correlated with a decrease in blink rate, which appears to be associated with an increase in cognitive load (Mann et al., 2002). However, a different study reported the opposite result, emphasizing that blink rate rises while a genuine emotion is being masked in a deceptive expression (Porter and Ten Brinke, 2008). Body movement has also been suggested as a source for lie detection, but there are contradictory statements about the usefulness of this feature. On the one hand, some literature states that when lying, people tend to constrain their movements, even though it is unclear whether these restrictions are related to strategic overcompensation (DePaulo, 1988) or to the avoidance of deception leakage cues (Burgoon, 2005). In a similar vein, another study measured the continuous body movement of people in spontaneous lying situations and found that those who decided to lie showed significantly reduced bodily movement (Eapen et al., 2010). On the other hand, a study based on a dynamical systems perspective suggested continuous fluctuations of movement in the upper face, and moderately in the arms, during deception, which can be discriminated by dynamical properties of lower stability but greater complexity (Duran et al., 2013). Although these distinctions were present in the upper face, the study failed to find a significant difference in the total amount of movement between the deceptive and truthful conditions.
Moreover, when considering hand movements, another study found that lie-tellers tend to make more speech-prompting gestures, while truth-tellers make more rhythmic pulsing gestures (Hillman et al., 2012). In sum, despite the significant research on non-verbal cues for lie detection performed in recent years, results still seem to be very inconsistent and discrepant.

Automated Methods for Deception Detection

In the past few years, several efforts have been made to develop efficient methods for deception detection. Even though there is no clear consensus on the importance of non-verbal cues (see previous section), there has been a specific interest in the human face as the main source of cues for deception detection (Ekman, 2009; ten Brinke et al., 2012; Swerts et al., 2013). Many of these methods are based on the Facial Action Coding System (FACS) (Ekman and Friesen, 1976), usually taken as the reference method for detecting facial movements and expressions, which has thus also been applied to detecting facial cues to deception (ten Brinke et al., 2012). As a manual method, FACS is time-consuming and rather complex to apply, since it demands trained coders. More recently, automated measures have been used to help researchers understand and detect lies more efficiently and rapidly. An example is the Computer Expression Recognition Toolbox (CERT), a software tool that detects facial expressions in real time (Littlewort et al., 2011) and is based on FACS (Ekman and Friesen, 1976). It is able to identify the intensity of 19 different action units, as well as 6 basic emotions. Such automated procedures for detecting facial movements and micro-expressions can facilitate research on the non-verbal correlates of deception, but that obviously also depends on the accuracy with which these expressions can be detected and classified. One issue is that it is not immediately clear how well they would work on children’s faces. Additionally, novel automated measures are being used to investigate deception from different angles. Automated movement analysis is starting to be used for this purpose (Eapen et al., 2010; Duran et al., 2013; Serras Pereira et al., 2014). Eye tracking has also been used in several different ways for deception detection. Some studies (Wang et al., 2010) use eye tracking to try to define gaze patterns of liars versus truth-tellers; another option is to study the eye-gaze patterns of experts in deception detection. For instance, one study (Bond, 2008) reported that experts in deception detection, when deciding about a message’s veracity, are perceptually faster and more accurate, and seem to fixate their gaze on areas such as the face and/or body (arms, torso, and legs). Likewise, other studies have focused on whether deception detection can be achieved by measuring physiological data, such as brain activity, galvanic skin conductance, and thermography (Kozel et al., 2005; Ding et al., 2013; Van’t Veer et al., 2014). However, these methods are quite intrusive and not suitable for all contexts, especially when dealing with specific populations, such as children.

Current Study

In sum, considerable work is currently being done on the development of efficient automated methods to detect deception, but there is still a tendency to discard the body as a source of possible non-verbal cues.
In the future, such methods could be combined with what has been achieved via automated analysis of verbal cues (Benus et al., 2006) and gestures (Hillman et al., 2012) as potential sources for lie detection, since combining verbal and non-verbal cues has proven to be more accurate for lie detection (Vrij et al., 2004). Moreover, the inconsistency regarding the relevance and value of bodily cues for deception may partly be due to the use of different detection methods. This discrepancy is worth investigating in a more systematic way. Finally, most of the research with children focuses on developmental questions about lying. In this study, we are interested in exploring the non-verbal cues of such behavior, based on the assumption that children are less shaped by social rules and tend to leak more cues to deception when they are more aware of their deceptive effort (Swerts, 2012). Based on what is described above, this study presents a new approach to investigating non-verbal cues of deception. It investigates how easily it can be detected whether a child is being truthful or not in a game situation, in which the lies are more spontaneous and much closer to a normal social context. In addition, it explores the cue validity of bodily movements for this type of classification, using an original methodology – the combination of perception studies and automated movement analysis.
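At the core of the automated movement analysis mentioned above is a simple idea: quantify, frame by frame, how much of the recorded image changes while a child is speaking. The sketch below illustrates one common way to do this with frame differencing in Python (OpenCV/NumPy). It is a minimal, hypothetical illustration; the function name, the pixel-change threshold, and the choice to summarize a clip by its mean score are our own assumptions, not the exact procedure used in the studies cited above.

# Minimal frame-differencing sketch (illustrative only; assumptions noted above).
import cv2
import numpy as np

def movement_per_frame(video_path, threshold=20):
    # For each pair of consecutive frames, return the fraction of pixels whose
    # gray-level changed by more than `threshold` -- a crude proxy for the
    # amount of body movement visible in the clip.
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("Cannot read video: %s" % video_path)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev_gray)
        scores.append(float(np.mean(diff > threshold)))
        prev_gray = gray
    cap.release()
    return scores

# A clip-level movement score (e.g., the mean over frames) could then be compared
# between truthful and deceptive clips, or correlated with the proportion of
# judges who perceived the clip as a lie.

The same differencing can be restricted to regions of interest (for example, the face, arms, or trunk) by cropping each frame before computing the difference, which is how region-specific movement measures are typically obtained.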
[ "16859438", "17690956", "12555795", "23340482", "16729205", "22704035", "16185668", "12061624", "14769126", "18678376", "21463058", "21887961", "18997880", "16516533" ]
[ { "pmid": "16859438", "title": "Accuracy of deception judgments.", "abstract": "We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature." }, { "pmid": "17690956", "title": "Deception detection expertise.", "abstract": "A lively debate between Bond and Uysal (2007, Law and Human Behavior, 31, 109-115) and O'Sullivan (2007, Law and Human Behavior, 31, 117-123) concerns whether there are experts in deception detection. Two experiments sought to (a) identify expert(s) in detection and assess them twice with four tests, and (b) study their detection behavior using eye tracking. Paroled felons produced videotaped statements that were presented to students and law enforcement personnel. Two experts were identified, both female Native American BIA correctional officers. Experts were over 80% accurate in the first assessment, and scored at 90% accuracy in the second assessment. In Signal Detection analyses, experts showed high discrimination, and did not evidence biased responding. They exploited nonverbal cues to make fast, accurate decisions. These highly-accurate individuals can be characterized as experts in deception detection." }, { "pmid": "12555795", "title": "Cues to deception.", "abstract": "Do people behave differently when they are lying compared with when they are telling the truth? The combined results of 1,338 estimates of 158 cues to deception are reported. Results show that in some ways, liars are less forthcoming than truth tellers, and they tell less compelling tales. They also make a more negative impression and are more tense. Their stories include fewer ordinary imperfections and unusual contents. However, many behaviors showed no discernible links, or only weak links, to deceit. Cues to deception were more pronounced when people were motivated to succeed, especially when the motivations were identity relevant rather than monetary or material. Cues to deception were also stronger when lies were about transgressions." }, { "pmid": "23340482", "title": "Neural correlates of spontaneous deception: A functional near-infrared spectroscopy (fNIRS)study.", "abstract": "Deception is commonly seen in everyday social interactions. However, most of the knowledge about the underlying neural mechanism of deception comes from studies where participants were instructed when and how to lie. 
To study spontaneous deception, we designed a guessing game modeled after Greene and Paxton (2009) \"Proceedings of the National Academy of Sciences, 106(30), 12506-12511\", in which lying is the only way to achieve the performance level needed to end the game. We recorded neural responses during the game using near-infrared spectroscopy (NIRS). We found that when compared to truth-telling, spontaneous deception, like instructed deception, engenders greater involvement of such prefrontal regions as the left superior frontal gyrus. We also found that the correct-truth trials produced greater neural activities in the left middle frontal gyrus and right superior frontal gyrus than the incorrect-truth trials, suggesting the involvement of the reward system. Furthermore, the present study confirmed the feasibility of using NIRS to study spontaneous deception." }, { "pmid": "16729205", "title": "Detecting lies in children and adults.", "abstract": "In this study, observers' abilities to detect lies in children and adults were examined. Adult participants observed videotaped interviews of both children and adults either lying or telling the truth about having been touched by a male research assistant. As hypothesized, observers detected children's lies more accurately than adults' lies; however, adults' truthful statements were detected more accurately than were children's. Further analyses revealed that observers were biased toward judging adults' but not children's statements as truthful. Finally, consistent with the notion that there are stable individual differences in the ability to detect lies, observers who were highly accurate in detecting children's lies were similarly accurate in detecting adults' lies. Implications of these findings for understanding lie-detection accuracy are discussed, as are potential applications to the forensic context." }, { "pmid": "22704035", "title": "Young children can tell strategic lies after committing a transgression.", "abstract": "This study investigated whether young children make strategic decisions about whether to lie to conceal a transgression based on the lie recipient's knowledge. In Experiment 1, 168 3- to 5-year-olds were asked not to peek at the toy in the experimenter's absence, and the majority of children peeked. Children were questioned about their transgression in either the presence or absence of an eyewitness of their transgression. Whereas 4- and 5-year-olds were able to adjust their decisions of whether to lie based on the presence or absence of the eyewitness, 3-year-olds did not. Experiments 2 and 3 manipulated whether the lie recipient appeared to have learned information about children's peeking from an eyewitness or was merely bluffing. Results revealed that when the lie recipient appeared to be genuinely knowledgeable about their transgression, even 3-year-olds were significantly less likely to lie compared with when the lie recipient appeared to be bluffing. Thus, preschool children are able to make strategic decisions about whether to lie or tell the truth based on whether the lie recipient is genuinely knowledgeable about the true state of affairs." }, { "pmid": "16185668", "title": "Detecting deception using functional magnetic resonance imaging.", "abstract": "BACKGROUND\nThe ability to accurately detect deception is presently very limited. Detecting deception might be more accurately achieved by measuring the brain correlates of lying in an individual. 
In addition, a method to investigate the neurocircuitry of deception might provide a unique opportunity to test the neurocircuitry of persons in whom deception is a prominent component (i.e., conduct disorder, antisocial personality disorder, etc.).\n\n\nMETHODS\nIn this study, we used functional magnetic resonance imaging (fMRI) to show that specific regions were reproducibly activated when subjects deceived. Subjects participated in a mock crime stealing either a ring or a watch. While undergoing an fMRI, the subjects denied taking either object, thus telling the truth with some responses, and lying with others. A Model-Building Group (MBG, n = 30) was used to develop the analysis methods, and the methods were subsequently applied to an independent Model-Testing Group (MTG, n = 31).\n\n\nRESULTS\nWe were able to correctly differentiate truthful from deceptive responses, correctly identifying the object stolen, for 93% of the subjects in the MBG and 90% of the subjects in the MTG.\n\n\nCONCLUSIONS\nThis is the first study to use fMRI to detect deception at the individual level. Further work is required to determine how well this technology will work in different settings and populations." }, { "pmid": "12061624", "title": "Suspects, lies, and videotape: an analysis of authentic high-stake liars.", "abstract": "This study is one of the very few, and the most extensive to date, which has examined deceptive behavior in a real-life, high-stakes setting. The behavior of 16 suspects in their police interviews has been analyzed. Clips of video footage have been selected where other sources (reliable witness statements and forensic evidence) provide evidence that the suspect lied or told the truth. Truthful and deceptive behaviors were compared. The suspects blinked less frequently and made longer pauses during deceptive clips than during truthful clips. Eye contact was maintained equally for deceptive and truthful clips. These findings negate the popular belief amongst both laypersons and professional lie detectors (such as the police) that liars behave nervously by fidgeting and avoiding eye contact. However, large individual differences were present." }, { "pmid": "14769126", "title": "Detecting true lies: police officers' ability to detect suspects' lies.", "abstract": "Ninety-nine police officers, not identified in previous research as belonging to groups that are superior in lie detection, attempted to detect truths and lies told by suspects during their videotaped police interviews. Accuracy rates were higher than those typically found in deception research and reached levels similar to those obtained by specialized lie detectors in previous research. Accuracy was positively correlated with perceived experience in interviewing suspects and with mentioning cues to detecting deceit that relate to a suspect's story. Accuracy was negatively correlated with popular stereotypical cues such as gaze aversion and fidgeting. As in previous research, accuracy and confidence were not significantly correlated, but the level of confidence was dependent on whether officers judged actual truths or actual lies and on the method by which confidence was measured." }, { "pmid": "18678376", "title": "Children's knowledge of deceptive gaze cues and its relation to their actual lying behavior.", "abstract": "Eye gaze plays a pivotal role during communication. When interacting deceptively, it is commonly believed that the deceiver will break eye contact and look downward. 
We examined whether children's gaze behavior when lying is consistent with this belief. In our study, 7- to 15-year-olds and adults answered questions truthfully (Truth questions) or untruthfully (Lie questions) or answered questions that required thinking (Think questions). Younger participants (7- and 9-year-olds) broke eye contact significantly more when lying compared with other conditions. Also, their averted gaze when lying differed significantly from their gaze display in other conditions. In contrast, older participants did not differ in their durations of eye contact or averted gaze across conditions. Participants' knowledge about eye gaze and deception increased with age. This knowledge significantly predicted their actual gaze behavior when lying. These findings suggest that with increased age, participants became increasingly sophisticated in their use of display rule knowledge to conceal their deception." }, { "pmid": "21463058", "title": "Age-related differences in deception.", "abstract": "Young and older participants judged the veracity of young and older speakers' opinions about topical issues. All participants found it easier to judge when an older adult was lying relative to a young adult, and older adults were worse than young adults at telling when speakers were telling the truth versus lying. Neither young nor older adults were advantaged when judging a speaker from the same age group. Overall, older adults were more transparent as liars and were worse at detecting lies, with older adults' worse emotion recognition fully mediating the relation between age group and lie detection failures." }, { "pmid": "21887961", "title": "From little white lies to filthy liars: the evolution of honesty and deception in young children.", "abstract": "Though it is frequently condemned, lie-telling is a common and frequent activity in interpersonal interactions, with apparent social risks and benefits. The current review examines the development of deception among children. It is argued that early lying is normative, reflecting children's emerging cognitive and social development. Children lie to preserve self-interests as well as for the benefit of others. With age, children learn about the social norms that promote honesty while encouraging occasional prosocial lie-telling. Yet, lying can become a problem behavior with frequent or inappropriate use over time. Chronic lie-telling of any sort risks social consequences, such as the loss of credibility and damage to relationships. By middle childhood, chronic reliance on lying may be related to poor development of conscience, weak self-regulatory control, and antisocial behavior, and it could be indicative of maladjustment and put the individual in conflict with the environment. The goal of the current chapter is to capture the complexity of lying and build a preliminary understanding of how children's social experiences with their environments, their own dispositions, and their developing cognitive maturity interact, over time, to predict their lying behavior and, for some, their chronic and problem lying. Implications for fostering honesty in young children are discussed." }, { "pmid": "18997880", "title": "White lie-telling in children for politeness purposes.", "abstract": "Prosocial lie-telling behavior in children between 3 and 11 years of age was examined using an undesirable gift paradigm. In the first condition, children received an undesirable gift and were questioned by the gift-giver about whether they liked the gift. 
In the second condition, children were also given an undesirable gift but received parental encouragement to tell a white lie prior to being questioned by the gift-giver. In the third condition, the child's parent received an undesirable gift and the child was encouraged to lie on behalf of their parent. In all conditions, the majority of children told a white lie and this tendency increased with age. Coding of children's facial expressions using Ekman and Friesen's (1978) Facial Action Coding System revealed significant but small differences between lie-tellers and control children in terms of both positive and negative facial expressions. Detailed parental instruction facilitated children's display of appropriate verbal and nonverbal expressive behaviors when they received an undesirable gift." } ]
JMIR Medical Informatics
27903489
PMC5156821
10.2196/medinform.6373
Finding Important Terms for Patients in Their Electronic Health Records: A Learning-to-Rank Approach Using Expert Annotations
BackgroundMany health organizations allow patients to access their own electronic health record (EHR) notes through online patient portals as a way to enhance patient-centered care. However, EHR notes are typically long and contain abundant medical jargon that can be difficult for patients to understand. In addition, many medical terms in patients’ notes are not directly related to their health care needs. One way to help patients better comprehend their own notes is to reduce information overload and help them focus on medical terms that matter most to them. Interventions can then be developed by giving them targeted education to improve their EHR comprehension and the quality of care.ObjectiveWe aimed to develop a supervised natural language processing (NLP) system called Finding impOrtant medical Concepts most Useful to patientS (FOCUS) that automatically identifies and ranks medical terms in EHR notes based on their importance to the patients.MethodsFirst, we built an expert-annotated corpus. For each EHR note, 2 physicians independently identified medical terms important to the patient. Using the physicians’ agreement as the gold standard, we developed and evaluated FOCUS. FOCUS first identifies candidate terms from each EHR note using MetaMap and then ranks the terms using a support vector machine-based learn-to-rank algorithm. We explored rich learning features, including distributed word representation, Unified Medical Language System semantic type, topic features, and features derived from consumer health vocabulary. We compared FOCUS with 2 strong baseline NLP systems.ResultsPhysicians annotated 90 EHR notes and identified a mean of 9 (SD 5) important terms per note. The Cohen’s kappa annotation agreement was .51. The 10-fold cross-validation results show that FOCUS achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.940 for ranking candidate terms from EHR notes to identify important terms. When including term identification, the performance of FOCUS for identifying important terms from EHR notes was 0.866 AUC-ROC. Both performance scores significantly exceeded the corresponding baseline system scores (P<.001). Rich learning features contributed to FOCUS’s performance substantially.ConclusionsFOCUS can automatically rank terms from EHR notes based on their importance to patients. It may help develop future interventions that improve quality of care.
Related Works

Natural Language Processing Systems Facilitating Concept-Level Electronic Health Record Comprehension

There has been active research on linking medical terms to lay terms [11,30,31], consumer-oriented definitions [12], and educational materials [32], and on showing improved comprehension with such interventions [11,12].

On the issue of determining which medical terms to simplify, previous work used frequency-based and/or context-based approaches to check whether a term is unfamiliar to the average patient or has simpler synonyms [11,30,31]. Such work focuses on identifying difficult medical terms and treats these terms as equally important.

Our approach is different in 2 aspects: (1) we focus on finding important medical terms, which are not equivalent to difficult medical terms, as discussed in the Background and Significance subsection; and (2) our approach is patient-centered and prioritizes important terms for each EHR note of individual patients. We developed several learning features, including term frequency, term position, term frequency-inverse document frequency (TF-IDF), and a topic feature, to serve this purpose.

It is worth noting that our approach is complementary to previous work. For example, in a real-world application, we can display lay definitions for all the difficult medical terms in a patient’s EHR note and then highlight those terms that FOCUS predicts to be most important to this patient.

Single-Document Keyphrase Extraction

Our work is inspired by, but different from, single-document keyphrase extraction (KE), which identifies terms or phrases representing important concepts and topics in a document. KE targets topics that the writers wanted to convey when writing the documents. Unlike KE, our work does not focus on topics important to physicians (ie, the writers and the target readers when writing the EHR notes), but rather focuses on patients, the new readers of the notes.

Both supervised and unsupervised methods have been developed for KE [33]. We use supervised methods, which in general perform better than unsupervised ones when training data is available.

Most supervised methods formulate KE as a binary classification problem, and the confidence scores output by the classification algorithms are used to rank candidate phrases. Various algorithms have been explored, such as naïve Bayes, decision tree, bagging, support vector machine (SVM), multilayer perceptron, and random forest (RF) [34-43]. In our study, we implemented RF [43] as a strong baseline system.

KE in the biomedical domain has mainly focused on literature articles and on domain-specific methods and features [44-47]. For example, Li et al [44] developed a software tool called the keyphrase identification program (KIP) to extract keyphrases from medical articles. KIP used Medical Subject Headings (MeSH) as the knowledge base to compute a score reflecting a phrase’s domain specificity.
It assigned each candidate phrase a rank score by multiplying its within-document term frequency by its domain-specificity score.

Unlike the aforementioned approaches, we treat KE as a ranking problem and use the ranking SVM (rankSVM) approach [48], as it has been shown to be effective for KE in scientific literature, news, and weblogs [42].

Common learning features used by previous work include frequency-based features (eg, TF-IDF), term-related features (eg, the term itself, its position in a document, and its length), document structure-based features (eg, whether a term occurs in the title or abstract of a scientific paper), and syntactic features (eg, part-of-speech [POS] tags). Features derived from external resources, such as Wikipedia and query logs, have also been used to represent term importance [39,40]. Unlike previous work, we explored rich semantic features specifically available in the medical domain.

Medelyan and Witten [45] developed a system, KEA++, that extends the widely used keyphrase extraction algorithm KEA [34] by using semantic information from domain-specific thesauri. KEA++ has been applied to the medical domain, where it used the MeSH vocabulary to extract candidate phrases from medical articles and used MeSH concept relations to compute its domain-specific feature. In this study, we adapted KEA++ to the EHR data and used the adapted KEA++ as a strong baseline system.
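To make the ranking formulation concrete, the sketch below shows one standard way a pairwise ranking SVM can be realized: within each note, candidate terms with different importance labels are turned into feature-difference vectors, a linear SVM is trained on those pairs, and the learned weight vector is used to score and rank the candidate terms of a new note. This is a generic illustration in Python/scikit-learn; the toy features (TF-IDF, relative position, term length) and data are assumptions for the example and do not reproduce the exact rankSVM implementation or feature set used by FOCUS.

# Pairwise ranking-SVM sketch (illustrative; not the FOCUS implementation).
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

def pairwise_transform(X, y, groups):
    # Build difference vectors x_i - x_j for pairs of candidate terms drawn
    # from the same note (group) whose importance labels differ.
    X_pairs, y_pairs = [], []
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        for i, j in combinations(idx, 2):
            if y[i] == y[j]:
                continue
            X_pairs.append(X[i] - X[j])
            y_pairs.append(1 if y[i] > y[j] else -1)
    return np.array(X_pairs), np.array(y_pairs)

# Toy data: rows = candidate terms; columns = assumed features
# (TF-IDF, relative position in the note, term length in words).
X = np.array([[0.9, 0.1, 2], [0.2, 0.8, 1], [0.5, 0.5, 3],
              [0.7, 0.2, 2], [0.1, 0.9, 1]], dtype=float)
y = np.array([1, 0, 1, 1, 0])        # 1 = term judged important by physicians
groups = np.array([0, 0, 0, 1, 1])   # which note each term came from

X_pairs, y_pairs = pairwise_transform(X, y, groups)
ranker = LinearSVC(C=1.0).fit(X_pairs, y_pairs)

# Rank the candidate terms of a new note by their scores w.x
# (higher score = predicted to be more important to the patient).
new_terms = np.array([[0.6, 0.3, 2], [0.3, 0.7, 1]], dtype=float)
scores = new_terms @ ranker.coef_.ravel()
print(np.argsort(-scores))

The key design point in this formulation is that comparisons are made only within a note, so the model learns what makes one term more important than another in the same document rather than across patients.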
[ "19224738", "24359554", "26104044", "20643992", "23027317", "23407012", "23535584", "17911889", "21347002", "23920650", "9594918", "18811992", "25419896", "25661679", "14965405", "18693866", "12923796", "11103725", "1517087", "20845203", "23978618", "26681155", "20442139", "18693956", "26306273", "16545986", "20819853", "24729964", "25669328", "26291578", "16221948", "10566330", "11604772", "11720966", "12425240", "14728258", "16779162", "18436906" ]
[ { "pmid": "24359554", "title": "The Medicare Electronic Health Record Incentive Program: provider performance on core and menu measures.", "abstract": "OBJECTIVE\nTo measure performance by eligible health care providers on CMS's meaningful use measures.\n\n\nDATA SOURCE\nMedicare Electronic Health Record Incentive Program Eligible Professionals Public Use File (PUF), which contains data on meaningful use attestations by 237,267 eligible providers through May 31, 2013.\n\n\nSTUDY DESIGN\nCross-sectional analysis of the 15 core and 10 menu measures pertaining to use of EHR functions reported in the PUF.\n\n\nPRINCIPAL FINDINGS\nProviders in the dataset performed strongly on all core measures, with the most frequent response for each of the 15 measures being 90-100 percent compliance, even when the threshold for a particular measure was lower (e.g., 30 percent). PCPs had higher scores than specialists for computerized order entry, maintaining an active medication list, and documenting vital signs, while specialists had higher scores for maintaining a problem list, recording patient demographics and smoking status, and for providing patients with an after-visit summary. In fact, 90.2 percent of eligible providers claimed at least one exclusion, and half claimed two or more.\n\n\nCONCLUSIONS\nProviders are successfully attesting to CMS's requirements, and often exceeding the thresholds required by CMS; however, some troubling patterns in exclusions are present. CMS should raise program requirements in future years." }, { "pmid": "26104044", "title": "Patient Portals and Patient Engagement: A State of the Science Review.", "abstract": "BACKGROUND\nPatient portals (ie, electronic personal health records tethered to institutional electronic health records) are recognized as a promising mechanism to support greater patient engagement, yet questions remain about how health care leaders, policy makers, and designers can encourage adoption of patient portals and what factors might contribute to sustained utilization.\n\n\nOBJECTIVE\nThe purposes of this state of the science review are to (1) present the definition, background, and how current literature addresses the encouragement and support of patient engagement through the patient portal, and (2) provide a summary of future directions for patient portal research and development to meaningfully impact patient engagement.\n\n\nMETHODS\nWe reviewed literature from 2006 through 2014 in PubMed, Ovid Medline, and PsycInfo using the search terms \"patient portal\" OR \"personal health record\" OR \"electronic personal health record\". Final inclusion criterion dictated that studies report on the patient experience and/or ways that patients may be supported to make competent health care decisions and act on those decisions using patient portal functionality.\n\n\nRESULTS\nWe found 120 studies that met the inclusion criteria. Based on the research questions, explicit and implicit aims of the studies, and related measures addressed, the studies were grouped into five major topics (patient adoption, provider endorsement, health literacy, usability, and utility). We discuss the findings and conclusions of studies that address the five topical areas.\n\n\nCONCLUSIONS\nCurrent research has demonstrated that patients' interest and ability to use patient portals is strongly influenced by personal factors such age, ethnicity, education level, health literacy, health status, and role as a caregiver. 
Health care delivery factors, mainly provider endorsement and patient portal usability also contribute to patient's ability to engage through and with the patient portal. Future directions of research should focus on identifying specific populations and contextual considerations that would benefit most from a greater degree of patient engagement through a patient portal. Ultimately, adoption by patients and endorsement by providers will come when existing patient portal features align with patients' and providers' information needs and functionality." }, { "pmid": "20643992", "title": "Open notes: doctors and patients signing on.", "abstract": "Few patients read their doctors' notes, despite having the legal right to do so. As information technology makes medical records more accessible and society calls for greater transparency, patients' interest in reading their doctors' notes may increase. Inviting patients to review these notes could improve understanding of their health, foster productive communication, stimulate shared decision making, and ultimately lead to better outcomes. Yet, easy access to doctors' notes could have negative consequences, such as confusing or worrying patients and complicating rather than improving patient-doctor communication. To gain evidence about the feasibility, benefits, and harms of providing patients ready access to electronic doctors' notes, a team of physicians and nurses have embarked on a demonstration and evaluation of a project called OpenNotes. The authors describe the intervention and share what they learned from conversations with doctors and patients during the planning stages. The team anticipates that \"open notes\" will spread and suggests that over time, if drafted collaboratively and signed by both doctors and patients, they might evolve to become contracts for care." }, { "pmid": "23027317", "title": "Inviting patients to read their doctors' notes: a quasi-experimental study and a look ahead.", "abstract": "BACKGROUND\nLittle information exists about what primary care physicians (PCPs) and patients experience if patients are invited to read their doctors' office notes.\n\n\nOBJECTIVE\nTo evaluate the effect on doctors and patients of facilitating patient access to visit notes over secure Internet portals.\n\n\nDESIGN\nQuasi-experimental trial of PCPs and patient volunteers in a year-long program that provided patients with electronic links to their doctors' notes.\n\n\nSETTING\nPrimary care practices at Beth Israel Deaconess Medical Center (BIDMC) in Massachusetts, Geisinger Health System (GHS) in Pennsylvania, and Harborview Medical Center (HMC) in Washington.\n\n\nPARTICIPANTS\n105 PCPs and 13,564 of their patients who had at least 1 completed note available during the intervention period.\n\n\nMEASUREMENTS\nPortal use and electronic messaging by patients and surveys focusing on participants' perceptions of behaviors, benefits, and negative consequences.\n\n\nRESULTS\n11,155 [corrected] of 13,564 patients with visit notes available opened at least 1 note (84% at BIDMC, 82% [corrected] at GHS, and 47% at HMC). 
Frontiers in Neuroscience
28066170
PMC5177654
10.3389/fnins.2016.00587
An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces
Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input; rather, target position is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and a higher Fitts's law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well.
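For context on the closed-loop performance metric reported above, the short sketch below computes a Fitts's-law-style bit rate from target distance, target width, and movement time. It is a minimal illustration assuming the Shannon form of the index of difficulty, ID = log2(distance/width + 1); the exact formulation and target-width convention used in the study (e.g., for circular targets) may differ, and the function name and example numbers are hypothetical.

import math

def fitts_bit_rate(distance, width, movement_time_s):
    """Fitts's-law-style bit rate in bits per second (assumed Shannon form)."""
    index_of_difficulty = math.log2(distance / width + 1.0)  # bits
    return index_of_difficulty / movement_time_s              # bits per second

# Hypothetical example: an 8 cm reach to a 2 cm-wide target acquired in 1.56 s
print(round(fitts_bit_rate(distance=8.0, width=2.0, movement_time_s=1.56), 3))

Under this convention, a decoder that acquires smaller or more distant targets in the same time, or the same targets faster, earns a higher bit rate, which is why movement time and bit rate are reported together.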
Related work

Reviews of research in decoding for BMIs can be found elsewhere (Homer M. L. et al., 2013; Andersen et al., 2014; Baranauskas, 2014; Bensmaia and Miller, 2014; Kao et al., 2014; Li, 2014). Here we discuss the decoders compared in the present study.

The improved unscented Kalman filter decoder proposed in this study is a development of our previous unscented Kalman filter decoder (Li et al., 2009). That filter, which we refer to here as UKF1, used an encoding model with non-linear dependence on kinematic variables, which modeled tuning to the speed, or velocity magnitude, of movements. The UKF1 modeled tuning at multiple temporal offsets, using an n-th order hidden Markov model framework in which n taps of kinematics (n = 10 was tested) are held in the state space. Encoding studies by Paninski et al. (2004a,b), Hatsopoulos et al. (2007), Hatsopoulos and Amit (2012), and Saleh et al. (2010) found tuning to position and velocity trajectories, called movement fragments or pathlets. The n-th order framework makes the encoding model of the UKF1 flexible enough to capture such tuning. Even though including taps of position also indirectly includes velocity, explicitly including taps of velocity reduces the amount of non-linearity needed in the neural encoding model, which helps improve the approximation accuracy of the UKF. On the basis of the UKF1, we expand the neural encoding model and add decoder engineering improvements developed by ourselves and other groups to make the UKF2.

The ReFIT Kalman filter (Gilja et al., 2012) has demonstrated a high communication bit rate by using two advances in decoder engineering. In closed-loop experiments, we compared the UKF2 with the FIT Kalman filter (Fan et al., 2014), which is similar to the ReFIT Kalman filter in using position-as-feedback and intention estimation but does not have the online re-training component. The bin size in this study, 50 ms, was the same as in the Gilja et al. study. Our Fitts's law bit rate values for the FIT Kalman filter are lower than those reported by Gilja et al. for the ReFIT Kalman filter, likely due to a combination of factors. First, online re-training separates the FIT and ReFIT Kalman filters. In terms of experimental setup, Gilja et al. used video tracking of natural reaching movements, whereas we used a joystick during hand control of the cursor; we used a joystick because of the limitations of our experimental platform and to allow comparison with our previous work (Li et al., 2009). The unnatural joystick made our task more difficult: the mean movement time during hand control in our task was approximately double that reported by Gilja et al. While we used the same Fitts's law bit rate measure, our task used circular targets, which have a smaller acceptance area than the square targets of Gilja et al. for the same width. We used circular targets because whether the cursor is within the target can then be determined naturally with a simple distance criterion. Finally, we spike sorted our recordings and did not include unsorted ("hash") threshold crossings, whereas Gilja et al. used threshold crossing counts.
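To make the n-th order framework described above concrete, the following minimal sketch builds an augmented state holding n taps of 2-D position and velocity, a transition that shifts the taps back by one bin, and a toy observation model with a speed (velocity-magnitude) term, the kind of non-linear feature that motivates an unscented rather than a standard Kalman filter. The dimensions, the noise-free observation function, and all parameter values are illustrative assumptions, not the authors' fitted encoding model.

import numpy as np

N_TAPS = 10   # taps of kinematics held in the state (e.g., 50 ms bins)
DIM = 4       # [x, y, vx, vy] per tap
N_NEURONS = 32

def augment_state(kin_history):
    """Stack the most recent N_TAPS kinematic samples, newest first."""
    return np.concatenate(kin_history[:N_TAPS])           # shape (N_TAPS * DIM,)

def transition(state, dt=0.05):
    """Shift older taps back one bin and integrate the newest tap forward."""
    new_state = np.empty_like(state)
    new_state[DIM:] = state[:-DIM]                         # taps t-1, ..., t-n+1
    x, y, vx, vy = state[:DIM]
    new_state[:DIM] = [x + vx * dt, y + vy * dt, vx, vy]   # newest tap
    return new_state

def observation(state, H, speed_gain):
    """Predicted firing rates: linear in all taps plus a speed term."""
    speed = np.hypot(state[2], state[3])                   # |velocity| of newest tap
    return H @ state + speed_gain * speed

rng = np.random.default_rng(0)
H = rng.normal(scale=0.1, size=(N_NEURONS, N_TAPS * DIM))  # hypothetical linear tuning weights
speed_gain = rng.normal(scale=0.5, size=N_NEURONS)         # hypothetical speed-tuning gains

history = [rng.normal(size=DIM) for _ in range(N_TAPS)]    # fake kinematic history
rates = observation(transition(augment_state(history)), H, speed_gain)
print(rates.shape)                                         # (32,)

In a full decoder, such a non-linear observation model would be handled with the unscented transform, propagating sigma points drawn around the current state estimate through the observation function rather than linearizing it.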
[ "23536714", "26336135", "25247368", "7703686", "24808833", "24739786", "21765538", "2073945", "20943945", "4966614", "24654266", "7760138", "23160043", "24717350", "21939762", "17494696", "23862678", "14624237", "1510294", "26220660", "26079746", "17943613", "20359500", "25076875", "19603074", "25203982", "13679402", "15456829", "25174005", "26655766", "21159978", "22279207", "14726593", "12522173", "23143511", "23047892", "23593130", "15356183", "19966837", "24760860", "20841635", "23428966", "15188861", "16354382", "19497822", "24949462" ]
[ { "pmid": "23536714", "title": "State-based decoding of hand and finger kinematics using neuronal ensemble and LFP activity during dexterous reach-to-grasp movements.", "abstract": "The performance of brain-machine interfaces (BMIs) that continuously control upper limb neuroprostheses may benefit from distinguishing periods of posture and movement so as to prevent inappropriate movement of the prosthesis. Few studies, however, have investigated how decoding behavioral states and detecting the transitions between posture and movement could be used autonomously to trigger a kinematic decoder. We recorded simultaneous neuronal ensemble and local field potential (LFP) activity from microelectrode arrays in primary motor cortex (M1) and dorsal (PMd) and ventral (PMv) premotor areas of two male rhesus monkeys performing a center-out reach-and-grasp task, while upper limb kinematics were tracked with a motion capture system with markers on the dorsal aspect of the forearm, hand, and fingers. A state decoder was trained to distinguish four behavioral states (baseline, reaction, movement, hold), while a kinematic decoder was trained to continuously decode hand end point position and 18 joint angles of the wrist and fingers. LFP amplitude most accurately predicted transition into the reaction (62%) and movement (73%) states, while spikes most accurately decoded arm, hand, and finger kinematics during movement. Using an LFP-based state decoder to trigger a spike-based kinematic decoder [r = 0.72, root mean squared error (RMSE) = 0.15] significantly improved decoding of reach-to-grasp movements from baseline to final hold, compared with either a spike-based state decoder combined with a spike-based kinematic decoder (r = 0.70, RMSE = 0.17) or a spike-based kinematic decoder alone (r = 0.67, RMSE = 0.17). Combining LFP-based state decoding with spike-based kinematic decoding may be a valuable step toward the realization of BMI control of a multifingered neuroprosthesis performing dexterous manipulation." }, { "pmid": "26336135", "title": "Inference and Decoding of Motor Cortex Low-Dimensional Dynamics via Latent State-Space Models.", "abstract": "Motor cortex neuronal ensemble spiking activity exhibits strong low-dimensional collective dynamics (i.e., coordinated modes of activity) during behavior. Here, we demonstrate that these low-dimensional dynamics, revealed by unsupervised latent state-space models, can provide as accurate or better reconstruction of movement kinematics as direct decoding from the entire recorded ensemble. Ensembles of single neurons were recorded with triple microelectrode arrays (MEAs) implanted in ventral and dorsal premotor (PMv, PMd) and primary motor (M1) cortices while nonhuman primates performed 3-D reach-to-grasp actions. Low-dimensional dynamics were estimated via various types of latent state-space models including, for example, Poisson linear dynamic system (PLDS) models. Decoding from low-dimensional dynamics was implemented via point process and Kalman filters coupled in series. We also examined decoding based on a predictive subsampling of the recorded population. In this case, a supervised greedy procedure selected neuronal subsets that optimized decoding performance. When comparing decoding based on predictive subsampling and latent state-space models, the size of the neuronal subset was set to the same number of latent state dimensions. 
Overall, our findings suggest that information about naturalistic reach kinematics present in the recorded population is preserved in the inferred low-dimensional motor cortex dynamics. Furthermore, decoding based on unsupervised PLDS models may also outperform previous approaches based on direct decoding from the recorded population or on predictive subsampling." }, { "pmid": "25247368", "title": "Toward more versatile and intuitive cortical brain-machine interfaces.", "abstract": "Brain-machine interfaces have great potential for the development of neuroprosthetic applications to assist patients suffering from brain injury or neurodegenerative disease. One type of brain-machine interface is a cortical motor prosthetic, which is used to assist paralyzed subjects. Motor prosthetics to date have typically used the motor cortex as a source of neural signals for controlling external devices. The review will focus on several new topics in the arena of cortical prosthetics. These include using: recordings from cortical areas outside motor cortex; local field potentials as a source of recorded signals; somatosensory feedback for more dexterous control of robotics; and new decoding methods that work in concert to form an ecology of decode algorithms. These new advances promise to greatly accelerate the applicability and ease of operation of motor prosthetics." }, { "pmid": "7703686", "title": "Movement parameters and neural activity in motor cortex and area 5.", "abstract": "The relations of ongoing single-cell activity in the arm area of the motor cortex and area 5 to parameters of evolving arm movements in two-dimensional (2D) space were investigated. A multiple linear regression model was used in which the ongoing impulse activity of cells at time t + tau was expressed as a function of the (X, Y) components of the target direction and of position, velocity, and acceleration of the hand at time t, where tau was a time shift (-200 to +200 msec). Analysis was done on 290 cells in the motor cortex and 207 cells in area 5. The time shift at which the highest coefficient of determination (R2) was observed was determined and the statistical significance of the model tested. The median R2 was 0.581 and 0.530 for motor cortex and area 5, respectively. The median shift at which the highest R2 was observed was -90 and +30 msec for motor cortex and area 5, respectively. For most cells statistically significant relations were observed to all four parameters tested; most prominent were the relations to target direction and least prominent those to acceleration." }, { "pmid": "24808833", "title": "What limits the performance of current invasive brain machine interfaces?", "abstract": "The concept of a brain-machine interface (BMI) or a computer-brain interface is simple: BMI creates a communication pathway for a direct control by brain of an external device. In reality BMIs are very complex devices and only recently the increase in computing power of microprocessors enabled a boom in BMI research that continues almost unabated to this date, the high point being the insertion of electrode arrays into the brains of 5 human patients in a clinical trial run by Cyberkinetics with few other clinical tests still in progress. Meanwhile several EEG-based BMI devices (non-invasive BMIs) were launched commercially. Modern electronics and dry electrode technology made possible to drive the cost of some of these devices below few hundred dollars. 
However, the initial excitement of the direct control by brain waves of a computer or other equipment is dampened by large efforts required for learning, high error rates and slow response speed. All these problems are directly related to low information transfer rates typical for such EEG-based BMIs. In invasive BMIs employing multiple electrodes inserted into the brain one may expect much higher information transfer rates than in EEG-based BMIs because, in theory, each electrode provides an independent information channel. However, although invasive BMIs require more expensive equipment and have ethical problems related to the need to insert electrodes in the live brain, such financial and ethical costs are often not offset by a dramatic improvement in the information transfer rate. Thus the main topic of this review is why in invasive BMIs an apparently much larger information content obtained with multiple extracellular electrodes does not translate into much higher rates of information transfer? This paper explores possible answers to this question by concluding that more research on what movement parameters are encoded by neurons in motor cortex is needed before we can enjoy the next generation BMIs." }, { "pmid": "24739786", "title": "Restoring sensorimotor function through intracortical interfaces: progress and looming challenges.", "abstract": "The loss of a limb or paralysis resulting from spinal cord injury has devastating consequences on quality of life. One approach to restoring lost sensory and motor abilities in amputees and patients with tetraplegia is to supply them with implants that provide a direct interface with the CNS. Such brain-machine interfaces might enable a patient to exert voluntary control over a prosthetic or robotic limb or over the electrically induced contractions of paralysed muscles. A parallel interface could convey sensory information about the consequences of these movements back to the patient. Recent developments in the algorithms that decode motor intention from neuronal activity and in approaches to convey sensory feedback by electrically stimulating neurons, using biomimetic and adaptation-based approaches, have shown the promise of invasive interfaces with sensorimotor cortices, although substantial challenges remain." }, { "pmid": "21765538", "title": "Statistical Signal Processing and the Motor Cortex.", "abstract": "Over the past few decades, developments in technology have significantly improved the ability to measure activity in the brain. This has spurred a great deal of research into brain function and its relation to external stimuli, and has important implications in medicine and other fields. As a result of improved understanding of brain function, it is now possible to build devices that provide direct interfaces between the brain and the external world. We describe some of the current understanding of function of the motor cortex region. We then discuss a typical likelihood-based state-space model and filtering based approach to address the problems associated with building a motor cortical-controlled cursor or robotic prosthetic device. As a variation on previous work using this approach, we introduce the idea of using Markov chain Monte Carlo methods for parameter estimation in this context. By doing this instead of performing maximum likelihood estimation, it is possible to expand the range of possible models that can be explored, at a cost in terms of computational load. 
We demonstrate results obtained applying this methodology to experimental data gathered from a monkey." }, { "pmid": "2073945", "title": "Shift of preferred directions of premotor cortical cells with arm movements performed across the workspace.", "abstract": "The activity of 156 neurons was recorded in the premotor cortex (Weinrich and Wise 1982) and in an adjoining rostral region of area 6 (area 6 DR; Barbas and Pandya 1987) while monkeys made visually-guided arm movements of similar direction within different parts of space. The activity of individual neurons varied most for a given preferred direction of movement within each part of space. These neurons (152/156, 97.4%) were labeled as directional. The spatial orientation of their preferred directions shifted in space to \"follow\" the rotation of the shoulder joint necessary to bring the arm into the different parts of the work-space. These results suggest that the cortical areas studied represent arm movement direction within a coordinate system rotating with the arm and where signals about the movement direction relate to the motor plan through a simple invariant relationship, that between cell preferred direction and arm orientation in space." }, { "pmid": "20943945", "title": "A closed-loop human simulator for investigating the role of feedback control in brain-machine interfaces.", "abstract": "Neural prosthetic systems seek to improve the lives of severely disabled people by decoding neural activity into useful behavioral commands. These systems and their decoding algorithms are typically developed \"offline,\" using neural activity previously gathered from a healthy animal, and the decoded movement is then compared with the true movement that accompanied the recorded neural activity. However, this offline design and testing may neglect important features of a real prosthesis, most notably the critical role of feedback control, which enables the user to adjust neural activity while using the prosthesis. We hypothesize that understanding and optimally designing high-performance decoders require an experimental platform where humans are in closed-loop with the various candidate decode systems and algorithms. It remains unexplored the extent to which the subject can, for a particular decode system, algorithm, or parameter, engage feedback and other strategies to improve decode performance. Closed-loop testing may suggest different choices than offline analyses. Here we ask if a healthy human subject, using a closed-loop neural prosthesis driven by synthetic neural activity, can inform system design. We use this online prosthesis simulator (OPS) to optimize \"online\" decode performance based on a key parameter of a current state-of-the-art decode algorithm, the bin width of a Kalman filter. First, we show that offline and online analyses indeed suggest different parameter choices. Previous literature and our offline analyses agree that neural activity should be analyzed in bins of 100- to 300-ms width. OPS analysis, which incorporates feedback control, suggests that much shorter bin widths (25-50 ms) yield higher decode performance. Second, we confirm this surprising finding using a closed-loop rhesus monkey prosthetic system. These findings illustrate the type of discovery made possible by the OPS, and so we hypothesize that this novel testing approach will help in the design of prosthetic systems that will translate well to human patients." 
}, { "pmid": "24654266", "title": "Intention estimation in brain-machine interfaces.", "abstract": "OBJECTIVE\nThe objective of this work was to quantitatively investigate the mechanisms underlying the performance gains of the recently reported 'recalibrated feedback intention-trained Kalman Filter' (ReFIT-KF).\n\n\nAPPROACH\nThis was accomplished by designing variants of the ReFIT-KF algorithm and evaluating training and online data to understand the neural basis of this improvement. We focused on assessing the contribution of two training set innovations of the ReFIT-KF algorithm: intention estimation and the two-stage training paradigm.\n\n\nMAIN RESULTS\nWithin the two-stage training paradigm, we found that intention estimation independently increased target acquisition rates by 37% and 59%, respectively, across two monkeys implanted with multiunit intracortical arrays. Intention estimation improved performance by enhancing the tuning properties and the mutual information between the kinematic and neural training data. Furthermore, intention estimation led to fewer shifts in channel tuning between the training set and online control, suggesting that less adaptation was required during online control. Retraining the decoder with online BMI training data also reduced shifts in tuning, suggesting a benefit of training a decoder in the same behavioral context; however, retraining also led to slower online decode velocities. Finally, we demonstrated that one- and two-stage training paradigms performed comparably when intention estimation is applied.\n\n\nSIGNIFICANCE\nThese findings highlight the utility of intention estimation in reducing the need for adaptive strategies and improving the online performance of BMIs, helping to guide future BMI design decisions." }, { "pmid": "7760138", "title": "Temporal encoding of movement kinematics in the discharge of primate primary motor and premotor neurons.", "abstract": "1. Several neurophysiological studies of the primary motor and premotor cortices have shown that the movement parameters direction, distance, and target position are correlated with the discharge of single neurons. Here we investigate whether the correlations with these parameters occur simultaneously (i.e., parallel processing), or sequentially (i.e., serial processing). 2. The single-unit data used for the analyses presented in this paper are the same as those used in our earlier study of neuronal specification of movement parameters. We recorded the activity of single neurons in the primary motor and premotor cortices of two rhesus monkeys (Macaca mulatta) while the animals performed reaching movements made in a horizontal plane. Specifically, the animals moved from a centrally located start position to 1 of 48 targets (1 cm2) placed at eight different directions (0-360 degrees in 45 degrees intervals) and six distances (1.4-5.4 cm in 0.8-cm increments) from the start position. 3. We analyzed 130 task-related cells; of these, 127 (99 in primary motor cortex, 28 near the superior precentral sulcus) had average discharges that were significantly modulated with the movement and were related to movement direction, distance, or target position. To determine the temporal profile of the correlation of each cell's discharge with the three parameters, we performed a regression analysis of the neural discharge. We calculated partial R2s for each parameter and the total R2 for the model as a function of time. 4. 
The discharge of the majority of units (73.2%) was significantly correlated for some time with all three parameters. Other units were found that correlated with different combinations of pairs of parameters (21.3%), and a small number of units appeared to code for only one parameter (5.5%). There was no obvious difference in the presence of correlations between cells recorded in the primary motor versus premotor cortices. 5. On average we found a clear temporal segregation and ordering in the onset of the parameter-related partial R2 values: direction-related discharge occurred first (115 ms before movement onset), followed sequentially by target position (57 ms after movement onset) and movement distance (248 ms after movement onset). Some overlap in the timing of the correlation of these parameters was evident. We found a similar sequential ordering for the latency of the peak of the R2 curves (48, 254, and 515 ms after movement onset, respectively, for direction, target position, and distance).(ABSTRACT TRUNCATED AT 400 WORDS)" }, { "pmid": "23160043", "title": "A high-performance neural prosthesis enabled by control algorithm design.", "abstract": "Neural prostheses translate neural activity from the brain into control signals for guiding prosthetic devices, such as computer cursors and robotic limbs, and thus offer individuals with disabilities greater interaction with the world. However, relatively low performance remains a critical barrier to successful clinical translation; current neural prostheses are considerably slower, with less accurate control, than the native arm. Here we present a new control algorithm, the recalibrated feedback intention-trained Kalman filter (ReFIT-KF) that incorporates assumptions about the nature of closed-loop neural prosthetic control. When tested in rhesus monkeys implanted with motor cortical electrode arrays, the ReFIT-KF algorithm outperformed existing neural prosthetic algorithms in all measured domains and halved target acquisition time. This control algorithm permits sustained, uninterrupted use for hours and generalizes to more challenging tasks without retraining. Using this algorithm, we demonstrate repeatable high performance for years after implantation in two monkeys, thereby increasing the clinical viability of neural prostheses." }, { "pmid": "24717350", "title": "Motor cortical control of movement speed with implications for brain-machine interface control.", "abstract": "Motor cortex plays a substantial role in driving movement, yet the details underlying this control remain unresolved. We analyzed the extent to which movement-related information could be extracted from single-trial motor cortical activity recorded while monkeys performed center-out reaching. Using information theoretic techniques, we found that single units carry relatively little speed-related information compared with direction-related information. This result is not mitigated at the population level: simultaneously recorded population activity predicted speed with significantly lower accuracy relative to direction predictions. Furthermore, a unit-dropping analysis revealed that speed accuracy would likely remain lower than direction accuracy, even given larger populations. These results suggest that the instantaneous details of single-trial movement speed are difficult to extract using commonly assumed coding schemes. 
This apparent paucity of speed information takes particular importance in the context of brain-machine interfaces (BMIs), which rely on extracting kinematic information from motor cortex. Previous studies have highlighted subjects' difficulties in holding a BMI cursor stable at targets. These studies, along with our finding of relatively little speed information in motor cortex, inspired a speed-dampening Kalman filter (SDKF) that automatically slows the cursor upon detecting changes in decoded movement direction. Effectively, SDKF enhances speed control by using prevalent directional signals, rather than requiring speed to be directly decoded from neural activity. SDKF improved success rates by a factor of 1.7 relative to a standard Kalman filter in a closed-loop BMI task requiring stable stops at targets. BMI systems enabling stable stops will be more effective and user-friendly when translated into clinical applications." }, { "pmid": "21939762", "title": "Synthesizing complex movement fragment representations from motor cortical ensembles.", "abstract": "We have previously shown that the responses of primary motor cortical neurons are more accurately predicted if one assumes that individual neurons encode temporally-extensive movement fragments or preferred trajectories instead of static movement parameters (Hatsopoulos et al., 2007). Building on these findings, we examine here how these preferred trajectories can be combined to generate a rich variety of preferred movement trajectories when neurons fire simultaneously. Specifically, we used a generalized linear model to fit each neuron's spike rate to an exponential function of the inner product between the actual movement trajectory and the preferred trajectory; then, assuming conditional independence, when two neurons fire simultaneously their spiking probabilities multiply implying that their preferred trajectories add. We used a similar exponential model to fit the probability of simultaneous firing and found that the majority of neuron pairs did combine their preferred trajectories using a simple additive rule. Moreover, a minority of neuron pairs that engaged in significant synchronization combined their preferred trajectories through a small scaling adjustment to the additive rule in the exponent, while preserving the shape of the predicted trajectory representation from the additive rule. These results suggest that complex movement representations can be synthesized in simultaneously firing neuronal ensembles by adding the trajectory representations of the constituents in the ensemble." }, { "pmid": "17494696", "title": "Encoding of movement fragments in the motor cortex.", "abstract": "Previous studies have suggested that complex movements can be elicited by electrical stimulation of the motor cortex. Most recording studies in the motor cortex, however, have investigated the encoding of time-independent features of movement such as direction, velocity, position, or force. Here, we show that single motor cortical neurons encode temporally evolving movement trajectories and not simply instantaneous movement parameters. We explicitly characterize the preferred trajectories of individual neurons using a simple exponential encoding model and demonstrate that temporally extended trajectories not only capture the tuning of motor cortical neurons more accurately, but can be used to decode the instantaneous movement direction with less error. 
These findings suggest that single motor cortical neurons encode whole movement fragments, which are temporally extensive and can be quite complex." }, { "pmid": "23862678", "title": "Sensors and decoding for intracortical brain computer interfaces.", "abstract": "Intracortical brain computer interfaces (iBCIs) are being developed to enable people to drive an output device, such as a computer cursor, directly from their neural activity. One goal of the technology is to help people with severe paralysis or limb loss. Key elements of an iBCI are the implanted sensor that records the neural signals and the software that decodes the user's intended movement from those signals. Here, we focus on recent advances in these two areas, placing special attention on contributions that are or may soon be adopted by the iBCI research community. We discuss how these innovations increase the technology's capability, accuracy, and longevity, all important steps that are expanding the range of possible future clinical applications." }, { "pmid": "14624237", "title": "A gain-field encoding of limb position and velocity in the internal model of arm dynamics.", "abstract": "Adaptability of reaching movements depends on a computation in the brain that transforms sensory cues, such as those that indicate the position and velocity of the arm, into motor commands. Theoretical consideration shows that the encoding properties of neural elements implementing this transformation dictate how errors should generalize from one limb position and velocity to another. To estimate how sensory cues are encoded by these neural elements, we designed experiments that quantified spatial generalization in environments where forces depended on both position and velocity of the limb. The patterns of error generalization suggest that the neural elements that compute the transformation encode limb position and velocity in intrinsic coordinates via a gain-field; i.e., the elements have directionally dependent tuning that is modulated monotonically with limb position. The gain-field encoding makes the counterintuitive prediction of hypergeneralization: there should be growing extrapolation beyond the trained workspace. Furthermore, nonmonotonic force patterns should be more difficult to learn than monotonic ones. We confirmed these predictions experimentally." }, { "pmid": "1510294", "title": "A glass/silicon composite intracortical electrode array.", "abstract": "A new manufacturing technique has been developed for creating silicon-based, penetrating electrode arrays intended for implantation into cerebral cortex. The arrays consist of a 4.2 mm x 4.2 mm glass/silicon composite base, from which project 100 silicon needle-type electrodes in a 10 x 10 array. Each needle is approximately 1,500 microns long, 80 microns in diameter at the base, and tapers to a sharp point at the metalized tip. The technique used to manufacture these arrays differs from our previous method in that a glass dielectric, rather than a p-n-p junction, provides electrical isolation between the individual electrodes in the array. The new electrode arrays exhibit superior electrical properties to those described previously. We have measured interelectrode impedances of at least 10(13) omega, and interelectrode capacitances of approximately 50 fF for the new arrays. 
In this paper, we describe the manufacturing techniques used to create the arrays, focusing on the dielectric isolation technique, and discuss the electrical and mechanical characteristics of these arrays." }, { "pmid": "26220660", "title": "Single-trial dynamics of motor cortex and their applications to brain-machine interfaces.", "abstract": "Increasing evidence suggests that neural population responses have their own internal drive, or dynamics, that describe how the neural population evolves through time. An important prediction of neural dynamical models is that previously observed neural activity is informative of noisy yet-to-be-observed activity on single-trials, and may thus have a denoising effect. To investigate this prediction, we built and characterized dynamical models of single-trial motor cortical activity. We find these models capture salient dynamical features of the neural population and are informative of future neural activity on single trials. To assess how neural dynamics may beneficially denoise single-trial neural activity, we incorporate neural dynamics into a brain-machine interface (BMI). In online experiments, we find that a neural dynamical BMI achieves substantially higher performance than its non-dynamical counterpart. These results provide evidence that neural dynamics beneficially inform the temporal evolution of neural activity on single trials and may directly impact the performance of BMIs." }, { "pmid": "26079746", "title": "Extracting Low-Dimensional Latent Structure from Time Series in the Presence of Delays.", "abstract": "Noisy, high-dimensional time series observations can often be described by a set of low-dimensional latent variables. Commonly used methods to extract these latent variables typically assume instantaneous relationships between the latent and observed variables. In many physical systems, changes in the latent variables manifest as changes in the observed variables after time delays. Techniques that do not account for these delays can recover a larger number of latent variables than are present in the system, thereby making the latent representation more difficult to interpret. In this work, we introduce a novel probabilistic technique, time-delay gaussian-process factor analysis (TD-GPFA), that performs dimensionality reduction in the presence of a different time delay between each pair of latent and observed variables. We demonstrate how using a gaussian process to model the evolution of each latent variable allows us to tractably learn these delays over a continuous domain. Additionally, we show how TD-GPFA combines temporal smoothing and dimensionality reduction into a common probabilistic framework. We present an expectation/conditional maximization either (ECME) algorithm to learn the model parameters. Our simulations demonstrate that when time delays are present, TD-GPFA is able to correctly identify these delays and recover the latent space. We then applied TD-GPFA to the activity of tens of neurons recorded simultaneously in the macaque motor cortex during a reaching task. TD-GPFA is able to better describe the neural activity using a more parsimonious latent space than GPFA, a method that has been used to interpret motor cortex data but does not account for time delays. More broadly, TD-GPFA can help to unravel the mechanisms underlying high-dimensional time series data by taking into account physical delays in the system." 
}, { "pmid": "17943613", "title": "Common-input models for multiple neural spike-train data.", "abstract": "Recent developments in multi-electrode recordings enable the simultaneous measurement of the spiking activity of many neurons. Analysis of such multineuronal data is one of the key challenge in computational neuroscience today. In this work, we develop a multivariate point-process model in which the observed activity of a network of neurons depends on three terms: (1) the experimentally-controlled stimulus; (2) the spiking history of the observed neurons; and (3) a hidden term that corresponds, for example, to common input from an unobserved population of neurons that is presynaptic to two or more cells in the observed population. We consider two models for the network firing-rates, one of which is computationally and analytically tractable but can lead to unrealistically high firing-rates, while the other with reasonable firing-rates imposes a greater computational burden. We develop an expectation-maximization algorithm for fitting the parameters of both the models. For the analytically tractable model the expectation step is based on a continuous-time implementation of the extended Kalman smoother, and the maximization step involves two concave maximization problems which may be solved in parallel. The other model that we consider necessitates the use of Monte Carlo methods for the expectation as well as maximization step. We discuss the trade-off involved in choosing between the two models and the associated methods. The techniques developed allow us to solve a variety of inference problems in a straightforward, computationally efficient fashion; for example, we may use the model to predict network activity given an arbitrary stimulus, infer a neuron's ring rate given the stimulus and the activity of the other observed neurons, and perform optimal stimulus decoding and prediction. We present several detailed simulation studies which explore the strengths and limitations of our approach." }, { "pmid": "20359500", "title": "Population decoding of motor cortical activity using a generalized linear model with hidden states.", "abstract": "Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (reducing the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. 
These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications." }, { "pmid": "25076875", "title": "Decoding methods for neural prostheses: where have we reached?", "abstract": "This article reviews advances in decoding methods for brain-machine interfaces (BMIs). Recent work has focused on practical considerations for future clinical deployment of prosthetics. This review is organized by open questions in the field such as what variables to decode, how to design neural tuning models, which neurons to select, how to design models of desired actions, how to learn decoder parameters during prosthetic operation, and how to adapt to changes in neural signals and neural tuning. The concluding discussion highlights the need to design and test decoders within the context of their expected use and the need to answer the question of how much control accuracy is good enough for a prosthetic." }, { "pmid": "19603074", "title": "Unscented Kalman filter for brain-machine interfaces.", "abstract": "Brain machine interfaces (BMIs) are devices that convert neural signals into commands to directly control artificial actuators, such as limb prostheses. Previous real-time methods applied to decoding behavioral commands from the activity of populations of neurons have generally relied upon linear models of neural tuning and were limited in the way they used the abundant statistical information contained in the movement profiles of motor tasks. Here, we propose an n-th order unscented Kalman filter which implements two key features: (1) use of a non-linear (quadratic) model of neural tuning which describes neural activity significantly better than commonly-used linear tuning models, and (2) augmentation of the movement state variables with a history of n-1 recent states, which improves prediction of the desired command even before incorporating neural activity information and allows the tuning model to capture relationships between neural activity and movement at multiple time offsets simultaneously. This new filter was tested in BMI experiments in which rhesus monkeys used their cortical activity, recorded through chronically implanted multielectrode arrays, to directly control computer cursors. The 10th order unscented Kalman filter outperformed the standard Kalman filter and the Wiener filter in both off-line reconstruction of movement trajectories and real-time, closed-loop BMI operation." }, { "pmid": "25203982", "title": "A high-performance keyboard neural prosthesis enabled by task optimization.", "abstract": "Communication neural prostheses are an emerging class of medical devices that aim to restore efficient communication to people suffering from paralysis. These systems rely on an interface with the user, either via the use of a continuously moving cursor (e.g., mouse) or the discrete selection of symbols (e.g., keyboard). In developing these interfaces, many design choices have a significant impact on the performance of the system. The objective of this study was to explore the design choices of a continuously moving cursor neural prosthesis and optimize the interface to maximize information theoretic performance. We swept interface parameters of two keyboard-like tasks to find task and subject-specific optimal parameters as measured by achieved bitrate using two rhesus macaques implanted with multielectrode arrays. 
In this paper, we present the highest performing free-paced neural prosthesis under any recording modality with sustainable communication rates of up to 3.5 bits/s. These findings demonstrate that meaningful high performance can be achieved using an intracortical neural prosthesis, and that, when optimized, these systems may be appropriate for use as communication devices for those with physical disabilities." }, { "pmid": "13679402", "title": "Spatiotemporal tuning of motor cortical neurons for hand position and velocity.", "abstract": "A pursuit-tracking task (PTT) and multielectrode recordings were used to investigate the spatiotemporal encoding of hand position and velocity in primate primary motor cortex (MI). Continuous tracking of a randomly moving visual stimulus provided a broad sample of velocity and position space, reduced statistical dependencies between kinematic variables, and minimized the nonstationarities that are found in typical \"step-tracking\" tasks. These statistical features permitted the application of signal-processing and information-theoretic tools for the analysis of neural encoding. The multielectrode method allowed for the comparison of tuning functions among simultaneously recorded cells. During tracking, MI neurons showed heterogeneity of position and velocity coding, with markedly different temporal dynamics for each. Velocity-tuned neurons were approximately sinusoidally tuned for direction, with linear speed scaling; other cells showed sinusoidal tuning for position, with linear scaling by distance. Velocity encoding led behavior by about 100 ms for most cells, whereas position tuning was more broadly distributed, with leads and lags suggestive of both feedforward and feedback coding. Individual cells encoded velocity and position weakly, with comparable amounts of information about each. Linear regression methods confirmed that random, 2-D hand trajectories can be reconstructed from the firing of small ensembles of randomly selected neurons (3-19 cells) within the MI arm area. These findings demonstrate that MI carries information about evolving hand trajectory during visually guided pursuit tracking, including information about arm position both during and after its specification. However, the reconstruction methods used here capture only the low-frequency components of movement during the PTT. Hand motion signals appear to be represented as a distributed code in which diverse information about position and velocity is available within small regions of MI." }, { "pmid": "15456829", "title": "Superlinear population encoding of dynamic hand trajectory in primary motor cortex.", "abstract": "Neural activity in primary motor cortex (MI) is known to correlate with hand position and velocity. Previous descriptions of this tuning have (1) been linear in position or velocity, (2) depended only instantaneously on these signals, and/or (3) not incorporated the effects of interneuronal dependencies on firing rate. We show here that many MI cells encode a superlinear function of the full time-varying hand trajectory. Approximately 20% of MI cells carry information in the hand trajectory beyond just the position, velocity, and acceleration at a single time lag. Moreover, approximately one-third of MI cells encode the trajectory in a significantly superlinear manner; as one consequence, even small position changes can dramatically modulate the gain of the velocity tuning of MI cells, in agreement with recent psychophysical evidence. 
We introduce a compact nonlinear \"preferred trajectory\" model that predicts the complex structure of the spatiotemporal tuning functions described in previous work. Finally, observing the activity of neighboring cells in the MI network significantly increases the predictability of the firing rate of a single MI cell; however, we find interneuronal dependencies in MI to be much more locked to external kinematic parameters than those described recently in the hippocampus. Nevertheless, this neighbor activity is approximately as informative as the hand velocity, supporting the view that neural encoding in MI is best understood at a population level." }, { "pmid": "25174005", "title": "Encoding and decoding in parietal cortex during sensorimotor decision-making.", "abstract": "It has been suggested that the lateral intraparietal area (LIP) of macaques plays a fundamental role in sensorimotor decision-making. We examined the neural code in LIP at the level of individual spike trains using a statistical approach based on generalized linear models. We found that LIP responses reflected a combination of temporally overlapping task- and decision-related signals. Our model accounts for the detailed statistics of LIP spike trains and accurately predicts spike trains from task events on single trials. Moreover, we derived an optimal decoder for heterogeneous, multiplexed LIP responses that could be implemented in biologically plausible circuits. In contrast with interpretations of LIP as providing an instantaneous code for decision variables, we found that optimal decoding requires integrating LIP spikes over two distinct timescales. These analyses provide a detailed understanding of neural representations in LIP and a framework for studying the coding of multiplexed signals in higher brain areas." }, { "pmid": "26655766", "title": "Brain-state classification and a dual-state decoder dramatically improve the control of cursor movement through a brain-machine interface.", "abstract": "OBJECTIVE\nIt is quite remarkable that brain machine interfaces (BMIs) can be used to control complex movements with fewer than 100 neurons. Success may be due in part to the limited range of dynamical conditions under which most BMIs are tested. Achieving high-quality control that spans these conditions with a single linear mapping will be more challenging. Even for simple reaching movements, existing BMIs must reduce the stochastic noise of neurons by averaging the control signals over time, instead of over the many neurons that normally control movement. This forces a compromise between a decoder with dynamics allowing rapid movement and one that allows postures to be maintained with little jitter. Our current work presents a method for addressing this compromise, which may also generalize to more highly varied dynamical situations, including movements with more greatly varying speed.\n\n\nAPPROACH\nWe have developed a system that uses two independent Wiener filters as individual components in a single decoder, one optimized for movement, and the other for postural control. We computed an LDA classifier using the same neural inputs. 
The decoder combined the outputs of the two filters in proportion to the likelihood assigned by the classifier to each state.\n\n\nMAIN RESULTS\nWe have performed online experiments with two monkeys using this neural-classifier, dual-state decoder, comparing it to a standard, single-state decoder as well as to a dual-state decoder that switched states automatically based on the cursor's proximity to a target. The performance of both monkeys using the classifier decoder was markedly better than that of the single-state decoder and comparable to the proximity decoder.\n\n\nSIGNIFICANCE\nWe have demonstrated a novel strategy for dealing with the need to make rapid movements while also maintaining precise cursor control when approaching and stabilizing within targets. Further gains can undoubtedly be realized by optimizing the performance of the individual movement and posture decoders." }, { "pmid": "21159978", "title": "Encoding of coordinated grasp trajectories in primary motor cortex.", "abstract": "Few studies have investigated how the cortex encodes the preshaping of the hand as an object is grasped, an ethological movement referred to as prehension. We developed an encoding model of hand kinematics to test whether primary motor cortex (MI) neurons encode temporally extensive combinations of joint motions that characterize a prehensile movement. Two female rhesus macaque monkeys were trained to grasp 4 different objects presented by a robot while their arm was held in place by a thermoplastic brace. We used multielectrode arrays to record MI neurons and an infrared camera motion tracking system to record the 3-D positions of 14 markers placed on the monkeys' wrist and digits. A generalized linear model framework was used to predict the firing rate of each neuron in a 4 ms time interval, based on its own spiking history and the spatiotemporal kinematics of the joint angles of the hand. Our results show that the variability of the firing rate of MI neurons is better described by temporally extensive combinations of finger and wrist joint angle kinematics rather than any individual joint motion or any combination of static kinematic parameters at their optimal lag. Moreover, a higher percentage of neurons encoded joint angular velocities than joint angular positions. These results suggest that neurons encode the covarying trajectories of the hand's joints during a prehensile movement." }, { "pmid": "22279207", "title": "Encoding of coordinated reach and grasp trajectories in primary motor cortex.", "abstract": "Though the joints of the arm and hand together comprise 27 degrees of freedom, an ethological movement like reaching and grasping coordinates many of these joints so as to operate in a reduced dimensional space. We used a generalized linear model to predict single neuron responses in primary motor cortex (MI) during a reach-to-grasp task based on 40 features that represent positions and velocities of the arm and hand in joint angle and Cartesian coordinates as well as the neurons' own spiking history. Two rhesus monkeys were trained to reach and grasp one of five objects, located at one of seven locations while we used an infrared camera motion-tracking system to track markers placed on their upper limb and recorded single-unit activity from a microelectrode array implanted in MI. The kinematic trajectories that described hand shaping and transport to the object depended on both the type of object and its location. 
Modeling the kinematics as temporally extensive trajectories consistently yielded significantly higher predictive power in most neurons. Furthermore, a model that included all feature trajectories yielded more predictive power than one that included any single feature trajectory in isolation, and neurons tended to encode feature velocities over positions. The predictive power of a majority of neurons reached a plateau for a model that included only the first five principal components of all the features' trajectories, suggesting that MI has evolved or adapted to encode the natural kinematic covariations associated with prehension described by a limited set of kinematic synergies." }, { "pmid": "14726593", "title": "Differential representation of perception and action in the frontal cortex.", "abstract": "A motor illusion was created to separate human subjects' perception of arm movement from their actual movement during figure drawing. Trajectories constructed from cortical activity recorded in monkeys performing the same task showed that the actual movement was represented in the primary motor cortex, whereas the visualized, presumably perceived, trajectories were found in the ventral premotor cortex. Perception and action representations can be differentially recognized in the brain and may be contained in separate structures." }, { "pmid": "12522173", "title": "Systematic changes in motor cortex cell activity with arm posture during directional isometric force generation.", "abstract": "We report here the activity of 96 cells in primate primary motor cortex (MI) during exertion of isometric forces at the hand in constant spatial directions, while the hand was at five to nine different spatial locations on a plane. The discharge of nearly all cells varied significantly with both hand location and the direction of isometric force before and during force-ramp generation as well as during static force-hold. In addition, nearly all cells displayed changes in the variation of their activity with force direction at different hand locations. This change in relationship was often expressed in part as a change in the cell's directional tuning at different hand locations. Cell directional tuning tended to shift systematically with hand location even though the direction of static force output at the hand remained constant. These directional effects were less pronounced before the onset of force output than after force onset. Cells also often showed planar modulations of discharge level with hand location. Sixteen proximal arm muscles showed similar effects, reflecting how hand location-dependent biomechanical factors altered their task-related activity. These findings indicate that MI single-cell activity does not covary exclusively with the level and direction of net force output at the hand and provides further evidence that MI contributes to the transformation between extrinsic and intrinsic representations of motor output during isometric force production." }, { "pmid": "23143511", "title": "Neural population partitioning and a concurrent brain-machine interface for sequential motor function.", "abstract": "Although brain-machine interfaces (BMIs) have focused largely on performing single-targeted movements, many natural tasks involve planning a complete sequence of such movements before execution. For these tasks, a BMI that can concurrently decode the full planned sequence before its execution may also consider the higher-level goal of the task to reformulate and perform it more effectively. 
Using population-wide modeling, we discovered two distinct subpopulations of neurons in the rhesus monkey premotor cortex that allow two planned targets of a sequential movement to be simultaneously held in working memory without degradation. Such marked stability occurred because each subpopulation encoded either only currently held or only newly added target information irrespective of the exact sequence. On the basis of these findings, we developed a BMI that concurrently decodes a full motor sequence in advance of movement and can then accurately execute it as desired." }, { "pmid": "23047892", "title": "Feedback-controlled parallel point process filter for estimation of goal-directed movements from neural signals.", "abstract": "Real-time brain-machine interfaces have estimated either the target of a movement, or its kinematics. However, both are encoded in the brain. Moreover, movements are often goal-directed and made to reach a target. Hence, modeling the goal-directed nature of movements and incorporating the target information in the kinematic decoder can increase its accuracy. Using an optimal feedback control design, we develop a recursive Bayesian kinematic decoder that models goal-directed movements and combines the target information with the neural spiking activity during movement. To do so, we build a prior goal-directed state-space model for the movement using an optimal feedback control model of the sensorimotor system that aims to emulate the processes underlying actual motor control and takes into account the sensory feedback. Most goal-directed models, however, depend on the movement duration, not known a priori to the decoder. This has prevented their real-time implementation. To resolve this duration uncertainty, the decoder discretizes the duration and consists of a bank of parallel point process filters, each combining the prior model of a discretized duration with the neural activity. The kinematics are computed by optimally combining these filter estimates. Using the feedback-controlled model and even a coarse discretization, the decoder significantly reduces the root mean square error in estimation of reaching movements performed by a monkey." }, { "pmid": "23593130", "title": "A real-time brain-machine interface combining motor target and trajectory intent using an optimal feedback control design.", "abstract": "Real-time brain-machine interfaces (BMI) have focused on either estimating the continuous movement trajectory or target intent. However, natural movement often incorporates both. Additionally, BMIs can be modeled as a feedback control system in which the subject modulates the neural activity to move the prosthetic device towards a desired target while receiving real-time sensory feedback of the state of the movement. We develop a novel real-time BMI using an optimal feedback control design that jointly estimates the movement target and trajectory of monkeys in two stages. First, the target is decoded from neural spiking activity before movement initiation. Second, the trajectory is decoded by combining the decoded target with the peri-movement spiking activity using an optimal feedback control design. This design exploits a recursive Bayesian decoder that uses an optimal feedback control model of the sensorimotor system to take into account the intended target location and the sensory feedback in its trajectory estimation from spiking activity. The real-time BMI processes the spiking activity directly using point process modeling. 
We implement the BMI in experiments consisting of an instructed-delay center-out task in which monkeys are presented with a target location on the screen during a delay period and then have to move a cursor to it without touching the incorrect targets. We show that the two-stage BMI performs more accurately than either stage alone. Correct target prediction can compensate for inaccurate trajectory estimation and vice versa. The optimal feedback control design also results in trajectories that are smoother and have lower estimation error. The two-stage decoder also performs better than linear regression approaches in offline cross-validation analyses. Our results demonstrate the advantage of a BMI design that jointly estimates the target and trajectory of movement and more closely mimics the sensorimotor control system." }, { "pmid": "15356183", "title": "A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects.", "abstract": "Multiple factors simultaneously affect the spiking activity of individual neurons. Determining the effects and relative importance of these factors is a challenging problem in neurophysiology. We propose a statistical framework based on the point process likelihood function to relate a neuron's spiking probability to three typical covariates: the neuron's own spiking history, concurrent ensemble activity, and extrinsic covariates such as stimuli or behavior. The framework uses parametric models of the conditional intensity function to define a neuron's spiking probability in terms of the covariates. The discrete time likelihood function for point processes is used to carry out model fitting and model analysis. We show that, by modeling the logarithm of the conditional intensity function as a linear combination of functions of the covariates, the discrete time point process likelihood function is readily analyzed in the generalized linear model (GLM) framework. We illustrate our approach for both GLM and non-GLM likelihood functions using simulated data and multivariate single-unit activity data simultaneously recorded from the motor cortex of a monkey performing a visuomotor pursuit-tracking task. The point process framework provides a flexible, computationally efficient approach for maximum likelihood estimation, goodness-of-fit assessment, residual analysis, model selection, and neural decoding. The framework thus allows for the formulation and analysis of point process models of neural spiking activity that readily capture the simultaneous effects of multiple covariates and enables the assessment of their relative importance." }, { "pmid": "19966837", "title": "Collective dynamics in human and monkey sensorimotor cortex: predicting single neuron spikes.", "abstract": "Coordinated spiking activity in neuronal ensembles, in local networks and across multiple cortical areas, is thought to provide the neural basis for cognition and adaptive behavior. Examining such collective dynamics at the level of single neuron spikes has remained, however, a considerable challenge. We found that the spiking history of small and randomly sampled ensembles (approximately 20-200 neurons) could predict subsequent single neuron spiking with substantial accuracy in the sensorimotor cortex of humans and nonhuman behaving primates. 
Furthermore, spiking was better predicted by the ensemble's history than by the ensemble's instantaneous state (Ising models), emphasizing the role of temporal dynamics leading to spiking. Notably, spiking could be predicted not only by local ensemble spiking histories, but also by spiking histories in different cortical areas. These strong collective dynamics may provide a basis for understanding cognition and adaptive behavior at the level of coordinated spiking in cortical networks." }, { "pmid": "24760860", "title": "Motor cortical correlates of arm resting in the context of a reaching task and implications for prosthetic control.", "abstract": "Prosthetic devices are being developed to restore movement for motor-impaired individuals. A robotic arm can be controlled based on models that relate motor-cortical ensemble activity to kinematic parameters. The models are typically built and validated on data from structured trial periods during which a subject actively performs specific movements, but real-world prosthetic devices will need to operate correctly during rest periods as well. To develop a model of motor cortical modulation during rest, we trained monkeys (Macaca mulatta) to perform a reaching task with their own arm while recording motor-cortical single-unit activity. When a monkey spontaneously put its arm down to rest between trials, our traditional movement decoder produced a nonzero velocity prediction, which would cause undesired motion when applied to a prosthetic arm. During these rest periods, a marked shift was found in individual units' tuning functions. The activity pattern of the whole population during rest (Idle state) was highly distinct from that during reaching movements (Active state), allowing us to predict arm resting from instantaneous firing rates with 98% accuracy using a simple classifier. By cascading this state classifier and the movement decoder, we were able to predict zero velocity correctly, which would avoid undesired motion in a prosthetic application. Interestingly, firing rates during hold periods followed the Active pattern even though hold kinematics were similar to those during rest with near-zero velocity. These findings expand our concept of motor-cortical function by showing that population activity reflects behavioral context in addition to the direct parameters of the movement itself." }, { "pmid": "20841635", "title": "Instantaneous estimation of motor cortical neural encoding for online brain-machine interfaces.", "abstract": "Recently, the authors published a sequential decoding algorithm for motor brain-machine interfaces (BMIs) that infers movement directly from spike trains and produces a new kinematic output every time an observation of neural activity is present at its input. Such a methodology also needs a special instantaneous neuronal encoding model to relate instantaneous kinematics to every neural spike activity. This requirement is unlike the tuning methods commonly used in computational neuroscience, which are based on time windows of neural and kinematic data. This paper develops a novel, online, encoding model that uses the instantaneous kinematic variables (position, velocity and acceleration in 2D or 3D space) to estimate the mean value of an inhomogeneous Poisson model. During BMI decoding the mapping from neural spikes to kinematics is one to one and easy to implement by simply reading the spike times directly. 
Due to the high temporal resolution of the encoding, the delay between motor cortex neurons and kinematics needs to be estimated in the encoding stage. Mutual information is employed to select the optimal time index defined as the lag for which the spike event is maximally informative with respect to the kinematics. We extensively compare the windowed tuning models with the proposed method. The big difference between them resides in the high firing rate portion of the tuning curve, which is rather important for BMI-decoding performance. This paper shows that implementing such an instantaneous tuning model in sequential Monte Carlo point process estimation based on spike timing provides statistically better kinematic reconstructions than the linear and exponential spike-tuning models." }, { "pmid": "23428966", "title": "Improving brain-machine interface performance by decoding intended future movements.", "abstract": "OBJECTIVE\nA brain-machine interface (BMI) records neural signals in real time from a subject's brain, interprets them as motor commands, and reroutes them to a device such as a robotic arm, so as to restore lost motor function. Our objective here is to improve BMI performance by minimizing the deleterious effects of delay in the BMI control loop. We mitigate the effects of delay by decoding the subject's intended movements a short time lead in the future.\n\n\nAPPROACH\nWe use the decoded, intended future movements of the subject as the control signal that drives the movement of our BMI. This should allow the user's intended trajectory to be implemented more quickly by the BMI, reducing the amount of delay in the system. In our experiment, a monkey (Macaca mulatta) uses a future prediction BMI to control a simulated arm to hit targets on a screen.\n\n\nMAIN RESULTS\nResults from experiments with BMIs possessing different system delays (100, 200 and 300 ms) show that the monkey can make significantly straighter, faster and smoother movements when the decoder predicts the user's future intent. We also characterize how BMI performance changes as a function of delay, and explore offline how the accuracy of future prediction decoders varies at different time leads.\n\n\nSIGNIFICANCE\nThis study is the first to characterize the effects of control delays in a BMI and to show that decoding the user's future intent can compensate for the negative effect of control delay on BMI performance." }, { "pmid": "15188861", "title": "Modeling and decoding motor cortical activity using a switching Kalman filter.", "abstract": "We present a switching Kalman filter model for the real-time inference of hand kinematics from a population of motor cortical neurons. Firing rates are modeled as a Gaussian mixture where the mean of each Gaussian component is a linear function of hand kinematics. A \"hidden state\" models the probability of each mixture component and evolves over time in a Markov chain. The model generalizes previous encoding and decoding methods, addresses the non-Gaussian nature of firing rates, and can cope with crudely sorted neural data common in on-line prosthetic applications." }, { "pmid": "16354382", "title": "Bayesian population decoding of motor cortical activity using a Kalman filter.", "abstract": "Effective neural motor prostheses require a method for decoding neural activity representing desired movement. 
In particular, the accurate reconstruction of a continuous motion signal is necessary for the control of devices such as computer cursors, robots, or a patient's own paralyzed limbs. For such applications, we developed a real-time system that uses Bayesian inference techniques to estimate hand motion from the firing rates of multiple neurons. In this study, we used recordings that were previously made in the arm area of primary motor cortex in awake behaving monkeys using a chronically implanted multielectrode microarray. Bayesian inference involves computing the posterior probability of the hand motion conditioned on a sequence of observed firing rates; this is formulated in terms of the product of a likelihood and a prior. The likelihood term models the probability of firing rates given a particular hand motion. We found that a linear gaussian model could be used to approximate this likelihood and could be readily learned from a small amount of training data. The prior term defines a probabilistic model of hand kinematics and was also taken to be a linear gaussian model. Decoding was performed using a Kalman filter, which gives an efficient recursive method for Bayesian inference when the likelihood and prior are linear and gaussian. In off-line experiments, the Kalman filter reconstructions of hand trajectory were more accurate than previously reported results. The resulting decoding algorithm provides a principled probabilistic model of motor-cortical coding, decodes hand motion in real time, provides an estimate of uncertainty, and is straightforward to implement. Additionally the formulation unifies and extends previous models of neural coding while providing insights into the motor-cortical code." }, { "pmid": "19497822", "title": "Neural decoding of hand motion using a linear state-space model with hidden states.", "abstract": "The Kalman filter has been proposed as a model to decode neural activity measured from the motor cortex in order to obtain real-time estimates of hand motion in behavioral neurophysiological experiments. However, currently used linear state-space models underlying the Kalman filter do not take into account other behavioral states such as muscular activity or the subject's level of attention, which are often unobservable during experiments but may play important roles in characterizing neural controlled hand movement. To address this issue, we depict these unknown states as one multidimensional hidden state in the linear state-space framework. This new model assumes that the observed neural firing rate is directly related to this hidden state. The dynamics of the hand state are also allowed to impact the dynamics of the hidden state, and vice versa. The parameters in the model can be identified by a conventional expectation-maximization algorithm. Since this model still uses the linear Gaussian framework, hand-state decoding can be performed by the efficient Kalman filter algorithm. Experimental results show that this new model provides a more appropriate representation of the neural data and generates more accurate decoding. Furthermore, we have used recently developed computationally efficient methods by incorporating a priori information of the targets of the reaching movement. Our results show that the hidden-state model with target-conditioning further improves decoding accuracy." 
}, { "pmid": "24949462", "title": "Neural decoding using a parallel sequential Monte Carlo method on point processes with ensemble effect.", "abstract": "Sequential Monte Carlo estimation on point processes has been successfully applied to predict the movement from neural activity. However, there exist some issues along with this method such as the simplified tuning model and the high computational complexity, which may degenerate the decoding performance of motor brain machine interfaces. In this paper, we adopt a general tuning model which takes recent ensemble activity into account. The goodness-of-fit analysis demonstrates that the proposed model can predict the neuronal response more accurately than the one only depending on kinematics. A new sequential Monte Carlo algorithm based on the proposed model is constructed. The algorithm can significantly reduce the root mean square error of decoding results, which decreases 23.6% in position estimation. In addition, we accelerate the decoding speed by implementing the proposed algorithm in a massive parallel manner on GPU. The results demonstrate that the spike trains can be decoded as point process in real time even with 8000 particles or 300 neurons, which is over 10 times faster than the serial implementation. The main contribution of our work is to enable the sequential Monte Carlo algorithm with point process observation to output the movement estimation much faster and more accurately." } ]
BMC Medical Informatics and Decision Making
28049465
PMC5209873
10.1186/s12911-016-0389-x
Secure and scalable deduplication of horizontally partitioned health data for privacy-preserving distributed statistical computation
Background: Techniques have been developed to compute statistics on distributed datasets without revealing private information except the statistical results. However, duplicate records in a distributed dataset may lead to incorrect statistical results. Therefore, to increase the accuracy of the statistical analysis of a distributed dataset, secure deduplication is an important preprocessing step. Methods: We designed a secure protocol for the deduplication of horizontally partitioned datasets with deterministic record linkage algorithms. We provided a formal security analysis of the protocol in the presence of semi-honest adversaries. The protocol was implemented and deployed across three microbiology laboratories located in Norway, and we ran experiments on the datasets in which the number of records for each laboratory varied. Experiments were also performed on simulated microbiology datasets and data custodians connected through a local area network. Results: The security analysis demonstrated that the protocol protects the privacy of individuals and data custodians under a semi-honest adversarial model. More precisely, the protocol remains secure with the collusion of up to N − 2 corrupt data custodians. The total runtime for the protocol scales linearly with the addition of data custodians and records. One million simulated records distributed across 20 data custodians were deduplicated within 45 s. The experimental results showed that the protocol is more efficient and scalable than previous protocols for the same problem. Conclusions: The proposed deduplication protocol is efficient and scalable for practical use while protecting the privacy of patients and data custodians.
Related work
Several PPRL protocols have been developed based on either deterministic or probabilistic matching of a set of identifiers. Interested readers are referred to [22, 23] for an extensive review of PPRL protocols. The protocols can be broadly classified as protocols with or without a third party. In this section, we review privacy-preserving protocols for deterministic record linkage. These protocols are secure against the semi-honest adversarial model, which is the adversarial model considered in this paper.
A record contains a set of identifiers, consisting of direct and indirect identifiers (quasi-identifiers), as well as other health information. Direct identifiers are attributes that can uniquely identify an individual across data custodians, such as a national identification number (ID). In contrast, quasi-identifiers are attributes that, in combination with other attributes, can identify an individual, such as name, sex, date of birth, and address. In this paper, the terms identifier and quasi-identifier are used interchangeably.
Weber [12] and Quantin et al. [24] proposed protocols that use keyed hash functions. These protocols require the data custodians to send a hash of their records' identifiers to a third party that performs exact matching and returns the results. The data custodians use a keyed hash function with a common secret key to prevent dictionary attacks by the third party. These protocols are secure as long as the third party does not collude with a data custodian. Quantin et al.'s protocol [24] performs phonetic encoding of the identifiers (i.e., last name, first name, date of birth, and sex) before hashing, in order to reduce the impact of typing errors in the identifiers on the quality of the linkage.
Several protocols were proposed based on commutative encryption schemes [25–27]. In these protocols, each data custodian, in turn, encrypts the unique identifiers of all records across the data custodians using its private key, so that each unique identifier is eventually encrypted with the private keys of all the data custodians. The encrypted unique identifiers are then compared with each other; because the encryption is commutative and deterministic, the encrypted values of two unique identifiers match if and only if the underlying identifiers match. The protocols proposed in [25, 26] are two-party computation protocols, whereas Adam et al.'s [27] protocol is a multi-party computation protocol.
The protocols reviewed thus far require the exchange of a long list of hashed or encrypted identifiers, which can limit their scalability as the number of data custodians and records increases. In addition, protocols based on commutative encryption require a number of communication rounds that is quadratic in the number of data custodians.
Multi-party private set intersection protocols were designed based on Bloom filters [28, 29]. In general, each data custodian encodes the unique identifier values of its records as a Bloom filter (see the description of a Bloom filter in the Methods section). The protocols use different privacy-preserving techniques, as discussed below, to intersect the Bloom filters and then create a Bloom filter that encodes the unique identifiers of the records that have exact matches at all data custodians.
Each data custodian then queries the unique identifiers of its own records against the intersection Bloom filter to identify the records that match.
In Lai et al.'s [28] protocol, each data custodian splits its Bloom filter into multiple segments and distributes them to the other participating data custodians while keeping one segment for itself. Each data custodian then locally intersects its share of the Bloom filter segments and distributes the result to the other data custodians. Finally, the data custodians combine the intersected segments to create a Bloom filter that is the intersection of all the data custodians' Bloom filters. The protocol requires a number of communication rounds that is quadratic in the number of data custodians, and it is susceptible to a dictionary attack on unique identifiers whose array positions all fall within the same segment of the Bloom filter.
In Many et al.'s [29] protocol, each data custodian uses a secret sharing scheme [30] to split each counter position of its Bloom filter and distributes the shares to three semi-trusted third parties. The third parties use secure multiplication and comparison protocols to intersect the data custodians' Bloom filters, which adds overhead to the protocol.
Dong et al. [31] proposed a two-party protocol for private set intersection. The protocol introduced a new variant of the Bloom filter, called a garbled Bloom filter, based on a secret sharing scheme. The first data custodian encodes the unique identifiers of its records as a Bloom filter, whereas the second data custodian encodes the unique identifiers of its records as a garbled Bloom filter. The data custodians then intersect their Bloom filters using an oblivious transfer (OT) technique [32], which adds significant overhead to the overall performance of the protocol.
Karapiperis et al. [33] proposed multi-party protocols for secure intersection based on the Count-Min sketch. Each data custodian locally encodes the unique identifiers of its records as a Count-Min sketch, denoted the local synopsis, and the data custodians then jointly compute the intersection of the local synopses using a secure sum protocol. The authors proposed two protocols whose secure sums are based on additive homomorphic encryption [34] and on obscuring the secret value with a random number [19, 35], respectively. These protocols protect only the data custodians' privacy, whereas our protocol protects both individuals' and data custodians' privacy. The additive homomorphic encryption adds computation and communication overhead as the number of records and data custodians increases.
The results of the protocols in [28, 29, 31, 33] carry a probability of false positives. Although the protocols can choose a small false positive probability, for some applications any false positive probability may not be acceptable.
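To make the Bloom filter based approach concrete, the following Python sketch, which is only an illustration and not any of the cited protocols, shows how a custodian might encode record identifiers into a Bloom filter using a keyed hash (HMAC) with a key shared among the custodians, and how an intersection filter can be obtained by a bitwise AND. The filter size M, the number of hash functions K, the HMAC construction, and the example identifiers are illustrative assumptions rather than parameters taken from the protocols above.

# Minimal sketch (not the cited protocols): keyed-hash Bloom filter encoding
# of record identifiers and a naive intersection by bitwise AND.
import hashlib
import hmac

M = 1024          # number of bits in the filter (illustrative choice)
K = 4             # number of hash functions (illustrative choice)
SECRET = b"shared-secret-key"  # key shared by custodians, unknown to the third party

def positions(identifier):
    """Derive K bit positions from a keyed hash of the identifier."""
    digest = hmac.new(SECRET, identifier.encode("utf-8"), hashlib.sha256).digest()
    # Split the digest into K four-byte chunks and map each chunk to a bit position.
    return [int.from_bytes(digest[4 * i: 4 * (i + 1)], "big") % M for i in range(K)]

def encode(identifiers):
    """Encode a custodian's record identifiers as a Bloom filter (list of bits)."""
    bits = [0] * M
    for ident in identifiers:
        for p in positions(ident):
            bits[p] = 1
    return bits

def intersect(filters):
    """Bitwise AND of the custodians' filters (done under privacy protection in the protocols)."""
    result = [1] * M
    for f in filters:
        result = [a & b for a, b in zip(result, f)]
    return result

def maybe_member(identifier, bits):
    """Membership query; may return a false positive, never a false negative."""
    return all(bits[p] for p in positions(identifier))

# Example: three custodians with overlapping (fake) national IDs.
custodians = [
    {"19501201-1111", "19630705-2222", "19840101-3333"},
    {"19630705-2222", "19840101-3333", "20000229-4444"},
    {"19840101-3333", "19990909-5555"},
]
common = intersect([encode(ids) for ids in custodians])
for ident in sorted(custodians[0]):
    print(ident, maybe_member(ident, common))

The protocols in [28, 29, 31, 33] differ precisely in how this intersection is computed without any party seeing another custodian's filter in the clear (segmenting, secret sharing, oblivious transfer, or secure summation), and, as noted above, the membership test still carries a false positive probability.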
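The secure sum step mentioned for Karapiperis et al. [33] can be sketched as well. The fragment below is a generic illustration of the idea of obscuring secret values with random numbers in a ring of custodians, under the assumption of a fixed public modulus; it is not the construction from [19, 33, 35] and ignores the communication layer entirely.

# Minimal sketch of a secure sum that obscures each local synopsis with random masks.
import random

MODULUS = 2 ** 32  # all arithmetic is carried out modulo a large public constant

def secure_sum(local_counters):
    """local_counters: one list of synopsis counters per data custodian."""
    width = len(local_counters[0])
    masks = [random.randrange(MODULUS) for _ in range(width)]
    # The initiating custodian adds random masks to its own counters ...
    running = [(c + m) % MODULUS for c, m in zip(local_counters[0], masks)]
    # ... each remaining custodian adds its counters to the masked running total ...
    for counters in local_counters[1:]:
        running = [(r + c) % MODULUS for r, c in zip(running, counters)]
    # ... and the initiator finally removes its masks, revealing only the aggregate.
    return [(r - m) % MODULUS for r, m in zip(running, masks)]

# Example: three custodians aggregating five-position synopses.
print(secure_sum([[1, 0, 2, 0, 1], [0, 1, 2, 0, 0], [1, 1, 0, 0, 1]]))  # [2, 2, 4, 0, 2]

In such a masked scheme each custodian sees only masked partial sums, while the additively homomorphic variant [34] computes the same aggregate at a higher computational and communication cost, as noted above.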
[ "23268669", "24169275", "11861622", "16984658", "21486880", "22390523", "24682495", "22195094", "23304400", "23349080", "23221359", "22768321", "20442154", "11208260", "25123746", "22262590", "19567788", "21658256", "19504049", "23739011", "21696636", "8336512", "25957825", "19706187", "25530689" ]
[ { "pmid": "24169275", "title": "Health data use, stewardship, and governance: ongoing gaps and challenges: a report from AMIA's 2012 Health Policy Meeting.", "abstract": "Large amounts of personal health data are being collected and made available through existing and emerging technological media and tools. While use of these data has significant potential to facilitate research, improve quality of care for individuals and populations, and reduce healthcare costs, many policy-related issues must be addressed before their full value can be realized. These include the need for widely agreed-on data stewardship principles and effective approaches to reduce or eliminate data silos and protect patient privacy. AMIA's 2012 Health Policy Meeting brought together healthcare academics, policy makers, and system stakeholders (including representatives of patient groups) to consider these topics and formulate recommendations. A review of a set of Proposed Principles of Health Data Use led to a set of findings and recommendations, including the assertions that the use of health data should be viewed as a public good and that achieving the broad benefits of this use will require understanding and support from patients." }, { "pmid": "11861622", "title": "Roundtable on bioterrorism detection: information system-based surveillance.", "abstract": "During the 2001 AMIA Annual Symposium, the Anesthesia, Critical Care, and Emergency Medicine Working Group hosted the Roundtable on Bioterrorism Detection. Sixty-four people attended the roundtable discussion, during which several researchers discussed public health surveillance systems designed to enhance early detection of bioterrorism events. These systems make secondary use of existing clinical, laboratory, paramedical, and pharmacy data or facilitate electronic case reporting by clinicians. This paper combines case reports of six existing systems with discussion of some common techniques and approaches. The purpose of the roundtable discussion was to foster communication among researchers and promote progress by 1) sharing information about systems, including origins, current capabilities, stages of deployment, and architectures; 2) sharing lessons learned during the development and implementation of systems; and 3) exploring cooperation projects, including the sharing of software and data. A mailing list server for these ongoing efforts may be found at http://bt.cirg.washington.edu." }, { "pmid": "16984658", "title": "Distributed data processing for public health surveillance.", "abstract": "BACKGROUND\nMany systems for routine public health surveillance rely on centralized collection of potentially identifiable, individual, identifiable personal health information (PHI) records. Although individual, identifiable patient records are essential for conditions for which there is mandated reporting, such as tuberculosis or sexually transmitted diseases, they are not routinely required for effective syndromic surveillance. Public concern about the routine collection of large quantities of PHI to support non-traditional public health functions may make alternative surveillance methods that do not rely on centralized identifiable PHI databases increasingly desirable.\n\n\nMETHODS\nThe National Bioterrorism Syndromic Surveillance Demonstration Program (NDP) is an example of one alternative model. 
All PHI in this system is initially processed within the secured infrastructure of the health care provider that collects and holds the data, using uniform software distributed and supported by the NDP. Only highly aggregated count data is transferred to the datacenter for statistical processing and display.\n\n\nRESULTS\nDetailed, patient level information is readily available to the health care provider to elucidate signals observed in the aggregated data, or for ad hoc queries. We briefly describe the benefits and disadvantages associated with this distributed processing model for routine automated syndromic surveillance.\n\n\nCONCLUSION\nFor well-defined surveillance requirements, the model can be successfully deployed with very low risk of inadvertent disclosure of PHI--a feature that may make participation in surveillance systems more feasible for organizations and more appealing to the individuals whose PHI they hold. It is possible to design and implement distributed systems to support non-routine public health needs if required." }, { "pmid": "21486880", "title": "A secure protocol for protecting the identity of providers when disclosing data for disease surveillance.", "abstract": "BACKGROUND\nProviders have been reluctant to disclose patient data for public-health purposes. Even if patient privacy is ensured, the desire to protect provider confidentiality has been an important driver of this reluctance.\n\n\nMETHODS\nSix requirements for a surveillance protocol were defined that satisfy the confidentiality needs of providers and ensure utility to public health. The authors developed a secure multi-party computation protocol using the Paillier cryptosystem to allow the disclosure of stratified case counts and denominators to meet these requirements. The authors evaluated the protocol in a simulated environment on its computation performance and ability to detect disease outbreak clusters.\n\n\nRESULTS\nTheoretical and empirical assessments demonstrate that all requirements are met by the protocol. A system implementing the protocol scales linearly in terms of computation time as the number of providers is increased. The absolute time to perform the computations was 12.5&emsp14;s for data from 3000 practices. This is acceptable performance, given that the reporting would normally be done at 24&emsp14;h intervals. The accuracy of detection disease outbreak cluster was unchanged compared with a non-secure distributed surveillance protocol, with an F-score higher than 0.92 for outbreaks involving 500 or more cases.\n\n\nCONCLUSION\nThe protocol and associated software provide a practical method for providers to disclose patient data for sentinel, syndromic or other indicator-based surveillance while protecting patient privacy and the identity of individual providers." }, { "pmid": "22390523", "title": "Public health surveillance and meaningful use regulations: a crisis of opportunity.", "abstract": "The Health Information Technology for Economic and Clinical Health Act is intended to enhance reimbursement of health care providers for meaningful use of electronic health records systems. This presents both opportunities and challenges for public health departments. To earn incentive payments, clinical providers must exchange specified types of data with the public health system, such as immunization and syndromic surveillance data and notifiable disease reporting. 
However, a crisis looms because public health's information technology systems largely lack the capabilities to accept the types of data proposed for exchange. Cloud computing may be a solution for public health information systems. Through shared computing resources, public health departments could reap the benefits of electronic reporting within federal funding constraints." }, { "pmid": "24682495", "title": "Clinical research data warehouse governance for distributed research networks in the USA: a systematic review of the literature.", "abstract": "OBJECTIVE\nTo review the published, peer-reviewed literature on clinical research data warehouse governance in distributed research networks (DRNs).\n\n\nMATERIALS AND METHODS\nMedline, PubMed, EMBASE, CINAHL, and INSPEC were searched for relevant documents published through July 31, 2013 using a systematic approach. Only documents relating to DRNs in the USA were included. Documents were analyzed using a classification framework consisting of 10 facets to identify themes.\n\n\nRESULTS\n6641 documents were retrieved. After screening for duplicates and relevance, 38 were included in the final review. A peer-reviewed literature on data warehouse governance is emerging, but is still sparse. Peer-reviewed publications on UK research network governance were more prevalent, although not reviewed for this analysis. All 10 classification facets were used, with some documents falling into two or more classifications. No document addressed costs associated with governance.\n\n\nDISCUSSION\nEven though DRNs are emerging as vehicles for research and public health surveillance, understanding of DRN data governance policies and procedures is limited. This is expected to change as more DRN projects disseminate their governance approaches as publicly available toolkits and peer-reviewed publications.\n\n\nCONCLUSIONS\nWhile peer-reviewed, US-based DRN data warehouse governance publications have increased, DRN developers and administrators are encouraged to publish information about these programs." }, { "pmid": "22195094", "title": "All health care is not local: an evaluation of the distribution of Emergency Department care delivered in Indiana.", "abstract": "The Emergency Department (ED) delivers a major portion of health care - often with incomplete knowledge about the patient. As such, EDs are particularly likely to benefit from a health information exchange (HIE). The Indiana Public Health Emergency Surveillance System (PHESS) sends real-time registration information for emergency department encounters. Over the three-year study period, we found 2.8 million patients generated 7.4 million ED visits. The average number of visits was 2.6 visits/patient (range 1-385). We found more than 40% of ED visits during the study period were for patients having data at multiple institutions. When examining the network density, we found nearly all EDs share patients with more than 80 other EDs. Our results help clarify future health care policy decisions regarding optimal NHIN architecture and discount the notion that 'all healthcare is local'." }, { "pmid": "23304400", "title": "An evaluation of the rates of repeat notifiable disease reporting and patient crossover using a health information exchange-based automated electronic laboratory reporting system.", "abstract": "Patients move across healthcare organizations and utilize services with great frequency and variety. This fact impacts both health information technology policy and patient care. 
To understand the challenges faced when developing strategies for effective health information exchange, it is important to understand patterns of patient movement and utilization for many healthcare contexts, including managing public-health notifiable conditions. We studied over 10 years of public-health notifiable diseases using the nation's most comprehensive operational automatic electronic laboratory reporting system to characterize patient utilization patterns. Our cohort included 412,699 patients and 833,710 reportable cases. 11.3% of patients had multiple notifiable case reports, and 19.5% had notifiable disease data distributed across 2 or more institutions. This evidence adds to the growing body of evidence that patient data resides in many organizations and suggests that to fully realize the value of HIT in public health, cross-organizational data sharing must be meaningfully incentivized." }, { "pmid": "23349080", "title": "Federated queries of clinical data repositories: the sum of the parts does not equal the whole.", "abstract": "BACKGROUND AND OBJECTIVE\nIn 2008 we developed a shared health research information network (SHRINE), which for the first time enabled research queries across the full patient populations of four Boston hospitals. It uses a federated architecture, where each hospital returns only the aggregate count of the number of patients who match a query. This allows hospitals to retain control over their local databases and comply with federal and state privacy laws. However, because patients may receive care from multiple hospitals, the result of a federated query might differ from what the result would be if the query were run against a single central repository. This paper describes the situations when this happens and presents a technique for correcting these errors.\n\n\nMETHODS\nWe use a one-time process of identifying which patients have data in multiple repositories by comparing one-way hash values of patient demographics. This enables us to partition the local databases such that all patients within a given partition have data at the same subset of hospitals. Federated queries are then run separately on each partition independently, and the combined results are presented to the user.\n\n\nRESULTS\nUsing theoretical bounds and simulated hospital networks, we demonstrate that once the partitions are made, SHRINE can produce more precise estimates of the number of patients matching a query.\n\n\nCONCLUSIONS\nUncertainty in the overlap of patient populations across hospitals limits the effectiveness of SHRINE and other federated query tools. Our technique reduces this uncertainty while retaining an aggregate federated architecture." }, { "pmid": "22768321", "title": "A Protocol for the secure linking of registries for HPV surveillance.", "abstract": "INTRODUCTION\nIn order to monitor the effectiveness of HPV vaccination in Canada the linkage of multiple data registries may be required. These registries may not always be managed by the same organization and, furthermore, privacy legislation or practices may restrict any data linkages of records that can actually be done among registries. The objective of this study was to develop a secure protocol for linking data from different registries and to allow on-going monitoring of HPV vaccine effectiveness.\n\n\nMETHODS\nA secure linking protocol, using commutative hash functions and secure multi-party computation techniques was developed. 
This protocol allows for the exact matching of records among registries and the computation of statistics on the linked data while meeting five practical requirements to ensure patient confidentiality and privacy. The statistics considered were: odds ratio and its confidence interval, chi-square test, and relative risk and its confidence interval. Additional statistics on contingency tables, such as other measures of association, can be added using the same principles presented. The computation time performance of this protocol was evaluated.\n\n\nRESULTS\nThe protocol has acceptable computation time and scales linearly with the size of the data set and the size of the contingency table. The worse case computation time for up to 100,000 patients returned by each query and a 16 cell contingency table is less than 4 hours for basic statistics, and the best case is under 3 hours.\n\n\nDISCUSSION\nA computationally practical protocol for the secure linking of data from multiple registries has been demonstrated in the context of HPV vaccine initiative impact assessment. The basic protocol can be generalized to the surveillance of other conditions, diseases, or vaccination programs." }, { "pmid": "20442154", "title": "A preliminary look at duplicate testing associated with lack of electronic health record interoperability for transferred patients.", "abstract": "Duplication of medical testing results in a financial burden to the healthcare system. Authors undertook a retrospective review of duplicate testing on patients receiving coordinated care across two institutions, each with its own electronic medical record system. In order to determine whether duplicate testing occurred and if such testing was clinically indicated, authors analyzed records of 85 patients transferred from one site to the other between January 1, 2006 and December 31, 2007. Duplication of testing (repeat within 12 hours) was found in 32% of the cases examined; 20% of cases had at least one duplicate test not clinically indicated. While previous studies document that inaccessibility of paper records leads to duplicate testing when patients are transferred between care facilities, the current study suggests that incomplete electronic record transfer among incompatible electronic medical record systems can also lead to potentially costly duplicate testing behaviors. The authors believe that interoperable systems with integrated decision support could assist in minimizing duplication of testing at time of patient transfers." }, { "pmid": "11208260", "title": "The potential for research-based information in public health: identifying unrecognised information needs.", "abstract": "OBJECTIVE\nTo explore whether there is a potential for greater use of research-based information in public health practice in a local setting. Secondly, if research-based information is relevant, to explore the extent to which this generates questioning behaviour.\n\n\nDESIGN\nQualitative study using focus group discussions, observation and interviews.\n\n\nSETTING\nPublic health practices in Norway.\n\n\nPARTICIPANTS\n52 public health practitioners.\n\n\nRESULTS\nIn general, the public health practitioners had a positive attitude towards research-based information, but believed that they had few cases requiring this type of information. They did say, however, that there might be a potential for greater use. 
During five focus groups and six observation days we identified 28 questions/cases where it would have been appropriate to seek out research evidence according to our definition. Three of the public health practitioners identified three of these 28 cases as questions for which research-based information could have been relevant. This gap is interpreted as representing unrecognised information needs.\n\n\nCONCLUSIONS\nThere is an unrealised potential in public health practice for more frequent and extensive use of research-based information. The practitioners did not appear to reflect on the need for scientific information when faced with new cases and few questions of this type were generated." }, { "pmid": "25123746", "title": "Clinical research informatics and electronic health record data.", "abstract": "OBJECTIVES\nThe goal of this survey is to discuss the impact of the growing availability of electronic health record (EHR) data on the evolving field of Clinical Research Informatics (CRI), which is the union of biomedical research and informatics.\n\n\nRESULTS\nMajor challenges for the use of EHR-derived data for research include the lack of standard methods for ensuring that data quality, completeness, and provenance are sufficient to assess the appropriateness of its use for research. Areas that need continued emphasis include methods for integrating data from heterogeneous sources, guidelines (including explicit phenotype definitions) for using these data in both pragmatic clinical trials and observational investigations, strong data governance to better understand and control quality of enterprise data, and promotion of national standards for representing and using clinical data.\n\n\nCONCLUSIONS\nThe use of EHR data has become a priority in CRI. Awareness of underlying clinical data collection processes will be essential in order to leverage these data for clinical research and patient care, and will require multi-disciplinary teams representing clinical research, informatics, and healthcare operations. Considerations for the use of EHR data provide a starting point for practical applications and a CRI research agenda, which will be facilitated by CRI's key role in the infrastructure of a learning healthcare system." }, { "pmid": "22262590", "title": "Design considerations, architecture, and use of the Mini-Sentinel distributed data system.", "abstract": "PURPOSE\nWe describe the design, implementation, and use of a large, multiorganizational distributed database developed to support the Mini-Sentinel Pilot Program of the US Food and Drug Administration (FDA). As envisioned by the US FDA, this implementation will inform and facilitate the development of an active surveillance system for monitoring the safety of medical products (drugs, biologics, and devices) in the USA.\n\n\nMETHODS\nA common data model was designed to address the priorities of the Mini-Sentinel Pilot and to leverage the experience and data of participating organizations and data partners. A review of existing common data models informed the process. Each participating organization designed a process to extract, transform, and load its source data, applying the common data model to create the Mini-Sentinel Distributed Database. Transformed data were characterized and evaluated using a series of programs developed centrally and executed locally by participating organizations. 
A secure communications portal was designed to facilitate queries of the Mini-Sentinel Distributed Database and transfer of confidential data, analytic tools were developed to facilitate rapid response to common questions, and distributed querying software was implemented to facilitate rapid querying of summary data.\n\n\nRESULTS\nAs of July 2011, information on 99,260,976 health plan members was included in the Mini-Sentinel Distributed Database. The database includes 316,009,067 person-years of observation time, with members contributing, on average, 27.0 months of observation time. All data partners have successfully executed distributed code and returned findings to the Mini-Sentinel Operations Center.\n\n\nCONCLUSION\nThis work demonstrates the feasibility of building a large, multiorganizational distributed data system in which organizations retain possession of their data that are used in an active surveillance system." }, { "pmid": "19567788", "title": "The Shared Health Research Information Network (SHRINE): a prototype federated query tool for clinical data repositories.", "abstract": "The authors developed a prototype Shared Health Research Information Network (SHRINE) to identify the technical, regulatory, and political challenges of creating a federated query tool for clinical data repositories. Separate Institutional Review Boards (IRBs) at Harvard's three largest affiliated health centers approved use of their data, and the Harvard Medical School IRB approved building a Query Aggregator Interface that can simultaneously send queries to each hospital and display aggregate counts of the number of matching patients. Our experience creating three local repositories using the open source Informatics for Integrating Biology and the Bedside (i2b2) platform can be used as a road map for other institutions. The authors are actively working with the IRBs and regulatory groups to develop procedures that will ultimately allow investigators to obtain identified patient data and biomaterials through SHRINE. This will guide us in creating a future technical architecture that is scalable to a national level, compliant with ethical guidelines, and protective of the interests of the participating hospitals." }, { "pmid": "21658256", "title": "Physician privacy concerns when disclosing patient data for public health purposes during a pandemic influenza outbreak.", "abstract": "BACKGROUND\nPrivacy concerns by providers have been a barrier to disclosing patient information for public health purposes. This is the case even for mandated notifiable disease reporting. In the context of a pandemic it has been argued that the public good should supersede an individual's right to privacy. The precise nature of these provider privacy concerns, and whether they are diluted in the context of a pandemic are not known. Our objective was to understand the privacy barriers which could potentially influence family physicians' reporting of patient-level surveillance data to public health agencies during the Fall 2009 pandemic H1N1 influenza outbreak.\n\n\nMETHODS\nThirty seven family doctors participated in a series of five focus groups between October 29-31 2009. They also completed a survey about the data they were willing to disclose to public health units. Descriptive statistics were used to summarize the amount of patient detail the participants were willing to disclose, factors that would facilitate data disclosure, and the consensus on those factors. 
The analysis of the qualitative data was based on grounded theory.\n\n\nRESULTS\nThe family doctors were reluctant to disclose patient data to public health units. This was due to concerns about the extent to which public health agencies are dependable to protect health information (trusting beliefs), and the possibility of loss due to disclosing health information (risk beliefs). We identified six specific actions that public health units can take which would affect these beliefs, and potentially increase the willingness to disclose patient information for public health purposes.\n\n\nCONCLUSIONS\nThe uncertainty surrounding a pandemic of a new strain of influenza has not changed the privacy concerns of physicians about disclosing patient data. It is important to address these concerns to ensure reliable reporting during future outbreaks." }, { "pmid": "19504049", "title": "The Swedish personal identity number: possibilities and pitfalls in healthcare and medical research.", "abstract": "Swedish health care and national health registers are dependent on the presence of a unique identifier. This paper describes the Swedish personal identity number (PIN) and explores ethical issues of its use in medical research. A ten-digit-PIN is maintained by the National Tax Board for all individuals that have resided in Sweden since 1947. Until January 2008, an estimated 75,638 individuals have changed PIN. The most common reasons for change of PIN are incorrect recording of date of birth or sex among immigrants or newborns. Although uncommon, change of sex always leads to change of PIN since the PIN is sex-specific. The most common reasons for re-use of PIN (n = 15,887), is when immigrants are assigned a PIN that has previously been assigned to someone else. This is sometimes necessary since there is a shortage of certain PIN combinations referring to dates of birth in the 1950s and 1960s. Several ethical issues can be raised pro and con the use of PIN in medical research. The Swedish PIN is a useful tool for linkages between medical registers and allows for virtually 100% coverage of the Swedish health care system. We suggest that matching of registers through PIN and matching of national health registers without the explicit approval of the individual patient is to the benefit for both the individual patient and for society." }, { "pmid": "23739011", "title": "The effect of data cleaning on record linkage quality.", "abstract": "BACKGROUND\nWithin the field of record linkage, numerous data cleaning and standardisation techniques are employed to ensure the highest quality of links. While these facilities are common in record linkage software packages and are regularly deployed across record linkage units, little work has been published demonstrating the impact of data cleaning on linkage quality.\n\n\nMETHODS\nA range of cleaning techniques was applied to both a synthetically generated dataset and a large administrative dataset previously linked to a high standard. The effect of these changes on linkage quality was investigated using pairwise F-measure to determine quality.\n\n\nRESULTS\nData cleaning made little difference to the overall linkage quality, with heavy cleaning leading to a decrease in quality. 
Further examination showed that decreases in linkage quality were due to cleaning techniques typically reducing the variability - although correct records were now more likely to match, incorrect records were also more likely to match, and these incorrect matches outweighed the correct matches, reducing quality overall.\n\n\nCONCLUSIONS\nData cleaning techniques have minimal effect on linkage quality. Care should be taken during the data cleaning process." }, { "pmid": "21696636", "title": "The re-identification risk of Canadians from longitudinal demographics.", "abstract": "BACKGROUND\nThe public is less willing to allow their personal health information to be disclosed for research purposes if they do not trust researchers and how researchers manage their data. However, the public is more comfortable with their data being used for research if the risk of re-identification is low. There are few studies on the risk of re-identification of Canadians from their basic demographics, and no studies on their risk from their longitudinal data. Our objective was to estimate the risk of re-identification from the basic cross-sectional and longitudinal demographics of Canadians.\n\n\nMETHODS\nUniqueness is a common measure of re-identification risk. Demographic data on a 25% random sample of the population of Montreal were analyzed to estimate population uniqueness on postal code, date of birth, and gender as well as their generalizations, for periods ranging from 1 year to 11 years.\n\n\nRESULTS\nAlmost 98% of the population was unique on full postal code, date of birth and gender: these three variables are effectively a unique identifier for Montrealers. Uniqueness increased for longitudinal data. Considerable generalization was required to reach acceptably low uniqueness levels, especially for longitudinal data. Detailed guidelines and disclosure policies on how to ensure that the re-identification risk is low are provided.\n\n\nCONCLUSIONS\nA large percentage of Montreal residents are unique on basic demographics. For non-longitudinal data sets, the three character postal code, gender, and month/year of birth represent sufficiently low re-identification risk. Data custodians need to generalize their demographic information further for longitudinal data sets." }, { "pmid": "8336512", "title": "Potential for cancer related health services research using a linked Medicare-tumor registry database.", "abstract": "The National Cancer Institute and the Health Care Financing Administration share a strong research interest in cancer costs, access to cancer prevention and treatment services, and cancer patient outcomes. To develop a database for such research, the two agencies have undertaken a collaborative effort to link Medicare Program data with the Surveillance, Epidemiology, and End Results (SEER) Program database. The SEER Program is a system of 9 population-based tumor registries that collect standardized clinical information on cases diagnosed in separate, geographically defined areas covering approximately 10% of the US population. Using a deterministic matching algorithm, the records of 94% of SEER registry cases diagnosed at age 65 or older between 1973 to 1989, or more than 610,000 persons, were successfully linked with Medicare claims files. 
The resulting database, combining clinical characteristics with information on utilization and costs, will permit the investigation of the contribution of various patient and health care setting factors to treatment patterns, costs, and medical outcomes." }, { "pmid": "25957825", "title": "Federated queries of clinical data repositories: Scaling to a national network.", "abstract": "Federated networks of clinical research data repositories are rapidly growing in size from a handful of sites to true national networks with more than 100 hospitals. This study creates a conceptual framework for predicting how various properties of these systems will scale as they continue to expand. Starting with actual data from Harvard's four-site Shared Health Research Information Network (SHRINE), the framework is used to imagine a future 4000 site network, representing the majority of hospitals in the United States. From this it becomes clear that several common assumptions of small networks fail to scale to a national level, such as all sites being online at all times or containing data from the same date range. On the other hand, a large network enables researchers to select subsets of sites that are most appropriate for particular research questions. Developers of federated clinical data networks should be aware of how the properties of these networks change at different scales and design their software accordingly." }, { "pmid": "19706187", "title": "Privacy-preserving record linkage using Bloom filters.", "abstract": "BACKGROUND\nCombining multiple databases with disjunctive or additional information on the same person is occurring increasingly throughout research. If unique identification numbers for these individuals are not available, probabilistic record linkage is used for the identification of matching record pairs. In many applications, identifiers have to be encrypted due to privacy concerns.\n\n\nMETHODS\nA new protocol for privacy-preserving record linkage with encrypted identifiers allowing for errors in identifiers has been developed. The protocol is based on Bloom filters on q-grams of identifiers.\n\n\nRESULTS\nTests on simulated and actual databases yield linkage results comparable to non-encrypted identifiers and superior to results from phonetic encodings.\n\n\nCONCLUSION\nWe proposed a protocol for privacy-preserving record linkage with encrypted identifiers allowing for errors in identifiers. Since the protocol can be easily enhanced and has a low computational burden, the protocol might be useful for many applications requiring privacy-preserving record linkage." }, { "pmid": "25530689", "title": "Composite Bloom Filters for Secure Record Linkage.", "abstract": "The process of record linkage seeks to integrate instances that correspond to the same entity. Record linkage has traditionally been performed through the comparison of identifying field values (e.g., Surname), however, when databases are maintained by disparate organizations, the disclosure of such information can breach the privacy of the corresponding individuals. Various private record linkage (PRL) methods have been developed to obscure such identifiers, but they vary widely in their ability to balance competing goals of accuracy, efficiency and security. The tokenization and hashing of field values into Bloom filters (BF) enables greater linkage accuracy and efficiency than other PRL methods, but the encodings may be compromised through frequency-based cryptanalysis. 
Our objective is to adapt a BF encoding technique to mitigate such attacks with minimal sacrifices in accuracy and efficiency. To accomplish these goals, we introduce a statistically-informed method to generate BF encodings that integrate bits from multiple fields, the frequencies of which are provably associated with a minimum number of fields. Our method enables a user-specified tradeoff between security and accuracy. We compare our encoding method with other techniques using a public dataset of voter registration records and demonstrate that the increases in security come with only minor losses to accuracy." } ]
Frontiers in Neurorobotics
28194106
PMC5276858
10.3389/fnbot.2017.00003
ReaCog, a Minimal Cognitive Controller Based on Recruitment of Reactive Systems
It has often been stated that for a neuronal system to become a cognitive one, it has to be large enough. In contrast, we argue that a basic property of a cognitive system, namely the ability to plan ahead, can already be fulfilled by small neuronal systems. As a proof of concept, we propose an artificial neural network, termed reaCog, that, first, is able to deal with a specific domain of behavior (six-legged walking). Second, we show how a minor expansion of this system enables it to plan ahead and to deploy existing behavioral elements in novel contexts in order to solve current problems. To this end, the system invents new solutions that are not possible for the reactive network alone; rather, these solutions result from new combinations of given memory elements. This faculty does not rely on a dedicated system that is more or less independent of the reactive basis, but results from exploiting the reactive basis by recruiting the lower-level control structures in such a way that motor planning becomes possible as an internal simulation relying on internal representations grounded in embodied experiences.
Related work
In this section, we will compare reaCog as a system with related recent approaches in order to point out differences. While there are many approaches toward cognitive systems and many proposals concerning cognitive architectures, we will concentrate on models that, like reaCog, consider a whole-systems approach. First, we will deal with cognitive architectures in general. Second, we will briefly present relevant literature on comparable approaches in robotics, because a crucial property of reaCog is that it uses an embodied control structure to run a robot.
Models of cognitive systems
Models of cognitive systems generally address selected aspects of cognition and often focus on specific findings from cognitive experiments (e.g., with respect to memory, attention, or spatial imagery; for reviews see Langley et al. (2009) and Wintermute (2012)). Duch et al. (2008) introduced a distinction between different cognitive architectures. First, these authors identified symbolic approaches. A prominent example is the original SOAR (State, Operator, and Result; Laird, 2008), a rule-based system in which knowledge is encoded in production rules that allow the system to state information or to derive new knowledge through application of the rules. Second, emergent approaches follow a general bottom-up strategy and often start from a connectionist representation. As one example, Verschure et al. (2003) introduced the DAC (Distributed Adaptive Control) series of robot architectures (Verschure et al., 2003; Verschure and Althaus, 2003). These authors carried out a sequence of experiments in simulation and on real robots. Verschure started from a reflex-like system and introduced higher levels of control on top of the existing ones; these higher levels modulated the lower levels, operated over longer timespans (thereby introducing memory into the system), and integrated additional sensory information. The experiments showed that the robots became better adapted to their environment, for example by exploiting visual cues for orientation and navigation (Verschure et al., 2003). Many other emergent approaches concentrate on perception, for example the Neurally Organized Mobile Adaptive Device (NOMAD), which is based on Edelman's (1993) Neural Darwinism approach and demonstrates pattern recognition on a mobile robot platform (Krichmar and Snook, 2002). More recently, this line of work has gained broader support in the area of autonomous mental development (Weng et al., 2001) and has established the field of developmental robotics (Cangelosi and Schlesinger, 2015). The particular focus of such architectures on learning is currently not covered by reaCog. In general, as pointed out by Langley et al. (2009), these kinds of approaches have not yet demonstrated the broad functionality associated with cognitive architectures (and, as additionally noted by Duch et al. (2008), many of these models are not realized and are often not detailed enough to be implemented as a cognitive system). ReaCog realizes such an emergent system, but with a focus on a complex behaving system that, in particular, aims at higher cognitive abilities currently not reached by emergent approaches. The third type concerns hybrid approaches, which try to bring together the advantages of the other two paradigms, for example ACT-R (Adaptive Control of Thought-Rational; Anderson, 2003).
In our view, the most impressive and comprehensive model of such a cognitive system is the CLARION system (for reviews see Sun et al., 2005; Helie and Sun, 2010), which has been applied to creative problem solving. This system is detailed enough to be implemented computationally. Applying the so-called Explicit-Implicit Interaction (EII) theory, implemented in the CLARION framework, this system can account for a range of quantitatively and qualitatively described human data, far more than can be simulated by our approach, since reaCog, in contrast, does not deal with symbolic/verbal information. Apart from this aspect, the basic difference is that EII/CLARION is a hybrid system consisting of two modules, the explicit knowledge module and the implicit knowledge module. Whereas the latter contains knowledge that is in principle not “consciously accessible”, the explicit network contains knowledge that may be accessible. Information may be redundantly stored in both subsystems. Coupling between the two modules allows them to support each other when searching for a solution to a problem. In our approach, instead of using representational differences between implicit and explicit knowledge to cope with the different accessibility, we use only one type of representation, which, however, can be activated differently, either in the reactive mode or in the “attended” mode. In our case, the localist information (motivational units) and the distributed information (procedural networks) are not separated into two modules, but form a common, decentralized structure. In this way, the reaCog system realizes the idea of recruitment, as the same clusters are used in motor tasks and in cognitive tasks. Whereas we need an explicit attention system, given by the spreading activation and winner-take-all layer, in the CLARION model decisions result from the recurrent network finding an attractor state.
Quite in contrast to our approach, many models of cognition take the anatomy of the human brain as their starting point. A prominent example is the GNOSIS project (Taylor and Zwaan, 2009). It relies on comparatively fine-grained assumptions about the functional properties of brain modules, drawing on imaging studies as well as on specific neurophysiological data. While GNOSIS concentrates mainly on perceptual, in particular visual, input, the motor aspect is somewhat underrepresented. GNOSIS shows the ability to find new solutions to a problem, including the introduction of intermediate goals. Although an attention system is applied, it is used for controlling perception, not for supporting the search, as is the case in reaCog. Related to this, the search procedure in GNOSIS, termed non-linguistic reasoning, appears to be less open, as the corresponding network is tailored to the problem at hand in order to avoid an overly large search space. In our approach, using the attention system, the complete memory can be used as the substrate for finding a solution.
4.2 Cognitive Robotic Approaches
The approaches introduced in the previous section are not embodied, and it appears difficult to envision how they could be embodied (Duch et al., 2008). Following the basic idea of embodied cognition (Brooks, 1989; Barsalou, 2008; Barsalou et al., 2012), embodiment is assumed to be necessary for any cognitive system. Our approach toward a minimal cognitive system is based on this core assumption.
Cognitive robotic approaches

The approaches introduced in the previous section are not embodied, and it appears difficult to envision how they could be embodied (Duch et al., 2008). Following the basic idea of embodied cognition (Brooks, 1989; Barsalou, 2008; Barsalou et al., 2012), embodiment is assumed to be necessary for any cognitive system. Our approach toward a minimal cognitive system is based on this core assumption. Robotic approaches have been proposed as ideal tools for research on cognition because the focus cannot be narrowed down to a single cognitive phenomenon; rather, a unified system has to be placed in the full context of different control processes and in interaction with the environment (Pezzulo et al., 2012). ReaCog as a system is clearly embodied. The procedures cannot by themselves instantiate the behavior, but require a body. The body is a constitutive part of the computational system, because sensory feedback from the body is crucially required to activate the procedural memories in the appropriate way. The overall behavior emerges from the interaction between controller, body, and environment. In the following, we review relevant embodied robotic approaches.

Today, many robotic approaches deal with the task of learning behaviors. In particular, behaviors should be adaptive, meaning that a learned behavior should be transferable to similar movements and applicable in a broader context. Deep learning approaches have proven quite successful in such tasks (e.g., Lenz et al., 2015), but many require large datasets for learning. Only recently, Levine et al. (2015) presented a powerful reinforcement learning approach in this area, in which the robot uses trial-and-error during online learning to explore possible behaviors. This allows the robot to quickly learn control policies for manipulation skills and has proven effective for quite difficult manipulation tasks. When using deep learning methods, however, it is generally difficult to access the learned model. In contrast to reaCog, such internal models are therefore not well suited for recruitment in higher-level tasks and for planning ahead. In particular, there is no explicit internal body model that could be recruited. Rather, only implicit models are learned, and these have to be acquired completely anew for every single behavior.

In the following, two robotic examples tightly related to our approach will be addressed in more detail. The approach by Cully et al. (2015) aims at solving tasks similar to those addressed by reaCog for a hexapod robot. It also applies trial-and-error learning as a general mechanism when the robot encounters a novel situation; in their case, these new situations are walking up a slope or losing a leg. There are, however, some differences compared to reaCog. Most notably, the testing of novel behaviors is done on the real robot. This is possible because the trial-and-error method does not apply discrete behaviors. Instead, central to the approach of Cully et al. (2015) is the idea of a behavioral parametrization that allows the currently experienced situation to be characterized in a continuous, low-dimensional space. A complete mapping toward optimal behaviors is constructed offline in advance (Mouret and Clune, 2015). These pre-computed behaviors are exploited when a new situation or problem is encountered. As the behavioral space is continuous, the pre-computed behaviors can be adapted to find a new behavior. Further, there is no explicit body model that is shared between different behaviors. Instead, the memory approximates an incomplete body model, as it contains only a limited range of those movements that are geometrically possible.
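The following sketch illustrates, under strongly simplifying assumptions, the general idea of exploiting such a pre-computed behavior map through a small number of online trials. The two-dimensional descriptor, the predicted performance values, and the ranking-based search are invented for illustration; the actual approach of Cully et al. (2015) builds the map with MAP-Elites (Mouret and Clune, 2015) and searches it with Bayesian optimization on the physical robot.

```python
import random

# Illustrative sketch of exploiting a pre-computed behavior map by online
# trial and error, loosely inspired by Cully et al. (2015). All numbers and
# the descriptor (stance fraction of two leg groups) are invented.

# Offline: behavior descriptor -> predicted performance for the intact robot.
behavior_map = {
    (round(a * 0.1, 1), round(b * 0.1, 1)): 0.5 + 0.03 * a + 0.02 * b
    for a in range(11) for b in range(11)
}

def trial_on_damaged_robot(descriptor, damaged_group=0):
    """Stand-in for one trial on the real robot after losing leg group 0."""
    penalty = 0.4 * descriptor[damaged_group]   # relying on damaged legs hurts
    return behavior_map[descriptor] - penalty + random.gauss(0.0, 0.02)

def adapt(max_trials=15):
    """Test the most promising map entries and keep the best observed one."""
    ranked = sorted(behavior_map, key=behavior_map.get, reverse=True)
    best_descriptor, best_score = None, float("-inf")
    for descriptor in ranked[:max_trials]:
        score = trial_on_damaged_robot(descriptor)
        if score > best_score:
            best_descriptor, best_score = descriptor, score
    return best_descriptor, best_score

print(adapt())   # typically favors behaviors relying less on the damaged legs
```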
In contrast, reaCog, using its internal body model, can exploit all geometrically possible solutions and is not constrained to searching in a continuous space, as illustrated by our example case, in which a single leg is selected to perform a movement completely out of its usual context. While there are only a small number of robotic approaches dealing with explicit internal simulation, most of these use very simple robotic architectures with only a very small number of degrees of freedom [for examples see Svensson et al. (2009) or Chersi et al. (2013)]. It should further be mentioned that predictive models are also used to anticipate the visual effects of the robot's movements (e.g., Hoffmann, 2007; Möller and Schenck, 2008). With respect to reaCog, the most similar approach has been pursued by Bongard et al. (2006). These authors use a four-legged robot with eight degrees of freedom (DoFs) that, through motor babbling (i.e., randomly selected motor commands), learns the relation between motor output and its sensory consequences. This information is used to distinguish between a limited number of given hypotheses concerning the possible structure of the body, and the best-fitting body model is selected. After the body model has been learned, the robot learns to move in a second step. To this end, the body model is used to perform different simulated behaviors, serving purely as a forward model. Based on a reward given by an external supervisor and an optimization algorithm, the best controller (a sequence of movements of the eight joints) is then executed on the robot. Continuous learning allows the robot to register changes in its body morphology and to update the body model accordingly. As the most important difference, Bongard et al. (2006) distinguish between the reactive system and the internal predictive body model; the central idea of their approach is that both are learned in distinct phases, one after the other. In reaCog, the body model is part of the reactive system and is required for the control of behavior. This allows different controllers to drive the same body part and to use the same body model for different functions (e.g., using a limb as a leg or as a gripper; Schilling et al., 2013a, Figure 10). In addition, different from our approach, Bongard et al. (2006) do not use artificial neural networks (ANNs) for the body model and the controller, but an explicit representation, because applying ANNs would make it "difficult to assess the correctness of the model" (Bongard et al., 2006, p. 1119). ReaCog deals with a much more complex structure, controlling 18 DoFs instead of the eight DoFs used by Bongard et al. (2006), which makes an explicit representation even more problematic.

Different from their approach, we do not consider how the body model and the basic controllers are learned, but take both as given (or "innate"). While the notion of innate body representations is controversial (de Vignemont, 2010), there is at least a general consensus that some form of innate body model exists (often referred to as the body schema), reflecting general structural and dynamic properties of the body (Carruthers, 2008), which is shaped and develops further during maturation. This aspect is captured by our body model, which encodes general structural relations of the body in the service of motor control, but may adapt to developmental changes.
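As a concrete illustration of how such a structural body model can be recruited as a forward model for internal simulation, consider the following minimal sketch. The planar two-joint leg, the segment lengths, and the grid of candidate joint angles are hypothetical simplifications; reaCog's actual body model is a recurrent network with pattern completion covering the complete 18-DoF body.

```python
import math

# Minimal kinematic sketch of a shared internal body model: a single
# two-joint leg in the plane (segment lengths and angles are arbitrary).
# A plain forward-kinematics function stands in for the body model to show
# how one model can be recruited for internal simulation by several behaviors.

L1, L2 = 0.5, 0.4  # assumed segment lengths (arbitrary units)

def forward_model(alpha, beta):
    """Predict the foot (or gripper tip) position for given joint angles."""
    x = L1 * math.cos(alpha) + L2 * math.cos(alpha + beta)
    y = L1 * math.sin(alpha) + L2 * math.sin(alpha + beta)
    return x, y

def internally_simulate(candidate_angles, target, tolerance=0.05):
    """Test candidate movements in simulation instead of on the body."""
    for alpha, beta in candidate_angles:
        x, y = forward_model(alpha, beta)
        if math.dist((x, y), target) < tolerance:
            return (alpha, beta)   # a movement worth executing was found
    return None                    # nothing is executed while searching

# The same model is recruited whether the limb acts as a leg (reach a
# foothold) or as a gripper (reach an object); only the target changes.
foothold = (0.6, -0.3)
candidates = [(a * 0.1, b * 0.1) for a in range(-15, 16) for b in range(-15, 16)]
print(internally_simulate(candidates, foothold))
```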
While currently only kinematic properties are included, dynamic influences can be integrated into the model, as has been shown in Schilling (2009). A further important difference concerns the structure of the memory. Whereas in Bongard's approach one monolithic controller is learned that deals with eight DoFs and produces one specific behavior, in reaCog the controller consists of modularized procedural memories. This memory architecture allows for selection between different states and therefore between different behaviors.
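To make the contrast with a monolithic controller explicit, the following minimal sketch represents procedural memories as separately selectable controller modules that all read the same sensor data and could share the same body model. The module names, the sensor dictionary, and the activation-based selector are assumptions for illustration only and do not correspond to reaCog's actual procedures.

```python
# Minimal sketch of modularized procedural memories: each behavior is a small,
# independent controller, and a selector activates exactly one of them
# depending on the current motivational state (hypothetical names and values).

def walk_forward(sensors):
    return {"leg_command": "stance_backwards", "speed": sensors["speed"]}

def walk_backward(sensors):
    return {"leg_command": "stance_forwards", "speed": -sensors["speed"]}

def use_front_leg_as_gripper(sensors):
    return {"leg_command": "reach", "target": sensors["object_position"]}

# Modularized procedural memory: behaviors are separate, selectable units.
procedural_memory = {
    "forward": walk_forward,
    "backward": walk_backward,
    "grasp": use_front_leg_as_gripper,
}

def select_and_run(motivational_state, sensors):
    """Select and run the procedure whose motivational unit is most active."""
    active = max(motivational_state, key=motivational_state.get)
    return active, procedural_memory[active](sensors)

sensors = {"speed": 0.3, "object_position": (0.6, -0.3)}
motivation = {"forward": 0.2, "backward": 0.1, "grasp": 0.7}
print(select_and_run(motivation, sensors))
```

Selecting among such modules, rather than retraining one monolithic controller, is what allows the same body model to be reused across different behaviors.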
[ "20964882", "17998071", "17705682", "9639677", "15010478", "17110570", "16099349", "18359642", "16530429", "1688670", "23785343", "26017452", "21521609", "19786038", "15939768", "15939767", "18089037", "8094962", "23608361", "15068922", "21038261", "21601842", "20658861", "12039605", "17010571", "18394222", "25301621", "11373140", "8805419", "7705056", "15709874", "18423906", "21635344", "25741274", "3214648", "23346065", "18006736", "15959465", "24769063", "26891625", "16222545", "8713554", "23824506", "23060845", "17106698", "24062682", "16830135", "21969681", "14599324", "16209771", "15631592", "14534588", "15111010", "11229402", "27359335", "12662752" ]
[ { "pmid": "20964882", "title": "Neural reuse: a fundamental organizational principle of the brain.", "abstract": "An emerging class of theories concerning the functional structure of the brain takes the reuse of neural circuitry for various cognitive purposes to be a central organizational principle. According to these theories, it is quite common for neural circuits established for one purpose to be exapted (exploited, recycled, redeployed) during evolution or normal development, and be put to different uses, often without losing their original functions. Neural reuse theories thus differ from the usual understanding of the role of neural plasticity (which is, after all, a kind of reuse) in brain organization along the following lines: According to neural reuse, circuits can continue to acquire new uses after an initial or original function is established; the acquisition of new uses need not involve unusual circumstances such as injury or loss of established function; and the acquisition of a new use need not involve (much) local change to circuit structure (e.g., it might involve only the establishment of functional connections to new neural partners). Thus, neural reuse theories offer a distinct perspective on several topics of general interest, such as: the evolution and development of the brain, including (for instance) the evolutionary-developmental pathway supporting primate tool use and human language; the degree of modularity in brain organization; the degree of localization of cognitive function; and the cortical parcellation problem and the prospects (and proper methods to employ) for function to structure mapping. The idea also has some practical implications in the areas of rehabilitative medicine and machine interface design." }, { "pmid": "17998071", "title": "An architectural model of conscious and unconscious brain functions: Global Workspace Theory and IDA.", "abstract": "While neural net models have been developed to a high degree of sophistication, they have some drawbacks at a more integrative, \"architectural\" level of analysis. We describe a \"hybrid\" cognitive architecture that is implementable in neuronal nets, and which has uniform brainlike features, including activation-passing and highly distributed \"codelets,\" implementable as small-scale neural nets. Empirically, this cognitive architecture accounts qualitatively for the data described by Baars' Global Workspace Theory (GWT), and Franklin's LIDA architecture, including state-of-the-art models of conscious contents in action-planning, Baddeley-style Working Memory, and working models of episodic and semantic longterm memory. These terms are defined both conceptually and empirically for the current theoretical domain. The resulting architecture meets four desirable goals for a unified theory of cognition: practical workability, autonomous agency, a plausible role for conscious cognition, and translatability into plausible neural terms. It also generates testable predictions, both empirical and computational." }, { "pmid": "17705682", "title": "Grounded cognition.", "abstract": "Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. 
Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition." }, { "pmid": "15010478", "title": "Stick insect locomotion in a complex environment: climbing over large gaps.", "abstract": "In a complex environment, animals are challenged by various types of obstacles. This requires the controller of their walking system to be highly flexible. In this study, stick insects were presented with large gaps to cross in order to observe how locomotion can be adapted to challenging environmental situations. Different approaches were used to investigate the sequence of gap-crossing behaviour. A detailed video analysis revealed that gap-crossing behaviour resembles modified walking behaviour with additional step types. The walking sequence is interrupted by an interval of exploration, in which the insect probes the gap space with its antennae and front legs. When reaching the gap, loss of contact of an antenna with the ground does not elicit any observable reactions. In contrast, an initial front leg step into the gap that often follows antennal 'non-contact' evokes slowing down of stance velocity. An ablation experiment showed that the far edge of the gap is detected by tactile antennal stimulation rather than by vision. Initial contact of an antenna or front leg with the far edge of the gap represents a 'point of no return', after which gap crossing is always successfully completed. Finally, flow chart diagrams of the gap-crossing sequence were constructed based on an ethogram of single elements of behaviour. Comparing flow charts for two gap sizes revealed differences in the frequency and succession of these elements, especially during the first part of the sequence." }, { "pmid": "17110570", "title": "Resilient machines through continuous self-modeling.", "abstract": "Animals sustain the ability to operate after injury by creating qualitatively different compensatory behaviors. Although such robustness would be desirable in engineered systems, most machines fail in the face of unexpected damage. We describe a robot that can recover from such change autonomously, through continuous self-modeling. A four-legged machine uses actuation-sensation relationships to indirectly infer its own structure, and it then uses this self-model to generate forward locomotion. When a leg part is removed, it adapts the self-models, leading to the generation of alternative gaits. This concept may help develop more robust machines and shed light on self-modeling in animals." }, { "pmid": "16099349", "title": "Listening to action-related sentences modulates the activity of the motor system: a combined TMS and behavioral study.", "abstract": "Transcranial magnetic stimulation (TMS) and a behavioral paradigm were used to assess whether listening to action-related sentences modulates the activity of the motor system. By means of single-pulse TMS, either the hand or the foot/leg motor area in the left hemisphere was stimulated in distinct experimental sessions, while participants were listening to sentences expressing hand and foot actions. Listening to abstract content sentences served as a control. 
Motor evoked potentials (MEPs) were recorded from hand and foot muscles. Results showed that MEPs recorded from hand muscles were specifically modulated by listening to hand-action-related sentences, as were MEPs recorded from foot muscles by listening to foot-action-related sentences. This modulation consisted of an amplitude decrease of the recorded MEPs. In the behavioral task, participants had to respond with the hand or the foot while listening to actions expressing hand and foot actions, as compared to abstract sentences. Coherently with the results obtained with TMS, when the response was given with the hand, reaction times were slower during listening to hand-action-related sentences, while when the response was given with the foot, reaction times were slower during listening to foot-action-related sentences. The present data show that processing verbally presented actions activates different sectors of the motor system, depending on the effector used in the listened-to action." }, { "pmid": "18359642", "title": "Types of body representation and the sense of embodiment.", "abstract": "The sense of embodiment is vital for self recognition. An examination of anosognosia for hemiplegia--the inability to recognise that one is paralysed down one side of one's body--suggests the existence of 'online' and 'offline' representations of the body. Online representations of the body are representations of the body as it is currently, are newly constructed moment by moment and are directly \"plugged into\" current perception of the body. In contrast, offline representations of the body are representations of what the body is usually like, are relatively stable and are constructed from online representations. This distinction is supported by an analysis of phantom limb--the feeling that an amputated limb is still present--phenomena. Initially it seems that the sense of embodiment may arise from either of these types of representation; however, an integrated representation of the body seems to be required. It is suggested information from vision and emotions is involved in generating these representations. A lack of access to online representations of the body does not necessarily lead to a loss in the sense of embodiment. An integrated offline representation of the body could account for the sense of embodiment and perform the functions attributed to this sense." }, { "pmid": "16530429", "title": "Building a motor simulation de novo: observation of dance by dancers.", "abstract": "Research on action simulation identifies brain areas that are active while imagining or performing simple overlearned actions. Are areas engaged during imagined movement sensitive to the amount of actual physical practice? In the present study, participants were expert dancers who learned and rehearsed novel, complex whole-body dance sequences 5 h a week across 5 weeks. Brain activity was recorded weekly by fMRI as dancers observed and imagined performing different movement sequences. Half these sequences were rehearsed and half were unpracticed control movements. After each trial, participants rated how well they could perform the movement. We hypothesized that activity in premotor areas would increase as participants observed and simulated movements that they had learnt outside the scanner. Dancers' ratings of their ability to perform rehearsed sequences, but not the control sequences, increased with training. 
When dancers observed and simulated another dancer's movements, brain regions classically associated with both action simulation and action observation were active, including inferior parietal lobule, cingulate and supplementary motor areas, ventral premotor cortex, superior temporal sulcus and primary motor cortex. Critically, inferior parietal lobule and ventral premotor activity was modulated as a function of dancers' ratings of their own ability to perform the observed movements and their motor experience. These data demonstrate that a complex motor resonance can be built de novo over 5 weeks of rehearsal. Furthermore, activity in premotor and parietal areas during action simulation is enhanced by the ability to execute a learned action irrespective of stimulus familiarity or semantic label." }, { "pmid": "1688670", "title": "What mechanisms coordinate leg movement in walking arthropods?", "abstract": "The construction of artificial walking machines has been a challenging task for engineers for several centuries. Advances in computer technology have stimulated this research in the past two decades, and enormous progress has been made, particularly in recent years. Nevertheless, in comparing the walk of a six-legged robot with the walk of an insect, the immense differences are immediately obvious. The walking of an animal is much more versatile, and seems to be more effective and elegant. Thus it is useful to consider the corresponding biological mechanisms in order to apply these or similar mechanisms to the control of walking legs in machines. Until recently, little information on this paper summarizes recent developments." }, { "pmid": "23785343", "title": "How and to what end may consciousness contribute to action? Attributing properties of consciousness to an embodied, minimally cognitive artificial neural network.", "abstract": "An artificial neural network called reaCog is described which is based on a decentralized, reactive and embodied architecture developed to control non-trivial hexapod walking in an unpredictable environment (Walknet) while using insect-like navigation (Navinet). In reaCog, these basic networks are extended in such a way that the complete system, reaCog, adopts the capability of inventing new behaviors and - via internal simulation - of planning ahead. This cognitive expansion enables the reactive system to be enriched with additional procedures. Here, we focus on the question to what extent properties of phenomena to be characterized on a different level of description as for example consciousness can be found in this minimally cognitive system. Adopting a monist view, we argue that the phenomenal aspect of mental phenomena can be neglected when discussing the function of such a system. Under this condition, reaCog is discussed to be equipped with properties as are bottom-up and top-down attention, intentions, volition, and some aspects of Access Consciousness. These properties have not been explicitly implemented but emerge from the cooperation between the elements of the network. The aspects of Access Consciousness found in reaCog concern the above mentioned ability to plan ahead and to invent and guide (new) actions. Furthermore, global accessibility of memory elements, another aspect characterizing Access Consciousness is realized by this network. reaCog allows for both reactive/automatic control and (access-) conscious control of behavior. We discuss examples for interactions between both the reactive domain and the conscious domain. 
Metacognition or Reflexive Consciousness is not a property of reaCog. Possible expansions are discussed to allow for further properties of Access Consciousness, verbal report on internal states, and for Metacognition. In summary, we argue that already simple networks allow for properties of consciousness if leaving the phenomenal aspect aside." }, { "pmid": "26017452", "title": "Robots that can adapt like animals.", "abstract": "Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot 'think outside the box' to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage, but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot's prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury." }, { "pmid": "21521609", "title": "Experimental and theoretical approaches to conscious processing.", "abstract": "Recent experimental studies and theoretical models have begun to address the challenge of establishing a causal link between subjective conscious experience and measurable neuronal activity. The present review focuses on the well-delimited issue of how an external or internal piece of information goes beyond nonconscious processing and gains access to conscious processing, a transition characterized by the existence of a reportable subjective experience. Converging neuroimaging and neurophysiological data, acquired during minimal experimental contrasts between conscious and nonconscious processing, point to objective neural measures of conscious access: late amplification of relevant sensory activity, long-distance cortico-cortical synchronization at beta and gamma frequencies, and \"ignition\" of a large-scale prefronto-parietal network. 
We compare these findings to current theoretical models of conscious processing, including the Global Neuronal Workspace (GNW) model according to which conscious access occurs when incoming information is made globally available to multiple brain systems through a network of neurons with long-range axons densely distributed in prefrontal, parieto-temporal, and cingulate cortices. The clinical implications of these results for general anesthesia, coma, vegetative state, and schizophrenia are discussed." }, { "pmid": "19786038", "title": "Body schema and body image--pros and cons.", "abstract": "There seems to be no dimension of bodily awareness that cannot be disrupted. To account for such variety, there is a growing consensus that there are at least two distinct types of body representation that can be impaired, the body schema and the body image. However, the definition of these notions is often unclear. The notion of body image has attracted most controversy because of its lack of unifying positive definition. The notion of body schema, onto which there seems to be a more widespread agreement, also covers a variety of sensorimotor representations. Here, I provide a conceptual analysis of the body schema contrasting it with the body image(s) as well as assess whether (i) the body schema can be specifically impaired, while other types of body representation are preserved; and (ii) the body schema obeys principles that are different from those that apply to other types of body representation." }, { "pmid": "15939768", "title": "Context-dependent changes in strength and efficacy of leg coordination mechanisms.", "abstract": "Appropriate coordination of stepping in adjacent legs is crucial for stable walking. Several leg coordination rules have been derived from behavioural experiments on walking insects, some of which also apply to arthropods with more than six legs and to four-legged walking vertebrates. Three of these rules affect the timing of stance-swing transition [rules 1 to 3 (sensu Cruse)]. They can give rise to normal leg coordination and adaptive responses to disturbances, as shown by kinematic simulations and dynamic hardware tests. In spite of their importance to the study of animal walking, the coupling strength associated with these rules has never been measured experimentally. Generally coupling strength of the underlying mechanisms has been considered constant rather than context-dependent. The present study analyses stepping patterns of the stick insect Carausius morosus during straight and curve walking sequences. To infer strength and efficacy of coupling between pairs of sender and receiver legs, the likelihood of the receiver leg being in swing is determined, given a certain delay relative to the time of a swing-stance (or stance-swing) transition in the sender leg. This is compared to a corresponding measure for independent, hence uncoupled, step sequences. The difference is defined as coupling strength. The ratio of coupling strength and its theoretical maximum is defined as efficacy. Irrespective of the coordination rule, coupling strength between ipsilateral leg pairs is at least twice that of contralateral leg pairs, being strongest between ipsilateral hind and middle legs and weakest between contralateral middle legs. Efficacy is highest for inhibitory rule 1, reaching 84-95% for ipsilateral and 29-65% for contralateral leg pairs. Efficacy of excitatory rules 2 and 3 ranges between 35-56% for ipsilateral and 8-21% for contralateral leg pairs. 
The behavioural transition from straight to curve walking is associated with context-dependent changes in coupling strength, increasing in both outer leg pairs and decreasing between inner hind and middle leg. Thus, the coordination rules that are thought to underlie many adaptive properties of the walking system, themselves adapt in a context-dependent manner." }, { "pmid": "15939767", "title": "The behavioural transition from straight to curve walking: kinetics of leg movement parameters and the initiation of turning.", "abstract": "The control of locomotion requires the ability to adapt movement sequences to the behavioural context of the animal. In hexapod walking, adaptive behavioural transitions require orchestration of at least 18 leg joints and twice as many muscle groups. Although kinematics of locomotion has been studied in several arthropod species and in a range of different behaviours, almost nothing is known about the transition from one behavioural state to another. Implicitly, most studies on context-dependency assume that all parameters that undergo a change during a behavioural transition do so at the same rate. The present study tests this assumption by analysing the sequence of kinematic events during turning of the stick insect Carausius morosus, and by measuring how the time courses of the changing parameters differ between legs. Turning was triggered reliably at a known instant in time by means of the optomotor response to large-field visual motion. Thus, knowing the start point of the transition, the kinematic parameters that initiate turning could be ranked according to their time constants. Kinematics of stick insect walking vary considerably among trials and within trials. As a consequence, the behavioural states of straight walking and curve walking are described by the distributions of 13 kinematic parameters per leg and of orientation angles of head and antennae. The transitions between the behavioural states are then characterised by the fraction of the variance within states by which these distributions differ, and by the rate of change of the corresponding time courses. The antennal optomotor response leads that of the locomotor system. Visually elicited turning is shown to be initiated by stance direction changes of both front legs. The transition from straight to curve walking in stick insects follows different time courses for different legs, with time constants of kinematic parameters ranging from 1.7 s to more than 3 s. Therefore, turning is a behavioural transition that involves a characteristic orchestration of events rather than synchronous parallel actions with a single time constant." }, { "pmid": "18089037", "title": "Behaviour-based modelling of hexapod locomotion: linking biology and technical application.", "abstract": "Walking in insects and most six-legged robots requires simultaneous control of up to 18 joints. Moreover, the number of joints that are mechanically coupled via body and ground varies from one moment to the next, and external conditions such as friction, compliance and slope of the substrate are often unpredictable. Thus, walking behaviour requires adaptive, context-dependent control of many degrees of freedom. As a consequence, modelling legged locomotion addresses many aspects of any motor behaviour in general. Based on results from behavioural experiments on arthropods, we describe a kinematic model of hexapod walking: the distributed artificial neural network controller walknet. 
Conceptually, the model addresses three basic problems in legged locomotion. (I) First, coordination of several legs requires coupling between the step cycles of adjacent legs, optimising synergistic propulsion, but ensuring stability through flexible adjustment to external disturbances. A set of behaviourally derived leg coordination rules can account for decentralised generation of different gaits, and allows stable walking of the insect model as well as of a number of legged robots. (II) Second, a wide range of different leg movements must be possible, e.g. to search for foothold, grasp for objects or groom the body surface. We present a simple neural network controller that can simulate targeted swing trajectories, obstacle avoidance reflexes and cyclic searching-movements. (III) Third, control of mechanically coupled joints of the legs in stance is achieved by exploiting the physical interactions between body, legs and substrate. A local positive displacement feedback, acting on individual leg joints, transforms passive displacement of a joint into active movement, generating synergistic assistance reflexes in all mechanically coupled joints." }, { "pmid": "8094962", "title": "Neural Darwinism: selection and reentrant signaling in higher brain function.", "abstract": "Variation and selection within neural populations play key roles in the development and function of the brain. In this article, I review a population theory of the nervous system aimed at understanding the significance of these processes. Since its original formulation in 1978, considerable evidence has accumulated to support this theory of neuronal group selection. Extensive neural modeling based on the theory has provided useful insights into several outstanding neurobiological problems including those concerned with integration of cortical function, sensorimotor control, and perceptually based behavior." }, { "pmid": "23608361", "title": "Where's the action? The pragmatic turn in cognitive science.", "abstract": "In cognitive science, we are currently witnessing a 'pragmatic turn', away from the traditional representation-centered framework towards a paradigm that focuses on understanding cognition as 'enactive', as skillful activity that involves ongoing interaction with the external world. The key premise of this view is that cognition should not be understood as providing models of the world, but as subserving action and being grounded in sensorimotor coupling. Accordingly, cognitive processes and their underlying neural activity patterns should be studied primarily with respect to their role in action generation. We suggest that such an action-oriented paradigm is not only conceptually viable, but already supported by much experimental evidence. Numerous findings either overtly demonstrate the action-relatedness of cognition or can be re-interpreted in this new framework. We argue that new vistas on the functional relevance and the presumed 'representational' nature of neural processes are likely to emerge from this paradigm." }, { "pmid": "21038261", "title": "The Brain's concepts: the role of the Sensory-motor system in conceptual knowledge.", "abstract": "Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? 
A common philosophical position is that all concepts-even concepts about action and perception-are symbolic and abstract, and therefore must be implemented outside the brain's sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. Central to this picture are the neural theory of language and the theory of cogs, according to which, brain structures in the sensory-motor regions are exploited to characterise the so-called \"abstract\" concepts that constitute the meanings of grammatical constructions and general inference patterns." }, { "pmid": "21601842", "title": "Action-based language: a theory of language acquisition, comprehension, and production.", "abstract": "Evolution and the brain have done a marvelous job solving many tricky problems in action control, including problems of learning, hierarchical control over serial behavior, continuous recalibration, and fluency in the face of slow feedback. Given that evolution tends to be conservative, it should not be surprising that these solutions are exploited to solve other tricky problems, such as the design of a communication system. We propose that a mechanism of motor control, paired controller/predictor models, has been exploited for language learning, comprehension, and production. Our account addresses the development of grammatical regularities and perspective, as well as how linguistic symbols become meaningful through grounding in perception, action, and emotional systems." }, { "pmid": "20658861", "title": "Incubation, insight, and creative problem solving: a unified theory and a connectionist model.", "abstract": "This article proposes a unified framework for understanding creative problem solving, namely, the explicit-implicit interaction theory. This new theory of creative problem solving constitutes an attempt at providing a more unified explanation of relevant phenomena (in part by reinterpreting/integrating various fragmentary existing theories of incubation and insight). The explicit-implicit interaction theory relies mainly on 5 basic principles, namely, (a) the coexistence of and the difference between explicit and implicit knowledge, (b) the simultaneous involvement of implicit and explicit processes in most tasks, (c) the redundant representation of explicit and implicit knowledge, (d) the integration of the results of explicit and implicit processing, and (e) the iterative (and possibly bidirectional) processing. A computational implementation of the theory is developed based on the CLARION cognitive architecture and applied to the simulation of relevant human data. This work represents an initial step in the development of process-based theories of creativity encompassing incubation, insight, and various other related phenomena." }, { "pmid": "12039605", "title": "Conscious thought as simulation of behaviour and perception.", "abstract": "A 'simulation' theory of cognitive function can be based on three assumptions about brain function. First, behaviour can be simulated by activating motor structures, as during an overt action but suppressing its execution. Second, perception can be simulated by internal activation of sensory cortex, as during normal perception of external stimuli. 
Third, both overt and covert actions can elicit perceptual simulation of their normal consequences. A large body of evidence supports these assumptions. It is argued that the simulation approach can explain the relations between motor, sensory and cognitive functions and the appearance of an inner world." }, { "pmid": "17010571", "title": "Perception through visuomotor anticipation in a mobile robot.", "abstract": "Several scientists suggested that certain perceptual qualities are based on sensorimotor anticipation: for example, the softness of a sponge is perceived by anticipating the sensations resulting from a grasping movement. For the perception of spatial arrangements, this article demonstrates that this concept can be realized in a mobile robot. The robot first learned to predict how its visual input changes under movement commands. With this ability, two perceptual tasks could be solved: judging the distance to an obstacle in front by 'mentally' simulating a movement toward the obstacle, and recognizing a dead end by simulating either an obstacle-avoidance algorithm or a recursive search for an exit. A simulated movement contained a series of prediction steps. In each step, a multilayer perceptron anticipated the next image, which, however, became increasingly noisy. To denoise an image, it was split into patches, and each patch was projected onto a manifold obtained by modelling the density of the distribution of training patches with a mixture of Gaussian functions." }, { "pmid": "18394222", "title": "The shared circuits model (SCM): how control, mirroring, and simulation can enable imitation, deliberation, and mindreading.", "abstract": "Imitation, deliberation, and mindreading are characteristically human sociocognitive skills. Research on imitation and its role in social cognition is flourishing across various disciplines. Imitation is surveyed in this target article under headings of behavior, subpersonal mechanisms, and functions of imitation. A model is then advanced within which many of the developments surveyed can be located and explained. The shared circuits model (SCM) explains how imitation, deliberation, and mindreading can be enabled by subpersonal mechanisms of control, mirroring, and simulation. It is cast at a middle, functional level of description, that is, between the level of neural implementation and the level of conscious perceptions and intentional actions. The SCM connects shared informational dynamics for perception and action with shared informational dynamics for self and other, while also showing how the action/perception, self/other, and actual/possible distinctions can be overlaid on these shared informational dynamics. It avoids the common conception of perception and action as separate and peripheral to central cognition. Rather, it contributes to the situated cognition movement by showing how mechanisms for perceiving action can be built on those for active perception.;>;>The SCM is developed heuristically, in five layers that can be combined in various ways to frame specific ontogenetic or phylogenetic hypotheses. The starting point is dynamic online motor control, whereby an organism is closely attuned to its embedding environment through sensorimotor feedback. Onto this are layered functions of prediction and simulation of feedback, mirroring, simulation of mirroring, monitored inhibition of motor output, and monitored simulation of input. 
Finally, monitored simulation of input specifying possible actions plus inhibited mirroring of such possible actions can generate information about the possible as opposed to actual instrumental actions of others, and the possible causes and effects of such possible actions, thereby enabling strategic social deliberation. Multiple instances of such shared circuits structures could be linked into a network permitting decomposition and recombination of elements, enabling flexible control, imitative learning, understanding of other agents, and instrumental and strategic deliberation. While more advanced forms of social cognition, which require tracking multiple others and their multiple possible actions, may depend on interpretative theorizing or language, the SCM shows how layered mechanisms of control, mirroring, and simulation can enable distinctively human cognitive capacities for imitation, deliberation, and mindreading." }, { "pmid": "25301621", "title": "Biorobotics: using robots to emulate and investigate agile locomotion.", "abstract": "The graceful and agile movements of animals are difficult to analyze and emulate because locomotion is the result of a complex interplay of many components: the central and peripheral nervous systems, the musculoskeletal system, and the environment. The goals of biorobotics are to take inspiration from biological principles to design robots that match the agility of animals, and to use robots as scientific tools to investigate animal adaptive behavior. Used as physical models, biorobots contribute to hypothesis testing in fields such as hydrodynamics, biomechanics, neuroscience, and prosthetics. Their use may contribute to the design of prosthetic devices that more closely take human locomotion principles into account." }, { "pmid": "11373140", "title": "Neural simulation of action: a unifying mechanism for motor cognition.", "abstract": "Paradigms drawn from cognitive psychology have provided new insight into covert stages of action. These states include not only intending actions that will eventually be executed, but also imagining actions, recognizing tools, learning by observation, or even understanding the behavior of other people. Studies using techniques for mapping brain activity, probing cortical excitability, or measuring the activity of peripheral effectors in normal human subjects and in patients all provide evidence of a subliminal activation of the motor system during these cognitive states. The hypothesis that the motor system is part of a simulation network that is activated under a variety of conditions in relation to action, either self-intended or observed from other individuals, will be developed. The function of this process of simulation would be not only to shape the motor system in anticipation to execution, but also to provide the self with information on the feasibility and the meaning of potential actions." }, { "pmid": "8805419", "title": "Mental motor imagery: a window into the representational stages of action.", "abstract": "The physiological basis of mental states can be effectively studied by combining cognitive psychology with human neuroscience. Recent research has employed mental motor imagery in normal and brain-damaged subjects to decipher the content and the structure of covert processes preceding the execution of action. The mapping of brain activity during motor imagery discloses a pattern of activation similar to that of an executed action." 
}, { "pmid": "7705056", "title": "Comprehension of cause-effect relations in a tool-using task by chimpanzees (Pan troglodytes).", "abstract": "Five chimpanzees (Pan troglodytes) were tested to assess their understanding of causality in a tool task. The task consisted of a transparent tube with a trap-hole drilled in its middle. A reward was randomly placed on either side of the hole. Depending on which side the chimpanzee inserted the stick into, the candy was either pushed out of the tube or into the trap. In Experiment 1, the success rate of 2 chimpanzees rose highly above chance, but that of the other subjects did not. Results show that the 2 successful chimpanzees selected the correct side for insertion beforehand. Experiment 2 ruled out the possibility that their success was due to a distance-based associative rule, and the results favor an alternative hypothesis that relates success to an understanding of the causal relation between the tool-using action and its outcome." }, { "pmid": "15709874", "title": "Recognizing people from their movement.", "abstract": "Human observers demonstrate impressive visual sensitivity to human movement. What defines this sensitivity? If motor experience influences the visual analysis of action, then observers should be most sensitive to their own movements. If view-dependent visual experience determines visual sensitivity to human movement, then observers should be most sensitive to the movements of their friends. To test these predictions, participants viewed sagittal displays of point-light depictions of themselves, their friends, and strangers performing various actions. In actor identification and discrimination tasks, sensitivity to one's own motion was highest. Visual sensitivity to friends', but not strangers', actions was above chance. Performance was action dependent. Control studies yielded chance performance with inverted and static displays, suggesting that form and low-motion cues did not define performance. These results suggest that both motor and visual experience define visual sensitivity to human action." }, { "pmid": "18423906", "title": "On the other hand: dummy hands and peripersonal space.", "abstract": "Where are my hands? The brain can answer this question using sensory information arising from vision, proprioception, or touch. Other sources of information about the position of our hands can be derived from multisensory interactions (or potential interactions) with our close environment, such as when we grasp or avoid objects. The pioneering study of multisensory representations of peripersonal space was published in Behavioural Brain Research almost 30 years ago [Rizzolatti G, Scandolara C, Matelli M, Gentilucci M. Afferent properties of periarcuate neurons in macaque monkeys. II. Visual responses. Behav Brain Res 1981;2:147-63]. More recently, neurophysiological, neuroimaging, neuropsychological, and behavioural studies have contributed a wealth of evidence concerning hand-centred representations of objects in peripersonal space. This evidence is examined here in detail. In particular, we focus on the use of artificial dummy hands as powerful instruments to manipulate the brain's representation of hand position, peripersonal space, and of hand ownership. We also review recent studies of the 'rubber hand illusion' and related phenomena, such as the visual capture of touch, and the recalibration of hand position sense, and discuss their findings in the light of research on peripersonal space. 
Finally, we propose a simple model that situates the 'rubber hand illusion' in the neurophysiological framework of multisensory hand-centred representations of space." }, { "pmid": "21635344", "title": "Bootstrapping cognition from behavior-a computerized thought experiment.", "abstract": "We show that simple perceptual competences can emerge from an internal simulation of action effects and are thus grounded in behavior. A simulated agent learns to distinguish between dead ends and corridors without the necessity to represent these concepts in the sensory domain. Initially, the agent is only endowed with a simple value system and the means to extract low-level features from an image. In the interaction with the environment, it acquires a visuo-tactile forward model that allows the agent to predict how the visual input is changing under its movements, and whether movements will lead to a collision. From short-term predictions based on the forward model, the agent learns an inverse model. The inverse model in turn produces suggestions about which actions should be simulated in long-term predictions, and long-term predictions eventually give rise to the perceptual ability." }, { "pmid": "25741274", "title": "Revisiting the body-schema concept in the context of whole-body postural-focal dynamics.", "abstract": "The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory-motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole-body, and a postural task, namely maintaining overall stability." }, { "pmid": "3214648", "title": "Kinematic networks. A distributed model for representing and regularizing motor redundancy.", "abstract": "Motor control in primates relates to a system which is highly redundant from the mechanical point of view--redundancy coming from an imbalance between the set of independently controllable variables and the set of system variables. The consequence is the manifestation of a broad class of ill-posed problems, problems for which it is difficult to identify unique solutions. For example (i) the problem of determining the coordinated patterns of rotation of the arm joints for a planned trajectory of the hand; (ii) the problem of determining the distribution of muscle forces for a desired set of joint torques. Ill-posed problems, in general, require regularization methods which allow to spell acceptable, if not unique, solutions. 
In the case of the motor system, we propose that the basic regularization mechanism is provided by the potential fields generated by the elastic properties of muscles, according to an organizational principle that we call \"Passive Motion Paradigm\". The physiological basis of this hypothesis is reviewed and a \"Kinematic Network\" (K-net) model is proposed that expresses the kinematic transformations and the causal relations implied by elasticity. Moreover, it is shown how K-nets can be obtained from a kinematic \"Body Model\", in the context of a specific task. Two particularly significant results are: (i) the uniform treatment of closed as well as open kinematic chains, and (ii) the development of a new method for the automatic generation of kinematic equations with arbitrary topology. Moreover, the model is akin to the concept of \"motor equivalence\" in the sense that it provides families of motor equivalent trajectories parametrized by tunable motor impedances." }, { "pmid": "23346065", "title": "Computational Grounded Cognition: a new alliance between grounded cognition and computational modeling.", "abstract": "Grounded theories assume that there is no central module for cognition. According to this view, all cognitive phenomena, including those considered the province of amodal cognition such as reasoning, numeric, and language processing, are ultimately grounded in (and emerge from) a variety of bodily, affective, perceptual, and motor processes. The development and expression of cognition is constrained by the embodiment of cognitive agents and various contextual factors (physical and social) in which they are immersed. The grounded framework has received numerous empirical confirmations. Still, there are very few explicit computational models that implement grounding in sensory, motor and affective processes as intrinsic to cognition, and demonstrate that grounded theories can mechanistically implement higher cognitive abilities. We propose a new alliance between grounded cognition and computational modeling toward a novel multidisciplinary enterprise: Computational Grounded Cognition. We clarify the defining features of this novel approach and emphasize the importance of using the methodology of Cognitive Robotics, which permits simultaneous consideration of multiple aspects of grounding, embodiment, and situatedness, showing how they constrain the development and expression of cognition." }, { "pmid": "18006736", "title": "Self-organization, embodiment, and biologically inspired robotics.", "abstract": "Robotics researchers increasingly agree that ideas from biology and self-organization can strongly benefit the design of autonomous robots. Biological organisms have evolved to perform and survive in a world characterized by rapid changes, high uncertainty, indefinite richness, and limited availability of information. Industrial robots, in contrast, operate in highly controlled environments with no or very little uncertainty. Although many challenges remain, concepts from biologically inspired (bio-inspired) robotics will eventually enable researchers to engineer machines for the real world that possess at least some of the desirable properties of biological organisms, such as adaptivity, robustness, versatility, and agility." }, { "pmid": "15959465", "title": "Brain mechanisms linking language and action.", "abstract": "For a long time the cortical systems for language and actions were believed to be independent modules. 
However, as these systems are reciprocally connected with each other, information about language and actions might interact in distributed neuronal assemblies. A critical case is that of action words that are semantically related to different parts of the body (for example, 'lick', 'pick' and 'kick'): does the comprehension of these words specifically, rapidly and automatically activate the motor system in a somatotopic manner, and does their comprehension rely on activity in the action system?" }, { "pmid": "24769063", "title": "From sensorimotor learning to memory cells in prefrontal and temporal association cortex: a neurocomputational study of disembodiment.", "abstract": "Memory cells, the ultimate neurobiological substrates of working memory, remain active for several seconds and are most commonly found in prefrontal cortex and higher multisensory areas. However, if correlated activity in \"embodied\" sensorimotor systems underlies the formation of memory traces, why should memory cells emerge in areas distant from their antecedent activations in sensorimotor areas, thus leading to \"disembodiment\" (movement away from sensorimotor systems) of memory mechanisms? We modelled the formation of memory circuits in six-area neurocomputational architectures, implementing motor and sensory primary, secondary and higher association areas in frontotemporal cortices along with known between-area neuroanatomical connections. Sensorimotor learning driven by Hebbian neuroplasticity led to formation of cell assemblies distributed across the different areas of the network. These action-perception circuits (APCs) ignited fully when stimulated, thus providing a neural basis for long-term memory (LTM) of sensorimotor information linked by learning. Subsequent to ignition, activity vanished rapidly from APC neurons in sensorimotor areas but persisted in those in multimodal prefrontal and temporal areas. Such persistent activity provides a mechanism for working memory for actions, perceptions and symbols, including short-term phonological and semantic storage. Cell assembly ignition and \"disembodied\" working memory retreat of activity to multimodal areas are documented in the neurocomputational models' activity dynamics, at the level of single cells, circuits, and cortical areas. Memory disembodiment is explained neuromechanistically by APC formation and structural neuroanatomical features of the model networks, especially the central role of multimodal prefrontal and temporal cortices in bridging between sensory and motor areas. These simulations answer the \"where\" question of cortical working memory in terms of distributed APCs and their inner structure, which is, in part, determined by neuroanatomical structure. As the neurocomputational model provides a mechanistic explanation of how memory-related \"disembodied\" neuronal activity emerges in \"embodied\" APCs, it may be key to solving aspects of the embodiment debate and eventually to a better understanding of cognitive brain functions." }, { "pmid": "26891625", "title": "Vicarious trial and error.", "abstract": "When rats come to a decision point, they sometimes pause and look back and forth as if deliberating over the choice; at other times, they proceed as if they have already made their decision. In the 1930s, this pause-and-look behaviour was termed 'vicarious trial and error' (VTE), with the implication that the rat was 'thinking about the future'. 
The discovery in 2007 that the firing of hippocampal place cells gives rise to alternating representations of each of the potential path options in a serial manner during VTE suggested a possible neural mechanism that could underlie the representations of future outcomes. More-recent experiments examining VTE in rats suggest that there are direct parallels to human processes of deliberative decision making, working memory and mental time travel." }, { "pmid": "8713554", "title": "Premotor cortex and the recognition of motor actions.", "abstract": "In area F5 of the monkey premotor cortex there are neurons that discharge both when the monkey performs an action and when he observes a similar action made by another monkey or by the experimenter. We report here some of the properties of these 'mirror' neurons and we propose that their activity 'represents' the observed action. We posit, then, that this motor representation is at the basis of the understanding of motor events. Finally, on the basis of some recent data showing that, in man, the observation of motor actions activate the posterior part of inferior frontal gyrus, we suggest that the development of the lateral verbal communication system in man derives from a more ancient communication system based on recognition of hand and face gestures." }, { "pmid": "23824506", "title": "Walknet, a bio-inspired controller for hexapod walking.", "abstract": "Walknet comprises an artificial neural network that allows for the simulation of a considerable amount of behavioral data obtained from walking and standing stick insects. It has been tested by kinematic and dynamic simulations as well as on a number of six-legged robots. Over the years, various different expansions of this network have been provided leading to different versions of Walknet. This review summarizes the most important biological findings described by Walknet and how they can be simulated. Walknet shows how a number of properties observed in insects may emerge from a decentralized architecture. Examples are the continuum of so-called \"gaits,\" coordination of up to 18 leg joints during stance when walking forward or backward over uneven surfaces and negotiation of curves, dealing with leg loss, as well as being able following motion trajectories without explicit precalculation. The different Walknet versions are compared to other approaches describing insect-inspired hexapod walking. Finally, we briefly address the ability of this decentralized reactive controller to form the basis for the simulation of higher-level cognitive faculties exceeding the capabilities of insects." }, { "pmid": "23060845", "title": "What's Next: Recruitment of a Grounded Predictive Body Model for Planning a Robot's Actions.", "abstract": "Even comparatively simple, reactive systems are able to control complex motor tasks, such as hexapod walking on unpredictable substrate. The capability of such a controller can be improved by introducing internal models of the body and of parts of the environment. Such internal models can be applied as inverse models, as forward models or to solve the problem of sensor fusion. Usually, separate models are used for these functions. Furthermore, separate models are used to solve different tasks. Here we concentrate on internal models of the body as the brain considers its own body the most important part of the world. The model proposed is formed by a recurrent neural network with the property of pattern completion. 
The model shows a hierarchical structure but nonetheless comprises a holistic system. One and the same model can be used as a forward model, as an inverse model, for sensor fusion, and, with a simple expansion, as a model to internally simulate (new) behaviors to be used for prediction. The model embraces the geometrical constraints of a complex body with many redundant degrees of freedom, and allows finding geometrically possible solutions. To control behavior such as walking, climbing, or reaching, this body model is complemented by a number of simple reactive procedures together forming a procedural memory. In this article, we illustrate the functioning of this network. To this end we present examples for solutions of the forward function and the inverse function, and explain how the complete network might be used for predictive purposes. The model is assumed to be \"innate,\" so learning the parameters of the model is not (yet) considered." }, { "pmid": "17106698", "title": "Hexapod Walking: an expansion to Walknet dealing with leg amputations and force oscillations.", "abstract": "The control of the legs of a walking hexapod is a complex problem as the legs have three joints each, resulting in a total of 18 degrees of freedom. We addressed this problem using a decentralized architecture termed Walknet, which consists of peripheral pattern generators being coordinated through influences acting mainly between neighbouring legs. Both, the coordinating influences and the local control modules (each acting only on one leg), are biologically inspired. This investigation shows that it is possible to adapt this approach to account for additional biological data by (1) changing the structure of the selector net in a biological plausible way (including force as an analog variable), (2) introducing a biologically motivated coordination influence for coactivation between legs and (3) adding a hypothetical influence between hind and front legs. This network of controllers has been tested using a dynamic simulation. It is able to describe (a) the behaviour of animals walking with one or two legs being amputated and (b) force oscillations that occur in a specific experimental situation, the standing legs of a walking animal." }, { "pmid": "24062682", "title": "A hexapod walker using a heterarchical architecture for action selection.", "abstract": "Moving in a cluttered environment with a six-legged walking machine that has additional body actuators, therefore controlling 22 DoFs, is not a trivial task. Already simple forward walking on a flat plane requires the system to select between different internal states. The orchestration of these states depends on walking velocity and on external disturbances. Such disturbances occur continuously, for example due to irregular up-and-down movements of the body or slipping of the legs, even on flat surfaces, in particular when negotiating tight curves. The number of possible states is further increased when the system is allowed to walk backward or when front legs are used as grippers and cannot contribute to walking. Further states are necessary for expansion that allow for navigation. Here we demonstrate a solution for the selection and sequencing of different (attractor) states required to control different behaviors as are forward walking at different speeds, backward walking, as well as negotiation of tight curves. 
This selection is made by a recurrent neural network (RNN) of motivation units, controlling a bank of decentralized memory elements in combination with the feedback through the environment. The underlying heterarchical architecture of the network allows to select various combinations of these elements. This modular approach representing an example of neural reuse of a limited number of procedures allows for adaptation to different internal and external conditions. A way is sketched as to how this approach may be expanded to form a cognitive system being able to plan ahead. This architecture is characterized by different types of modules being arranged in layers and columns, but the complete network can also be considered as a holistic system showing emergent properties which cannot be attributed to a specific module." }, { "pmid": "16830135", "title": "Control of swing movement: influences of differently shaped substrate.", "abstract": "Stick insects were studied while walking on different substrates. The trajectories of swing movements are recorded. The starting position of a swing movement is varied in vertical direction and in the direction parallel to body long axis. The trajectories found cannot be predicted by an ANN (Swingnet1) proposed earlier to describe swing movements. However, a modified network (Swingnet2) allows for a satisfying description of the behavioral results. Walking on a narrow treadwheel leads to different swing trajectories compared to walking on a broad treadwheel. These trajectories cannot be described by Swingnet1, too. The form of the swing trajectory may depend on the direction of the force vector by which the leg acts on the ground in the preceding stance. Based on this assumption, an alternative hypothesis (Swingnet3) is proposed that can quantitatively describe all results of our experiment. When stick insects walk from a wide to a narrow substrate, transition between different swing trajectories does not change gradually over time. Rather, the form of the trajectory is determined by the current sensory input of the leg on a step-to-step basis. Finally, four different avoidance reflexes and their implementation into swing movements are investigated and described by a quantitative simulation." }, { "pmid": "21969681", "title": "Active tactile exploration for adaptive locomotion in the stick insect.", "abstract": "Insects carry a pair of actively movable feelers that supply the animal with a range of multimodal information. The antennae of the stick insect Carausius morosus are straight and of nearly the same length as the legs, making them ideal probes for near-range exploration. Indeed, stick insects, like many other insects, use antennal contact information for the adaptive control of locomotion, for example, in climbing. Moreover, the active exploratory movement pattern of the antennae is context-dependent. The first objective of the present study is to reveal the significance of antennal contact information for the efficient initiation of climbing. This is done by means of kinematic analysis of freely walking animals as they undergo a tactually elicited transition from walking to climbing. The main findings are that fast, tactually elicited re-targeting movements may occur during an ongoing swing movement, and that the height of the last antennal contact prior to leg contact largely predicts the height of the first leg contact. 
The second objective is to understand the context-dependent adaptation of the antennal movement pattern in response to tactile contact. We show that the cycle frequency of both antennal joints increases after obstacle contact. Furthermore, inter-joint coupling switches distinctly upon tactile contact, revealing a simple mechanism for context-dependent adaptation." }, { "pmid": "14599324", "title": "Intelligence with representation.", "abstract": "Behaviour-based robotics has always been inspired by earlier cybernetics work such as that of W. Grey Walter. It emphasizes that intelligence can be achieved without the kinds of representations common in symbolic AI systems. The paper argues that such representations might indeed not be needed for many aspects of sensory-motor intelligence but become a crucial issue when bootstrapping to higher levels of cognition. It proposes a scenario in the form of evolutionary language games by which embodied agents develop situated grounded representations adapted to their needs and the conventions emerging in the population." }, { "pmid": "16209771", "title": "Coordinating perceptually grounded categories through language: a case study for colour.", "abstract": "This article proposes a number of models to examine through which mechanisms a population of autonomous agents could arrive at a repertoire of perceptually grounded categories that is sufficiently shared to allow successful communication. The models are inspired by the main approaches to human categorisation being discussed in the literature: nativism, empiricism, and culturalism. Colour is taken as a case study. Although we take no stance on which position is to be accepted as final truth with respect to human categorisation and naming, we do point to theoretical constraints that make each position more or less likely and we make clear suggestions on what the best engineering solution would be. Specifically, we argue that the collective choice of a shared repertoire must integrate multiple constraints, including constraints coming from communication." }, { "pmid": "15631592", "title": "The interaction of the explicit and the implicit in skill learning: a dual-process approach.", "abstract": "This article explicates the interaction between implicit and explicit processes in skill learning, in contrast to the tendency of researchers to study each type in isolation. It highlights various effects of the interaction on learning (including synergy effects). The authors argue for an integrated model of skill learning that takes into account both implicit and explicit processes. Moreover, they argue for a bottom-up approach (first learning implicit knowledge and then explicit knowledge) in the integrated model. A variety of qualitative data can be accounted for by the approach. A computational model, CLARION, is then used to simulate a range of quantitative data. The results demonstrate the plausibility of the model, which provides a new perspective on skill learning." }, { "pmid": "14534588", "title": "Environmentally mediated synergy between perception and behaviour in mobile robots.", "abstract": "The notion that behaviour influences perception seems self-evident, but the mechanism of their interaction is not known. Perception and behaviour are usually considered to be separate processes. In this view, perceptual learning constructs compact representations of sensory events, reflecting their statistical properties, independently of behavioural relevance. 
Behavioural learning, however, forms associations between perception and action, organized by reinforcement, without regard for the construction of perception. It is generally assumed that the interaction between these two processes is internal to the agent, and can be explained solely in terms of the neuronal substrate. Here we show, instead, that perception and behaviour can interact synergistically via the environment. Using simulated and real mobile robots, we demonstrate that perceptual learning directly supports behavioural learning and so promotes a progressive structuring of behaviour. This structuring leads to a systematic bias in input sampling, which directly affects the organization of the perceptual system. This external, environmentally mediated feedback matches the perceptual system to the emerging behavioural structure, so that the behaviour is stabilized." }, { "pmid": "15111010", "title": "Neural mechanisms for prediction: do insects have forward models?", "abstract": "'Forward models' are increasingly recognized as a crucial explanatory concept in vertebrate motor control. The essential idea is that an important function implemented by nervous systems is prediction of the sensory consequences of action. This is often associated with higher cognitive capabilities; however, many of the purposes forward models are thought to have analogues in insect behaviour, and the concept is closely connected to those of 'efference copy' and 'corollary discharge'. This article considers recent evidence from invertebrates that demonstrates the predictive modulation of sensory processes by motor output, and discusses to what extent this supports the conclusion that insect nervous systems also implement forward models. Several promising directions for further research are outlined." }, { "pmid": "27359335", "title": "Outcome Prediction of Consciousness Disorders in the Acute Stage Based on a Complementary Motor Behavioural Tool.", "abstract": "INTRODUCTION\nAttaining an accurate diagnosis in the acute phase for severely brain-damaged patients presenting Disorders of Consciousness (DOC) is crucial for prognostic validity; such a diagnosis determines further medical management, in terms of therapeutic choices and end-of-life decisions. However, DOC evaluation based on validated scales, such as the Revised Coma Recovery Scale (CRS-R), can lead to an underestimation of consciousness and to frequent misdiagnoses particularly in cases of cognitive motor dissociation due to other aetiologies. The purpose of this study is to determine the clinical signs that lead to a more accurate consciousness assessment allowing more reliable outcome prediction.\n\n\nMETHODS\nFrom the Unit of Acute Neurorehabilitation (University Hospital, Lausanne, Switzerland) between 2011 and 2014, we enrolled 33 DOC patients with a DOC diagnosis according to the CRS-R that had been established within 28 days of brain damage. The first CRS-R assessment established the initial diagnosis of Unresponsive Wakefulness Syndrome (UWS) in 20 patients and a Minimally Consciousness State (MCS) in the remaining13 patients. We clinically evaluated the patients over time using the CRS-R scale and concurrently from the beginning with complementary clinical items of a new observational Motor Behaviour Tool (MBT). 
Primary endpoint was outcome at unit discharge distinguishing two main classes of patients (DOC patients having emerged from DOC and those remaining in DOC) and 6 subclasses detailing the outcome of UWS and MCS patients, respectively. Based on CRS-R and MBT scores assessed separately and jointly, statistical testing was performed in the acute phase using a non-parametric Mann-Whitney U test; longitudinal CRS-R data were modelled with a Generalized Linear Model.\n\n\nRESULTS\nFifty-five per cent of the UWS patients and 77% of the MCS patients had emerged from DOC. First, statistical prediction of the first CRS-R scores did not permit outcome differentiation between classes; longitudinal regression modelling of the CRS-R data identified distinct outcome evolution, but not earlier than 19 days. Second, the MBT yielded a significant outcome predictability in the acute phase (p<0.02, sensitivity>0.81). Third, a statistical comparison of the CRS-R subscales weighted by MBT became significantly predictive for DOC outcome (p<0.02).\n\n\nDISCUSSION\nThe association of MBT and CRS-R scoring improves significantly the evaluation of consciousness and the predictability of outcome in the acute phase. Subtle motor behaviour assessment provides accurate insight into the amount and the content of consciousness even in the case of cognitive motor dissociation." }, { "pmid": "12662752", "title": "Multiple paired forward and inverse models for motor control.", "abstract": "Humans demonstrate a remarkable ability to generate accurate and appropriate motor behavior under many different and often uncertain environmental conditions. In this paper, we propose a modular approach to such motor learning and control. We review the behavioral evidence and benefits of modularity, and propose a new architecture based on multiple pairs of inverse (controller) and forward (predictor) models. Within each pair, the inverse and forward models are tightly coupled both during their acquisition, through motor learning, and use, during which the forward models determine the contribution of each inverse model's output to the final motor command. This architecture can simultaneously learn the multiple inverse models necessary for control as well as how to select the inverse models appropriate for a given environment. Finally, we describe specific predictions of the model, which can be tested experimentally." } ]
Scientific Reports
28165495
PMC5292966
10.1038/srep41831
Multi-Instance Metric Transfer Learning for Genome-Wide Protein Function Prediction
Multi-Instance (MI) learning has been proven to be effective for genome-wide protein function prediction problems, where each training example is associated with multiple instances. Many studies in the literature have attempted to find an appropriate Multi-Instance Learning (MIL) method for genome-wide protein function prediction under the usual assumption that the underlying distribution of the testing data (the target domain, TD) is the same as that of the training data (the source domain, SD). However, this assumption may be violated in practice. To tackle this problem, we propose a Multi-Instance Metric Transfer Learning (MIMTL) approach for genome-wide protein function prediction. In MIMTL, we first transfer the source-domain distribution toward the target-domain distribution by reweighting the bags. We then construct a distance metric learning method over the reweighted bags. Finally, we develop an alternating optimization scheme for MIMTL. Comprehensive experimental evidence on seven real-world organisms verifies the effectiveness and efficiency of the proposed MIMTL approach over several state-of-the-art methods.
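The abstract above describes a two-stage pipeline: reweight source-domain bags toward the target distribution, then learn a distance metric from the reweighted bags, alternating between the two steps. The sketch below is only our minimal illustration of that general idea, not the authors' MIMTL implementation: bags are summarised by their instance means, the weights come from a crude kernel-density ratio, and the metric is the inverse of a weighted within-class covariance; all of these simplifications are our own assumptions.

```python
# Minimal sketch (NOT the authors' MIMTL code): (1) reweight source bags toward
# the target distribution, (2) learn a Mahalanobis metric from the reweighted
# bags, then alternate between the two steps.
import numpy as np

def bag_means(bags):
    # bags: list of (n_i, d) arrays of instance features -> (n_bags, d) summary
    return np.vstack([b.mean(axis=0) for b in bags])

def density_ratio_weights(src, tgt, bandwidth=1.0):
    # Crude importance weights: KDE(target) / KDE(source) evaluated at each source bag.
    def kde(points, queries):
        d2 = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1) + 1e-12
    w = kde(tgt, src) / kde(src, src)
    return w / w.mean()

def weighted_mahalanobis(src, labels, w, reg=1e-3):
    # Metric = inverse of the weighted within-class covariance (a simple heuristic).
    d = src.shape[1]
    S = reg * np.eye(d)
    for c in np.unique(labels):
        idx = labels == c
        mu = np.average(src[idx], axis=0, weights=w[idx])
        diff = src[idx] - mu
        S += (w[idx, None] * diff).T @ diff / w[idx].sum()
    return np.linalg.inv(S)

def mimtl_sketch(src_bags, labels, tgt_bags, n_iter=3):
    src, tgt = bag_means(src_bags), bag_means(tgt_bags)
    M = np.eye(src.shape[1])
    for _ in range(n_iter):            # alternate: weights <-> metric
        L = np.linalg.cholesky(M)      # reweight in the current metric space
        w = density_ratio_weights(src @ L, tgt @ L)
        M = weighted_mahalanobis(src, labels, w)
    return M, w

# Toy usage with synthetic bags (5 instances of dimension 3 per bag).
rng = np.random.default_rng(0)
src_bags = [rng.normal(size=(5, 3)) + c for c in rng.integers(0, 2, 40)]
labels = np.array([int(b.mean() > 0.5) for b in src_bags])
tgt_bags = [rng.normal(loc=0.5, size=(5, 3)) for _ in range(20)]
M, w = mimtl_sketch(src_bags, labels, tgt_bags)
print(M.shape, w[:3])
```

The point of the sketch is only to show how bag reweighting and metric learning can be alternated; the paper's own objective function and optimization scheme differ from these heuristics.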
Related Works

Previous studies related to our work can be classified into three categories: traditional MIL, metric-based MIL, and transfer learning based MIL.

Traditional MIL

Multi-Instance Multi-Label k-Nearest Neighbor (MIMLkNN)25 adapts the popular k-nearest-neighbor technique to MIL. Motivated by the citer mechanism of the Citation-kNN approach26, MIMLkNN considers not only the test instance's neighboring examples in the training set but also the training examples that regard the test instance as one of their own neighbors (i.e., the citers). In contrast, the Multi-Instance Multi-Label Support Vector Machine (MIMLSVM)6 first degenerates the MIL task into a simplified single-instance learning (SIL) task through a clustering-based representation transformation6,27. After this transformation, each training bag is represented by a single instance, so MIMLSVM maps the MIL problem to an SIL problem. As another traditional MIL approach, the Multi-Instance Multi-Label Neural Network (MIMLNN)28 is obtained by replacing the Multi-Label Support Vector Machine (MLSVM)6 used in MIMLSVM with a two-layer neural network structure28.

Metric-based MIL

To encode more of the geometric information in the bag data than MIMLNN does, the metric-based Ensemble Multi-Instance Multi-Label Neural Network (EnMIMLNN)4 combines three different Hausdorff distances (average, maximal, and minimal) to define the distance between two bags, and proposes two voting-based models (EnMIMLNNvoting1 and EnMIMLNNvoting2). Xu et al. propose a metric-based multi-instance learning method (MIMEL)29 that minimizes the KL divergence between two multivariate Gaussians under constraints that maximize the distance between bags from different classes and minimize the distance between bags of the same class. Jin et al. propose a different metric-based learning method30 for the multi-instance multi-label problem. More recently, MIML-DML5 attempts to find a distance metric under which bag pairs from the same category have smaller distances than bag pairs from different categories. All of these metric-based MIL approaches are designed for the traditional MIL setting, in which the bags in the SD and the TD are drawn from the same distribution.

Transfer learning based MIL

Recently, MICS31 proposed tackling the MI covariate shift problem by modeling the distribution change at both the bag level and the instance level. MICS uses weights on bags and instances to correct for the covariate shift; with these learned weights, the problem can then be handled by traditional MIL methods. However, MICS does not show how to incorporate the learned weights into multi-instance metric learning.
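For concreteness, the three bag-level Hausdorff distances mentioned above (maximal, minimal and average) can be written down in a few lines. The sketch below is our own simplified rendering with toy data, not code from EnMIMLNN or any of the cited methods.

```python
# Bag-level Hausdorff distances used by metric-based MIL methods (simplified sketch).
import numpy as np

def _pairwise(A, B):
    # Euclidean distances between every instance in bag A and every instance in bag B.
    return np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def max_hausdorff(A, B):
    D = _pairwise(A, B)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def min_hausdorff(A, B):
    return _pairwise(A, B).min()

def avg_hausdorff(A, B):
    D = _pairwise(A, B)
    return (D.min(axis=1).sum() + D.min(axis=0).sum()) / (len(A) + len(B))

rng = np.random.default_rng(1)
bag_a = rng.normal(size=(4, 8))              # bag with 4 instances of dimension 8
bag_b = rng.normal(loc=1.0, size=(6, 8))     # bag with 6 instances of dimension 8
print(max_hausdorff(bag_a, bag_b), min_hausdorff(bag_a, bag_b), avg_hausdorff(bag_a, bag_b))
```

Metric-based MIL methods typically plug such bag-level distances into nearest-neighbour or kernel-style classifiers, or learn a parametric (e.g., Mahalanobis) distance instead of fixing it in advance.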
[ "10573421", "23353650", "25708164", "26923212", "16526484", "1608464", "691075", "21920789" ]
[ { "pmid": "10573421", "title": "A combined algorithm for genome-wide prediction of protein function.", "abstract": "The availability of over 20 fully sequenced genomes has driven the development of new methods to find protein function and interactions. Here we group proteins by correlated evolution, correlated messenger RNA expression patterns and patterns of domain fusion to determine functional relationships among the 6,217 proteins of the yeast Saccharomyces cerevisiae. Using these methods, we discover over 93,000 pairwise links between functionally related yeast proteins. Links between characterized and uncharacterized proteins allow a general function to be assigned to more than half of the 2,557 previously uncharacterized yeast proteins. Examples of functional links are given for a protein family of previously unknown function, a protein whose human homologues are implicated in colon cancer and the yeast prion Sup35." }, { "pmid": "23353650", "title": "A large-scale evaluation of computational protein function prediction.", "abstract": "Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment. Fifty-four methods representing the state of the art for protein function prediction were evaluated on a target set of 866 proteins from 11 organisms. Two findings stand out: (i) today's best protein function prediction algorithms substantially outperform widely used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is considerable need for improvement of currently available tools." }, { "pmid": "25708164", "title": "Protein functional properties prediction in sparsely-label PPI networks through regularized non-negative matrix factorization.", "abstract": "BACKGROUND\nPredicting functional properties of proteins in protein-protein interaction (PPI) networks presents a challenging problem and has important implication in computational biology. Collective classification (CC) that utilizes both attribute features and relational information to jointly classify related proteins in PPI networks has been shown to be a powerful computational method for this problem setting. Enabling CC usually increases accuracy when given a fully-labeled PPI network with a large amount of labeled data. However, such labels can be difficult to obtain in many real-world PPI networks in which there are usually only a limited number of labeled proteins and there are a large amount of unlabeled proteins. In this case, most of the unlabeled proteins may not connected to the labeled ones, the supervision knowledge cannot be obtained effectively from local network connections. As a consequence, learning a CC model in sparsely-labeled PPI networks can lead to poor performance.\n\n\nRESULTS\nWe investigate a latent graph approach for finding an integration latent graph by exploiting various latent linkages and judiciously integrate the investigated linkages to link (separate) the proteins with similar (different) functions. 
We develop a regularized non-negative matrix factorization (RNMF) algorithm for CC to make protein functional properties prediction by utilizing various data sources that are available in this problem setting, including attribute features, latent graph, and unlabeled data information. In RNMF, a label matrix factorization term and a network regularization term are incorporated into the non-negative matrix factorization (NMF) objective function to seek a matrix factorization that respects the network structure and label information for classification prediction.\n\n\nCONCLUSION\nExperimental results on KDD Cup tasks predicting the localization and functions of proteins to yeast genes demonstrate the effectiveness of the proposed RNMF method for predicting the protein properties. In the comparison, we find that the performance of the new method is better than those of the other compared CC algorithms especially in paucity of labeled proteins." }, { "pmid": "26923212", "title": "Multi-instance multi-label distance metric learning for genome-wide protein function prediction.", "abstract": "Multi-instance multi-label (MIML) learning has been proven to be effective for the genome-wide protein function prediction problems where each training example is associated with not only multiple instances but also multiple class labels. To find an appropriate MIML learning method for genome-wide protein function prediction, many studies in the literature attempted to optimize objective functions in which dissimilarity between instances is measured using the Euclidean distance. But in many real applications, Euclidean distance may be unable to capture the intrinsic similarity/dissimilarity in feature space and label space. Unlike other previous approaches, in this paper, we propose to learn a multi-instance multi-label distance metric learning framework (MIMLDML) for genome-wide protein function prediction. Specifically, we learn a Mahalanobis distance to preserve and utilize the intrinsic geometric information of both feature space and label space for MIML learning. In addition, we try to deal with the sparsely labeled data by giving weight to the labeled data. Extensive experiments on seven real-world organisms covering the biological three-domain system (i.e., archaea, bacteria, and eukaryote; Woese et al., 1990) show that the MIMLDML algorithm is superior to most state-of-the-art MIML learning algorithms." }, { "pmid": "16526484", "title": "Efficient and robust feature extraction by maximum margin criterion.", "abstract": "In pattern recognition, feature extraction techniques are widely employed to reduce the dimensionality of data and to enhance the discriminatory information. Principal component analysis (PCA) and linear discriminant analysis (LDA) are the two most popular linear dimensionality reduction methods. However, PCA is not very effective for the extraction of the most discriminant features, and LDA is not stable due to the small sample size problem. In this paper, we propose some new (linear and nonlinear) feature extractors based on maximum margin criterion (MMC). Geometrically, feature extractors based on MMC maximize the (average) margin between classes after dimensionality reduction. It is shown that MMC can represent class separability better than PCA. As a connection to LDA, we may also derive LDA from MMC by incorporating some constraints. 
By using some other constraints, we establish a new linear feature extractor that does not suffer from the small sample size problem, which is known to cause serious stability problems for LDA. The kernelized (nonlinear) counterpart of this linear feature extractor is also established in the paper. Our extensive experiments demonstrate that the new feature extractors are effective, stable, and efficient." }, { "pmid": "691075", "title": "Archaebacteria.", "abstract": "Experimental work published elsewhere has shown that the Archaebacteria encompass several distinct subgroups including methanogens, extreme halophiles, and various thermoacidophiles. The common characteristics of Archaebacteria known to date are these: (1) the presence of characteristic tRNAs and ribosomal RNAs; (2) the absence of peptidoglycan cell walls, with in many cases, replacement by a largely proteinaceous coat; (3) the occurrence of ether linked lipids built from phytanyl chains and (4) in all cases known so far, their occurrence only in unusual habitats. These organisms contain a number of 'eucaryotic features' in addition to their many bacterial attributes. This is interpreted as a strong indication that the Archaebacteria, while not actually eucaryotic, do indeed represents a third separate, line of descent as originally proposed." }, { "pmid": "21920789", "title": "A novel method for quantitatively predicting non-covalent interactions from protein and nucleic acid sequence.", "abstract": "Biochemical interactions between proteins and biological macromolecules are dominated by non-covalent interactions. A novel method is presented for quantitatively predicting the number of two most dominant non-covalent interactions, i.e., hydrogen bonds and van der Waals contacts, potentially forming in a hypothetical protein-nucleic acid complex from sequences using support vector machine regression models in conjunction with a hybrid feature. The hybrid feature consists of the sequence-length fraction information, conjoint triad for protein sequences and the gapped dinucleotide composition. The SVR-based models achieved excellent performance. The polarity of amino acids was also found to play a vital role in the formation of hydrogen bonds and van der Waals contacts. We have constructed a web server H-VDW (http://www.cbi.seu.edu.cn/H-VDW/H-VDW.htm) for public access to the SVR models." } ]
BMC Psychology
28196507
PMC5307765
10.1186/s40359-017-0173-4
VREX: an open-source toolbox for creating 3D virtual reality experiments
Background: We present VREX, a free open-source Unity toolbox for virtual reality research in the fields of experimental psychology and neuroscience.

Results: Different study protocols about perception, attention, cognition and memory can be constructed using the toolbox. VREX provides procedural generation of (interconnected) rooms that can be automatically furnished with the click of a button. VREX includes a menu system for creating and storing experiments with different stages. Researchers can combine different rooms and environments to perform end-to-end experiments, including different testing situations and data collection. For fine-tuned control, VREX also comes with an editor where all the objects in a virtual room can be manually placed and adjusted in the 3D world.

Conclusions: VREX simplifies the generation and setup of complicated VR scenes and experiments for researchers. VREX can be downloaded and easily installed from vrex.mozello.com.

Electronic supplementary material: The online version of this article (doi:10.1186/s40359-017-0173-4) contains supplementary material, which is available to authorized users.
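To make the "procedural generation of interconnected, automatically furnished rooms" mentioned in the Results concrete, the toy Python snippet below generates a chain of rooms and assigns random furniture to each. This is a purely conceptual sketch; VREX itself is a Unity (C#) toolbox, so none of the names or structures below come from its actual API.

```python
# Conceptual sketch of procedural room generation with random furnishing.
import random

FURNITURE = ["table", "chair", "lamp", "sofa", "plant", "bookshelf"]

def generate_rooms(n_rooms, seed=0, min_items=2, max_items=5):
    rng = random.Random(seed)
    rooms = []
    for i in range(n_rooms):
        rooms.append({
            "name": f"room_{i}",
            # each room is connected to the previous one by a doorway
            "connects_to": f"room_{i - 1}" if i > 0 else None,
            "furniture": rng.sample(FURNITURE, rng.randint(min_items, max_items)),
        })
    return rooms

for room in generate_rooms(3):
    print(room)
```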
Why VREX: Related work

There is a wide variety of Unity add-ons that assist the generation of interactive virtual worlds, such as Playmaker [10], Adventure Creator [11] and ProBuilder [12], to name a few. Yet these toolboxes are very general-purpose. There are also software applications similar to VREX in that they simplify the creation of VR experiments for psychological research, e.g., MazeSuite [13] and WorldViz Vizard [14]. The list of compared software is not comprehensive; here we briefly describe only two of them and their key differences from VREX.

MazeSuite is a free toolbox that allows easy creation of connected 3D corridors. It enables researchers to perform spatial and navigational behavior experiments within interactive and extendable 3D virtual environments [13]. Although the user can design mazes by hand and fill them with objects, it is difficult to achieve the look and feel of a regular apartment. This is where VREX differs, having been designed with indoor experiments in mind from the beginning. Another noticeable difference is that MazeSuite runs as a standalone program, while VREX is embedded inside the Unity game engine, which in our experience allows for more powerful features, higher visual quality and faster code iterations.

WorldViz Vizard gives researchers the tools to create and conduct complex VR-based experiments. Researchers of any background can rapidly develop their own virtual environments and author complex interactions between environment, devices, and participants [14]. Although Vizard is visually advanced, this comes at the price of a licence fee to remove time restrictions and prominent watermarks. VREX matches the graphical quality of Vizard with the power of the Unity 5 game engine, while staying open source and free of charge (Unity license fees may apply for publishing).

As any software matures, more features tend to be added by the developers. This in turn means more complex interfaces that might confuse the novice user. The advantage of VREX is its narrow focus on specific types of experiments, allowing for a clear design and simple workflow.
[ "16480691", "11718793", "18411560", "15639436" ]
[ { "pmid": "16480691", "title": "Cognitive Ethology and exploring attention in real-world scenes.", "abstract": "We sought to understand what types of information people use when they infer the attentional states of others. In our study, two groups of participants viewed pictures of social interactions. One group was asked to report where the people in the pictures were directing their attention and how they (the group) knew it. The other group was simply asked to describe the pictures. We recorded participants' eye movements as they completed the different tasks and documented their subjective inferences and descriptions. The findings suggest that important cues for inferring attention of others include direction of eye gaze, head position, body orientation, and situational context. The study illustrates how attention research can benefit from (a) using more complex real-world tasks and stimuli, (b) measuring participants' subjective reports about their experiences and beliefs, and (c) observing and describing situational behavior rather than seeking to uncover some putative basic mechanism(s) of attention. Finally, we discuss how our research points to a new approach for studying human attention. This new approach, which we call Cognitive Ethology, focuses on understanding how attention operates in everyday situations and what people know and believe about attention." }, { "pmid": "11718793", "title": "What controls attention in natural environments?", "abstract": "The highly task-specific fixation patterns revealed in performance of natural tasks demonstrate the fundamentally active nature of vision, and suggest that in many situations, top-down processes may be a major factor in the acquisition of visual information. Understanding how a top-down visual system could function requires understanding the mechanisms that control the initiation of the different task-specific computations at the appropriate time. This is particularly difficult in dynamic environments, like driving, where many aspects of the visual input may be unpredictable. We therefore examined drivers' abilities to detect Stop signs in a virtual environment when the signs were visible for restricted periods of time. Detection performance is heavily modulated both by the instructions and the local visual context. This suggests that visibility of the signs requires active search, and that the frequency of this search is influenced by learnt knowledge of the probabilistic structure of the environment." }, { "pmid": "18411560", "title": "Maze Suite 1.0: a complete set of tools to prepare, present, and analyze navigational and spatial cognitive neuroscience experiments.", "abstract": "Maze Suite is a complete set of tools that enables researchers to perform spatial and navigational behavioral experiments within interactive, easy-to-create, and extendable (e.g., multiple rooms) 3-D virtual environments. Maze Suite can be used to design and edit adapted 3-D environments, as well as to track subjects' behavioral performance. Maze Suite consists of three main applications: an editing program for constructing maze environments (MazeMaker), a visualization/rendering module (MazeWalker), and an analysis and mapping tool (MazeViewer). Each of these tools is run and used from a graphical user interface, thus making editing, execution, and analysis user friendly. MazeMaker is a .NET architecture application that can easily be used to create new 3-D environments and to edit objects (e.g., geometric shapes, pictures, landscapes, etc.) 
or add them to the environment effortlessly. In addition, Maze Suite has the capability of sending signal-out pulses to physiological recording devices, using standard computer ports. Maze Suite, with all three applications, is a unique and complete toolset for researchers who want to easily and rapidly deploy interactive 3-D environments." }, { "pmid": "15639436", "title": "Change blindness: past, present, and future.", "abstract": "Change blindness is the striking failure to see large changes that normally would be noticed easily. Over the past decade this phenomenon has greatly contributed to our understanding of attention, perception, and even consciousness. The surprising extent of change blindness explains its broad appeal, but its counterintuitive nature has also engendered confusions about the kinds of inferences that legitimately follow from it. Here we discuss the legitimate and the erroneous inferences that have been drawn, and offer a set of requirements to help separate them. In doing so, we clarify the genuine contributions of change blindness research to our understanding of visual perception and awareness, and provide a glimpse of some ways in which change blindness might shape future research." } ]
Frontiers in Neuroinformatics
28261082
PMC5311048
10.3389/fninf.2017.00009
Automated Detection of Stereotypical Motor Movements in Autism Spectrum Disorder Using Recurrence Quantification Analysis
A number of recent studies using accelerometer features as input to machine learning classifiers show promising results for automatically detecting stereotypical motor movements (SMM) in individuals with Autism Spectrum Disorder (ASD). However, replicating these results across different types of accelerometers and their placements on the body remains a challenge. We introduce a new set of features in this domain, based on recurrence plots and recurrence quantification analysis, that are orientation invariant and able to capture the non-linear dynamics of SMM. Applying these features to an existing published data set containing acceleration data, we achieve up to a 9% average increase in accuracy compared with current state-of-the-art published results. Furthermore, we provide evidence that a single torso sensor can automatically detect multiple types of SMM in ASD, and that our approach allows recognition of SMM with high accuracy in individuals when using a person-independent classifier.
2. Related work

Existing approaches to automated monitoring of SMM are based either on webcams or on accelerometers. In a series of publications (Gonçalves et al., 2012a,b,c), a group from the University of Minho created methods based on Microsoft's Kinect webcam sensor. Although their approach shows promising results, the authors restricted themselves to detecting only one type of SMM, namely hand flapping. In addition, the Kinect sensor is limited to monitoring within a confined space and requires users to be in close proximity to the sensor. This limits the applicability of the approach, as it does not allow continuous recording across a range of contexts and activities.

Alternative approaches to the Kinect are based on the use of wearable 3-axis accelerometers (see Figure 1). Although the primary aim of previously published accelerometer-based studies is to detect SMM in individuals with ASD, some studies have been carried out with healthy volunteers mimicking SMM (Westeyn et al., 2005; Plötz et al., 2012) and therefore do not necessarily generalize to the ASD population.

Figure 1. Accelerometer readings of one second in length from the class "flapping". The accelerometer was mounted on the right wrist; each line corresponds to one of the three acceleration axes.

To date, there have been two different approaches to automatically detecting SMM in ASD using accelerometer data. One approach is to use a single accelerometer to detect one type of SMM, such as hand flapping with a sensor worn on the wrist (Gonçalves et al., 2012a; Rodrigues et al., 2013). The second approach is to use multiple accelerometers to detect multiple SMM, such as hand flapping from sensors worn on the wrists and body rocking from a sensor worn on the torso (Min et al., 2009; Min and Tewfik, 2010a,b, 2011; Min, 2014). Other studies have done the same but included a detection class in which hand flapping and body rocking occur simultaneously (i.e., "flap-rock"; see Albinali et al., 2009, 2012; Goodwin et al., 2011, 2014).

While more sensors appear to improve recognition accuracy in these studies, one practical drawback is that many individuals with ASD have sensory sensitivities that might make them less able or willing to tolerate wearing multiple devices. To accommodate different sensory profiles in the ASD population, it would be ideal to limit the number of sensors to a minimum while still achieving accurate multi-class SMM detection.

Typical features used for acceleration analyses of SMM in prior studies include distances between mean values along accelerometer axes, variance along axis directions, correlation coefficients, entropy, Fast Fourier Transform (FFT) peaks and frequencies (Albinali et al., 2009, 2012; Goodwin et al., 2011, 2014), the Stockwell transform (Goodwin et al., 2014), mean standard deviation, root mean square, number of peaks and zero-crossing values (Gonçalves et al., 2012a; Rodrigues et al., 2013), and skewness and kurtosis (Min, 2014; Min and Tewfik, 2011). These features mainly characterize the oscillatory nature of SMM through statistics of the values distributed around the mean in each accelerometer axis, the joint relation of changes across axes, or the frequency components of the oscillatory movements.
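As a concrete illustration of the kind of window-level features just listed, the sketch below computes per-axis statistics, zero crossings, FFT peak and frequency, spectral entropy and inter-axis correlations for one window of 3-axis acceleration. The window length, sampling rate and exact feature names are our own choices, not those of any cited study.

```python
# Sketch of conventional window-level accelerometer features.
import numpy as np

def window_features(acc, fs=90.0):
    # acc: (n_samples, 3) array holding x/y/z acceleration for one window.
    feats = {}
    for i, axis in enumerate("xyz"):
        a = acc[:, i]
        centred = a - a.mean()
        spec = np.abs(np.fft.rfft(centred)) ** 2
        freqs = np.fft.rfftfreq(len(a), d=1.0 / fs)
        p = spec / spec.sum()
        feats[f"mean_{axis}"] = float(a.mean())
        feats[f"var_{axis}"] = float(a.var())
        feats[f"rms_{axis}"] = float(np.sqrt((a ** 2).mean()))
        feats[f"zero_cross_{axis}"] = int((np.diff(np.sign(centred)) != 0).sum())
        feats[f"fft_peak_{axis}"] = float(spec[1:].max())
        feats[f"fft_freq_{axis}"] = float(freqs[1:][spec[1:].argmax()])
        feats[f"spec_entropy_{axis}"] = float(-(p * np.log2(p + 1e-12)).sum())
    for i, j in [(0, 1), (0, 2), (1, 2)]:   # pairwise correlation between axes
        feats[f"corr_{'xyz'[i]}{'xyz'[j]}"] = float(np.corrcoef(acc[:, i], acc[:, j])[0, 1])
    return feats

# Toy one-second window: a noisy oscillation of roughly 2.5 Hz on all three axes.
rng = np.random.default_rng(2)
t = np.arange(0, 1, 1 / 90.0)
window = np.stack([np.sin(2 * np.pi * 2.5 * t) + 0.1 * rng.normal(size=t.size)
                   for _ in range(3)], axis=1)
feats = window_features(window)
print(len(feats), round(feats["fft_freq_x"], 1))
```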
While useful in many regards, these features fail to capture potentially important dynamics of SMM that can change over time, namely when the movements do not follow a consistent oscillatory pattern or when patterns differ in frequency, duration, speed, and amplitude (Goodwin et al., 2014). A final limitation of previous publications in this domain is that different sensor types have been used across studies. These sensors may have different orientations, resulting in features with different values despite representing the same SMM. To overcome this limitation, feature sets are required whose characteristics do not vary across different types of SMM and sensor orientations.
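One orientation-invariant alternative, in the spirit of the recurrence-based features introduced in this paper, is to build a recurrence plot from the acceleration magnitude (which does not depend on sensor orientation) and read off simple recurrence quantification measures. The sketch below is only illustrative: the embedding dimension, delay, threshold and the simplified determinism computation are our assumptions, not the parameters or implementation used by the authors.

```python
# Orientation-invariant recurrence-based features (illustrative sketch).
import numpy as np

def embed(x, dim=3, tau=2):
    # time-delay embedding of a 1-D signal into dim-dimensional vectors
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def recurrence_plot(x, dim=3, tau=2, eps=0.2):
    X = embed(x, dim, tau)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return (D <= eps).astype(int)

def rqa_measures(R, lmin=2):
    n = R.shape[0]
    rr = R.sum() / (n * n)            # recurrence rate (main diagonal included for simplicity)
    in_lines = 0
    for k in range(-(n - 1), n):      # count recurrence points on diagonal lines >= lmin
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:
            if v:
                run += 1
            else:
                if run >= lmin:
                    in_lines += run
                run = 0
    det = in_lines / max(R.sum(), 1)  # determinism
    return rr, det

# Toy 4-second recording at 90 Hz: a rhythmic movement plus noise.
rng = np.random.default_rng(3)
t = np.arange(0, 4, 1 / 90.0)
acc = np.stack([np.sin(2 * np.pi * 1.5 * t + p) for p in (0.0, 1.0, 2.0)], axis=1)
acc += 0.05 * rng.normal(size=acc.shape)
magnitude = np.sqrt((acc ** 2).sum(axis=1))          # orientation-invariant signal
R = recurrence_plot((magnitude - magnitude.mean()) / magnitude.std())
print(rqa_measures(R))
```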
[ "15149482", "25153585", "17404130", "15026089", "20839042", "17048092", "12241313", "14976271", "17550872", "8175612" ]
[ { "pmid": "15149482", "title": "The lifetime distribution of health care costs.", "abstract": "OBJECTIVE\nTo estimate the magnitude and age distribution of lifetime health care expenditures.\n\n\nDATA SOURCES\nClaims data on 3.75 million Blue Cross Blue Shield of Michigan members, and data from the Medicare Current Beneficiary Survey, the Medical Expenditure Panel Survey, the Michigan Mortality Database, and Michigan nursing home patient counts.\n\n\nDATA COLLECTION\nData were aggregated and summarized in year 2000 dollars by service, age, and gender.\n\n\nSTUDY DESIGN\nWe use life table models to simulate a typical lifetime's distribution of expenditures, employing cross-sectional data on age- and sex-specific health care costs and the mortality experience of the population. We determine remaining lifetime expenditures at each age for all initial members of a birth cohort. Separately, we calculate remaining expenditures for survivors at all ages. Using cross-sectional data, the analysis holds disease incidence, medical technology, and health care prices constant, thus permitting an exclusive focus on the role of age in health care costs.\n\n\nPRINCIPAL FINDINGS\nPer capita lifetime expenditure is USD $316,600, a third higher for females (USD $361,200) than males (USD $268,700). Two-fifths of this difference owes to women's longer life expectancy. Nearly one-third of lifetime expenditures is incurred during middle age, and nearly half during the senior years. For survivors to age 85, more than one-third of their lifetime expenditures will accrue in their remaining years.\n\n\nCONCLUSIONS\nGiven the essential demographic phenomenon of our time, the rapid aging of the population, our findings lend increased urgency to understanding and addressing the interaction between aging and health care spending." }, { "pmid": "25153585", "title": "Automated diagnosis of autism: in search of a mathematical marker.", "abstract": "Autism is a type of neurodevelopmental disorder affecting the memory, behavior, emotion, learning ability, and communication of an individual. An early detection of the abnormality, due to irregular processing in the brain, can be achieved using electroencephalograms (EEG). The variations in the EEG signals cannot be deciphered by mere visual inspection. Computer-aided diagnostic tools can be used to recognize the subtle and invisible information present in the irregular EEG pattern and diagnose autism. This paper presents a state-of-the-art review of automated EEG-based diagnosis of autism. Various time domain, frequency domain, time-frequency domain, and nonlinear dynamics for the analysis of autistic EEG signals are described briefly. A focus of the review is the use of nonlinear dynamics and chaos theory to discover the mathematical biomarkers for the diagnosis of the autism analogous to biological markers. A combination of the time-frequency and nonlinear dynamic analysis is the most effective approach to characterize the nonstationary and chaotic physiological signals for the automated EEG-based diagnosis of autism spectrum disorder (ASD). The features extracted using these nonlinear methods can be used as mathematical markers to detect the early stage of autism and aid the clinicians in their diagnosis. This will expedite the administration of appropriate therapies to treat the disorder." 
}, { "pmid": "17404130", "title": "The lifetime distribution of the incremental societal costs of autism.", "abstract": "OBJECTIVE\nTo describe the age-specific and lifetime incremental societal costs of autism in the United States.\n\n\nDESIGN\nEstimates of use and costs of direct medical and nonmedical care were obtained from a literature review and database analysis. A human capital approach was used to estimate lost productivity. These costs were projected across the life span, and discounted incremental age-specific costs were computed.\n\n\nSETTING\nUnited States.\n\n\nPARTICIPANTS\nHypothetical incident autism cohort born in 2000 and diagnosed in 2003.\n\n\nMAIN OUTCOME MEASURES\nDiscounted per capita incremental societal costs.\n\n\nRESULTS\nThe lifetime per capita incremental societal cost of autism is $3.2 million. Lost productivity and adult care are the largest components of costs. The distribution of costs over the life span varies by cost category.\n\n\nCONCLUSIONS\nAlthough autism is typically thought of as a disorder of childhood, its costs can be felt well into adulthood. The substantial costs resulting from adult care and lost productivity of both individuals with autism and their parents have important implications for those aging members of the baby boom generation approaching retirement, including large financial burdens affecting not only those families but also potentially society in general. These results may imply that physicians and other care professionals should consider recommending that parents of children with autism seek financial counseling to help plan for the transition into adulthood." }, { "pmid": "15026089", "title": "Comparison of direct observational methods for measuring stereotypic behavior in children with autism spectrum disorders.", "abstract": "We compared partial-interval recording (PIR) and momentary time sampling (MTS) estimates against continuous measures of the actual durations of stereotypic behavior in young children with autism or pervasive developmental disorder-not otherwise specified. Twenty-two videotaped samples of stereotypy were scored using a low-tech duration recording method, and relative durations (i.e., proportions of observation periods consumed by stereotypy) were calculated. Then 10, 20, and 30s MTS and 10s PIR estimates of relative durations were derived from the raw duration data. Across all samples, PIR was found to grossly overestimate the relative duration of stereotypy. Momentary time sampling both over- and under-estimated the relative duration of stereotypy, but with much smaller errors than PIR (Experiment 1). These results were replicated across 27 samples of low, moderate and high levels of stereotypy (Experiment 2)." }, { "pmid": "20839042", "title": "Automated detection of stereotypical motor movements.", "abstract": "To overcome problems with traditional methods for measuring stereotypical motor movements in persons with Autism Spectrum Disorders (ASD), we evaluated the use of wireless three-axis accelerometers and pattern recognition algorithms to automatically detect body rocking and hand flapping in children with ASD. Findings revealed that, on average, pattern recognition algorithms correctly identified approximately 90% of stereotypical motor movements repeatedly observed in both laboratory and classroom settings. 
Precise and efficient recording of stereotypical motor movements could enable researchers and clinicians to systematically study what functional relations exist between these behaviors and specific antecedents and consequences. These measures could also facilitate efficacy studies of behavioral and pharmacologic interventions intended to replace or decrease the incidence or severity of stereotypical motor movements." }, { "pmid": "17048092", "title": "The Repetitive Behavior Scale-Revised: independent validation in individuals with autism spectrum disorders.", "abstract": "A key feature of autism is restricted repetitive behavior (RRB). Despite the significance of RRBs, little is known about their phenomenology, assessment, and treatment. The Repetitive Behavior Scale-Revised (RBS-R) is a recently-developed questionnaire that captures the breadth of RRB in autism. To validate the RBS-R in an independent sample, we conducted a survey within the South Carolina Autism Society. A total of 320 caregivers (32%) responded. Factor analysis produced a five-factor solution that was clinically meaningful and statistically sound. The factors were labeled \"Ritualistic/Sameness Behavior,\" \"Stereotypic Behavior,\" \"Self-injurious Behavior,\" \"Compulsive Behavior,\" and \"Restricted Interests.\" Measures of internal consistency were high for this solution, and interrater reliability data suggested that the RBS-R performs well in outpatient settings." }, { "pmid": "12241313", "title": "Recurrence-plot-based measures of complexity and their application to heart-rate-variability data.", "abstract": "The knowledge of transitions between regular, laminar or chaotic behaviors is essential to understand the underlying mechanisms behind complex systems. While several linear approaches are often insufficient to describe such processes, there are several nonlinear methods that, however, require rather long time observations. To overcome these difficulties, we propose measures of complexity based on vertical structures in recurrence plots and apply them to the logistic map as well as to heart-rate-variability data. For the logistic map these measures enable us not only to detect transitions between chaotic and periodic states, but also to identify laminar states, i.e., chaos-chaos transitions. The traditional recurrence quantification analysis fails to detect the latter transitions. Applying our measures to the heart-rate-variability data, we are able to detect and quantify the laminar phases before a life-threatening cardiac arrhythmia occurs thereby facilitating a prediction of such an event. Our findings could be of importance for the therapy of malignant cardiac arrhythmias." }, { "pmid": "14976271", "title": "Patterns of cardiovascular reactivity in disease diagnosis.", "abstract": "BACKGROUND\nAberrations of cardiovascular reactivity (CVR), an expression of autonomic function, occur in a number of clinical conditions, but lack specificity for a particular disorder. Recently, a CVR pattern particular to chronic fatigue syndrome was observed.\n\n\nAIM\nTo assess whether specific CVR patterns can be described for other clinical conditions.\n\n\nMETHODS\nSix groups of patients, matched for age and gender, were evaluated with a shortened head-up tilt test: patients with chronic fatigue syndrome (CFS) (n = 20), non-CFS fatigue (F) (n = 15), neurally-mediated syncope (SY) (n = 21), familial Mediterranean fever (FMF) (n = 17), psoriatic arthritis (PSOR) (n = 19) and healthy subjects (H) (n = 20). 
A 10-min supine phase was followed by recording 600 cardiac cycles on tilt (5-10 min). Beat-to-beat heart rate (HR) and pulse transit time (PTT) were measured. Results were analysed using conventional statistics, recurrence plot analysis and fractal analysis.\n\n\nRESULTS\nMultivariate analysis evaluated independent predictors of the CVR in each patient group vs. all other groups. Based on these predictors, equations were determined for a linear discriminant score (DS) for each group. The best sensitivities and specificities of the DS, consistent with disease-related phenotypes of CVR, were noted in the following groups: CFS, 90.0% and 60%; SY, 93.3% and 62.5%; FMF, 90.1% and 75.4%, respectively.\n\n\nDISCUSSION\nPathological disturbances may alter cardiovascular reactivity. Our data support the existence of disease-related CVR phenotypes, with implications for pathogenesis and differential diagnosis." }, { "pmid": "17550872", "title": "A dynamical systems analysis of spontaneous movements in newborn infants.", "abstract": "The authors evaluated the characteristics of infants' spontaneous movements by using dynamical systems analysis. Participants were 6 healthy 1-month-old full-term newborn infants (3 males, 3 females). They used a triaxial accelerometer to measure limb acceleration in 3-dimensional space. Acceleration signals were recorded during 200 s from the right wrist when the infant was in an active alert state and lying supine (sampling rate 200 Hz). and was stored in the system's memory. Digitized data were transferred to a PC for subsequent processing with analysis software. The acceleration time series data were analyzed linearly and nonlinearly. Nonlinear time series analysis suggested that the infants' spontaneous movements are characterized by a nonlinear chaotic dynamics with 5 or 6 embedding dimensions. The production of infants'spontaneous movements involves chaotic dynamic systems that are capable of generating voluntary skill movements." }, { "pmid": "8175612", "title": "Dynamical assessment of physiological systems and states using recurrence plot strategies.", "abstract": "Physiological systems are best characterized as complex dynamical processes that are continuously subjected to and updated by nonlinear feedforward and feedback inputs. System outputs usually exhibit wide varieties of behaviors due to dynamical interactions between system components, external noise perturbations, and physiological state changes. Complicated interactions occur at a variety of hierarchial levels and involve a number of interacting variables, many of which are unavailable for experimental measurement. In this paper we illustrate how recurrence plots can take single physiological measurements, project them into multidimensional space by embedding procedures, and identify time correlations (recurrences) that are not apparent in the one-dimensional time series. We extend the original description of recurrence plots by computing an array of specific recurrence variables that quantify the deterministic structure and complexity of the plot. We then demonstrate how physiological states can be assessed by making repeated recurrence plot calculations within a window sliding down any physiological dynamic. Unlike other predominant time series techniques, recurrence plot analyses are not limited by data stationarity and size constraints. Pertinent physiological examples from respiratory and skeletal motor systems illustrate the utility of recurrence plots in the diagnosis of nonlinear systems. 
The methodology is fully applicable to any rhythmical system, whether it be mechanical, electrical, neural, hormonal, chemical, or even spacial." } ]
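The recurrence-plot abstracts above (PMIDs 12241313, 14976271 and 8175612) all rely on the same basic construction: embed a scalar time series in a higher-dimensional state space, threshold the pairwise distances between the reconstructed state vectors, and quantify the resulting recurrence matrix. The sketch below is a minimal, illustrative Python version of that construction; the embedding dimension, delay and threshold heuristic are assumptions chosen for the toy signal, not parameters taken from any of the cited studies.

```python
import numpy as np

def recurrence_matrix(x, dim=3, delay=1, eps=None):
    """Thresholded recurrence matrix of a scalar time series.

    dim, delay and eps are illustrative embedding/threshold choices,
    not values used in the cited studies.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * delay
    # Time-delay embedding: each row is one reconstructed state vector.
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    # Pairwise Euclidean distances between state vectors.
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * dists.max()          # a common heuristic threshold
    return (dists <= eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent points, excluding the trivial main diagonal."""
    n = R.shape[0]
    return (R.sum() - n) / (n * (n - 1))

if __name__ == "__main__":
    t = np.linspace(0, 20 * np.pi, 500)
    signal = np.sin(t) + 0.05 * np.random.randn(t.size)  # toy quasi-periodic series
    R = recurrence_matrix(signal)
    print(f"recurrence rate: {recurrence_rate(R):.3f}")
```

Quantities such as determinism and laminarity, used in recurrence quantification analysis, are then computed from the diagonal and vertical line structures of the same matrix R.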
JMIR Medical Informatics
28153818
PMC5314102
10.2196/medinform.6918
Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE
Background: Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Objective: Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multistage triaging support to help mitigate the information overload problem. Methods: We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Results: Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Conclusions: Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts.
Related Work
Some researchers have recognized the value of using ontologies to better support search activities (eg, [13,45]). The central focus of this research is term extraction and mapping, which is done using text mining and natural language processing techniques. In this body of work, ontologies are used to improve search performance computationally without involving users. The fundamental difference compared with our work is that we use ontologies to help users develop knowledge and domain-specific vocabulary—that is, the focus is on the user rather than on algorithms and other computational processes. Our approach is important in contexts where users have valuable knowledge and context-specific goals that cannot be replaced by computation—in other words, users need to be kept “in the loop.”
Other researchers have focused on developing interfaces to MEDLINE as alternatives to PubMed. For example, Wei et al have developed PubTator, a PubMed replacement interface that uses multiple text mining algorithms to improve search results [46]. PubTator also offers some support for document triaging. Whereas PubTator appears interesting and useful, it relies on queries being input into the standard text box, and it presents results in a typical list-based fashion. Thus, it is not aimed at addressing either of the two problems we are attempting to address with OVERT-MED—that is, the vocabulary problem and the information overload problem. Other alternative interfaces that offer interesting features but do not address either of the two problems include SLIM [47] and HubMed [48]. An alternative interface that potentially provides support in addressing the first problem is iPubMed [49], which provides fuzzy matches to search results. An alternative interface that may provide support in addressing the second problem is refMED [50], which provides minimal triaging support through relevance ranking. A for-profit private tool, Quertle, appears to use visualizations to mitigate the information overload problem, although very few details are publicly available. Lu [51] provides a detailed survey that includes many other alternative interfaces to MEDLINE, although none are aimed at solving either of the two problems that we are addressing here.
In summary, no extant research explores the combination of (1) ontologies to help build domain-specific knowledge and vocabulary when users need to be kept “in the loop” and (2) triaging support using interactive visualizations to help mitigate the information overload problem. The following sections provide details about our approach to addressing these issues.
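OVERT-MED's first design strategy, using a formal ontology such as the HPO to help users find accurate vocabulary, can be illustrated with a very small sketch. The term list, the matching strategy (difflib's ratio-based fuzzy matching) and the cutoff below are all assumptions made for illustration; they are not the matching mechanism actually implemented in OVERT-MED.

```python
import difflib

# A tiny, hypothetical sample of ontology term labels; the real Human
# Phenotype Ontology contains many thousands of terms and synonyms.
HPO_LABELS = [
    "Hypertension",
    "Hypotension",
    "Hyperglycemia",
    "Hearing impairment",
    "Visual impairment",
    "Seizure",
]

def suggest_terms(user_query, labels=HPO_LABELS, k=3):
    """Return up to k ontology labels that approximately match the query."""
    lowered = {label.lower(): label for label in labels}
    # Ratio-based fuzzy matching stands in for whatever strategy a
    # production interface would actually use.
    hits = difflib.get_close_matches(user_query.lower(), list(lowered), n=k, cutoff=0.4)
    return [lowered[h] for h in hits]

if __name__ == "__main__":
    print(suggest_terms("hypertention"))   # misspelled query -> ['Hypertension']
```

In a real interface the candidate labels would come from the ontology's full set of term names and synonyms, and the suggestions would be shown to the user rather than applied automatically, keeping the user "in the loop."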
[ "18280516", "20157491", "23803299", "11971889", "24513593", "27267955", "15561792", "17584211", "11720966", "15471753", "11209803", "16221948", "25665127", "21245076", "18950739", "23703206", "16321145", "16845111", "20624778", "21245076", "22034350" ]
[ { "pmid": "18280516", "title": "How to perform a literature search.", "abstract": "PURPOSE\nEvidence based clinical practice seeks to integrate the current best evidence from clinical research with physician clinical expertise and patient individual preferences. We outline a stepwise approach to an effective and efficient search of electronic databases and introduce the reader to resources most relevant to the practicing urologist.\n\n\nMATERIALS AND METHODS\nThe need for additional research evidence is introduced in the context of a urological clinical scenario. This information need is translated into a focused clinical question using the PICOT (population, intervention, comparison, outcome and type of study) format. This PICOT format provides key words for a literature search of pre-appraised evidence and original research studies that address the clinical scenario.\n\n\nRESULTS\nAvailable online resources can be broadly categorized into databases that focus on primary research studies, ie randomized, controlled trials, cohort studies, case-control or case series, such as MEDLINE and those that focus on secondary research that provides synthesis or synopsis of primary studies. Examples of such sources of pre-appraised evidence that are becoming increasingly relevant to urologists include BMJ Clinical Evidence, ACP Journal Club, The Cochrane Library and the National Guideline Clearinghouse.\n\n\nCONCLUSIONS\nThe ability to search the medical literature in a time efficient manner represents an important part of an evidence based practice that is relevant to all urologists. The use of electronic databases of pre-appraised evidence can greatly expedite the search for high quality evidence, which is then integrated with urologist clinical skills and patient individual circumstances." }, { "pmid": "20157491", "title": "Understanding PubMed user search behavior through log analysis.", "abstract": "This article reports on a detailed investigation of PubMed users' needs and behavior as a step toward improving biomedical information retrieval. PubMed is providing free service to researchers with access to more than 19 million citations for biomedical articles from MEDLINE and life science journals. It is accessed by millions of users each day. Efficient search tools are crucial for biomedical researchers to keep abreast of the biomedical literature relating to their own research. This study provides insight into PubMed users' needs and their behavior. This investigation was conducted through the analysis of one month of log data, consisting of more than 23 million user sessions and more than 58 million user queries. Multiple aspects of users' interactions with PubMed are characterized in detail with evidence from these logs. Despite having many features in common with general Web searches, biomedical information searches have unique characteristics that are made evident in this study. PubMed users are more persistent in seeking information and they reformulate queries often. The three most frequent types of search are search by author name, search by gene/protein, and search by disease. Use of abbreviation in queries is very frequent. Factors such as result set size influence users' decisions. Analysis of characteristics such as these plays a critical role in identifying users' information needs and their search habits. In turn, such an analysis also provides useful insight for improving biomedical information retrieval.Database URL:http://www.ncbi.nlm.nih.gov/PubMed." 
}, { "pmid": "23803299", "title": "Utilization and perceived problems of online medical resources and search tools among different groups of European physicians.", "abstract": "BACKGROUND\nThere is a large body of research suggesting that medical professionals have unmet information needs during their daily routines.\n\n\nOBJECTIVE\nTo investigate which online resources and tools different groups of European physicians use to gather medical information and to identify barriers that prevent the successful retrieval of medical information from the Internet.\n\n\nMETHODS\nA detailed Web-based questionnaire was sent out to approximately 15,000 physicians across Europe and disseminated through partner websites. 500 European physicians of different levels of academic qualification and medical specialization were included in the analysis. Self-reported frequency of use of different types of online resources, perceived importance of search tools, and perceived search barriers were measured. Comparisons were made across different levels of qualification (qualified physicians vs physicians in training, medical specialists without professorships vs medical professors) and specialization (general practitioners vs specialists).\n\n\nRESULTS\nMost participants were Internet-savvy, came from Austria (43%, 190/440) and Switzerland (31%, 137/440), were above 50 years old (56%, 239/430), stated high levels of medical work experience, had regular patient contact and were employed in nonacademic health care settings (41%, 177/432). All groups reported frequent use of general search engines and cited \"restricted accessibility to good quality information\" as a dominant barrier to finding medical information on the Internet. Physicians in training reported the most frequent use of Wikipedia (56%, 31/55). Specialists were more likely than general practitioners to use medical research databases (68%, 185/274 vs 27%, 24/88; χ²₂=44.905, P<.001). General practitioners were more likely than specialists to report \"lack of time\" as a barrier towards finding information on the Internet (59%, 50/85 vs 43%, 111/260; χ²₁=7.231, P=.007) and to restrict their search by language (48%, 43/89 vs 35%, 97/278; χ²₁=5.148, P=.023). They frequently consult general health websites (36%, 31/87 vs 19%, 51/269; χ²₂=12.813, P=.002) and online physician network communities (17%, 15/86, χ²₂=9.841 vs 6%, 17/270, P<.001).\n\n\nCONCLUSIONS\nThe reported inaccessibility of relevant, trustworthy resources on the Internet and frequent reliance on general search engines and social media among physicians require further attention. Possible solutions may be increased governmental support for the development and popularization of user-tailored medical search tools and open access to high-quality content for physicians. The potential role of collaborative tools in providing the psychological support and affirmation normally given by medical colleagues needs further consideration. Tools that speed up quality evaluation and aid selection of relevant search results need to be identified. In order to develop an adequate search tool, a differentiated approach considering the differing needs of physician subgroups may be beneficial." 
}, { "pmid": "11971889", "title": "Factors associated with success in searching MEDLINE and applying evidence to answer clinical questions.", "abstract": "OBJECTIVES\nThis study sought to assess the ability of medical and nurse practitioner students to use MEDLINE to obtain evidence for answering clinical questions and to identify factors associated with the successful answering of questions.\n\n\nMETHODS\nA convenience sample of medical and nurse practitioner students was recruited. After completing instruments measuring demographic variables, computer and searching attitudes and experience, and cognitive traits, the subjects were given a brief orientation to MEDLINE searching and the techniques of evidence-based medicine. The subjects were then given 5 questions (from a pool of 20) to answer in two sessions using the Ovid MEDLINE system and the Oregon Health & Science University library collection. Each question was answered using three possible responses that reflected the quality of the evidence. All actions capable of being logged by the Ovid system were captured. Statistical analysis was performed using a model based on generalized estimating equations. The relevance-based measures of recall and precision were measured by defining end queries and having relevance judgments made by physicians who were not associated with the study.\n\n\nRESULTS\nForty-five medical and 21 nurse practitioner students provided usable answers to 324 questions. The rate of correctness increased from 32.3 to 51.6 percent for medical students and from 31.7 to 34.7 percent for nurse practitioner students. Ability to answer questions correctly was most strongly associated with correctness of the answer before searching, user experience with MEDLINE features, the evidence-based medicine question type, and the spatial visualization score. The spatial visualization score showed multi-colinearity with student type (medical vs. nurse practitioner). Medical and nurse practitioner students obtained comparable recall and precision, neither of which was associated with correctness of the answer.\n\n\nCONCLUSIONS\nMedical and nurse practitioner students in this study were at best moderately successful at answering clinical questions correctly with the assistance of literature searching. The results confirm the importance of evaluating both search ability and the ability to use the resulting information to accomplish a clinical task." }, { "pmid": "24513593", "title": "Evaluation of a novel Conjunctive Exploratory Navigation Interface for consumer health information: a crowdsourced comparative study.", "abstract": "BACKGROUND\nNumerous consumer health information websites have been developed to provide consumers access to health information. However, lookup search is insufficient for consumers to take full advantage of these rich public information resources. 
Exploratory search is considered a promising complementary mechanism, but its efficacy has never before been rigorously evaluated for consumer health information retrieval interfaces.\n\n\nOBJECTIVE\nThis study aims to (1) introduce a novel Conjunctive Exploratory Navigation Interface (CENI) for supporting effective consumer health information retrieval and navigation, and (2) evaluate the effectiveness of CENI through a search-interface comparative evaluation using crowdsourcing with Amazon Mechanical Turk (AMT).\n\n\nMETHODS\nWe collected over 60,000 consumer health questions from NetWellness, one of the first consumer health websites to provide high-quality health information. We designed and developed a novel conjunctive exploratory navigation interface to explore NetWellness health questions with health topics as dynamic and searchable menus. To investigate the effectiveness of CENI, we developed a second interface with keyword-based search only. A crowdsourcing comparative study was carefully designed to compare three search modes of interest: (A) the topic-navigation-based CENI, (B) the keyword-based lookup interface, and (C) either the most commonly available lookup search interface with Google, or the resident advanced search offered by NetWellness. To compare the effectiveness of the three search modes, 9 search tasks were designed with relevant health questions from NetWellness. Each task included a rating of difficulty level and questions for validating the quality of answers. Ninety anonymous and unique AMT workers were recruited as participants.\n\n\nRESULTS\nRepeated-measures ANOVA analysis of the data showed the search modes A, B, and C had statistically significant differences among their levels of difficulty (P<.001). Wilcoxon signed-rank test (one-tailed) between A and B showed that A was significantly easier than B (P<.001). Paired t tests (one-tailed) between A and C showed A was significantly easier than C (P<.001). Participant responses on the preferred search modes showed that 47.8% (43/90) participants preferred A, 25.6% (23/90) preferred B, 24.4% (22/90) preferred C. Participant comments on the preferred search modes indicated that CENI was easy to use, provided better organization of health questions by topics, allowed users to narrow down to the most relevant contents quickly, and supported the exploratory navigation by non-experts or those unsure how to initiate their search.\n\n\nCONCLUSIONS\nWe presented a novel conjunctive exploratory navigation interface for consumer health information retrieval and navigation. Crowdsourcing permitted a carefully designed comparative search-interface evaluation to be completed in a timely and cost-effective manner with a relatively large number of participants recruited anonymously. Accounting for possible biases, our study has shown for the first time with crowdsourcing that the combination of exploratory navigation and lookup search is more effective than lookup search alone." }, { "pmid": "27267955", "title": "Designing Health Websites Based on Users' Web-Based Information-Seeking Behaviors: A Mixed-Method Observational Study.", "abstract": "BACKGROUND\nLaypeople increasingly use the Internet as a source of health information, but finding and discovering the right information remains problematic. 
These issues are partially due to the mismatch between the design of consumer health websites and the needs of health information seekers, particularly the lack of support for \"exploring\" health information.\n\n\nOBJECTIVE\nThe aim of this research was to create a design for consumer health websites by supporting different health information-seeking behaviors. We created a website called Better Health Explorer with the new design. Through the evaluation of this new design, we derive design implications for future implementations.\n\n\nMETHODS\nBetter Health Explorer was designed using a user-centered approach. The design was implemented and assessed through a laboratory-based observational study. Participants tried to use Better Health Explorer and another live health website. Both websites contained the same content. A mixed-method approach was adopted to analyze multiple types of data collected in the experiment, including screen recordings, activity logs, Web browsing histories, and audiotaped interviews.\n\n\nRESULTS\nOverall, 31 participants took part in the observational study. Our new design showed a positive result for improving the experience of health information seeking, by providing a wide range of information and an engaging environment. The results showed better knowledge acquisition, a higher number of page reads, and more query reformulations in both focused and exploratory search tasks. In addition, participants spent more time to discover health information with our design in exploratory search tasks, indicating higher engagement with the website. Finally, we identify 4 design considerations for designing consumer health websites and health information-seeking apps: (1) providing a dynamic information scope; (2) supporting serendipity; (3) considering trust implications; and (4) enhancing interactivity.\n\n\nCONCLUSIONS\nBetter Health Explorer provides strong support for the heterogeneous and shifting behaviors of health information seekers and eases the health information-seeking process. Our findings show the importance of understanding different health information-seeking behaviors and highlight the implications for designers of consumer health websites and health information-seeking apps." }, { "pmid": "15561792", "title": "Answering physicians' clinical questions: obstacles and potential solutions.", "abstract": "OBJECTIVE\nTo identify the most frequent obstacles preventing physicians from answering their patient-care questions and the most requested improvements to clinical information resources.\n\n\nDESIGN\nQualitative analysis of questions asked by 48 randomly selected generalist physicians during ambulatory care.\n\n\nMEASUREMENTS\nFrequency of reported obstacles to answering patient-care questions and recommendations from physicians for improving clinical information resources.\n\n\nRESULTS\nThe physicians asked 1,062 questions but pursued answers to only 585 (55%). The most commonly reported obstacle to the pursuit of an answer was the physician's doubt that an answer existed (52 questions, 11%). Among pursued questions, the most common obstacle was the failure of the selected resource to provide an answer (153 questions, 26%). During audiotaped interviews, physicians made 80 recommendations for improving clinical information resources. For example, they requested comprehensive resources that answer questions likely to occur in practice with emphasis on treatment and bottom-line advice. 
They asked for help in locating information quickly by using lists, tables, bolded subheadings, and algorithms and by avoiding lengthy, uninterrupted prose.\n\n\nCONCLUSION\nPhysicians do not seek answers to many of their questions, often suspecting a lack of usable information. When they do seek answers, they often cannot find the information they need. Clinical resource developers could use the recommendations made by practicing physicians to provide resources that are more useful for answering clinical questions." }, { "pmid": "17584211", "title": "The information-seeking behaviour of doctors: a review of the evidence.", "abstract": "This paper provides a narrative review of the available literature from the past 10 years (1996-2006) that focus on the information seeking behaviour of doctors. The review considers the literature in three sub-themes: Theme 1, the Information Needs of Doctors includes information need, frequency of doctors' questions and types of information needs; Theme 2, Information Seeking by Doctors embraces pattern of information resource use, time spent searching, barriers to information searching and information searching skills; Theme 3, Information Sources Utilized by Doctors comprises the number of sources utilized, comparison of information sources consulted, computer usage, ranking of information resources, printed resource use, personal digital assistant (PDA) use, electronic database use and the Internet. The review is wide ranging. It would seem that the traditional methods of face-to-face communication and use of hard-copy evidence still prevail amongst qualified medical staff in the clinical setting. The use of new technologies embracing the new digital age in information provision may influence this in the future. However, for now, it would seem that there is still research to be undertaken to uncover the most effective methods of encouraging clinicians to use the best evidence in everyday practice." }, { "pmid": "11720966", "title": "Evaluation of controlled vocabulary resources for development of a consumer entry vocabulary for diabetes.", "abstract": "BACKGROUND\nDigital information technology can facilitate informed decision making by individuals regarding their personal health care. The digital divide separates those who do and those who do not have access to or otherwise make use of digital information. To close the digital divide, health care communications research must address a fundamental issue, the consumer vocabulary problem: consumers of health care, at least those who are laypersons, are not always familiar with the professional vocabulary and concepts used by providers of health care and by providers of health care information, and, conversely, health care and health care information providers are not always familiar with the vocabulary and concepts used by consumers. One way to address this problem is to develop a consumer entry vocabulary for health care communications.\n\n\nOBJECTIVES\nTo evaluate the potential of controlled vocabulary resources for supporting the development of consumer entry vocabulary for diabetes.\n\n\nMETHODS\nWe used folk medical terms from the Dictionary of American Regional English project to create extended versions of 3 controlled vocabulary resources: the Unified Medical Language System Metathesaurus, the Eurodicautom of the European Commission's Translation Service, and the European Commission Glossary of popular and technical medical terms. 
We extracted consumer terms from consumer-authored materials, and physician terms from physician-authored materials. We used our extended versions of the vocabulary resources to link diabetes-related terms used by health care consumers to synonymous, nearly-synonymous, or closely-related terms used by family physicians. We also examined whether retrieval of diabetes-related World Wide Web information sites maintained by nonprofit health care professional organizations, academic organizations, or governmental organizations can be improved by substituting a physician term for its related consumer term in the query.\n\n\nRESULTS\nThe Dictionary of American Regional English extension of the Metathesaurus provided coverage, either direct or indirect, of approximately 23% of the natural language consumer-term-physician-term pairs. The Dictionary of American Regional English extension of the Eurodicautom provided coverage for 16% of the term pairs. Both the Metathesaurus and the Eurodicautom indirectly related more terms than they directly related. A high percentage of covered term pairs, with more indirectly covered pairs than directly covered pairs, might be one way to make the most out of expensive controlled vocabulary resources. We compared retrieval of diabetes-related Web information sites using the physician terms to retrieval using related consumer terms. We based the comparison on retrieval of sites maintained by non-profit healthcare professional organizations, academic organizations, or governmental organizations. The number of such sites in the first 20 results from a search was increased by substituting a physician term for its related consumer term in the query. This suggests that the Dictionary of American Regional English extensions of the Metathesaurus and Eurodicautom may be used to provide useful links from natural language consumer terms to natural language physician terms.\n\n\nCONCLUSIONS\nThe Dictionary of American Regional English extensions of the Metathesaurus and Eurodicautom should be investigated further for support of consumer entry vocabulary for diabetes." }, { "pmid": "15471753", "title": "Reformulation of consumer health queries with professional terminology: a pilot study.", "abstract": "BACKGROUND\nThe Internet is becoming an increasingly important resource for health-information seekers. However, consumers often do not use effective search strategies. Query reformulation is one potential intervention to improve the effectiveness of consumer searches.\n\n\nOBJECTIVE\nWe endeavored to answer the research question: \"Does reformulating original consumer queries with preferred terminology from the Unified Medical Language System (UMLS) Metathesaurus lead to better search returns?\"\n\n\nMETHODS\nConsumer-generated queries with known goals (n=16) that could be mapped to UMLS Metathesaurus terminology were used as test samples. Reformulated queries were generated by replacing user terms with Metathesaurus-preferred synonyms (n=18). Searches (n=36) were performed using both a consumer information site and a general search engine. Top 30 precision was used as a performance indicator to compare the performance of the original and reformulated queries.\n\n\nRESULTS\nForty-two percent of the searches utilizing reformulated queries yielded better search returns than their associated original queries, 19% yielded worse results, and the results for the remaining 39% did not change.
We identified ambiguous lay terms, expansion of acronyms, and arcane professional terms as causes for changes in performance.\n\n\nCONCLUSIONS\nWe noted a trend towards increased precision when providing substitutions for lay terms, abbreviations, and acronyms. We have found qualitative evidence that reformulating queries with professional terminology may be a promising strategy to improve consumer health-information searches, although we caution that automated reformulation could in fact worsen search performance when the terminology is ill-fitted or arcane." }, { "pmid": "16221948", "title": "Exploring and developing consumer health vocabularies.", "abstract": "Laypersons (\"consumers\") often have difficulty finding, understanding, and acting on health information due to gaps in their domain knowledge. Ideally, consumer health vocabularies (CHVs) would reflect the different ways consumers express and think about health topics, helping to bridge this vocabulary gap. However, despite the recent research on mismatches between consumer and professional language (e.g., lexical, semantic, and explanatory), there have been few systematic efforts to develop and evaluate CHVs. This paper presents the point of view that CHV development is practical and necessary for extending research on informatics-based tools to facilitate consumer health information seeking, retrieval, and understanding. In support of the view, we briefly describe a distributed, bottom-up approach for (1) exploring the relationship between common consumer health expressions and professional concepts and (2) developing an open-access, preliminary (draft) \"first-generation\" CHV. While recognizing the limitations of the approach (e.g., not addressing psychosocial and cultural factors), we suggest that such exploratory research and development will yield insights into the nature of consumer health expressions and assist developers in creating tools and applications to support consumer health information seeking." }, { "pmid": "25665127", "title": "Knowledge retrieval from PubMed abstracts and electronic medical records with the Multiple Sclerosis Ontology.", "abstract": "BACKGROUND\nIn order to retrieve useful information from scientific literature and electronic medical records (EMR) we developed an ontology specific for Multiple Sclerosis (MS).\n\n\nMETHODS\nThe MS Ontology was created using scientific literature and expert review under the Protégé OWL environment. We developed a dictionary with semantic synonyms and translations to different languages for mining EMR. The MS Ontology was integrated with other ontologies and dictionaries (diseases/comorbidities, gene/protein, pathways, drug) into the text-mining tool SCAIView. We analyzed the EMRs from 624 patients with MS using the MS ontology dictionary in order to identify drug usage and comorbidities in MS. Testing competency questions and functional evaluation using F statistics further validated the usefulness of MS ontology.\n\n\nRESULTS\nValidation of the lexicalized ontology by means of named entity recognition-based methods showed an adequate performance (F score = 0.73). The MS Ontology retrieved 80% of the genes associated with MS from scientific abstracts and identified additional pathways targeted by approved disease-modifying drugs (e.g. apoptosis pathways associated with mitoxantrone, rituximab and fingolimod). 
The analysis of the EMR from patients with MS identified current usage of disease modifying drugs and symptomatic therapy as well as comorbidities, which are in agreement with recent reports.\n\n\nCONCLUSION\nThe MS Ontology provides a semantic framework that is able to automatically extract information from both scientific literature and EMR from patients with MS, revealing new pathogenesis insights as well as new clinical information." }, { "pmid": "21245076", "title": "PubMed and beyond: a survey of web tools for searching biomedical literature.", "abstract": "The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search." }, { "pmid": "18950739", "title": "The Human Phenotype Ontology: a tool for annotating and analyzing human hereditary disease.", "abstract": "There are many thousands of hereditary diseases in humans, each of which has a specific combination of phenotypic features, but computational analysis of phenotypic data has been hampered by lack of adequate computational data structures. Therefore, we have developed a Human Phenotype Ontology (HPO) with over 8000 terms representing individual phenotypic anomalies and have annotated all clinical entries in Online Mendelian Inheritance in Man with the terms of the HPO. We show that the HPO is able to capture phenotypic similarities between diseases in a useful and highly significant fashion." }, { "pmid": "23703206", "title": "PubTator: a web-based text mining tool for assisting biocuration.", "abstract": "Manually curating knowledge from biomedical literature into structured databases is highly expensive and time-consuming, making it difficult to keep pace with the rapid growth of the literature. There is therefore a pressing need to assist biocuration with automated text mining tools. Here, we describe PubTator, a web-based system for assisting biocuration. 
PubTator is different from the few existing tools by featuring a PubMed-like interface, which many biocurators find familiar, and being equipped with multiple challenge-winning text mining algorithms to ensure the quality of its automatic results. Through a formal evaluation with two external user groups, PubTator was shown to be capable of improving both the efficiency and accuracy of manual curation. PubTator is publicly available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/." }, { "pmid": "16321145", "title": "SLIM: an alternative Web interface for MEDLINE/PubMed searches - a preliminary study.", "abstract": "BACKGROUND\nWith the rapid growth of medical information and the pervasiveness of the Internet, online search and retrieval systems have become indispensable tools in medicine. The progress of Web technologies can provide expert searching capabilities to non-expert information seekers. The objective of the project is to create an alternative search interface for MEDLINE/PubMed searches using JavaScript slider bars. SLIM, or Slider Interface for MEDLINE/PubMed searches, was developed with PHP and JavaScript. Interactive slider bars in the search form controlled search parameters such as limits, filters and MeSH terminologies. Connections to PubMed were done using the Entrez Programming Utilities (E-Utilities). Custom scripts were created to mimic the automatic term mapping process of Entrez. Page generation times for both local and remote connections were recorded.\n\n\nRESULTS\nAlpha testing by developers showed SLIM to be functionally stable. Page generation times to simulate loading times were recorded the first week of alpha and beta testing. Average page generation times for the index page, previews and searches were 2.94 milliseconds, 0.63 seconds and 3.84 seconds, respectively. Eighteen physicians from the US, Australia and the Philippines participated in the beta testing and provided feedback through an online survey. Most users found the search interface user-friendly and easy to use. Information on MeSH terms and the ability to instantly hide and display abstracts were identified as distinctive features.\n\n\nCONCLUSION\nSLIM can be an interactive time-saving tool for online medical literature research that improves user control and capability to instantly refine and refocus search strategies. With continued development and by integrating search limits, methodology filters, MeSH terms and levels of evidence, SLIM may be useful in the practice of evidence-based medicine." }, { "pmid": "16845111", "title": "HubMed: a web-based biomedical literature search interface.", "abstract": "HubMed is an alternative search interface to the PubMed database of biomedical literature, incorporating external web services and providing functions to improve the efficiency of literature search, browsing and retrieval. Users can create and visualize clusters of related articles, export citation data in multiple formats, receive daily updates of publications in their areas of interest, navigate links to full text and other related resources, retrieve data from formatted bibliography lists, navigate citation links and store annotated metadata for articles of interest. HubMed is freely available at http://www.hubmed.org/." 
}, { "pmid": "20624778", "title": "Interactive and fuzzy search: a dynamic way to explore MEDLINE.", "abstract": "MOTIVATION\nThe MEDLINE database, consisting of over 19 million publication records, is the primary source of information for biomedicine and health questions. Although the database itself has been growing rapidly, the search paradigm of MEDLINE has remained largely unchanged.\n\n\nRESULTS\nHere, we propose a new system for exploring the entire MEDLINE collection, represented by two unique features: (i) interactive: providing instant feedback to users' query letter by letter, and (ii) fuzzy: allowing approximate search. We develop novel index structures and search algorithms to make such a search model possible. We also develop incremental-update techniques to keep the data up to date.\n\n\nAVAILABILITY\nInteractive and fuzzy searching algorithms for exploring MEDLINE are implemented in a system called iPubMed, freely accessible over the web at http://ipubmed.ics.uci.edu/ and http://tastier.cs.tsinghua.edu.cn/ipubmed." }, { "pmid": "21245076", "title": "PubMed and beyond: a survey of web tools for searching biomedical literature.", "abstract": "The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search." }, { "pmid": "22034350", "title": "D³: Data-Driven Documents.", "abstract": "Data-Driven Documents (D3) is a novel representation-transparent approach to visualization for the web. Rather than hide the underlying scenegraph within a toolkit-specific abstraction, D3 enables direct inspection and manipulation of a native representation: the standard document object model (DOM). With D3, designers selectively bind input data to arbitrary document elements, applying dynamic transforms to both generate and modify content. We show how representational transparency improves expressiveness and better integrates with developer tools than prior approaches, while offering comparable notational efficiency and retaining powerful declarative components. 
Immediate evaluation of operators further simplifies debugging and allows iterative development. Additionally, we demonstrate how D3 transforms naturally enable animation and interaction with dramatic performance improvements over intermediate representations." } ]
Royal Society Open Science
28280588
PMC5319354
10.1098/rsos.160896
Understanding human queuing behaviour at exits: an empirical study
The choice of the exit through which to egress from a facility plays a fundamental role in pedestrian modelling and simulation. Yet empirical evidence to back up simulations is scarce. In this contribution, we present three new groups of experiments that we conducted in different geometries. We varied parameters such as the width of the doors and the initial location and number of pedestrians, which in turn affected their perception of the environment. We extracted and analysed relevant indicators such as the distance to the exits and density levels. The results provide evidence that pedestrians use time-dependent information to optimize their exit choice and that, in congested states, a load balancing over the exits occurs. We propose a minimal modelling approach that covers those situations, especially cases where the geometry is not symmetrical. Most models try to achieve this load balancing by simulating the system and solving optimization problems. We show statistically and by simulation that a linear model based on the distance to the exits and the density levels around the exit can be an efficient dynamical alternative.
2. Related works
Data gathering for the exit choice of pedestrians is performed in real-world settings [1–4], as well as in virtual environments [5–7]. Participants might behave differently in virtual environments, where perception is different. However, we observe in both cases that pedestrians are able to dynamically optimize their travel time by choosing adequate exits. In the models, the choice of the exit corresponds to the tactical level of pedestrian behaviour. Early works consider the shortest path an adequate solution for uncongested situations [8]. In congested states, the closest exit, if it is congested, may not be the one minimizing the travel time. Therefore, most models are based on the distance to the exit and the travel time (see e.g. [9–13]). Other factors are also used, such as route preference [4], the density level around the exits [1,2], socio-economic factors [7], the type of behaviour (egoistic/cooperative, see [3]), or, in the case of emergency, the presence of smoke, visibility, the herding tendency and the faster-is-slower effect [4,14–16]. Several types of model have been developed. Some use logit or probit statistical models [4,5,7,11,17]. Others are based on game-theoretic notions of pedestrian rationality and objective functions [9,10], while iterative methods such as the Metropolis algorithm or neural networks reach user or system optima by minimizing individual travel time or marginal cost [2,13].
The estimation of travel times in congested situations is a complex problem. Such a procedure is generally realized by simulating an operational pedestrian model. This coupling to simulation makes the exit model computationally expensive to use. Yet there exist strong correlations between travel time and density level; they are a consequence of the characteristic fundamental relationship between flow and density that is well established in the traffic-theory literature (see e.g. [18]). Some recent dynamical models are based, among other parameters, on the density levels in the vicinity of the exits (see [1,2,4]). In such models, the density substitutes for the travel time. Density levels are simple to measure and, in contrast to travel times, do not require simulation of the system to be estimated. This makes density-based models easier to implement than equilibrium-based models.
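To make the density-based alternative concrete, the following sketch scores each exit with a linear utility in distance and local density and turns the utilities into choice probabilities with a logit-style softmax. The weights alpha and beta are illustrative placeholders, not coefficients estimated from the experiments described above.

```python
import numpy as np

def exit_choice_probabilities(distances, densities, alpha=1.0, beta=2.0):
    """Logit-style exit-choice probabilities from distance and local density.

    alpha and beta are illustrative weights, not values fitted to data.
    """
    distances = np.asarray(distances, dtype=float)
    densities = np.asarray(densities, dtype=float)
    utility = -alpha * distances - beta * densities   # closer and emptier is better
    z = utility - utility.max()                       # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

if __name__ == "__main__":
    # Two exits: exit 0 is closer but congested, exit 1 is farther but nearly empty.
    print(exit_choice_probabilities(distances=[5.0, 9.0],
                                    densities=[2.5, 0.2]))
```

With these toy numbers the farther but emptier exit receives the higher probability, which is the load-balancing effect reported in the experiments.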
[ "11028994", "27605166" ]
[ { "pmid": "11028994", "title": "Simulating dynamical features of escape panic.", "abstract": "One of the most disastrous forms of collective human behaviour is the kind of crowd stampede induced by panic, often leading to fatalities as people are crushed or trampled. Sometimes this behaviour is triggered in life-threatening situations such as fires in crowded buildings; at other times, stampedes can arise during the rush for seats or seemingly without cause. Although engineers are finding ways to alleviate the scale of such disasters, their frequency seems to be increasing with the number and size of mass events. But systematic studies of panic behaviour and quantitative theories capable of predicting such crowd dynamics are rare. Here we use a model of pedestrian behaviour to investigate the mechanisms of (and preconditions for) panic and jamming by uncoordinated motion in crowds. Our simulations suggest practical ways to prevent dangerous crowd pressures. Moreover, we find an optimal strategy for escape from a smoke-filled room, involving a mixture of individualistic behaviour and collective 'herding' instinct." }, { "pmid": "27605166", "title": "Crowd behaviour during high-stress evacuations in an immersive virtual environment.", "abstract": "Understanding the collective dynamics of crowd movements during stressful emergency situations is central to reducing the risk of deadly crowd disasters. Yet, their systematic experimental study remains a challenging open problem due to ethical and methodological constraints. In this paper, we demonstrate the viability of shared three-dimensional virtual environments as an experimental platform for conducting crowd experiments with real people. In particular, we show that crowds of real human subjects moving and interacting in an immersive three-dimensional virtual environment exhibit typical patterns of real crowds as observed in real-life crowded situations. These include the manifestation of social conventions and the emergence of self-organized patterns during egress scenarios. High-stress evacuation experiments conducted in this virtual environment reveal movements characterized by mass herding and dangerous overcrowding as they occur in crowd disasters. We describe the behavioural mechanisms at play under such extreme conditions and identify critical zones where overcrowding may occur. Furthermore, we show that herding spontaneously emerges from a density effect without the need to assume an increase of the individual tendency to imitate peers. Our experiments reveal the promise of immersive virtual environments as an ethical, cost-efficient, yet accurate platform for exploring crowd behaviour in high-risk situations with real human subjects." } ]
Scientific Reports
28230161
PMC5322330
10.1038/srep43167
Robust High-dimensional Bioinformatics Data Streams Mining by ODR-ioVFDT
Outlier detection in bioinformatics data stream mining has received significant attention from research communities in recent years. How to distinguish noise from a genuine exception, and whether to discard such a value or devise an extra decision path to accommodate it, poses a dilemma. In this paper, we propose a novel algorithm, Outlier Detection and Removal with an incrementally Optimized Very Fast Decision Tree (ODR-ioVFDT), for handling outliers during continuous data learning. Using an adaptive interquartile-range-based identification method, a tolerance threshold is set and then used to judge whether a value of exceptional magnitude should be included for training. This differs from traditional outlier detection/removal approaches, which treat detection and removal as two separate passes over the data. The proposed algorithm is tested on datasets from five bioinformatics scenarios, comparing the performance of our model against counterparts without ODR. The results show that ODR-ioVFDT performs better in classification accuracy, kappa statistic, and time consumption. Applied to bioinformatics streaming data processing for detecting and quantifying information about life phenomena, states, characteristics, variables and components of an organism, ODR-ioVFDT can help to diagnose and treat disease more effectively.
Related Work
There are many ways to categorize outlier detection approaches. In terms of the class objective, a one-class classification approach to outlier detection was proposed by Tax17: artificial outliers are generated from the normal instances used to train a one-class classifier, and a combination of one-class classification and support vector data description algorithms is then used to obtain a decision boundary between normal samples and outliers. The drawback of one-class classification is that it cannot handle multi-objective datasets. The genetic programming approach to one-class classification proposed by Loveard and Ciesielski18 therefore aims to accommodate diverse formalisms in its evolutionary processing. Since multifarious datasets with diverse classes account for most types of data, outlier detection approaches for multi-objective data are in wide demand19. The filtering method implemented by Michael R. Smith et al.20 targets instances that should be misclassified (ISMs), which exhibit a high level of class overlap with similar instances. The approach is based on two heuristic measures: k-Disagreeing Neighbors (kDN), which captures the local class overlap around an instance, and Disjunct Size (DS), which characterizes an instance by the size of the disjunct that covers it relative to the largest disjunct in the dataset. Although this method performs well at outlier reduction, its high time cost is the biggest drawback of ISMs.
Pattern-learning outlier detection models are usually categorized as clustering-based, distance-based, density-based, probabilistic or information-theoretic. Wu and Wang21 use an information-theoretic model that shares an interesting relationship with the other models: the concept of holoentropy, which takes both entropy and total correlation into consideration, serves as the outlier factor of an object; it is determined solely by the object itself and can be updated efficiently. The method constrains the maximum deviation allowed from the normal model, and an object is reported as an outlier if its deviation is large. Zeng, Zhang and Zou22 used biological interaction networks to uncover relationships among genes, proteins, miRNAs and disease phenotypes and to predict potential disease-related miRNAs based on these networks. Ando23 gives a scalable minimization algorithm based on an information-bottleneck formalization that exploits the localized form of the cost function over individual clusters. Bay and Schwabacher24 present a distance-based model that uses a simple nested-loop algorithm and gives near linear time performance. Knorr25 uses the k-nearest-neighbor distribution of a data point to determine whether it is an outlier. Xuan et al.26 proposed HDMP, a prediction method based on the weighted k most similar neighbors, which exploits the similarity between diseases and phenotypes27. Yousri28 presents an approach in which clustering is considered a problem complementary to outlier analysis: a universal set of clusters is proposed that combines the clusters obtained from clustering with a virtual cluster for outliers, and the clustering model is optimized to purposely detect outliers. Breunig et al.29 use a density-based model to define an outlier score, the local outlier factor, whose degree depends on how isolated an object is relative to its surrounding neighborhood.
Cao et al.30 also present density-based micro-clusters to summarize clusters of arbitrary shape, guaranteeing the precision of the micro-cluster weights under limited memory. Yuen and Mu31 gave a probabilistic method for robust parametric identification and outlier detection in a linear regression setting. Brink et al.32 derived a Gaussian noise model for outlier removal. The probabilistic approach is closely analogous to clustering algorithms in that fitted values are used to quantify the outlier scores of data points.
Our outlier identification method, the incrementally optimized very fast decision tree with outlier detection and removal (ODR-ioVFDT), is a nonparametric optimized decision tree classifier based on probability density that performs outlier detection and removal first and, in the meantime, sends the clean data flow to ioVFDT3334. The algorithm aims to reduce the running time of classification and increase prediction accuracy by performing quick time-series preprocessing of the dataset.
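As a rough illustration of interquartile-range screening on a stream, the sketch below keeps a sliding window of recently accepted values and rejects a new value that falls outside the Tukey fences computed from that window. The window size, warm-up length and fence multiplier are assumptions; this is a generic IQR filter, not the exact adaptive threshold used inside ODR-ioVFDT.

```python
from collections import deque
import numpy as np

class IQROutlierFilter:
    """Windowed interquartile-range outlier filter (illustrative sketch only)."""

    def __init__(self, window=200, k=1.5):
        self.window = deque(maxlen=window)
        self.k = k   # 1.5 is the conventional Tukey fence multiplier

    def accept(self, value):
        """Return True if the value looks normal and should be used for training."""
        if len(self.window) >= 20:            # need some history before judging
            arr = np.asarray(self.window, dtype=float)
            q1, q3 = np.percentile(arr, [25, 75])
            iqr = q3 - q1
            lo, hi = q1 - self.k * iqr, q3 + self.k * iqr
            is_ok = lo <= value <= hi
        else:
            is_ok = True                      # warm-up: accept everything
        if is_ok:
            self.window.append(value)         # only clean values update the window
        return is_ok

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stream = list(rng.normal(0.0, 1.0, 500)) + [12.0]   # one gross outlier at the end
    f = IQROutlierFilter()
    flags = [f.accept(x) for x in stream]
    print(f"kept {sum(flags)} of {len(flags)} values; last value kept: {flags[-1]}")
```

Only accepted values update the window in this sketch, so a burst of extreme values cannot widen the fences and mask later outliers; whether that is desirable depends on how often the underlying distribution genuinely drifts.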
[ "25134094", "26059461", "23950912", "26134276" ]
[ { "pmid": "25134094", "title": "Incremental Support Vector Learning for Ordinal Regression.", "abstract": "Support vector ordinal regression (SVOR) is a popular method to tackle ordinal regression problems. However, until now there were no effective algorithms proposed to address incremental SVOR learning due to the complicated formulations of SVOR. Recently, an interesting accurate on-line algorithm was proposed for training ν -support vector classification (ν-SVC), which can handle a quadratic formulation with a pair of equality constraints. In this paper, we first present a modified SVOR formulation based on a sum-of-margins strategy. The formulation has multiple constraints, and each constraint includes a mixture of an equality and an inequality. Then, we extend the accurate on-line ν-SVC algorithm to the modified formulation, and propose an effective incremental SVOR algorithm. The algorithm can handle a quadratic formulation with multiple constraints, where each constraint is constituted of an equality and an inequality. More importantly, it tackles the conflicts between the equality and inequality constraints. We also provide the finite convergence analysis for the algorithm. Numerical experiments on the several benchmark and real-world data sets show that the incremental algorithm can converge to the optimal solution in a finite number of steps, and is faster than the existing batch and incremental SVOR algorithms. Meanwhile, the modified formulation has better accuracy than the existing incremental SVOR algorithm, and is as accurate as the sum-of-margins based formulation of Shashua and Levin." }, { "pmid": "26059461", "title": "Integrative approaches for predicting microRNA function and prioritizing disease-related microRNA using biological interaction networks.", "abstract": "MicroRNAs (miRNA) play critical roles in regulating gene expressions at the posttranscriptional levels. The prediction of disease-related miRNA is vital to the further investigation of miRNA's involvement in the pathogenesis of disease. In previous years, biological experimentation is the main method used to identify whether miRNA was associated with a given disease. With increasing biological information and the appearance of new miRNAs every year, experimental identification of disease-related miRNAs poses considerable difficulties (e.g. time-consumption and high cost). Because of the limitations of experimental methods in determining the relationship between miRNAs and diseases, computational methods have been proposed. A key to predict potential disease-related miRNA based on networks is the calculation of similarity among diseases and miRNA over the networks. Different strategies lead to different results. In this review, we summarize the existing computational approaches and present the confronted difficulties that help understand the research status. We also discuss the principles, efficiency and differences among these methods. The comprehensive comparison and discussion elucidated in this work provide constructive insights into the matter." }, { "pmid": "23950912", "title": "Prediction of microRNAs associated with human diseases based on weighted k most similar neighbors.", "abstract": "BACKGROUND\nThe identification of human disease-related microRNAs (disease miRNAs) is important for further investigating their involvement in the pathogenesis of diseases. More experimentally validated miRNA-disease associations have been accumulated recently. 
On the basis of these associations, it is essential to predict disease miRNAs for various human diseases. It is useful in providing reliable disease miRNA candidates for subsequent experimental studies.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIt is known that miRNAs with similar functions are often associated with similar diseases and vice versa. Therefore, the functional similarity of two miRNAs has been successfully estimated by measuring the semantic similarity of their associated diseases. To effectively predict disease miRNAs, we calculated the functional similarity by incorporating the information content of disease terms and phenotype similarity between diseases. Furthermore, the members of miRNA family or cluster are assigned higher weight since they are more probably associated with similar diseases. A new prediction method, HDMP, based on weighted k most similar neighbors is presented for predicting disease miRNAs. Experiments validated that HDMP achieved significantly higher prediction performance than existing methods. In addition, the case studies examining prostatic neoplasms, breast neoplasms, and lung neoplasms, showed that HDMP can uncover potential disease miRNA candidates.\n\n\nCONCLUSIONS\nThe superior performance of HDMP can be attributed to the accurate measurement of miRNA functional similarity, the weight assignment based on miRNA family or cluster, and the effective prediction based on weighted k most similar neighbors. The online prediction and analysis tool is freely available at http://nclab.hit.edu.cn/hdmpred." }, { "pmid": "26134276", "title": "Similarity computation strategies in the microRNA-disease network: a survey.", "abstract": "Various microRNAs have been demonstrated to play roles in a number of human diseases. Several microRNA-disease network reconstruction methods have been used to describe the association from a systems biology perspective. The key problem for the network is the similarity computation model. In this article, we reviewed the main similarity computation methods and discussed these methods and future works. This survey may prompt and guide systems biology and bioinformatics researchers to build more perfect microRNA-disease associations and may make the network relationship clear for medical researchers." } ]
Frontiers in Psychology
28293202
PMC5329031
10.3389/fpsyg.2017.00260
Automating Individualized Formative Feedback in Large Classes Based on a Directed Concept Graph
Student learning outcomes within courses form the basis for course completion and time-to-graduation statistics, which are of great importance in education, particularly higher education. Budget pressures have led to large classes in which student-to-instructor interaction is very limited. Most of the current efforts to improve student progress in large classes, such as “learning analytics” (LA), focus on the aspects of student behavior that are found in the logs of Learning Management Systems (LMS), for example, frequency of signing in, time spent on each page, and grades. These are important, but are distant from providing help to the student making insufficient progress in a course. We describe a computer analytical methodology which includes a dissection of the concepts in the course, expressed as a directed graph and applied to test questions, and uses performance on these questions to provide formative feedback to each student in any course format: face-to-face, blended, flipped, or online. Each student receives individualized assistance in a scalable and affordable manner. It works with any class delivery technology, textbook, and learning management system.
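As a rough illustration of the idea sketched in the abstract, the following minimal sketch maps test questions to concepts in a small, hypothetical directed concept graph and turns one student's question-level results into concept-level feedback. The course fragment, threshold, and function names are invented for illustration and do not reflect the authors' actual system.

    # Sketch: individualized feedback from a directed concept graph.
    from collections import defaultdict

    prereq_of = {                      # hypothetical course fragment: concept -> prerequisites
        "loops": ["variables"],
        "functions": ["variables"],
        "recursion": ["functions", "loops"],
    }
    question_concepts = {              # each test question exercises some concepts
        "Q1": ["variables"], "Q2": ["loops"], "Q3": ["functions"], "Q4": ["recursion"],
    }

    def concept_scores(responses):
        """Fraction of questions answered correctly per concept for one student."""
        right, total = defaultdict(int), defaultdict(int)
        for q, correct in responses.items():
            for c in question_concepts[q]:
                total[c] += 1
                right[c] += int(correct)
        return {c: right[c] / total[c] for c in total}

    def feedback(responses, threshold=0.6):
        """Point the student to weak concepts, starting from any weak prerequisites."""
        scores = concept_scores(responses)
        weak = {c for c, s in scores.items() if s < threshold}
        advice = []
        for c in weak:
            weak_prereqs = [p for p in prereq_of.get(c, []) if p in weak]
            advice.append((c, weak_prereqs or ["review this concept directly"]))
        return advice

    print(feedback({"Q1": True, "Q2": False, "Q3": True, "Q4": False}))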
Related works/state of the art
An overview of four categories of approaches to analytical activities that are currently being used on data from educational settings is provided by Piety et al. (2014). Their work provides a conceptual framework for considering these different approaches and provides an overview of the state of the art in each of the four categories. Our work falls primarily into their second category, “Learning Analytics/Educational Data Mining.” Their work identifies the areas of overlap between their four different categories, and a noticeable gap is left by the current approaches in the educational context of individual students in postsecondary education. This gap is the focal area for our current work, and what follows is a description of the state of the art in the Learning Analytics category as it relates to our work.
Log based approaches
Much attention has been paid to using information from Learning Management Systems (LMS) logs and other logs of student activity. These logs are used to flag students who are likely to do poorly in a course and/or make satisfactory progress toward graduation. A survey article in the Chronicle of Higher Education (Blumenstyk, 2014) describes this as “personalized education” but considers the term to be “rather fuzzy.” This area is also often referred to as “learning analytics” (LA). Many tools have been developed to help colleges and universities spot students who are more likely to fail (Blumenstyk, 2014; Rogers et al., 2014). Companies with offerings in this area include Blackboard1, Ellucian2, Starfish Retention Solutions3, and GradesFirst4. The details of what data these companies use are not clear from their web sites, but their services generally appear to utilize LMS logs, gradebooks, the number and time of meetings with tutors and other behavioral information, as well as student grades in previous courses. Dell has partnered with a number of higher education institutions to apply this type of analytics to increase student engagement and retention, such as at Harper College (Dell Inc, 2014). Their model emphasizes pre-enrollment information, such as high school GPA and current employment status. These efforts often produce insight into the progress of the student body as a whole, and into individual students' progress over the semesters, but do not go deeper into an individual student's learning progress within a course.
Approaches based on student decisions
Civitas Learning5 takes a different approach. It emphasizes the need to inform the student regarding the decisions to be made in choosing the school, the major, the career goals, the courses within the school, etc. These are very important decisions, and certainly can be informed by a “predictive analytics platform,” but they are outside an individual course. Ellucian6 describes its “student success” software in much the same way, but in less detail. Starfish Retention Solutions7 also describes its software in much the same way and gathers data from a variety of campus data sources, including the student information system and the learning management system. The orientation, as described, is at the macroscopic level, outside of individual courses. An example given is that when a student fails to choose a major on time, an intervention should be scheduled to assist in student retention.
GradesFirst8 describes its analytics capabilities in terms of tracking individual student course attendance, scheduling tutoring appointments, and other time and behavior management functions.
Course concept based approaches
Products and services from another group of companies promote the achievement of student learning outcomes within courses by adapting the presentation of material in the subject matter to the progress and behavior of individual students. This is sometimes referred to as “adaptive education” or “adaptive learning.” One company, Acrobatiq9, distinguishes between the usual learning analytics and its own approach (Hampson, 2014) and does so in the domain of an online course specifically developed to provide immediate feedback to students. This is an interesting and promising method, but its application appears to be limited by the need to develop a new course, rather than being directly applicable to existing courses. Smart Sparrow10 describes its function as “adaptive learning,” looking at problem areas encountered by each student and personalizing the instructional content for each individual student. The company describes this in terms of having the instructor develop content using their authoring tool, which then allows presentation of the next “page” to be based on previous student responses. This appears to be a modern instantiation of Programmed Instruction (Radcliffe, 2007). WebAssign11 is a popular tool used in math and sciences for administering quizzes, homework, practice exercises, and other assessment instruments. Their new Class Insights product appears to provide instructors with the ability to identify questions and topic areas that are challenging to individual students as well as the class collectively (Benien, 2015). It also provides feedback to students to help them identify ways to redirect their efforts if they are struggling to generate correct answers to questions and problems. Aplia12 provides automated grading services for instructors with feedback intended to help students increase their level of engagement. It creates study plans for students based on how they performed on their quizzes, using a scaffolded learning path that moves students from lower-order thinking skills to higher-order thinking skills. These plans are not shared with the instructors and are for students only. Textbook publishers have been developing technology solutions to enhance their product offerings. CengageNow13 has pre- and post-assessments for chapters that create a personalized study plan for students linked to videos and chapters within the book. Other textbook publishers have a similar approach in their technologies. In contrast, the Cengage MindTap14 platform has an engagement tracker that flags students who are not performing well in the class on quizzes and interaction. This is more focused on providing the instructor with information to intervene. A dozen or so student behaviors and interactions are used to calculate an engagement score for each student in MindTap, including student-generated materials within the content. McGraw Hill also offers adaptive learning technology called LearnSmart15, which focuses on determining students' knowledge and strength areas and adapts content to help students focus their learning efforts on material they do not already know.
It provides reports for both instructors and students to keep them updated on a student's progress in a course. This adaptive learning approach, along with methods to select the path the student should take from one course content segment to the next, is used by many implementations of Adaptive Educational Systems. An example is the Mobile Integrated and Individualized Course (MIIC) system (Brinton et al., 2015), a full presentation platform which includes text, videos, quizzes, and its own social learning network. It is based on a back-end web server and client-device-side software installed on the student's tablet computer. The tests of MIIC used a textbook written by the implementers and so avoided permission concerns. Another service, WileyPLUS with ORION Wiley16, is currently available with two psychology textbooks published by the Wiley textbook company. It appears to use logs and quizzes, along with online access to the textbooks, in following student progress and difficulties. It seems to be the LMS for a single Wiley course/textbook. In this case, no development by the instructor is needed, but one is limited to the textbooks and approach of this publisher.
Shortcomings/limitations of current approaches
What the varied approaches in the first two categories (Log and Student Decision Based Approaches) apparently do not do constitutes a significant omission; the approaches do not provide assistance to students with learning the content within each course. While informing students can improve their decisions, the approaches described in Student Decision Based Approaches impact a macro level of student decision making; the project described here relates to student decision making at a micro level. Providing individual face-to-face support within a course is time-consuming, which makes doing so expensive. The increasing number of large courses is financially driven, so any solution to improve student learning must be cost-effective. Cost is a major limitation of the approaches described in Course Concept Based Approaches. With those approaches, existing instructional content must be adapted to the system, or new instructional content must be developed, essentially constructing a new textbook for the course. That is not a viable option for most individual instructors, forcing them to rely upon the content developed by someone else, such as a textbook publisher. Often, instructors find some aspects of their textbook unsatisfying, and it may be difficult to make modifications when a textbook is integrated within a publisher's software system. The tool proposed in this paper avoids that problem.
[]
[]
PLoS Computational Biology
28282375
PMC5345757
10.1371/journal.pcbi.1005248
A human judgment approach to epidemiological forecasting
Infectious diseases impose considerable burden on society, despite significant advances in technology and medicine over the past century. Advanced warning can be helpful in mitigating and preparing for an impending or ongoing epidemic. Historically, such a capability has lagged for many reasons, including in particular the uncertainty in the current state of the system and in the understanding of the processes that drive epidemic trajectories. Presently we have access to data, models, and computational resources that enable the development of epidemiological forecasting systems. Indeed, several recent challenges hosted by the U.S. government have fostered an open and collaborative environment for the development of these technologies. The primary focus of these challenges has been to develop statistical and computational methods for epidemiological forecasting, but here we consider a serious alternative based on collective human judgment. We created the web-based “Epicast” forecasting system which collects and aggregates epidemic predictions made in real-time by human participants, and with these forecasts we ask two questions: how accurate is human judgment, and how do these forecasts compare to their more computational, data-driven alternatives? To address the former, we assess by a variety of metrics how accurately humans are able to predict influenza and chikungunya trajectories. As for the latter, we show that real-time, combined human predictions of the 2014–2015 and 2015–2016 U.S. flu seasons are often more accurate than the same predictions made by several statistical systems, especially for short-term targets. We conclude that there is valuable predictive power in collective human judgment, and we discuss the benefits and drawbacks of this approach.
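As a rough illustration of the aggregation idea, the sketch below combines several participants' weekly trajectory forecasts with a point-wise median and scores the crowd forecast against observations using mean absolute error. The aggregation rule, the error metric, and all numbers are hypothetical choices for illustration, not Epicast's exact scoring procedure.

    # Sketch: aggregate weekly trajectory forecasts from several participants and score them.
    import numpy as np

    def aggregate_forecasts(forecasts):
        """forecasts: dict of participant -> list of predicted weekly values."""
        matrix = np.array(list(forecasts.values()), dtype=float)
        return np.median(matrix, axis=0)          # crowd forecast, week by week

    def mean_absolute_error(predicted, observed):
        predicted, observed = np.asarray(predicted), np.asarray(observed)
        return float(np.mean(np.abs(predicted - observed)))

    # Hypothetical wILI-like values for four future weeks.
    human = {"p1": [2.1, 2.8, 3.4, 3.0], "p2": [1.9, 2.6, 3.8, 3.3], "p3": [2.3, 3.0, 3.5, 2.9]}
    statistical = [2.5, 3.3, 4.1, 3.6]
    observed = [2.0, 2.9, 3.6, 3.1]

    crowd = aggregate_forecasts(human)
    print("crowd MAE:      ", mean_absolute_error(crowd, observed))
    print("statistical MAE:", mean_absolute_error(statistical, observed))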
Related work
As exemplified by the fields of meteorology and econometrics, statistical and computational models are frequently used to understand, describe, and forecast the evolution of complex dynamical systems [12, 13]. The situation in epidemiological forecasting is no different; data-driven forecasting frameworks have been developed for a variety of tasks [14–16]. To assess accuracy, forecasts are typically compared to pre-defined baselines and to other, often competing, forecasts. The focus has traditionally been on comparisons between data-driven methods, but there has been less work toward understanding the utility of alternative approaches, including those based on human judgment. In addition to developing and applying one such approach, we also provide an intuitive point of reference by contrasting the performance of data-driven and human judgment methods for epidemiological forecasting. Methods based on collective judgment take advantage of the interesting observation that group judgment is generally superior to individual judgment, a phenomenon commonly known as “The Wisdom of Crowds”. This was illustrated over a century ago when Francis Galton showed that a group of common people was collectively able to estimate the weight of an ox to within one percent of its actual weight [17]. Since then, collective judgment has been used to predict outcomes in a number of diverse settings, including, for example, finance, economics, politics, sports, and meteorology [18–20]. A more specific type of collective judgment arises when the participants (whether human or otherwise) are experts: a committee of experts. This approach is common in a variety of settings, for example in artificial intelligence and machine learning in the form of committee machines [21] and ensemble classifiers [22]. More relevant examples of incorporating human judgment in influenza research include prediction markets [23, 24] and other crowd-sourcing methods like Flu Near You [25, 26].
[ "16731270", "9892452", "8604170", "10997211", "27449080", "24714027", "24373466", "22629476", "17173231", "26270299", "26317693", "25401381", "10752360" ]
[ { "pmid": "16731270", "title": "Global and regional burden of disease and risk factors, 2001: systematic analysis of population health data.", "abstract": "BACKGROUND\nOur aim was to calculate the global burden of disease and risk factors for 2001, to examine regional trends from 1990 to 2001, and to provide a starting point for the analysis of the Disease Control Priorities Project (DCPP).\n\n\nMETHODS\nWe calculated mortality, incidence, prevalence, and disability adjusted life years (DALYs) for 136 diseases and injuries, for seven income/geographic country groups. To assess trends, we re-estimated all-cause mortality for 1990 with the same methods as for 2001. We estimated mortality and disease burden attributable to 19 risk factors.\n\n\nFINDINGS\nAbout 56 million people died in 2001. Of these, 10.6 million were children, 99% of whom lived in low-and-middle-income countries. More than half of child deaths in 2001 were attributable to acute respiratory infections, measles, diarrhoea, malaria, and HIV/AIDS. The ten leading diseases for global disease burden were perinatal conditions, lower respiratory infections, ischaemic heart disease, cerebrovascular disease, HIV/AIDS, diarrhoeal diseases, unipolar major depression, malaria, chronic obstructive pulmonary disease, and tuberculosis. There was a 20% reduction in global disease burden per head due to communicable, maternal, perinatal, and nutritional conditions between 1990 and 2001. Almost half the disease burden in low-and-middle-income countries is now from non-communicable diseases (disease burden per head in Sub-Saharan Africa and the low-and-middle-income countries of Europe and Central Asia increased between 1990 and 2001). Undernutrition remains the leading risk factor for health loss. An estimated 45% of global mortality and 36% of global disease burden are attributable to the joint hazardous effects of the 19 risk factors studied. Uncertainty in all-cause mortality estimates ranged from around 1% in high-income countries to 15-20% in Sub-Saharan Africa. Uncertainty was larger for mortality from specific diseases, and for incidence and prevalence of non-fatal outcomes.\n\n\nINTERPRETATION\nDespite uncertainties about mortality and burden of disease estimates, our findings suggest that substantial gains in health have been achieved in most populations, countered by the HIV/AIDS epidemic in Sub-Saharan Africa and setbacks in adult mortality in countries of the former Soviet Union. Our results on major disease, injury, and risk factor causes of loss of health, together with information on the cost-effectiveness of interventions, can assist in accelerating progress towards better health and reducing the persistent differentials in health between poor and rich countries." }, { "pmid": "9892452", "title": "Trends in infectious disease mortality in the United States during the 20th century.", "abstract": "CONTEXT\nRecent increases in infectious disease mortality and concern about emerging infections warrant an examination of longer-term trends.\n\n\nOBJECTIVE\nTo describe trends in infectious disease mortality in the United States during the 20th century.\n\n\nDESIGN AND SETTING\nDescriptive study of infectious disease mortality in the United States. Deaths due to infectious diseases from 1900 to 1996 were tallied by using mortality tables. 
Trends in age-specific infectious disease mortality were examined by using age-specific death rates for 9 common infectious causes of death.\n\n\nSUBJECTS\nPersons who died in the United States between 1900 and 1996.\n\n\nMAIN OUTCOME MEASURES\nCrude and age-adjusted mortality rates.\n\n\nRESULTS\nInfectious disease mortality declined during the first 8 decades of the 20th century from 797 deaths per 100000 in 1900 to 36 deaths per 100000 in 1980. From 1981 to 1995, the mortality rate increased to a peak of 63 deaths per 100000 in 1995 and declined to 59 deaths per 100000 in 1996. The decline was interrupted by a sharp spike in mortality caused by the 1918 influenza epidemic. From 1938 to 1952, the decline was particularly rapid, with mortality decreasing 8.2% per year. Pneumonia and influenza were responsible for the largest number of infectious disease deaths throughout the century. Tuberculosis caused almost as many deaths as pneumonia and influenza early in the century, but tuberculosis mortality dropped off sharply after 1945. Infectious disease mortality increased in the 1980s and early 1990s in persons aged 25 years and older and was mainly due to the emergence of the acquired immunodeficiency syndrome (AIDS) in 25- to 64-year-olds and, to a lesser degree, to increases in pneumonia and influenza deaths among persons aged 65 years and older. There was considerable year-to-year variability in infectious disease mortality, especially for the youngest and oldest age groups.\n\n\nCONCLUSIONS\nAlthough most of the 20th century has been marked by declining infectious disease mortality, substantial year-to-year variation as well as recent increases emphasize the dynamic nature of infectious diseases and the need for preparedness to address them." }, { "pmid": "8604170", "title": "Trends in infectious diseases mortality in the United States.", "abstract": "OBJECTIVE\nTo evaluate recent trends in infectious diseases mortality in the United States.\n\n\nDESIGN\nDescriptive study of infectious disease mortality, classifying International Classification of Diseases, Ninth Revision codes as infectious diseases, consequence of infectious diseases, or not infectious diseases. Multiple cause-of-death tapes from the National Center for Health Statistics for the years 1980 through 1992 were used, with a focus on underlying cause-of-death data and on codes that exclusively represent infectious diseases.\n\n\nSETTING\nUnited States.\n\n\nSUBJECTS\nAll persons who died between 1980 and 1992.\n\n\nMAIN OUTCOME MEASURE\nDeath.\n\n\nRESULTS\nBetween 1980 and 1992, the death rate due to infectious diseases as the underlying cause of death increased 58%, from 41 to 65 deaths per 100,000 population in the United States. Age-adjusted mortality from infectious diseases increased 39% during the same period. Infectious diseases mortality increased 25% among those aged 65 years and older (from 271 to 338 per 100,000), and 6.3 times among 25- to 44-year-olds (from six to 38 deaths per 100,000). Mortality due to respiratory tract infections increased 20%, from 25 to 30 deaths per 100,000, deaths attributed to human immunodeficiency virus increased from virtually none to 13 per 100,000 in 1992, and the rate of death due to septicemia increased 83% from 4.2 to 7.7 per 100,000.\n\n\nCONCLUSIONS\nDespite historical predictions that infectious diseases would wane in the United States, these data show that infectious diseases mortality in the United States has been increasing in recent years." 
}, { "pmid": "10997211", "title": "Forecasting disease risk for increased epidemic preparedness in public health.", "abstract": "Emerging infectious diseases pose a growing threat to human populations. Many of the world's epidemic diseases (particularly those transmitted by intermediate hosts) are known to be highly sensitive to long-term changes in climate and short-term fluctuations in the weather. The application of environmental data to the study of disease offers the capability to demonstrate vector-environment relationships and potentially forecast the risk of disease outbreaks or epidemics. Accurate disease forecasting models would markedly improve epidemic prevention and control capabilities. This chapter examines the potential for epidemic forecasting and discusses the issues associated with the development of global networks for surveillance and prediction. Existing global systems for epidemic preparedness focus on disease surveillance using either expert knowledge or statistical modelling of disease activity and thresholds to identify times and areas of risk. Predictive health information systems would use monitored environmental variables, linked to a disease system, to be observed and provide prior information of outbreaks. The components and varieties of forecasting systems are discussed with selected examples, along with issues relating to further development." }, { "pmid": "27449080", "title": "Results from the centers for disease control and prevention's predict the 2013-2014 Influenza Season Challenge.", "abstract": "BACKGROUND\nEarly insights into the timing of the start, peak, and intensity of the influenza season could be useful in planning influenza prevention and control activities. To encourage development and innovation in influenza forecasting, the Centers for Disease Control and Prevention (CDC) organized a challenge to predict the 2013-14 Unites States influenza season.\n\n\nMETHODS\nChallenge contestants were asked to forecast the start, peak, and intensity of the 2013-2014 influenza season at the national level and at any or all Health and Human Services (HHS) region level(s). The challenge ran from December 1, 2013-March 27, 2014; contestants were required to submit 9 biweekly forecasts at the national level to be eligible. The selection of the winner was based on expert evaluation of the methodology used to make the prediction and the accuracy of the prediction as judged against the U.S. Outpatient Influenza-like Illness Surveillance Network (ILINet).\n\n\nRESULTS\nNine teams submitted 13 forecasts for all required milestones. The first forecast was due on December 2, 2013; 3/13 forecasts received correctly predicted the start of the influenza season within one week, 1/13 predicted the peak within 1 week, 3/13 predicted the peak ILINet percentage within 1 %, and 4/13 predicted the season duration within 1 week. For the prediction due on December 19, 2013, the number of forecasts that correctly forecasted the peak week increased to 2/13, the peak percentage to 6/13, and the duration of the season to 6/13. As the season progressed, the forecasts became more stable and were closer to the season milestones.\n\n\nCONCLUSION\nForecasting has become technically feasible, but further efforts are needed to improve forecast accuracy so that policy makers can reliably use these predictions. CDC and challenge contestants plan to build upon the methods developed during this contest to improve the accuracy of influenza forecasts." 
}, { "pmid": "24714027", "title": "Influenza forecasting in human populations: a scoping review.", "abstract": "Forecasts of influenza activity in human populations could help guide key preparedness tasks. We conducted a scoping review to characterize these methodological approaches and identify research gaps. Adapting the PRISMA methodology for systematic reviews, we searched PubMed, CINAHL, Project Euclid, and Cochrane Database of Systematic Reviews for publications in English since January 1, 2000 using the terms \"influenza AND (forecast* OR predict*)\", excluding studies that did not validate forecasts against independent data or incorporate influenza-related surveillance data from the season or pandemic for which the forecasts were applied. We included 35 publications describing population-based (N = 27), medical facility-based (N = 4), and regional or global pandemic spread (N = 4) forecasts. They included areas of North America (N = 15), Europe (N = 14), and/or Asia-Pacific region (N = 4), or had global scope (N = 3). Forecasting models were statistical (N = 18) or epidemiological (N = 17). Five studies used data assimilation methods to update forecasts with new surveillance data. Models used virological (N = 14), syndromic (N = 13), meteorological (N = 6), internet search query (N = 4), and/or other surveillance data as inputs. Forecasting outcomes and validation metrics varied widely. Two studies compared distinct modeling approaches using common data, 2 assessed model calibration, and 1 systematically incorporated expert input. Of the 17 studies using epidemiological models, 8 included sensitivity analysis. This review suggests need for use of good practices in influenza forecasting (e.g., sensitivity analysis); direct comparisons of diverse approaches; assessment of model calibration; integration of subjective expert input; operational research in pilot, real-world applications; and improved mutual understanding among modelers and public health officials." }, { "pmid": "24373466", "title": "A systematic review of studies on forecasting the dynamics of influenza outbreaks.", "abstract": "Forecasting the dynamics of influenza outbreaks could be useful for decision-making regarding the allocation of public health resources. Reliable forecasts could also aid in the selection and implementation of interventions to reduce morbidity and mortality due to influenza illness. This paper reviews methods for influenza forecasting proposed during previous influenza outbreaks and those evaluated in hindsight. We discuss the various approaches, in addition to the variability in measures of accuracy and precision of predicted measures. PubMed and Google Scholar searches for articles on influenza forecasting retrieved sixteen studies that matched the study criteria. We focused on studies that aimed at forecasting influenza outbreaks at the local, regional, national, or global level. The selected studies spanned a wide range of regions including USA, Sweden, Hong Kong, Japan, Singapore, United Kingdom, Canada, France, and Cuba. The methods were also applied to forecast a single measure or multiple measures. Typical measures predicted included peak timing, peak height, daily/weekly case counts, and outbreak magnitude. Due to differences in measures used to assess accuracy, a single estimate of predictive error for each of the measures was difficult to obtain. 
However, collectively, the results suggest that these diverse approaches to influenza forecasting are capable of capturing specific outbreak measures with some degree of accuracy given reliable data and correct disease assumptions. Nonetheless, several of these approaches need to be evaluated and their performance quantified in real-time predictions." }, { "pmid": "22629476", "title": "Surveillance of dengue fever virus: a review of epidemiological models and early warning systems.", "abstract": "Dengue fever affects over a 100 million people annually hence is one of the world's most important vector-borne diseases. The transmission area of this disease continues to expand due to many direct and indirect factors linked to urban sprawl, increased travel and global warming. Current preventative measures include mosquito control programs, yet due to the complex nature of the disease and the increased importation risk along with the lack of efficient prophylactic measures, successful disease control and elimination is not realistic in the foreseeable future. Epidemiological models attempt to predict future outbreaks using information on the risk factors of the disease. Through a systematic literature review, this paper aims at analyzing the different modeling methods and their outputs in terms of acting as an early warning system. We found that many previous studies have not sufficiently accounted for the spatio-temporal features of the disease in the modeling process. Yet with advances in technology, the ability to incorporate such information as well as the socio-environmental aspect allowed for its use as an early warning system, albeit limited geographically to a local scale." }, { "pmid": "17173231", "title": "Use of prediction markets to forecast infectious disease activity.", "abstract": "Prediction markets have accurately forecasted the outcomes of a wide range of future events, including sales of computer printers, elections, and the Federal Reserve's decisions about interest rates. We propose that prediction markets may be useful for tracking and forecasting emerging infectious diseases, such as severe acute respiratory syndrome and avian influenza, by aggregating expert opinion quickly, accurately, and inexpensively. Data from a pilot study in the state of Iowa suggest that these markets can accurately predict statewide seasonal influenza activity 2-4 weeks in advance by using clinical data volunteered from participating health care workers. Information revealed by prediction markets may help to inform treatment, prevention, and policy decisions. Also, these markets could help to refine existing surveillance systems." }, { "pmid": "26270299", "title": "Flu Near You: Crowdsourced Symptom Reporting Spanning 2 Influenza Seasons.", "abstract": "OBJECTIVES\nWe summarized Flu Near You (FNY) data from the 2012-2013 and 2013-2014 influenza seasons in the United States.\n\n\nMETHODS\nFNY collects limited demographic characteristic information upon registration, and prompts users each Monday to report symptoms of influenza-like illness (ILI) experienced during the previous week. We calculated the descriptive statistics and rates of ILI for the 2012-2013 and 2013-2014 seasons. We compared raw and noise-filtered ILI rates with ILI rates from the Centers for Disease Control and Prevention ILINet surveillance system.\n\n\nRESULTS\nMore than 61 000 participants submitted at least 1 report during the 2012-2013 season, totaling 327 773 reports. 
Nearly 40 000 participants submitted at least 1 report during the 2013-2014 season, totaling 336 933 reports. Rates of ILI as reported by FNY tracked closely with ILINet in both timing and magnitude.\n\n\nCONCLUSIONS\nWith increased participation, FNY has the potential to serve as a viable complement to existing outpatient, hospital-based, and laboratory surveillance systems. Although many established systems have the benefits of specificity and credibility, participatory systems offer advantages in the areas of speed, sensitivity, and scalability." }, { "pmid": "26317693", "title": "Flexible Modeling of Epidemics with an Empirical Bayes Framework.", "abstract": "Seasonal influenza epidemics cause consistent, considerable, widespread loss annually in terms of economic burden, morbidity, and mortality. With access to accurate and reliable forecasts of a current or upcoming influenza epidemic's behavior, policy makers can design and implement more effective countermeasures. This past year, the Centers for Disease Control and Prevention hosted the \"Predict the Influenza Season Challenge\", with the task of predicting key epidemiological measures for the 2013-2014 U.S. influenza season with the help of digital surveillance data. We developed a framework for in-season forecasts of epidemics using a semiparametric Empirical Bayes framework, and applied it to predict the weekly percentage of outpatient doctors visits for influenza-like illness, and the season onset, duration, peak time, and peak height, with and without using Google Flu Trends data. Previous work on epidemic modeling has focused on developing mechanistic models of disease behavior and applying time series tools to explain historical data. However, tailoring these models to certain types of surveillance data can be challenging, and overly complex models with many parameters can compromise forecasting ability. Our approach instead produces possibilities for the epidemic curve of the season of interest using modified versions of data from previous seasons, allowing for reasonable variations in the timing, pace, and intensity of the seasonal epidemics, as well as noise in observations. Since the framework does not make strict domain-specific assumptions, it can easily be applied to some other diseases with seasonal epidemics. This method produces a complete posterior distribution over epidemic curves, rather than, for example, solely point predictions of forecasting targets. We report prospective influenza-like-illness forecasts made for the 2013-2014 U.S. influenza season, and compare the framework's cross-validated prediction error on historical data to that of a variety of simpler baseline predictors." }, { "pmid": "25401381", "title": "Algorithm aversion: people erroneously avoid algorithms after seeing them err.", "abstract": "Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. 
In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human." }, { "pmid": "10752360", "title": "Clinical versus mechanical prediction: a meta-analysis.", "abstract": "The process of making judgments and decisions requires a method for combining data. To compare the accuracy of clinical and mechanical (formal, statistical) data-combination techniques, we performed a meta-analysis on studies of human health and behavior. On average, mechanical-prediction techniques were about 10% more accurate than clinical predictions. Depending on the specific analysis, mechanical prediction substantially outperformed clinical prediction in 33%-47% of studies examined. Although clinical predictions were often as accurate as mechanical predictions, in only a few studies (6%-16%) were they substantially more accurate. Superiority for mechanical-prediction techniques was consistent, regardless of the judgment task, type of judges, judges' amounts of experience, or the types of data being combined. Clinical predictions performed relatively less well when predictors included clinical interview data. These data indicate that mechanical predictions of human behaviors are equal or superior to clinical prediction methods for a wide range of circumstances." } ]
JMIR mHealth and uHealth
28246070
PMC5350460
10.2196/mhealth.6395
Accuracy and Adoption of Wearable Technology Used by Active Citizens: A Marathon Event Field Study
Background: Today, runners use wearable technology such as global positioning system (GPS)–enabled sport watches to track and optimize their training activities, for example, when participating in a road race event. For this purpose, an increasing number of low-priced, consumer-oriented wearable devices is available. However, the variety of such devices is overwhelming. It is unclear which devices are used by active, healthy citizens and whether they can provide accurate tracking results in a diverse study population. No published literature has yet assessed the dissemination of wearable technology in such a cohort and related influencing factors. Objective: The aim of this study was 2-fold: (1) to determine the adoption of wearable technology by runners, especially “smart” devices, and (2) to investigate the accuracy of tracked distances as recorded by such devices. Methods: A pre-race survey was applied to assess which wearable technology was predominantly used by runners of different age, sex, and fitness level. A post-race survey was conducted to determine the accuracy of the devices that tracked the running course. Logistic regression analysis was used to investigate whether age, sex, fitness level, or track distance were influencing factors. Recorded distances of different device categories were tested against each other with a 2-sample t test. Results: A total of 898 pre-race and 262 post-race surveys were completed. Most of the participants (approximately 75%) used wearable technology for training optimization and distance recording. Females (P=.02) and runners in higher age groups (50-59 years: P=.03; 60-69 years: P<.001; 70-79 years: P=.004) were less likely to use wearables. The mean recorded track distances of mobile phones with a combined app (mean absolute error, MAE=0.35 km) and of GPS-enabled sport watches (MAE=0.12 km) differed significantly (P=.002) for the half-marathon event. Conclusions: A great variety of vendors (n=36) and devices (n=156) was identified. Under real-world conditions, GPS-enabled devices, especially sport watches and mobile phones, were found to be accurate in terms of recorded course distances.
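The following sketch illustrates, on invented data, the kind of analysis reported here: per-category mean absolute error of recorded distances, a two-sample t test between device categories (Welch's variant is assumed), and a logistic regression of wearable adoption on age, sex, and fitness level. All values, sample sizes, and column layouts are hypothetical and not taken from the study.

    # Sketch: device-category accuracy comparison and adoption model on hypothetical data.
    import numpy as np
    from scipy import stats
    from sklearn.linear_model import LogisticRegression

    COURSE_KM = 21.0975                      # official half-marathon distance

    # Hypothetical recorded distances (km) for two device categories.
    watch_km = np.array([21.18, 21.22, 21.15, 21.30, 21.25])
    phone_km = np.array([21.45, 21.60, 20.90, 21.75, 21.50])

    mae_watch = np.mean(np.abs(watch_km - COURSE_KM))
    mae_phone = np.mean(np.abs(phone_km - COURSE_KM))
    t_stat, p_value = stats.ttest_ind(watch_km, phone_km, equal_var=False)
    print(f"MAE watch={mae_watch:.2f} km, MAE phone={mae_phone:.2f} km, p={p_value:.3f}")

    # Hypothetical survey rows: [age, sex (1=female), fitness level 1-5] -> uses a wearable?
    X = np.array([[25, 0, 4], [34, 1, 3], [58, 1, 2], [67, 0, 2], [45, 0, 5], [72, 1, 1]])
    y = np.array([1, 1, 0, 0, 1, 0])
    model = LogisticRegression().fit(X, y)
    print("coefficients (age, sex, fitness):", model.coef_[0])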
Related Work
According to Düking et al [4], wearables “are lightweight, sensor-based devices that are worn close to or on the surface of the skin, where they detect, analyze, and transmit information concerning several internal and external variables to an external device (...)” (p. 2). In particular, GPS-enabled devices can be considered reliable tracking devices, which holds true even for inexpensive systems. As a study conducted by Pugliese et al suggests, the increasing use of wearables among consumers has implications for public health. Monitoring an individual’s personal activity level, for example, steps taken in one day, can result in increased overall physical activity [5]. A moderate level of physical activity can prevent widespread diseases such as diabetes or hypertension [6-8] and thus result in decreasing costs for public health care systems in the long term [9,10]. Yet, in the context of the quantified-self movement, a high accuracy of these consumer-centric devices is desirable. In theory, the measurements obtained by different vendors and device categories (ie, GPS-enabled system vs accelerometer-based) should be comparable with each other [11]. Noah et al studied the reliability and validity of 2 Fitbit (Fitbit, San Francisco, CA) activity trackers with 23 participants. There seems to be evidence that these particular devices produce results “valid for activity monitoring” [12]. A study by Ferguson et al evaluated several consumer-level activity monitors [13]. The findings suggested the validity of fitness trackers with respect to measurement of steps; however, their study population was limited to 21 young adults. At present, and to the best of our knowledge, no study exists that examines the adoption of consumer-level devices in a broad and diverse population. This is supported by the meta-analysis by Evenson et al: “Exploring the measurement properties of the trackers in a wide variety of populations would also be important in both laboratory and field settings.” We conclude that “more field-based studies are needed” (p. 20) [3]. In particular, this should include all age groups, different fitness levels, and a great variety of related devices.
[ "25592201", "26858649", "2144946", "3555525", "10593542", "12900704", "24777201", "24007317", "22969321", "17720623", "10993413", "27068022", "25789630", "24268570", "26950687", "24497157", "25973205", "9152686", "19887012", "23812857", "14719979" ]
[ { "pmid": "25592201", "title": "Waste the waist: a pilot randomised controlled trial of a primary care based intervention to support lifestyle change in people with high cardiovascular risk.", "abstract": "BACKGROUND\nIn the UK, thousands of people with high cardiovascular risk are being identified by a national risk-assessment programme (NHS Health Checks). Waste the Waist is an evidence-informed, theory-driven (modified Health Action Process Approach), group-based intervention designed to promote healthy eating and physical activity for people with high cardiovascular risk. This pilot randomised controlled trial aimed to assess the feasibility of delivering the Waste the Waist intervention in UK primary care and of conducting a full-scale randomised controlled trial. We also conducted exploratory analyses of changes in weight.\n\n\nMETHODS\nPatients aged 40-74 with a Body Mass Index of 28 or more and high cardiovascular risk were identified from risk-assessment data or from practice database searches. Participants were randomised, using an online computerised randomisation algorithm, to receive usual care and standardised information on cardiovascular risk and lifestyle (Controls) or nine sessions of the Waste the Waist programme (Intervention). Group allocation was concealed until the point of randomisation. Thereafter, the statistician, but not participants or data collectors were blinded to group allocation. Weight, physical activity (accelerometry) and cardiovascular risk markers (blood tests) were measured at 0, 4 and 12 months.\n\n\nRESULTS\n108 participants (22% of those approached) were recruited (55 intervention, 53 controls) from 6 practices and 89% provided data at both 4 and 12 months. Participants had a mean age of 65 and 70% were male. Intervention participants attended 72% of group sessions. Based on last observations carried forward, the intervention group did not lose significantly more weight than controls at 12 months, although the difference was significant when co-interventions and co-morbidities that could affect weight were taken into account (Mean Diff 2.6Kg. 95%CI: -4.8 to -0.3, p = 0.025). No significant differences were found in physical activity.\n\n\nCONCLUSIONS\nThe Waste the Waist intervention is deliverable in UK primary care, has acceptable recruitment and retention rates and produces promising preliminary weight loss results. Subject to refinement of the physical activity component, it is now ready for evaluation in a full-scale trial.\n\n\nTRIAL REGISTRATION\nCurrent Controlled Trials ISRCTN10707899 ." }, { "pmid": "26858649", "title": "Extracellular Cysteine in Connexins: Role as Redox Sensors.", "abstract": "Connexin-based channels comprise hemichannels and gap junction channels. The opening of hemichannels allow for the flux of ions and molecules from the extracellular space into the cell and vice versa. Similarly, the opening of gap junction channels permits the diffusional exchange of ions and molecules between the cytoplasm and contacting cells. The controlled opening of hemichannels has been associated with several physiological cellular processes; thereby unregulated hemichannel activity may induce loss of cellular homeostasis and cell death. Hemichannel activity can be regulated through several mechanisms, such as phosphorylation, divalent cations and changes in membrane potential. Additionally, it was recently postulated that redox molecules could modify hemichannels properties in vitro. 
However, the molecular mechanism by which redox molecules interact with hemichannels is poorly understood. In this work, we discuss the current knowledge on connexin redox regulation and we propose the hypothesis that extracellular cysteines could be important for sensing changes in redox potential. Future studies on this topic will offer new insight into hemichannel function, thereby expanding the understanding of the contribution of hemichannels to disease progression." }, { "pmid": "2144946", "title": "A meta-analysis of physical activity in the prevention of coronary heart disease.", "abstract": "Evidence for an independent role of increased physical activity in the primary prevention of coronary heart disease has grown in recent years. The authors apply the techniques of meta-analysis to data extracted from the published literature by Powell et al. (Ann Rev Public Health 1987;8:253-87), as well as more recent studies addressing this relation, in order to make formal quantitative statements and to explore features of study design that influence the observed relation between physical activity and coronary heart disease risk. They find, for example, a summary relative risk of death from coronary heart disease of 1.9 (95% confidence interval 1.6-2.2) for sedentary compared with active occupations. The authors also find that methodologically stronger studies tend to show a larger benefit of physical activity than less well-designed studies." }, { "pmid": "3555525", "title": "Physical activity and the incidence of coronary heart disease.", "abstract": "Our review focuses on all articles in the English language that provide sufficient data to calculate a relative risk or odds ratio for CHD at different levels of physical activity. The inverse association between physical activity and incidence of CHD is consistently observed, especially in the better designed studies; this association is appropriately sequenced, biologically graded, plausible, and coherent with existing knowledge. Therefore, the observations reported in the literature support the inference that physical activity is inversely and causally related to the incidence of CHD. The two most important observations in this review are, first, better studies have been more likely than poorer studies to report an inverse association between physical activity and the incidence of CHD and, second, the relative risk of inactivity appears to be similar in magnitude to that of hypertension, hypercholesterolemia, and smoking. These observations suggest that in CHD prevention programs, regular physical activity should be promoted as vigorously as blood pressure control, dietary modification to lower serum cholesterol, and smoking cessation. Given the large proportion of sedentary persons in the United States (91), the incidence of CHD attributable to insufficient physical activity is likely to be surprisingly large. Therefore, public policy that encourages regular physical activity should be pursued." }, { "pmid": "10593542", "title": "Economic costs of obesity and inactivity.", "abstract": "PURPOSE\nThe purpose of this paper is to assess the economic costs of inactivity (including those attributable to obesity). These costs represent one summary of the public health impact of increasingly sedentary populations in countries with established market economies. 
Components of the costs of illness include direct costs resulting from treatment of morbidity and indirect costs caused by lost productivity (work days lost) and forgone earnings caused by premature mortality.\n\n\nMETHODS\nWe searched the Medline database for studies reporting the economic costs of obesity or inactivity, or cost of illness. From the identified references those relating to obesity or conditions attributable to obesity were reviewed. Chronic conditions related to inactivity include coronary heart disease (CHD), hypertension, Type II diabetes, colon cancer, depression and anxiety, osteoporotic hip fractures, and also obesity. Increasing adiposity, or obesity, is itself a direct cause of Type II diabetes, hypertension, CHD, gallbladder disease, osteoarthritis and cancer of the breast, colon, and endometrium. The most up-to-date estimates were extracted. To estimate the proportion of disease that could be prevented by eliminating inactivity or obesity we calculated the population-attributable risk percent. Prevalence based cost of illness for the U.S. is in 1995 dollars.\n\n\nRESULTS\nThe direct costs of lack of physical activity, defined conservatively as absence of leisure-time physical activity, are approximately 24 billion dollars or 2.4% of the U.S. health care expenditures. Direct costs for obesity defined as body mass index greater than 30, in 1995 dollars, total 70 billion dollars. These costs are independent of those resulting from lack of activity.\n\n\nCONCLUSION\nOverall, the direct costs of inactivity and obesity account for some 9.4% of the national health care expenditures in the United States. Inactivity, with its wide range of health consequences, represents a major avoidable contribution to the costs of illness in the United States and other countries with modern lifestyles that have replaced physical labor with sedentary occupations and motorized transportation." }, { "pmid": "12900704", "title": "Validity of 10 electronic pedometers for measuring steps, distance, and energy cost.", "abstract": "PURPOSE\nThis study examined the effects of walking speed on the accuracy and reliability of 10 pedometers: Yamasa Skeletone (SK), Sportline 330 (SL330) and 345 (SL345), Omron (OM), Yamax Digiwalker SW-701 (DW), Kenz Lifecorder (KZ), New Lifestyles 2000 (NL), Oregon Scientific (OR), Freestyle Pacer Pro (FR), and Walk4Life LS 2525 (WL).\n\n\nMETHODS\nTen subjects (33 +/- 12 yr) walked on a treadmill at various speeds (54, 67, 80, 94, and 107 m x min-1) for 5-min stages. Simultaneously, an investigator determined steps by a hand counter and energy expenditure (kcal) by indirect calorimetry. Each brand was measured on the right and left sides.\n\n\nRESULTS\nCorrelation coefficients between right and left sides exceeded 0.81 for all pedometers except OR (0.76) and SL345 (0.57). Most pedometers underestimated steps at 54 m x min-1, but accuracy for step counting improved at faster speeds. At 80 m x min-1 and above, six models (SK, OM, DW, KZ, NL, and WL) gave mean values that were within +/- 1% of actual steps. Six pedometers displayed the distance traveled. Most of them estimated mean distance to within +/- 10% at 80 m x min-1 but overestimated distance at slower speeds and underestimated distance at faster speeds. Eight pedometers displayed kilocalories, but except for KZ and NL, it is unclear whether this should reflect net or gross kilocalories. If one assumes they display net kilocalories, the general trend was an overestimation of kilocalories at every speed. 
If one assumes they display gross kilocalorie, then seven of the eight pedometers were accurate to within +/-30% at all speeds.\n\n\nCONCLUSION\nIn general, pedometers are most accurate for assessing steps, less accurate for assessing distance, and even less accurate for assessing kilocalories." }, { "pmid": "24777201", "title": "Validity of consumer-based physical activity monitors.", "abstract": "BACKGROUND\nMany consumer-based monitors are marketed to provide personal information on the levels of physical activity and daily energy expenditure (EE), but little or no information is available to substantiate their validity.\n\n\nPURPOSE\nThis study aimed to examine the validity of EE estimates from a variety of consumer-based, physical activity monitors under free-living conditions.\n\n\nMETHODS\nSixty (26.4 ± 5.7 yr) healthy males (n = 30) and females (n = 30) wore eight different types of activity monitors simultaneously while completing a 69-min protocol. The monitors included the BodyMedia FIT armband worn on the left arm, the DirectLife monitor around the neck, the Fitbit One, the Fitbit Zip, and the ActiGraph worn on the belt, as well as the Jawbone Up and Basis B1 Band monitor on the wrist. The validity of the EE estimates from each monitor was evaluated relative to criterion values concurrently obtained from a portable metabolic system (i.e., Oxycon Mobile). Differences from criterion measures were expressed as a mean absolute percent error and were evaluated using 95% equivalence testing.\n\n\nRESULTS\nFor overall group comparisons, the mean absolute percent error values (computed as the average absolute value of the group-level errors) were 9.3%, 10.1%, 10.4%, 12.2%, 12.6%, 12.8%, 13.0%, and 23.5% for the BodyMedia FIT, Fitbit Zip, Fitbit One, Jawbone Up, ActiGraph, DirectLife, NikeFuel Band, and Basis B1 Band, respectively. The results from the equivalence testing showed that the estimates from the BodyMedia FIT, Fitbit Zip, and NikeFuel Band (90% confidence interval = 341.1-359.4) were each within the 10% equivalence zone around the indirect calorimetry estimate.\n\n\nCONCLUSIONS\nThe indicators of the agreement clearly favored the BodyMedia FIT armband, but promising preliminary findings were also observed with the Fitbit Zip." }, { "pmid": "24007317", "title": "Comparison of steps and energy expenditure assessment in adults of Fitbit Tracker and Ultra to the Actical and indirect calorimetry.", "abstract": "Epidemic levels of inactivity are associated with chronic diseases and rising healthcare costs. To address this, accelerometers have been used to track levels of activity. The Fitbit and Fitbit Ultra are some of the newest commercially available accelerometers. The purpose of this study was to determine the reliability and validity of the Fitbit and Fitbit Ultra. Twenty-three subjects were fitted with two Fitbit and Fitbit Ultra accelerometers, two industry-standard accelerometers and an indirect calorimetry device. Subjects participated in 6-min bouts of treadmill walking, jogging and stair stepping. Results indicate the Fitbit and Fitbit Ultra are reliable and valid for activity monitoring (step counts) and determining energy expenditure while walking and jogging without an incline. The Fitbit and standard accelerometers under-estimated energy expenditure compared to indirect calorimetry for inclined activities. These data suggest the Fitbit and Fitbit Ultra are reliable and valid for monitoring over-ground energy expenditure." 
}, { "pmid": "22969321", "title": "Enhancing positioning accuracy in urban terrain by fusing data from a GPS receiver, inertial sensors, stereo-camera and digital maps for pedestrian navigation.", "abstract": "The paper presents an algorithm for estimating a pedestrian location in an urban environment. The algorithm is based on the particle filter and uses different data sources: a GPS receiver, inertial sensors, probability maps and a stereo camera. Inertial sensors are used to estimate a relative displacement of a pedestrian. A gyroscope estimates a change in the heading direction. An accelerometer is used to count a pedestrian's steps and their lengths. The so-called probability maps help to limit GPS inaccuracy by imposing constraints on pedestrian kinematics, e.g., it is assumed that a pedestrian cannot cross buildings, fences etc. This limits position inaccuracy to ca. 10 m. Incorporation of depth estimates derived from a stereo camera that are compared to the 3D model of an environment has enabled further reduction of positioning errors. As a result, for 90% of the time, the algorithm is able to estimate a pedestrian location with an error smaller than 2 m, compared to an error of 6.5 m for a navigation based solely on GPS." }, { "pmid": "17720623", "title": "Physiological limits to exercise performance in the heat.", "abstract": "Exercise in the heat results in major alterations in cardiovascular, thermoregulatory, metabolic and neuromuscular function. Hyperthermia appears to be the key determinant of exercise performance in the heat. Thus, strategies that attenuate the rise in core temperature contribute to enhanced exercise performance. These include heat acclimatization, pre-exercise cooling and fluid ingestion which have all been shown to result in reduced physiological and psychophysical strain during exercise in the heat and improved performance." }, { "pmid": "10993413", "title": "Validity of accelerometry for the assessment of moderate intensity physical activity in the field.", "abstract": "PURPOSE\nThis study was undertaken to examine the validity of accelerometry in assessing moderate intensity physical activity in the field and to evaluate the metabolic cost of various recreational and household activities.\n\n\nMETHODS\nTwenty-five subjects completed four bouts of overground walking at a range of self-selected speeds, played two holes of golf, and performed indoor (window washing, dusting, vacuuming) and outdoor (lawn mowing, planting shrubs) household tasks. Energy expenditure was measured using a portable metabolic system, and motion was recorded using a Yamax Digiwalker pedometer (walking only), a Computer Science and Application, Inc. (CSA) accelerometer, and a Tritrac accelerometer. Correlations between accelerometer counts and energy cost were examined. In addition, individual equations to predict METs from counts were developed from the walking data and applied to the other activities to compare the relationships between counts and energy cost.\n\n\nRESULTS\nObserved MET levels differed from values reported in the Compendium of Physical Activities, although all activities fell in the moderate intensity range. Relationships between counts and METs were stronger for walking (CSA, r = 0.77; Tritrac, r = 0.89) than for all activities combined (CSA, r = 0.59; Tritrac, r = 0.62). 
Metabolic costs of golf and the household activities were underestimated by 30-60% based on the equations derived from level walking.\n\n\nCONCLUSION\nThe count versus METs relationship for accelerometry was found to be dependent on the type of activity performed, which may be due to the inability of accelerometers to detect increased energy cost from upper body movement, load carriage, or changes in surface or terrain. This may introduce error in attempts to use accelerometry to assess point estimates of physical activity energy expenditure in free-living situations." }, { "pmid": "27068022", "title": "Validation of the Fitbit One® for physical activity measurement at an upper torso attachment site.", "abstract": "BACKGROUND\nThe upper torso is recommended as an attachment site for the Fitbit One(®), one of the most common wireless physical activity trackers in the consumer market, and could represent a viable alternative to wrist- and hip-attachment sites. The objective of this study was to provide evidence concerning the validity of the Fitbit One(®) attached to the upper torso for measuring step counts and energy expenditure among female adults.\n\n\nRESULTS\nThirteen female adults completed a four-phase treadmill exercise protocol (1.9, 3.0, 4.0, and 5.2 mph). Participants were fitted with three Fitbit(®) trackers (two Fitbit One(®) trackers: one on the upper torso, one on the hip; and a wrist-based Fitbit Flex(®)). Steps were assessed by manual counting of a video recording. Energy expenditure was measured by gas exchange indirect calorimetry. Concordance correlation coefficients of Fitbit-estimated step counts to observed step counts for the upper torso-attached Fitbit One(®), hip-attached Fitbit One(®) and wrist-attached Fitbit Flex(®) were 0.98 (95% CI 0.97-0.99), 0.99 (95% CI 0.99-0.99), and 0.75 (95% CI 0.70-0.79), respectively. The percent error for step count estimates from the upper torso attachment site was ≤3% for all walking and running speeds. Upper torso step count estimates showed similar accuracy relative to hip attachment of the Fitbit One(®) and were more accurate than the wrist-based Fitbit Flex(®). Similar results were obtained for energy expenditure estimates. Energy expenditure estimates for the upper torso attachment site yielded relative percent errors that ranged from 9 to 19% and were more accurate than the wrist-based Fitbit Flex(®), but less accurate than hip attachment of the Fitbit One(®).\n\n\nCONCLUSIONS\nOur study shows that physical activity measures obtained from the upper torso attachment site of the Fitbit One(®) are accurate across different walking and running speeds in female adults. The upper torso attachment site of the Fitbit One(®) outperformed the wrist-based Fitbit Flex(®) and yielded similar step count estimates to hip-attachment. These data support the upper torso as an alternative attachment site for the Fitbit One(®)." }, { "pmid": "25789630", "title": "Step detection and activity recognition accuracy of seven physical activity monitors.", "abstract": "The aim of this study was to compare the seven following commercially available activity monitors in terms of step count detection accuracy: Movemonitor (Mc Roberts), Up (Jawbone), One (Fitbit), ActivPAL (PAL Technologies Ltd.), Nike+ Fuelband (Nike Inc.), Tractivity (Kineteks Corp.) and Sensewear Armband Mini (Bodymedia). Sixteen healthy adults consented to take part in the study. 
The experimental protocol included walking along an indoor straight walkway, descending and ascending 24 steps, free outdoor walking and free indoor walking. These tasks were repeated at three self-selected walking speeds. Angular velocity signals collected at both shanks using two wireless inertial measurement units (OPAL, ADPM Inc) were used as a reference for the step count, computed using previously validated algorithms. Step detection accuracy was assessed using the mean absolute percentage error computed for each sensor. The Movemonitor and the ActivPAL were also tested within a nine-minute activity recognition protocol, during which the participants performed a set of complex tasks. Posture classifications were obtained from the two monitors and expressed as a percentage of the total task duration. The Movemonitor, One, ActivPAL, Nike+ Fuelband and Sensewear Armband Mini underestimated the number of steps in all the observed walking speeds, whereas the Tractivity significantly overestimated step count. The Movemonitor was the best performing sensor, with an error lower than 2% at all speeds and the smallest error obtained in the outdoor walking. The activity recognition protocol showed that the Movemonitor performed best in the walking recognition, but had difficulty in discriminating between standing and sitting. Results of this study can be used to inform choice of a monitor for specific applications." }, { "pmid": "24268570", "title": "Validation of the Fitbit One activity monitor device during treadmill walking.", "abstract": "OBJECTIVES\nIn order to quantify the effects of physical activity such as walking on chronic disease, accurate measurement of physical activity is needed. The objective of this study was to determine the validity and reliability of a new activity monitor, the Fitbit One, in a population of healthy adults.\n\n\nDESIGN\nCross-sectional study.\n\n\nMETHODS\nThirty healthy adults ambulated at 5 different speeds (0.90, 1.12, 1.33, 1.54, 1.78 m/s) on a treadmill while wearing three Fitbit One activity monitors (two on the hips and one in the pocket). The order of each speed condition was randomized. Fitbit One step count output was compared to observer counts and distance output was compared to the calibrated treadmill output. Two-way repeated measures ANOVA, concordance correlation coefficients, and Bland and Altman plots were used to assess validity and intra-class correlation coefficients (ICC) were used to assess reliability.\n\n\nRESULTS\nNo significant differences were noted between Fitbit One step count outputs and observer counts, and concordance was substantial (0.97-1.00). Inter-device reliability of the step count was high for all walking speeds (ICC ≥ 0.95). Percent relative error was less than 1.3%. The distance output of the Fitbit One activity monitors was significantly different from the criterion values for each monitor at all speeds (P<0.001) and exhibited poor concordance (0.0-0.05). Inter-device reliability was excellent for all treadmill speeds (ICC ≥ 0.90). Percent relative error was high (up to 39.6%).\n\n\nCONCLUSIONS\nThe Fitbit One activity monitors are valid and reliable devices for measuring step counts in healthy young adults. The distance output of the monitors is inaccurate and should be noted with caution." 
}, { "pmid": "26950687", "title": "Accuracy of three Android-based pedometer applications in laboratory and free-living settings.", "abstract": "This study examines the accuracy of three popular, free Android-based pedometer applications (apps), namely, Runtastic (RT), Pacer Works (PW), and Tayutau (TY) in laboratory and free-living settings. Forty-eight adults (22.5 ± 1.4 years) completed 3-min bouts of treadmill walking at five incremental speeds while carrying a test smartphone installed with the three apps. Experiment was repeated thrice, with the smartphone placed either in the pants pockets, at waist level, or secured to the left arm by an armband. The actual step count was manually counted by a tally counter. In the free-living setting, each of the 44 participants (21.9 ± 1.6 years) carried a smartphone with installed apps and a reference pedometer (Yamax Digi-Walker CW700) for 7 consecutive days. Results showed that TY produced the lowest mean absolute percent error (APE 6.7%) and was the only app with acceptable accuracy in counting steps in a laboratory setting. RT consistently underestimated steps with APE of 16.8% in the laboratory. PW significantly underestimated steps when the smartphone was secured to the arm, but overestimated under other conditions (APE 19.7%). TY was the most accurate app in counting steps in a laboratory setting with the lowest APE of 6.7%. In the free-living setting, the APE relative to the reference pedometer was 16.6%, 18.0%, and 16.8% for RT, PW, and TY, respectively. None of the three apps counted steps accurately in the free-living setting." }, { "pmid": "24497157", "title": "Measuring and influencing physical activity with smartphone technology: a systematic review.", "abstract": "BACKGROUND\nRapid developments in technology have encouraged the use of smartphones in physical activity research, although little is known regarding their effectiveness as measurement and intervention tools.\n\n\nOBJECTIVE\nThis study systematically reviewed evidence on smartphones and their viability for measuring and influencing physical activity.\n\n\nDATA SOURCES\nResearch articles were identified in September 2013 by literature searches in Web of Knowledge, PubMed, PsycINFO, EBSCO, and ScienceDirect.\n\n\nSTUDY SELECTION\nThe search was restricted using the terms (physical activity OR exercise OR fitness) AND (smartphone* OR mobile phone* OR cell phone*) AND (measurement OR intervention). Reviewed articles were required to be published in international academic peer-reviewed journals, or in full text from international scientific conferences, and focused on measuring physical activity through smartphone processing data and influencing people to be more active through smartphone applications.\n\n\nSTUDY APPRAISAL AND SYNTHESIS METHODS\nTwo reviewers independently performed the selection of articles and examined titles and abstracts to exclude those out of scope. Data on study characteristics, technologies used to objectively measure physical activity, strategies applied to influence activity; and the main study findings were extracted and reported.\n\n\nRESULTS\nA total of 26 articles (with the first published in 2007) met inclusion criteria. All studies were conducted in highly economically advantaged countries; 12 articles focused on special populations (e.g. obese patients). Studies measured physical activity using native mobile features, and/or an external device linked to an application. Measurement accuracy ranged from 52 to 100% (n = 10 studies). 
A total of 17 articles implemented and evaluated an intervention. Smartphone strategies to influence physical activity tended to be ad hoc, rather than theory-based approaches; physical activity profiles, goal setting, real-time feedback, social support networking, and online expert consultation were identified as the most useful strategies to encourage physical activity change. Only five studies assessed physical activity intervention effects; all used step counts as the outcome measure. Four studies (three pre-post and one comparative) reported physical activity increases (12-42 participants, 800-1,104 steps/day, 2 weeks-6 months), and one case-control study reported physical activity maintenance (n = 200 participants; >10,000 steps/day) over 3 months.\n\n\nLIMITATIONS\nSmartphone use is a relatively new field of study in physical activity research, and consequently the evidence base is emerging.\n\n\nCONCLUSIONS\nFew studies identified in this review considered the validity of phone-based assessment of physical activity. Those that did report on measurement properties found average-to-excellent levels of accuracy for different behaviors. The range of novel and engaging intervention strategies used by smartphones, and user perceptions on their usefulness and viability, highlights the potential such technology has for physical activity promotion. However, intervention effects reported in the extant literature are modest at best, and future studies need to utilize randomized controlled trial research designs, larger sample sizes, and longer study periods to better explore the physical activity measurement and intervention capabilities of smartphones." }, { "pmid": "25973205", "title": "Do non-elite older runners slow down more than younger runners in a 100 km ultra-marathon?", "abstract": "BACKGROUND\nThis study investigated changes in normalised running speed as a proxy for effort distribution over segments in male elite and age group 100 km ultra-marathoners with the assumption that older runners would slow down more than younger runners.\n\n\nMETHODS\nThe annual ten fastest finishers (i.e. elite and age group runners) competing between 2000 and 2009 in the '100 km Lauf Biel' were identified. Normalised average running speed (i.e. relative to segment 1 of the race corrected for gradient) was analysed as a proxy for pacing in elite and age group finishers. For each year, the ratio of the running speed from the final to the first segment for each age cohort was determined. These ratios were combined across years with the assumption that there were no 'extreme' wind events etc. which may have impacted the final relative to the first segment across years. The ratios between the age cohorts were compared using one-way ANOVA and Tukey's post-hoc test. The ratios between elite and age group runners were investigated using one-way ANOVA with Dunnett's multiple comparison post-hoc tests. The trend across age groups was investigated using simple regression analysis with age as the dependent variable.\n\n\nRESULTS\nNormalised average running speed was different between age group 18-24 years and age groups 25-29, 30-34, 35-39, 40-44, 45-49, 50-54, 55-59 and 65-69 years. Regression analysis showed no trend across age groups (r(2) = 0.003, p > 0.05).\n\n\nCONCLUSION\nTo summarize, (i) athletes in age group 18-24 years were slower than athletes in most other age groups and (ii) there was no trend of slowing down for older athletes." 
}, { "pmid": "9152686", "title": "Could a satellite-based navigation system (GPS) be used to assess the physical activity of individuals on earth?", "abstract": "OBJECTIVES\nTo test whether the Global Positioning System (GPS) could be potentially useful to assess the velocity of walking and running in humans.\n\n\nSUBJECT\nA young man was equipped with a GPS receptor while walking running and cycling at various velocity on an athletic track. The speed of displacement assessed by GPS, was compared to that directly measured by chronometry (76 tests).\n\n\nRESULTS\nIn walking and running conditions (from 2-20 km/h) as well as cycling conditions (from 20-40 km/h), there was a significant relationship between the speed assessed by GPS and that actually measured (r = 0.99, P < 0.0001) with little bias in the prediction of velocity. The overall error of prediction (s.d. of difference) averaged +/-0.8 km/h.\n\n\nCONCLUSION\nThe GPS technique appears very promising for speed assessment although the relative accuracy at walking speed is still insufficient for research purposes. It may be improved by using differential GPS measurement." }, { "pmid": "19887012", "title": "Global positioning system: a new opportunity in physical activity measurement.", "abstract": "Accurate measurement of physical activity is a pre-requisite to monitor population physical activity levels and design effective interventions. Global Positioning System (GPS) technology offers potential to improve the measurement of physical activity. This paper 1) reviews the extant literature on the application of GPS to monitor human movement, with a particular emphasis on free-living physical activity, 2) discusses issues associated with GPS use, and 3) provides recommendations for future research. Overall findings show that GPS is a useful tool to augment our understanding of physical activity by providing the context (location) of the activity and used together with Geographical Information Systems can provide some insight into how people interact with the environment. However, no studies have shown that GPS alone is a reliable and valid measure of physical activity." }, { "pmid": "23812857", "title": "Global positioning systems (GPS) and microtechnology sensors in team sports: a systematic review.", "abstract": "BACKGROUND\nUse of Global positioning system (GPS) technology in team sport permits measurement of player position, velocity, and movement patterns. GPS provides scope for better understanding of the specific and positional physiological demands of team sport and can be used to design training programs that adequately prepare athletes for competition with the aim of optimizing on-field performance.\n\n\nOBJECTIVE\nThe objective of this study was to conduct a systematic review of the depth and scope of reported GPS and microtechnology measures used within individual sports in order to present the contemporary and emerging themes of GPS application within team sports.\n\n\nMETHODS\nA systematic review of the application of GPS technology in team sports was conducted. We systematically searched electronic databases from earliest record to June 2012. Permutations of key words included GPS; male and female; age 12-50 years; able-bodied; and recreational to elite competitive team sports.\n\n\nRESULTS\nThe 35 manuscripts meeting the eligibility criteria included 1,276 participants (age 11.2-31.5 years; 95 % males; 53.8 % elite adult athletes). 
The majority of manuscripts reported on GPS use in various football codes: Australian football league (AFL; n = 8), soccer (n = 7), rugby union (n = 6), and rugby league (n = 6), with limited representation in other team sports: cricket (n = 3), hockey (n = 3), lacrosse (n = 1), and netball (n = 1). Of the included manuscripts, 34 (97 %) detailed work rate patterns such as distance, relative distance, speed, and accelerations, with only five (14.3 %) reporting on impact variables. Activity profiles characterizing positional play and competitive levels were also described. Work rate patterns were typically categorized into six speed zones, ranging from 0 to 36.0 km·h⁻¹, with descriptors ranging from walking to sprinting used to identify the type of activity mainly performed in each zone. With the exception of cricket, no standardized speed zones or definitions were observed within or between sports. Furthermore, speed zone criteria often varied widely within (e.g. zone 3 of AFL ranged from 7 to 16 km·h⁻¹) and between sports (e.g. zone 3 of soccer ranged from 3.0 to <13 km·h⁻¹ code). Activity descriptors for a zone also varied widely between sports (e.g. zone 4 definitions ranged from jog, run, high velocity, to high-intensity run). Most manuscripts focused on the demands of higher intensity efforts (running and sprint) required by players. Body loads and impacts, also summarized into six zones, showed small variations in descriptions, with zone criteria based upon grading systems provided by GPS manufacturers.\n\n\nCONCLUSION\nThis systematic review highlights that GPS technology has been used more often across a range of football codes than across other team sports. Work rate pattern activities are most often reported, whilst impact data, which require the use of microtechnology sensors such as accelerometers, are least reported. There is a lack of consistency in the definition of speed zones and activity descriptors, both within and across team sports, thus underscoring the difficulties encountered in meaningful comparisons of the physiological demands both within and between team sports. A consensus on definitions of speed zones and activity descriptors within sports would facilitate direct comparison of the demands within the same sport. Meta-analysis from systematic review would also be supported. Standardization of speed zones between sports may not be feasible due to disparities in work rate pattern activities." }, { "pmid": "14719979", "title": "Global positioning system and sport-specific testing.", "abstract": "Most physiological testing of athletes is performed in well-controlled situations in the laboratory. Multiple factors that are hard to control for have limited the use of sport-specific field testing. Recently, the technique of the differential global positioning system (dGPS) has been put forward as a way to monitor the position and speed of an athlete during outdoor activities with acceptable precision, thus controlling the two most important factors of performance in endurance athletics, i.e. inclination and speed. A detailed analysis of performance has been shown to be possible in combination with metabolic gas measurements. The combination of accelerometry and dGPS has also been shown to improve physiological field testing. The technique of dGPS could probably also be combined with other bio-measurements (e.g. 
electromyography and cycling cadence and power) and may enable other studies of exercise physiology in the field, otherwise restricted to the laboratory environment. This technique may also be of use in general exercise physiology where monitoring of patients with, for example, cardiovascular and pulmonary diseases, could be of interest for the future." } ]
International Journal of Biomedical Imaging
28367213
PMC5358478
10.1155/2017/9749108
Image Analysis for MRI Based Brain Tumor Detection and Feature Extraction Using Biologically Inspired BWT and SVM
The segmentation, detection, and extraction of infected tumor regions from magnetic resonance (MR) images are a primary concern, but they are tedious and time-consuming tasks performed by radiologists or clinical experts, and their accuracy depends solely on the experts' experience. The use of computer-aided technology therefore becomes necessary to overcome these limitations. In this study, to improve performance and reduce the complexity involved in the medical image segmentation process, we investigated Berkeley wavelet transformation (BWT) based brain tumor segmentation. Furthermore, to improve the accuracy and quality rate of the support vector machine (SVM) based classifier, relevant features are extracted from each segmented tissue. The experimental results of the proposed technique were evaluated and validated for performance and quality analysis on MR brain images, based on accuracy, sensitivity, specificity, and the dice similarity index coefficient. The proposed technique achieved 96.51% accuracy, 94.2% specificity, and 97.72% sensitivity, demonstrating its effectiveness in identifying normal and abnormal tissues in brain MR images. It also obtained an average dice similarity index coefficient of 0.82, which indicates good overlap between the automatically extracted tumor region and the tumor region manually extracted by radiologists. The simulation results confirm the significance of the approach in terms of quality parameters and accuracy in comparison with state-of-the-art techniques.
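To make the classification and evaluation stage concrete, below is a minimal sketch of training an RBF-kernel SVM on per-region feature vectors and reporting accuracy, sensitivity, and specificity. The feature matrix, labels, and kernel settings are illustrative assumptions, not the configuration used in the study.

# Minimal sketch (not the study's implementation): classify segmented brain
# tissue regions as normal (0) or tumor (1) from precomputed feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))      # placeholder features (e.g., texture/wavelet statistics)
y = rng.integers(0, 2, size=200)    # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # kernel choice is an assumption
clf.fit(scaler.transform(X_tr), y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(scaler.transform(X_te)), labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)        # true positive rate
specificity = tn / (tn + fp)        # true negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f}")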
2. Related Works
Medical image segmentation for the detection of brain tumors from magnetic resonance (MR) images or from other medical imaging modalities is a very important process for deciding the right therapy at the right time. Many techniques have been proposed for the classification of brain tumors in MR images; most notably, fuzzy c-means (FCM) clustering, support vector machines (SVM), artificial neural networks (ANN), knowledge-based techniques, and the expectation-maximization (EM) algorithm are among the popular techniques used for region-based segmentation and for extracting important information from medical imaging modalities. An overview and the findings of some recent and prominent studies are presented here. Damodharan and Raghavan [10] have presented a neural network based technique for brain tumor detection and classification. In this method, the quality rate is produced separately for the segmentation of WM, GM, CSF, and the tumor region, and an accuracy of 83% is claimed using a neural network based classifier. Alfonse and Salem [11] have presented a technique for the automatic classification of brain tumors from MR images using an SVM-based classifier. To improve the accuracy of the classifier, features are extracted using the fast Fourier transform (FFT) and feature reduction is performed using the Minimal-Redundancy-Maximal-Relevance (MRMR) technique. This technique obtained an accuracy of 98.9%.
The extraction of the brain tumor requires the separation of the brain MR image into two regions [12]. One region contains the tumor cells of the brain and the second contains the normal brain cells [13]. Zanaty [14] proposed a methodology for brain tumor segmentation based on a hybrid approach, combining FCM, seed region growing, and the Jaccard similarity coefficient to measure segmented gray matter and white matter tissues in MR images. This method obtained an average segmentation score S of 90% at noise levels of 3% and 9%, respectively. Kong et al. [7] investigated the automatic segmentation of brain tissues from MR images using a discriminative clustering and feature selection approach. Demirhan et al. [5] presented a new tissue segmentation algorithm using wavelets and neural networks, which is claimed to segment brain MR images effectively into tumor, WM, GM, edema, and CSF. Torheim et al. [15], Guo et al. [1], and Yao et al. [16] presented techniques that employ texture features, wavelet transforms, and SVM algorithms for the effective classification of dynamic contrast-enhanced MR images, to handle the nonlinearity of real data and to address different imaging protocols effectively. Torheim et al. [15] also report that their approach gives better predictions than first-order statistical features, with accuracies comparable to the clinical factors tumor volume and tumor stage.
Kumar and Vijayakumar [17] introduced brain tumor segmentation and classification based on principal component analysis (PCA) and a radial basis function (RBF) kernel based SVM, and report a similarity index of 96.20%, an overlap fraction of 95%, and an extra fraction of 0.025%. The classification accuracy of this method for identifying the tumor type is 94%, with total errors of 7.5%. Sharma et al. [18] have presented a highly efficient technique which claims an accuracy of 100% in the classification of brain tumors from MR images. This method utilizes texture-primitive features with an artificial neural network (ANN) as the segmentation and classification tool. Cui et al. [19] applied localized fuzzy clustering with spatial information to formulate an objective function for medical image segmentation and bias field estimation of brain MR images. In this method, the authors use the Jaccard similarity index as a measure of segmentation accuracy and report 83% to 95% accuracy in segmenting white matter, gray matter, and cerebrospinal fluid. Wang et al. [20] have presented a medical image segmentation technique based on an active contour model to deal with the problem of intensity inhomogeneity in image segmentation. Chaddad [21] has proposed a technique for automatic feature extraction for brain tumor detection based on a Gaussian mixture model (GMM) using MR images. In this method, the performance of the GMM feature extraction is enhanced using principal component analysis (PCA) and wavelet based features. Accuracies of 97.05% for T1-weighted and T2-weighted images and 94.11% for FLAIR-weighted MR images are obtained.
Deepa and Arunadevi [22] have proposed an extreme learning machine technique for the classification of brain tumors from 3D MR images. This method obtained an accuracy of 93.2%, a sensitivity of 91.6%, and a specificity of 97.8%. Sachdeva et al. [23] have presented multiclass brain tumor classification, segmentation, and feature extraction performed on a dataset of 428 MR images. In this method, the authors used an ANN and then PCA-ANN and observed an increase in classification accuracy from 77% to 91%.
The above literature survey reveals that some techniques address segmentation only, some address feature extraction only, and some address classification only. Feature extraction and the reduction of feature vectors for the effective segmentation of WM, GM, CSF, and the infected tumor region, together with an analysis of a combined approach, have not been conducted in the published literature. Moreover, only a few features are typically extracted, and therefore very low accuracy in tumor detection has been obtained. Also, all of the above studies omit the calculation of overlap, that is, the dice similarity index, which is one of the important parameters for judging the accuracy of any brain tumor segmentation algorithm.
In this study, we combine the biologically inspired Berkeley wavelet transformation (BWT) and an SVM classifier to improve diagnostic accuracy. The aim of this study is to extract information from the segmented tumor region and to classify healthy and infected tumor tissues for a large database of medical images. Our results lead us to conclude that the proposed method is suitable for integration into clinical decision support systems for primary screening and diagnosis by radiologists or clinical experts.
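Because the dice similarity index is singled out above as the overlap measure missing from prior work, a brief illustrative computation follows; the binary masks are hypothetical placeholders rather than data from any of the cited studies.

# Minimal sketch: dice similarity coefficient between an automatically
# segmented tumor mask and a manually delineated reference mask.
import numpy as np

def dice_coefficient(pred, ref):
    """DSC = 2 * |A intersect B| / (|A| + |B|); returns 1.0 for two empty masks."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Hypothetical 8x8 masks for illustration only.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
ref = np.zeros((8, 8), dtype=bool); ref[3:7, 3:7] = True
print(round(dice_coefficient(pred, ref), 3))  # partial overlap of the two squares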
[ "23790354", "25265636", "24240724", "24802069", "19893702", "23645344", "18194102" ]
[ { "pmid": "23790354", "title": "State of the art survey on MRI brain tumor segmentation.", "abstract": "Brain tumor segmentation consists of separating the different tumor tissues (solid or active tumor, edema, and necrosis) from normal brain tissues: gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). In brain tumor studies, the existence of abnormal tissues may be easily detectable most of the time. However, accurate and reproducible segmentation and characterization of abnormalities are not straightforward. In the past, many researchers in the field of medical imaging and soft computing have made significant survey in the field of brain tumor segmentation. Both semiautomatic and fully automatic methods have been proposed. Clinical acceptance of segmentation techniques has depended on the simplicity of the segmentation, and the degree of user supervision. Interactive or semiautomatic methods are likely to remain dominant in practice for some time, especially in these applications where erroneous interpretations are unacceptable. This article presents an overview of the most relevant brain tumor segmentation methods, conducted after the acquisition of the image. Given the advantages of magnetic resonance imaging over other diagnostic imaging, this survey is focused on MRI brain tumor segmentation. Semiautomatic and fully automatic techniques are emphasized." }, { "pmid": "25265636", "title": "Segmentation of tumor and edema along with healthy tissues of brain using wavelets and neural networks.", "abstract": "Robust brain magnetic resonance (MR) segmentation algorithms are critical to analyze tissues and diagnose tumor and edema in a quantitative way. In this study, we present a new tissue segmentation algorithm that segments brain MR images into tumor, edema, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The detection of the healthy tissues is performed simultaneously with the diseased tissues because examining the change caused by the spread of tumor and edema on healthy tissues is very important for treatment planning. We used T1, T2, and FLAIR MR images of 20 subjects suffering from glial tumor. We developed an algorithm for stripping the skull before the segmentation process. The segmentation is performed using self-organizing map (SOM) that is trained with unsupervised learning algorithm and fine-tuned with learning vector quantization (LVQ). Unlike other studies, we developed an algorithm for clustering the SOM instead of using an additional network. Input feature vector is constructed with the features obtained from stationary wavelet transform (SWT) coefficients. The results showed that average dice similarity indexes are 91% for WM, 87% for GM, 96% for CSF, 61% for tumor, and 77% for edema." }, { "pmid": "24240724", "title": "A watermarking-based medical image integrity control system and an image moment signature for tampering characterization.", "abstract": "In this paper, we present a medical image integrity verification system to detect and approximate local malevolent image alterations (e.g., removal or addition of lesions) as well as identifying the nature of a global processing an image may have undergone (e.g., lossy compression, filtering, etc.). The proposed integrity analysis process is based on nonsignificant region watermarking with signatures extracted from different pixel blocks of interest, which are compared with the recomputed ones at the verification stage. A set of three signatures is proposed. 
The first two devoted to detection and modification location are cryptographic hashes and checksums, while the last one is issued from the image moment theory. In this paper, we first show how geometric moments can be used to approximate any local modification by its nearest generalized 2-D Gaussian. We then demonstrate how ratios between original and recomputed geometric moments can be used as image features in a classifier-based strategy in order to determine the nature of a global image processing. Experimental results considering both local and global modifications in MRI and retina images illustrate the overall performances of our approach. With a pixel block signature of about 200 bit long, it is possible to detect, to roughly localize, and to get an idea about the image tamper." }, { "pmid": "24802069", "title": "Classification of dynamic contrast enhanced MR images of cervical cancers using texture analysis and support vector machines.", "abstract": "Dynamic contrast enhanced MRI (DCE-MRI) provides insight into the vascular properties of tissue. Pharmacokinetic models may be fitted to DCE-MRI uptake patterns, enabling biologically relevant interpretations. The aim of our study was to determine whether treatment outcome for 81 patients with locally advanced cervical cancer could be predicted from parameters of the Brix pharmacokinetic model derived from pre-chemoradiotherapy DCE-MRI. First-order statistical features of the Brix parameters were used. In addition, texture analysis of Brix parameter maps was done by constructing gray level co-occurrence matrices (GLCM) from the maps. Clinical factors and first- and second-order features were used as explanatory variables for support vector machine (SVM) classification, with treatment outcome as response. Classification models were validated using leave-one-out cross-model validation. A random value permutation test was used to evaluate model significance. Features derived from first-order statistics could not discriminate between cured and relapsed patients (specificity 0%-20%, p-values close to unity). However, second-order GLCM features could significantly predict treatment outcome with accuracies (~70%) similar to the clinical factors tumor volume and stage (69%). The results indicate that the spatial relations within the tumor, quantified by texture features, were more suitable for outcome prediction than first-order features." }, { "pmid": "19893702", "title": "Segmentation and classification of medical images using texture-primitive features: Application of BAM-type artificial neural network.", "abstract": "The objective of developing this software is to achieve auto-segmentation and tissue characterization. Therefore, the present algorithm has been designed and developed for analysis of medical images based on hybridization of syntactic and statistical approaches, using artificial neural network (ANN). This algorithm performs segmentation and classification as is done in human vision system, which recognizes objects; perceives depth; identifies different textures, curved surfaces, or a surface inclination by texture information and brightness. The analysis of medical image is directly based on four steps: 1) image filtering, 2) segmentation, 3) feature extraction, and 4) analysis of extracted features by pattern recognition system or classifier. In this paper, an attempt has been made to present an approach for soft tissue characterization utilizing texture-primitive features with ANN as segmentation and classifier tool. 
The present approach directly combines second, third, and fourth steps into one algorithm. This is a semisupervised approach in which supervision is involved only at the level of defining texture-primitive cell; afterwards, algorithm itself scans the whole image and performs the segmentation and classification in unsupervised mode. The algorithm was first tested on Markov textures, and the success rate achieved in classification was 100%; further, the algorithm was able to give results on the test images impregnated with distorted Markov texture cell. In addition to this, the output also indicated the level of distortion in distorted Markov texture cell as compared to standard Markov texture cell. Finally, algorithm was applied to selected medical images for segmentation and classification. Results were in agreement with those with manual segmentation and were clinically correlated." }, { "pmid": "23645344", "title": "Segmentation, feature extraction, and multiclass brain tumor classification.", "abstract": "Multiclass brain tumor classification is performed by using a diversified dataset of 428 post-contrast T1-weighted MR images from 55 patients. These images are of primary brain tumors namely astrocytoma (AS), glioblastoma multiforme (GBM), childhood tumor-medulloblastoma (MED), meningioma (MEN), secondary tumor-metastatic (MET), and normal regions (NR). Eight hundred fifty-six regions of interest (SROIs) are extracted by a content-based active contour model. Two hundred eighteen intensity and texture features are extracted from these SROIs. In this study, principal component analysis (PCA) is used for reduction of dimensionality of the feature space. These six classes are then classified by artificial neural network (ANN). Hence, this approach is named as PCA-ANN approach. Three sets of experiments have been performed. In the first experiment, classification accuracy by ANN approach is performed. In the second experiment, PCA-ANN approach with random sub-sampling has been used in which the SROIs from the same patient may get repeated during testing. It is observed that the classification accuracy has increased from 77 to 91 %. PCA-ANN has delivered high accuracy for each class: AS-90.74 %, GBM-88.46 %, MED-85 %, MEN-90.70 %, MET-96.67 %, and NR-93.78 %. In the third experiment, to remove bias and to test the robustness of the proposed system, data is partitioned in a manner such that the SROIs from the same patient are not common for training and testing sets. In this case also, the proposed system has performed well by delivering an overall accuracy of 85.23 %. The individual class accuracy for each class is: AS-86.15 %, GBM-65.1 %, MED-63.36 %, MEN-91.5 %, MET-65.21 %, and NR-93.3 %. A computer-aided diagnostic system comprising of developed methods for segmentation, feature extraction, and classification of brain tumors can be beneficial to radiologists for precise localization, diagnosis, and interpretation of brain tumors on MR images." }, { "pmid": "18194102", "title": "The berkeley wavelet transform: a biologically inspired orthogonal wavelet transform.", "abstract": "We describe the Berkeley wavelet transform (BWT), a two-dimensional triadic wavelet transform. The BWT comprises four pairs of mother wavelets at four orientations. Within each pair, one wavelet has odd symmetry, and the other has even symmetry. By translation and scaling of the whole set (plus a single constant term), the wavelets form a complete, orthonormal basis in two dimensions. 
The BWT shares many characteristics with the receptive fields of neurons in mammalian primary visual cortex (V1). Like these receptive fields, BWT wavelets are localized in space, tuned in spatial frequency and orientation, and form a set that is approximately scale invariant. The wavelets also have spatial frequency and orientation bandwidths that are comparable with biological values. Although the classical Gabor wavelet model is a more accurate description of the receptive fields of individual V1 neurons, the BWT has some interesting advantages. It is a complete, orthonormal basis and is therefore inexpensive to compute, manipulate, and invert. These properties make the BWT useful in situations where computational power or experimental data are limited, such as estimation of the spatiotemporal receptive fields of neurons." } ]
Frontiers in Neurorobotics
28381998
PMC5360715
10.3389/fnbot.2017.00013
Real-Time Biologically Inspired Action Recognition from Key Poses Using a Neuromorphic Architecture
Intelligent agents, such as robots, have to serve a multitude of autonomous functions. Examples include collision avoidance, navigation and route planning, active sensing of the environment, and interaction and non-verbal communication with people in the extended reach space. Here, we focus on the recognition of the action of a human agent based on a biologically inspired visual architecture for analyzing articulated movements. The proposed processing architecture builds upon coarsely segregated streams of sensory processing along different pathways which separately process form and motion information (Layher et al., 2014). Action recognition is performed in an event-based scheme by identifying representations of characteristic pose configurations (key poses) in an image sequence. In line with perceptual studies, key poses are selected in an unsupervised manner using a feature-driven criterion which combines extrema in the motion energy with the horizontal and the vertical extendedness of a body shape. Per-class representations of key pose frames are learned using a deep convolutional neural network consisting of 15 convolutional layers. The network is trained using the energy-efficient deep neuromorphic networks (Eedn) framework (Esser et al., 2016), which realizes the mapping of the trained synaptic weights onto the IBM Neurosynaptic System platform (Merolla et al., 2014). After the mapping, the trained network achieves real-time processing of input streams, classifying input images at about 1,000 frames per second while the computational stages consume only about 70 mW of energy (without spike transduction). Particularly for mobile robotic systems, such a low energy profile may be crucial in a variety of application scenarios. Cross-validation results are reported for two different datasets and compared to state-of-the-art action recognition approaches. The results demonstrate that (I) the presented approach is on par with other key pose based methods described in the literature, which select key pose frames by optimizing classification accuracy, (II) compared to training on the full set of frames, representations trained on key pose frames result in higher confidence in class assignments, and (III) key pose representations show promising generalization capabilities in a cross-dataset evaluation.
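One simple way to aggregate framewise key pose classifications into a sequence-level decision, as done by the temporal integration via majority voting described later in the text, is sketched below; the per-frame predictions are hypothetical placeholders and not outputs of the described network.

# Minimal sketch: assign an action label to a sequence by majority voting over
# framewise class decisions (e.g., produced by a key pose classifier).
from collections import Counter

def majority_vote(frame_predictions):
    """Return the most frequent class label among the per-frame predictions."""
    label, _count = Counter(frame_predictions).most_common(1)[0]
    return label

# Hypothetical per-frame labels for one test sequence.
frame_predictions = ["wave", "wave", "walk", "wave", "wave", "walk", "wave"]
print(majority_vote(frame_predictions))  # -> wave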
2. Related work
The proposed key pose based action recognition approach is motivated and inspired by recent evidence about the learning mechanisms and representations involved in the processing of articulated motion sequences, as well as by hardware and software developments from various fields of the visual sciences. For instance, empirical studies indicate that special kinds of events within a motion sequence facilitate the recognition of an action. Additional evidence from psychophysics as well as neurophysiology suggests that both form and motion information contribute to the representation of an action. Modeling efforts propose functional mechanisms for the processing of biological motion and show how such processing principles can be transferred to technical domains. Deep convolutional networks make it possible to learn hierarchical object representations, which show an impressive recognition performance and enable the implementation of fast and energy efficient classification architectures, particularly in combination with neuromorphic hardware platforms. In the following sections, we will briefly introduce related work and results from different scientific fields, all contributing to a better understanding of action representation and the development of efficient action recognition approaches.
2.1. Articulated and biological motion
Starting with the pioneering work of Johansson (1973), the perceptual sciences have gained more and more insight into how biological motion might be represented in the human brain and what the characteristic properties of an articulated motion sequence are. In psychophysical experiments, humans show a remarkable performance in recognizing biological motions, even when the presented motion is reduced to a set of points moving coherently with the body joints (point light stimuli; PLS). In a detection task, subjects were capable of recognizing a walking motion within about 200 ms (Johansson, 1976). These stimuli, however, are not free of – at least configurational – form information, and the discussion about the contributions of form and motion in biological motion representation is still ongoing (Garcia and Grossman, 2008). Some studies indicate a stronger importance of motion cues (Mather and Murdoch, 1994), while others emphasize the role of configurational form information (Lange and Lappe, 2006). Even less is known about the specific nature and characteristics of the visual cues which facilitate the recognition of a biological motion sequence. In Casile and Giese (2005), a statistical analysis as well as the results of psychophysical experiments indicate that local opponent motion in the horizontal direction is one of the critical features for the recognition of PLS. Thurman and Grossman (2008) conclude that there are specific moments in an action performance which are "more perceptually salient" than others. Their results emphasize the importance of dynamic cues at moments when the distance between opposing limbs is lowest (corresponding to local opponent motion; maxima in the motion energy). In contrast, more recent findings by Thirkettle et al. (2009) indicate that moments of large horizontal body extension (co-occurring with minima in the motion energy) facilitate the recognition of a biological motion in a PLS.
In neurophysiology, functional imaging studies (Grossman et al., 2000) as well as single-cell recordings (Oram and Perrett, 1994) indicate the existence of specialized mechanisms for the processing of biological motion in the superior temporal sulcus (STS). STS has been suggested to be a point of convergence of the separate dorsal "where" and ventral "what" pathways (Boussaoud et al., 1990; Felleman and Van Essen, 1991), containing cells which integrate form and motion information of biological objects (Oram and Perrett, 1996) and selectively respond to, e.g., object manipulation, face, limb, and whole body motion (Puce and Perrett, 2003). Besides the evidence that both form and motion information contribute to the registration of biological motion, action specific cells in STS are reported to respond to static images of articulated bodies which in parallel evoke activities in the medio-temporal (MT) and medial superior temporal (MST) areas of the dorsal stream (implied motion), although there is no motion present in the input signal (Kourtzi and Kanwisher, 2000; Jellema and Perrett, 2003). In line with the psychophysical studies, these results indicate that poses with a specific feature characteristic (here, articulation) facilitate the recognition of a human motion sequence.
Complementary modeling efforts in the field of computational neuroscience suggest potential mechanisms which might explain the underlying neural processing and learning principles. In Giese and Poggio (2003), a model for the recognition of biological movements is proposed which processes visual input along two separate form and motion pathways and temporally integrates the responses of prototypical motion and form pattern (snapshot) cells via asymmetric connections in both pathways. Layher et al. (2014) extended this model by incorporating an interaction between the two pathways, realizing the automatic and unsupervised learning of key poses by modulating the learning of the form prototypes using a motion energy based signal derived in the motion pathway. In addition, a feedback mechanism is proposed in this extended model architecture which (I) realizes sequence selectivity by temporal association learning and (II) gives a potential explanation for the activities in MT/MST observed for static images of articulated poses in neurophysiological studies.
2.2. Action recognition in image sequences
In computer vision, the term vision-based action recognition summarizes approaches that assign an action label to each frame or to a collection of frames of an image sequence. Over the last decades, numerous vision-based action recognition approaches have been developed, and different taxonomies have been proposed to classify them by different aspects of their processing principles. In Poppe (2010), action recognition methods are separated by the nature of the image representation they rely on, as well as by the kind of classification scheme employed. Image representations are divided into global representations, which use a holistic representation of the body in the region of interest (ROI; most often the bounding box around a body silhouette in the image space), and local representations, which describe image and motion characteristics in a spatial or spatio-temporal local neighborhood.
Prominent examples of the use of whole-body representations are motion history images (MHI) (Bobick and Davis, 2001) or the application of histograms of oriented gradients (HOG) (Dalal and Triggs, 2005; Thurau and Hlavác, 2008). Local representations are, e.g., employed in Dollar et al. (2005), where motion and form based descriptors are derived in the local neighborhood (cuboids) of spatio-temporal interest points. Classification approaches are separated into direct classification, which disregards temporal relationships (e.g., using histograms of prototype descriptors, Dollar et al., 2005), and temporal state-space models, which explicitly model temporal transitions between observations (e.g., by employing Hidden Markov models (HMMs; Yamato et al., 1992) or dynamic time warping (DTW; Chaaraoui et al., 2013)). For further taxonomies and an exhaustive overview of computer vision action recognition approaches, we refer to the excellent reviews in Gavrila (1999); Aggarwal and Ryoo (2011); Weinland et al. (2011).
The proposed approach uses motion and form based feature properties to extract key pose frames. The identified key pose frames are used to learn class specific key pose representations using a deep convolutional neural network (DCNN). Classification is either performed framewise or by temporal integration through majority voting. Thus, following the taxonomy of Poppe (2010), the approach can be classified as using global representations together with a direct classification scheme. Key pose frames are considered as temporal events within an action sequence. This kind of action representation and classification is inherently invariant against variations in (recording and execution) speed. We do not argue that modeling temporal relationships between such events is not necessary in general. The very simple temporal integration scheme was chosen to focus on an analysis of the importance of key poses in the context of action representation and recognition. Because of their relevance to the presented approach, we will briefly compare specifically key pose based action recognition approaches in the following.
2.3. Key pose based action recognition
Key pose based action recognition approaches differ in their understanding of the concept of key poses. Some take a phenomenological perspective and define key poses as events which possess a specific feature characteristic giving rise to their peculiarity. There is no a priori knowledge available about whether, when, and how often such feature-driven events occur within an observed action sequence, neither during the establishment of the key pose representations during training, nor while trying to recognize an action sequence. Others regard key pose selection as the result of a statistical analysis, favoring poses which are easy to separate among different classes or which maximally capture the characteristics of an action sequence. The majority of approaches rely on such statistical properties and consider either the intra- or the inter-class distribution of image-based pose descriptors to identify key poses in action sequences.
Intra-class based approaches
Approaches which evaluate intra-class properties of the feature distributions regard key poses as the most representative poses of an action, and measures of centrality are exploited on agglomerations in pose feature spaces to identify the poses which are most common to an action sequence. In Chaaraoui et al. (2013), a contour based descriptor following Dedeoğlu et al. (2006) is used.
Key poses are selected by repetitive k-means clustering of the pose descriptors and by evaluating the resulting clusters using a compactness metric. A sequence of nearest neighbor key poses is derived for each test sequence, and dynamic time warping (DTW) is applied to account for different temporal scales. The class of the closest matching temporal sequence of key poses from the training set is used as the final recognition result. Based on histograms of oriented gradients (HOG) and histograms of weighted optical flow (HOWOF) descriptors, Cao et al. (2012) adapt a local linear embedding (LLE) strategy to establish a manifold model which reduces descriptor dimensionality while preserving the local relationship between the descriptors. Key poses are identified by interpreting the data points (i.e., descriptors/poses) on the manifold as an adjacency graph and applying a PageRank (Brin and Page, 1998) based procedure to determine the vertices of the graph with the highest centrality, or relevance.
In all, key pose selection based on an intra-class analysis of the feature distribution has the advantage of capturing the characteristics of one action in isolation, independent of the other classes in a dataset. Thus, key poses are not dataset specific and – in principle – can also be shared among different actions. However, most intra-class distribution based approaches build upon measures of centrality (i.e., as a part of cluster algorithms), and thus key poses are dominated by frequent poses of an action. Because they are part of transitions between others, frequent poses tend to occur in different classes and thus do not help in separating them. Infrequent poses, on the other hand, are not captured very well, but are intuitively more likely to be discriminative. The authors are not aware of an intra-class distribution based method which tries to identify key poses based on their infrequency or abnormality (e.g., by evaluating cluster sizes and distances).
Inter-class based approaches
Approaches based on the inter-class distribution, on the other hand, consider highly discriminative poses as key poses to separate different action appearances. Discriminability is here defined as resulting in either the best classification performance or in maximum dissimilarities between the extracted pose descriptors of different classes. To maximize the classification performance, Weinland and Boyer (2008) propose a method of identifying a vocabulary of highly discriminative pose exemplars. In each iteration of the forward selection of key poses, one exemplar at a time is added to the set of key poses by independently evaluating the classification performance of the currently selected set of poses in union with one of the remaining exemplars in the training set. The pose exemplar which increases classification performance the most is then added to the final key pose set. The procedure is repeated until a predefined number of key poses is reached. Classification is performed based on a distance metric obtained by either silhouette-to-silhouette or silhouette-to-edge matching. Liu et al. (2013) combine the output of the early stages of an HMAX inspired processing architecture (Riesenhuber and Poggio, 1999) with a center-surround feature map obtained by subtracting several layers of a Gaussian pyramid and a wavelet Laplacian pyramid feature map into framewise pose descriptors. The linearized feature descriptors are projected into a low-dimensional subspace derived by principal component analysis (PCA).
Key poses are selected by employing an adaptive boosting technique (AdaBoost; Freund and Schapire, 1995) to select the most discriminative feature descriptors (i.e., poses). A test action sequence is matched to the thus reduced number of exemplars per action by applying an adapted local naive Bayes nearest neighbor classification scheme (LNBNN; McCann and Lowe, 2012): each descriptor of a test sequence is assigned to its k nearest neighbors, and a classwise vote is updated by the distance of the descriptor to the respective neighbor, weighted by the relative number of classes per descriptor. In Baysal et al. (2010), noise-reduced edges of an image are chained into a contour segmented network (CSN) using orientation and closeness properties and transformed into 2-adjacent-segment descriptors (2-AS, an instance of the k-adjacent-segment descriptors, k-AS; Ferrari et al., 2008). The most characteristic descriptors are determined by identifying k candidate key poses per class using the k-medoids clustering algorithm and selecting the most distinctive ones among the set of all classes using a similarity measure on the 2-AS descriptors. Classification is performed by assigning each frame to the class of the key pose with the highest similarity, followed by sequence-wide majority voting. Cheema et al. (2011) follow the same key pose extraction scheme, but instead of selecting only the most distinctive candidates, key pose candidates are weighted by the number of correct and false assignments to an action class. A weighted voting scheme is then used to classify a given test sequence. Thus, although key poses with large weights have an increased influence on the final class assignment, all key poses take part in the classification process. Zhao and Elgammal (2008) use an information theoretic approach to select key frames within action sequences. They describe the local neighborhood of spatiotemporal interest points using an intensity gradient based descriptor (Dollar et al., 2005). The extracted descriptors are then clustered, resulting in a codebook of prototypical descriptors (visual words). The pose prototypes are used to estimate the discriminatory power of a frame by calculating a measure based on the conditional entropy given the visual words detected in the frame. The frames with the highest discriminatory power are marked as key frames. Chi-square distances of histogram based spatiotemporal representations are used to compare key frames from the test and training datasets, and majority voting is used to assign an action class to a test sequence.

For a given pose descriptor and/or classification architecture, inter-class based key pose selection methods in principle minimize the recognition error, either for the recognition of the key poses (e.g., Baysal et al., 2010; Liu et al., 2013) or for the action classification (e.g., Weinland and Boyer, 2008). On the other hand, key poses obtained by inter-class analysis inherently do not cover the most characteristic poses of an action, but the ones which are most distinctive within a specific set of actions. Applying this class of algorithms to two different sets of actions sharing one common action might therefore result in a different selection of key poses for the same action.
Thus, once extracted, key pose representations do not necessarily generalize across different datasets/domains, and, in addition, sharing key poses between different classes is not intended.

Feature-driven approaches

Feature-driven key pose selection methods do not rely on the distribution of features or descriptors at all; they define a key pose as a pose which co-occurs with a specific characteristic of an image or feature. Commonly employed features, such as extrema of a motion energy based signal, are often correlated with pose properties such as the degree of articulation or extension. Compared to statistical methods, this is a more pose-centered perspective, since parameters of the pose itself are used to select a key pose instead of parameters describing the relationships or differences between poses.

Lv and Nevatia (2007) select key poses in sequences of 3D joint positions by automatically locating extrema of the motion energy within temporal windows. Motion energy in their approach is determined by summing the L2 norms of the motion vectors of the joints between two temporally adjacent timesteps. 3D motion capture data is used to render 2D projections of the key poses from different view angles. Single frames of an action sequence are matched to the silhouettes of the resulting 2D key pose representations using an extension of the Pyramid Match Kernel algorithm (PMK; Grauman and Darrell, 2005). Transitions between key poses are modeled using action graph models. Given an action sequence, the most likely action model is determined using the Viterbi algorithm. In Gong et al. (2010), a key pose selection mechanism for 3D human action representations is proposed. Per action sequence, feature vectors (three angles for twelve joints) are projected onto the subspace spanned by the first three eigenvectors obtained by PCA. Several instances of an action are synchronized to derive the mean performance (in terms of execution) of the action. Motion energy is then defined as the Euclidean distance between two adjacent poses in the mean performance. The local extrema of the motion energy are used to select the key poses, which, after their reconstruction in the original space, are used as the vocabulary in a bag-of-words approach. During recognition, each pose within a sequence is assigned to the key pose with the minimum Euclidean distance, resulting in a histogram of key pose occurrences per sequence. These histograms serve as input to a support vector machine (SVM) classifier. In Ogale et al. (2007), candidate key poses are extracted by localizing the extrema of the mean motion magnitude in the estimated optical flow. Redundant poses are sorted out pairwise by considering the ratio between the intersection and the union of two registered silhouettes. The final set of unique key poses is used to construct a probabilistic context-free grammar (PCFG). This method uses an inter-class metric to reject preselected key pose candidates and thus is not purely feature-driven.

Feature-driven key pose selection methods are independent of the number of different actions within a dataset. Thus, retraining is not necessary if, e.g., a new action is added to a dataset, and sharing key poses among different actions is in principle possible. Naturally, there is no guarantee that the selected poses maximize the separability of pose or action classes.
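To make the feature-driven scheme concrete, the following Python sketch selects key poses at local extrema of a joint-based motion-energy signal and classifies sequences with a bag-of-key-poses histogram and an SVM, loosely in the spirit of Lv and Nevatia (2007) and Gong et al. (2010) as summarized above. The array shapes, the window size, the pooled (unclustered) vocabulary, and the use of scikit-learn are illustrative assumptions, not the cited authors' implementations.

```python
# Minimal sketch: feature-driven key pose selection via motion-energy extrema,
# followed by a bag-of-key-poses histogram and an SVM. Shapes, parameters, and
# the scikit-learn classifier are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC

def motion_energy(joints):
    """joints: (T, J, 3) array of 3D joint positions for one sequence.
    Returns a (T-1,) signal: sum of L2 norms of per-joint displacement
    vectors between temporally adjacent frames."""
    disp = np.diff(joints, axis=0)                     # (T-1, J, 3)
    return np.linalg.norm(disp, axis=2).sum(axis=1)    # (T-1,)

def key_pose_indices(energy, window=5):
    """Frames at local extrema of the motion-energy signal within a
    sliding temporal window."""
    idx = []
    for t in range(len(energy)):
        lo, hi = max(0, t - window), min(len(energy), t + window + 1)
        seg = energy[lo:hi]
        if energy[t] == seg.max() or energy[t] == seg.min():
            idx.append(t)
    return np.array(idx, dtype=int)

def bag_of_key_poses(joints, vocabulary):
    """Assign every frame to its nearest key pose (Euclidean distance on
    flattened joint coordinates) and return a normalized histogram."""
    frames = joints.reshape(len(joints), -1)                       # (T, J*3)
    dists = np.linalg.norm(frames[:, None, :] - vocabulary[None], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(vocabulary))
    return hist / hist.sum()

# Hypothetical usage on a list of (sequence, label) pairs. In practice the
# pooled key poses would typically be clustered or pruned to a small vocabulary.
def train(sequences, labels):
    vocab = np.vstack([
        seq.reshape(len(seq), -1)[key_pose_indices(motion_energy(seq))]
        for seq in sequences
    ])
    X = np.vstack([bag_of_key_poses(seq, vocab) for seq in sequences])
    clf = SVC(kernel="linear").fit(X, labels)
    return vocab, clf
```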
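The greedy forward selection of discriminative exemplars summarized for Weinland and Boyer (2008) in the inter-class subsection above can be sketched as follows. The function and parameter names, and the externally supplied score function standing in for training and evaluating a classifier on a candidate exemplar set, are assumptions for illustration only.

```python
# Greedy forward selection of discriminative pose exemplars, in the spirit of
# Weinland and Boyer (2008) as summarized above. `score(candidates)` is a
# placeholder for evaluating classification performance with the given
# exemplar set (e.g., via silhouette-based matching in the original work).
def forward_select_exemplars(exemplars, score, n_keypose=20):
    selected = []
    remaining = list(exemplars)
    while remaining and len(selected) < n_keypose:
        # Try each remaining exemplar in union with the current selection and
        # keep the one that improves classification performance the most.
        best = max(remaining, key=lambda e: score(selected + [e]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Any classifier and distance metric can be plugged in through the score function; in the original work this role would be played by the silhouette-to-silhouette or silhouette-to-edge matching described above.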
[ "2358548", "15929657", "27651489", "1822724", "18000323", "18346774", "12612631", "17934233", "11054914", "26157000", "9377276", "14527537", "22392705", "1005623", "26147887", "10769305", "16540566", "23757577", "25104385", "23962364", "8836213", "12689371", "10526343", "18585892", "19757967", "18484834" ]
[ { "pmid": "2358548", "title": "Pathways for motion analysis: cortical connections of the medial superior temporal and fundus of the superior temporal visual areas in the macaque.", "abstract": "To identify the cortical connections of the medial superior temporal (MST) and fundus of the superior temporal (FST) visual areas in the extrastriate cortex of the macaque, we injected multiple tracers, both anterograde and retrograde, in each of seven macaques under physiological control. We found that, in addition to connections with each other, both MST and FST have widespread connections with visual and polysensory areas in posterior prestriate, parietal, temporal, and frontal cortex. In prestriate cortex, both areas have connections with area V3A. MST alone has connections with the far peripheral field representations of V1 and V2, the parieto-occipital (PO) visual area, and the dorsal prelunate area (DP), whereas FST alone has connections with area V4 and the dorsal portion of area V3. Within the caudal superior temporal sulcus, both areas have extensive connections with the middle temporal area (MT), MST alone has connections with area PP, and FST alone has connections with area V4t. In the rostral superior temporal sulcus, both areas have extensive connections with the superior temporal polysensory area (STP) in the upper bank of the sulcus and with area IPa in the sulcal floor. FST also has connections with the cortex in the lower bank of the sulcus, involving area TEa. In the parietal cortex, both the central field representation of MST and FST have connections with the ventral intraparietal (VIP) and lateral intraparietal (LIP) areas, whereas MST alone has connections with the inferior parietal gyrus. In the temporal cortex, the central field representation of MST as well as FST has connections with visual area TEO and cytoarchitectonic area TF. In the frontal cortex, both MST and FST have connections with the frontal eye field. On the basis of the laminar pattern of anterograde and retrograde label, it was possible to classify connections as forward, backward, or intermediate and thereby place visual areas into a cortical hierarchy. In general, MST and FST receive forward inputs from prestriate visual areas, have intermediate connections with parietal areas, and project forward to the frontal eye field and areas in the rostral superior temporal sulcus. Because of the strong inputs to MST and FST from area MT, an area known to play a role in the analysis of visual motion, and because MST and FST themselves have high proportions of directionally selective cells, they appear to be important stations in a cortical motion processing system." }, { "pmid": "15929657", "title": "Critical features for the recognition of biological motion.", "abstract": "Humans can perceive the motion of living beings from very impoverished stimuli like point-light displays. How the visual system achieves the robust generalization from normal to point-light stimuli remains an unresolved question. We present evidence on multiple levels demonstrating that this generalization might be accomplished by an extraction of simple mid-level optic flow features within coarse spatial arrangement, potentially exploiting relatively simple neural circuits: (1) A statistical analysis of the most informative mid-level features reveals that normal and point-light walkers share very similar dominant local optic flow features. 
(2) We devise a novel point-light stimulus (critical features stimulus) that contains these features, and which is perceived as a human walker even though it is inconsistent with the skeleton of the human body. (3) A neural model that extracts only these critical features accounts for substantial recognition rates for strongly degraded stimuli. We conclude that recognition of biological motion might be accomplished by detecting mid-level optic flow features with relatively coarse spatial localization. The computationally challenging reconstruction of precise position information from degraded stimuli might not be required." }, { "pmid": "27651489", "title": "Convolutional networks for fast, energy-efficient neuromorphic computing.", "abstract": "Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer." }, { "pmid": "1822724", "title": "Distributed hierarchical processing in the primate cerebral cortex.", "abstract": "In recent years, many new cortical areas have been identified in the macaque monkey. The number of identified connections between areas has increased even more dramatically. We report here on (1) a summary of the layout of cortical areas associated with vision and with other modalities, (2) a computerized database for storing and representing large amounts of information on connectivity patterns, and (3) the application of these data to the analysis of hierarchical organization of the cerebral cortex. Our analysis concentrates on the visual system, which includes 25 neocortical areas that are predominantly or exclusively visual in function, plus an additional 7 areas that we regard as visual-association areas on the basis of their extensive visual inputs. A total of 305 connections among these 32 visual and visual-association areas have been reported. This represents 31% of the possible number of pathways if each area were connected with all others. The actual degree of connectivity is likely to be closer to 40%. The great majority of pathways involve reciprocal connections between areas. There are also extensive connections with cortical areas outside the visual system proper, including the somatosensory cortex, as well as neocortical, transitional, and archicortical regions in the temporal and frontal lobes. In the somatosensory/motor system, there are 62 identified pathways linking 13 cortical areas, suggesting an overall connectivity of about 40%. 
Based on the laminar patterns of connections between areas, we propose a hierarchy of visual areas and of somatosensory/motor areas that is more comprehensive than those suggested in other recent studies. The current version of the visual hierarchy includes 10 levels of cortical processing. Altogether, it contains 14 levels if one includes the retina and lateral geniculate nucleus at the bottom as well as the entorhinal cortex and hippocampus at the top. Within this hierarchy, there are multiple, intertwined processing streams, which, at a low level, are related to the compartmental organization of areas V1 and V2 and, at a high level, are related to the distinction between processing centers in the temporal and parietal lobes. However, there are some pathways and relationships (about 10% of the total) whose descriptions do not fit cleanly into this hierarchical scheme for one reason or another. In most instances, though, it is unclear whether these represent genuine exceptions to a strict hierarchy rather than inaccuracies or uncertainities in the reported assignment." }, { "pmid": "18000323", "title": "Groups of adjacent contour segments for object detection.", "abstract": "We present a family of scale-invariant local shape features formed by chains of k connected, roughly straight contour segments (kAS), and their use for object class detection. kAS are able to cleanly encode pure fragments of an object boundary, without including nearby clutter. Moreover, they offer an attractive compromise between information content and repeatability, and encompass a wide variety of local shape structures. We also define a translation and scale invariant descriptor encoding the geometric configuration of the segments within a kAS, making kAS easy to reuse in other frameworks, for example as a replacement or addition to interest points. Software for detecting and describing kAS is released on lear.inrialpes.fr/software. We demonstrate the high performance of kAS within a simple but powerful sliding-window object detection scheme. Through extensive evaluations, involving eight diverse object classes and more than 1400 images, we 1) study the evolution of performance as the degree of feature complexity k varies and determine the best degree; 2) show that kAS substantially outperform interest points for detecting shape-based classes; 3) compare our object detector to the recent, state-of-the-art system by Dalal and Triggs [4]." }, { "pmid": "18346774", "title": "Necessary but not sufficient: motion perception is required for perceiving biological motion.", "abstract": "Researchers have argued that biological motion perception from point-light animations is resolved from stationary form information. To determine whether motion is required for biological motion perception, we measured discrimination thresholds at isoluminance. Whereas simple direction discriminations falter at isoluminance, biological motion perception fails entirely. However, when performance is measured as a function of contrast, it is apparent that biological motion is contrast-dependent, while direction discriminations are contrast invariant. Our results are evidence that biological motion perception requires intact motion perception, but is also mediated by a secondary mechanism that may be the integration of form and motion, or the computation of higher-order motion cues." 
}, { "pmid": "12612631", "title": "Neural mechanisms for the recognition of biological movements.", "abstract": "The visual recognition of complex movements and actions is crucial for the survival of many species. It is important not only for communication and recognition at a distance, but also for the learning of complex motor actions by imitation. Movement recognition has been studied in psychophysical, neurophysiological and imaging experiments, and several cortical areas involved in it have been identified. We use a neurophysiologically plausible and quantitative model as a tool for organizing and making sense of the experimental data, despite their growing size and complexity. We review the main experimental findings and discuss possible neural mechanisms, and show that a learning-based, feedforward model provides a neurophysiologically plausible and consistent summary of many key experimental results." }, { "pmid": "17934233", "title": "Actions as space-time shapes.", "abstract": "Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent approach for analyzing 2D shapes and generalize it to deal with volumetric space-time action shapes. Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local space-time saliency, action dynamics, shape structure and orientation. We show that these features are useful for action recognition, detection and clustering. The method is fast, does not require video alignment and is applicable in (but not limited to) many scenarios where the background is known. Moreover, we demonstrate the robustness of our method to partial occlusions, non-rigid deformations, significant changes in scale and viewpoint, high irregularities in the performance of an action, and low quality video." }, { "pmid": "11054914", "title": "Brain areas involved in perception of biological motion.", "abstract": "These experiments use functional magnetic resonance imaging (fMRI) to reveal neural activity uniquely associated with perception of biological motion. We isolated brain areas activated during the viewing of point-light figures, then compared those areas to regions known to be involved in coherent-motion perception and kinetic-boundary perception. Coherent motion activated a region matching previous reports of human MT/MST complex located on the temporo-parieto-occipital junction. Kinetic boundaries activated a region posterior and adjacent to human MT previously identified as the kinetic-occipital (KO) region or the lateral-occipital (LO) complex. The pattern of activation during viewing of biological motion was located within a small region on the ventral bank of the occipital extent of the superior-temporal sulcus (STS). This region is located lateral and anterior to human MT/MST, and anterior to KO. Among our observers, we localized this region more frequently in the right hemisphere than in the left. This was true regardless of whether the point-light figures were presented in the right or left hemifield. A small region in the medial cerebellum was also active when observers viewed biological-motion sequences. Consistent with earlier neuroimaging and single-unit studies, this pattern of results points to the existence of neural mechanisms specialized for analysis of the kinematics defining biological motion." 
}, { "pmid": "26157000", "title": "Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream.", "abstract": "Converging evidence suggests that the primate ventral visual pathway encodes increasingly complex stimulus features in downstream areas. We quantitatively show that there indeed exists an explicit gradient for feature complexity in the ventral pathway of the human brain. This was achieved by mapping thousands of stimulus features of increasing complexity across the cortical sheet using a deep neural network. Our approach also revealed a fine-grained functional specialization of downstream areas of the ventral stream. Furthermore, it allowed decoding of representations from human brain activity at an unsurpassed degree of accuracy, confirming the quality of the developed approach. Stimulus features that successfully explained neural responses indicate that population receptive fields were explicitly tuned for object categorization. This provides strong support for the hypothesis that object categorization is a guiding principle in the functional organization of the primate ventral stream." }, { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." }, { "pmid": "14527537", "title": "Cells in monkey STS responsive to articulated body motions and consequent static posture: a case of implied motion?", "abstract": "We show that populations of visually responsive cells in the anterior part of the superior temporal sulcus (STSa) of the macaque monkey code for the sight of both specific articulated body actions and the consequent articulated static body postures. We define articulated actions as actions where one body part (e.g. a limb or head) moves with respect to the remainder of the body which remains static; conversely non-articulated actions are actions where the equivalent body parts do not move with respect to each other but move as one. Similarly, articulated static body postures contain a torsion or rotation between parts, while non-articulated postures do not. Cells were tested with the sight of articulated and non-articulated actions followed by the resultant articulated or non-articulated static body postures. In addition, the static body postures that formed the start and end of the actions were tested in isolation. 
The cells studied did not respond to the sight of non-articulated static posture, which formed the starting-point of the action, but responded vigorously to the articulated static posture that formed the end-point of the action. Other static postures resembling the articulated end-point posture, but which were in a more relaxed muscular state (i.e. non-articulated), did not evoke responses. The cells did not respond to body actions that were less often associated with the effective static articulated postures. Our results suggest that the cells' responses were related to the implied action rather than the static posture per se. We propose that the neural representations in STSa for actual biological motion may also extend to biological motion implied from static postures. These representations could play a role in producing the activity in the medial temporal/medial superior temporal (V5(MT)/MST) areas reported in fMRI studies when subjects view still photographs of people in action." }, { "pmid": "22392705", "title": "3D convolutional neural networks for human action recognition.", "abstract": "We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods." }, { "pmid": "26147887", "title": "Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences.", "abstract": "It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. 
This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns." }, { "pmid": "10769305", "title": "Activation in human MT/MST by static images with implied motion.", "abstract": "A still photograph of an object in motion may convey dynamic information about the position of the object immediately before and after the photograph was taken (implied motion). Medial temporal/medial superior temporal cortex (MT/MST) is one of the main brain regions engaged in the perceptual analysis of visual motion. In two experiments we examined whether MT/MST is also involved in representing implied motion from static images. We found stronger functional magnetic resonance imaging (fMRI) activation within MT/MST during viewing of static photographs with implied motion compared to viewing of photographs without implied motion. These results suggest that brain regions involved in the visual analysis of motion are also engaged in processing implied dynamic information from static images." }, { "pmid": "16540566", "title": "A model of biological motion perception from configural form cues.", "abstract": "Biological motion perception is the compelling ability of the visual system to perceive complex human movements effortlessly and within a fraction of a second. Recent neuroimaging and neurophysiological studies have revealed that the visual perception of biological motion activates a widespread network of brain areas. The superior temporal sulcus has a crucial role within this network. The roles of other areas are less clear. We present a computational model based on neurally plausible assumptions to elucidate the contributions of motion and form signals to biological motion perception and the computations in the underlying brain network. The model simulates receptive fields for images of the static human body, as found by neuroimaging studies, and temporally integrates their responses by leaky integrator neurons. The model reveals a high correlation to data obtained by neurophysiological, neuroimaging, and psychophysical studies." }, { "pmid": "23757577", "title": "Learning discriminative key poses for action recognition.", "abstract": "In this paper, we present a new approach for human action recognition based on key-pose selection and representation. Poses in video frames are described by the proposed extensive pyramidal features (EPFs), which include the Gabor, Gaussian, and wavelet pyramids. These features are able to encode the orientation, intensity, and contour information and therefore provide an informative representation of human poses. Due to the fact that not all poses in a sequence are discriminative and representative, we further utilize the AdaBoost algorithm to learn a subset of discriminative poses. 
Given the boosted poses for each video sequence, a new classifier named weighted local naive Bayes nearest neighbor is proposed for the final action classification, which is demonstrated to be more accurate and robust than other classifiers, e.g., support vector machine (SVM) and naive Bayes nearest neighbor. The proposed method is systematically evaluated on the KTH data set, the Weizmann data set, the multiview IXMAS data set, and the challenging HMDB51 data set. Experimental results manifest that our method outperforms the state-of-the-art techniques in terms of recognition rate." }, { "pmid": "25104385", "title": "Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.", "abstract": "Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts." }, { "pmid": "23962364", "title": "Responses of Anterior Superior Temporal Polysensory (STPa) Neurons to \"Biological Motion\" Stimuli.", "abstract": "Abstract Cells have been found in the superior temporal polysensory area (STPa) of the macaque temporal cortex that are selectively responsive to the sight of particular whole body movements (e.g., walking) under normal lighting. These cells typically discriminate the direction of walking and the view of the body (e.g., left profile walking left). We investigated the extent to which these cells are responsive under \"biological motion\" conditions where the form of the body is defined only by the movement of light patches attached to the points of limb articulation. One-third of the cells (25/72) selective for the form and motion of walking bodies showed sensitivity to the moving light displays. Seven of these cells showed only partial sensitivity to form from motion, in so far as the cells responded more to moving light displays than to moving controls but failed to discriminate body view. These seven cells exhibited directional selectivity. Eighteen cells showed statistical discrimination for both direction of movement and body view under biological motion conditions. Most of these cells showed reduced responses to the impoverished moving light stimuli compared to full light conditions. The 18 cells were thus sensitive to detailed form information (body view) from the pattern of articulating motion. Cellular processing of the global pattern of articulation was indicated by the observations that none of these cells were found sensitive to movement of individual limbs and that jumbling the pattern of moving limbs reduced response magnitude. A further 10 cells were tested for sensitivity to moving light displays of whole body actions other than walking. Of these cells 5/10 showed selectivity for form displayed by biological motion stimuli that paralleled the selectivity under normal lighting conditions. 
The cell responses thus provide direct evidence for neural mechanisms computing form from nonrigid motion. The selectivity of the cells was for body view, specific direction, and specific type of body motion presented by moving light displays and is not predicted by many current computational approaches to the extraction of form from motion." }, { "pmid": "8836213", "title": "Integration of form and motion in the anterior superior temporal polysensory area (STPa) of the macaque monkey.", "abstract": "1. Processing of visual information in primates is believed to occur in at least two separate cortical pathways, commonly labeled the \"form\" and \"motion\" pathways. This division lies in marked contrast to our everyday visual experience, in which we have a unified percept of both the form and motion of objects, implying integration of both types of information. We report here on a neuronal population in the anterior part of the superior temporal polysensory area (STPa) both sensitive to form (heads and bodies) and selective for motion direction. 2. A total of 161 cells were found to be sensitive to body form and motion. The majority of cells (125 of 161, 78%) responded to only one combination of view and direction (termed unimodal cells, e.g., left profile view moving left, not right profile moving left, or left profile moving right). We show that the response of some of these cells is selective for both the motion and the form of a single object, not simply the juxtaposition of appropriate form and motion signals. 3. A smaller number of cells (9 of 161, 6%) responded selectively to two opposite combinations of view and direction (e.g., left profile moving left and right profile moving right, but no other view and direction combinations). A few cells (4 of 161, 2%) showed \"object-centered\" selectivity to view and direction combinations, responding to all directions of motion where the body moves in a direction compatible with the direction it faces, for example, responding to left profile going left, right profile going right, face view moving toward the observer, back view moving away from the observer, but not other view and direction combinations. 4. The majority of the neurons (106 of 138, 77%) selective for specific body view and direction combinations responded best to compatible motion (e.g., left profile moving left), and one fourth (23%) showed selectivity for incompatible motion (e.g., right profile moving left). 5. The relative strengths of motion and form inputs to cells in STPa conjointly sensitive to information about form and motion were assessed. The majority of the responses (95%) were characterized as showing nonlinear summation of form and motion inputs. 6. The capacity to discriminate different directions and different forms was compared across three populations of STPa cells, namely those sensitive to 1) form only, 2) motion only, and 3) both form and motion. The selectivity of the latter class could be predicted from combinations of the other two classes. 7. The response latencies of cells selective for form and motion are on average coincident with cells selective for direction of motion (but not stimulus form). Both these cell populations have response latencies on average 20 ms earlier than cells selective for static form. 8. Calculation of the average of early response latency cells (cell whose response latency was under the sample mean) suggests that direction information is present in cell responses some 35 ms before form information becomes evident. 
Direction information and form information become evident within 5 ms of each other in the average late response latency cells (those cells whose response latency was greater than the sample mean). Inputs relating to movement show an initial response period that does not discriminate direction. The quality of initial direction discrimination appeared to be independent of response latency. The initial discrimination of form was related to response latency in that cells with longer response latencies showed greater initial discrimination of form in their responses. We argue that these findings are consistent with form inputs arriving to area STPa approximately 20 ms after motion inputs into area STPa." }, { "pmid": "12689371", "title": "Electrophysiology and brain imaging of biological motion.", "abstract": "The movements of the faces and bodies of other conspecifics provide stimuli of considerable interest to the social primate. Studies of single cells, field potential recordings and functional neuroimaging data indicate that specialized visual mechanisms exist in the superior temporal sulcus (STS) of both human and non-human primates that produce selective neural responses to moving natural images of faces and bodies. STS mechanisms also process simplified displays of biological motion involving point lights marking the limb articulations of animate bodies and geometrical shapes whose motion simulates purposeful behaviour. Facial movements such as deviations in eye gaze, important for gauging an individual's social attention, and mouth movements, indicative of potential utterances, generate particularly robust neural responses that differentiate between movement types. Collectively such visual processing can enable the decoding of complex social signals and through its outputs to limbic, frontal and parietal systems the STS may play a part in enabling appropriate affective responses and social behaviour." }, { "pmid": "10526343", "title": "Hierarchical models of object recognition in cortex.", "abstract": "Visual processing in cortex is classically modeled as a hierarchy of increasingly sophisticated representations, naturally extending the model of simple to complex cells of Hubel and Wiesel. Surprisingly, little quantitative modeling has been done to explore the biological feasibility of this class of models to explain aspects of higher-level visual processing such as object recognition. We describe a new hierarchical model consistent with physiological data from inferotemporal cortex that accounts for this complex visual task and makes testable predictions. The model is based on a MAX-like operation applied to inputs to certain cortical neurons that may have a general role in cortical function." }, { "pmid": "18585892", "title": "Recognizing emotions expressed by body pose: a biologically inspired neural model.", "abstract": "Research into the visual perception of human emotion has traditionally focused on the facial expression of emotions. Recently researchers have turned to the more challenging field of emotional body language, i.e. emotion expression through body pose and motion. In this work, we approach recognition of basic emotional categories from a computational perspective. In keeping with recent computational models of the visual cortex, we construct a biologically plausible hierarchy of neural detectors, which can discriminate seven basic emotional states from static views of associated body poses. 
The model is evaluated against human test subjects on a recent set of stimuli manufactured for research on emotional body language." }, { "pmid": "19757967", "title": "Contributions of form, motion and task to biological motion perception.", "abstract": "The ability of human observers to detect 'biological motion' of humans and animals has been taken as evidence of specialized perceptual mechanisms. This ability remains unimpaired when the stimulus is reduced to a moving array of dots representing only the joints of the agent: the point light walker (PLW) (G. Johansson, 1973). Such stimuli arguably contain underlying form, and recent debate has centered on the contributions of form and motion to their processing (J. O. Garcia & E. D. Grossman, 2008; E. Hiris, 2007). Human actions contain periodic variations in form; we exploit this by using brief presentations to reveal how these natural variations affect perceptual processing. Comparing performance with static and dynamic presentations reveals the influence of integrative motion signals. Form information appears to play a critical role in biological motion processing and our results show that this information is supported, not replaced, by the integrative motion signals conveyed by the relationships between the dots of the PLW. However, our data also suggest strong task effects on the relevance of the information presented by the PLW. We discuss the relationship between task performance and stimulus in terms of form and motion information, and the implications for conclusions drawn from PLW based studies." }, { "pmid": "18484834", "title": "Temporal \"Bubbles\" reveal key features for point-light biological motion perception.", "abstract": "Humans are remarkably good at recognizing biological motion, even when depicted as point-light animations. There is currently some debate as to the relative importance of form and motion cues in the perception of biological motion from the simple dot displays. To investigate this issue, we adapted the \"Bubbles\" technique, most commonly used in face and object perception, to isolate the critical features for point-light biological motion perception. We find that observer sensitivity waxes and wanes during the course of an action, with peak discrimination performance most strongly correlated with moments of local opponent motion of the extremities. When dynamic cues are removed, instances that are most perceptually salient become the least salient, evidence that the strategies employed during point-light biological motion perception are not effective for recognizing human actions from static patterns. We conclude that local motion features, not global form templates, are most critical for perceiving point-light biological motion. These experiments also present a useful technique for identifying key features of dynamic events." } ]
Transactions of the Association for Computational Linguistics
28344978
PMC5361062
null
Large-scale Analysis of Counseling Conversations: An Application of Natural Language Processing to Mental Health
Mental illness is one of the most pressing public health issues of our time. While counseling and psychotherapy can be effective treatments, our knowledge about how to conduct successful counseling conversations has been limited due to lack of large-scale data with labeled outcomes of the conversations. In this paper, we present a large-scale, quantitative study on the discourse of text-message-based counseling conversations. We develop a set of novel computational discourse analysis methods to measure how various linguistic aspects of conversations are correlated with conversation outcomes. Applying techniques such as sequence-based conversation models, language model comparisons, message clustering, and psycholinguistics-inspired word frequency analyses, we discover actionable conversation strategies that are associated with better conversation outcomes.
2 Related Work

Our work relates to two lines of research:

Therapeutic Discourse Analysis & Psycholinguistics

The field of conversation analysis was born in the 1960s out of a suicide prevention center (Sacks and Jefferson, 1995; Van Dijk, 1997). Since then, conversation analysis has been applied to various clinical settings including psychotherapy (Labov and Fanshel, 1977). Work in psycholinguistics has demonstrated that the words people use can reveal important aspects of their social and psychological worlds (Pennebaker et al., 2003). Previous work also found that there are linguistic cues associated with depression (Ramirez-Esparza et al., 2008; Campbell and Pennebaker, 2003) as well as with suicide (Pestian et al., 2012). These findings are consistent with Beck's cognitive model of depression (1967; cognitive symptoms of depression precede the affective and mood symptoms) and with Pyszczynski and Greenberg's self-focus model of depression (1987; depressed persons engage in higher levels of self-focus than non-depressed persons).

In this work, we propose an operationalized psycholinguistic model of perspective change and further provide empirical evidence for these theoretical models of depression.

Large-scale Computational Linguistics Applied to Conversations

Large-scale studies have revealed subtle dynamics in conversations such as coordination or style matching effects (Niederhoffer and Pennebaker, 2002; Danescu-Niculescu-Mizil, 2012) as well as expressions of social power and status (Bramsen et al., 2011; Danescu-Niculescu-Mizil et al., 2012). Other studies have connected writing to measures of success in the context of requests (Althoff et al., 2014), user retention (Althoff and Leskovec, 2015), novels (Ashok et al., 2013), and scientific abstracts (Guerini et al., 2012). Prior work has modeled dialogue acts in conversational speech based on linguistic cues and discourse coherence (Stolcke et al., 2000). Unsupervised machine learning models have also been used to model conversations and segment them into speech acts, topical clusters, or stages. Most approaches employ Hidden Markov Model-like models (Barzilay and Lee, 2004; Ritter et al., 2010; Paul, 2012; Yang et al., 2014), which are also used in this work to model progression through conversation stages (a minimal sketch of this idea is given at the end of this section).

Very recently, technology-mediated counseling has allowed the collection of large datasets on counseling. Howes et al. (2014) find that symptom severity can be predicted from transcript data with accuracy comparable to face-to-face data, but suggest that insights into style and dialogue structure are needed to predict measures of patient progress. Counseling datasets have also been used to predict the conversation outcome (Huang, 2015), but without modeling the within-conversation dynamics that are studied in this work. Other work has explored how novel interfaces based on topic models can support counselors during conversations (Dinakar et al., 2014a; 2014b; 2015; Chen, 2014).

Our work joins these two lines of research by developing computational discourse analysis methods, applicable to large datasets, that are grounded in therapeutic discourse analysis and psycholinguistics.
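As a rough illustration of the stage modeling referred to above, the following Python sketch is a deliberately simplified stand-in: it maps messages to discrete stages by clustering TF-IDF vectors with k-means and estimates a first-order Markov transition matrix by counting, whereas the cited works (and this paper) use HMM-like models with latent stages learned via EM. All names, parameters, and the use of scikit-learn are illustrative assumptions, not the paper's implementation.

```python
# Simplified stand-in for HMM-like conversation-stage models: cluster messages
# into discrete "stages" and count first-order transitions between stages of
# consecutive messages. A full HMM would treat stages as latent variables.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def fit_stage_model(conversations, n_stages=5, seed=0):
    """conversations: list of conversations, each a list of message strings."""
    messages = [m for conv in conversations for m in conv]
    vec = TfidfVectorizer(min_df=5, stop_words="english")
    X = vec.fit_transform(messages)
    km = KMeans(n_clusters=n_stages, random_state=seed).fit(X)

    # Estimate stage-transition probabilities with add-one smoothing.
    trans = np.ones((n_stages, n_stages))
    i = 0
    for conv in conversations:
        labels = km.labels_[i:i + len(conv)]
        for a, b in zip(labels[:-1], labels[1:]):
            trans[a, b] += 1
        i += len(conv)
    trans /= trans.sum(axis=1, keepdims=True)
    return vec, km, trans
```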
[]
[]
Frontiers in Neuroinformatics
28381997
PMC5361107
10.3389/fninf.2017.00021
Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System
OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).
1.3. Related work

1.3.1. Generic workflow engines

Like OpenMOLE, other initiatives chose not to target a specific community. Kepler (Altintas et al., 2004) was one of the first general-purpose scientific workflow systems, recognizing the need for transparent and simplified access to high performance computing platforms more than a decade ago. Pegasus (Deelman et al., 2005) is a system that initially gained popularity for mapping complex workflows to resources in distributed environments without requiring input from the user.

PSOM (Pipeline System for Octave and Matlab) (Bellec et al., 2012) is a workflow system centered around Matlab/Octave. Although this is certainly a good asset for this userbase, it revolves around Matlab, a proprietary system. This inherently hinders sharing workflows with the wider community and reduces the reproducibility of experiments.

1.3.2. Community-tailored workflow engines

On the other hand, some communities have seen the emergence of tailored workflow managers. For example, the bioinformatics community has developed Taverna (Oinn et al., 2004) and Galaxy (Goecks et al., 2010) for its needs.

In the specific case of the neuroimaging field, two main solutions emerge: NiPype (Gorgolewski et al., 2011) and LONI (Rex et al., 2003). NiPype is organized around three layers. The most promising one is the top-level common interface that provides a Python abstraction of the main neuroimaging toolkits (FSL, SPM, …). It is extremely useful for comparing equivalent methods across multiple packages. NiPype also offers pipelining possibilities and a basic workload delegation layer targeting only the cluster environments SGE and PBS. Workflows are delegated to these environments as a whole, without the possibility of exploiting finer-grained parallelism among the different tasks.

The LONI Pipeline provides a graphical interface for choosing processing blocks from a predefined library to form the pipeline. It supports workload delegation to clusters preconfigured to understand the DRMAA API (Tröger et al., 2012). However, the LONI Pipeline displays limitations at three levels. First, the format used to define new nodes is XML (eXtensible Markup Language), and it assumes that the packaged tools offer a well-formed command line exposing their input parameters. On this aspect, the Python interfaces forming NiPype's top layer are far superior to the LONI Pipeline's approach. Second, to the best of our knowledge, workflows cannot be scripted. The third and main drawback of the LONI Pipeline is, in our opinion, its restrictive licensing, which prevents an external user from modifying the software and redistributing the modifications easily. Previous works in the literature have shown the importance of developing and releasing scientific software under Free and Open Source licenses (Stodden, 2009; Peng, 2011). This is of tremendous importance for enabling reproducibility and thorough peer review of scientific results.

Finally, we have recently noted another effort developed in Python: FastR (Achterberg et al., 2015). It is designed around a plugin system that enables connecting to different data sources or execution environments. At the moment, execution environments can only be addressed through the DRMAA (Distributed Resource Management Application API), but more environments should be provided in the future.

1.3.3. Level of support of HPC environments

Table 1 lists the support for various HPC environments in the workflow managers studied in this section.
It also sums up the features and domains of application of each tool.

Table 1. Summary of the features, HPC environments supported, and domains of application of various workflow managers.

Workflow engine | Local multi-processing | HPC support | Grid support | Cloud support
Galaxy | Yes | DRMAA clusters | No | No (manual cluster deployment)
Taverna | Yes | No | No | No
FastR | Yes | DRMAA clusters | No | No
LONI | No | DRMAA clusters | No | No (manual cluster deployment)
NiPype | Yes | PBS/Torque, SGE | No | No
Kepler | Yes | PBS, Condor, LoadLeveler | Globus | No
Pegasus | No (needs local Condor) | Condor, PBS | No | No (manual cluster deployment)
PSOM | Yes | No | No | No
OpenMOLE | Yes | Condor, Slurm, PBS, SGE, OAR | Ad hoc grids, gLite/EMI, Dirac, EGI | EC2 (fully automated)

Workflow engine | Scripting support | GUI | Generic/Community | License
Galaxy | No | Yes | BioInformatics | AFL 3.0
Taverna | No | Yes | BioInformatics | Apache 2.0
FastR | Python | No | Neuroimaging | BSD
LONI | No | Yes | Neuroimaging | Proprietary (LONI)
NiPype | Python | No | Neuroimaging | BSD
Kepler | Partly, with R | Yes | Generic | BSD
Pegasus | Python, Java, Perl | No | Generic | Apache 2.0
PSOM | Matlab | No | Generic | MIT
OpenMOLE | Domain Specific Language (Scala) | Yes | Generic | AGPL 3

Information was drawn from the tools' web pages when available, or otherwise from the reference paper cited in this section.

We are not aware of any other workflow engine that targets as many environments as OpenMOLE or, more importantly, that introduces an advanced service layer to distribute the workload. When it comes to very large scale infrastructures such as grids and clouds, sophisticated submission strategies that take into account the state of the resources and implement a level of fault tolerance must be available. Most of the other workflow engines offer service delegation layers that simply send jobs to a local cluster. OpenMOLE implements expert submission strategies (job grouping, oversubmission, …), harnesses efficient middlewares such as Dirac, and automatically manages end-to-end data transfer even across heterogeneous computing environments.

Compared to other workflow processing engines, OpenMOLE promotes a zero-deployment approach by accessing the computing environments from bare metal and copying on the fly any software component required for a successful remote execution. OpenMOLE also encourages the use of software components developed in heterogeneous programming languages and enables users to easily replace the elements involved in the workflow.
[ "24600388", "17070705", "22493575", "26368917", "20738864", "21897815", "11577229", "23658616", "22334356", "18519166", "24816548", "23758125", "15201187", "22144613", "12880830" ]
[ { "pmid": "24600388", "title": "Machine learning for neuroimaging with scikit-learn.", "abstract": "Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain." }, { "pmid": "17070705", "title": "Probabilistic diffusion tractography with multiple fibre orientations: What can we gain?", "abstract": "We present a direct extension of probabilistic diffusion tractography to the case of multiple fibre orientations. Using automatic relevance determination, we are able to perform online selection of the number of fibre orientations supported by the data at each voxel, simplifying the problem of tracking in a multi-orientation field. We then apply the identical probabilistic algorithm to tractography in the multi- and single-fibre cases in a number of example systems which have previously been tracked successfully or unsuccessfully with single-fibre tractography. We show that multi-fibre tractography offers significant advantages in sensitivity when tracking non-dominant fibre populations, but does not dramatically change tractography results for the dominant pathways." }, { "pmid": "22493575", "title": "The pipeline system for Octave and Matlab (PSOM): a lightweight scripting framework and execution engine for scientific workflows.", "abstract": "The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install and supports of variety of operating systems (Linux, Windows, Mac). 
We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources." }, { "pmid": "26368917", "title": "Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns.", "abstract": "Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model's predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic." }, { "pmid": "20738864", "title": "Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences.", "abstract": "Increased reliance on computational approaches in the life sciences has revealed grave concerns about how accessible and reproducible computation-reliant results truly are. Galaxy http://usegalaxy.org, an open web-based platform for genomic research, addresses these problems. Galaxy automatically tracks and manages data provenance and provides support for capturing the context and intent of computational methods. Galaxy Pages are interactive, web-based documents that provide users with a medium to communicate a complete computational analysis." }, { "pmid": "21897815", "title": "Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python.", "abstract": "Current neuroimaging software offer users an incredible opportunity to analyze their data in different ways, with different underlying assumptions. Several sophisticated software packages (e.g., AFNI, BrainVoyager, FSL, FreeSurfer, Nipy, R, SPM) are used to process and analyze large and often diverse (highly multi-dimensional) data. However, this heterogeneous collection of specialized applications creates several issues that hinder replicable, efficient, and optimal use of neuroimaging analysis approaches: (1) No uniform access to neuroimaging analysis software and usage information; (2) No framework for comparative algorithm development and dissemination; (3) Personnel turnover in laboratories often limits methodological continuity and training new personnel takes time; (4) Neuroimaging software packages do not address computational efficiency; and (5) Methods sections in journal articles are inadequate for reproducing results. 
To address these issues, we present Nipype (Neuroimaging in Python: Pipelines and Interfaces; http://nipy.org/nipype), an open-source, community-developed, software package, and scriptable library. Nipype solves the issues by providing Interfaces to existing neuroimaging software with uniform usage semantics and by facilitating interaction between these packages using Workflows. Nipype provides an environment that encourages interactive exploration of algorithms, eases the design of Workflows within and between packages, allows rapid comparative development of algorithms and reduces the learning curve necessary to use different packages. Nipype supports both local and remote execution on multi-core machines and clusters, without additional scripting. Nipype is Berkeley Software Distribution licensed, allowing anyone unrestricted usage. An open, community-driven development philosophy allows the software to quickly adapt and address the varied needs of the evolving neuroimaging community, especially in the context of increasing demand for reproducible research." }, { "pmid": "11577229", "title": "Distributed and overlapping representations of faces and objects in ventral temporal cortex.", "abstract": "The functional architecture of the object vision pathway in the human brain was investigated using functional magnetic resonance imaging to measure patterns of response in ventral temporal cortex while subjects viewed faces, cats, five categories of man-made objects, and nonsense pictures. A distinct pattern of response was found for each stimulus category. The distinctiveness of the response to a given category was not due simply to the regions that responded maximally to that category, because the category being viewed also could be identified on the basis of the pattern of response when those regions were excluded from the analysis. Patterns of response that discriminated among all categories were found even within cortical regions that responded maximally to only one category. These results indicate that the representations of faces and objects in ventral temporal cortex are widely distributed and overlapping." }, { "pmid": "23658616", "title": "Accelerating fibre orientation estimation from diffusion weighted magnetic resonance imaging using GPUs.", "abstract": "With the performance of central processing units (CPUs) having effectively reached a limit, parallel processing offers an alternative for applications with high computational demands. Modern graphics processing units (GPUs) are massively parallel processors that can execute simultaneously thousands of light-weight processes. In this study, we propose and implement a parallel GPU-based design of a popular method that is used for the analysis of brain magnetic resonance imaging (MRI). More specifically, we are concerned with a model-based approach for extracting tissue structural information from diffusion-weighted (DW) MRI data. DW-MRI offers, through tractography approaches, the only way to study brain structural connectivity, non-invasively and in-vivo. We parallelise the Bayesian inference framework for the ball & stick model, as it is implemented in the tractography toolbox of the popular FSL software package (University of Oxford). For our implementation, we utilise the Compute Unified Device Architecture (CUDA) programming model. 
We show that the parameter estimation, performed through Markov Chain Monte Carlo (MCMC), is accelerated by at least two orders of magnitude, when comparing a single GPU with the respective sequential single-core CPU version. We also illustrate similar speed-up factors (up to 120x) when comparing a multi-GPU with a multi-CPU implementation." }, { "pmid": "22334356", "title": "Model-based analysis of multishell diffusion MR data for tractography: how to get over fitting problems.", "abstract": "In this article, we highlight an issue that arises when using multiple b-values in a model-based analysis of diffusion MR data for tractography. The non-monoexponential decay, commonly observed in experimental data, is shown to induce overfitting in the distribution of fiber orientations when not considered in the model. Extra fiber orientations perpendicular to the main orientation arise to compensate for the slower apparent signal decay at higher b-values. We propose a simple extension to the ball and stick model based on a continuous gamma distribution of diffusivities, which significantly improves the fitting and reduces the overfitting. Using in vivo experimental data, we show that this model outperforms a simpler, noise floor model, especially at the interfaces between brain tissues, suggesting that partial volume effects are a major cause of the observed non-monoexponential decay. This model may be helpful for future data acquisition strategies that may attempt to combine multiple shells to improve estimates of fiber orientations in white matter and near the cortex." }, { "pmid": "18519166", "title": "Provenance in neuroimaging.", "abstract": "Provenance, the description of the history of a set of data, has grown more important with the proliferation of research consortia-related efforts in neuroimaging. Knowledge about the origin and history of an image is crucial for establishing data and results quality; detailed information about how it was processed, including the specific software routines and operating systems that were used, is necessary for proper interpretation, high fidelity replication and re-use. We have drafted a mechanism for describing provenance in a simple and easy to use environment, alleviating the burden of documentation from the user while still providing a rich description of an image's provenance. This combination of ease of use and highly descriptive metadata should greatly facilitate the collection of provenance and subsequent sharing of data." }, { "pmid": "24816548", "title": "Automatic whole brain MRI segmentation of the developing neonatal brain.", "abstract": "Magnetic resonance (MR) imaging is increasingly being used to assess brain growth and development in infants. Such studies are often based on quantitative analysis of anatomical segmentations of brain MR images. However, the large changes in brain shape and appearance associated with development, the lower signal to noise ratio and partial volume effects in the neonatal brain present challenges for automatic segmentation of neonatal MR imaging data. In this study, we propose a framework for accurate intensity-based segmentation of the developing neonatal brain, from the early preterm period to term-equivalent age, into 50 brain regions. We present a novel segmentation algorithm that models the intensities across the whole brain by introducing a structural hierarchy and anatomical constraints. 
The proposed method is compared to standard atlas-based techniques and improves label overlaps with respect to manual reference segmentations. We demonstrate that the proposed technique achieves highly accurate results and is very robust across a wide range of gestational ages, from 24 weeks gestational age to term-equivalent age." }, { "pmid": "23758125", "title": "Automated processing of zebrafish imaging data: a survey.", "abstract": "Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines." }, { "pmid": "15201187", "title": "Taverna: a tool for the composition and enactment of bioinformatics workflows.", "abstract": "MOTIVATION\nIn silico experiments in bioinformatics involve the co-ordinated use of computational tools and information repositories. A growing number of these resources are being made available with programmatic access in the form of Web services. Bioinformatics scientists will need to orchestrate these Web services in workflows as part of their analyses.\n\n\nRESULTS\nThe Taverna project has developed a tool for the composition and enactment of bioinformatics workflows for the life sciences community. The tool includes a workbench application which provides a graphical user interface for the composition of workflows. These workflows are written in a new language called the simple conceptual unified flow language (Scufl), where by each step within a workflow represents one atomic task. Two examples are used to illustrate the ease by which in silico experiments can be represented as Scufl workflows using the workbench application." }, { "pmid": "22144613", "title": "Reproducible research in computational science.", "abstract": "Computational science has led to exciting new developments, but the nature of the work has exposed limitations in our ability to evaluate published findings. Reproducibility has the potential to serve as a minimum standard for judging scientific claims when full independent replication of a study is not possible." 
}, { "pmid": "12880830", "title": "The LONI Pipeline Processing Environment.", "abstract": "The analysis of raw data in neuroimaging has become a computationally entrenched process with many intricate steps run on increasingly larger datasets. Many software packages exist that provide either complete analyses or specific steps in an analysis. These packages often possess diverse input and output requirements, utilize different file formats, run in particular environments, and have limited abilities with certain types of data. The combination of these packages to achieve more sensitive and accurate results has become a common tactic in brain mapping studies but requires much work to ensure valid interoperation between programs. The handling, organization, and storage of intermediate data can prove difficult as well. The LONI Pipeline Processing Environment is a simple, efficient, and distributed computing solution to these problems enabling software inclusion from different laboratories in different environments. It is used here to derive a T1-weighted MRI atlas of the human brain from 452 normal young adult subjects with fully automated processing. The LONI Pipeline Processing Environment's parallel processing efficiency using an integrated client/server dataflow model was 80.9% when running the atlas generation pipeline from a PC client (Acer TravelMate 340T) on 48 dedicated server processors (Silicon Graphics Inc. Origin 3000). The environment was 97.5% efficient when the same analysis was run on eight dedicated processors." } ]
BMC Medical Informatics and Decision Making
28330491
PMC5363029
10.1186/s12911-017-0424-6
Orchestrating differential data access for translational research: a pilot implementation
Background: Translational researchers need robust IT solutions to access a range of data types, varying from public data sets to pseudonymised patient information with restricted access, provided on a case-by-case basis. The reason for this complication is that managing access policies to sensitive human data must consider issues of data confidentiality, identifiability, extent of consent, and data usage agreements. All these ethical, social and legal aspects must be incorporated into a differential management of restricted access to sensitive data. Methods: In this paper we present a pilot system that uses several common open source software components in a novel combination to coordinate access to heterogeneous biomedical data repositories containing open data (open access) as well as sensitive data (restricted access) in the domain of biobanking and biosample research. Our approach is based on a digital identity federation and software to manage resource access entitlements. Results: Open source software components were assembled and configured in such a way that they allow for different ways of restricted access according to the protection needs of the data. We have tested the resulting pilot infrastructure and assessed its performance, feasibility and reproducibility. Conclusions: Common open source software components are sufficient to allow for the creation of a secure system for differential access to sensitive data. The implementation of this system is exemplary for researchers facing similar requirements for restricted access data. Here we report experience and lessons learnt from our pilot implementation, which may be useful for similar use cases. Furthermore, we discuss possible extensions for more complex scenarios.
Related work

Many different approaches and systems are used for tackling the aims and issues we have addressed in the pilot. Biological material repositories similar to BioSD exist, varying in scope [82, 83], geographical reference area [84] and scale [85, 86]. BioSD is mainly a European reference resource for public biosample data and metadata. A similar variety exists in the arena of clinical data resources [87]. In this field, the LPC Catalogue is among the most prominent biobank catalogues in Europe, while a wide range of biobanks with different scales and scopes exist [88]. Several technologies and approaches are available to manage identities and application access rights [27–32]. For instance, systems popular in the commercial sector, such as OpenID [89], tend to prefer technical simplicity over advanced features (e.g., identity federation is not a standard feature of OpenID). We have chosen Shibboleth for multiple reasons: it is reliable software based on the SAML standard, it is well known among research organisations, and the organisations involved in the pilot were already using Shibboleth when we started our work. Permission and access management is an issue wider than technology; it encompasses IT solutions as well as policies such as access audits, vetting of new personnel, and regulatory compliance [16, 90]. The access control used in REMS can be seen as a variant of an access control list (ACL) approach [91]. Compared to similar products [92–94], REMS is focused on granting resource access based on commitment to a data access agreement and on final approval by personnel holding the data access control role. Moreover, REMS allows for the definition of workflows to obtain and finalise the access approval procedures, and it logs the actions taken during the execution of these workflows. Finally, both REMS and the other components we have used are modular and can be composed into a larger system (e.g., with respect to the distribution of identities). While one might prefer simpler options on a smaller scale [95], our approach gives the flexibility to implement larger infrastructures with existing common technologies. The approach used in the pilot does not address the further data protection that is often ensured by establishing different data access levels (e.g., original patient records, de-identified/obfuscated data, aggregated data, disclosure of only summary statistics computed at the source of data [96]) and by classifying users based on user trustworthiness [97, 98]. The pilot approach is agnostic with respect to the resource that is controlled and the specific protection mechanism it has in place, which is made possible by the fact that both Shibboleth and REMS essentially see a resource as a reference, such as a URL to a web application or a web link to a file download. For instance, one might adopt our approach for mediating access to resources providing data summaries in ways similar to the BBMRI [98, 99], as well as for resources that grant access to web services [100] or local computations [96].

The pilot in the context of data access frameworks

In the life sciences, medical data increasingly have to be accessed and linked effectively. This expanding volume of human data is stored in various databases, repositories, and patient registries, and must be managed while protecting data privacy and the legitimate interests of patients and data subjects.
To ensure protection of human data while enabling data sharing, several approaches have been suggested, ranging from the creation of a political framework in the form of resolutions or treaties to operational guidelines for data sharing [101]. Such frameworks include concepts like legitimate public health purpose, minimum information necessary, privacy and security standards, data use agreements [102], ethical codes like the IMIA (International Medical Informatics Association) Code of Ethics for Health Information Professionals [103] and AMIA's (American Medical Informatics Association) Code of Professional and Ethical Conduct, guidance for genomic data, and potential privacy risks [104]. More concrete approaches are a human rights-based system for an international code of conduct for genomic and clinical data sharing [105], recommendations about clinical databases and privacy protections [106], and healthcare privacy protection based on differential privacy-preserving methods (iDASH, integrating Data for Analysis, Anonymization, and Sharing) [107, 108].

Genetic sequence databases are an important part of many biomedical research efforts and are contained in many data repositories and biosample databases. However, human genetic data should only be made available if it can be protected so that the privacy of data subjects is not compromised. The problem is that individual genomic sequence data (e.g. SNPs) are potentially "identifiable" using common identifiers [106, 109, 110]. In biobanking, many new population biobanks and cohort studies have been created to produce information about possible associations between genotype and phenotype, associations that are important for understanding the causes of diseases. Together with BBMRI, several initiatives exist that address the protection of data privacy and that further the standardization and harmonization of genomic data management and the sharing of data and biosamples, for example: the Public Population Project in Genomics (P3G [111]), the International Society for Biological and Environmental Repositories (ISBER [112]), the Biobank Standardisation and Harmonisation for Research Excellence projects [113] and the Electronic Medical Records and Genomics (eMERGE) Network [11, 114].

The constraints arising from the limitations defined by the informed consent of the data subject have to be reflected in data access agreements and data transfer agreements. In general, data can only be made available to the extent allowed under the local legal requirements relevant for the data provider, including ethics votes, approval by a data access committee and the consent of the data subject. Data sharing should be an important part of an overall data management plan, which is a key element to support data access and sustainability. A data sharing agreement should supplement and not supplant the data management plan, because the sharing agreement is about building relationships and trust; it supports long-term planning and helps maximize the use of data.

Anonymisation is becoming increasingly difficult to achieve due to the growth of health data, such as genomic data, that is potentially identifying. As mentioned above, although anonymisation protects the privacy of data subjects, it is an imperfect solution and must be supplemented by additional measures that build trust and prevent researchers from trying to identify study subjects.
In the end, what is necessary for research is a culture of responsibility and data governance when dealing with human data. Building blocks that support and strengthen such a culture include data sharing agreements, strict authentication and authorisation methods, and the monitoring and tracking of data usage. The pilot fits into such efforts because, by combining several open source components, it provides an efficient authentication and authorisation framework for access to sensitive data that can support trust building. The pilot must also be seen in connection with the creation of a European Open Science Cloud, a federated environment for scientific data sharing and reuse, based on existing and emerging elements [115]. The complexity of current data sharing practices requires new mechanisms that are more flexible and adjustable and that employ proven components, such as the open source authentication components used in the pilot.
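As a purely illustrative sketch, and not the actual REMS or Shibboleth APIs, the following Python fragment models the kind of entitlement workflow described above: access is granted only after the applicant commits to a data access agreement and a person holding the data access control role approves the request, and every step is logged for audit. All names and values are hypothetical.

```python
# Generic model of an entitlement workflow: agreement -> approval -> audited access check.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AccessRequest:
    user: str
    resource: str
    agreement_signed: bool = False
    approved: bool = False
    log: list = field(default_factory=list)

    def record(self, event: str) -> None:
        # Append a timestamped audit entry for every action on the request.
        self.log.append((datetime.utcnow().isoformat(), event))

def commit_to_agreement(req: AccessRequest) -> None:
    req.agreement_signed = True
    req.record("data access agreement accepted")

def approve(req: AccessRequest, approver: str) -> None:
    if not req.agreement_signed:
        raise PermissionError("agreement must be accepted before approval")
    req.approved = True
    req.record(f"approved by {approver}")

def has_access(req: AccessRequest) -> bool:
    req.record("entitlement checked")
    return req.agreement_signed and req.approved

# Hypothetical usage: a researcher requests a restricted biobank data set.
req = AccessRequest(user="[email protected]",
                    resource="https://biobank.example.org/dataset/42")
commit_to_agreement(req)
approve(req, approver="data-access-committee")
assert has_access(req)
```

The design choice mirrored here is the one highlighted in the related work: the resource is treated only as a reference (here a URL), so the same workflow can mediate access to web applications, file downloads, web services or local computations without knowing their internal protection mechanisms.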
[ "18182604", "24008998", "21734173", "22392843", "24144394", "22112900", "17077452", "21632745", "14624258", "23305810", "12039510", "21890450", "17130065", "24513229", "12904522", "12424005", "22505850", "23413435", "18560089", "22096232", "24265224", "25361974", "26527722", "25355519", "24849882", "26111507", "22807660", "26809823", "27338147", "22417641", "22588877", "15188009", "22139929", "21969471", "22434835", "24608524", "24835880", "22955497", "18096909", "24821736", "24682735", "22722688", "23221359", "23533569", "19717803", "25377061", "19567443", "22733977", "24573176", "20051768", "25521367", "25521230", "15247459", "22404490", "23743551", "11396337" ]
[ { "pmid": "24008998", "title": "Some experiences and opportunities for big data in translational research.", "abstract": "Health care has become increasingly information intensive. The advent of genomic data, integrated into patient care, significantly accelerates the complexity and amount of clinical data. Translational research in the present day increasingly embraces new biomedical discovery in this data-intensive world, thus entering the domain of \"big data.\" The Electronic Medical Records and Genomics consortium has taught us many lessons, while simultaneously advances in commodity computing methods enable the academic community to affordably manage and process big data. Although great promise can emerge from the adoption of big data methods and philosophy, the heterogeneity and complexity of clinical data, in particular, pose additional challenges for big data inferencing and clinical application. However, the ultimate comparability and consistency of heterogeneous clinical information sources can be enhanced by existing and emerging data standards, which promise to bring order to clinical data chaos. Meaningful Use data standards in particular have already simplified the task of identifying clinical phenotyping patterns in electronic health records." }, { "pmid": "21734173", "title": "Reengineering translational science: the time is right.", "abstract": "Despite dramatic advances in the molecular pathogenesis of disease, translation of basic biomedical research into safe and effective clinical applications remains a slow, expensive, and failure-prone endeavor. To pursue opportunities for disruptive translational innovation, the U.S. National Institutes of Health (NIH) intends to establish a new entity, the National Center for Advancing Translational Sciences (NCATS). The mission of NCATS is to catalyze the generation of innovative methods and technologies that will enhance the development, testing, and implementation of diagnostics and therapeutics across a wide range of diseases and conditions. The new center's activities will complement, and not compete with, translational research being carried out at NIH and elsewhere in the public and private sectors." }, { "pmid": "22392843", "title": "Knowledge engineering for health: a new discipline required to bridge the \"ICT gap\" between research and healthcare.", "abstract": "Despite vast amount of money and research being channeled toward biomedical research, relatively little impact has been made on routine clinical practice. At the heart of this failure is the information and communication technology \"chasm\" that exists between research and healthcare. A new focus on \"knowledge engineering for health\" is needed to facilitate knowledge transmission across the research-healthcare gap. This discipline is required to engineer the bidirectional flow of data: processing research data and knowledge to identify clinically relevant advances and delivering these into healthcare use; conversely, making outcomes from the practice of medicine suitably available for use by the research community. This system will be able to self-optimize in that outcomes for patients treated by decisions that were based on the latest research knowledge will be fed back to the research world. A series of meetings, culminating in the \"I-Health 2011\" workshop, have brought together interdisciplinary experts to map the challenges and requirements for such a system. Here, we describe the main conclusions from these meetings. 
An \"I4Health\" interdisciplinary network of experts now exists to promote the key aims and objectives, namely \"integrating and interpreting information for individualized healthcare,\" by developing the \"knowledge engineering for health\" domain." }, { "pmid": "22112900", "title": "Why we need easy access to all data from all clinical trials and how to accomplish it.", "abstract": "International calls for registering all trials involving humans and for sharing the results, and sometimes also the raw data and the trial protocols, have increased in recent years. Such calls have come, for example, from the Organization for Economic Cooperation and Development (OECD), the World Health Organization (WHO), the US National Institutes of Heath, the US Congress, the European Commission, the European ombudsman, journal editors, The Cochrane Collaboration, and several funders, for example the UK Medical Research Council, the Wellcome Trust, the Bill and Melinda Gates Foundation and the Hewlett Foundation. Calls for data sharing have mostly been restricted to publicly-funded research, but I argue that the distinction between publicly-funded and industry-funded research is an artificial and irrelevant one, as the interests of the patients must override commercial interests. I also argue why it is a moral imperative to render all results from all trials involving humans, also healthy volunteers, publicly available. Respect for trial participants who often run a personal and unknown risk by participating in trials requires that they--and therefore also the society at large that they represent--be seen as the ultimate owners of trial data. Data sharing would lead to tremendous benefits for patients, progress in science, and rational use of healthcare resources based on evidence we can trust. The harmful consequences are minor compared to the benefits. It has been amply documented that the current situation, with selective reporting of favorable research and biased data analyses being the norm rather than the exception, is harmful to patients and has led to the death of tens of thousands of patients that could have been avoided. National and supranational legislation is needed to make data sharing happen as guidelines and other voluntary agreements do not work. I propose the contents of such legislation and of appropriate sanctions to hold accountable those who refuse to share their data." }, { "pmid": "17077452", "title": "Toward a national framework for the secondary use of health data: an American Medical Informatics Association White Paper.", "abstract": "Secondary use of health data applies personal health information (PHI) for uses outside of direct health care delivery. It includes such activities as analysis, research, quality and safety measurement, public health, payment, provider certification or accreditation, marketing, and other business applications, including strictly commercial activities. Secondary use of health data can enhance health care experiences for individuals, expand knowledge about disease and appropriate treatments, strengthen understanding about effectiveness and efficiency of health care systems, support public health and security goals, and aid businesses in meeting customers' needs. Yet, complex ethical, political, technical, and social issues surround the secondary use of health data. 
While not new, these issues play increasingly critical and complex roles given current public and private sector activities not only expanding health data volume, but also improving access to data. Lack of coherent policies and standard \"good practices\" for secondary use of health data impedes efforts to strengthen the U.S. health care system. The nation requires a framework for the secondary use of health data with a robust infrastructure of policies, standards, and best practices. Such a framework can guide and facilitate widespread collection, storage, aggregation, linkage, and transmission of health data. The framework will provide appropriate protections for legitimate secondary use." }, { "pmid": "21632745", "title": "Ethical and practical challenges of sharing data from genome-wide association studies: the eMERGE Consortium experience.", "abstract": "In 2007, the National Human Genome Research Institute (NHGRI) established the Electronic MEdical Records and GEnomics (eMERGE) Consortium (www.gwas.net) to develop, disseminate, and apply approaches to research that combine DNA biorepositories with electronic medical record (EMR) systems for large-scale, high-throughput genetic research. One of the major ethical and administrative challenges for the eMERGE Consortium has been complying with existing data-sharing policies. This paper discusses the challenges of sharing genomic data linked to health information in the electronic medical record (EMR) and explores the issues as they relate to sharing both within a large consortium and in compliance with the National Institutes of Health (NIH) data-sharing policy. We use the eMERGE Consortium experience to explore data-sharing challenges from the perspective of multiple stakeholders (i.e., research participants, investigators, and research institutions), provide recommendations for researchers and institutions, and call for clearer guidance from the NIH regarding ethical implementation of its data-sharing policy." }, { "pmid": "23305810", "title": "Security and privacy in electronic health records: a systematic literature review.", "abstract": "OBJECTIVE\nTo report the results of a systematic literature review concerning the security and privacy of electronic health record (EHR) systems.\n\n\nDATA SOURCES\nOriginal articles written in English found in MEDLINE, ACM Digital Library, Wiley InterScience, IEEE Digital Library, Science@Direct, MetaPress, ERIC, CINAHL and Trip Database.\n\n\nSTUDY SELECTION\nOnly those articles dealing with the security and privacy of EHR systems.\n\n\nDATA EXTRACTION\nThe extraction of 775 articles using a predefined search string, the outcome of which was reviewed by three authors and checked by a fourth.\n\n\nRESULTS\nA total of 49 articles were selected, of which 26 used standards or regulations related to the privacy and security of EHR data. The most widely used regulations are the Health Insurance Portability and Accountability Act (HIPAA) and the European Data Protection Directive 95/46/EC. We found 23 articles that used symmetric key and/or asymmetric key schemes and 13 articles that employed the pseudo anonymity technique in EHR systems. A total of 11 articles propose the use of a digital signature scheme based on PKI (Public Key Infrastructure) and 13 articles propose a login/password (seven of them combined with a digital certificate or PIN) for authentication. The preferred access control model appears to be Role-Based Access Control (RBAC), since it is used in 27 studies. 
Ten of these studies discuss who should define the EHR systems' roles. Eleven studies discuss who should provide access to EHR data: patients or health entities. Sixteen of the articles reviewed indicate that it is necessary to override defined access policies in the case of an emergency. In 25 articles an audit-log of the system is produced. Only four studies mention that system users and/or health staff should be trained in security and privacy.\n\n\nCONCLUSIONS\nRecent years have witnessed the design of standards and the promulgation of directives concerning security and privacy in EHR systems. However, more work should be done to adopt these regulations and to deploy secure EHR systems." }, { "pmid": "12039510", "title": "Research on tobacco use among teenagers: ethical challenges.", "abstract": "Recent increases in adolescent smoking portend upcoming public health challenges as the majority of smokers initiate long-term addiction during youth, but experience major health consequences later in life. To effectively address this important teenage and adult health issue, critical research information and early interventions are needed, yet conducting tobacco research with teen smokers poses substantial challenges, including several ethical dilemmas. This paper reviews some of the ethical issues presented in etiologic and clinical treatment research addressing adolescent smoking. Common problems and possible solutions are presented. Issues of parent/guardian involvement, decision-making ability of teens, the need to maintain confidentiality are discussed, along with the specific problems of recruitment, compensation, and ethical challenges that arise in group treatment settings. Context-specific ethical adjustments and alternative perspectives are likely to be needed if we are to overcome procedural difficulties in conducting teen smoking studies." }, { "pmid": "21890450", "title": "Conducting research with tribal communities: sovereignty, ethics, and data-sharing issues.", "abstract": "BACKGROUND\nWhen conducting research with American Indian tribes, informed consent beyond conventional institutional review board (IRB) review is needed because of the potential for adverse consequences at a community or governmental level that are unrecognized by academic researchers.\n\n\nOBJECTIVES\nIn this article, we review sovereignty, research ethics, and data-sharing considerations when doing community-based participatory health-related or natural-resource-related research with American Indian nations and present a model material and data-sharing agreement that meets tribal and university requirements.\n\n\nDISCUSSION\nOnly tribal nations themselves can identify potential adverse outcomes, and they can do this only if they understand the assumptions and methods of the proposed research. Tribes must be truly equal partners in study design, data collection, interpretation, and publication. Advances in protection of intellectual property rights (IPR) are also applicable to IRB reviews, as are principles of sovereignty and indigenous rights, all of which affect data ownership and control.\n\n\nCONCLUSIONS\nAcademic researchers engaged in tribal projects should become familiar with all three areas: sovereignty, ethics and informed consent, and IPR. We recommend developing an agreement with tribal partners that reflects both health-related IRB and natural-resource-related IPR considerations." 
}, { "pmid": "17130065", "title": "Health-related stigma: rethinking concepts and interventions.", "abstract": "As a feature of many chronic health problems, stigma contributes to a hidden burden of illness. Health-related stigma is typically characterized by social disqualification of individuals and populations who are identified with particular health problems. Another aspect is characterized by social disqualification targeting other features of a person's identity-such as ethnicity, sexual preferences or socio-economic status-which through limited access to services and other social disadvantages result in adverse effects on health. Health professionals therefore have substantial interests in recognizing and mitigating the impact of stigma as both a feature and a cause of many health problems. Rendering historical concepts of stigma as a discrediting physical attribute obsolete, two generations of Goffman-inspired sociological studies have redefined stigma as a socially discrediting situation of individuals. Based on that formulation and to specify health research interests, a working definition of health-related stigma is proposed. It emphasizes the particular features of target health problems and the role of particular social, cultural and economic settings in developing countries. As a practical matter, it relates to various strategies for intervention, which may focus on controlling or treating target health problems with informed health and social policies, countering the disposition of perpetrators to stigmatize, and supporting those who are stigmatized to limit their vulnerability and strengthen their resilience. Our suggestions for health studies of stigma highlight needs for disease- and culture-specific research that serves the interests of international health." }, { "pmid": "24513229", "title": "Intervening within and across levels: a multilevel approach to stigma and public health.", "abstract": "This article uses a multilevel approach to review the literature on interventions with promise to reduce social stigma and its consequences for population health. Three levels of an ecological system are discussed. The intrapersonal level describes interventions directed at individuals, to either enhance coping strategies of people who belong to stigmatized groups or change attitudes and behaviors of the non-stigmatized. The interpersonal level describes interventions that target dyadic or small group interactions. The structural level describes interventions directed at the social-political environment, such as laws and policies. These intervention levels are related and they reciprocally affect one another. In this article we review the literature within each level. We suggest that interventions at any level have the potential to affect other levels of an ecological system through a process of mutually reinforcing reciprocal processes. We discuss research priorities, in particular longitudinal research that incorporates multiple outcomes across a system." }, { "pmid": "12424005", "title": "The pharmaceutical industry as an informant.", "abstract": "The pharmaceutical industry spends more time and resources on generation, collation, and dissemination of medical information than it does on production of medicines. This information is essential as a resource for development of medicines, but is also needed to satisfy licensing requirements, protect patents, promote sales, and advise patients, prescribers, and dispensers. 
Such information is of great commercial value, and most of it is confidential, protected by regulations about intellectual property rights. Through their generation and dissemination of information, transnational companies can greatly influence clinical practice. Sometimes, their commercially determined goals represent genuine advances in health-care provision, but most often they are implicated in excessive and costly production of information that is largely kept secret, often duplicated, and can risk undermining the best interests of patients and society." }, { "pmid": "23413435", "title": "A new way to protect privacy in large-scale genome-wide association studies.", "abstract": "MOTIVATION\nIncreased availability of various genotyping techniques has initiated a race for finding genetic markers that can be used in diagnostics and personalized medicine. Although many genetic risk factors are known, key causes of common diseases with complex heritage patterns are still unknown. Identification of such complex traits requires a targeted study over a large collection of data. Ideally, such studies bring together data from many biobanks. However, data aggregation on such a large scale raises many privacy issues.\n\n\nRESULTS\nWe show how to conduct such studies without violating privacy of individual donors and without leaking the data to third parties. The presented solution has provable security guarantees.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "18560089", "title": "Access and privacy rights using web security standards to increase patient empowerment.", "abstract": "Electronic Health Record (EHR) systems are becoming more and more sophisticated and include nowadays numerous applications, which are not only accessed by medical professionals, but also by accounting and administrative personnel. This could represent a problem concerning basic rights such as privacy and confidentiality. The principles, guidelines and recommendations compiled by the OECD protection of privacy and trans-border flow of personal data are described and considered within health information system development. Granting access to an EHR should be dependent upon the owner of the record; the patient: he must be entitled to define who is allowed to access his EHRs, besides the access control scheme each health organization may have implemented. In this way, it's not only up to health professionals to decide who have access to what, but the patient himself. Implementing such a policy is walking towards patient empowerment which society should encourage and governments should promote. The paper then introduces a technical solution based on web security standards. This would give patients the ability to monitor and control which entities have access to their personal EHRs, thus empowering them with the knowledge of how much of his medical history is known and by whom. It is necessary to create standard data access protocols, mechanisms and policies to protect the privacy rights and furthermore, to enable patients, to automatically track the movement (flow) of their personal data and information in the context of health information systems. This solution must be functional and, above all, user-friendly and the interface should take in consideration some heuristics of usability in order to provide the user with the best tools. 
The current official standards on confidentiality and privacy in health care, currently being developed within the EU, are explained, in order to achieve a consensual idea of the guidelines that all member states should follow to transfer such principles into national laws. A perspective is given on the state of the art concerning web security standards, which can be used to easily engineer health information systems complying with the patient empowering goals. In conclusion health systems with the characteristics thus described are technically feasible and should be generally implemented and deployed." }, { "pmid": "22096232", "title": "The BioSample Database (BioSD) at the European Bioinformatics Institute.", "abstract": "The BioSample Database (http://www.ebi.ac.uk/biosamples) is a new database at EBI that stores information about biological samples used in molecular experiments, such as sequencing, gene expression or proteomics. The goals of the BioSample Database include: (i) recording and linking of sample information consistently within EBI databases such as ENA, ArrayExpress and PRIDE; (ii) minimizing data entry efforts for EBI database submitters by enabling submitting sample descriptions once and referencing them later in data submissions to assay databases and (iii) supporting cross database queries by sample characteristics. Each sample in the database is assigned an accession number. The database includes a growing set of reference samples, such as cell lines, which are repeatedly used in experiments and can be easily referenced from any database by their accession numbers. Accession numbers for the reference samples will be exchanged with a similar database at NCBI. The samples in the database can be queried by their attributes, such as sample types, disease names or sample providers. A simple tab-delimited format facilitates submissions of sample information to the database, initially via email to [email protected]." }, { "pmid": "24265224", "title": "Updates to BioSamples database at European Bioinformatics Institute.", "abstract": "The BioSamples database at the EBI (http://www.ebi.ac.uk/biosamples) provides an integration point for BioSamples information between technology specific databases at the EBI, projects such as ENCODE and reference collections such as cell lines. The database delivers a unified query interface and API to query sample information across EBI's databases and provides links back to assay databases. Sample groups are used to manage related samples, e.g. those from an experimental submission, or a single reference collection. Infrastructural improvements include a new user interface with ontological and key word queries, a new query API, a new data submission API, complete RDF data download and a supporting SPARQL endpoint, accessioning at the point of submission to the European Nucleotide Archive and European Genotype Phenotype Archives and improved query response times." }, { "pmid": "25361974", "title": "ArrayExpress update--simplifying data submissions.", "abstract": "The ArrayExpress Archive of Functional Genomics Data (http://www.ebi.ac.uk/arrayexpress) is an international functional genomics database at the European Bioinformatics Institute (EMBL-EBI) recommended by most journals as a repository for data supporting peer-reviewed publications. It contains data from over 7000 public sequencing and 42,000 array-based studies comprising over 1.5 million assays in total. 
The proportion of sequencing-based submissions has grown significantly over the last few years and has doubled in the last 18 months, whilst the rate of microarray submissions is growing slightly. All data in ArrayExpress are available in the MAGE-TAB format, which allows robust linking to data analysis and visualization tools and standardized analysis. The main development over the last two years has been the release of a new data submission tool Annotare, which has reduced the average submission time almost 3-fold. In the near future, Annotare will become the only submission route into ArrayExpress, alongside MAGE-TAB format-based pipelines. ArrayExpress is a stable and highly accessed resource. Our future tasks include automation of data flows and further integration with other EMBL-EBI resources for the representation of multi-omics data." }, { "pmid": "26527722", "title": "2016 update of the PRIDE database and its related tools.", "abstract": "The PRoteomics IDEntifications (PRIDE) database is one of the world-leading data repositories of mass spectrometry (MS)-based proteomics data. Since the beginning of 2014, PRIDE Archive (http://www.ebi.ac.uk/pride/archive/) is the new PRIDE archival system, replacing the original PRIDE database. Here we summarize the developments in PRIDE resources and related tools since the previous update manuscript in the Database Issue in 2013. PRIDE Archive constitutes a complete redevelopment of the original PRIDE, comprising a new storage backend, data submission system and web interface, among other components. PRIDE Archive supports the most-widely used PSI (Proteomics Standards Initiative) data standard formats (mzML and mzIdentML) and implements the data requirements and guidelines of the ProteomeXchange Consortium. The wide adoption of ProteomeXchange within the community has triggered an unprecedented increase in the number of submitted data sets (around 150 data sets per month). We outline some statistics on the current PRIDE Archive data contents. We also report on the status of the PRIDE related stand-alone tools: PRIDE Inspector, PRIDE Converter 2 and the ProteomeXchange submission tool. Finally, we will give a brief update on the resources under development 'PRIDE Cluster' and 'PRIDE Proteomes', which provide a complementary view and quality-scored information of the peptide and protein identification data available in PRIDE Archive." }, { "pmid": "25355519", "title": "COSMIC: exploring the world's knowledge of somatic mutations in human cancer.", "abstract": "COSMIC, the Catalogue Of Somatic Mutations In Cancer (http://cancer.sanger.ac.uk) is the world's largest and most comprehensive resource for exploring the impact of somatic mutations in human cancer. Our latest release (v70; Aug 2014) describes 2 002 811 coding point mutations in over one million tumor samples and across most human genes. To emphasize depth of knowledge on known cancer genes, mutation information is curated manually from the scientific literature, allowing very precise definitions of disease types and patient details. Combination of almost 20,000 published studies gives substantial resolution of how mutations and phenotypes relate in human cancer, providing insights into the stratification of mutations and biomarkers across cancer patient populations. Conversely, our curation of cancer genomes (over 12,000) emphasizes knowledge breadth, driving discovery of unrecognized cancer-driving hotspots and molecular targets. 
Our high-resolution curation approach is globally unique, giving substantial insight into molecular biomarkers in human oncology. In addition, COSMIC also details more than six million noncoding mutations, 10,534 gene fusions, 61,299 genome rearrangements, 695,504 abnormal copy number segments and 60,119,787 abnormal expression variants. All these types of somatic mutation are annotated to both the human genome and each affected coding gene, then correlated across disease and mutation types." }, { "pmid": "24849882", "title": "A Minimum Data Set for Sharing Biobank Samples, Information, and Data: MIABIS.", "abstract": "Numerous successful scientific results have emerged from projects using shared biobanked samples and data. In order to facilitate the discovery of underutilized biobank samples, it would be helpful if a global biobank register containing descriptive information about the samples existed. But first, for shared data to be comparable, it needs to be harmonized. In compliance with the aim of BBMRI (Biobanking and Biomolecular Resources Research Infrastructure), to harmonize biobanking across Europe, and the conclusion that the move towards a universal information infrastructure for biobanking is directly connected to the issues of semantic interoperability through standardized message formats and controlled terminologies, we have developed an updated version of the minimum data set for biobanks and studies using human biospecimens. The data set called MIABIS (Minimum Information About BIobank data Sharing) consists of 52 attributes describing a biobank's content. The aim is to facilitate data discovery through harmonization of data elements describing a biobank at the aggregate level. As many biobanks across Europe possess a tremendous amount of samples that are underutilized, this would help pave the way for biobank networking on a national and international level, resulting in time and cost savings and faster emergence of new scientific results." }, { "pmid": "22807660", "title": "Bioinformatics meets user-centred design: a perspective.", "abstract": "Designers have a saying that \"the joy of an early release lasts but a short time. The bitterness of an unusable system lasts for years.\" It is indeed disappointing to discover that your data resources are not being used to their full potential. Not only have you invested your time, effort, and research grant on the project, but you may face costly redesigns if you want to improve the system later. This scenario would be less likely if the product was designed to provide users with exactly what they need, so that it is fit for purpose before its launch. We work at EMBL-European Bioinformatics Institute (EMBL-EBI), and we consult extensively with life science researchers to find out what they need from biological data resources. We have found that although users believe that the bioinformatics community is providing accurate and valuable data, they often find the interfaces to these resources tricky to use and navigate. We believe that if you can find out what your users want even before you create the first mock-up of a system, the final product will provide a better user experience. This would encourage more people to use the resource and they would have greater access to the data, which could ultimately lead to more scientific discoveries. In this paper, we explore the need for a user-centred design (UCD) strategy when designing bioinformatics resources and illustrate this with examples from our work at EMBL-EBI. 
Our aim is to introduce the reader to how selected UCD techniques may be successfully applied to software design for bioinformatics." }, { "pmid": "26809823", "title": "[Connecting biobanks of large European cohorts (EU Project BBMRI-LPC)].", "abstract": "BACKGROUND\nIn addition to the Biobanking and BioMolecular resources Research Initiative (BBMRI), which is establishing a European research infrastructure for biobanks, a network for large European prospective cohorts (LPC) is being built to facilitate transnational research into important groups of diseases and health care. One instrument for this is the database \"LPC Catalogue,\" which supports access to the biomaterials of the participating cohorts.\n\n\nOBJECTIVES\nTo present the LPC Catalogue as a relevant tool for connecting European biobanks. In addition, the LPC Catalogue has been extended to establish compatibility with existing Minimum Information About Biobank data Sharing (MIABIS) and to allow for more detailed search requests. This article describes the LPC Catalogue, its organizational and technical structure, and the aforementioned extensions.\n\n\nMATERIALS AND METHODS\nThe LPC Catalogue provides a structured overview of the participating LPCs. It offers various retrieval possibilities and a search function. To support more detailed search requests, a new module has been developed, called a \"data cube\". The provision of data by the cohorts is being supported by a \"connector\" component.\n\n\nRESULTS\nThe LPC Catalogue contains data on 22 cohorts and more than 3.8 million biosamples. At present, data on the biosamples of three cohorts have been acquired for the \"cube,\" which is continuously being expanded. In the BBMRI-LPC, tendering for scientific projects using the data and samples of the participating cohorts is currently being carried out. In this context, several proposals have already been approved.\n\n\nCONCLUSIONS\nThe LPC Catalogue is supporting transnational access to biosamples. A comparison with existing solutions illustrates the relevance of its functionality." }, { "pmid": "27338147", "title": "Making sense of big data in health research: Towards an EU action plan.", "abstract": "Medicine and healthcare are undergoing profound changes. Whole-genome sequencing and high-resolution imaging technologies are key drivers of this rapid and crucial transformation. Technological innovation combined with automation and miniaturization has triggered an explosion in data production that will soon reach exabyte proportions. How are we going to deal with this exponential increase in data production? The potential of \"big data\" for improving health is enormous but, at the same time, we face a wide range of challenges to overcome urgently. Europe is very proud of its cultural diversity; however, exploitation of the data made available through advances in genomic medicine, imaging, and a wide range of mobile health applications or connected devices is hampered by numerous historical, technical, legal, and political barriers. European health systems and databases are diverse and fragmented. There is a lack of harmonization of data formats, processing, analysis, and data transfer, which leads to incompatibilities and lost opportunities. Legal frameworks for data sharing are evolving. Clinicians, researchers, and citizens need improved methods, tools, and training to generate, analyze, and query data effectively. 
Addressing these barriers will contribute to creating the European Single Market for health, which will improve health and healthcare for all Europeans." }, { "pmid": "22588877", "title": "The cBio cancer genomics portal: an open platform for exploring multidimensional cancer genomics data.", "abstract": "The cBio Cancer Genomics Portal (http://cbioportal.org) is an open-access resource for interactive exploration of multidimensional cancer genomics data sets, currently providing access to data from more than 5,000 tumor samples from 20 cancer studies. The cBio Cancer Genomics Portal significantly lowers the barriers between complex genomic data and cancer researchers who want rapid, intuitive, and high-quality access to molecular profiles and clinical attributes from large-scale cancer genomics projects and empowers researchers to translate these rich data sets into biologic insights and clinical applications." }, { "pmid": "15188009", "title": "The COSMIC (Catalogue of Somatic Mutations in Cancer) database and website.", "abstract": "The discovery of mutations in cancer genes has advanced our understanding of cancer. These results are dispersed across the scientific literature and with the availability of the human genome sequence will continue to accrue. The COSMIC (Catalogue of Somatic Mutations in Cancer) database and website have been developed to store somatic mutation data in a single location and display the data and other information related to human cancer. To populate this resource, data has currently been extracted from reports in the scientific literature for somatic mutations in four genes, BRAF, HRAS, KRAS2 and NRAS. At present, the database holds information on 66 634 samples and reports a total of 10 647 mutations. Through the web pages, these data can be queried, displayed as figures or tables and exported in a number of formats. COSMIC is an ongoing project that will continue to curate somatic mutation data and release it through the website." }, { "pmid": "22139929", "title": "BioProject and BioSample databases at NCBI: facilitating capture and organization of metadata.", "abstract": "As the volume and complexity of data sets archived at NCBI grow rapidly, so does the need to gather and organize the associated metadata. Although metadata has been collected for some archival databases, previously, there was no centralized approach at NCBI for collecting this information and using it across databases. The BioProject database was recently established to facilitate organization and classification of project data submitted to NCBI, EBI and DDBJ databases. It captures descriptive information about research projects that result in high volume submissions to archival databases, ties together related data across multiple archives and serves as a central portal by which to inform users of data availability. Concomitantly, the BioSample database is being developed to capture descriptive information about the biological samples investigated in projects. BioProject and BioSample records link to corresponding data stored in archival repositories. Submissions are supported by a web-based Submission Portal that guides users through a series of forms for input of rich metadata describing their projects and samples. Together, these databases offer improved ways for users to query, locate, integrate and interpret the masses of data held in NCBI's archival repositories. 
The BioProject and BioSample databases are available at http://www.ncbi.nlm.nih.gov/bioproject and http://www.ncbi.nlm.nih.gov/biosample, respectively." }, { "pmid": "21969471", "title": "Knowledge sharing and collaboration in translational research, and the DC-THERA Directory.", "abstract": "Biomedical research relies increasingly on large collections of data sets and knowledge whose generation, representation and analysis often require large collaborative and interdisciplinary efforts. This dimension of 'big data' research calls for the development of computational tools to manage such a vast amount of data, as well as tools that can improve communication and access to information from collaborating researchers and from the wider community. Whenever research projects have a defined temporal scope, an additional issue of data management arises, namely how the knowledge generated within the project can be made available beyond its boundaries and life-time. DC-THERA is a European 'Network of Excellence' (NoE) that spawned a very large collaborative and interdisciplinary research community, focusing on the development of novel immunotherapies derived from fundamental research in dendritic cell immunobiology. In this article we introduce the DC-THERA Directory, which is an information system designed to support knowledge management for this research community and beyond. We present how the use of metadata and Semantic Web technologies can effectively help to organize the knowledge generated by modern collaborative research, how these technologies can enable effective data management solutions during and beyond the project lifecycle, and how resources such as the DC-THERA Directory fit into the larger context of e-science." }, { "pmid": "22434835", "title": "Research resources: curating the new eagle-i discovery system.", "abstract": "Development of biocuration processes and guidelines for new data types or projects is a challenging task. Each project finds its way toward defining annotation standards and ensuring data consistency with varying degrees of planning and different tools to support and/or report on consistency. Further, this process may be data type specific even within the context of a single project. This article describes our experiences with eagle-i, a 2-year pilot project to develop a federated network of data repositories in which unpublished, unshared or otherwise 'invisible' scientific resources could be inventoried and made accessible to the scientific community. During the course of eagle-i development, the main challenges we experienced related to the difficulty of collecting and curating data while the system and the data model were simultaneously built, and a deficiency and diversity of data management strategies in the laboratories from which the source data was obtained. We discuss our approach to biocuration and the importance of improving information management strategies to the research process, specifically with regard to the inventorying and usage of research resources. Finally, we highlight the commonalities and differences between eagle-i and similar efforts with the hope that our lessons learned will assist other biocuration endeavors. DATABASE URL: www.eagle-i.net." 
}, { "pmid": "24608524", "title": "Translational research platforms integrating clinical and omics data: a review of publicly available solutions.", "abstract": "The rise of personalized medicine and the availability of high-throughput molecular analyses in the context of clinical care have increased the need for adequate tools for translational researchers to manage and explore these data. We reviewed the biomedical literature for translational platforms allowing the management and exploration of clinical and omics data, and identified seven platforms: BRISK, caTRIP, cBio Cancer Portal, G-DOC, iCOD, iDASH and tranSMART. We analyzed these platforms along seven major axes. (1) The community axis regrouped information regarding initiators and funders of the project, as well as availability status and references. (2) We regrouped under the information content axis the nature of the clinical and omics data handled by each system. (3) The privacy management environment axis encompassed functionalities allowing control over data privacy. (4) In the analysis support axis, we detailed the analytical and statistical tools provided by the platforms. We also explored (5) interoperability support and (6) system requirements. The final axis (7) platform support listed the availability of documentation and installation procedures. A large heterogeneity was observed in regard to the capability to manage phenotype information in addition to omics data, their security and interoperability features. The analytical and visualization features strongly depend on the considered platform. Similarly, the availability of the systems is variable. This review aims at providing the reader with the background to choose the platform best suited to their needs. To conclude, we discuss the desiderata for optimal translational research platforms, in terms of privacy, interoperability and technical features." }, { "pmid": "24835880", "title": "A review of international biobanks and networks: success factors and key benchmarks.", "abstract": "Biobanks and biobanking networks are involved in varying degrees in the collection, processing, storage, and dissemination of biological specimens. This review outlines the approaches that 16 of the largest biobanks and biobanking networks in Europe, North America, Australia, and Asia have taken to collecting and distributing human research specimens and managing scientific initiatives while covering operating costs. Many are small operations that exist as either a single or a few freezers in a research laboratory, hospital clinical laboratory, or pathology suite. Larger academic and commercial biobanks operate to support large clinical and epidemiological studies. Operational and business models depend on the medical and research missions of their institutions and home countries. Some national biobanks operate with a centralized physical biobank that accepts samples from multiple locations. Others operate under a \"federated\" model where each institution maintains its own collections but agrees to list them on a central shared database. Some collections are \"project-driven\" meaning that specimens are collected and distributed to answer specific research questions. \"General\" collections are those that exist to establish a reference collection, that is, not to meet particular research goals but to be available to respond to multiple requests for an assortment of research uses. 
These individual and networked biobanking systems operate under a variety of business models, usually incorporating some form of partial cost recovery, while requiring at least partial public or government funding. Each has a well-defined biospecimen-access policy in place that specifies requirements that must be met-such as ethical clearance and the expertise to perform the proposed experiments-to obtain samples for research. The success of all of these biobanking models depends on a variety of factors including well-defined goals, a solid business plan, and specimen collections that are developed according to strict quality and operational controls." }, { "pmid": "22955497", "title": "Security practices and regulatory compliance in the healthcare industry.", "abstract": "OBJECTIVE\nSecuring protected health information is a critical responsibility of every healthcare organization. We explore information security practices and identify practice patterns that are associated with improved regulatory compliance.\n\n\nDESIGN\nWe employed Ward's cluster analysis using minimum variance based on the adoption of security practices. Variance between organizations was measured using dichotomous data indicating the presence or absence of each security practice. Using t tests, we identified the relationships between the clusters of security practices and their regulatory compliance.\n\n\nMEASUREMENT\nWe utilized the results from the Kroll/Healthcare Information and Management Systems Society telephone-based survey of 250 US healthcare organizations including adoption status of security practices, breach incidents, and perceived compliance levels on Health Information Technology for Economic and Clinical Health, Health Insurance Portability and Accountability Act, Red Flags rules, Centers for Medicare and Medicaid Services, and state laws governing patient information security.\n\n\nRESULTS\nOur analysis identified three clusters (which we call leaders, followers, and laggers) based on the variance of security practice patterns. The clusters have significant differences among non-technical practices rather than technical practices, and the highest level of compliance was associated with hospitals that employed a balanced approach between technical and non-technical practices (or between one-off and cultural practices).\n\n\nCONCLUSIONS\nHospitals in the highest level of compliance were significantly managing third parties' breaches and training. Audit practices were important to those who scored in the middle of the pack on compliance. Our results provide security practice benchmarks for healthcare administrators and can help policy makers in developing strategic and practical guidelines for practice adoption." }, { "pmid": "18096909", "title": "caGrid 1.0: an enterprise Grid infrastructure for biomedical research.", "abstract": "OBJECTIVE\nTo develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research.\n\n\nDESIGN\nAn enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. 
It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study.\n\n\nMEASUREMENTS\nThe caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications.\n\n\nRESULTS\nThe caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid.\n\n\nCONCLUSIONS\nWhile caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community." }, { "pmid": "24821736", "title": "CAPriCORN: Chicago Area Patient-Centered Outcomes Research Network.", "abstract": "The Chicago Area Patient-Centered Outcomes Research Network (CAPriCORN) represents an unprecedented collaboration across diverse healthcare institutions including private, county, and state hospitals and health systems, a consortium of Federally Qualified Health Centers, and two Department of Veterans Affairs hospitals. CAPriCORN builds on the strengths of our institutions to develop a cross-cutting infrastructure for sustainable and patient-centered comparative effectiveness research in Chicago. Unique aspects include collaboration with the University HealthSystem Consortium to aggregate data across sites, a centralized communication center to integrate patient recruitment with the data infrastructure, and a centralized institutional review board to ensure a strong and efficient human subject protection program. With coordination by the Chicago Community Trust and the Illinois Medical District Commission, CAPriCORN will model how healthcare institutions can overcome barriers of data integration, marketplace competition, and care fragmentation to develop, test, and implement strategies to improve care for diverse populations and reduce health disparities." }, { "pmid": "24682735", "title": "The eGenVar data management system--cataloguing and sharing sensitive data and metadata for the life sciences.", "abstract": "Systematic data management and controlled data sharing aim at increasing reproducibility, reducing redundancy in work, and providing a way to efficiently locate complementing or contradicting information. One method of achieving this is collecting data in a central repository or in a location that is part of a federated system and providing interfaces to the data. However, certain data, such as data from biobanks or clinical studies, may, for legal and privacy reasons, often not be stored in public repositories. Instead, we describe a metadata cataloguing system and a software suite for reporting the presence of data from the life sciences domain. The system stores three types of metadata: file information, file provenance and data lineage, and content descriptions. 
Our software suite includes both graphical and command line interfaces that allow users to report and tag files with these different metadata types. Importantly, the files remain in their original locations with their existing access-control mechanisms in place, while our system provides descriptions of their contents and relationships. Our system and software suite thereby provide a common framework for cataloguing and sharing both public and private data. Database URL: http://bigr.medisin.ntnu.no/data/eGenVar/." }, { "pmid": "22722688", "title": "Securing the data economy: translating privacy and enacting security in the development of DataSHIELD.", "abstract": "Contemporary bioscience is seeing the emergence of a new data economy: with data as its fundamental unit of exchange. While sharing data within this new 'economy' provides many potential advantages, the sharing of individual data raises important social and ethical concerns. We examine ongoing development of one technology, DataSHIELD, which appears to elide privacy concerns about sharing data by enabling shared analysis while not actually sharing any individual-level data. We combine presentation of the development of DataSHIELD with presentation of an ethnographic study of a workshop to test the technology. DataSHIELD produced an application of the norm of privacy that was practical, flexible and operationalizable in researchers' everyday activities, and one which fulfilled the requirements of ethics committees. We demonstrated that an analysis run via DataSHIELD could precisely replicate results produced by a standard analysis where all data are physically pooled and analyzed together. In developing DataSHIELD, the ethical concept of privacy was transformed into an issue of security. Development of DataSHIELD was based on social practices as well as scientific and ethical motivations. Therefore, the 'success' of DataSHIELD would, likewise, be dependent on more than just the mathematics and the security of the technology." }, { "pmid": "23533569", "title": "SHRINE: enabling nationally scalable multi-site disease studies.", "abstract": "Results of medical research studies are often contradictory or cannot be reproduced. One reason is that there may not be enough patient subjects available for observation for a long enough time period. Another reason is that patient populations may vary considerably with respect to geographic and demographic boundaries thus limiting how broadly the results apply. Even when similar patient populations are pooled together from multiple locations, differences in medical treatment and record systems can limit which outcome measures can be commonly analyzed. In total, these differences in medical research settings can lead to differing conclusions or can even prevent some studies from starting. We thus sought to create a patient research system that could aggregate as many patient observations as possible from a large number of hospitals in a uniform way. 
We call this system the 'Shared Health Research Information Network', with the following properties: (1) reuse electronic health data from everyday clinical care for research purposes, (2) respect patient privacy and hospital autonomy, (3) aggregate patient populations across many hospitals to achieve statistically significant sample sizes that can be validated independently of a single research setting, (4) harmonize the observation facts recorded at each institution such that queries can be made across many hospitals in parallel, (5) scale to regional and national collaborations. The purpose of this report is to provide open source software for multi-site clinical studies and to report on early uses of this application. At this time SHRINE implementations have been used for multi-site studies of autism co-morbidity, juvenile idiopathic arthritis, peripartum cardiomyopathy, colorectal cancer, diabetes, and others. The wide range of study objectives and growing adoption suggest that SHRINE may be applicable beyond the research uses and participating hospitals named in this report." }, { "pmid": "19717803", "title": "The University of Michigan Honest Broker: a Web-based service for clinical and translational research and practice.", "abstract": "For the success of clinical and translational science, a seamless interoperation is required between clinical and research information technology. Addressing this need, the Michigan Clinical Research Collaboratory (MCRC) was created. The MCRC employed a standards-driven Web Services architecture to create the U-M Honest Broker, which enabled sharing of clinical and research data among medical disciplines and separate institutions. Design objectives were to facilitate sharing of data, maintain a master patient index (MPI), deidentification of data, and routing data to preauthorized destination systems for use in clinical care, research, or both. This article describes the architecture and design of the U-M HB system and the successful demonstration project. Seventy percent of eligible patients were recruited for a prospective study examining the correlation between interventional cardiac catheterizations and depression. The U-M Honest Broker delivered on the promise of using structured clinical knowledge shared among providers to help clinical and translational research." }, { "pmid": "25377061", "title": "A systematic review of barriers to data sharing in public health.", "abstract": "BACKGROUND\nIn the current information age, the use of data has become essential for decision making in public health at the local, national, and global level. Despite a global commitment to the use and sharing of public health data, this can be challenging in reality. No systematic framework or global operational guidelines have been created for data sharing in public health. Barriers at different levels have limited data sharing but have only been anecdotally discussed or in the context of specific case studies. Incomplete systematic evidence on the scope and variety of these barriers has limited opportunities to maximize the value and use of public health data for science and policy.\n\n\nMETHODS\nWe conducted a systematic literature review of potential barriers to public health data sharing. Documents that described barriers to sharing of routinely collected public health data were eligible for inclusion and reviewed independently by a team of experts. 
We grouped identified barriers in a taxonomy for a focused international dialogue on solutions.\n\n\nRESULTS\nTwenty potential barriers were identified and classified in six categories: technical, motivational, economic, political, legal and ethical. The first three categories are deeply rooted in well-known challenges of health information systems for which structural solutions have yet to be found; the last three have solutions that lie in an international dialogue aimed at generating consensus on policies and instruments for data sharing.\n\n\nCONCLUSIONS\nThe simultaneous effect of multiple interacting barriers ranging from technical to intangible issues has greatly complicated advances in public health data sharing. A systematic framework of barriers to data sharing in public health will be essential to accelerate the use of valuable information for the global good." }, { "pmid": "24573176", "title": "A human rights approach to an international code of conduct for genomic and clinical data sharing.", "abstract": "Fostering data sharing is a scientific and ethical imperative. Health gains can be achieved more comprehensively and quickly by combining large, information-rich datasets from across conventionally siloed disciplines and geographic areas. While collaboration for data sharing is increasingly embraced by policymakers and the international biomedical community, we lack a common ethical and legal framework to connect regulators, funders, consortia, and research projects so as to facilitate genomic and clinical data linkage, global science collaboration, and responsible research conduct. Governance tools can be used to responsibly steer the sharing of data for proper stewardship of research discovery, genomics research resources, and their clinical applications. In this article, we propose that an international code of conduct be designed to enable global genomic and clinical data sharing for biomedical research. To give this proposed code universal application and accountability, however, we propose to position it within a human rights framework. This proposition is not without precedent: international treaties have long recognized that everyone has a right to the benefits of scientific progress and its applications, and a right to the protection of the moral and material interests resulting from scientific productions. It is time to apply these twin rights to internationally collaborative genomic and clinical data sharing." }, { "pmid": "20051768", "title": "Technical and policy approaches to balancing patient privacy and data sharing in clinical and translational research.", "abstract": "INTRODUCTION\nClinical researchers need to share data to support scientific validation and information reuse and to comply with a host of regulations and directives from funders. Various organizations are constructing informatics resources in the form of centralized databases to ensure reuse of data derived from sponsored research. The widespread use of such open databases is contingent on the protection of patient privacy.\n\n\nMETHODS\nWe review privacy-related problems associated with data sharing for clinical research from technical and policy perspectives. We investigate existing policies for secondary data sharing and privacy requirements in the context of data derived from research and clinical settings. 
In particular, we focus on policies specified by the US National Institutes of Health and the Health Insurance Portability and Accountability Act and touch on how these policies are related to current and future use of data stored in public database archives. We address aspects of data privacy and identifiability from a technical, although approachable, perspective and summarize how biomedical databanks can be exploited and seemingly anonymous records can be reidentified using various resources without hacking into secure computer systems.\n\n\nRESULTS\nWe highlight which clinical and translational data features, specified in emerging research models, are potentially vulnerable or exploitable. In the process, we recount a recent privacy-related concern associated with the publication of aggregate statistics from pooled genome-wide association studies that have had a significant impact on the data sharing policies of National Institutes of Health-sponsored databanks.\n\n\nCONCLUSION\nBased on our analysis and observations we provide a list of recommendations that cover various technical, legal, and policy mechanisms that open clinical databases can adopt to strengthen data privacy protection as they move toward wider deployment and adoption." }, { "pmid": "25521367", "title": "Scalable privacy-preserving data sharing methodology for genome-wide association studies: an application to iDASH healthcare privacy protection challenge.", "abstract": "In response to the growing interest in genome-wide association study (GWAS) data privacy, the Integrating Data for Analysis, Anonymization and SHaring (iDASH) center organized the iDASH Healthcare Privacy Protection Challenge, with the aim of investigating the effectiveness of applying privacy-preserving methodologies to human genetic data. This paper is based on a submission to the iDASH Healthcare Privacy Protection Challenge. We apply privacy-preserving methods that are adapted from Uhler et al. 2013 and Yu et al. 2014 to the challenge's data and analyze the data utility after the data are perturbed by the privacy-preserving methods. Major contributions of this paper include new interpretation of the χ2 statistic in a GWAS setting and new results about the Hamming distance score, a key component for one of the privacy-preserving methods." }, { "pmid": "25521230", "title": "A community assessment of privacy preserving techniques for human genomes.", "abstract": "To answer the need for the rigorous protection of biomedical data, we organized the Critical Assessment of Data Privacy and Protection initiative as a community effort to evaluate privacy-preserving dissemination techniques for biomedical data. We focused on the challenge of sharing aggregate human genomic data (e.g., allele frequencies) in a way that preserves the privacy of the data donors, without undermining the utility of genome-wide association studies (GWAS) or impeding their dissemination. Specifically, we designed two problems for disseminating the raw data and the analysis outcome, respectively, based on publicly available data from HapMap and from the Personal Genome Project. A total of six teams participated in the challenges. The final results were presented at a workshop of the iDASH (integrating Data for Analysis, 'anonymization,' and SHaring) National Center for Biomedical Computing. We report the results of the challenge and our findings about the current genome privacy protection techniques." 
}, { "pmid": "22404490", "title": "The tension between data sharing and the protection of privacy in genomics research.", "abstract": "Next-generation sequencing and global data sharing challenge many of the governance mechanisms currently in place to protect the privacy of research participants. These challenges will make it more difficult to guarantee anonymity for participants, provide information to satisfy the requirements of informed consent, and ensure complete withdrawal from research when requested. To move forward, we need to improve the current governance systems for research so that they are responsive to individual privacy concerns but can also be effective at a global level. We need to develop a system of e-governance that can complement existing governance systems but that places greater reliance on the use of technology to ensure compliance with ethical and legal requirements. These new governance structures must be able to address the concerns of research participants while at the same time ensuring effective data sharing that promotes public trust in genomics research." }, { "pmid": "23743551", "title": "The Electronic Medical Records and Genomics (eMERGE) Network: past, present, and future.", "abstract": "The Electronic Medical Records and Genomics Network is a National Human Genome Research Institute-funded consortium engaged in the development of methods and best practices for using the electronic medical record as a tool for genomic research. Now in its sixth year and second funding cycle, and comprising nine research groups and a coordinating center, the network has played a major role in validating the concept that clinical data derived from electronic medical records can be used successfully for genomic research. Current work is advancing knowledge in multiple disciplines at the intersection of genomics and health-care informatics, particularly for electronic phenotyping, genome-wide association studies, genomic medicine implementation, and the ethical and regulatory issues associated with genomics research and returning results to study participants. Here, we describe the evolution, accomplishments, opportunities, and challenges of the network from its inception as a five-group consortium focused on genotype-phenotype associations for genomic discovery to its current form as a nine-group consortium pivoting toward the implementation of genomic medicine." } ]
JMIR mHealth and uHealth
28279948
PMC5364324
10.2196/mhealth.6552
Remote Monitoring of Hypertension Diseases in Pregnancy: A Pilot Study
Background: Although remote monitoring (RM) has proven its added value in various health care domains, little is known about the remote follow-up of pregnant women diagnosed with a gestational hypertensive disorder (GHD). Objective: The aim of this study was to evaluate the added value of a remote follow-up program for pregnant women diagnosed with GHD. Methods: A 1-year retrospective study was performed in the outpatient clinic of a second-level prenatal center where pregnant women with GHD received RM or conventional care (CC). Primary study endpoints were the number of prenatal visits and admissions to the prenatal observation ward. Secondary outcomes were gestational outcome, mode of delivery, neonatal outcome, and admission to neonatal intensive care (NIC). Differences in continuous and categorical variables in maternal demographics and characteristics were tested using the unpaired two-sample Student t test or the Mann-Whitney U test and the chi-square test. Both univariate and multivariate analyses were performed for prenatal follow-up and gestational outcomes. All statistical analyses were done at a nominal significance level of alpha=.05. Results: Of the 166 patients diagnosed with GHD, 53 received RM and 113 CC. After excluding 5 patients in the RM group and 15 in the CC group because of missing data, 48 patients in the RM group and 98 in the CC group were included in the final analysis. The RM group had more women diagnosed with gestational hypertension but fewer with preeclampsia when compared with CC (81.25% vs 42.86% and 14.58% vs 43.87%). Compared with CC, univariate analysis in RM showed fewer inductions of labor, more spontaneous labors, and fewer maternal and neonatal hospitalizations (CC vs RM: 48.98% vs 25.00%; 31.63% vs 60.42%; 74.49% vs 56.25%; and 27.55% vs 10.42%). This also held in multivariate analysis, except for hospitalizations. Conclusions: RM follow-up of women with GHD is a promising tool in prenatal care. It opens the perspective of reversing the current trend toward ever more antenatal interventions and an increasingly medicalized antenatal care.
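As an illustration of the univariate tests named above (chi-square for categorical outcomes; unpaired t test or Mann-Whitney U test for continuous variables), here is a minimal Python sketch. The 2x2 counts are back-calculated from the reported group sizes (48 RM, 98 CC) and the induction-of-labor percentages (25.00% vs 48.98%), so they are approximate illustrations rather than the authors' source data, and the continuous vectors are purely hypothetical.

```python
# Minimal sketch of the univariate comparisons described in the abstract above.
# Counts are back-calculated from reported percentages and are approximate;
# the age vectors are hypothetical placeholders, not study data.
from scipy.stats import chi2_contingency, mannwhitneyu, ttest_ind

# Induction of labor: [induced, not induced] per group
rm_counts = [12, 48 - 12]   # remote monitoring group (~25.00% of 48)
cc_counts = [48, 98 - 48]   # conventional care group (~48.98% of 98)

chi2, p_value, dof, expected = chi2_contingency([rm_counts, cc_counts])
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

# For continuous variables the abstract describes an unpaired t test or a
# Mann-Whitney U test; hypothetical vectors are used only to show the calls.
rm_age = [28, 31, 30, 27, 33, 29]
cc_age = [30, 32, 29, 34, 31, 28]
print(ttest_ind(rm_age, cc_age, equal_var=True))
print(mannwhitneyu(rm_age, cc_age, alternative="two-sided"))
```

A multivariate analysis of the same endpoints would typically be run as a logistic regression with group assignment and baseline characteristics as covariates, but that step is not sketched here.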
Related Work: RM has already shown benefits in cardiology and pneumology [7,8]. In prenatal care, RM has also shown added value in improving maternal and neonatal outcomes. Various studies reported a reduction in unscheduled patient visits, low neonatal birth weight, and admissions to neonatal intensive care (NIC) for pregnant women who received RM compared with pregnant women who did not receive these devices. Additionally, RM can contribute to significant reductions in health care costs. RM was also demonstrated to prolong gestational age at delivery and to improve feelings of self-efficacy and maternal satisfaction when compared with a control group that did not receive RM [9-16]. Unfortunately, some of the previously mentioned studies date back to 1995, and no more recent work is available. This stands in contrast to the rapid technological advances seen in the last decade. Further, no studies have been published on the added value of RM in pregnant women with GHD. To our knowledge, this is the first publication to date on a prenatal follow-up program for pregnant women with GHD.
[ "26104418", "27010734", "26969198", "24529402", "23903374", "22720184", "20846317", "8942501", "20628517", "20044162", "21822394", "10203647", "7485304", "22512287", "24735917", "27222631", "26158653", "24504933", "26606702", "26455020", "25864288", "26048352", "25450475", "11950741", "21547859", "26019905", "25631179", "18726792", "23514572", "27336237", "23927368" ]
[ { "pmid": "26104418", "title": "Diagnosis, evaluation, and management of the hypertensive disorders of pregnancy.", "abstract": "OBJECTIVE\nThis guideline summarizes the quality of the evidence to date and provides a reasonable approach to the diagnosis, evaluation and treatment of the hypertensive disorders of pregnancy (HDP).\n\n\nEVIDENCE\nThe literature reviewed included the previous Society of Obstetricians and Gynaecologists of Canada (SOGC) HDP guidelines from 2008 and their reference lists, and an update from 2006. Medline, Cochrane Database of Systematic Reviews (CDSR), Cochrane Central Registry of Controlled Trials (CCRCT) and Database of Abstracts and Reviews of Effects (DARE) were searched for literature published between January 2006 and March 2012. Articles were restricted to those published in French or English. Recommendations were evaluated using the criteria of the Canadian Task Force on Preventive Health Care and GRADE." }, { "pmid": "27010734", "title": "Body Mass Index, Smoking and Hypertensive Disorders during Pregnancy: A Population Based Case-Control Study.", "abstract": "While obesity is an indicated risk factor for hypertensive disorders of pregnancy, smoking during pregnancy has been shown to be inversely associated with the development of preeclampsia and gestational hypertension. The purpose of this study was to investigate the combined effects of high body mass index and smoking on hypertensive disorders during pregnancy. This was a case-control study based on national registers, nested within all pregnancies in Iceland 1989-2004, resulting in birth at the Landspitali University Hospital. Cases (n = 500) were matched 1:2 with women without a hypertensive diagnosis who gave birth in the same year. Body mass index (kg/m2) was based on height and weight at 10-15 weeks of pregnancy. We used logistic regression models to calculate odds ratios and corresponding 95% confidence intervals as measures of association, adjusting for potential confounders and tested for additive and multiplicative interactions of body mass index and smoking. Women's body mass index during early pregnancy was positively associated with each hypertensive outcome. Compared with normal weight women, the multivariable adjusted odds ratio for any hypertensive disorder was 1.8 (95% confidence interval, 1.3-2.3) for overweight women and 3.1 (95% confidence interval, 2.2-4.3) for obese women. The odds ratio for any hypertensive disorder with obesity was 3.9 (95% confidence interval 1.8-8.6) among smokers and 3.0 (95% confidence interval 2.1-4.3) among non-smokers. The effect estimates for hypertensive disorders with high body mass index appeared more pronounced among smokers than non-smokers, although the observed difference was not statistically significant. Our findings may help elucidate the complicated interplay of these lifestyle-related factors with the hypertensive disorders during pregnancy." 
}, { "pmid": "26969198", "title": "An economic analysis of immediate delivery and expectant monitoring in women with hypertensive disorders of pregnancy, between 34 and 37 weeks of gestation (HYPITAT-II).", "abstract": "OBJECTIVE\nTo assess the economic consequences of immediate delivery compared with expectant monitoring in women with preterm non-severe hypertensive disorders of pregnancy.\n\n\nDESIGN\nA cost-effectiveness analysis alongside a randomised controlled trial (HYPITAT-II).\n\n\nSETTING\nObstetric departments of seven academic hospitals and 44 non-academic hospitals in the Netherlands.\n\n\nPOPULATION\nWomen diagnosed with non-severe hypertensive disorders of pregnancy between 340/7 and 370/7  weeks of gestation, randomly allocated to either immediate delivery or expectant monitoring.\n\n\nMETHODS\nA trial-based cost-effectiveness analysis was performed from a healthcare perspective until final maternal and neonatal discharge.\n\n\nMAIN OUTCOME MEASURES\nHealth outcomes were expressed as the prevalence of respiratory distress syndrome, defined as the need for supplemental oxygen for >24 hours combined with radiographic findings typical for respiratory distress syndrome. Costs were estimated from a healthcare perspective until maternal and neonatal discharge.\n\n\nRESULTS\nThe average costs of immediate delivery (n = 352) were €10 245 versus €9563 for expectant monitoring (n = 351), with an average difference of €682 (95% confidence interval, 95% CI -€618 to €2126). This 7% difference predominantly originated from the neonatal admissions, which were €5672 in the immediate delivery arm and €3929 in the expectant monitoring arm.\n\n\nCONCLUSION\nIn women with mild hypertensive disorders between 340/7 and 370/7  weeks of gestation, immediate delivery is more costly than expectant monitoring as a result of differences in neonatal admissions. These findings support expectant monitoring, as the clinical outcomes of the trial demonstrated that expectant monitoring reduced respiratory distress syndrome for a slightly increased risk of maternal complications.\n\n\nTWEETABLE ABSTRACT\nExpectant management in preterm hypertensive disorders is less costly compared with immediate delivery." }, { "pmid": "24529402", "title": "Home telemonitoring in COPD: a systematic review of methodologies and patients' adherence.", "abstract": "AIM\nThis systematic review aimed to provide a comprehensive description of the methodologies used in home telemonitoring interventions for Chronic Obstructive Pulmonary Disease (COPD) and to explore patients' adherence and satisfaction with the use of telemonitoring systems.\n\n\nMETHODS\nA literature search was performed from June to August and updated until December of 2012 on Medline, Embase, Web of Science and B-on databases using the following keywords: [tele(-)monitoring, tele(-)health, tele(-)homecare, tele(-)care, tele-home health or home monitoring] and [Chronic Obstructive Pulmonary Disease or COPD]. References of all articles were also reviewed.\n\n\nRESULTS\nSeventeen articles were included, 12 of them published from 2010 to the present. The methodologies were similar in the training provided to patients and in the data collection and transmission processes. However, differences in the type of technology used, telemonitoring duration and provision of prompts/feedback, were found. Patients were generally satisfied and found the systems useful to help them manage their disease and improve healthcare provision. 
Nevertheless, they reported some difficulties in their use, which in some studies were related to lower compliance rates.\n\n\nCONCLUSIONS\nTelemonitoring interventions are a relatively new field in COPD research. Findings suggest that these interventions, although promising, present some usability problems that need to be considered in future research. These adjustments are essential before the widespreading of telemonitoring." }, { "pmid": "23903374", "title": "Telemedicine in obstetrics.", "abstract": "Telemedicine lends itself to several obstetric applications and is of growing interest in developed and developing nations worldwide. In this article we review current trends and applications within obstetrics practice. We searched electronic databases, March 2010 to September 2012, for telemedicine use studies related to obstetrics. Thirty-four of 101 identified studies are the main focus of review. Other relevant studies published before March 2010 are included. Telemedicine plays an important role as an adjunct to delivery of health care to remote patients with inadequate medical access in this era of limited resources and emphasis on efficient use of those available resources." }, { "pmid": "22720184", "title": "Telemonitoring in chronic heart failure: a systematic review.", "abstract": "Heart failure (HF) is a growing epidemic with the annual number of hospitalizations constantly increasing over the last decades for HF as a primary or secondary diagnosis. Despite the emergence of novel therapeutic approached that can prolong life and shorten hospital stay, HF patients will be needing rehospitalization and will often have a poor prognosis. Telemonitoring is a novel diagnostic modality that has been suggested to be beneficial for HF patients. Telemonitoring is viewed as a means of recording physiological data, such as body weight, heart rate, arterial blood pressure, and electrocardiogram recordings, by portable devices and transmitting these data remotely (via a telephone line, a mobile phone or a computer) to a server where they can be stored, reviewed and analyzed by the research team. In this systematic review of all randomized clinical trials evaluating telemonitoring in chronic HF, we aim to assess whether telemonitoring provides any substantial benefit in this patient population." }, { "pmid": "20846317", "title": "Insufficient evidence of benefit: a systematic review of home telemonitoring for COPD.", "abstract": "RATIONALE, AIMS AND OBJECTIVES\nThe evidence to support the effectiveness of home telemonitoring interventions for patients with chronic obstructive pulmonary disease (COPD) is limited, yet there are many efforts made to implement these technologies across health care services.\n\n\nMETHODS\nA comprehensive search strategy was designed and implemented across 9 electronic databases and 11 European, Australasian and North American telemedicine websites. Included studies had to examine the effectiveness of telemonitoring interventions, clearly defined for the study purposes, for adult patients with COPD. Two researchers independently screened each study prior to inclusion.\n\n\nRESULTS\nTwo randomized trials and four other evaluations of telemonitoring were included. The studies are typically underpowered, had heterogeneous patient populations and had a lack of detailed intervention descriptions and of the care processes that accompanied telemonitoring. In addition, there were diverse outcome measures and no economic evaluations. 
The telemonitoring interventions in each study differed widely. Some had an educational element that could itself account for the differences between groups.\n\n\nCONCLUSIONS\nDespite these caveats, the study reports are themselves positive about their results. However, given the risk of bias in the design and scale of the evaluations we conclude that the benefit of telemonitoring for COPD is not yet proven and that further work is required before wide-scale implementation be supported." }, { "pmid": "8942501", "title": "Multicenter randomized clinical trial of home uterine activity monitoring: pregnancy outcomes for all women randomized.", "abstract": "OBJECTIVE\nOur purpose was to evaluate the impact of home uterine activity monitoring on pregnancy outcomes among women at high risk for preterm labor and delivery.\n\n\nSTUDY DESIGN\nWomen at high risk for preterm labor at three centers were randomly assigned to receive high-risk prenatal care alone (not monitored) or to receive the same care with twice-daily home uterine activity monitoring without increased nursing support (monitored). There were 339 women with singleton gestations randomized with caregivers blinded to group assignment. The two groups were medically and demographically similar at entry into the study.\n\n\nRESULTS\nWomen in the monitored group had prolonged pregnancy survival (p = 0.02) and were less likely to experience a preterm delivery (relative risk 0.59; p = 0.04). Infants born to monitored women with singleton gestations were less likely to be of low birth weight (< 2500 gm; relative risk 0.47, p = 0.003), and were less likely to be admitted to a neonatal intensive care unit (relative risk 0.5, p = 0.01).\n\n\nCONCLUSION\nThese data show, among women with singleton gestations at high risk for preterm delivery, that the use of home uterine activity monitoring alone, without additional intensive nursing care, results in improved pregnancy outcomes, including prolonged gestation, decreased risk for preterm delivery, larger-birth-weight infants, and a decreased need for neonatal intensive care." }, { "pmid": "20628517", "title": "The outcomes of gestational diabetes mellitus after a telecare approach are not inferior to traditional outpatient clinic visits.", "abstract": "Objective. To evaluate the feasibility of a telemedicine system based on Internet and a short message service in pregnancy and its influence on delivery and neonatal outcomes of women with gestational diabetes mellitus (GDM). Methods. 100 women diagnosed of GDM were randomized into two parallel groups, a control group based on traditional face-to-face outpatient clinic visits and an intervention group, which was provided with a Telemedicine system for the transmission of capillary glucose data and short text messages with weekly professional feedback. 97 women completed the study (48/49, resp.). Main Outcomes Measured. The percentage of women achieving HbA1c values <5.8%, normal vaginal delivery and having a large for-gestational-age newborn were evaluated. Results. Despite a significant reduction in outpatient clinic visits in the experimental group, particularly in insulin-treated women (2.4 versus 4.6 hours per insulin-treated woman resp.; P < .001), no significant differences were found between the experimental and traditional groups regarding HbA1c levels (all women had HbA1c <5.8% during pregnancy), normal vaginal delivery (40.8% versus 54.2%, resp.; P > .05) and large-for-gestational-age newborns (6.1% versus 8.3%, resp.; P > .05). 
Conclusions. The system significantly reduces the need for outpatient clinic visits and achieves similar pregnancy, delivery, and newborn outcomes." }, { "pmid": "20044162", "title": "A Telemedicine system based on Internet and short message service as a new approach in the follow-up of patients with gestational diabetes.", "abstract": "To evaluate the feasibility of a Telemedicine system based on Internet and short message service in the follow-up of patients with gestational diabetes. Compared to the control group, the Telemedicine group reduced the number of unscheduled face-to-face visits by 62% (82.7% in the subgroup of insulin-treated patients), improving patient satisfaction and achieving similar pregnancy and newborn outcomes." }, { "pmid": "21822394", "title": "Pre-eclampsia: pathophysiology, diagnosis, and management.", "abstract": "The incidence of pre-eclampsia ranges from 3% to 7% for nulliparas and 1% to 3% for multiparas. Pre-eclampsia is a major cause of maternal mortality and morbidity, preterm birth, perinatal death, and intrauterine growth restriction. Unfortunately, the pathophysiology of this multisystem disorder, characterized by abnormal vascular response to placentation, is still unclear. Despite great polymorphism of the disease, the criteria for pre-eclampsia have not changed over the past decade (systolic blood pressure > 140 mmHg or diastolic blood pressure ≥ 90 mmHg and 24-hour proteinuria ≥ 0.3 g). Clinical features and laboratory abnormalities define and determine the severity of pre-eclampsia. Delivery is the only curative treatment for pre-eclampsia. Multidisciplinary management, involving an obstetrician, anesthetist, and pediatrician, is carried out with consideration of the maternal risks due to continued pregnancy and the fetal risks associated with induced preterm delivery. Screening women at high risk and preventing recurrences are key issues in the management of pre-eclampsia." }, { "pmid": "10203647", "title": "A randomized comparison of home uterine activity monitoring in the outpatient management of women treated for preterm labor.", "abstract": "OBJECTIVE\nThe aim of the study was to evaluate home uterine activity monitoring as an intervention in reducing the rate of preterm birth among women treated for preterm labor.\n\n\nSTUDY DESIGN\nA total of 186 women were treated in the hospital with magnesium sulfate for preterm labor and were prospectively randomly assigned to study groups; among these, 162 were ultimately eligible for comparison. Eighty-two of these women were assigned to the monitored group and 80 were assigned to an unmonitored control group. Other than monitoring, all women received identical prenatal follow-up, including daily perinatal telephone contact and oral terbutaline therapy. Outcome comparisons were primarily directed toward evaluation of preterm birth at <35 weeks' gestation. Readmissions for recurrent preterm labor and observations lasting <24 hours were evaluated in monitored and unmonitored groups. Compliance with monitoring was also evaluated in the monitored group.\n\n\nRESULTS\nThe monitored and control groups were demographically similar. According to a multivariate logistic regression model, women with cervical dilatation of >/=2 cm were 4 times more likely to be delivered at <35 weeks' gestation (P <.05). Gestational ages at delivery were similar in the monitored and control groups. 
There was no significant difference in the overall rate of preterm delivery at <35 weeks' gestation between the monitored group (10.9%) and the control group (15.0%). The overall rates of delivery at <37 weeks' gestation were high (48.8% and 60.0% for monitored and control groups, respectively), and the difference was not significant. The numbers of women with >/=1 instance of readmission and treatment for recurrent preterm labor were equal in the monitored and control groups. The numbers of women with >/=1 hospital observation lasting <24 hours were not different between the groups. Compliance with monitoring did not significantly differ for women who were delivered at <35 weeks' gestation, women with >/=2 cm cervical dilatation at enrollment, or for African American women.\n\n\nCONCLUSION\nA reduction in the likelihood of preterm delivery at <35 weeks' gestation was not further enhanced by the addition of home uterine monitoring to the outpatient management regimens of women treated for preterm labor." }, { "pmid": "7485304", "title": "A multicenter randomized controlled trial of home uterine monitoring: active versus sham device. The Collaborative Home Uterine Monitoring Study (CHUMS) Group.", "abstract": "OBJECTIVE\nOur purpose was to determine the efficacy of a home uterine activity monitoring system for early detection of preterm labor and reduction of preterm birth.\n\n\nSTUDY DESIGN\nA randomized, controlled, double-blinded trial was performed in which pregnant women between 24 and 36 weeks' gestation and at high risk for preterm labor or birth were assigned to receive twice daily nursing contact and home uterine activity monitoring with either active (data revealed) or sham (data concealed) devices. Study end points included mean cervical dilatation and its mean change from a previous visit at preterm labor diagnosis, preterm birth rate, and infant outcomes. Analysis of variance or logistic models including terms for site and group-by-site interaction effects were constructed for all variables.\n\n\nRESULTS\nOf 1355 patients enrolled, 1292 were randomized, 1165 used home uterine activity monitoring devices, and 842 (72.3%) completed the study. Both device groups had similar demographics, enrollment and delivery gestational ages, discontinuation rates, risk factors, birth weights, cervical dilatation at enrollment and at preterm labor diagnosis, change in cervical dilatation at preterm labor diagnosis, rates of preterm labor and birth, and neonatal intensive care requirements. Power to detect a difference in cervical dilatation > or = 1 cm at diagnosis of preterm labor was 0.99 for all risk factors.\n\n\nCONCLUSIONS\nUterine activity data obtained from home uterine activity monitoring, when added to daily nursing contact, were not linked to earlier diagnosis of preterm labor or lower rates of preterm birth or neonatal morbidity in pregnancies at high risk for preterm labor and birth." }, { "pmid": "22512287", "title": "Impact of a telemedicine system with automated reminders on outcomes in women with gestational diabetes mellitus.", "abstract": "BACKGROUND\nHealth information technology has been proven to be a successful tool for the management of patients with multiple medical conditions. 
The purpose of this study was to examine the impact of an enhanced telemedicine system on glucose control and pregnancy outcomes in women with gestational diabetes mellitus (GDM).\n\n\nSUBJECTS AND METHODS\nWe used an Internet-based telemedicine system to also allow interactive voice response phone communication between patients and providers and to provide automated reminders to transmit data. Women with GDM were randomized to either the telemedicine group (n=40) or the control group (n=40) and asked to monitor their blood glucose levels four times a day. Women in the intervention group transmitted those values via the telemedicine system, whereas women in the control group maintained paper logbooks, which were reviewed at prenatal visits. Primary outcomes were infant birth weight and maternal glucose control. Data collection included blood glucose records, transmission rates for the intervention group, and chart review.\n\n\nRESULTS\nThere were no significant differences between the two groups (telemedicine vs. controls) in regard to maternal blood glucose values or infant birth weight. However, adding telephone access and reminders increased transmission rates of data in the intervention group compared with the intervention group in our previous study (35.6±32.3 sets of data vs. 17.4±16.9 sets of data; P<0.01).\n\n\nCONCLUSIONS\nOur enhanced telemedicine monitoring system increased system utilization and contact between women with GDM and their healthcare providers but did not impact upon pregnancy outcomes." }, { "pmid": "24735917", "title": "Chronic hypertension and pregnancy outcomes: systematic review and meta-analysis.", "abstract": "OBJECTIVE\nTo provide an accurate assessment of complications of pregnancy in women with chronic hypertension, including comparison with population pregnancy data (US) to inform pre-pregnancy and antenatal management strategies.\n\n\nDESIGN\nSystematic review and meta-analysis.\n\n\nDATA SOURCES\nEmbase, Medline, and Web of Science were searched without language restrictions, from first publication until June 2013; the bibliographies of relevant articles and reviews were hand searched for additional reports.\n\n\nSTUDY SELECTION\nStudies involving pregnant women with chronic hypertension, including retrospective and prospective cohorts, population studies, and appropriate arms of randomised controlled trials, were included.\n\n\nDATA EXTRACTION\nPooled incidence for each pregnancy outcome was reported and, for US studies, compared with US general population incidence from the National Vital Statistics Report (2006).\n\n\nRESULTS\n55 eligible studies were identified, encompassing 795,221 pregnancies. Women with chronic hypertension had high pooled incidences of superimposed pre-eclampsia (25.9%, 95% confidence interval 21.0% to 31.5%), caesarean section (41.4%, 35.5% to 47.7%), preterm delivery <37 weeks' gestation (28.1%, 22.6% to 34.4%), birth weight <2500 g (16.9%, 13.1% to 21.5%), neonatal unit admission (20.5%, 15.7% to 26.4%), and perinatal death (4.0%, 2.9% to 5.4%). However, considerable heterogeneity existed in the reported incidence of all outcomes (τ(2)=0.286-0.766), with a substantial range of incidences in individual studies around these averages; additional meta-regression did not identify any influential demographic factors. 
The incidences (the meta-analysis average from US studies) of adverse outcomes in women with chronic hypertension were compared with women from the US national population dataset and showed higher risks in those with chronic hypertension: relative risks were 7.7 (95% confidence interval 5.7 to 10.1) for superimposed pre-eclampsia compared with pre-eclampsia, 1.3 (1.1 to 1.5) for caesarean section, 2.7 (1.9 to 3.6) for preterm delivery <37 weeks' gestation, 2.7 (1.9 to 3.8) for birth weight <2500 g, 3.2 (2.2 to 4.4) for neonatal unit admission, and 4.2 (2.7 to 6.5) for perinatal death.\n\n\nCONCLUSIONS\nThis systematic review, reporting meta-analysed data from studies of pregnant women with chronic hypertension, shows that adverse outcomes of pregnancy are common and emphasises a need for heightened antenatal surveillance. A consistent strategy to study women with chronic hypertension is needed, as previous study designs have been diverse. These findings should inform counselling and contribute to optimisation of maternal health, drug treatment, and pre-pregnancy management in women affected by chronic hypertension." }, { "pmid": "27222631", "title": "No Hypertensive Disorder of Pregnancy; No Preeclampsia-eclampsia; No Gestational Hypertension; No Hellp Syndrome. Vascular Disorder of Pregnancy Speaks for All.", "abstract": "Hypertensive disorders complicate 5%-10% of pregnancies with increasing incidence mainly due to upward trends in obesity globally. In the last century, several terminologies have been introduced to describe the spectrum of this disease. The current and widely used classification of hypertensive pregnancy disorders was introduced in 1972 and in 1982, but has not been free of controversy and confusion. Unlike other diseases, the existing terminology combines signs and symptoms, but does not describe the underlying pathology of the disease itself. In this commentary, a detailed account is given to vascular disorder of pregnancy (VDP) as an inclusive terminology taking into account the underlying pathology of the disease on affected organs and systems. A simple and uniform classification scheme for VDP is proposed." }, { "pmid": "26158653", "title": "Pregnancy-Induced hypertension.", "abstract": "Pregnancy-induced hypertension (PIH) complicates 6-10% of pregnancies. It is defined as systolic blood pressure (SBP) >140 mmHg and diastolic blood pressure (DBP) >90 mmHg. It is classified as mild (SBP 140-149 and DBP 90-99 mmHg), moderate (SBP 150-159 and DBP 100-109 mmHg) and severe (SBP ≥ 160 and DBP ≥ 110 mmHg). PIH refers to one of four conditions: a) pre-existing hypertension, b) gestational hypertension and preeclampsia (PE), c) pre-existing hypertension plus superimposed gestational hypertension with proteinuria and d) unclassifiable hypertension. PIH is a major cause of maternal, fetal and newborn morbidity and mortality. Women with PIH are at a greater risk of abruptio placentae, cerebrovascular events, organ failure and disseminated intravascular coagulation. Fetuses of these mothers are at greater risk of intrauterine growth retardation, prematurity and intrauterine death. Ambulatory blood pressure monitoring over a period of 24 h seems to have a role in predicting deterioration from gestational hypertension to PE. Antiplatelet drugs have moderate benefits when used for prevention of PE. Treatment of PIH depends on blood pressure levels, gestational age, presence of symptoms and associated risk factors. 
Non-drug management is recommended when SBP ranges between 140-149 mmHg or DBP between 90-99 mmHg. Blood pressure thresholds for drug management in pregnancy vary between different health organizations. According to 2013 ESH/ESC guidelines, antihypertensive treatment is recommended in pregnancy when blood pressure levels are ≥ 150/95 mmHg. Initiation of antihypertensive treatment at values ≥ 140/90 mmHg is recommended in women with a) gestational hypertension, with or without proteinuria, b) pre-existing hypertension with the superimposition of gestational hypertension or c) hypertension with asymptomatic organ damage or symptoms at any time during pregnancy. Methyldopa is the drug of choice in pregnancy. Atenolol and metoprolol appear to be safe and effective in late pregnancy, while labetalol has an efficacy comparable to methyldopa. Angiotensin-converting enzyme (ACE) inhibitors and angiotensin II antagonists are contraindicated in pregnancy due to their association with increased risk of fetopathy." }, { "pmid": "24504933", "title": "Antihypertensive drug therapy for mild to moderate hypertension during pregnancy.", "abstract": "BACKGROUND\nMild to moderate hypertension during pregnancy is common. Antihypertensive drugs are often used in the belief that lowering blood pressure will prevent progression to more severe disease, and thereby improve the outcome.\n\n\nOBJECTIVES\nTo assess the effects of antihypertensive drug treatments for women with mild to moderate hypertension during pregnancy.\n\n\nSEARCH METHODS\nWe searched the Cochrane Pregnancy and Childbirth Group's Trials Register (30 April 2013) and reference lists of retrieved studies.\n\n\nSELECTION CRITERIA\nAll randomised trials evaluating any antihypertensive drug treatment for mild to moderate hypertension during pregnancy defined, whenever possible, as systolic blood pressure 140 to 169 mmHg and diastolic blood pressure 90 to 109 mmHg. Comparisons were of one or more antihypertensive drug(s) with placebo, with no antihypertensive drug, or with another antihypertensive drug, and where treatment was planned to continue for at least seven days.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently extracted data.\n\n\nMAIN RESULTS\nForty-nine trials (4723 women) were included. Twenty-nine trials compared an antihypertensive drug with placebo/no antihypertensive drug (3350 women). There is a halving in the risk of developing severe hypertension associated with the use of antihypertensive drug(s) (20 trials, 2558 women; risk ratio (RR) 0.49; 95% confidence interval (CI) 0.40 to 0.60; risk difference (RD) -0.10 (-0.13 to -0.07); number needed to treat to harm (NNTH) 10 (8 to 13)) but little evidence of a difference in the risk of pre-eclampsia (23 trials, 2851 women; RR 0.93; 95% CI 0.80 to 1.08). Similarly, there is no clear effect on the risk of the baby dying (27 trials, 3230 women; RR 0.71; 95% CI 0.49 to 1.02), preterm birth (15 trials, 2141 women; RR 0.96; 95% CI 0.85 to 1.10), or small-for-gestational-age babies (20 trials, 2586 women; RR 0.97; 95% CI 0.80 to 1.17). There were no clear differences in any other outcomes.Twenty-two trials (1723 women) compared one antihypertensive drug with another. Alternative drugs seem better than methyldopa for reducing the risk of severe hypertension (11 trials, 638 women; RR (random-effects) 0.54; 95% CI 0.30 to 0.95; RD -0.11 (-0.20 to -0.02); NNTH 7 (5 to 69)). 
There is also a reduction in the overall risk of developing proteinuria/pre-eclampsia when beta blockers and calcium channel blockers considered together are compared with methyldopa (11 trials, 997 women; RR 0.73; 95% CI 0.54 to 0.99). However, the effect on both severe hypertension and proteinuria is not seen in the individual drugs. Other outcomes were only reported by a small proportion of studies, and there were no clear differences.\n\n\nAUTHORS' CONCLUSIONS\nIt remains unclear whether antihypertensive drug therapy for mild to moderate hypertension during pregnancy is worthwhile." }, { "pmid": "26606702", "title": "Preeclampsia: Reflections on How to Counsel About Preventing Recurrence.", "abstract": "Preeclampsia is one of the most challenging diseases of pregnancy, with unclear etiology, no specific marker for prediction, and no precise treatment besides delivery of the placenta. Many risk factors have been identified, and diagnostic and management tools have improved in recent years. However, this disease remains one of the leading causes of maternal morbidity and mortality worldwide, especially in under-resourced settings. A history of previous preeclampsia is a known risk factor for a new event in a future pregnancy, with recurrence rates varying from less than 10% to 65%, depending on the population or methodology considered. A recent review that performed an individual participant data meta-analysis on the recurrence of hypertensive disorders of pregnancy in over 99 000 women showed an overall recurrence rate of 20.7%; when specifically considering preeclampsia, it was 13.8%, with milder disease upon recurrence. Prevention of recurrent preeclampsia has been attempted by changes in lifestyle, dietary supplementation, antihypertensive drugs, antithrombotic agents, and others, with much uncertainty about benefit. It is always challenging to treat and counsel a woman with a previous history of preeclampsia; this review will be based on hypothetical clinical cases, using common scenarios in obstetrical practice to consider the available evidence on how to counsel each woman during pre-conception and prenatal consultations." }, { "pmid": "26455020", "title": "[Hypertension during pregnancy--how to manage effectively?].", "abstract": "Arterial hypertension affects 5-10% of all pregnant women and may be present in women with pre-existing primary or secondary chronic hypertension, and in women who develop new-onset hypertension in the second half of pregnancy. Hypertensive disorders during pregnancy carry risks for the woman and the baby. Hypertension in pregnancy is diagnosed when SBP is ≥ 140 mmHg and/or DBP is ≥ 90 mmHg. According to the guidelines, the decision to start pharmacological treatment of hypertension in pregnancy depends on the type of hypertension: in pregnancy-induced hypertension, developing after 20 weeks of pregnancy (with or without proteinuria), drug treatment is indicated when BP is ≥ 140/90 mmHg; in chronic hypertension observed before pregnancy, pharmacotherapy is indicated when BP is ≥ 150/95 mmHg. For pregnant women with severe hypertension (≥ 160/110 mmHg), antihypertensive therapy should be initiated immediately. Oral methyldopa, labetalol, other beta-adrenoreceptor blockers and calcium channel blockers are used most commonly. In pre-eclampsia, parenteral labetalol, nitroglycerine, urapidyl and other drugs may also be needed." 
}, { "pmid": "25864288", "title": "The effect of calcium channel blockers on prevention of preeclampsia in pregnant women with chronic hypertension.", "abstract": "BACKGROUND\nPregnant women with chronic hypertension are at increased risk for complications. This study aims to investigate whether calcium channel blockers plus low dosage aspirin therapy can reduce the incidence of complications during pregnancy with chronic hypertension and improve the prognosis of neonates.\n\n\nMATERIALS AND METHODS\nFrom March 2011 to June 2013, 33 patients were selected to join this trial according to the chronic hypertension criteria set by the Preface Bulletin of American College of Obstetricians and Gynecologists, (ACOG). Patients were administrated calcium channel blockers plus low-dosage aspirin and vitamin C. The statistic data of baseline and prognosis from the patients were retrospectively reviewed and compared.\n\n\nRESULTS\nBlood pressure of patients was controlled by these medicines with average systolic pressure from 146.3 to 148.7 mmHg and average diastolic pressure from 93.8 to 97.9 mmHg; 39.4% patients complicated mild preeclampsia; however, none of them developed severe preeclampsia or eclampsia, or complicate placental abruption. 30.3% patients delivered at preterm labour; 84.8% patients underwent cesarean section. The neonatal average weight was 3,008 ± 629.6 g, in which seven neonatal weights were less than 2,500 g. All of the neonatal Apgar scores were 9 to 10 at one to five minutes. Small for gestational age (SGA) occurred in five (15%).\n\n\nCONCLUSIONS\nCalcium channel blockers can improve the outcome of pregnancy women with chronic hypertension to avoid the occurrence of severe pregnancy complication or neonatal morbidity." }, { "pmid": "26048352", "title": "Pre-eclampsia Diagnosis and Treatment Options: A Review of Published Economic Assessments.", "abstract": "BACKGROUND\nPre-eclampsia is a pregnancy complication affecting both mother and fetus. Although there is no proven effective method to prevent pre-eclampsia, early identification of women at risk of pre-eclampsia could enhance appropriate application of antenatal care, management and treatment. Very little is known about the cost effectiveness of these and other tests for pre-eclampsia, mainly because there is no clear treatment path. The aim of this study was to provide a comprehensive overview of the existing evidence on the health economics of screening, diagnosis and treatment options in pre-eclampsia.\n\n\nMETHODS\nWe searched three electronic databases (PubMed, EMBASE and the Cochrane Library) for studies on screening, diagnosis, treatment or prevention of pre-eclampsia, published between 1994 and 2014. Only full papers written in English containing complete economic assessments in pre-eclampsia were included.\n\n\nRESULTS\nFrom an initial total of 138 references, six papers fulfilled the inclusion criteria. Three studies were on the cost effectiveness of treatment of pre-eclampsia, two of which evaluated magnesium sulphate for prevention of seizures and the third evaluated the cost effectiveness of induction of labour versus expectant monitoring. The other three studies were aimed at screening and diagnosis, in combination with subsequent preventive measures. The two studies on magnesium sulphate were equivocal on the cost effectiveness in non-severe cases, and the other study suggested that induction of labour in term pre-eclampsia was more cost effective than expectant monitoring. 
The screening studies were quite diverse in their objectives as well as in their conclusions. One study concluded that screening is probably not worthwhile, while two other studies stated that in certain scenarios it may be cost effective to screen all pregnant women and prophylactically treat those who are found to be at high risk of developing pre-eclampsia.\n\n\nDISCUSSION\nThis study is the first to provide a comprehensive overview on the economic aspects of pre-eclampsia in its broadest sense, ranging from screening to treatment options. The main limitation of the present study lies in the variety of topics in combination with the limited number of papers that could be included; this restricted the comparisons that could be made. In conclusion, novel biomarkers in screening for and diagnosing pre-eclampsia show promise, but their accuracy is a major driver of cost effectiveness, as is prevalence. Universal screening for pre-eclampsia, using a biomarker, will be feasible only when accuracy is significantly increased." }, { "pmid": "25450475", "title": "New approaches for managing preeclampsia: clues from clinical and basic research.", "abstract": "One of the most common, and most vexing, obstetric complications is preeclampsia-a major cause of maternal and perinatal morbidity. Hallmarked by new-onset hypertension and a myriad of other symptoms, the underlying cause of the disorder remains obscure despite intensive research into its etiology. Although the initiating events are not clear, one common finding in preeclamptic patients is failure to remodel the maternal arteries that supply the placenta, with resulting hypoxia/ischemia. Intensive research over the past 2 decades has identified several categories of molecular dysfunction resulting from placental hypoxia, which, when released into the maternal circulation, are involved in the spectrum of symptoms seen in these patients-in particular, angiogenic imbalance and the activation of innate and adaptive immune responses. Despite these new insights, little in the way of new treatments for the management of these patients has been advanced into clinical practice. Indeed, few therapeutic options exist for the obstetrician treating a case of preeclampsia. Pharmacologic management is typically seizure prophylaxis, and, in severe cases, antihypertensive agents for controlling worsening hypertension. Ultimately, the induction of labor is indicated, making preeclampsia a leading cause of premature birth. Here, the molecular mechanisms linking placental ischemia to the maternal symptoms of preeclampsia are reviewed, and several areas of recent research suggesting new potential therapeutic approaches to the management of preeclampsia are identified." }, { "pmid": "21547859", "title": "Pregnant women's fear of childbirth in midwife- and obstetrician-led care in Belgium and the Netherlands: test of the medicalization hypothesis.", "abstract": "Fear of childbirth has gained importance in the context of increasing medicalization of childbirth. Belgian and Dutch societies are very similar but differ with regard to the organization of maternity care. The Dutch have a high percentage of home births and low medical intervention rates. In contrast, home births in Belgium are rarer, and the medical model is more widely used. By comparing the Belgian and Dutch maternity care models, the association between fear of childbirth and medicalization can be explored. For this study an antenatal questionnaire was completed by 833 women at 30 weeks of pregnancy. 
Fear of childbirth was measured by a shortened Dutch version of the Childbirth Attitudes Questionnaire. A four-dimensional model with baby-related, pain and injuries-related, general and personal control-related, and medical interventions and hospital care-related fear, fitted well in both countries. Multiple regression analysis showed no country differences, except that Belgian women in midwife-led care were more fearful of medical interventions and hospital care than the Dutch. For the other dimensions, both Belgian and Dutch women receiving midwifery care reported less fear compared to those in obstetric antenatal care. Hence, irrespective of the maternity care model, antenatal care providers are crucial in preventing fear of childbirth." }, { "pmid": "26019905", "title": "Ambivalence towards childbirth in a medicalized context: a qualitative inquiry among Iranian mothers.", "abstract": "BACKGROUND\nToday, pregnant women are treated as individuals requiring medical care. Every day, more and more technologies, surgical procedures and medications are used even for low-risk childbirths. These interventions can save mothers' lives in threatening situations, although they might be risky for mothers and neonates in low-risk deliveries. Despite the increasing interest in medical care for childbirth, our knowledge about underlying factors for development of medicalized childbirth is limited in Iran.\n\n\nOBJECTIVES\nThe purpose of this study was to provide a broad description of medicalized childbirth in Iran.\n\n\nMATERIALS AND METHODS\nIn this study, a qualitative approach was applied and data was gathered via in-depth interviews. The subjects were selected via purposive sampling. Overall, 27 pregnant and postpartum women were enrolled in this study. Participants were selected from public health centers, hospitals and offices. Data analysis was performed using conventional qualitative content analysis.\n\n\nRESULTS\nAs the results indicated, mothers preferred medicalized childbirth under the supervision of obstetricians. The subjects mostly opted for elective cesarean section; this choice led to an increase in physicians' authority and restricted midwives' role in childbirth. Consequently, mothers' preference for cesarean section led to the expansion of medicalization and challenged the realization of natural childbirth. Mothers also had a strong tendency toward natural childbirth.\n\n\nCONCLUSIONS\nGenerally, many Iranian mothers choose the medicalized approach, despite their inclination to comply with the natural mode of delivery. It seems that mothers have an ambivalent attitude toward childbirth. Health authorities can prevent the adverse effects of medicalized birth and encourage natural childbirth among women using the obtained findings." }, { "pmid": "25631179", "title": "[A gender perspective on medicalized childbirth].", "abstract": "Gender mainstreaming is a worldwide issue. The United Nations and the World Health Organization have emphasized the importance of incorporating gender perspectives and gender equity into government policy decisions. Different cultures have different attitudes toward the management of childbirth and these attitudes influence the feelings and needs of women and their partners. These needs must be better understood and satisfied. The widely held technocratic values of obstetricians influence the birthing experience of women significantly. 
This article uses a gender perspective to describe the medicalization of childbirth, the pharmacological pain-relief oppression of women, the prevalence of blaming women for decisions to conduct Caesarean sections, and the exclusion of men from involvement in the childbirth process. This article may be used as reference to enhance gender equality childbirth care for women." }, { "pmid": "18726792", "title": "\"We wanted a birth experience, not a medical experience\": exploring Canadian women's use of midwifery.", "abstract": "In this study I explore Canadian women's use of midwifery to examine whether their choice represents a resistance to the medicalization of pregnancy/childbirth. Through my analysis of the data I identified eight ways the women's deliberate decision to pursue midwifery care represented resistance to medicalization. In so doing, I demonstrate how women actively assert their agency over reproduction thus shaping their own reproductive health experiences. The outcome of their resistance and resultant use of midwifery was empowerment. Theoretically the research contributes to understanding the intentionality of resistance and a continuum of resistant behavior." }, { "pmid": "23514572", "title": "The medicalization of birth and midwifery as resistance.", "abstract": "Through the medicalization of women's bodies, the credibility and traditional knowledge of midwives and healers was forcibly lost. Northern Aboriginal communities continue to be especially impacted by the medicalization of birth. In recent years, there has been a resurgence in midwifery that is framed by a feminist discourse of women's reproductive rights. Many researchers believe that women who choose midwifery are exercising a conscious choice of resistance to the medicalization of women's bodies. In this article, I offer a review of the literature on how the medicalization of birth is conceptualized in relation to women's birthing experiences in Canada." }, { "pmid": "23927368", "title": "Prenatal education is an opportunity for improved outcomes in hypertensive disorders of pregnancy: results from an Internet-based survey.", "abstract": "Current prenatal care (PNC) practice guidelines provide little information on educating patients about preeclampsia. Our survey of 754 women who visited the Preeclampsia Foundation website found that most received PNC and regular screenings, but only 42% \"definitely\" recalled specific education about preeclampsia; furthermore, only half \"fully understood\" the explanation. However, 27 of the 169 women (75.0%) who understood acted on this knowledge by promptly reporting symptoms and complying with treatment. Of the 46 who did not remember some or any of the education, 3 (6.0%) took any action; the difference between these two groups is highly significant. We conclude that knowledge enables women to spot signs and symptoms, leading to earlier diagnosis and management, and reduced morbidity and mortality. We propose the adoption of formal guidelines on preeclampsia education." } ]
Scientific Reports
28361913
PMC5374503
10.1038/srep45639
Multi-scale radiomic analysis of sub-cortical regions in MRI related to autism, gender and age
We propose using multi-scale image textures to investigate links between neuroanatomical regions and clinical variables in MRI. Texture features are derived at multiple scales of resolution based on the Laplacian-of-Gaussian (LoG) filter. Three quantifier functions (Average, Standard Deviation and Entropy) are used to summarize texture statistics within standard, automatically segmented neuroanatomical regions. Significance tests are performed to identify regional texture differences between ASD and TDC groups and between male and female groups, as well as correlations with age (corrected p < 0.05). The open-access Autism Brain Imaging Data Exchange (ABIDE) brain MRI dataset is used to evaluate texture features derived from 31 brain regions from 1112 subjects including 573 typically developing control (TDC, 99 females, 474 males) and 539 autism spectrum disorder (ASD, 65 females and 474 males) subjects. Statistically significant texture differences between ASD and TDC groups are identified asymmetrically in the right hippocampus, left choroid-plexus and corpus callosum (CC), and symmetrically in the cerebellar white matter. Sex-related texture differences in TDC subjects are found primarily in the left amygdala, left cerebellar white matter, and brain stem. Correlations between age and texture in TDC subjects are found in the thalamus-proper, caudate and pallidum, most exhibiting bilateral symmetry.
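To make the feature-extraction pipeline described above more concrete, the following is a minimal Python sketch of multi-scale LoG texture quantifiers computed within one atlas region. The abstract does not specify the exact scales, histogram binning, or implementation, so the sigma values, the bin count used for the entropy estimate, and the helper name `region_log_textures` are illustrative assumptions rather than the authors' actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def region_log_textures(volume, region_mask, sigmas=(1.0, 2.0, 4.0), n_bins=64):
    """Multi-scale LoG texture quantifiers (Average, SD, Entropy) for one region.

    volume      : 3D numpy array (e.g. a T1-weighted MRI volume)
    region_mask : boolean 3D array of the same shape (one atlas-defined region)
    sigmas      : LoG scales in voxels (illustrative values, not the paper's)
    """
    features = {}
    voxels_in_region = region_mask.astype(bool)
    for sigma in sigmas:
        # Laplacian-of-Gaussian response at this scale of resolution
        response = gaussian_laplace(volume.astype(np.float64), sigma=sigma)
        values = response[voxels_in_region]

        # Quantifiers 1 and 2: average and standard deviation of the response
        features[f"avg_sigma{sigma}"] = float(values.mean())
        features[f"std_sigma{sigma}"] = float(values.std())

        # Quantifier 3: Shannon entropy of the histogram of responses
        hist, _ = np.histogram(values, bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]
        features[f"ent_sigma{sigma}"] = float(-(p * np.log2(p)).sum())
    return features

# Example usage with synthetic stand-ins for a real MRI volume and atlas mask
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vol = rng.normal(size=(64, 64, 64))
    mask = np.zeros_like(vol, dtype=bool)
    mask[20:40, 20:40, 20:40] = True
    print(region_log_textures(vol, mask))
```

In practice each of the segmented neuroanatomical regions would be passed through a routine of this kind, yielding one feature vector per subject per region for the subsequent group comparisons.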
Related Work
Our review of relevant work focuses on studies using imaging techniques to identify brain regions and characteristics related to autism, gender and age.
Starting with work related to ASD, various studies using MR imaging have shown that young children with autism had a significantly larger brain volume compared to normally developing peers31,32. Studies on autism have also identified volume differences in specific brain structures including the cerebellum6,31, amygdala31, corpus callosum33, caudate nucleus34, hippocampus31,35 and putamen5,36. For example, it was shown that caudate nucleus and pallidus volumes were related to the level of ASD-like symptoms of participants with attention-deficit/hyperactivity disorder, and that the interaction of these two structures was a significant predictor of ASD scores36. In some cases, contradictory results have been reported. While ref. 8 found that autistic children had an increase in hippocampal volume that persisted during adolescence, another study including autistic adolescents and young adults reported a decrease in hippocampal volume35. Other investigations, such as ref. 37, have shown no significant differences in hippocampal volume between ASD and control subjects. Recent studies on autism have focused on finding abnormalities related to brain development. In ref. 38, it was found that pre-adolescent children with ASD exhibited a lack of normative age-related cortical thinning, volumetric reduction, and an abnormal age-related increase in cortical gyrification. It has been hypothesized that the abnormal trajectories in brain growth, observed in children with autism, alter patterns of functional and structural connectivity during development39.
Morphological differences between male and female brains have been explored; such differences are of interest since the prevalence and symptoms of various disorders are linked with gender40. For instance, autism is diagnosed four to five times more frequently in boys than girls41,42, and multiple sclerosis is four times more frequent in females than males43. Likewise, anxiety, depression and eating disorders have a markedly higher incidence in females than males, especially during adolescence44,45. The incidence of schizophrenia also differs between males and females across the lifespan46. In terms of brain development, the growth trajectories of several brain regions have been shown to be linked to the sex of a subject, with some regions developing faster in boys and others in girls47,48. Various studies have also investigated sex differences associated with autism. A recent study showed that the cortical volume and ventromedial/orbitofrontal prefrontal gyrification of females are greater than those of males, in both the ASD and healthy subject groups49. In another study, the severity of repetitive/restricted behaviors, often observed in autism, was found to be associated with sex differences in the gray matter morphometry of motor regions50.
A number of studies have focused on morphometric brain changes associated with aging4. In refs 51, 52, 53, cross-sectional and longitudinal analyses of brain region volumes revealed that the shrinkage of the hippocampus, the entorhinal cortices, the inferior temporal cortex and the prefrontal white matter increased with age. These studies have also highlighted trends towards age-related atrophy in the amygdala and cingulate gyrus of elderly individuals. 
Conversely, other investigations found no significant volumetric changes of the temporolimbic and cingulate cortical regions during the aging process54,55,56. A recent study applied voxel-based morphometry to compare the white matter, grey matter and cerebrospinal fluid volumes of ASD males to control male subjects57. The results of this analysis demonstrated highly age-dependent atypical brain morphometry in ASD subjects. Other investigations have reported that neuroanatomical abnormalities in ASD are highly age-dependent58,59.
The vast majority of morphometric analyses in this review have focused on voxel-wise or volumetric measurements derived from brain MRI data. Texture features provide a complementary basis for analysis by summarizing distributions of localized image measurements, e.g. filter responses, within meaningful image regions. Several studies have begun to investigate texture in brain MRI, for example to identify differences between Alzheimer’s and control groups22, to discriminate between ASD and TDC subjects60, and to evaluate the survival time of GBM patients17,61. Texture features can be computed at multiple scales within regions of interest; for example, multi-scale textures based on the LoG filter have been proposed for grading cerebral gliomas16.
Among the works most closely related to this paper is the approach of Kovalev et al.62, where texture features were used to measure the effects of gender and age on structural brain asymmetry. In this work, 3D texture features based on extended multi-sort co-occurrence matrices were extracted from rectangular or spherical regions in T1-weighted MRI volumes, and compared across left and right hemispheres. This analysis revealed a greater asymmetry in male brains, most pronounced in the superior temporal gyrus, Heschl’s gyrus, thalamus, and posterior cingulate. Asymmetry was also found to increase with age in various areas of the brain, such as the inferior frontal gyrus, anterior insula, and anterior cingulate. While this work also investigated the link between MRI texture, gender and age, it was limited to lateral asymmetry and did not consider textural differences across gender and age groups. Moreover, texture features were obtained from arbitrarily defined sub-volumes that do not correspond to known neuroanatomical regions. In contrast, our work links texture observations to standard neuroanatomical regions obtained from a parcellation atlas, which provide a more physiologically meaningful basis for analysis in comparison to arbitrarily defined regions.
To our knowledge, no work has yet investigated MRI texture analysis within neuroanatomical regions, obtained from a parcellation atlas, as a basis for studying differences related to autism, sex and age.
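Because the group analyses discussed above reduce to per-region significance tests thresholded at a corrected p < 0.05, a short sketch of that step may be helpful. The abstract does not state which statistical test or correction procedure was used, so the Mann-Whitney U test and Benjamini-Hochberg FDR correction below are assumptions chosen only to illustrate the workflow, and the feature matrices are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def benjamini_hochberg(p_values):
    """Benjamini-Hochberg FDR-adjusted p-values (assumed correction method)."""
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    # Enforce monotonicity from the largest p-value downwards
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty_like(p)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

def compare_regions(features_group_a, features_group_b, region_names, alpha=0.05):
    """Per-region two-sample tests between groups (e.g. ASD vs. TDC).

    features_group_a / _b : arrays of shape (subjects, regions), holding one
                            texture feature per region per subject.
    Returns the regions whose corrected p-value falls below alpha.
    """
    p_values = [
        mannwhitneyu(features_group_a[:, j], features_group_b[:, j],
                     alternative="two-sided").pvalue
        for j in range(len(region_names))
    ]
    p_adjusted = benjamini_hochberg(p_values)
    return [(name, p_adj) for name, p_adj in zip(region_names, p_adjusted)
            if p_adj < alpha]

# Example with synthetic feature matrices (placeholders for real region textures)
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    regions = [f"region_{i}" for i in range(31)]
    asd = rng.normal(0.0, 1.0, size=(539, 31))   # 539 ASD subjects, 31 regions
    tdc = rng.normal(0.0, 1.0, size=(573, 31))   # 573 TDC subjects, 31 regions
    tdc[:, 5] += 0.3  # inject one true group difference for illustration
    print(compare_regions(asd, tdc, regions))
```

The same structure extends naturally to the male vs. female comparison, and the age analysis would replace the two-sample test with a per-region correlation (for example Spearman's rank correlation) followed by the same multiple-comparison correction.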
[ "3064974", "10714055", "23717269", "26594141", "17056234", "19726029", "11061265", "26901134", "26971430", "20395383", "11403201", "24892406", "17695498", "18362333", "21451997", "22898692", "24029645", "22609452", "12136055", "11468308", "7639631", "12077008", "10599796", "27806078", "9586772", "18171375", "23406909", "21208397", "21383252", "20841422", "17513132", "26146534", "26347127", "11445259", "8544910", "17630048", "17023094", "26045942", "12880818", "22248573", "17707313", "19760235", "26174708", "16326115", "11124043", "23658641", "8660125", "7854416", "12194864", "22846656", "26119913", "20211265", "20621657" ]
[ { "pmid": "10714055", "title": "Subgroups of children with autism by cluster analysis: a longitudinal examination.", "abstract": "OBJECTIVES\nA hierarchical cluster analysis was conducted using a sample of 138 school-age children with autism. The objective was to examine (1) the characteristics of resulting subgroups, (2) the relationship of these subgroups to subgroups of the same children determined at preschool age, and (3) preschool variables that best predicted school-age functioning.\n\n\nMETHOD\nNinety-five cases were analyzed.\n\n\nRESULTS\nFindings support the presence of 2 subgroups marked by different levels of social, language, and nonverbal ability, with the higher group showing essentially normal cognitive and behavioral scores. The relationship of high- and low-functioning subgroup membership to levels of functioning at preschool age was highly significant.\n\n\nCONCLUSIONS\nSchool-age functioning was strongly predicted by preschool cognitive functioning but was not strongly predicted by preschool social abnormality or severity of autistic symptoms. The differential outcome of the 2 groups shows that high IQ is necessary but not sufficient for optimal outcome in the presence of severe language impairment." }, { "pmid": "23717269", "title": "Is autism a disease of the cerebellum? An integration of clinical and pre-clinical research.", "abstract": "Autism spectrum disorders are a group of neurodevelopmental disorders characterized by deficits in social skills and communication, stereotyped and repetitive behavior, and a range of deficits in cognitive function. While the etiology of autism is unknown, current research indicates that abnormalities of the cerebellum, now believed to be involved in cognitive function and the prefrontal cortex (PFC), are associated with autism. The current paper proposes that impaired cerebello-cortical circuitry could, at least in part, underlie autistic symptoms. The use of animal models that allow for manipulation of genetic and environmental influences are an effective means of elucidating both distal and proximal etiological factors in autism and their potential impact on cerebello-cortical circuitry. Some existing rodent models of autism, as well as some models not previously applied to the study of the disorder, display cerebellar and behavioral abnormalities that parallel those commonly seen in autistic patients. The novel findings produced from research utilizing rodent models could provide a better understanding of the neurochemical and behavioral impact of changes in cerebello-cortical circuitry in autism." }, { "pmid": "26594141", "title": "Autism spectrum disorders and neuropathology of the cerebellum.", "abstract": "The cerebellum contains the largest number of neurons and synapses of any structure in the central nervous system. The concept that the cerebellum is solely involved in fine motor function has become outdated; substantial evidence has accumulated linking the cerebellum with higher cognitive functions including language. Cerebellar deficits have been implicated in autism for more than two decades. The computational power of the cerebellum is essential for many, if not most of the processes that are perturbed in autism including language and communication, social interactions, stereotyped behavior, motor activity and motor coordination, and higher cognitive functions. The link between autism and cerebellar dysfunction should not be surprising to those who study its cellular, physiological, and functional properties. 
Postmortem studies have revealed neuropathological abnormalities in cerebellar cellular architecture while studies on mouse lines with cell loss or mutations in single genes restricted to cerebellar Purkinje cells have also strongly implicated this brain structure in contributing to the autistic phenotype. This connection has been further substantiated by studies investigating brain damage in humans restricted to the cerebellum. In this review, we summarize advances in research on idiopathic autism and three genetic forms of autism that highlight the key roles that the cerebellum plays in this spectrum of neurodevelopmental disorders." }, { "pmid": "17056234", "title": "Detection and mapping of hippocampal abnormalities in autism.", "abstract": "Brain imaging studies of the hippocampus in autism have yielded inconsistent results. In this study, a computational mapping strategy was used to examine the three-dimensional profile of hippocampal abnormalities in autism. Twenty-one males with autism (age: 9.5+/-3.3 years) and 24 male controls (age: 10.3+/-2.4 years) underwent a volumetric magnetic resonance imaging scan at 3 Tesla. The hippocampus was delineated, using an anatomical protocol, and hippocampal volumes were compared between the two groups. Hippocampal traces were also converted into three-dimensional parametric surface meshes, and statistical brain maps were created to visualize morphological differences in the shape and thickness of the hippocampus between groups. Parametric surface meshes and shape analysis revealed subtle differences between patients and controls, particularly in the right posterior hippocampus. These deficits were significant even though the groups did not differ significantly with traditional measures of hippocampal volume. These results suggest that autism may be associated with subtle regional reductions in the size of the hippocampus. The increased statistical and spatial power of computational mapping methods provided the ability to detect these differences, which were not found with traditional volumetric methods." }, { "pmid": "19726029", "title": "Amygdala enlargement in toddlers with autism related to severity of social and communication impairments.", "abstract": "BACKGROUND\nAutism is a heterogeneous neurodevelopmental disorder of unknown etiology. The amygdala has long been a site of intense interest in the search for neuropathology in autism, given its role in emotional and social behavior. An interesting hypothesis has emerged that the amygdala undergoes an abnormal developmental trajectory with a period of early overgrowth in autism; however this finding has not been well established at young ages nor analyzed with boys and girls independently.\n\n\nMETHODS\nWe measured amygdala volumes on magnetic resonance imaging scans from 89 toddlers at 1-5 years of age (mean = 3 years). Each child returned at approximately 5 years of age for final clinical evaluation.\n\n\nRESULTS\nToddlers who later received a confirmed autism diagnosis (32 boys, 9 girls) had a larger right (p < .01) and left (p < .05) amygdala compared with typically developing toddlers (28 boys, 11 girls) with and without covarying for total cerebral volume. Amygdala size in toddlers with autism spectrum disorder correlated with the severity of their social and communication impairments as measured on the Autism Diagnostic Interview and Vineland scale. 
Strikingly, girls differed more robustly from typical in amygdala volume, whereas boys accounted for the significant relationship of amygdala size with severity of clinical impairment.\n\n\nCONCLUSIONS\nThis study provides evidence that the amygdala is enlarged in young children with autism; the overgrowth must begin before 3 years of age and is associated with the severity of clinical impairments. However, neuroanatomic phenotypic profiles differ between males and females, which critically affects future studies on the genetics and etiology of autism." }, { "pmid": "11061265", "title": "Corpus callosum size in autism.", "abstract": "The size of the seven subregions of the corpus callosum was measured on MRI scans from 22 non-mentally retarded autistic subjects and 22 individually matched controls. Areas of the anterior subregions were smaller in the autistic group. In a subsample, measurements were adjusted for intracranial, total brain, and white matter volumes and the differences between groups remained significant. No differences were found in the other subregions. This observation is consistent with the frontal lobe dysfunction reported in autism." }, { "pmid": "26901134", "title": "Multi Texture Analysis of Colorectal Cancer Continuum Using Multispectral Imagery.", "abstract": "PURPOSE\nThis paper proposes to characterize the continuum of colorectal cancer (CRC) using multiple texture features extracted from multispectral optical microscopy images. Three types of pathological tissues (PT) are considered: benign hyperplasia, intraepithelial neoplasia and carcinoma.\n\n\nMATERIALS AND METHODS\nIn the proposed approach, the region of interest containing PT is first extracted from multispectral images using active contour segmentation. This region is then encoded using texture features based on the Laplacian-of-Gaussian (LoG) filter, discrete wavelets (DW) and gray level co-occurrence matrices (GLCM). To assess the significance of textural differences between PT types, a statistical analysis based on the Kruskal-Wallis test is performed. The usefulness of texture features is then evaluated quantitatively in terms of their ability to predict PT types using various classifier models.\n\n\nRESULTS\nPreliminary results show significant texture differences between PT types, for all texture features (p-value < 0.01). Individually, GLCM texture features outperform LoG and DW features in terms of PT type prediction. However, a higher performance can be achieved by combining all texture features, resulting in a mean classification accuracy of 98.92%, sensitivity of 98.12%, and specificity of 99.67%.\n\n\nCONCLUSIONS\nThese results demonstrate the efficiency and effectiveness of combining multiple texture features for characterizing the continuum of CRC and discriminating between pathological tissues in multispectral images." }, { "pmid": "26971430", "title": "Diagnostic performance of texture analysis on MRI in grading cerebral gliomas.", "abstract": "BACKGROUND AND PURPOSE\nGrading of cerebral gliomas is important both in treatment decision and assessment of prognosis. The purpose of this study was to determine the diagnostic accuracy of grading cerebral gliomas by assessing the tumor heterogeneity using MRI texture analysis (MRTA).\n\n\nMATERIAL AND METHODS\n95 patients with gliomas were included, 27 low grade gliomas (LGG) all grade II and 68 high grade gliomas (HGG) (grade III=34 and grade IV=34). 
International Journal of Biomedical Imaging
28408921
PMC5376475
10.1155/2017/1985796
Phase Segmentation Methods for an Automatic Surgical Workflow Analysis
In this paper, we present robust methods for automatically segmenting phases in a specified surgical workflow by using latent Dirichlet allocation (LDA) and hidden Markov model (HMM) approaches. More specifically, our goal is to output an appropriate phase label for each given time point of a surgical workflow in an operating room. The fundamental idea behind our work lies in constructing an HMM based on observed values obtained via an LDA topic model covering optical flow motion features of general working contexts, including medical staff, equipment, and materials. We have an awareness of such working contexts by using multiple synchronized cameras to capture the surgical workflow. Further, we validate the robustness of our methods by conducting experiments involving up to 12 phases of surgical workflows with the average length of each surgical workflow being 12.8 minutes. The maximum average accuracy achieved after applying leave-one-out cross-validation was 84.4%, which we found to be a very promising result.
2. Related Work
Numerous methods have been developed for identifying intraoperative activities, segmenting common phases in a surgical workflow, and combining all gained knowledge into a model of the given workflows [4–7]. In work on segmenting surgical phases, various types of data have been used, such as manual annotations by observers [8], sensor data obtained from surgical tracking tools based on frames of recorded videos [9, 10], intraoperative localization systems [4], and surgical robots [11]. In [4], Agarwal et al. incorporated patient monitoring systems used to acquire vital signs of patients during surgery. In [5], Stauder et al. proposed a method that uses random decision forests to segment surgical workflow phases based on instrument usage data and other easily obtainable measurements.
Recently, decision forests have become a very versatile and popular tool in the field of medical image analysis. In [6], Suzuki et al. developed the Intelligent Operating Theater, which has a multichannel video recorder and is able to detect intraoperative incidents. The system is installed in the operating room and analyzes video files that capture the motions of the surgical staff. Intraoperative information is then transmitted to another room in real time so that a supervisor can support the surgical workflow. In [7], Padoy et al. used three-dimensional motion features to estimate human activities in environments including the operating room and production lines. They defined workflows as ordered groups of activities with different durations and temporal patterns; the three-dimensional motion data are obtained in real time from videos captured by multiple cameras. A recent methodological review of the literature is available in [12].
In the medical domain, HMMs have been used successfully in several studies to model surgical activities for skill evaluation [13–15]. In [13], Leong et al. recorded six degrees-of-freedom (DOF) data from a laparoscopic simulator and used them to train a four-state HMM that classifies subjects according to their skill level. In [14], Rosen et al. constructed an HMM using data from two endoscopic tools, including position, orientation, force, and torque; the HMM was able to identify differences in the skill levels of subjects with different levels of training. In [15], Bhatia et al. segmented four phases, namely a patient entering the room, a patient leaving the room, and the beginning and end of a surgical workflow, by combining support vector machines (SVMs) and HMMs applied to video images.
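The pipeline described in the abstract (optical-flow motion features summarized by an LDA topic model, whose per-time-step topic mixtures then serve as observations for an HMM over surgical phases) can be prototyped with off-the-shelf libraries. The following is only a minimal sketch under stated assumptions, not the authors' implementation: the motion-feature matrix is placeholder data, the number of topics is assumed, and an unsupervised GaussianHMM from the hmmlearn package stands in for the phase model trained in the paper.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from hmmlearn import hmm

# Assumed input: per-time-step bag-of-visual-words counts of optical-flow
# motion features from the multi-camera recordings (T time steps, V "words").
rng = np.random.default_rng(0)
motion_counts = rng.poisson(lam=2.0, size=(600, 50))    # placeholder data

# Step 1: summarize the motion vocabulary with an LDA topic model.
lda = LatentDirichletAllocation(n_components=8, random_state=0)  # 8 topics assumed
topic_obs = lda.fit_transform(motion_counts)             # (T, 8) topic mixtures

# Step 2: fit an HMM whose hidden states play the role of surgical phases and
# whose observations are the per-time-step topic mixtures.
phase_hmm = hmm.GaussianHMM(n_components=12,              # up to 12 phases, as in the paper
                            covariance_type="diag",
                            n_iter=50, random_state=0)
phase_hmm.fit(topic_obs)

# Step 3: decode the most likely phase label for each time point (Viterbi).
phase_labels = phase_hmm.predict(topic_obs)
print(phase_labels[:20])
```

In practice, the HMM states would be trained or relabeled against annotated phases (e.g., via leave-one-out cross-validation as in the paper) rather than used purely unsupervised as in this sketch.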
[ "21632485", "20526819", "21195015", "17127647", "24014322", "16532766" ]
[ { "pmid": "20526819", "title": "Analysis of surgical intervention populations using generic surgical process models.", "abstract": "PURPOSE\nAccording to differences in patient characteristics, surgical performance, or used surgical technological resources, surgical interventions have high variability. No methods for the generation and comparison of statistical 'mean' surgical procedures are available. The convenience of these models is to provide increased evidence for clinical, technical, and administrative decision-making.\n\n\nMETHODS\nBased on several measurements of patient individual surgical treatments, we present a method of how to calculate a statistical 'mean' intervention model, called generic Surgical Process Model (gSPM), from a number of interventions. In a proof-of-concept study, we show how statistical 'mean' procedure courses can be computed and how differences between several of these models can be quantified. Patient individual surgical treatments of 102 cataract interventions from eye surgery were allocated to an ambulatory or inpatient sample, and the gSPMs for each of the samples were computed. Both treatment strategies are exemplary compared for the interventional phase Capsulorhexis.\n\n\nRESULTS\nStatistical differences between the gSPMs of ambulatory and inpatient procedures of performance times for surgical activities and activity sequences were identified. Furthermore, the work flow that corresponds to the general recommended clinical treatment was recovered out of the individual Surgical Process Models.\n\n\nCONCLUSION\nThe computation of gSPMs is a new approach in medical engineering and medical informatics. It supports increased evidence, e.g. for the application of alternative surgical strategies, investments for surgical technology, optimization protocols, or surgical education. Furthermore, this may be applicable in more technical research fields, as well, such as the development of surgical workflow management systems for the operating room of the future." }, { "pmid": "21195015", "title": "Statistical modeling and recognition of surgical workflow.", "abstract": "In this paper, we contribute to the development of context-aware operating rooms by introducing a novel approach to modeling and monitoring the workflow of surgical interventions. We first propose a new representation of interventions in terms of multidimensional time-series formed by synchronized signals acquired over time. We then introduce methods based on Dynamic Time Warping and Hidden Markov Models to analyze and process this data. This results in workflow models combining low-level signals with high-level information such as predefined phases, which can be used to detect actions and trigger an event. Two methods are presented to train these models, using either fully or partially labeled training surgeries. Results are given based on tool usage recordings from sixteen laparoscopic cholecystectomies performed by several surgeons." }, { "pmid": "17127647", "title": "Towards automatic skill evaluation: detection and segmentation of robot-assisted surgical motions.", "abstract": "This paper reports our progress in developing techniques for \"parsing\" raw motion data from a simple surgical task into a labeled sequence of surgical gestures. The ability to automatically detect and segment surgical motion can be useful in evaluating surgical skill, providing surgical training feedback, or documenting essential aspects of a procedure. 
If processed online, the information can be used to provide context-specific information or motion enhancements to the surgeon. However, in every case, the key step is to relate recorded motion data to a model of the procedure being performed. Robotic surgical systems such as the da Vinci system from Intuitive Surgical provide a rich source of motion and video data from surgical procedures. The application programming interface (API) of the da Vinci outputs 192 kinematics values at 10 Hz. Through a series of feature-processing steps, tailored to this task, the highly redundant features are projected to a compact and discriminative space. The resulting classifier is simple and effective.Cross-validation experiments show that the proposed approach can achieve accuracies higher than 90% when segmenting gestures in a 4-throw suturing task, for both expert and intermediate surgeons. These preliminary results suggest that gesture-specific features can be extracted to provide highly accurate surgical skill evaluation." }, { "pmid": "24014322", "title": "Surgical process modelling: a review.", "abstract": "PURPOSE\nSurgery is continuously subject to technological and medical innovations that are transforming daily surgical routines. In order to gain a better understanding and description of surgeries, the field of surgical process modelling (SPM) has recently emerged. The challenge is to support surgery through the quantitative analysis and understanding of operating room activities. Related surgical process models can then be introduced into a new generation of computer-assisted surgery systems.\n\n\nMETHODS\nIn this paper, we present a review of the literature dealing with SPM. This methodological review was obtained from a search using Google Scholar on the specific keywords: \"surgical process analysis\", \"surgical process model\" and \"surgical workflow analysis\".\n\n\nRESULTS\nThis paper gives an overview of current approaches in the field that study the procedural aspects of surgery. We propose a classification of the domain that helps to summarise and describe the most important components of each paper we have reviewed, i.e., acquisition, modelling, analysis, application and validation/evaluation. These five aspects are presented independently along with an exhaustive list of their possible instantiations taken from the studied publications.\n\n\nCONCLUSION\nThis review allows a greater understanding of the SPM field to be gained and introduces future related prospects." }, { "pmid": "16532766", "title": "Generalized approach for modeling minimally invasive surgery as a stochastic process using a discrete Markov model.", "abstract": "Minimally invasive surgery (MIS) involves a multidimensional series of tasks requiring a synthesis between visual information and the kinematics and dynamics of the surgical tools. Analysis of these sources of information is a key step in defining objective criteria for characterizing surgical performance. The Blue DRAGON is a new system for acquiring the kinematics and the dynamics of two endoscopic tools synchronized with the endoscopic view of the surgical scene. Modeling the process of MIS using a finite state model [Markov model (MM)] reveals the internal structure of the surgical task and is utilized as one of the key steps in objectively assessing surgical performance. 
The experimental protocol includes tying an intracorporeal knot in a MIS setup performed on an animal model (pig) by 30 surgeons at different levels of training including expert surgeons. An objective learning curve was defined based on measuring quantitative statistical distance (similarity) between MM of experts and MM of residents at different levels of training. The objective learning curve was similar to that of the subjective performance analysis. The MM proved to be a powerful and compact mathematical model for decomposing a complex task such as laparoscopic suturing. Systems like surgical robots or virtual reality simulators in which the kinematics and the dynamics of the surgical tool are inherently measured may benefit from incorporation of the proposed methodology." } ]
JMIR Medical Informatics
28347973
PMC5387113
10.2196/medinform.6693
A Software Framework for Remote Patient Monitoring by Using Multi-Agent Systems Support
Background
Although there have been significant advances in network, hardware, and software technologies, the health care environment has not taken advantage of these developments to solve many of its inherent problems. Research activities in these 3 areas make it possible to apply advanced technologies to address many of these issues such as real-time monitoring of a large number of patients, particularly where a timely response is critical.
Objective
The objective of this research was to design and develop innovative technological solutions to offer a more proactive and reliable medical care environment. The short-term and primary goal was to construct IoT4Health, a flexible software framework to generate a range of Internet of things (IoT) applications, containing components such as multi-agent systems that are designed to perform Remote Patient Monitoring (RPM) activities autonomously. An investigation into its full potential to conduct such patient monitoring activities in a more proactive way is an expected future step.
Methods
A framework methodology was selected to evaluate whether the RPM domain had the potential to generate customized applications that could achieve the stated goal of being responsive and flexible within the RPM domain. As a proof of concept of the software framework’s flexibility, 3 applications were developed with different implementations for each framework hot spot to demonstrate potential. Agents4Health was selected to illustrate the instantiation process and IoT4Health’s operation. To develop more concrete indicators of the responsiveness of the simulated care environment, an experiment was conducted while Agents4Health was operating, to measure the number of delays incurred in monitoring the tasks performed by agents.
Results
IoT4Health’s construction can be highlighted as our contribution to the development of eHealth solutions. As a software framework, IoT4Health offers extensibility points for the generation of applications. Applications can extend the framework in the following ways: identification, collection, storage, recovery, visualization, monitoring, anomalies detection, resource notification, and dynamic reconfiguration. Based on other outcomes involving observation of the resulting applications, it was noted that its design contributed toward more proactive patient monitoring. Through these experimental systems, anomalies were detected in real time, with agents sending notifications instantly to the health providers.
Conclusions
We conclude that the cost-benefit of the construction of a more generic and complex system instead of a custom-made software system demonstrated the worth of the approach, making it possible to generate applications in this domain in a more timely fashion.
Related Work
Our proposal takes an approach similar to that in [19]. That paper describes the implementation of a distributed information infrastructure that uses the intelligent agent paradigm for (1) automatically notifying the patient’s medical team about abnormalities in his or her health status; (2) offering medical advice from a distance; and (3) enabling continuous monitoring of a patient’s health status. In addition, the authors promote the adoption of ubiquitous computing systems [20] and apps that allow immediate analysis of a patient’s physiological data, as well as personalized real-time feedback on their condition through an alarm-and-remember mechanism. In this solution, patients can be evaluated, diagnosed, and cared for in a mode that is both remote and ubiquitous. In the case of rapid deterioration of a patient’s condition, the system automatically notifies the medical team through voice calls or SMS messages, providing a first-level medical response. This proposal differs from ours in that the resulting application is closed, as opposed to our broader eHealth application generator.
The approach in [21] focuses on the design and development of a distributed information system based on mobile agents that allows automatic, real-time fetal monitoring. Devices such as a PDA, mobile phone, laptop, or personal computer are used to capture and display the monitored data.
In [22], mobile health apps are proposed as solutions for (1) overcoming barriers to personalized health services; (2) providing timely access to critical information on a patient’s health status; and (3) avoiding duplication of exams, delays, and errors in patient treatment.
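The monitor-detect-notify cycle that these systems (and our agents) perform can be illustrated schematically. The sketch below is not the IoT4Health/Agents4Health implementation; the vital-sign limits, the reading format, and the notify() callback are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class VitalReading:
    patient_id: str
    heart_rate: float        # beats per minute
    spo2: float              # oxygen saturation, percent

# Assumed normal ranges used to flag anomalies (illustrative values).
LIMITS = {"heart_rate": (50.0, 120.0), "spo2": (92.0, 100.0)}

def detect_anomalies(reading: VitalReading) -> list:
    """Return the vital signs that fall outside the assumed limits."""
    alerts = []
    for name, (low, high) in LIMITS.items():
        value = getattr(reading, name)
        if not (low <= value <= high):
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

def notify(patient_id: str, alerts: list) -> None:
    """Placeholder for the notification channel (e.g., SMS, voice call, push)."""
    print(f"ALERT for patient {patient_id}: " + "; ".join(alerts))

def monitoring_agent(stream) -> None:
    """Consume a stream of readings and notify the care team on anomalies."""
    for reading in stream:
        alerts = detect_anomalies(reading)
        if alerts:
            notify(reading.patient_id, alerts)

# Example run on a small simulated stream.
monitoring_agent([
    VitalReading("p-001", heart_rate=72.0, spo2=97.0),
    VitalReading("p-001", heart_rate=134.0, spo2=89.0),  # triggers a notification
])
```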
[ "24452256", "24783916" ]
[ { "pmid": "24452256", "title": "A mobile multi-agent information system for ubiquitous fetal monitoring.", "abstract": "Electronic fetal monitoring (EFM) systems integrate many previously separate clinical activities related to fetal monitoring. Promoting the use of ubiquitous fetal monitoring services with real time status assessments requires a robust information platform equipped with an automatic diagnosis engine. This paper presents the design and development of a mobile multi-agent platform-based open information systems (IMAIS) with an automated diagnosis engine to support intensive and distributed ubiquitous fetal monitoring. The automatic diagnosis engine that we developed is capable of analyzing data in both traditional paper-based and digital formats. Issues related to interoperability, scalability, and openness in heterogeneous e-health environments are addressed through the adoption of a FIPA2000 standard compliant agent development platform-the Java Agent Development Environment (JADE). Integrating the IMAIS with light-weight, portable fetal monitor devices allows for continuous long-term monitoring without interfering with a patient's everyday activities and without restricting her mobility. The system architecture can be also applied to vast monitoring scenarios such as elder care and vital sign monitoring." }, { "pmid": "24783916", "title": "Patient monitoring in mobile health: opportunities and challenges.", "abstract": "BACKGROUND\nIn most countries chronic diseases lead to high health care costs and reduced productivity of people in society. The best way to reduce costs of health sector and increase the empowerment of people is prevention of chronic diseases and appropriate health activities management through monitoring of patients. To enjoy the full benefits of E-health, making use of methods and modern technologies is very important.\n\n\nMETHODS\nThis literature review articles were searched with keywords like Patient monitoring, Mobile Health, and Chronic Disease in Science Direct, Google Scholar and Pub Med databases without regard to the year of publications.\n\n\nRESULTS\nApplying remote medical diagnosis and monitoring system based on mobile health systems can help significantly to reduce health care costs, correct performance management particularly in chronic disease management. Also some challenges are in patient monitoring in general and specific aspects like threats to confidentiality and privacy, technology acceptance in general and lack of system interoperability with electronic health records and other IT tools, decrease in face to face communication between doctor and patient, sudden interruptions of telecommunication networks, and device and sensor type in specific aspect.\n\n\nCONCLUSIONS\nIt is obvious identifying the opportunities and challenges of mobile technology and reducing barriers, strengthening the positive points will have a significant role in the appropriate planning and promoting the achievements of the health care systems based on mobile and helps to design a roadmap for improvement of mobile health." } ]
Journal of Cheminformatics
29086119
PMC5395521
10.1186/s13321-017-0209-z
SimBoost: a read-across approach for predicting drug–target binding affinities using gradient boosting machines
Computational prediction of the interaction between drugs and targets is a standing challenge in the field of drug discovery. A number of rather accurate predictions were reported for various binary drug–target benchmark datasets. However, a notable drawback of a binary representation of interaction data is that missing endpoints for non-interacting drug–target pairs are not differentiated from inactive cases, and that predicted levels of activity depend on pre-defined binarization thresholds. In this paper, we present a method called SimBoost that predicts continuous (non-binary) values of binding affinities of compounds and proteins and thus incorporates the whole interaction spectrum from true negative to true positive interactions. Additionally, we propose a version of the method called SimBoostQuant which computes a prediction interval in order to assess the confidence of the predicted affinity, thus defining the Applicability Domain metrics explicitly. We evaluate SimBoost and SimBoostQuant on two established drug–target interaction benchmark datasets and one new dataset that we propose to use as a benchmark for read-across cheminformatics applications. We demonstrate that our methods outperform the previously reported models across the studied datasets.
Related work
Traditional methods for drug–target interaction prediction typically focus on one particular target of interest. These approaches can be further divided into two types: target-based approaches [12–14] and ligand-based approaches [15–18]. In target-based approaches, the molecular docking of a candidate compound with the protein target is simulated, based on the 3D structure of the target (and the compound). This approach is widely used to virtually screen compounds against target proteins; however, it is not applicable when the 3D structure of a target protein is not available, which is often the case, especially for G-protein coupled receptors and ion channels. The intuition behind ligand-based methods is to model the common characteristics of a target based on its known interacting ligands (compounds). One interesting example of this approach is the study [4], which utilizes similarities in the side-effects of known drugs to predict new drug–target interactions. However, the ligand-based approach may not work well if the number of known interacting ligands of a protein target is small.
To allow more efficient predictions on a larger scale, i.e., for many targets simultaneously, and to overcome the limitations of the traditional methods, machine-learning-based approaches have recently attracted much attention. In the chemical and biological sciences, machine-learning-based approaches have been known as (multi-target) Quantitative structure–activity relationship (QSAR) methods, which relate a set of predictor variables, describing the physico-chemical properties of a drug–target pair, to the response variable, representing the existence or the strength of an interaction.
Current machine learning methods can be classified into two types: feature-based and similarity-based approaches. In feature-based methods, known drug–target interactions are represented by feature vectors generated by combining chemical descriptors of drugs with descriptors for targets [19–23]. With these feature vectors as input, standard machine learning methods such as Support Vector Machines (SVM), Naïve Bayes (NB), or Neural Networks (NN) can be used to predict the interaction of new drug–target pairs. Vina et al. [24] propose a method that takes into consideration only the sequence of the target and the chemical connectivity of the drug, without relying on geometry optimization or drug–drug and target–target similarities. Cheng et al. [25] introduce a multi-target QSAR method that integrates chemical substructures and protein sequence descriptors to predict interactions for G-protein coupled receptors and kinases based on two comprehensive data sets derived from the ChEMBL database. Merget et al. [26] evaluate different machine learning methods and data balancing schemes and report that random forests yielded the best activity prediction and allowed accurate inference of compound selectivity.
In similarity-based methods [3, 27–32], similarity matrices for both drug–drug pairs and target–target pairs are generated. Different types of similarity metrics can be used to generate these matrices [33]; typically, chemical structure fingerprints are used to compute the similarity among drugs and a protein sequence alignment score is used for targets. One of the simplest ways of using the similarities is a Nearest Neighbor classifier [28], which predicts new interactions from the similarity-weighted sum of the interaction profiles of the most similar drugs/targets.
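As a concrete illustration of this similarity-weighted nearest-neighbor idea, the sketch below predicts a drug's interaction profile from the profiles of its most similar drugs. It is a generic illustration on assumed toy matrices, not the implementation from [28]; matrix sizes and values are arbitrary.

```python
import numpy as np

def nn_predict(drug_sim, interactions, query_idx, k=3):
    """Predict the interaction profile of drug `query_idx` as the
    similarity-weighted average of the profiles of its k most similar drugs."""
    sims = drug_sim[query_idx].copy()
    sims[query_idx] = -np.inf                 # exclude the query drug itself
    neighbors = np.argsort(sims)[-k:]         # indices of the k most similar drugs
    weights = drug_sim[query_idx, neighbors]
    weights = weights / weights.sum()
    return weights @ interactions[neighbors]  # weighted sum of neighbor profiles

# Toy data: a 5x5 drug-drug similarity matrix and a 5 drugs x 4 targets
# interaction matrix (values purely illustrative).
rng = np.random.default_rng(1)
drug_sim = rng.uniform(0.1, 1.0, size=(5, 5))
drug_sim = (drug_sim + drug_sim.T) / 2        # make the similarity symmetric
interactions = rng.integers(0, 2, size=(5, 4)).astype(float)

print(nn_predict(drug_sim, interactions, query_idx=0, k=3))
```

The analogous prediction for a target would use the target–target similarity matrix and the transposed interaction matrix.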
The kernel method proposed in [27] computes a similarity for all drug–target pairs (a pairwise kernel) from the drug–drug and target–target similarities and then uses this kernel over drug–target pairs with known labels to train an SVM classifier. The approaches presented in [28–30] represent drug–target interactions by a bipartite graph and label drug–target pairs as +1 if the edge exists and −1 otherwise. For each drug and for each target, a separate SVM (local model) is trained, which predicts interactions of that drug (target) with all targets (drugs). The similarity matrices are used as kernels for those SVMs, and the final prediction for a pair is obtained by averaging the scores of the respective drug SVM and target SVM.
All of the above machine-learning-based methods for drug–target interaction prediction formulate the task as a binary classification problem, with the goal of classifying a given drug–target pair as binding or non-binding. As pointed out in [4], drawbacks of the binary formulation are that true-negative interactions and untested drug–target pairs are not differentiated, and that the whole interaction spectrum, including both true-positive and true-negative interactions, is not covered well. Pahikkala et al. [4] introduce the method KronRLS, which predicts continuous drug–target binding affinity values. To the best of our knowledge, KronRLS is the only method in the literature that predicts continuous binding affinities, and we give a detailed introduction to KronRLS below, since we use it as a baseline in our experiments. Below, we also introduce Matrix Factorization, as it has been used in the literature for binary drug–target interaction prediction and plays an important role in our proposed method.
KronRLS
Regularized Least Squares (RLS) models have previously been shown to predict binary drug–target interactions with high accuracy [31]. KronRLS, as introduced in [4], can be seen as a generalization of these models to the prediction of continuous binding values.
Given a set $\{d_i\}$ of drugs and a set $\{t_j\}$ of targets, the training data consists of a set $X = \{x_1, \ldots, x_m\}$ of drug–target pairs ($X$ is a subset of $\{d_i \times t_j\}$) and an associated vector $y = y_1, \ldots, y_m$ of continuous binding affinities. The goal is to learn a prediction function $f(x)$ for all possible drug–target pairs $x \in \{d_i \times t_j\}$, i.e. a function that minimizes the objective

$$J(f) = \sum_{i = 1}^{m} \left( y_i - f(x_i) \right)^2 + \lambda \|f\|_k^2$$

In the objective function, $\|f\|_k^2$ is the norm of $f$, which is associated to a kernel function $k$ (described below), and $\lambda > 0$ is a user-defined regularization parameter. A minimizer of the above objective can be expressed as

$$f(x) = \sum_{i = 1}^{m} a_i \, k(x, x_i)$$

The kernel function $k$ is a symmetric similarity measure between two of the $m$ drug–target pairs, which can be represented by an $m \times m$ matrix $K$. For two individual similarity matrices $K_d$ and $K_t$ for the drugs and targets respectively, a similarity matrix for each drug–target pair can be computed as $K_d \otimes K_t$, where $\otimes$ stands for the Kronecker product. If the training set $X$ contains every possible pair of drugs and targets, $K$ can be computed as $K = K_d \otimes K_t$ and the parameter vector $a$ can be learnt by solving the following system of linear equations:

$$(K + \lambda I)\,a = y$$

where $I$ is the identity matrix. If only a subset of $\{d_i \times t_j\}$ is given as training data, the vector $y$ has missing values. To learn the parameter $a$, [4] suggests to use conjugate gradient with Kronecker algebraic optimization to solve the system of linear equations.

Matrix factorization
The Matrix Factorization (MF) technique has been demonstrated to be effective especially for personalized recommendation tasks [34], and it has been previously applied for drug–target interaction prediction [5–7]. In MF, a matrix $M \in R^{d \times t}$ (for the drug–target prediction task, $M$ represents a matrix of binding affinities of $d$ drugs and $t$ targets) is approximated by the product of two latent factor matrices $P \in R^{k \times d}$ and $Q \in R^{k \times t}$. The factor matrices $P$ and $Q$ are learned by minimizing the regularized squared error on the set of observed affinities $\kappa$:

$$\min_{Q,P} \sum_{(d_i, t_j) \in \kappa} \left( m_{i,j} - q_i^T p_j \right)^2 + \lambda \left( \|p\|^2 + \|q\|^2 \right)$$

The term $(m_{i,j} - q_i^T p_j)^2$ represents the fit of the learned parameters to the observed binding affinities. The term $\lambda (\|p\|^2 + \|q\|^2)$ penalizes the magnitudes of the learned parameters to prevent overfitting, and the constant $\lambda$ controls the weight of the two terms. With learned matrices $P$ and $Q$, a matrix $M'$ with predictions for all drug–target pairs can be computed as

$$M' = P^T Q$$

In SimBoost, the columns of the factor matrices $P$ and $Q$ are utilized as parts of the feature vectors for the drugs and targets respectively, and thus Matrix Factorization is used as a feature extraction step.
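To make these two building blocks concrete, the sketch below factorizes a toy affinity matrix using only its observed entries (stochastic gradient descent on the regularized squared error given above) and then reuses the learned latent vectors as part of the feature vector of a gradient-boosted regressor, mirroring the feature-extraction role that Matrix Factorization plays in SimBoost. Matrix sizes, the learning rate, and the use of scikit-learn's GradientBoostingRegressor are assumptions made for illustration; this is not the authors' exact setup, and the additional similarity-based features used in SimBoost are omitted here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_drugs, n_targets, k = 30, 20, 5                      # toy sizes (assumed)
M = rng.normal(6.0, 1.0, size=(n_drugs, n_targets))    # toy binding affinities
observed = rng.random((n_drugs, n_targets)) < 0.3      # ~30% of entries observed

# Latent factor matrices P (drugs) and Q (targets), learned by SGD on the
# regularized squared error over the observed entries only.
P = 0.1 * rng.normal(size=(k, n_drugs))
Q = 0.1 * rng.normal(size=(k, n_targets))
lam, lr = 0.1, 0.01
obs_idx = np.argwhere(observed)
for _ in range(200):                                   # a few SGD epochs
    for i, j in obs_idx:
        err = M[i, j] - P[:, i] @ Q[:, j]
        P[:, i] += lr * (err * Q[:, j] - lam * P[:, i])
        Q[:, j] += lr * (err * P[:, i] - lam * Q[:, j])

# SimBoost-style step: concatenate the latent vectors of a drug-target pair
# into a feature vector and fit a gradient boosting regressor on observed pairs.
X = np.array([np.concatenate([P[:, i], Q[:, j]]) for i, j in obs_idx])
y = np.array([M[i, j] for i, j in obs_idx])
gbm = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
gbm.fit(X, y)

# Predict the affinity of some drug-target pair from its latent-factor features.
i, j = 0, 1
print(gbm.predict(np.concatenate([P[:, i], Q[:, j]]).reshape(1, -1)))
```

In the full method, these latent factors are only one of several feature groups (alongside drug-, target-, and pair-level similarity features) fed to the gradient boosting machine.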
[ "23933754", "24723570", "26872142", "26352634", "19399780", "17211405", "8780787", "18621671", "19578428", "21909252", "24244130", "21364574", "17510168", "19503826", "23259810", "12377017", "19281186", "22751809", "18676415", "19605421", "17646345", "18689844", "21893517", "26987649", "21336281", "22037378", "24521231" ]
[ { "pmid": "23933754", "title": "Similarity-based machine learning methods for predicting drug-target interactions: a brief review.", "abstract": "Computationally predicting drug-target interactions is useful to select possible drug (or target) candidates for further biochemical verification. We focus on machine learning-based approaches, particularly similarity-based methods that use drug and target similarities, which show relationships among drugs and those among targets, respectively. These two similarities represent two emerging concepts, the chemical space and the genomic space. Typically, the methods combine these two types of similarities to generate models for predicting new drug-target interactions. This process is also closely related to a lot of work in pharmacogenomics or chemical biology that attempt to understand the relationships between the chemical and genomic spaces. This background makes the similarity-based approaches attractive and promising. This article reviews the similarity-based machine learning methods for predicting drug-target interactions, which are state-of-the-art and have aroused great interest in bioinformatics. We describe each of these methods briefly, and empirically compare these methods under a uniform experimental setting to explore their advantages and limitations." }, { "pmid": "24723570", "title": "Toward more realistic drug-target interaction predictions.", "abstract": "A number of supervised machine learning models have recently been introduced for the prediction of drug-target interactions based on chemical structure and genomic sequence information. Although these models could offer improved means for many network pharmacology applications, such as repositioning of drugs for new therapeutic uses, the prediction models are often being constructed and evaluated under overly simplified settings that do not reflect the real-life problem in practical applications. Using quantitative drug-target bioactivity assays for kinase inhibitors, as well as a popular benchmarking data set of binary drug-target interactions for enzyme, ion channel, nuclear receptor and G protein-coupled receptor targets, we illustrate here the effects of four factors that may lead to dramatic differences in the prediction results: (i) problem formulation (standard binary classification or more realistic regression formulation), (ii) evaluation data set (drug and target families in the application use case), (iii) evaluation procedure (simple or nested cross-validation) and (iv) experimental setting (whether training and test sets share common drugs and targets, only drugs or targets or neither). Each of these factors should be taken into consideration to avoid reporting overoptimistic drug-target interaction prediction results. We also suggest guidelines on how to make the supervised drug-target interaction prediction studies more realistic in terms of such model formulations and evaluation setups that better address the inherent complexity of the prediction task in the practical applications, as well as novel benchmarking data sets that capture the continuous nature of the drug-target interactions for kinase inhibitors." }, { "pmid": "26872142", "title": "Neighborhood Regularized Logistic Matrix Factorization for Drug-Target Interaction Prediction.", "abstract": "In pharmaceutical sciences, a crucial step of the drug discovery process is the identification of drug-target interactions. 
However, only a small portion of the drug-target interactions have been experimentally validated, as the experimental validation is laborious and costly. To improve the drug discovery efficiency, there is a great need for the development of accurate computational approaches that can predict potential drug-target interactions to direct the experimental verification. In this paper, we propose a novel drug-target interaction prediction algorithm, namely neighborhood regularized logistic matrix factorization (NRLMF). Specifically, the proposed NRLMF method focuses on modeling the probability that a drug would interact with a target by logistic matrix factorization, where the properties of drugs and targets are represented by drug-specific and target-specific latent vectors, respectively. Moreover, NRLMF assigns higher importance levels to positive observations (i.e., the observed interacting drug-target pairs) than negative observations (i.e., the unknown pairs). Because the positive observations are already experimentally verified, they are usually more trustworthy. Furthermore, the local structure of the drug-target interaction data has also been exploited via neighborhood regularization to achieve better prediction accuracy. We conducted extensive experiments over four benchmark datasets, and NRLMF demonstrated its effectiveness compared with five state-of-the-art approaches." }, { "pmid": "26352634", "title": "Kernelized Bayesian Matrix Factorization.", "abstract": "We extend kernelized matrix factorization with a full-Bayesian treatment and with an ability to work with multiple side information sources expressed as different kernels. Kernels have been introduced to integrate side information about the rows and columns, which is necessary for making out-of-matrix predictions. We discuss specifically binary output matrices but extensions to realvalued matrices are straightforward. We extend the state of the art in two key aspects: (i) A full-conjugate probabilistic formulation of the kernelized matrix factorization enables an efficient variational approximation, whereas full-Bayesian treatments are not computationally feasible in the earlier approaches. (ii) Multiple side information sources are included, treated as different kernels in multiple kernel learning which additionally reveals which side sources are informative. We then show that the framework can also be used for supervised and semi-supervised multilabel classification and multi-output regression, by considering samples and outputs as the domains where matrix factorization operates. Our method outperforms alternatives in predicting drug-protein interactions on two data sets. On multilabel classification, our algorithm obtains the lowest Hamming losses on 10 out of 14 data sets compared to five state-of-the-art multilabel classification algorithms. We finally show that the proposed approach outperforms alternatives in multi-output regression experiments on a yeast cell cycle data set." }, { "pmid": "19399780", "title": "AutoDock4 and AutoDockTools4: Automated docking with selective receptor flexibility.", "abstract": "We describe the testing and release of AutoDock4 and the accompanying graphical user interface AutoDockTools. AutoDock4 incorporates limited flexibility in the receptor. Several tests are reported here, including a redocking experiment with 188 diverse ligand-protein complexes and a cross-docking experiment using flexible sidechains in 87 HIV protease complexes. 
We also report its utility in analysis of covalently bound ligands, using both a grid-based docking method and a modification of the flexible sidechain technique." }, { "pmid": "17211405", "title": "Structure-based maximal affinity model predicts small-molecule druggability.", "abstract": "Lead generation is a major hurdle in small-molecule drug discovery, with an estimated 60% of projects failing from lack of lead matter or difficulty in optimizing leads for drug-like properties. It would be valuable to identify these less-druggable targets before incurring substantial expenditure and effort. Here we show that a model-based approach using basic biophysical principles yields good prediction of druggability based solely on the crystal structure of the target binding site. We quantitatively estimate the maximal affinity achievable by a drug-like molecule, and we show that these calculated values correlate with drug discovery outcomes. We experimentally test two predictions using high-throughput screening of a diverse compound collection. The collective results highlight the utility of our approach as well as strategies for tackling difficult targets." }, { "pmid": "8780787", "title": "A fast flexible docking method using an incremental construction algorithm.", "abstract": "We present an automatic method for docking organic ligands into protein binding sites. The method can be used in the design process of specific protein ligands. It combines an appropriate model of the physico-chemical properties of the docked molecules with efficient methods for sampling the conformational space of the ligand. If the ligand is flexible, it can adopt a large variety of different conformations. Each such minimum in conformational space presents a potential candidate for the conformation of the ligand in the complexed state. Our docking method samples the conformation space of the ligand on the basis of a discrete model and uses a tree-search technique for placing the ligand incrementally into the active site. For placing the first fragment of the ligand into the protein, we use hashing techniques adapted from computer vision. The incremental construction algorithm is based on a greedy strategy combined with efficient methods for overlap detection and for the search of new interactions. We present results on 19 complexes of which the binding geometry has been crystallographically determined. All considered ligands are docked in at most three minutes on a current workstation. The experimentally observed binding mode of the ligand is reproduced with 0.5 to 1.2 A rms deviation. It is almost always found among the highest-ranking conformations computed." }, { "pmid": "18621671", "title": "Drug target identification using side-effect similarity.", "abstract": "Targets for drugs have so far been predicted on the basis of molecular or cellular features, for example, by exploiting similarity in chemical structure or in activity across cell lines. We used phenotypic side-effect similarities to infer whether two drugs share a target. Applied to 746 marketed drugs, a network of 1018 side effect-driven drug-drug relations became apparent, 261 of which are formed by chemically dissimilar drugs from different therapeutic indications. We experimentally tested 20 of these unexpected drug-drug relations and validated 13 implied drug-target relations by in vitro binding assays, of which 11 reveal inhibition constants equal to less than 10 micromolar. 
Nine of these were tested and confirmed in cell assays, documenting the feasibility of using phenotypic information to infer molecular interactions and hinting at new uses of marketed drugs." }, { "pmid": "19578428", "title": "Drug discovery using chemical systems biology: repositioning the safe medicine Comtan to treat multi-drug and extensively drug resistant tuberculosis.", "abstract": "The rise of multi-drug resistant (MDR) and extensively drug resistant (XDR) tuberculosis around the world, including in industrialized nations, poses a great threat to human health and defines a need to develop new, effective and inexpensive anti-tubercular agents. Previously we developed a chemical systems biology approach to identify off-targets of major pharmaceuticals on a proteome-wide scale. In this paper we further demonstrate the value of this approach through the discovery that existing commercially available drugs, prescribed for the treatment of Parkinson's disease, have the potential to treat MDR and XDR tuberculosis. These drugs, entacapone and tolcapone, are predicted to bind to the enzyme InhA and directly inhibit substrate binding. The prediction is validated by in vitro and InhA kinetic assays using tablets of Comtan, whose active component is entacapone. The minimal inhibition concentration (MIC(99)) of entacapone for Mycobacterium tuberculosis (M.tuberculosis) is approximately 260.0 microM, well below the toxicity concentration determined by an in vitro cytotoxicity model using a human neuroblastoma cell line. Moreover, kinetic assays indicate that Comtan inhibits InhA activity by 47.0% at an entacapone concentration of approximately 80 microM. Thus the active component in Comtan represents a promising lead compound for developing a new class of anti-tubercular therapeutics with excellent safety profiles. More generally, the protocol described in this paper can be included in a drug discovery pipeline in an effort to discover novel drug leads with desired safety profiles, and therefore accelerate the development of new drugs." }, { "pmid": "21909252", "title": "A computational approach to finding novel targets for existing drugs.", "abstract": "Repositioning existing drugs for new therapeutic uses is an efficient approach to drug discovery. We have developed a computational drug repositioning pipeline to perform large-scale molecular docking of small molecule drugs against protein drug targets, in order to map the drug-target interaction space and find novel interactions. Our method emphasizes removing false positive interaction predictions using criteria from known interaction docking, consensus scoring, and specificity. In all, our database contains 252 human protein drug targets that we classify as reliable-for-docking as well as 4621 approved and experimental small molecule drugs from DrugBank. These were cross-docked, then filtered through stringent scoring criteria to select top drug-target interactions. In particular, we used MAPK14 and the kinase inhibitor BIM-8 as examples where our stringent thresholds enriched the predicted drug-target interactions with known interactions up to 20 times compared to standard score thresholds. We validated nilotinib as a potent MAPK14 inhibitor in vitro (IC50 40 nM), suggesting a potential use for this drug in treating inflammatory diseases. The published literature indicated experimental evidence for 31 of the top predicted interactions, highlighting the promising nature of our approach. 
Novel interactions discovered may lead to the drug being repositioned as a therapeutic treatment for its off-target's associated disease, added insight into the drug's mechanism of action, and added insight into the drug's side effects." }, { "pmid": "24244130", "title": "Prediction of drug-target interactions for drug repositioning only based on genomic expression similarity.", "abstract": "Small drug molecules usually bind to multiple protein targets or even unintended off-targets. Such drug promiscuity has often led to unwanted or unexplained drug reactions, resulting in side effects or drug repositioning opportunities. So it is always an important issue in pharmacology to identify potential drug-target interactions (DTI). However, DTI discovery by experiment remains a challenging task, due to high expense of time and resources. Many computational methods are therefore developed to predict DTI with high throughput biological and clinical data. Here, we initiatively demonstrate that the on-target and off-target effects could be characterized by drug-induced in vitro genomic expression changes, e.g. the data in Connectivity Map (CMap). Thus, unknown ligands of a certain target can be found from the compounds showing high gene-expression similarity to the known ligands. Then to clarify the detailed practice of CMap based DTI prediction, we objectively evaluate how well each target is characterized by CMap. The results suggest that (1) some targets are better characterized than others, so the prediction models specific to these well characterized targets would be more accurate and reliable; (2) in some cases, a family of ligands for the same target tend to interact with common off-targets, which may help increase the efficiency of DTI discovery and explain the mechanisms of complicated drug actions. In the present study, CMap expression similarity is proposed as a novel indicator of drug-target interactions. The detailed strategies of improving data quality by decreasing the batch effect and building prediction models are also effectively established. We believe the success in CMap can be further translated into other public and commercial data of genomic expression, thus increasing research productivity towards valid drug repositioning and minimal side effects." }, { "pmid": "21364574", "title": "Analysis of multiple compound-protein interactions reveals novel bioactive molecules.", "abstract": "The discovery of novel bioactive molecules advances our systems-level understanding of biological processes and is crucial for innovation in drug development. For this purpose, the emerging field of chemical genomics is currently focused on accumulating large assay data sets describing compound-protein interactions (CPIs). Although new target proteins for known drugs have recently been identified through mining of CPI databases, using these resources to identify novel ligands remains unexplored. Herein, we demonstrate that machine learning of multiple CPIs can not only assess drug polypharmacology but can also efficiently identify novel bioactive scaffold-hopping compounds. Through a machine-learning technique that uses multiple CPIs, we have successfully identified novel lead compounds for two pharmaceutically important protein families, G-protein-coupled receptors and protein kinases. These novel compounds were not identified by existing computational ligand-screening methods in comparative studies. 
The results of this study indicate that data derived from chemical genomics can be highly useful for exploring chemical space, and this systems biology perspective could accelerate drug discovery processes." }, { "pmid": "17510168", "title": "Statistical prediction of protein chemical interactions based on chemical structure and mass spectrometry data.", "abstract": "MOTIVATION\nPrediction of interactions between proteins and chemical compounds is of great benefit in drug discovery processes. In this field, 3D structure-based methods such as docking analysis have been developed. However, the genomewide application of these methods is not really feasible as 3D structural information is limited in availability.\n\n\nRESULTS\nWe describe a novel method for predicting protein-chemical interaction using SVM. We utilize very general protein data, i.e. amino acid sequences, and combine these with chemical structures and mass spectrometry (MS) data. MS data can be of great use in finding new chemical compounds in the future. We assessed the validity of our method in the dataset of the binding of existing drugs and found that more than 80% accuracy could be obtained. Furthermore, we conducted comprehensive target protein predictions for MDMA, and validated the biological significance of our method by successfully finding proteins relevant to its known functions.\n\n\nAVAILABILITY\nAvailable on request from the authors." }, { "pmid": "19503826", "title": "Integrating statistical predictions and experimental verifications for enhancing protein-chemical interaction predictions in virtual screening.", "abstract": "Predictions of interactions between target proteins and potential leads are of great benefit in the drug discovery process. We present a comprehensively applicable statistical prediction method for interactions between any proteins and chemical compounds, which requires only protein sequence data and chemical structure data and utilizes the statistical learning method of support vector machines. In order to realize reasonable comprehensive predictions which can involve many false positives, we propose two approaches for reduction of false positives: (i) efficient use of multiple statistical prediction models in the framework of two-layer SVM and (ii) reasonable design of the negative data to construct statistical prediction models. In two-layer SVM, outputs produced by the first-layer SVM models, which are constructed with different negative samples and reflect different aspects of classifications, are utilized as inputs to the second-layer SVM. In order to design negative data which produce fewer false positive predictions, we iteratively construct SVM models or classification boundaries from positive and tentative negative samples and select additional negative sample candidates according to pre-determined rules. Moreover, in order to fully utilize the advantages of statistical learning methods, we propose a strategy to effectively feedback experimental results to computational predictions with consideration of biological effects of interest. We show the usefulness of our approach in predicting potential ligands binding to human androgen receptors from more than 19 million chemical compounds and verifying these predictions by in vitro binding. Moreover, we utilize this experimental validation as feedback to enhance subsequent computational predictions, and experimentally validate these predictions again. 
This efficient procedure of the iteration of the in silico prediction and in vitro or in vivo experimental verifications with the sufficient feedback enabled us to identify novel ligand candidates which were distant from known ligands in the chemical space." }, { "pmid": "23259810", "title": "Kinome-wide activity modeling from diverse public high-quality data sets.", "abstract": "Large corpora of kinase small molecule inhibitor data are accessible to public sector research from thousands of journal article and patent publications. These data have been generated employing a wide variety of assay methodologies and experimental procedures by numerous laboratories. Here we ask the question how applicable these heterogeneous data sets are to predict kinase activities and which characteristics of the data sets contribute to their utility. We accessed almost 500,000 molecules from the Kinase Knowledge Base (KKB) and after rigorous aggregation and standardization generated over 180 distinct data sets covering all major groups of the human kinome. To assess the value of the data sets, we generated hundreds of classification and regression models. Their rigorous cross-validation and characterization demonstrated highly predictive classification and quantitative models for the majority of kinase targets if a minimum required number of active compounds or structure-activity data points were available. We then applied the best classifiers to compounds most recently profiled in the NIH Library of Integrated Network-based Cellular Signatures (LINCS) program and found good agreement of profiling results with predicted activities. Our results indicate that, although heterogeneous in nature, the publically accessible data sets are exceedingly valuable and well suited to develop highly accurate predictors for practical Kinome-wide virtual screening applications and to complement experimental kinase profiling." }, { "pmid": "12377017", "title": "Selecting screening candidates for kinase and G protein-coupled receptor targets using neural networks.", "abstract": "A series of neural networks has been trained, using consensus methods, to recognize compounds that act at biological targets belonging to specific gene families. The MDDR database was used to provide compounds targeted against gene families and sets of randomly selected molecules. BCUT parameters were employed as input descriptors that encode structural properties and information relevant to ligand-receptor interactions. In each case, the networks identified over 80% of the compounds targeting a gene family. The technique was applied to purchasing compounds from external suppliers, and results from screening against one gene family demonstrated impressive abilities to predict the activity of the majority of known hit compounds." }, { "pmid": "19281186", "title": "Alignment-free prediction of a drug-target complex network based on parameters of drug connectivity and protein sequence of receptors.", "abstract": "There are many drugs described with very different affinity to a large number of receptors. In this work, we selected drug-receptor pairs (DRPs) of affinity/nonaffinity drugs to similar/dissimilar receptors and we represented them as a large network, which may be used to identify drugs that can act on a receptor. 
Computational chemistry prediction of the biological activity based on quantitative structure-activity relationships (QSAR) substantially increases the potentialities of this kind of networks avoiding time- and resource-consuming experiments. Unfortunately, most QSAR models are unspecific or predict activity against only one receptor. To solve this problem, we developed here a multitarget QSAR (mt-QSAR) classification model. Overall model classification accuracy was 72.25% (1390/1924 compounds) in training, 72.28% (459/635) in cross-validation. Outputs of this mt-QSAR model were used as inputs to construct a network. The observed network has 1735 nodes (DRPs), 1754 edges or pairs of DRPs with similar drug-target affinity (sPDRPs), and low coverage density d = 0.12%. The predicted network has 1735 DRPs, 1857 sPDRPs, and also low coverage density d = 0.12%. After an edge-to-edge comparison (chi-square = 9420.3; p < 0.005), we have demonstrated that the predicted network is significantly similar to the one observed and both have a distribution closer to exponential than to normal." }, { "pmid": "22751809", "title": "Prediction of chemical-protein interactions: multitarget-QSAR versus computational chemogenomic methods.", "abstract": "Elucidation of chemical-protein interactions (CPI) is the basis of target identification and drug discovery. It is time-consuming and costly to determine CPI experimentally, and computational methods will facilitate the determination of CPI. In this study, two methods, multitarget quantitative structure-activity relationship (mt-QSAR) and computational chemogenomics, were developed for CPI prediction. Two comprehensive data sets were collected from the ChEMBL database for method assessment. One data set consisted of 81 689 CPI pairs among 50 924 compounds and 136 G-protein coupled receptors (GPCRs), while the other one contained 43 965 CPI pairs among 23 376 compounds and 176 kinases. The range of the area under the receiver operating characteristic curve (AUC) for the test sets was 0.95 to 1.0 and 0.82 to 1.0 for 100 GPCR mt-QSAR models and 100 kinase mt-QSAR models, respectively. The AUC of 5-fold cross validation were about 0.92 for both 176 kinases and 136 GPCRs using the chemogenomic method. However, the performance of the chemogenomic method was worse than that of mt-QSAR for the external validation set. Further analysis revealed that there was a high false positive rate for the external validation set when using the chemogenomic method. In addition, we developed a web server named CPI-Predictor, , which is available for free. The methods and tool have potential applications in network pharmacology and drug repositioning." }, { "pmid": "18676415", "title": "Protein-ligand interaction prediction: an improved chemogenomics approach.", "abstract": "MOTIVATION\nPredicting interactions between small molecules and proteins is a crucial step to decipher many biological processes, and plays a critical role in drug discovery. When no detailed 3D structure of the protein target is available, ligand-based virtual screening allows the construction of predictive models by learning to discriminate known ligands from non-ligands. 
However, the accuracy of ligand-based models quickly degrades when the number of known ligands decreases, and in particular the approach is not applicable for orphan receptors with no known ligand.\n\n\nRESULTS\nWe propose a systematic method to predict ligand-protein interactions, even for targets with no known 3D structure and few or no known ligands. Following the recent chemogenomics trend, we adopt a cross-target view and attempt to screen the chemical space against whole families of proteins simultaneously. The lack of known ligand for a given target can then be compensated by the availability of known ligands for similar targets. We test this strategy on three important classes of drug targets, namely enzymes, G-protein-coupled receptors (GPCR) and ion channels, and report dramatic improvements in prediction accuracy over classical ligand-based virtual screening, in particular for targets with few or no known ligands.\n\n\nAVAILABILITY\nAll data and algorithms are available as Supplementary Material." }, { "pmid": "19605421", "title": "Supervised prediction of drug-target interactions using bipartite local models.", "abstract": "MOTIVATION\nIn silico prediction of drug-target interactions from heterogeneous biological data is critical in the search for drugs for known diseases. This problem is currently being attacked from many different points of view, a strong indication of its current importance. Precisely, being able to predict new drug-target interactions with both high precision and accuracy is the holy grail, a fundamental requirement for in silico methods to be useful in a biological setting. This, however, remains extremely challenging due to, amongst other things, the rarity of known drug-target interactions.\n\n\nRESULTS\nWe propose a novel supervised inference method to predict unknown drug-target interactions, represented as a bipartite graph. We use this method, known as bipartite local models to first predict target proteins of a given drug, then to predict drugs targeting a given protein. This gives two independent predictions for each putative drug-target interaction, which we show can be combined to give a definitive prediction for each interaction. We demonstrate the excellent performance of the proposed method in the prediction of four classes of drug-target interaction networks involving enzymes, ion channels, G protein-coupled receptors (GPCRs) and nuclear receptors in human. This enables us to suggest a number of new potential drug-target interactions.\n\n\nAVAILABILITY\nAn implementation of the proposed algorithm is available upon request from the authors. Datasets and all prediction results are available at http://cbio.ensmp.fr/~yyamanishi/bipartitelocal/." }, { "pmid": "17646345", "title": "Supervised reconstruction of biological networks with local models.", "abstract": "MOTIVATION\nInference and reconstruction of biological networks from heterogeneous data is currently an active research subject with several important applications in systems biology. The problem has been attacked from many different points of view with varying degrees of success. 
In particular, predicting new edges with a reasonable false discovery rate is highly demanded for practical applications, but remains extremely challenging due to the sparsity of the networks of interest.\n\n\nRESULTS\nWhile most previous approaches based on the partial knowledge of the network to be inferred build global models to predict new edges over the network, we introduce here a novel method which predicts whether there is an edge from a newly added vertex to each of the vertices of a known network using local models. This involves learning individually a certain subnetwork associated with each vertex of the known network, then using the discovered classification rule associated with only that vertex to predict the edge to the new vertex. Excellent experimental results are shown in the case of metabolic and protein-protein interaction network reconstruction from a variety of genomic data.\n\n\nAVAILABILITY\nAn implementation of the proposed algorithm is available upon request from the authors." }, { "pmid": "18689844", "title": "SIRENE: supervised inference of regulatory networks.", "abstract": "MOTIVATION\nLiving cells are the product of gene expression programs that involve the regulated transcription of thousands of genes. The elucidation of transcriptional regulatory networks is thus needed to understand the cell's working mechanism, and can for example, be useful for the discovery of novel therapeutic targets. Although several methods have been proposed to infer gene regulatory networks from gene expression data, a recent comparison on a large-scale benchmark experiment revealed that most current methods only predict a limited number of known regulations at a reasonable precision level.\n\n\nRESULTS\nWe propose SIRENE (Supervised Inference of Regulatory Networks), a new method for the inference of gene regulatory networks from a compendium of expression data. The method decomposes the problem of gene regulatory network inference into a large number of local binary classification problems, that focus on separating target genes from non-targets for each transcription factor. SIRENE is thus conceptually simple and computationally efficient. We test it on a benchmark experiment aimed at predicting regulations in Escherichia coli, and show that it retrieves of the order of 6 times more known regulations than other state-of-the-art inference methods.\n\n\nAVAILABILITY\nAll data and programs are freely available at http://cbio. ensmp.fr/sirene." }, { "pmid": "21893517", "title": "Gaussian interaction profile kernels for predicting drug-target interaction.", "abstract": "MOTIVATION\nThe in silico prediction of potential interactions between drugs and target proteins is of core importance for the identification of new drugs or novel targets for existing drugs. However, only a tiny portion of all drug-target pairs in current datasets are experimentally validated interactions. This motivates the need for developing computational methods that predict true interaction pairs with high accuracy.\n\n\nRESULTS\nWe show that a simple machine learning method that uses the drug-target network as the only source of information is capable of predicting true interaction pairs with high accuracy. Specifically, we introduce interaction profiles of drugs (and of targets) in a network, which are binary vectors specifying the presence or absence of interaction with every target (drug) in that network. 
We define a kernel on these profiles, called the Gaussian Interaction Profile (GIP) kernel, and use a simple classifier, (kernel) Regularized Least Squares (RLS), for prediction drug-target interactions. We test comparatively the effectiveness of RLS with the GIP kernel on four drug-target interaction networks used in previous studies. The proposed algorithm achieves area under the precision-recall curve (AUPR) up to 92.7, significantly improving over results of state-of-the-art methods. Moreover, we show that using also kernels based on chemical and genomic information further increases accuracy, with a neat improvement on small datasets. These results substantiate the relevance of the network topology (in the form of interaction profiles) as source of information for predicting drug-target interactions.\n\n\nAVAILABILITY\nSoftware and Supplementary Material are available at http://cs.ru.nl/~tvanlaarhoven/drugtarget2011/.\n\n\nCONTACT\[email protected]; [email protected].\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "26987649", "title": "A comparative study of SMILES-based compound similarity functions for drug-target interaction prediction.", "abstract": "BACKGROUND\nMolecular structures can be represented as strings of special characters using SMILES. Since each molecule is represented as a string, the similarity between compounds can be computed using SMILES-based string similarity functions. Most previous studies on drug-target interaction prediction use 2D-based compound similarity kernels such as SIMCOMP. To the best of our knowledge, using SMILES-based similarity functions, which are computationally more efficient than the 2D-based kernels, has not been investigated for this task before.\n\n\nRESULTS\nIn this study, we adapt and evaluate various SMILES-based similarity methods for drug-target interaction prediction. In addition, inspired by the vector space model of Information Retrieval we propose cosine similarity based SMILES kernels that make use of the Term Frequency (TF) and Term Frequency-Inverse Document Frequency (TF-IDF) weighting approaches. We also investigate generating composite kernels by combining our best SMILES-based similarity functions with the SIMCOMP kernel. With this study, we provided a comparison of 13 different ligand similarity functions, each of which utilizes the SMILES string of molecule representation. Additionally, TF and TF-IDF based cosine similarity kernels are proposed.\n\n\nCONCLUSION\nThe more efficient SMILES-based similarity functions performed similarly to the more complex 2D-based SIMCOMP kernel in terms of AUC-ROC scores. The TF-IDF based cosine similarity obtained a better AUC-PR score than the SIMCOMP kernel on the GPCR benchmark data set. The composite kernel of TF-IDF based cosine similarity and SIMCOMP achieved the best AUC-PR scores for all data sets." }, { "pmid": "21336281", "title": "Navigating the kinome.", "abstract": "Although it is increasingly being recognized that drug-target interaction networks can be powerful tools for the interrogation of systems biology and the rational design of multitargeted drugs, there is no generalized, statistically validated approach to harmonizing sequence-dependent and pharmacology-dependent networks. Here we demonstrate the creation of a comprehensive kinome interaction network based not only on sequence comparisons but also on multiple pharmacology parameters derived from activity profiling data. 
The framework described for statistical interpretation of these network connections also enables rigorous investigation of chemotype-specific interaction networks, which is critical for multitargeted drug design." }, { "pmid": "22037378", "title": "Comprehensive analysis of kinase inhibitor selectivity.", "abstract": "We tested the interaction of 72 kinase inhibitors with 442 kinases covering >80% of the human catalytic protein kinome. Our data show that, as a class, type II inhibitors are more selective than type I inhibitors, but that there are important exceptions to this trend. The data further illustrate that selective inhibitors have been developed against the majority of kinases targeted by the compounds tested. Analysis of the interaction patterns reveals a class of 'group-selective' inhibitors broadly active against a single subfamily of kinases, but selective outside that subfamily. The data set suggests compounds to use as tools to study kinases for which no dedicated inhibitors exist. It also provides a foundation for further exploring kinase inhibitor biology and toxicity, as well as for studying the structural basis of the observed interaction patterns. Our findings will help to realize the direct enabling potential of genomics for drug development and basic research about cellular signaling." }, { "pmid": "24521231", "title": "Making sense of large-scale kinase inhibitor bioactivity data sets: a comparative and integrative analysis.", "abstract": "We carried out a systematic evaluation of target selectivity profiles across three recent large-scale biochemical assays of kinase inhibitors and further compared these standardized bioactivity assays with data reported in the widely used databases ChEMBL and STITCH. Our comparative evaluation revealed relative benefits and potential limitations among the bioactivity types, as well as pinpointed biases in the database curation processes. Ignoring such issues in data heterogeneity and representation may lead to biased modeling of drugs' polypharmacological effects as well as to unrealistic evaluation of computational strategies for the prediction of drug-target interaction networks. Toward making use of the complementary information captured by the various bioactivity types, including IC50, K(i), and K(d), we also introduce a model-based integration approach, termed KIBA, and demonstrate here how it can be used to classify kinase inhibitor targets and to pinpoint potential errors in database-reported drug-target interactions. An integrated drug-target bioactivity matrix across 52,498 chemical compounds and 467 kinase targets, including a total of 246,088 KIBA scores, has been made freely available." } ]
BMC Medical Informatics and Decision Making
28427384
PMC5399417
10.1186/s12911-017-0443-3
Imbalanced target prediction with pattern discovery on clinical data repositories
Background: Clinical data repositories (CDR) have great potential to improve outcome prediction and risk modeling. However, most clinical studies require careful study design, dedicated data collection efforts, and sophisticated modeling techniques before a hypothesis can be tested. We aim to bridge this gap, so that clinical domain users can perform first-hand prediction on existing repository data without complicated handling, and obtain insightful patterns of imbalanced targets before a formal study is conducted. We specifically target interpretability for domain users, so that the model can be conveniently explained and applied in clinical practice. Methods: We propose an interpretable pattern model that is tolerant of noise and missing values in practice data. To address the challenge of imbalanced targets of interest in clinical research, e.g., deaths occurring in less than a few percent of cases, the geometric mean of sensitivity and specificity (G-mean) is employed as the optimization criterion, for which a simple but effective heuristic algorithm is developed. Results: We compared pattern discovery to clinically interpretable methods on two retrospective clinical datasets, containing 14.9% deaths within 1 year in the thoracic dataset and 9.1% deaths in the cardiac dataset, respectively. Despite the imbalance challenge observed with other methods, pattern discovery consistently shows competitive cross-validated prediction performance. Compared to logistic regression, Naïve Bayes, and decision tree, pattern discovery achieves statistically significantly (p-values < 0.01, Wilcoxon signed rank test) better averaged testing G-means and F1-scores (harmonic mean of precision and sensitivity). Without requiring sophisticated technical processing of the data or tweaking, the prediction performance of pattern discovery is consistently comparable to the best achievable performance. Conclusions: Pattern discovery has been demonstrated to be robust and valuable for target prediction on existing clinical data repositories with imbalance and noise. The prediction results and interpretable patterns can provide insights in an agile and inexpensive way for potential formal studies. Electronic supplementary material: The online version of this article (doi:10.1186/s12911-017-0443-3) contains supplementary material, which is available to authorized users.
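The abstract's evaluation relies on the G-mean (geometric mean of sensitivity and specificity) and the F1-score. Below is a minimal sketch, assuming NumPy and scikit-learn are available, of how these two criteria can be computed from predictions; the synthetic labels and the roughly 10% positive rate are illustrative assumptions, not data from the study.

```python
# A minimal sketch (not the authors' implementation) of the two evaluation
# criteria named above: G-mean and F1-score. Requires NumPy and scikit-learn.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

def g_mean(y_true, y_pred):
    # Label 1 denotes the (minority) positive target T = t.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # recall on positives
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # recall on negatives
    return np.sqrt(sensitivity * specificity)

# Illustrative imbalanced labels (~10% positives, loosely comparable to the
# 9.1% cardiac-death prevalence); these are synthetic, not study data.
rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.10).astype(int)
y_pred = y_true.copy()
y_pred[rng.random(1000) < 0.15] ^= 1  # flip ~15% of predictions as "errors"

print("G-mean:", round(g_mean(y_true, y_pred), 3))
print("F1    :", round(f1_score(y_true, y_pred), 3))
```

A high G-mean requires good recall on both the minority and the majority class, which is why it is less easily inflated by the majority class than plain accuracy.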
Problem definition and related works
In this section, we define the problem we address and review the key related works. Data mining has been extensively applied in the healthcare domain, where it is believed to uncover new biomedical and healthcare knowledge for clinical and administrative decision making, as well as to generate scientific hypotheses [3]. We focus on the prediction problem of classification: for a given (training) dataset D, we utilize the known (labelled) values of a target T to establish (train) a model and method (a classifier) that predicts a target of interest (T = t), i.e. positive cases, for future (testing) data where T is not known. Specifically, the dataset
$$ \mathrm{D}=\begin{bmatrix} D_1 \\ \vdots \\ D_n \end{bmatrix}=\begin{bmatrix} d_{11} & \cdots & d_{1m} & t_1 \\ \vdots & \ddots & \vdots & \vdots \\ d_{n1} & \cdots & d_{nm} & t_n \end{bmatrix}=\left[R_1, R_2, \dots, R_m, \mathrm{T}\right] $$
has n samples and m + 1 attributes (columns), where for simplicity the first m attributes R = [R1, R2, …, Rm] represent the predictor variables and the last attribute T represents the target to predict (the response). dij is the value in D of attribute Rj for sample Di, for i = 1, 2, …, n and j = 1, 2, …, m. T is a nominal attribute, and we are specifically interested in cases with T = t as opposed to cases with any other value. We therefore model the problem as binary classification, distinguishing T = t (positive) from T ≠ t (negative, which may comprise multiple values in the data). We assume there are no missing values of T in training, but R may have missing values, reflecting the reality of healthcare data in practice. Furthermore, most targets of clinical interest (T = t) are minorities in real data, e.g. Cardiac death = Yes and Death in 1 year = Yes. In such cases the prevalence, defined as #(T = t)/n, is considerably smaller than 1/2 (50%), and we interchangeably describe the dataset and the prediction problem as imbalanced. The existing interpretable classifiers included for comparison are logistic regression, Naïve Bayes, and decision tree (C4.5); none of these was designed for imbalanced datasets. Naïve Bayes is less affected, since the target proportion can be used as the prior in training, but a moderately high imbalance ratio overwhelms the prior and degrades prediction performance, as will be shown in the experimental results and in recent work [13].
Both logistic regression and decision tree optimize for overall accuracy, which can significantly compromise the prediction performance on a minority target. Other, non-interpretable methods, such as k-nearest-neighbor [19], support vector machines [20] and artificial neural nets [3], are beyond our scope of comparison, as they do not directly provide explicit, human-readable “patterns” for domain users to follow up on. The pattern discovery proposed in this work bears some resemblance to association rule mining [21], associated motif discovery from biological sequences [22], and feature selection for data mining [23]. Association rule mining finds only frequent item sets and does not model prediction (classification). One critical limitation of association-rule-based methods is that the target has to be frequent, which is not the case for the clinical outcomes of interest [6]. Further extensions that perform classification after association rule mining suffer from poor scalability, because non-trivial rules (over 3 attributes) can take intractable time to compute [24]. Furthermore, association rule mining works only with exact occurrences and therefore cannot tolerate the noise present in healthcare data. These two limitations also apply to rule-extraction-based prediction methods [25]. Motif discovery works on sequential, contiguous patterns, which is not the case when mining healthcare data (attributes are disjoint, unordered, and non-contiguous) [22, 26]. Nonetheless, the approximate matching used to model biological motifs [27] inspires us to introduce a control that tolerates noise and increases the flexibility of the pattern model. Feature selection usually serves as an auxiliary method combined with formal data mining methods for target prediction [23], but it works only at the attribute level (not the attribute-value level) and does not explicitly generate a prediction model for direct interpretation. On the other hand, the wide spectrum of feature selection methods provides many choices for selecting attributes for pattern discovery, such as Chi-squared-test-based feature selection [28]. Motivated by these observations, this work presents a pattern discovery classifier featuring a highly interpretable predictive pattern model for noisy, imbalanced healthcare data in practice, aimed at domain users.
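To make the baseline comparison above concrete, here is a small sketch, assuming scikit-learn, of Chi-squared feature selection followed by the three interpretable classifiers, scored with the G-mean under class imbalance. It does not implement the paper's pattern discovery algorithm; the synthetic data, the binarization step, and all parameter choices (k = 15 selected attributes, tree depth 5) are assumptions for illustration only.

```python
# Sketch of the baseline setup discussed above: Chi-squared feature selection
# feeding logistic regression, Naive Bayes, and a decision tree, evaluated by
# G-mean on an imbalanced target. This is NOT the paper's pattern discovery
# method; the data and parameters below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

def g_mean(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return np.sqrt((tp / (tp + fn)) * (tn / (tn + fp)))

# Synthetic stand-in for a clinical data repository: binary attributes and a
# ~9% positive target, roughly mimicking the cardiac dataset's prevalence.
X, y = make_classification(n_samples=3000, n_features=40, n_informative=8,
                           weights=[0.91], random_state=0)
X = (X > 0).astype(int)  # binarize so chi2 sees non-negative features

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes (Bernoulli)": BernoulliNB(),
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(SelectKBest(chi2, k=15), clf)  # Chi-squared selection
    pred = cross_val_predict(pipe, X, y, cv=5)
    print(f"{name:24s} G-mean = {g_mean(y, pred):.3f}")
```

Chi-squared selection appears here only as one of the many attribute-selection choices mentioned above; the pattern discovery classifier itself is described in the paper.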
[ "9812073", "11923031", "21537851", "24050858", "24971157", "26286871", "16792098", "21801360", "28065840", "15637633", "20529874", "21193520" ]
[ { "pmid": "9812073", "title": "Implementation of a computerized cardiovascular information system in a private hospital setting.", "abstract": "BACKGROUND\nThe use of clinical databases improves quality of care, reduces operating costs, helps secure managed care contracts, and assists in clinical research. Because of the large physician input required to maintain these systems, private institutions have often found them difficult to implement. At LDS Hospital in Salt Lake City, Utah, we developed a cardiovascular information system (LDS-CIS) patterned after the Duke University Cardiovascular Database and designed for ease of use in a private hospital setting.\n\n\nMETHODS\nFeatures of the LDS-CIS include concise single-page report forms, a relational database engine that is easily queried, automatic generation of final procedure reports, and merging of all data with the hospital's existing information system. So far, data from more than 14,000 patients have been entered.\n\n\nRESULTS\nLDS-CIS provides access to data for research to improve patient care. For example, by using data generated by LDS-CIS, the policy requiring surgical backup during percutaneous transluminal coronary angioplasty was eliminated, resulting in no increased patient risk while saving nearly $1 million in 1 year. LDS-CIS generates physician feedback reports documenting performance compared with peers. This physician self-evaluation has standardized and improved care. Information from LDS-CIS has been instrumental in securing and maintaining managed care contracts. LDS-CIS risk analysis provides physicians with outcomes data specific to their current patient's demographics and level of disease to assist in point of care decisions.\n\n\nCONCLUSION\nThe use of LDS-CIS in the routine operations of LDS Hospital heart services has been found to be feasible, beneficial, and cost-effective." }, { "pmid": "11923031", "title": "A contemporary overview of percutaneous coronary interventions. The American College of Cardiology-National Cardiovascular Data Registry (ACC-NCDR).", "abstract": "The American College of Cardiology (ACC) established the National Cardiovascular Data Registry (ACC-NCDR) to provide a uniform and comprehensive database for analysis of cardiovascular procedures across the country. The initial focus has been the high-volume, high-profile procedures of diagnostic cardiac catheterization and percutaneous coronary intervention (PCI). Several large-scale multicenter efforts have evaluated diagnostic catheterization and PCI, but these have been limited by lack of standard definitions and relatively nonuniform data collection and reporting methods. Both clinical and procedural data, and adverse events occurring up to hospital discharge, were collected and reported according to uniform guidelines using a standard set of 143 data elements. Datasets were transmitted quarterly to a central facility for quality-control screening, storage and analysis. This report is based on PCI data collected from January 1, 1998, through September 30, 2000.A total of 139 hospitals submitted data on 146,907 PCI procedures. Of these, 32% (46,615 procedures) were excluded because data did not pass quality-control screening. The remaining 100,292 procedures (68%) were included in the analysis set. Average age was 64 +/- 12 years; 34% were women, 26% had diabetes mellitus, 29% had histories of prior myocardial infarction (MI), 32% had prior PCI and 19% had prior coronary bypass surgery. 
In 10% the indication for PCI was acute MI < or =6 h from onset, while in 52% it was class II to IV or unstable angina. Only 5% of procedures did not have a class I indication by ACC criteria, but this varied by hospital from a low of 0 to a high of 38%. A coronary stent was placed in 77% of procedures, but this varied by hospital from a low of 0 to a high of 97%. The frequencies of in-hospital Q-wave MI, coronary artery bypass graft surgery and death were 0.4%, 1.9% and 1.4%, respectively. Mortality varied by hospital from a low of 0 to a high of 4.2%. This report presents the first data collected and analyzed by the ACC-NCDR. It portrays a contemporary overview of coronary interventional practices and outcomes, using uniform data collection and reporting standards. These data reconfirm overall acceptable results that are consistent with other reported data, but also confirm large variations between individual institutions." }, { "pmid": "21537851", "title": "Data mining in healthcare and biomedicine: a survey of the literature.", "abstract": "As a new concept that emerged in the middle of 1990's, data mining can help researchers gain both novel and deep insights and can facilitate unprecedented understanding of large biomedical datasets. Data mining can uncover new biomedical and healthcare knowledge for clinical and administrative decision making as well as generate scientific hypotheses from large experimental data, clinical databases, and/or biomedical literature. This review first introduces data mining in general (e.g., the background, definition, and process of data mining), discusses the major differences between statistics and data mining and then speaks to the uniqueness of data mining in the biomedical and healthcare fields. A brief summarization of various data mining algorithms used for classification, clustering, and association as well as their respective advantages and drawbacks is also presented. Suggested guidelines on how to use data mining algorithms in each area of classification, clustering, and association are offered along with three examples of how data mining has been used in the healthcare industry. Given the successful application of data mining by health related organizations that has helped to predict health insurance fraud and under-diagnosed patients, and identify and classify at-risk people in terms of health with the goal of reducing healthcare cost, we introduce how data mining technologies (in each area of classification, clustering, and association) have been used for a multitude of purposes, including research in the biomedical and healthcare fields. A discussion of the technologies available to enable the prediction of healthcare costs (including length of hospital stay), disease diagnosis and prognosis, and the discovery of hidden biomedical and healthcare patterns from related databases is offered along with a discussion of the use of data mining to discover such relationships as those between health conditions and a disease, relationships among diseases, and relationships among drugs. The article concludes with a discussion of the problems that hamper the clinical use of data mining by health professionals." 
}, { "pmid": "24050858", "title": "An updated bleeding model to predict the risk of post-procedure bleeding among patients undergoing percutaneous coronary intervention: a report using an expanded bleeding definition from the National Cardiovascular Data Registry CathPCI Registry.", "abstract": "OBJECTIVES\nThis study sought to develop a model that predicts bleeding complications using an expanded bleeding definition among patients undergoing percutaneous coronary intervention (PCI) in contemporary clinical practice.\n\n\nBACKGROUND\nNew knowledge about the importance of periprocedural bleeding combined with techniques to mitigate its occurrence and the inclusion of new data in the updated CathPCI Registry data collection forms encouraged us to develop a new bleeding definition and risk model to improve the monitoring and safety of PCI.\n\n\nMETHODS\nDetailed clinical data from 1,043,759 PCI procedures at 1,142 centers from February 2008 through April 2011 participating in the CathPCI Registry were used to identify factors associated with major bleeding complications occurring within 72 h post-PCI. Risk models (full and simplified risk scores) were developed in 80% of the cohort and validated in the remaining 20%. Model discrimination and calibration were assessed in the overall population and among the following pre-specified patient subgroups: females, those older than 70 years of age, those with diabetes mellitus, those with ST-segment elevation myocardial infarction, and those who did not undergo in-hospital coronary artery bypass grafting.\n\n\nRESULTS\nUsing the updated definition, the rate of bleeding was 5.8%. The full model included 31 variables, and the risk score had 10. The full model had similar discriminatory value across pre-specified subgroups and was well calibrated across the PCI risk spectrum.\n\n\nCONCLUSIONS\nThe updated bleeding definition identifies important post-PCI bleeding events. Risk models that use this expanded definition provide accurate estimates of post-PCI bleeding risk, thereby better informing clinical decision making and facilitating risk-adjusted provider feedback to support quality improvement." }, { "pmid": "24971157", "title": "Gene expression profiles associated with acute myocardial infarction and risk of cardiovascular death.", "abstract": "BACKGROUND\nGenetic risk scores have been developed for coronary artery disease and atherosclerosis, but are not predictive of adverse cardiovascular events. We asked whether peripheral blood expression profiles may be predictive of acute myocardial infarction (AMI) and/or cardiovascular death.\n\n\nMETHODS\nPeripheral blood samples from 338 subjects aged 62 ± 11 years with coronary artery disease (CAD) were analyzed in two phases (discovery N = 175, and replication N = 163), and followed for a mean 2.4 years for cardiovascular death. Gene expression was measured on Illumina HT-12 microarrays with two different normalization procedures to control technical and biological covariates. Whole genome genotyping was used to support comparative genome-wide association studies of gene expression. Analysis of variance was combined with receiver operating curve and survival analysis to define a transcriptional signature of cardiovascular death.\n\n\nRESULTS\nIn both phases, there was significant differential expression between healthy and AMI groups with overall down-regulation of genes involved in T-lymphocyte signaling and up-regulation of inflammatory genes. 
Expression quantitative trait loci analysis provided evidence for altered local genetic regulation of transcript abundance in AMI samples. On follow-up there were 31 cardiovascular deaths. A principal component (PC1) score capturing covariance of 238 genes that were differentially expressed between deceased and survivors in the discovery phase significantly predicted risk of cardiovascular death in the replication and combined samples (hazard ratio = 8.5, P < 0.0001) and improved the C-statistic (area under the curve 0.82 to 0.91, P = 0.03) after adjustment for traditional covariates.\n\n\nCONCLUSIONS\nA specific blood gene expression profile is associated with a significant risk of death in Caucasian subjects with CAD. This comprises a subset of transcripts that are also altered in expression during acute myocardial infarction." }, { "pmid": "26286871", "title": "Enhancing the Prediction of 30-Day Readmission After Percutaneous Coronary Intervention Using Data Extracted by Querying of the Electronic Health Record.", "abstract": "BACKGROUND\nEarly readmission after percutaneous coronary intervention is an important quality metric, but prediction models from registry data have only moderate discrimination. We aimed to improve ability to predict 30-day readmission after percutaneous coronary intervention from a previously validated registry-based model.\n\n\nMETHODS AND RESULTS\nWe matched readmitted to non-readmitted patients in a 1:2 ratio by risk of readmission, and extracted unstructured and unconventional structured data from the electronic medical record, including need for medical interpretation, albumin level, medical nonadherence, previous number of emergency department visits, atrial fibrillation/flutter, syncope/presyncope, end-stage liver disease, malignancy, and anxiety. We assessed differences in rates of these conditions between cases/controls, and estimated their independent association with 30-day readmission using logistic regression conditional on matched groups. Among 9288 percutaneous coronary interventions, we matched 888 readmitted with 1776 non-readmitted patients. In univariate analysis, cases and controls were significantly different with respect to interpreter (7.9% for cases and 5.3% for controls; P=0.009), emergency department visits (1.12 for cases and 0.77 for controls; P<0.001), homelessness (3.2% for cases and 1.6% for controls; P=0.007), anticoagulation (33.9% for cases and 22.1% for controls; P<0.001), atrial fibrillation/flutter (32.7% for cases and 28.9% for controls; P=0.045), presyncope/syncope (27.8% for cases and 21.3% for controls; P<0.001), and anxiety (69.4% for cases and 62.4% for controls; P<0.001). Anticoagulation, emergency department visits, and anxiety were independently associated with readmission.\n\n\nCONCLUSIONS\nPatient characteristics derived from review of the electronic health record can be used to refine risk prediction for hospital readmission after percutaneous coronary intervention." }, { "pmid": "16792098", "title": "Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval.", "abstract": "Relevance feedback schemes based on support vector machines (SVM) have been widely used in content-based image retrieval (CBIR). However, the performance of SVM-based relevance feedback is often poor when the number of labeled positive feedback samples is small. 
This is mainly due to three reasons: 1) an SVM classifier is unstable on a small-sized training set, 2) SVM's optimal hyperplane may be biased when the positive feedback samples are much less than the negative feedback samples, and 3) overfitting happens because the number of feature dimensions is much higher than the size of the training set. In this paper, we develop a mechanism to overcome these problems. To address the first two problems, we propose an asymmetric bagging-based SVM (AB-SVM). For the third problem, we combine the random subspace method and SVM for relevance feedback, which is named random subspace SVM (RS-SVM). Finally, by integrating AB-SVM and RS-SVM, an asymmetric bagging and random subspace SVM (ABRS-SVM) is built to solve these three problems and further improve the relevance feedback performance." }, { "pmid": "21801360", "title": "Predicting disease risks from highly imbalanced data using random forest.", "abstract": "BACKGROUND\nWe present a method utilizing Healthcare Cost and Utilization Project (HCUP) dataset for predicting disease risk of individuals based on their medical diagnosis history. The presented methodology may be incorporated in a variety of applications such as risk management, tailored health communication and decision support systems in healthcare.\n\n\nMETHODS\nWe employed the National Inpatient Sample (NIS) data, which is publicly available through Healthcare Cost and Utilization Project (HCUP), to train random forest classifiers for disease prediction. Since the HCUP data is highly imbalanced, we employed an ensemble learning approach based on repeated random sub-sampling. This technique divides the training data into multiple sub-samples, while ensuring that each sub-sample is fully balanced. We compared the performance of support vector machine (SVM), bagging, boosting and RF to predict the risk of eight chronic diseases.\n\n\nRESULTS\nWe predicted eight disease categories. Overall, the RF ensemble learning method outperformed SVM, bagging and boosting in terms of the area under the receiver operating characteristic (ROC) curve (AUC). In addition, RF has the advantage of computing the importance of each variable in the classification process.\n\n\nCONCLUSIONS\nIn combining repeated random sub-sampling with RF, we were able to overcome the class imbalance problem and achieve promising results. Using the national HCUP data set, we predicted eight disease categories with an average AUC of 88.79%." }, { "pmid": "28065840", "title": "MACE prediction of acute coronary syndrome via boosted resampling classification using electronic medical records.", "abstract": "OBJECTIVES\nMajor adverse cardiac events (MACE) of acute coronary syndrome (ACS) often occur suddenly resulting in high mortality and morbidity. Recently, the rapid development of electronic medical records (EMR) provides the opportunity to utilize the potential of EMR to improve the performance of MACE prediction. In this study, we present a novel data-mining based approach specialized for MACE prediction from a large volume of EMR data.\n\n\nMETHODS\nThe proposed approach presents a new classification algorithm by applying both over-sampling and under-sampling on minority-class and majority-class samples, respectively, and integrating the resampling strategy into a boosting framework so that it can effectively handle imbalance of MACE of ACS patients analogous to domain practice. 
The method learns a new and stronger MACE prediction model each iteration from a more difficult subset of EMR data with wrongly predicted MACEs of ACS patients by a previous weak model.\n\n\nRESULTS\nWe verify the effectiveness of the proposed approach on a clinical dataset containing 2930 ACS patient samples with 268 feature types. While the imbalanced ratio does not seem extreme (25.7%), MACE prediction targets pose great challenge to traditional methods. As these methods degenerate dramatically with increasing imbalanced ratios, the performance of our approach for predicting MACE remains robust and reaches 0.672 in terms of AUC. On average, the proposed approach improves the performance of MACE prediction by 4.8%, 4.5%, 8.6% and 4.8% over the standard SVM, Adaboost, SMOTE, and the conventional GRACE risk scoring system for MACE prediction, respectively.\n\n\nCONCLUSIONS\nWe consider that the proposed iterative boosting approach has demonstrated great potential to meet the challenge of MACE prediction for ACS patients using a large volume of EMR." }, { "pmid": "15637633", "title": "Assessing computational tools for the discovery of transcription factor binding sites.", "abstract": "The prediction of regulatory elements is a problem where computational methods offer great hope. Over the past few years, numerous tools have become available for this task. The purpose of the current assessment is twofold: to provide some guidance to users regarding the accuracy of currently available tools in various settings, and to provide a benchmark of data sets for assessing future tools." }, { "pmid": "20529874", "title": "Discovering protein-DNA binding sequence patterns using association rule mining.", "abstract": "Protein-DNA bindings between transcription factors (TFs) and transcription factor binding sites (TFBSs) play an essential role in transcriptional regulation. Over the past decades, significant efforts have been made to study the principles for protein-DNA bindings. However, it is considered that there are no simple one-to-one rules between amino acids and nucleotides. Many methods impose complicated features beyond sequence patterns. Protein-DNA bindings are formed from associated amino acid and nucleotide sequence pairs, which determine many functional characteristics. Therefore, it is desirable to investigate associated sequence patterns between TFs and TFBSs. With increasing computational power, availability of massive experimental databases on DNA and proteins, and mature data mining techniques, we propose a framework to discover associated TF-TFBS binding sequence patterns in the most explicit and interpretable form from TRANSFAC. The framework is based on association rule mining with Apriori algorithm. The patterns found are evaluated by quantitative measurements at several levels on TRANSFAC. With further independent verifications from literatures, Protein Data Bank and homology modeling, there are strong evidences that the patterns discovered reveal real TF-TFBS bindings across different TFs and TFBSs, which can drive for further knowledge to better understand TF-TFBS bindings." }, { "pmid": "21193520", "title": "Discovering approximate-associated sequence patterns for protein-DNA interactions.", "abstract": "MOTIVATION\nThe bindings between transcription factors (TFs) and transcription factor binding sites (TFBSs) are fundamental protein-DNA interactions in transcriptional regulation. Extensive efforts have been made to better understand the protein-DNA interactions. 
Recent mining on exact TF-TFBS-associated sequence patterns (rules) has shown great potentials and achieved very promising results. However, exact rules cannot handle variations in real data, resulting in limited informative rules. In this article, we generalize the exact rules to approximate ones for both TFs and TFBSs, which are essential for biological variations.\n\n\nRESULTS\nA progressive approach is proposed to address the approximation to alleviate the computational requirements. Firstly, similar TFBSs are grouped from the available TF-TFBS data (TRANSFAC database). Secondly, approximate and highly conserved binding cores are discovered from TF sequences corresponding to each TFBS group. A customized algorithm is developed for the specific objective. We discover the approximate TF-TFBS rules by associating the grouped TFBS consensuses and TF cores. The rules discovered are evaluated by matching (verifying with) the actual protein-DNA binding pairs from Protein Data Bank (PDB) 3D structures. The approximate results exhibit many more verified rules and up to 300% better verification ratios than the exact ones. The customized algorithm achieves over 73% better verification ratios than traditional methods. Approximate rules (64-79%) are shown statistically significant. Detailed variation analysis and conservation verification on NCBI records demonstrate that the approximate rules reveal both the flexible and specific protein-DNA interactions accurately. The approximate TF-TFBS rules discovered show great generalized capability of exploring more informative binding rules." } ]
BioData Mining
28465724
PMC5408444
10.1186/s13040-017-0133-9
Feature analysis for classification of trace fluorescent labeled protein crystallization images
Background: A large number of features are extracted from protein crystallization trial images to improve the accuracy of classifiers for predicting the presence of crystals or the phases of the crystallization process. The excessive number of features, and the computationally intensive image processing methods needed to extract them, make automated classification tools inconvenient to use on stand-alone computing systems because of the time required to complete the classification tasks. Combinations of image feature sets, feature reduction, and classification techniques for crystallization images benefiting from trace fluorescence labeling are investigated.

Results: Features are categorized into intensity, graph, histogram, texture, shape-adaptive, and region features (using binarized images generated by Otsu's, green percentile, and morphological thresholding). The effects of normalization, feature reduction with principal components analysis (PCA), and feature selection using a random forest classifier are also analyzed. The time required to extract each feature category is computed, and an estimated extraction time is provided for feature category combinations. We have conducted around 8624 experiments (different combinations of feature categories, binarization methods, feature reduction/selection, normalization, and crystal categories). The best experimental results are obtained using combinations of intensity features, region features using Otsu's thresholding, region features using green percentile G90 thresholding, region features using green percentile G99 thresholding, graph features, and histogram features. Using this feature set combination, 96% accuracy (without misclassifying crystals as non-crystals) was achieved for the first level of classification, which determines the presence of crystals. Since missing a crystal is not desired, our algorithm is adjusted to achieve a high sensitivity rate. In the second-level classification, 74.2% accuracy was achieved for the (5-class) crystal sub-category classification. The best classification rates were achieved using the random forest classifier.

Contributions: The feature extraction and classification could be completed in about 2 s per image on a stand-alone computing system, which is suitable for real-time analysis. These results enable research groups to select features according to their hardware setups for real-time analysis.
Related work

In general, protein crystallization trial image analysis work is compared with respect to the accuracy of classification. The accuracy depends on the number of categories, the features, and the ability of classifiers to model the data. Moreover, the hardware resources, training time, and real-time analysis of new images are important factors that affect the usability of these methods. Table 1 provides a summary of related work with respect to these factors.

Table 1 Summary of related work

Research paper | Image categories | Feature extraction | Classification method | Classification accuracy
Zuk and Ward (1991) [7] | NA | Edge features | Detection of lines using Hough transform and line tracking | Not provided
Walker et al. (2007) [22] | 7 | Radial and angular descriptors from Fourier transform | Learning vector quantization | 14-97% for different categories
Xu et al. (2006) [23] | 2 | Features from multiscale Laplacian pyramid filters | Neural network | 95% accuracy
Wilson (2002) [24] | 3 | Intensity and geometric features | Naive Bayes | Recall 86% for crystals, 77% for unfavourable objects
Hung et al. (2014) [26] | 3 | Shape context, Gabor filters and Fourier transforms | Cascade classifier on naive Bayes and random forest | 74% accuracy
Spraggon et al. (2002) [17] | 6 | Geometric and texture features | Self-organizing neural networks | 47-82% for different categories
Cumba et al. (2003) [8] | 2 | Radon transform line features and texture features | Linear discriminant analysis | 85% accuracy with ROC 0.84
Saitoh et al. (2004) [20] | 5 | Geometric and texture features | Linear discriminant analysis | 80-98% for different categories
Bern et al. (2004) [15] | 5 | Gradient and geometric features | Decision tree with hand-crafted thresholds | 12% FN and 14% FP
Cumba et al. (2005) [9] | 2 | Texture features, line measures and energy measures | Association rule mining | 85% accuracy with ROC 0.87
Zhu et al. (2004) [10] | 2 | Geometric and texture features | Decision tree with boosting | 14.6% FP and 9.6% FN
Berry et al. (2006) [11] | 2 | NA | Learning vector quantization, self-organizing maps and Bayesian algorithms | NA
Pan et al. (2006) [12] | 2 | Intensity stats, texture features, Gabor wavelet decomposition | Support vector machine | 2.94% FN and 37.68% FP
Yang et al. (2006) [14] | 3 | Hough transform, DFT, GLCM features | Hand-tuned thresholds | 85% accuracy
Saitoh et al. (2006) [16] | 5 | Texture features, differential image features | Decision tree and SVM | 90% for 3-class problem
Po and Laine (2008) [13] | 2 | Multiscale Laplacian pyramid filters and histogram analysis | Genetic algorithm and neural network | Accuracy 93.5% with 88% TP and 99% TN
Liu et al. (2008) [21] | Crystal likelihood | Features from Gabor filters, integral histograms, and gradient images | Decision tree with boosting | ROC 0.92
Cumba et al. (2010) [18] | 3 and 6 | Basic stats, energy, Euler numbers, Radon-Laplacian, Sobel-edge, GLCM | Multiple random forests with bagging and feature subsampling | Recall 80% crystals, 89% precipitate, 98% clear drops
Sigdel et al. (2013) [28] | 3 | Intensity and blob features | Multilayer perceptron neural network | 1.2% crystal misses with 88% accuracy
Sigdel et al. (2014) [25] | 3 | Intensity and blob features | Semi-supervised | 75-85% overall accuracy
Dinc et al. (2014) [27] | 3 and 2 | Intensity and blob features | 5 classifiers, feature reduction using PCA | 96% on non-crystals, 95% on likely-leads
Yann et al. (2016) [19] | 10 | Deep learning on grayscale image | Deep CNN with 13 layers | 90.8% accuracy

The Number of Categories. A significant amount of previous work (for example, Zuk and Ward (1991) [7], Cumba et al. (2003) [8], Cumba et al. (2005) [9], Zhu et al. (2004) [10], Berry et al. (2006) [11], Pan et al.
(2006) [12], Po and Laine (2008) [13]) classified crystallization trials into non-crystal or crystal categories. Yang et al. (2006) [14] classified the trials into three categories (clear, precipitate, and crystal). Bern et al. (2004) [15] classified the images into five categories (empty, clear, precipitate, microcrystal hit, and crystal). Likewise, Saitoh et al. (2006) [16] classified into five categories (clear drop, creamy precipitate, granulated precipitate, amorphous state precipitate, and crystal). Spraggon et al. (2002) [17] proposed classification of the crystallization images into six categories (experimental mistake, clear drop, homogeneous precipitant, inhomogeneous precipitant, micro-crystals, and crystals). Cumba et al. (2010) [18] developed a system that classifies the images into three or six categories (phase separation, precipitate, skin effect, crystal, junk, and unsure). Yann et al. (2016) [19] classified into 10 categories (clear, precipitate, crystal, phase, precipitate and crystal, precipitate and skin, phase and crystal, phase and precipitate, skin, and junk). It should be noted that there is no standard for categorizing the images, and different research studies have proposed their own categories. Hampton's scheme specifies 9 possible outcomes of crystallization trials. We intend to classify the crystallization trials according to Hampton's scale.

Features for Classification. For feature extraction, a variety of image processing techniques have been proposed. Zuk and Ward (1991) [7] used the Hough transform to identify straight edges of crystals. Bern et al. (2004) [15] extract gradient and geometry-related features from the selected drop. Pan et al. (2006) [12] used intensity statistics, blob texture features, and results from Gabor wavelet decomposition to obtain the image features. Research studies by Cumba et al. (2003) [8], Saitoh et al. (2004) [20], Spraggon et al. (2002) [17], and Zhu et al. (2004) [10] used a combination of geometric and texture features as the input to their classifiers. Saitoh et al. (2006) [16] used global texture features as well as features from local parts of the image and features from differential images. Yang et al. (2006) [14] derived their features from the gray-level co-occurrence matrix, the Hough transform, and the discrete Fourier transform (DFT). Liu et al. (2008) [21] extracted features from Gabor filters, integral histograms, and gradient images to obtain a 466-dimensional feature vector. Po and Laine (2008) [13] applied multiscale Laplacian pyramid filters and histogram analysis techniques for feature extraction. Similarly, other extracted image features included Hough transform features [13], discrete Fourier transform features [22], features from multiscale Laplacian pyramid filters [23], histogram analysis features [9], Sobel-edge features [24], etc. Cumba et al. (2010) [18] presented the most sophisticated feature extraction techniques for the classification of crystallization trial images. Features such as basic statistics, energy, Euler numbers, Radon-Laplacian features, Sobel-edge features, microcrystal features, and gray-level co-occurrence matrix features were extracted to obtain a 14,908-dimensional feature vector. They utilized a web-based distributed system and extracted as many features as possible, hoping that the huge feature set could improve the accuracy of the classification [18].
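As an illustration of the simpler feature families mentioned above, the following sketch computes a handful of intensity and Otsu-threshold region (blob) features with numpy/scipy. The exact feature definitions, and the assumption of an 8-bit grayscale input, are illustrative choices and are not taken from any of the cited systems.

```python
# Illustrative sketch (not any paper's implementation): a few intensity and
# region features of the kind discussed above, using only numpy and scipy.
import numpy as np
from scipy import ndimage


def otsu_threshold(gray):
    """Return the Otsu threshold of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    cum_count = np.cumsum(hist)
    cum_sum = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum_count[t - 1], total - cum_count[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_sum[t - 1] / w0
        mu1 = (cum_sum[-1] - cum_sum[t - 1]) / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t


def image_features(gray):
    """Intensity statistics plus simple region (blob) features from an Otsu mask."""
    gray = gray.astype(np.float64)
    feats = {"mean": gray.mean(), "std": gray.std(),
             "min": gray.min(), "max": gray.max()}
    mask = gray >= otsu_threshold(gray.astype(np.uint8))
    labels, n_blobs = ndimage.label(mask)                      # connected components
    sizes = ndimage.sum(mask, labels, index=range(1, n_blobs + 1)) if n_blobs else []
    feats.update({"foreground_ratio": mask.mean(),
                  "num_blobs": float(n_blobs),
                  "largest_blob": float(max(sizes)) if n_blobs else 0.0})
    return feats
```

Feature vectors of this kind would then be concatenated with the other categories (graph, histogram, texture, shape-adaptive) before classification.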
Time Analysis of Classification. Because of the high-throughput rate of image collection, the speed of processing an image becomes an important factor. The system by Pan et al. (2006) [12] required 30 s per image for feature extraction. Po and Laine mentioned that feature extraction took 12.5 s per image in their system [13]. Because of the high computational requirements, they considered implementing their approach on the Google computing grid. The feature extraction described by Cumba et al. (2010) [18] is the most sophisticated and could take 5 h per image on a normal system. To speed up the process, they executed the feature extraction using a web-based distributed computing system. Yann et al. (2016) [19] utilized a deep convolutional neural network (CNN) for which training took 1.5 days for 150,000 weights and around 300 passes, and classification takes 86 ms for a 128x128 image on their GPU-based system.

Classifiers for Protein Crystallization. To obtain the decision model for classification, various classification techniques have been used. Zhu et al. (2004) [10] and Liu et al. (2008) [21] applied a decision tree with boosting. Bern et al. (2004) [15] used a decision tree classifier with hand-crafted thresholds. Pan et al. (2006) [12] applied a support vector machine (SVM) learning algorithm. Saitoh et al. (2006) [16] applied a combination of decision tree and SVM classifiers. Spraggon et al. (2002) [17] applied self-organizing neural networks. Po and Laine (2008) [13] combined genetic algorithms and neural networks to obtain a decision model. Berry et al. (2006) [11] determined scores for each object within a drop using self-organizing maps, learning vector quantization, and Bayesian algorithms; the overall score for the drop was calculated by aggregating the classification scores of the individual objects. Cumba et al. (2003) [8] and Saitoh et al. (2004) [20] applied linear discriminant analysis. Yang et al. (2006) [14] applied hand-tuned rule-based classification followed by linear discriminant analysis. Cumba et al. (2005) [9] used association rule mining, while Cumba et al. (2010) [18] used multiple random forest classifiers generated via bagging and feature subsampling. In [25], classification performance using semi-supervised approaches was investigated. The recent study by Hung et al. (2014) [26] proposed protein crystallization image classification using an elastic net. In our previous work [27], we evaluated the classification performance using 5 different classifiers, feature reduction using principal components analysis (PCA), and normalization methods for the non-crystal and likely-lead datasets. Yann et al. (2016) [19] utilized a deep convolutional neural network (CNN) with 13 layers: 0) 128x128 image, 1) contrast normalization, 2) horizontal mirroring, 3) transformation, 4) convolution (5x5 filter), 5) max pooling (2x2 filter), 6) convolution (5x5 filter), 7) max pooling (2x2 filter), 8) convolution (5x5 filter), 9) max pooling (2x2 filter), 10) convolution (3x3 filter), 11) 2048-node fully connected layer, 12) 2048-node fully connected layer with rectified linear activation, and 13) output layer using softmax.
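A minimal sketch of training one of the classifiers discussed above, a random forest with feature subsampling and class weighting, on pre-extracted feature vectors follows. The file names, label coding, and hyperparameters are illustrative assumptions, not taken from any of the cited tools.

```python
# Hedged sketch: random forest on pre-extracted feature vectors, reporting
# crystal recall explicitly because missed crystals are the costly error.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

X = np.load("features.npy")   # shape (n_images, n_features); illustrative file name
y = np.load("labels.npy")     # 0 = non-crystal, 1 = likely lead, 2 = crystal (illustrative coding)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(
    n_estimators=500,
    max_features="sqrt",       # feature subsampling, as in bagged/random-subspace ensembles
    class_weight="balanced",   # penalize missing the rarer crystal class
    random_state=0,
)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("overall accuracy:", accuracy_score(y_te, pred))
print("crystal recall (sensitivity):", recall_score(y_te, pred, labels=[2], average=None)[0])
```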
Accuracy of Classification. With regard to the correctness of a classification, the best reported accuracy for binary classification (i.e., classification into two categories) is 96.56% (83.6% true positive rate and 99.4% true negative rate), achieved using a deep CNN [19]. Despite the high accuracy rate, around 16% of crystals are missed. Using genetic algorithms and neural networks [13], an average true performance of 93.5% (88% true positive and 99% true negative rates) was achieved for binary classification. Saitoh et al. achieved accuracies in the range of 80-98% for different image categories [20]. Likewise, the automated system by Cumba et al. (2010) [18] detected 80% of crystal-bearing images, 89% of precipitate images, and 98% of clear drops accurately. The accuracy also depends on the number of categories: as the number of categories increases, the accuracy goes down, since more misclassifications are possible. For 10-way classification using a deep CNN, Yann et al. [19] achieved 91% accuracy, with around a 76.85% true positive rate for crystals and 8% of crystals categorized into classes not related to crystals. While overall accuracy is important, the true positive rate (recall or sensitivity) for crystals may carry more value: if crystallographers are to trust these automated classification systems, it is not desirable for successful crystalline cases to be missed by them.

In this study, we investigate whether it is possible to achieve high accuracy with a small feature set and a proper classifier while considering as many as 10 categories for real-time analysis. We provide an exhaustive set of experiments using all feature combinations and representative classifiers to achieve real-time analysis.
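The exhaustive search over feature-category combinations described above can be sketched as follows; the category names, column layout, and file names are illustrative assumptions rather than the study's actual configuration.

```python
# Hedged sketch: cross-validate a classifier on every combination of feature
# categories and report the best-scoring combinations.
from itertools import combinations
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Map each feature category to its column indices in the feature matrix (illustrative).
feature_blocks = {
    "intensity": np.arange(0, 10),
    "region_otsu": np.arange(10, 30),
    "histogram": np.arange(30, 60),
    "texture": np.arange(60, 120),
}
X = np.load("features.npy")
y = np.load("labels.npy")

results = []
names = list(feature_blocks)
for r in range(1, len(names) + 1):
    for combo in combinations(names, r):
        cols = np.concatenate([feature_blocks[c] for c in combo])
        score = cross_val_score(
            RandomForestClassifier(n_estimators=200, random_state=0),
            X[:, cols], y, cv=10,
        ).mean()
        results.append((score, combo))

for score, combo in sorted(results, reverse=True)[:5]:
    print(f"{score:.3f}  {'+'.join(combo)}")
```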
[ "24419610", "26955046", "12925793", "17001091", "16510974", "12393922", "19018095", "12393921", "24532991", "20360022", "15652250" ]
[ { "pmid": "24419610", "title": "Introduction to protein crystallization.", "abstract": "Protein crystallization was discovered by chance about 150 years ago and was developed in the late 19th century as a powerful purification tool and as a demonstration of chemical purity. The crystallization of proteins, nucleic acids and large biological complexes, such as viruses, depends on the creation of a solution that is supersaturated in the macromolecule but exhibits conditions that do not significantly perturb its natural state. Supersaturation is produced through the addition of mild precipitating agents such as neutral salts or polymers, and by the manipulation of various parameters that include temperature, ionic strength and pH. Also important in the crystallization process are factors that can affect the structural state of the macromolecule, such as metal ions, inhibitors, cofactors or other conventional small molecules. A variety of approaches have been developed that combine the spectrum of factors that effect and promote crystallization, and among the most widely used are vapor diffusion, dialysis, batch and liquid-liquid diffusion. Successes in macromolecular crystallization have multiplied rapidly in recent years owing to the advent of practical, easy-to-use screening kits and the application of laboratory robotics. A brief review will be given here of the most popular methods, some guiding principles and an overview of current technologies." }, { "pmid": "26955046", "title": "Optimizing Associative Experimental Design for Protein Crystallization Screening.", "abstract": "The goal of protein crystallization screening is the determination of the main factors of importance to crystallizing the protein under investigation. One of the major issues about determining these factors is that screening is often expanded to many hundreds or thousands of conditions to maximize combinatorial chemical space coverage for maximizing the chances of a successful (crystalline) outcome. In this paper, we propose an experimental design method called \"Associative Experimental Design (AED)\" and an optimization method includes eliminating prohibited combinations and prioritizing reagents based on AED analysis of results from protein crystallization experiments. AED generates candidate cocktails based on these initial screening results. These results are analyzed to determine those screening factors in chemical space that are most likely to lead to higher scoring outcomes, crystals. We have tested AED on three proteins derived from the hyperthermophile Thermococcus thioreducens, and we applied an optimization method to these proteins. Our AED method generated novel cocktails (count provided in parentheses) leading to crystals for three proteins as follows: Nucleoside diphosphate kinase (4), HAD superfamily hydrolase (2), Nucleoside kinase (1). After getting promising results, we have tested our optimization method on four different proteins. The AED method with optimization yielded 4, 3, and 20 crystalline conditions for holo Human Transferrin, archaeal exosome protein, and Nucleoside diphosphate kinase, respectively." }, { "pmid": "12925793", "title": "Automatic classification of sub-microlitre protein-crystallization trials in 1536-well plates.", "abstract": "A technique for automatically evaluating microbatch (400 nl) protein-crystallization trials is described. 
This method addresses analysis problems introduced at the sub-microlitre scale, including non-uniform lighting and irregular droplet boundaries. The droplet is segmented from the well using a loopy probabilistic graphical model with a two-layered grid topology. A vector of 23 features is extracted from the droplet image using the Radon transform for straight-edge features and a bank of correlation filters for microcrystalline features. Image classification is achieved by linear discriminant analysis of its feature vector. The results of the automatic method are compared with those of a human expert on 32 1536-well plates. Using the human-labeled images as ground truth, this method classifies images with 85% accuracy and a ROC score of 0.84. This result compares well with the experimental repeatability rate, assessed at 87%. Images falsely classified as crystal-positive variously contain speckled precipitate resembling microcrystals, skin effects or genuine crystals falsely labeled by the human expert. Many images falsely classified as crystal-negative variously contain very fine crystal features or dendrites lacking straight edges. Characterization of these misclassifications suggests directions for improving the method." }, { "pmid": "17001091", "title": "SPINE high-throughput crystallization, crystal imaging and recognition techniques: current state, performance analysis, new technologies and future aspects.", "abstract": "This paper reviews the developments in high-throughput and nanolitre-scale protein crystallography technologies within the remit of workpackage 4 of the Structural Proteomics In Europe (SPINE) project since the project's inception in October 2002. By surveying the uptake, use and experience of new technologies by SPINE partners across Europe, a picture emerges of highly successful adoption of novel working methods revolutionizing this area of structural biology. Finally, a forward view is taken of how crystallization methodologies may develop in the future." }, { "pmid": "16510974", "title": "Automated classification of protein crystallization images using support vector machines with scale-invariant texture and Gabor features.", "abstract": "Protein crystallography laboratories are performing an increasing number of experiments to obtain crystals of good diffraction quality. Better automation has enabled researchers to prepare and run more experiments in a shorter time. However, the problem of identifying which experiments are successful remains difficult. In fact, most of this work is still performed manually by humans. Automating this task is therefore an important goal. As part of a project to develop a new and automated high-throughput capillary-based protein crystallography instrument, a new image-classification subsystem has been developed to greatly reduce the number of images that require human viewing. This system must have low rates of false negatives (missed crystals), possibly at the cost of raising the number of false positives. The image-classification system employs a support vector machine (SVM) learning algorithm to classify the blocks making up each image. A new algorithm to find the area within the image that contains the drop is employed. The SVM uses numerical features, based on texture and the Gabor wavelet decomposition, that are calculated for each block. If a block within an image is classified as containing a crystal, then the entire image is classified as containing a crystal. 
In a study of 375 images, 87 of which contained crystals, a false-negative rate of less than 4% with a false-positive rate of about 40% was consistently achieved." }, { "pmid": "12393922", "title": "Computational analysis of crystallization trials.", "abstract": "A system for the automatic categorization of the results of crystallization experiments generated by robotic screening is presented. Images from robotically generated crystallization screens are taken at preset time intervals and analyzed by the computer program Crystal Experiment Evaluation Program (CEEP). This program attempts to automatically categorize the individual crystal experiments into a number of simple classes ranging from clear drop to mountable crystal. The algorithm first selects features from the images via edge detection and texture analysis. Classification is achieved via a self-organizing neural net generated from a set of hand-classified images used as a training set. New images are then classified according to this neural net. It is demonstrated that incorporation of time-series information may enhance the accuracy of classification. Preliminary results from the screening of the proteome of Thermotoga maritima are presented showing the utility of the system." }, { "pmid": "19018095", "title": "Image-based crystal detection: a machine-learning approach.", "abstract": "The ability of computers to learn from and annotate large databases of crystallization-trial images provides not only the ability to reduce the workload of crystallization studies, but also an opportunity to annotate crystallization trials as part of a framework for improving screening methods. Here, a system is presented that scores sets of images based on the likelihood of containing crystalline material as perceived by a machine-learning algorithm. The system can be incorporated into existing crystallization-analysis pipelines, whereby specialists examine images as they normally would with the exception that the images appear in rank order according to a simple real-valued score. Promising results are shown for 319 112 images associated with 150 structures solved by the Joint Center for Structural Genomics pipeline during the 2006-2007 year. Overall, the algorithm achieves a mean receiver operating characteristic score of 0.919 and a 78% reduction in human effort per set when considering an absolute score cutoff for screening images, while incurring a loss of five out of 150 structures." }, { "pmid": "12393921", "title": "Towards the automated evaluation of crystallization trials.", "abstract": "A method to evaluate images from crystallization experiments is described. Image discontinuities are used to determine boundaries of artifacts in the images and these are then considered as individual objects. This allows the edge of the drop to be identified and any objects outside this ignored. Each object is evaluated in terms of a number of attributes related to its size and shape, the curvature of the boundary and the variance in intensity, as well as obvious crystal-like characteristics such as straight sections of the boundary and straight lines of constant intensity within the object. With each object in the image assigned to one of a number of different classes, an overall report can be given. The objects to be considered have no predefined shape or size and, although one may expect to see straight edges and angles in a crystal, this is not a prerequisite for diffraction. 
This means there is much overlap in the values of the variables expected for the different classes. However, each attribute gives some information about the object in question and, although no single attribute can be expected to correctly classify an image, it has been found that a combination of classifiers gives very good results." }, { "pmid": "24532991", "title": "Real-Time Protein Crystallization Image Acquisition and Classification System.", "abstract": "In this paper, we describe the design and implementation of a stand-alone real-time system for protein crystallization image acquisition and classification with a goal to assist crystallographers in scoring crystallization trials. In-house assembled fluorescence microscopy system is built for image acquisition. The images are classified into three categories as non-crystals, likely leads, and crystals. Image classification consists of two main steps - image feature extraction and application of classification based on multilayer perceptron (MLP) neural networks. Our feature extraction involves applying multiple thresholding techniques, identifying high intensity regions (blobs), and generating intensity and blob features to obtain a 45-dimensional feature vector per image. To reduce the risk of missing crystals, we introduce a max-class ensemble classifier which applies multiple classifiers and chooses the highest score (or class). We performed our experiments on 2250 images consisting 67% non-crystal, 18% likely leads, and 15% clear crystal images and tested our results using 10-fold cross validation. Our results demonstrate that the method is very efficient (< 3 seconds to process and classify an image) and has comparatively high accuracy. Our system only misses 1.2% of the crystals (classified as non-crystals) most likely due to low illumination or out of focus image capture and has an overall accuracy of 88%." }, { "pmid": "20360022", "title": "Letter to the editor: Stability of Random Forest importance measures.", "abstract": "The goal of this article (letter to the editor) is to emphasize the value of exploring ranking stability when using the importance measures, mean decrease accuracy (MDA) and mean decrease Gini (MDG), provided by Random Forest. We illustrate with a real and a simulated example that ranks based on the MDA are unstable to small perturbations of the dataset and ranks based on the MDG provide more robust results." }, { "pmid": "15652250", "title": "Life in the fast lane for protein crystallization and X-ray crystallography.", "abstract": "The common goal for structural genomic centers and consortiums is to decipher as quickly as possible the three-dimensional structures for a multitude of recombinant proteins derived from known genomic sequences. Since X-ray crystallography is the foremost method to acquire atomic resolution for macromolecules, the limiting step is obtaining protein crystals that can be useful of structure determination. High-throughput methods have been developed in recent years to clone, express, purify, crystallize and determine the three-dimensional structure of a protein gene product rapidly using automated devices, commercialized kits and consolidated protocols. However, the average number of protein structures obtained for most structural genomic groups has been very low compared to the total number of proteins purified. 
As more entire genomic sequences are obtained for different organisms from the three kingdoms of life, only the proteins that can be crystallized and whose structures can be obtained easily are studied. Consequently, an astonishing number of genomic proteins remain unexamined. In the era of high-throughput processes, traditional methods in molecular biology, protein chemistry and crystallization are eclipsed by automation and pipeline practices. The necessity for high-rate production of protein crystals and structures has prevented the usage of more intellectual strategies and creative approaches in experimental executions. Fundamental principles and personal experiences in protein chemistry and crystallization are minimally exploited only to obtain \"low-hanging fruit\" protein structures. We review the practical aspects of today's high-throughput manipulations and discuss the challenges in fast pace protein crystallization and tools for crystallography. Structural genomic pipelines can be improved with information gained from low-throughput tactics that may help us reach the higher-bearing fruits. Examples of recent developments in this area are reported from the efforts of the Southeast Collaboratory for Structural Genomics (SECSG)." } ]
Frontiers in Computational Neuroscience
28522969
PMC5415673
10.3389/fncom.2017.00024
Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation
We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task.
4. Related work

In Section 2.3, we have discussed the relationship between Equilibrium Propagation and Backpropagation. In the weakly clamped phase, the change of the influence parameter β creates a perturbation at the output layer which propagates backwards in the hidden layers. The error derivatives and the gradient of the objective function are encoded by this perturbation.

In this section, we discuss the connection between our work and other algorithms, starting with Contrastive Hebbian Learning. Equilibrium Propagation offers a new perspective on the relationship between Backpropagation in feedforward nets and Contrastive Hebbian Learning in Hopfield nets and Boltzmann machines (Table 1).

Table 1. Correspondence of the phases for different learning algorithms: Backpropagation, Equilibrium Propagation (our algorithm), Contrastive Hebbian Learning (and Boltzmann Machine Learning), and Almeida-Pineda's Recurrent Back-Propagation.

Phase | Backprop | Equilibrium Prop | Contrastive Hebbian Learning | Almeida-Pineda
First Phase | Forward Pass | Free Phase | Free Phase (or Negative Phase) | Free Phase
Second Phase | Backward Pass | Weakly Clamped Phase | Clamped Phase (or Positive Phase) | Recurrent Backprop

4.1. Link to contrastive Hebbian learning

Despite the similarity between our learning rule and the Contrastive Hebbian Learning (CHL) rule for the continuous Hopfield model, there are important differences.

First, recall that our learning rule is
(27) \Delta W_{ij} \propto \lim_{\beta \to 0} \frac{1}{\beta} \left( \rho(u_i^{\beta})\,\rho(u_j^{\beta}) - \rho(u_i^{0})\,\rho(u_j^{0}) \right),
where u^0 is the free fixed point and u^β is the weakly clamped fixed point. The Contrastive Hebbian Learning rule is
(28) \Delta W_{ij} \propto \rho(u_i^{\infty})\,\rho(u_j^{\infty}) - \rho(u_i^{0})\,\rho(u_j^{0}),
where u^∞ is the fully clamped fixed point (i.e., the fixed point with fully clamped outputs). We choose the notation u^∞ for the fully clamped fixed point because it corresponds to β → +∞ in the notations of our model. Indeed, Equation (9) shows that in the limit β → +∞ the output unit y_i moves infinitely fast toward d_i, so y_i is immediately clamped to d_i and is no longer sensitive to the "internal force" (Equation 8). Another way to see it is by considering Equation (3): as β → +∞, the only value of y that gives finite energy is d.

The objective functions that these two algorithms optimize also differ. Recalling the form of the Hopfield energy (Equation 1) and the cost function (Equation 2), Equilibrium Propagation computes the gradient of
(29) J = \frac{1}{2} \left\| y^{0} - d \right\|^{2},
where y^0 is the output state at the free-phase fixed point u^0, while CHL computes the gradient of
(30) J^{\mathrm{CHL}} = E(u^{\infty}) - E(u^{0}).
The objective function for CHL has theoretical problems: it may take negative values if the clamped phase and free phase stabilize in different modes of the energy function, in which case the weight update is inconsistent and learning usually deteriorates, as pointed out by Movellan (1990). Our objective function does not suffer from this problem, because it is defined in terms of local perturbations, and the implicit function theorem guarantees that the weakly clamped fixed point will be close to the free fixed point (and thus in the same mode of the energy function).
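To make the two-phase procedure concrete, here is a minimal numpy sketch of the update in Equation (27) for a tiny fully connected network with symmetric weights and clamped inputs. The network sizes, the hard-sigmoid nonlinearity, the relaxation schedule, and the learning rates are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal Equilibrium Propagation sketch: free phase, weakly clamped phase,
# then the contrastive update of Equation (27). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
n = n_in + n_hid + n_out
W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)                         # symmetric weights, no self-connections
b = np.zeros(n)

rho = lambda u: np.clip(u, 0.0, 1.0)             # hard sigmoid
drho = lambda u: ((u > 0.0) & (u < 1.0)).astype(float)

def relax(x, d=None, beta=0.0, steps=200, eps=0.1, s=None):
    """Gradient descent on the total energy F = E + beta*C; inputs stay clamped."""
    u = np.zeros(n) if s is None else s.copy()
    u[:n_in] = x
    out = slice(n_in + n_hid, n)
    for _ in range(steps):
        grad = u - drho(u) * (W @ rho(u) + b)    # dE/du for the Hopfield-style energy
        if beta != 0.0:
            grad[out] += beta * (u[out] - d)     # beta * dC/dy with C = 0.5*||y - d||^2
        grad[:n_in] = 0.0                        # inputs are clamped
        u -= eps * grad
    return u

def equilibrium_prop_step(x, d, beta=0.5, lr=0.05):
    global W, b
    u_free = relax(x)                            # first phase: free fixed point
    u_clamp = relax(x, d, beta, s=u_free)        # second phase: weakly clamped fixed point
    r0, rb = rho(u_free), rho(u_clamp)
    dW = (np.outer(rb, rb) - np.outer(r0, r0)) / beta   # Equation (27)
    np.fill_diagonal(dW, 0.0)
    W += lr * dW
    b += lr * (rb - r0) / beta
    return u_free[n_in + n_hid:]                 # free-phase output state (the prediction)

x = rng.random(n_in)
d = np.array([1.0, 0.0])
for _ in range(20):
    y_pred = equilibrium_prop_step(x, d)
print("prediction after a few updates:", y_pred)
```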
We can also reformulate the learning rules and objective functions of these algorithms using the notations of the general setting (Section 3). For Equilibrium Propagation we have
\Delta\theta \propto -\lim_{\beta \to 0} \frac{1}{\beta} \left( \frac{\partial F}{\partial \theta}(\theta, v, \beta, s_{\theta,v}^{\beta}) - \frac{\partial F}{\partial \theta}(\theta, v, 0, s_{\theta,v}^{0}) \right) \quad \text{and} \quad (31)\; J(\theta, v) = \frac{\partial F}{\partial \beta}(\theta, v, 0, s_{\theta,v}^{0}).
As for Contrastive Hebbian Learning, one has
\Delta\theta \propto -\left( \frac{\partial F}{\partial \theta}(\theta, v, \infty, s_{\theta,v}^{\infty}) - \frac{\partial F}{\partial \theta}(\theta, v, 0, s_{\theta,v}^{0}) \right) \quad \text{and} \quad (32)\; J^{\mathrm{CHL}}(\theta, v) = F(\theta, v, \infty, s_{\theta,v}^{\infty}) - F(\theta, v, 0, s_{\theta,v}^{0}),
where β = 0 and β = ∞ are the values of β corresponding to free and (fully) clamped outputs, respectively.

Our learning algorithm is also more flexible because we are free to choose the cost function C (as well as the energy function E), whereas the contrastive function that CHL optimizes is fully determined by the energy function E.

4.2. Link to Boltzmann machine learning

Again, the log-likelihood that the Boltzmann machine optimizes is determined by the Hopfield energy E, whereas we have the freedom to choose the cost function in the framework of Equilibrium Propagation.

As discussed in Section 2.3, the second phase of Equilibrium Propagation (going from the free fixed point to the weakly clamped fixed point) can be seen as a brief "backpropagation phase" with weakly clamped target outputs. By contrast, in the positive phase of the Boltzmann machine the target is fully clamped, so the (correct version of the) Boltzmann machine learning rule requires two separate and independent phases (Markov chains), making the analogy with backprop less obvious.

Our algorithm is also similar in spirit to the Contrastive Divergence (CD) algorithm for Boltzmann machines. In our model, we start from a free fixed point (which requires a long relaxation in the free phase) and then run a short weakly clamped phase. In the CD algorithm, one starts from a positive equilibrium sample with the visible units clamped (which requires a long positive-phase Markov chain in the case of a general Boltzmann machine) and then runs a short negative phase. But there is an important difference: our algorithm computes the correct gradient of our objective function (in the limit β → 0), whereas the CD algorithm computes a biased estimator of the gradient of the log-likelihood. The CD1 update rule is provably not the gradient of any objective function and may cycle indefinitely in some pathological cases (Sutskever and Tieleman, 2010).

Finally, in the supervised setting presented in Section 2, a more subtle difference with the Boltzmann machine is that the "output" state y in our model is best thought of as being part of the latent state variable s. If we were to make an analogy with the Boltzmann machine, the visible units of the Boltzmann machine would be v = {x, d}, while the hidden units would be s = {h, y}. In the Boltzmann machine, the state of the external world is inferred directly on the visible units (because it is a probabilistic generative model that maximizes the log-likelihood of the data), whereas in our model we make the choice to integrate in s special latent variables y that aim to match the target d.
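For comparison with the contrastive-divergence discussion above, a minimal CD-1 update for a binary restricted Boltzmann machine is sketched below. This is illustrative only: the paper's model is a deterministic continuous Hopfield network, not an RBM, and the sizes and learning rate are arbitrary choices.

```python
# Minimal CD-1 sketch for a binary RBM: positive phase from the data, one
# Gibbs step for the negative phase, then the (biased) gradient estimate.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 4
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, lr=0.1):
    global W, b_vis, b_hid
    ph0 = sigmoid(v0 @ W + b_hid)                    # P(h = 1 | v0)
    h0 = (rng.random(n_hid) < ph0).astype(float)     # sample hidden units
    pv1 = sigmoid(h0 @ W.T + b_vis)                  # reconstruction P(v = 1 | h0)
    v1 = (rng.random(n_vis) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_hid)
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))   # biased log-likelihood gradient estimate
    b_vis += lr * (v0 - v1)
    b_hid += lr * (ph0 - ph1)

v = (rng.random(n_vis) < 0.5).astype(float)
for _ in range(10):
    cd1_step(v)
```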
4.3. Link to recurrent back-propagation

Directly connected to our model is the work by Pineda (1987) and Almeida (1987) on recurrent back-propagation. They consider the same objective function as ours, but formulate the problem as a constrained optimization problem. In Appendix B, we derive another proof for the learning rule (Theorem 1) with the Lagrangian formalism for constrained optimization problems. The beginning of this proof is in essence the same as the one proposed by Pineda (1987) and Almeida (1987), but there is a major difference when it comes to solving Equation (75) for the costate variable λ*. The method proposed by Pineda (1987) and Almeida (1987) is to use Equation (75) to compute λ* by a fixed-point iteration in a linearized form of the recurrent network. The computation of λ* corresponds to their second phase, which they call recurrent back-propagation. However, this second phase does not follow the same kind of dynamics as the first phase (the free phase) because it uses a linearization of the neural activation rather than the fully non-linear activation. From a biological-plausibility point of view, having to use a different kind of hardware and computation for the two phases is not satisfying. By contrast, like the continuous Hopfield net and the Boltzmann machine, our model involves only one kind of neural computation for both phases.

4.4. The model by Xie and Seung

Previous work on the back-propagation interpretation of contrastive Hebbian learning was done by Xie and Seung (2003). The model by Xie and Seung (2003) is a modified version of the Hopfield model. They consider the case of a layered MLP-like network, but their model can be extended to a more general connectivity, as shown here. In essence, using the notations of our model (Section 2), the energy function that they consider is
(33) E^{X\&S}(u) := \frac{1}{2}\sum_i \gamma^{i} u_i^{2} - \sum_{i<j} \gamma^{j} W_{ij}\,\rho(u_i)\rho(u_j) - \sum_i \gamma^{i} b_i\,\rho(u_i).
The difference with Equation (1) is that they introduce a parameter γ, assumed to be small, that scales the strength of the connections. Their update rule is the contrastive Hebbian learning rule which, for this particular energy function, takes the form
(34) \Delta W_{ij} \propto -\left( \frac{\partial E^{X\&S}}{\partial W_{ij}}(u^{\infty}) - \frac{\partial E^{X\&S}}{\partial W_{ij}}(u^{0}) \right) = \gamma^{j} \left( \rho(u_i^{\infty})\rho(u_j^{\infty}) - \rho(u_i^{0})\rho(u_j^{0}) \right)
for every pair of indices (i, j) such that i < j. Here, u^∞ and u^0 are the (fully) clamped fixed point and the free fixed point, respectively. Xie and Seung (2003) show that in the regime γ → 0 this contrastive Hebbian learning rule is equivalent to back-propagation. At the free fixed point u^0, one has \frac{\partial E^{X\&S}}{\partial s_i}(u^{0}) = 0 for every unit s_i, which yields, after dividing by γ^i and rearranging the terms,
(35) s_i^{0} = \rho'(s_i^{0}) \left( \sum_{j<i} W_{ij}\,\rho(u_j^{0}) + \sum_{j>i} \gamma^{\,j-i} W_{ij}\,\rho(u_j^{0}) + b_i \right).
In the limit γ → 0, one gets s_i^{0} \approx \rho'(s_i^{0}) \left( \sum_{j<i} W_{ij}\,\rho(u_j^{0}) + b_i \right), so that the network almost behaves like a feedforward net in this regime.

As a comparison, recall that in our model (Section 2) the energy function is
(36) E(u) := \frac{1}{2}\sum_i u_i^{2} - \sum_{i<j} W_{ij}\,\rho(u_i)\rho(u_j) - \sum_i b_i\,\rho(u_i),
the learning rule is
(37) \Delta W_{ij} \propto -\lim_{\beta\to 0}\frac{1}{\beta}\left( \frac{\partial E}{\partial W_{ij}}(u^{\beta}) - \frac{\partial E}{\partial W_{ij}}(u^{0}) \right) = \lim_{\beta\to 0}\frac{1}{\beta}\left( \rho(u_i^{\beta})\rho(u_j^{\beta}) - \rho(u_i^{0})\rho(u_j^{0}) \right),
and at the free fixed point we have \frac{\partial E}{\partial s_i}(u^{0}) = 0 for every unit s_i, which gives
(38) s_i^{0} = \rho'(s_i^{0}) \left( \sum_{j \neq i} W_{ij}\,\rho(u_j^{0}) + b_i \right).

Here are the main differences between our model and theirs. In our model, the feedforward and feedback connections are both strong; in their model, the feedback weights are tiny compared to the feedforward weights, which makes the (recurrent) computations look almost feedforward. In our second phase, the outputs are weakly clamped; in their second phase, they are fully clamped. The theory of our model requires a single learning rate for the weights, while in their model the update rule for W_{ij} (with i < j) is scaled by a factor γ^j (see Equation 34); since γ is small, the learning rates for the weights vary over many orders of magnitude in their model. Intuitively, these multiple learning rates are required to compensate for the small feedback weights.
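To spell out the algebra behind Equation (35) above (our own restatement, using only the definitions already given): the pair term of E^{X&S} carries the factor of the larger index, so the free-fixed-point condition reads
\frac{\partial E^{X\&S}}{\partial s_i}(u^{0}) = \gamma^{i} s_i^{0} - \rho'(s_i^{0}) \left( \sum_{j<i} \gamma^{i} W_{ij}\,\rho(u_j^{0}) + \sum_{j>i} \gamma^{j} W_{ij}\,\rho(u_j^{0}) + \gamma^{i} b_i \right) = 0 .
Dividing both sides by \gamma^{i} and writing \gamma^{j}/\gamma^{i} = \gamma^{\,j-i} for j > i yields Equation (35); letting \gamma \to 0 makes the feedback sum vanish, which gives the feedforward approximation quoted in the text.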
[ "27870614", "21212356", "11283308", "7054394", "22920249", "19325932", "11919633", "22080608", "18255734", "12180402", "6587342", "22807913", "12590814" ]
[ { "pmid": "27870614", "title": "Active Inference: A Process Theory.", "abstract": "This article describes a process theory based on active inference and belief propagation. Starting from the premise that all neuronal processing (and action selection) can be explained by maximizing Bayesian model evidence-or minimizing variational free energy-we ask whether neuronal responses can be described as a gradient descent on variational free energy. Using a standard (Markov decision process) generative model, we derive the neuronal dynamics implicit in this description and reproduce a remarkable range of well-characterized neuronal phenomena. These include repetition suppression, mismatch negativity, violation responses, place-cell activity, phase precession, theta sequences, theta-gamma coupling, evidence accumulation, race-to-bound dynamics, and transfer of dopamine responses. Furthermore, the (approximately Bayes' optimal) behavior prescribed by these dynamics has a degree of face validity, providing a formal explanation for reward seeking, context learning, and epistemic foraging. Technically, the fact that a gradient descent appears to be a valid description of neuronal activity means that variational free energy is a Lyapunov function for neuronal dynamics, which therefore conform to Hamilton's principle of least action." }, { "pmid": "21212356", "title": "Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment.", "abstract": "The brain maintains internal models of its environment to interpret sensory inputs and to prepare actions. Although behavioral studies have demonstrated that these internal models are optimally adapted to the statistics of the environment, the neural underpinning of this adaptation is unknown. Using a Bayesian model of sensory cortical processing, we related stimulus-evoked and spontaneous neural activities to inferences and prior expectations in an internal model and predicted that they should match if the model is statistically optimal. To test this prediction, we analyzed visual cortical activity of awake ferrets during development. Similarity between spontaneous and evoked activities increased with age and was specific to responses evoked by natural scenes. This demonstrates the progressive adaptation of internal models to the statistics of natural stimuli at the neural level." }, { "pmid": "11283308", "title": "Synaptic modification by correlated activity: Hebb's postulate revisited.", "abstract": "Correlated spiking of pre- and postsynaptic neurons can result in strengthening or weakening of synapses, depending on the temporal order of spiking. Recent findings indicate that there are narrow and cell type-specific temporal windows for such synaptic modification and that the generally accepted input- (or synapse-) specific rule for modification appears not to be strictly adhered to. Spike timing-dependent modifications, together with selective spread of synaptic changes, provide a set of cellular mechanisms that are likely to be important for the development and functioning of neural networks. When an axon of cell A is near enough to excite cell B or repeatedly or consistently takes part in firing it, some growth or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." 
}, { "pmid": "7054394", "title": "Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex.", "abstract": "The development of stimulus selectivity in the primary sensory cortex of higher vertebrates is considered in a general mathematical framework. A synaptic evolution scheme of a new kind is proposed in which incoming patterns rather than converging afferents compete. The change in the efficacy of a given synapse depends not only on instantaneous pre- and postsynaptic activities but also on a slowly varying time-averaged value of the postsynaptic activity. Assuming an appropriate nonlinear form for this dependence, development of selectivity is obtained under quite general conditions on the sensory environment. One does not require nonlinearity of the neuron's integrative power nor does one need to assume any particular form for intracortical circuitry. This is first illustrated in simple cases, e.g., when the environment consists of only two different stimuli presented alternately in a random manner. The following formal statement then holds: the state of the system converges with probability 1 to points of maximum selectivity in the state space. We next consider the problem of early development of orientation selectivity and binocular interaction in primary visual cortex. Giving the environment an appropriate form, we obtain orientation tuning curves and ocular dominance comparable to what is observed in normally reared adult cats or monkeys. Simulations with binocular input and various types of normal or altered environments show good agreement with the relevant experimental data. Experiments are suggested that could test our theory further." }, { "pmid": "22920249", "title": "The spike-timing dependence of plasticity.", "abstract": "In spike-timing-dependent plasticity (STDP), the order and precise temporal interval between presynaptic and postsynaptic spikes determine the sign and magnitude of long-term potentiation (LTP) or depression (LTD). STDP is widely utilized in models of circuit-level plasticity, development, and learning. However, spike timing is just one of several factors (including firing rate, synaptic cooperativity, and depolarization) that govern plasticity induction, and its relative importance varies across synapses and activity regimes. This review summarizes this broader view of plasticity, including the forms and cellular mechanisms for the spike-timing dependence of plasticity, and, the evidence that spike timing is an important determinant of plasticity in vivo." }, { "pmid": "19325932", "title": "Free-energy and the brain.", "abstract": "If one formulates Helmholtz's ideas about perception in terms of modern-day theories one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. Using constructs from statistical physics it can be shown that the problems of inferring what cause our sensory input and learning causal regularities in the sensorium can be resolved using exactly the same principles. Furthermore, inference and learning can proceed in a biologically plausible fashion. The ensuing scheme rests on Empirical Bayes and hierarchical models of how sensory information is generated. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. 
This scheme provides a principled way to understand many aspects of the brain's organisation and responses.In this paper, we suggest that these perceptual processes are just one emergent property of systems that conform to a free-energy principle. The free-energy considered here represents a bound on the surprise inherent in any exchange with the environment, under expectations encoded by its state or configuration. A system can minimise free-energy by changing its configuration to change the way it samples the environment, or to change its expectations. These changes correspond to action and perception respectively and lead to an adaptive exchange with the environment that is characteristic of biological systems. This treatment implies that the system's state and structure encode an implicit and probabilistic model of the environment. We will look at models entailed by the brain and how minimisation of free-energy can explain its dynamics and structure." }, { "pmid": "11919633", "title": "Spike-timing-dependent synaptic modification induced by natural spike trains.", "abstract": "The strength of the connection between two neurons can be modified by activity, in a way that depends on the timing of neuronal firing on either side of the synapse. This spike-timing-dependent plasticity (STDP) has been studied by systematically varying the intervals between pre- and postsynaptic spikes. Here we studied how STDP operates in the context of more natural spike trains. We found that in visual cortical slices the contribution of each pre-/postsynaptic spike pair to synaptic modification depends not only on the interval between the pair, but also on the timing of preceding spikes. The efficacy of each spike in synaptic modification was suppressed by the preceding spike in the same neuron, occurring within several tens of milliseconds. The direction and magnitude of synaptic modifications induced by spike patterns recorded in vivo in response to natural visual stimuli were well predicted by incorporating the suppressive inter-spike interaction within each neuron. Thus, activity-induced synaptic modification depends not only on the relative spike timing between the neurons, but also on the spiking pattern within each neuron. For natural spike trains, the timing of the first spike in each burst is dominant in synaptic modification." }, { "pmid": "22080608", "title": "A triplet spike-timing-dependent plasticity model generalizes the Bienenstock-Cooper-Munro rule to higher-order spatiotemporal correlations.", "abstract": "Synaptic strength depresses for low and potentiates for high activation of the postsynaptic neuron. This feature is a key property of the Bienenstock-Cooper-Munro (BCM) synaptic learning rule, which has been shown to maximize the selectivity of the postsynaptic neuron, and thereby offers a possible explanation for experience-dependent cortical plasticity such as orientation selectivity. However, the BCM framework is rate-based and a significant amount of recent work has shown that synaptic plasticity also depends on the precise timing of presynaptic and postsynaptic spikes. Here we consider a triplet model of spike-timing-dependent plasticity (STDP) that depends on the interactions of three precisely timed spikes. Triplet STDP has been shown to describe plasticity experiments that the classical STDP rule, based on pairs of spikes, has failed to capture. In the case of rate-based patterns, we show a tight correspondence between the triplet STDP rule and the BCM rule. 
We analytically demonstrate the selectivity property of the triplet STDP rule for orthogonal inputs and perform numerical simulations for nonorthogonal inputs. Moreover, in contrast to BCM, we show that triplet STDP can also induce selectivity for input patterns consisting of higher-order spatiotemporal correlations, which exist in natural stimuli and have been measured in the brain. We show that this sensitivity to higher-order correlations can be used to develop direction and speed selectivity." }, { "pmid": "18255734", "title": "Nonlinear backpropagation: doing backpropagation without derivatives of the activation function.", "abstract": "The conventional linear backpropagation algorithm is replaced by a nonlinear version, which avoids the necessity for calculating the derivative of the activation function. This may be exploited in hardware realizations of neural processors. In this paper we derive the nonlinear backpropagation algorithms in the framework of recurrent backpropagation and present some numerical simulations of feedforward networks on the NetTalk problem. A discussion of implementation in analog very large scale integration (VLSI) electronics concludes the paper." }, { "pmid": "12180402", "title": "Training products of experts by minimizing contrastive divergence.", "abstract": "It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual \"expert\" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called \"contrastive divergence\" whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data." }, { "pmid": "6587342", "title": "Neurons with graded response have collective computational properties like those of two-state neurons.", "abstract": "A model for a large network of \"neurons\" with a graded response (or sigmoid input-output relation) is studied. This deterministic system has collective properties in very close correspondence with the earlier stochastic model based on McCulloch - Pitts neurons. The content- addressable memory and other emergent collective properties of the original model also are present in the graded response model. The idea that such collective properties are used in biological systems is given added credence by the continued presence of such properties for more nearly biological \"neurons.\" Collective analog electrical circuits of the kind described will certainly function. The collective states of the two models have a simple correspondence. The original model will continue to be useful for simulations, because its connection to graded response systems is established. 
Equations that include the effect of action potentials in the graded response system are also developed." }, { "pmid": "12590814", "title": "Equivalence of backpropagation and contrastive Hebbian learning in a layered network.", "abstract": "Backpropagation and contrastive Hebbian learning are two methods of training networks with hidden neurons. Backpropagation computes an error signal for the output neurons and spreads it over the hidden neurons. Contrastive Hebbian learning involves clamping the output neurons at desired values and letting the effect spread through feedback connections over the entire network. To investigate the relationship between these two forms of learning, we consider a special case in which they are identical: a multilayer perceptron with linear output units, to which weak feedback connections have been added. In this case, the change in network state caused by clamping the output neurons turns out to be the same as the error signal spread by backpropagation, except for a scalar prefactor. This suggests that the functionality of backpropagation can be realized alternatively by a Hebbian-type learning algorithm, which is suitable for implementation in biological networks." } ]
Journal of Cheminformatics
29086162
PMC5425364
10.1186/s13321-017-0213-3
Scaffold Hunter: a comprehensive visual analytics framework for drug discovery
The era of big data is changing how rational drug discovery and the development of bioactive molecules are performed, and versatile tools are needed to assist in molecular design workflows. Scaffold Hunter is a flexible visual analytics framework for the analysis of chemical compound data that combines techniques from several fields, such as data mining and information visualization. The framework allows high-dimensional chemical compound data to be analyzed interactively, combining intuitive visualizations with automated analysis methods, including versatile clustering methods. Originally designed to analyze the scaffold tree, Scaffold Hunter is continuously revised and extended. We describe recent extensions that significantly increase its applicability for a variety of tasks.
Related work
Several other software tools address the challenges of organizing and analyzing chemical and biological data. Early tools such as Spotfire [8] were not originally developed for these kinds of data, but are often applied to compound datasets. In parallel, workflow environments such as the Konstanz Information Miner (KNIME) [9], Pipeline Pilot [10], and Taverna [11] were developed. The basic idea was to enable scientists in the life sciences to perform tasks that are traditionally the domain of data analytics specialists. KNIME additionally integrates specific cheminformatics extensions [12]. Some of these focus on the integration of chemical toolkits (e.g., RDKit [13], CDK [14], and Indigo [15]), others on analytical aspects (e.g., CheS-Mapper [16, 17]). CDK is likewise available in Taverna [18, 19], and Pipeline Pilot can integrate ChemAxon components [20]. These tools thus assist scientists in their decision-making process, for example in deciding which compounds should undergo further investigation. While such workflow systems facilitate data-oriented tasks such as filtering or property calculation, they lack an intuitive visualization of the chemical space. This makes it challenging to evaluate results, plan subsequent steps, or draw conclusions from a performed screen. More recently, tools tailored to the specific needs of life scientists in the chemical biology, medicinal chemistry, and pharmaceutical domains have been developed. These include MONA 2 [21], Screening Assistant 2 [22], DataWarrior [23], the Chemical Space Mapper (CheS-Mapper) [16, 17], and the High-Throughput Screening Exploration Environment (HiTSEE) [24, 25]. The last two tools complement the workflow environment KNIME with a visualization node. To the best of the authors' knowledge, HiTSEE is not publicly available at present. Screening Assistant 2, CheS-Mapper, and DataWarrior are open-source Java tools and therefore platform independent. MONA 2 focuses on set operations and is particularly useful for comparing datasets and tracking changes. DataWarrior has a wider range of features and goes beyond pure analysis software; for example, it is capable of generating combinatorial libraries. Screening Assistant 2 was originally developed to manage screening libraries and is able to deal with several million substances [22]. Furthermore, during import, datasets are scanned for problematic molecules or molecular features such as Pan Assay Interference Compounds (PAINS), which may disturb the assay setup or bind unspecifically to diverse proteins [26]. CheS-Mapper focuses on the assessment of quantitative structure-activity relationship (QSAR) studies; it facilitates the discovery of changes in molecular structure that explain (Q)SAR models by visualizing the results (either predicted or experimentally determined). CheS-Mapper utilizes the software R [27] and the WEKA toolkit [28] to visually embed the analyzed molecules in 3D space. In summary, DataWarrior and CheS-Mapper, as well as Scaffold Hunter, are able to assist the discovery and analysis of SARs by utilizing different visualization techniques (see Table 1). All three tools use dimension reduction techniques and clustering methods. DataWarrior and Scaffold Hunter support a set of different visualizations in order to cope with more diverse issues and aspects of the raw data. Both are able to smoothly visualize bioactivity data related to chemical structures.
DataWarrior utilizes self-organizing maps, principal component analysis, and 2D rubber band scaling to reduce data dimensionality. In contrast, Scaffold Hunter employs the scaffold concept, which provides the basis for the scaffold tree view, the innovative molecule cloud view, and the heat map view; the latter enables the user to analyze multiple properties, such as bioactivities reflecting different selectivities within a protein family. Altogether, Scaffold Hunter provides a unique collection of data visualizations for the most frequent molecular design and drug discovery tasks.

Table 1. Comparison of visualization techniques of cheminformatics software supporting visualization

Technique        | DataWarrior        | CheS-Mapper               | Scaffold Hunter
Plot             | Yes                | –                         | Yes
Dim. reduction   | PCA, 2D-RBS^a, SOM | PCA, MDS                  | MDS
Spreadsheet      | Yes                | –                         | Yes
Clustering       | Hierarchical       | WEKA/R methods            | Hierarchical
Special features | 2D-RBS^a           | 3D space, web application | Scaffold concept, collaborative features, fast heuristic clustering [29]

^a 2D rubber band scaling
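To make the scaffold concept and the clustering methods mentioned above more concrete, the following is a minimal, illustrative Python sketch, not Scaffold Hunter's own code or API: it reduces a few example molecules to their Murcko scaffolds with RDKit and hierarchically clusters them on Tanimoto distances between Morgan (ECFP-like) fingerprints. The SMILES strings, fingerprint settings, and distance cutoff are assumptions chosen only for illustration.

```python
# Illustrative sketch: scaffold extraction and hierarchical clustering of compounds.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.Chem.Scaffolds import MurckoScaffold
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
import numpy as np

smiles = ["CC(=O)Oc1ccccc1C(=O)O",          # aspirin (placeholder compound)
          "CC(C)Cc1ccc(cc1)C(C)C(=O)O",     # ibuprofen (placeholder compound)
          "Cn1cnc2c1c(=O)n(C)c(=O)n2C"]     # caffeine (placeholder compound)
mols = [Chem.MolFromSmiles(s) for s in smiles]

# Scaffold concept: reduce each molecule to its ring framework (Murcko scaffold).
scaffolds = [Chem.MolToSmiles(MurckoScaffold.GetScaffoldForMol(m)) for m in mols]
print(scaffolds)

# Hierarchical clustering on 1 - Tanimoto similarity of Morgan fingerprints.
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]
n = len(fps)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = 1.0 - DataStructs.TanimotoSimilarity(fps[i], fps[j])

tree = linkage(squareform(dist), method="average")    # agglomerative clustering (dendrogram)
labels = fcluster(tree, t=0.7, criterion="distance")  # flat clusters at a distance cutoff
print(labels)
```

The same distance matrix could equally be fed to a dimension reduction method such as MDS to obtain a 2D map of the chemical space, which is the kind of view the tools in Table 1 provide interactively.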
[ "27779378", "19561620", "17238248", "22769057", "22424447", "24397863", "22166170", "22644661", "26389652", "23327565", "25558886", "16180923", "16859312", "27485979", "20426451", "22537178", "19561619", "20481515", "20394088", "17125213", "21615076" ]
[ { "pmid": "27779378", "title": "What Can We Learn from Bioactivity Data? Chemoinformatics Tools and Applications in Chemical Biology Research.", "abstract": "The ever increasing bioactivity data that are produced nowadays allow exhaustive data mining and knowledge discovery approaches that change chemical biology research. A wealth of chemoinformatics tools, web services, and applications therefore exists that supports a careful evaluation and analysis of experimental data to draw conclusions that can influence the further development of chemical probes and potential lead structures. This review focuses on open-source approaches that can be handled by scientists who are not familiar with computational methods having no expert knowledge in chemoinformatics and modeling. Our aim is to present an easily manageable toolbox for support of every day laboratory work. This includes, among other things, the available bioactivity and related molecule databases as well as tools to handle and analyze in-house data." }, { "pmid": "19561620", "title": "Interactive exploration of chemical space with Scaffold Hunter.", "abstract": "We describe Scaffold Hunter, a highly interactive computer-based tool for navigation in chemical space that fosters intuitive recognition of complex structural relationships associated with bioactivity. The program reads compound structures and bioactivity data, generates compound scaffolds, correlates them in a hierarchical tree-like arrangement, and annotates them with bioactivity. Brachiation along tree branches from structurally complex to simple scaffolds allows identification of new ligand types. We provide proof of concept for pyruvate kinase." }, { "pmid": "17238248", "title": "The scaffold tree--visualization of the scaffold universe by hierarchical scaffold classification.", "abstract": "A hierarchical classification of chemical scaffolds (molecular framework, which is obtained by pruning all terminal side chains) has been introduced. The molecular frameworks form the leaf nodes in the hierarchy trees. By an iterative removal of rings, scaffolds forming the higher levels in the hierarchy tree are obtained. Prioritization rules ensure that less characteristic, peripheral rings are removed first. All scaffolds in the hierarchy tree are well-defined chemical entities making the classification chemically intuitive. The classification is deterministic, data-set-independent, and scales linearly with the number of compounds included in the data set. The application of the classification is demonstrated on two data sets extracted from the PubChem database, namely, pyruvate kinase binders and a collection of pesticides. The examples shown demonstrate that the classification procedure handles robustly synthetic structures and natural products." }, { "pmid": "22769057", "title": "The Molecule Cloud - compact visualization of large collections of molecules.", "abstract": "BACKGROUND\nAnalysis and visualization of large collections of molecules is one of the most frequent challenges cheminformatics experts in pharmaceutical industry are facing. Various sophisticated methods are available to perform this task, including clustering, dimensionality reduction or scaffold frequency analysis. In any case, however, viewing and analyzing large tables with molecular structures is necessary. 
We present a new visualization technique, providing basic information about the composition of molecular data sets at a single glance.\n\n\nSUMMARY\nA method is presented here allowing visual representation of the most common structural features of chemical databases in a form of a cloud diagram. The frequency of molecules containing particular substructure is indicated by the size of respective structural image. The method is useful to quickly perceive the most prominent structural features present in the data set. This approach was inspired by popular word cloud diagrams that are used to visualize textual information in a compact form. Therefore we call this approach \"Molecule Cloud\". The method also supports visualization of additional information, for example biological activity of molecules containing this scaffold or the protein target class typical for particular scaffolds, by color coding. Detailed description of the algorithm is provided, allowing easy implementation of the method by any cheminformatics toolkit. The layout algorithm is available as open source Java code.\n\n\nCONCLUSIONS\nVisualization of large molecular data sets using the Molecule Cloud approach allows scientists to get information about the composition of molecular databases and their most frequent structural features easily. The method may be used in the areas where analysis of large molecular collections is needed, for example processing of high throughput screening results, virtual screening or compound purchasing. Several example visualizations of large data sets, including PubChem, ChEMBL and ZINC databases using the Molecule Cloud diagrams are provided." }, { "pmid": "22424447", "title": "CheS-Mapper - Chemical Space Mapping and Visualization in 3D.", "abstract": "Analyzing chemical datasets is a challenging task for scientific researchers in the field of chemoinformatics. It is important, yet difficult to understand the relationship between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects. To that respect, visualization tools can help to better comprehend the underlying correlations. Our recently developed 3D molecular viewer CheS-Mapper (Chemical Space Mapper) divides large datasets into clusters of similar compounds and consequently arranges them in 3D space, such that their spatial proximity reflects their similarity. The user can indirectly determine similarity, by selecting which features to employ in the process. The tool can use and calculate different kind of features, like structural fragments as well as quantitative chemical descriptors. These features can be highlighted within CheS-Mapper, which aids the chemist to better understand patterns and regularities and relate the observations to established scientific knowledge. As a final function, the tool can also be used to select and export specific subsets of a given dataset for further analysis." }, { "pmid": "24397863", "title": "Prediction of novel drug indications using network driven biological data prioritization and integration.", "abstract": "BACKGROUND\nWith the rapid development of high-throughput genomic technologies and the accumulation of genome-wide datasets for gene expression profiling and biological networks, the impact of diseases and drugs on gene expression can be comprehensively characterized. 
Drug repositioning offers the possibility of reduced risks in the drug discovery process, thus it is an essential step in drug development.\n\n\nRESULTS\nComputational prediction of drug-disease interactions using gene expression profiling datasets and biological networks is a new direction in drug repositioning that has gained increasing interest. We developed a computational framework to build disease-drug networks using drug- and disease-specific subnetworks. The framework incorporates protein networks to refine drug and disease associated genes and prioritize genes in disease and drug specific networks. For each drug and disease we built multiple networks using gene expression profiling and text mining. Finally a logistic regression model was used to build functional associations between drugs and diseases.\n\n\nCONCLUSIONS\nWe found that representing drugs and diseases by genes with high centrality degree in gene networks is the most promising representation of drug or disease subnetworks." }, { "pmid": "22166170", "title": "New developments on the cheminformatics open workflow environment CDK-Taverna.", "abstract": "BACKGROUND\nThe computational processing and analysis of small molecules is at heart of cheminformatics and structural bioinformatics and their application in e.g. metabolomics or drug discovery. Pipelining or workflow tools allow for the Lego™-like, graphical assembly of I/O modules and algorithms into a complex workflow which can be easily deployed, modified and tested without the hassle of implementing it into a monolithic application. The CDK-Taverna project aims at building a free open-source cheminformatics pipelining solution through combination of different open-source projects such as Taverna, the Chemistry Development Kit (CDK) or the Waikato Environment for Knowledge Analysis (WEKA). A first integrated version 1.0 of CDK-Taverna was recently released to the public.\n\n\nRESULTS\nThe CDK-Taverna project was migrated to the most up-to-date versions of its foundational software libraries with a complete re-engineering of its worker's architecture (version 2.0). 64-bit computing and multi-core usage by paralleled threads are now supported to allow for fast in-memory processing and analysis of large sets of molecules. Earlier deficiencies like workarounds for iterative data reading are removed. The combinatorial chemistry related reaction enumeration features are considerably enhanced. Additional functionality for calculating a natural product likeness score for small molecules is implemented to identify possible drug candidates. Finally the data analysis capabilities are extended with new workers that provide access to the open-source WEKA library for clustering and machine learning as well as training and test set partitioning. The new features are outlined with usage scenarios.\n\n\nCONCLUSIONS\nCDK-Taverna 2.0 as an open-source cheminformatics workflow solution matured to become a freely available and increasingly powerful tool for the biosciences. The combination of the new CDK-Taverna worker family with the already available workflows developed by a lively Taverna community and published on myexperiment.org enables molecular scientists to quickly calculate, process and analyse molecular data as typically found in e.g. today's systems biology scenarios." 
}, { "pmid": "26389652", "title": "MONA 2: A Light Cheminformatics Platform for Interactive Compound Library Processing.", "abstract": "Because of the availability of large compound collections on the Web, elementary cheminformatics tasks such as chemical library browsing, analyzing, filtering, or unifying have become widespread in the life science community. Furthermore, the high performance of desktop hardware allows an interactive, problem-driven approach to these tasks, avoiding rigid processing scripts and workflows. Here, we present MONA 2, which is the second major release of our cheminformatics desktop application addressing this need. Using MONA requires neither complex database setups nor expert knowledge of cheminformatics. A new molecular set concept purely based on structural entities rather than individual compounds has allowed the development of an intuitive user interface. Based on a chemically precise, high-performance software library, typical tasks on chemical libraries with up to one million compounds can be performed mostly interactively. This paper describes the functionality of MONA, its fundamental concepts, and a collection of application scenarios ranging from file conversion, compound library curation, and management to the post-processing of large-scale experiments." }, { "pmid": "23327565", "title": "Mining collections of compounds with Screening Assistant 2.", "abstract": "UNLABELLED\n\n\n\nBACKGROUND\nHigh-throughput screening assays have become the starting point of many drug discovery programs for large pharmaceutical companies as well as academic organisations. Despite the increasing throughput of screening technologies, the almost infinite chemical space remains out of reach, calling for tools dedicated to the analysis and selection of the compound collections intended to be screened.\n\n\nRESULTS\nWe present Screening Assistant 2 (SA2), an open-source JAVA software dedicated to the storage and analysis of small to very large chemical libraries. SA2 stores unique molecules in a MySQL database, and encapsulates several chemoinformatics methods, among which: providers management, interactive visualisation, scaffold analysis, diverse subset creation, descriptors calculation, sub-structure / SMART search, similarity search and filtering. We illustrate the use of SA2 by analysing the composition of a database of 15 million compounds collected from 73 providers, in terms of scaffolds, frameworks, and undesired properties as defined by recently proposed HTS SMARTS filters. We also show how the software can be used to create diverse libraries based on existing ones.\n\n\nCONCLUSIONS\nScreening Assistant 2 is a user-friendly, open-source software that can be used to manage collections of compounds and perform simple to advanced chemoinformatics analyses. Its modular design and growing documentation facilitate the addition of new functionalities, calling for contributions from the community. The software can be downloaded at http://sa2.sourceforge.net/." }, { "pmid": "25558886", "title": "DataWarrior: an open-source program for chemistry aware data visualization and analysis.", "abstract": "Drug discovery projects in the pharmaceutical industry accumulate thousands of chemical structures and ten-thousands of data points from a dozen or more biological and pharmacological assays. 
A sufficient interpretation of the data requires understanding, which molecular families are present, which structural motifs correlate with measured properties, and which tiny structural changes cause large property changes. Data visualization and analysis software with sufficient chemical intelligence to support chemists in this task is rare. In an attempt to contribute to filling the gap, we released our in-house developed chemistry aware data analysis program DataWarrior for free public use. This paper gives an overview of DataWarrior's functionality and architecture. Exemplarily, a new unsupervised, 2-dimensional scaling algorithm is presented, which employs vector-based or nonvector-based descriptors to visualize the chemical or pharmacophore space of even large data sets. DataWarrior uses this method to interactively explore chemical space, activity landscapes, and activity cliffs." }, { "pmid": "16180923", "title": "InfVis--platform-independent visual data mining of multidimensional chemical data sets.", "abstract": "The tremendous increase of chemical data sets, both in size and number, and the simultaneous desire to speed up the drug discovery process has resulted in an increasing need for a new generation of computational tools that assist in the extraction of information from data and allow for rapid and in-depth data mining. During recent years, visual data mining has become an important tool within the life sciences and drug discovery area with the potential to help avoiding data analysis from turning into a bottleneck. In this paper, we present InfVis, a platform-independent visual data mining tool for chemists, who usually only have little experience with classical data mining tools, for the visualization, exploration, and analysis of multivariate data sets. InfVis represents multidimensional data sets by using intuitive 3D glyph information visualization techniques. Interactive and dynamic tools such as dynamic query devices allow real-time, interactive data set manipulations and support the user in the identification of relationships and patterns. InfVis has been implemented in Java and Java3D and can be run on a broad range of platforms and operating systems. It can also be embedded as an applet in Web-based interfaces. We will present in this paper examples detailing the analysis of a reaction database that demonstrate how InfVis assists chemists in identifying and extracting hidden information." }, { "pmid": "16859312", "title": "Data visualization during the early stages of drug discovery.", "abstract": "Multidimensional compound optimization is a new paradigm in the drug discovery process, yielding efficiencies during early stages and reducing attrition in the later stages of drug development. The success of this strategy relies heavily on understanding this multidimensional data and extracting useful information from it. This paper demonstrates how principled visualization algorithms can be used to understand and explore a large data set created in the early stages of drug discovery. The experiments presented are performed on a real-world data set comprising biological activity data and some whole-molecular physicochemical properties. Data visualization is a popular way of presenting complex data in a simpler form. We have applied powerful principled visualization methods, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), to help the domain experts (screening scientists, chemists, biologists, etc.) understand and draw meaningful decisions. 
We also benchmark these principled methods against relatively better known visualization approaches, principal component analysis (PCA), Sammon's mapping, and self-organizing maps (SOMs), to demonstrate their enhanced power to help the user visualize the large multidimensional data sets one has to deal with during the early stages of the drug discovery process. The results reported clearly show that the GTM and HGTM algorithms allow the user to cluster active compounds for different targets and understand them better than the benchmarks. An interactive software tool supporting these visualization algorithms was provided to the domain experts. The tool facilitates the domain experts by exploration of the projection obtained from the visualization algorithms providing facilities such as parallel coordinate plots, magnification factors, directional curvatures, and integration with industry standard software." }, { "pmid": "27485979", "title": "Exploring Activity Cliffs from a Chemoinformatics Perspective.", "abstract": "The activity cliff concept experiences considerable interest in medicinal chemistry and chemoinformatics. Activity cliffs are defined as pairs or groups of structurally similar or analogous active compounds having large differences in potency. Depending on the research field, views of activity cliffs partly differ. While interpretability and utility of activity cliff information is considered to be of critical importance in medicinal chemistry, large-scale exploration and prediction of activity cliffs are of special interest in chemoinformatics. Much emphasis has recently been put on making activity cliff information accessible for medicinal chemistry applications. Herein, different approaches to the analysis and prediction of activity cliffs are discussed that are of particular relevance from a chemoinformatics viewpoint." }, { "pmid": "20426451", "title": "Extended-connectivity fingerprints.", "abstract": "Extended-connectivity fingerprints (ECFPs) are a novel class of topological fingerprints for molecular characterization. Historically, topological fingerprints were developed for substructure and similarity searching. ECFPs were developed specifically for structure-activity modeling. ECFPs are circular fingerprints with a number of useful qualities: they can be very rapidly calculated; they are not predefined and can represent an essentially infinite number of different molecular features (including stereochemical information); their features represent the presence of particular substructures, allowing easier interpretation of analysis results; and the ECFP algorithm can be tailored to generate different types of circular fingerprints, optimized for different uses. While the use of ECFPs has been widely adopted and validated, a description of their implementation has not previously been presented in the literature." }, { "pmid": "22537178", "title": "Charting, navigating, and populating natural product chemical space for drug discovery.", "abstract": "Natural products are a heterogeneous group of compounds with diverse, yet particular molecular properties compared to synthetic compounds and drugs. All relevant analyses show that natural products indeed occupy parts of chemical space not explored by available screening collections while at the same time largely adhering to the rule-of-five. This renders them a valuable, unique, and necessary component of screening libraries used in drug discovery. 
With ChemGPS-NP on the Web and Scaffold Hunter two tools are available to the scientific community to guide exploration of biologically relevant NP chemical space in a focused and targeted fashion with a view to guide novel synthesis approaches. Several of the examples given illustrate the possibility of bridging the gap between computational methods and compound library synthesis and the possibility of integrating cheminformatics and chemical space analyses with synthetic chemistry and biochemistry to successfully explore chemical space for the identification of novel small molecule modulators of protein function.The examples also illustrate the synergistic potential of the chemical space concept and modern chemical synthesis for biomedical research and drug discovery. Chemical space analysis can map under explored biologically relevant parts of chemical space and identify the structure types occupying these parts. Modern synthetic methodology can then be applied to efficiently fill this “virtual space” with real compounds.From a cheminformatics perspective, there is a clear demand for open-source and easy to use tools that can be readily applied by educated nonspecialist chemists and biologists in their daily research. This will include further development of Scaffold Hunter, ChemGPS-NP, and related approaches on the Web. Such a “cheminformatics toolbox” would enable chemists and biologists to mine their own data in an intuitive and highly interactive process and without the need for specialized computer science and cheminformatics expertise. We anticipate that it may be a viable, if not necessary, step for research initiatives based on large high-throughput screening campaigns,in particular in the pharmaceutical industry, to make the most out of the recent advances in computational tools in order to leverage and take full advantage of the large data sets generated and available in house. There are “holes” in these data sets that can and should be identified and explored by chemistry and biology." }, { "pmid": "19561619", "title": "Bioactivity-guided mapping and navigation of chemical space.", "abstract": "The structure- and chemistry-based hierarchical organization of library scaffolds in tree-like arrangements provides a valid, intuitive means to map and navigate chemical space. We demonstrate that scaffold trees built using bioactivity as the key selection criterion for structural simplification during tree construction allow efficient and intuitive mapping, visualization and navigation of the chemical space defined by a given library, which in turn allows correlation of this chemical space with the investigated bioactivity and further compound design. Brachiation along the branches of such trees from structurally complex to simple scaffolds with retained yet varying bioactivity is feasible at high frequency for the five major pharmaceutically relevant target classes and allows for the identification of new inhibitor types for a given target. We provide proof of principle by identifying new active scaffolds for 5-lipoxygenase and the estrogen receptor ERalpha." }, { "pmid": "20481515", "title": "Bioactivity-guided navigation of chemical space.", "abstract": "A central aim of biological research is to elucidate the many roles of proteins in complex, dynamic living systems; the selective perturbation of protein function is an important tool in achieving this goal. 
Because chemical perturbations offer opportunities often not accessible with genetic methods, the development of small-molecule modulators of protein function is at the heart of chemical biology research. In this endeavor, the identification of biologically relevant starting points within the vast chemical space available for the design of compound collections is a particularly relevant, yet difficult, task. In this Account, we present our research aimed at linking chemical and biological space to define suitable starting points that guide the synthesis of compound collections with biological relevance. Both protein folds and natural product (NP) scaffolds are highly conserved in nature. Whereas different amino acid sequences can make up ligand-binding sites in proteins with highly similar fold types, differently substituted NPs characterized by particular scaffold classes often display diverse biological activities. Therefore, we hypothesized that (i) ligand-binding sites with similar ligand-sensing cores embedded in their folds would bind NPs with similar scaffolds and (ii) selectivity is ensured by variation of both amino acid side chains and NP substituents. To investigate this notion in compound library design, we developed an approach termed biology-oriented synthesis (BIOS). BIOS employs chem- and bioinformatic methods for mapping biologically relevant chemical space and protein space to generate hypotheses for compound collection design and synthesis. BIOS also provides hypotheses for potential bioactivity of compound library members. On the one hand, protein structure similarity clustering (PSSC) is used to identify ligand binding sites with high subfold similarity, that is, high structural similarity in their ligand-sensing cores. On the other hand, structural classification by scaffold trees (for example, structural classification of natural products or SCONP), when combined with software tools like \"Scaffold Hunter\", enables the hierarchical structural classification of small-molecule collections in tree-like arrangements, their annotation with bioactivity data, and the intuitive navigation of chemical space. Brachiation (in a manner analogous to tree-swinging primates) within the scaffold trees serves to identify new starting points for the design and synthesis of small-molecule libraries, and PSSC may be used to select potential protein targets. The introduction of chemical diversity in compound collections designed according to the logic of BIOS is essential for the frequent identification of small molecules with diverse biological activities. The continuing development of synthetic methodology, both on solid phase and in solution, enables the generation of focused small-molecule collections with sufficient substituent, stereochemical, and scaffold diversity to yield comparatively high hit rates in biochemical and biological screens from relatively small libraries. BIOS has also allowed the identification of new ligand classes for several different proteins and chemical probes for the study of protein function in cells." }, { "pmid": "17125213", "title": "3D QSAR selectivity analyses of carbonic anhydrase inhibitors: insights for the design of isozyme selective inhibitors.", "abstract": "A 3D QSAR selectivity analysis of carbonic anhydrase (CA) inhibitors using a data set of 87 CA inhibitors is reported. 
After ligand minimization in the binding pockets of CA I, CA II, and CA IV isoforms, selectivity CoMFA and CoMSIA 3D QSAR models have been derived by taking the affinity differences (DeltapKi) with respect to two CA isozymes as independent variables. Evaluation of the developed 3D QSAR selectivity models allows us to determine amino acids in the respective CA isozymes that possibly play a crucial role for selective inhibition of these isozymes. We further combined the ligand-based 3D QSAR models with the docking program AUTODOCK in order to screen for novel CA inhibitors. Correct binding modes are predicted for various CA inhibitors with respect to known crystal structures. Furthermore, in combination with the developed 3D QSAR models we could successfully estimate the affinity of CA inhibitors even in cases where the applied scoring function failed. This novel strategy to combine AUTODOCK poses with CoMFA/CoMSIA 3D QSAR models can be used as a guideline to assess the relevance of generated binding modes and to accurately predict the binding affinity of newly designed CA inhibitors that could play a crucial role in the treatment of pathologies such as tumors, obesity, or glaucoma." }, { "pmid": "21615076", "title": "Mining for bioactive scaffolds with scaffold networks: improved compound set enrichment from primary screening data.", "abstract": "Identification of meaningful chemical patterns in the increasing amounts of high-throughput-generated bioactivity data available today is an increasingly important challenge for successful drug discovery. Herein, we present the scaffold network as a novel approach for mapping and navigation of chemical and biological space. A scaffold network represents the chemical space of a library of molecules consisting of all molecular scaffolds and smaller \"parent\" scaffolds generated therefrom by the pruning of rings, effectively leading to a network of common scaffold substructure relationships. This algorithm provides an extension of the scaffold tree algorithm that, instead of a network, generates a tree relationship between a heuristically rule-based selected subset of parent scaffolds. The approach was evaluated for the identification of statistically significantly active scaffolds from primary screening data for which the scaffold tree approach has already been shown to be successful. Because of the exhaustive enumeration of smaller scaffolds and the full enumeration of relationships between them, about twice as many statistically significantly active scaffolds were identified compared to the scaffold-tree-based approach. We suggest visualizing scaffold networks as islands of active scaffolds." } ]
Frontiers in Psychology
28588533
PMC5439009
10.3389/fpsyg.2017.00824
A Probabilistic Model of Meter Perception: Simulating Enculturation
Enculturation is known to shape the perception of meter in music but this is not explicitly accounted for by current cognitive models of meter perception. We hypothesize that the induction of meter is a result of predictive coding: interpreting onsets in a rhythm relative to a periodic meter facilitates prediction of future onsets. Such prediction, we hypothesize, is based on previous exposure to rhythms. As such, predictive coding provides a possible explanation for the way meter perception is shaped by the cultural environment. Based on this hypothesis, we present a probabilistic model of meter perception that uses statistical properties of the relation between rhythm and meter to infer meter from quantized rhythms. We show that our model can successfully predict annotated time signatures from quantized rhythmic patterns derived from folk melodies. Furthermore, we show that by inferring meter, our model improves prediction of the onsets of future events compared to a similar probabilistic model that does not infer meter. Finally, as a proof of concept, we demonstrate how our model can be used in a simulation of enculturation. From the results of this simulation, we derive a class of rhythms that are likely to be interpreted differently by enculturated listeners with different histories of exposure to rhythms.
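As a hedged illustration of the inference described above, here is a minimal Python sketch, not the authors' implementation, that infers a time signature from a quantized rhythm with Bayes' rule, assuming per-position onset probabilities that in practice would be estimated from a training corpus; the grid size, probabilities, priors, and rhythm are made-up placeholders.

```python
# Minimal sketch of Bayesian meter inference from a quantized rhythm:
# posterior of each candidate meter ∝ prior × per-position Bernoulli onset likelihood.
import numpy as np

GRID = 12  # grid positions per bar (assumption)

# Made-up onset probabilities per grid position for two candidate meters.
onset_prob = {
    "3/4": np.array([.9, .1, .2, .2, .7, .1, .2, .2, .7, .1, .2, .2]),
    "6/8": np.array([.9, .1, .4, .1, .4, .1, .8, .1, .4, .1, .4, .1]),
}
prior = {"3/4": 0.5, "6/8": 0.5}

def log_posterior(onsets, meter):
    """Unnormalized log posterior of `meter` given a binary onset array."""
    p = onset_prob[meter][np.arange(len(onsets)) % GRID]
    loglik = np.sum(np.where(onsets == 1, np.log(p), np.log(1.0 - p)))
    return np.log(prior[meter]) + loglik

# Two bars of an invented quantized rhythm (1 = onset, 0 = rest).
rhythm = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0,
                   1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0])
scores = {m: log_posterior(rhythm, m) for m in onset_prob}
print(max(scores, key=scores.get), scores)
```

Under the winning meter, the same per-position probabilities also serve as predictions of where future onsets are expected, which is the sense in which inferring meter can support onset prediction.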
1.2. Related work
Our approach in some respects resembles other recent probabilistic models, in particular a generative model presented by Temperley (2007). Temperley (2007, ch. 2) models meter perception as probabilistic inference on a generative model whose parameters are estimated from a training corpus. Meter is represented as a multi-leveled hierarchical framework, which the model generates level by level. The probability of onsets depends only on the metrical status of the corresponding onset time. Temperley (2009) generalizes this model to polyphonic musical structure and introduces a metrical model that conditions onset probability on whether onsets occur on surrounding metrically stronger beats. This approach introduces some sensitivity to rhythmic context into the model. In later work, Temperley (2010) evaluates this model, the hierarchical position model, and compares its performance to other metrical models of varying complexity. One model, called the first-order metrical position model, was found to perform slightly better than the hierarchical position model, but this increase in performance comes at the cost of a larger number of parameters. Temperley concludes that the hierarchical position model provides the best trade-off between model complexity and performance.
In a different approach, Holzapfel (2015) employs Bayesian model selection to investigate the relation between usul (a type of rhythmic mode, similar in some ways to meter) and rhythm in Turkish makam music. The representation of metrical structure does not assume hierarchical organization, allowing arbitrary onset distributions to be learned. Like the models compared by Temperley (2010), this model is not presented explicitly as a meter-finding model, but is used to investigate the statistical properties of a corpus of rhythms.
The approach presented here diverges from these models in that it employs a general-purpose probabilistic model of sequential temporal expectation based on statistical learning (Pearce, 2005), combined with an integrated process of metrical inference such that expectations are generated given an inferred meter. The sequential model is a variable-order metrical position model. Taking the preceding context into account widens the range of statistical properties of rhythmic organization that can be learned by the model. In particular, the model is capable of representing not only the frequency of onsets at various metrical positions, but also the probability of onsets at metrical positions conditioned on the preceding rhythmic sequence. The vastly increased number of parameters of this model introduces a risk of over-fitting: models with many parameters may start to fit noise in their training data, which harms generalization performance. However, we employ sophisticated smoothing techniques that avoid over-fitting (Pearce and Wiggins, 2004). Furthermore, we safeguard against over-fitting to some extent by evaluating our model using cross-validation.
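As an informal illustration of what conditioning onset probability on the preceding rhythmic sequence can look like, the sketch below implements a toy variable-order metrical position model: onset counts are kept per (context, metrical position) pair and interpolated with a position-only estimate as a crude form of smoothing. This is a simplified stand-in under stated assumptions, not the IDyOM model or its PPM smoothing; the grid size, context length, interpolation weight, and training rhythms are invented.

```python
# Toy variable-order metrical position model (illustrative only).
from collections import defaultdict
import math

GRID = 12          # positions per bar (assumption)
K = 3              # length of the preceding onset/rest context (assumption)
LAMBDA = 0.7       # interpolation weight for the context model (assumption)

ctx_counts = defaultdict(lambda: [1, 1])   # (context, position) -> [onset, rest], add-one smoothed
pos_counts = defaultdict(lambda: [1, 1])   # position -> [onset, rest], add-one smoothed

def train(corpus):
    """corpus: list of binary onset lists, each aligned to the GRID-position metrical grid."""
    for rhythm in corpus:
        for i, x in enumerate(rhythm):
            pos = i % GRID
            ctx = tuple(rhythm[max(0, i - K):i])
            ctx_counts[(ctx, pos)][0 if x else 1] += 1
            pos_counts[pos][0 if x else 1] += 1

def p_onset(context, pos):
    """Interpolate the context-conditioned estimate with the position-only (zeroth-order) estimate."""
    on, off = ctx_counts[(tuple(context[-K:]), pos)]
    p_ctx = on / (on + off)
    on0, off0 = pos_counts[pos]
    p_pos = on0 / (on0 + off0)
    return LAMBDA * p_ctx + (1 - LAMBDA) * p_pos

def information_content(rhythm):
    """Mean surprisal (bits per event) of a rhythm under the trained model."""
    h = 0.0
    for i, x in enumerate(rhythm):
        p = p_onset(rhythm[max(0, i - K):i], i % GRID)
        h += -math.log2(p if x else 1.0 - p)
    return h / len(rhythm)

corpus = [[1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0],   # invented training rhythms
          [1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0]]
train(corpus)
print(information_content([1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]))
```

Setting the interpolation weight to zero reduces this sketch to a position-only model, which is one way to see what taking the preceding context into account adds.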
[ "23663408", "21553992", "22659582", "23605956", "15937014", "26594881", "15462633", "22352419", "15660851", "16105946", "25295018", "27383617", "7155765", "8637596", "22414591", "23707539", "2148588", "21180358", "22847872", "10195184", "26124105", "16495999", "25324813", "24740381" ]
[ { "pmid": "23663408", "title": "Whatever next? Predictive brains, situated agents, and the future of cognitive science.", "abstract": "Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this \"hierarchical prediction machine\" approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency." }, { "pmid": "21553992", "title": "Specific previous experience affects perception of harmony and meter.", "abstract": "Prior knowledge shapes our experiences, but which prior knowledge shapes which experiences? This question is addressed in the domain of music perception. Three experiments were used to determine whether listeners activate specific musical memories during music listening. Each experiment provided listeners with one of two musical contexts that was presented simultaneously with a melody. After a listener was familiarized with melodies embedded in contexts, the listener heard melodies in isolation and judged the fit of a final harmonic or metrical probe event. The probe event matched either the familiar (but absent) context or an unfamiliar context. For both harmonic (Experiments 1 and 3) and metrical (Experiment 2) information, exposure to context shifted listeners' preferences toward a probe matching the context that they had been familiarized with. This suggests that listeners rapidly form specific musical memories without explicit instruction, which are then activated during music listening. These data pose an interesting challenge for models of music perception which implicitly assume that the listener's knowledge base is predominantly schematic or abstract." }, { "pmid": "22659582", "title": "Similarity-based restoration of metrical information: different listening experiences result in different perceptual inferences.", "abstract": "How do perceivers apply knowledge to instances they have never experienced before? On one hand, listeners might use idealized representations that do not contain specific details. On the other, they might recognize and process information based on more detailed memory representations. The current study examined the latter possibility with respect to musical meter perception, previously thought to be computed based on highly-idealized (isochronous) internal representations. In six experiments, listeners heard sets of metrically-ambiguous melodies. Each melody was played in a simultaneous musical context with unambiguous metrical cues (3/4 or 6/8). 
Cross-melody similarity was manipulated by pairing certain cues-timbre (musical instrument) and motif content (2-6-note patterns)-with each meter, or distributing cues across meters. After multiple exposures, listeners heard each melody without context, and judged metrical continuations (all Experiments) or familiarity (Experiments 5-6). Responses were assessed for \"metrical restoration\"-the tendency to make metrical judgments that fit the melody's previously-heard metrical context. Cross-melody similarity affected the presence and degree of metrical restoration, and timbre affected familiarity. Results suggest that metrical processing may be calculated based on fairly detailed representations rather than idealized isochronous pulses, and is dissociated somewhat from familiarity judgments. Implications for theories of meter perception are discussed." }, { "pmid": "23605956", "title": "Probabilistic models of expectation violation predict psychophysiological emotional responses to live concert music.", "abstract": "We present the results of a study testing the often-theorized role of musical expectations in inducing listeners' emotions in a live flute concert experiment with 50 participants. Using an audience response system developed for this purpose, we measured subjective experience and peripheral psychophysiological changes continuously. To confirm the existence of the link between expectation and emotion, we used a threefold approach. (1) On the basis of an information-theoretic cognitive model, melodic pitch expectations were predicted by analyzing the musical stimuli used (six pieces of solo flute music). (2) A continuous rating scale was used by half of the audience to measure their experience of unexpectedness toward the music heard. (3) Emotional reactions were measured using a multicomponent approach: subjective feeling (valence and arousal rated continuously by the other half of the audience members), expressive behavior (facial EMG), and peripheral arousal (the latter two being measured in all 50 participants). Results confirmed the predicted relationship between high-information-content musical events, the violation of musical expectations (in corresponding ratings), and emotional reactions (psychologically and physiologically). Musical structures leading to expectation reactions were manifested in emotional reactions at different emotion component levels (increases in subjective arousal and autonomic nervous system activations). These results emphasize the role of musical structure in emotion induction, leading to a further understanding of the frequently experienced emotional effects of music." }, { "pmid": "15937014", "title": "A theory of cortical responses.", "abstract": "This article concerns the nature of evoked brain responses and the principles underlying their generation. We start with the premise that the sensory brain has evolved to represent or infer the causes of changes in its sensory inputs. The problem of inference is well formulated in statistical terms. The statistical fundaments of inference may therefore afford important constraints on neuronal implementation. 
By formulating the original ideas of Helmholtz on perception, in terms of modern-day statistical theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts.It turns out that the problems of inferring the causes of sensory input (perceptual inference) and learning the relationship between input and cause (perceptual learning) can be resolved using exactly the same principle. Specifically, both inference and learning rest on minimizing the brain's free energy, as defined in statistical physics. Furthermore, inference and learning can proceed in a biologically plausible fashion. Cortical responses can be seen as the brain's attempt to minimize the free energy induced by a stimulus and thereby encode the most likely cause of that stimulus. Similarly, learning emerges from changes in synaptic efficacy that minimize the free energy, averaged over all stimuli encountered. The underlying scheme rests on empirical Bayes and hierarchical models of how sensory input is caused. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of cortical organization and responses. The aim of this article is to encompass many apparently unrelated anatomical, physiological and psychophysical attributes of the brain within a single theoretical perspective. In terms of cortical architectures, the theoretical treatment predicts that sensory cortex should be arranged hierarchically, that connections should be reciprocal and that forward and backward connections should show a functional asymmetry (forward connections are driving, whereas backward connections are both driving and modulatory). In terms of synaptic physiology, it predicts associative plasticity and, for dynamic models, spike-timing-dependent plasticity. In terms of electrophysiology, it accounts for classical and extra classical receptive field effects and long-latency or endogenous components of evoked cortical responses. It predicts the attenuation of responses encoding prediction error with perceptual learning and explains many phenomena such as repetition suppression, mismatch negativity (MMN) and the P300 in electroencephalography. In psychophysical terms, it accounts for the behavioural correlates of these physiological phenomena, for example, priming and global precedence. The final focus of this article is on perceptual learning as measured with the MMN and the implications for empirical studies of coupling among cortical areas using evoked sensory responses." }, { "pmid": "26594881", "title": "Linking melodic expectation to expressive performance timing and perceived musical tension.", "abstract": "This research explored the relations between the predictability of musical structure, expressive timing in performance, and listeners' perceived musical tension. Studies analyzing the influence of expressive timing on listeners' affective responses have been constrained by the fact that, in most pieces, the notated durations limit performers' interpretive freedom. To circumvent this issue, we focused on the unmeasured prelude, a semi-improvisatory genre without notated durations. In Experiment 1, 12 professional harpsichordists recorded an unmeasured prelude on a harpsichord equipped with a MIDI console. 
Melodic expectation was assessed using a probabilistic model (IDyOM [Information Dynamics of Music]) whose expectations have been previously shown to match closely those of human listeners. Performance timing information was extracted from the MIDI data using a score-performance matching algorithm. Time-series analyses showed that, in a piece with unspecified note durations, the predictability of melodic structure measurably influenced tempo fluctuations in performance. In Experiment 2, another 10 harpsichordists, 20 nonharpsichordist musicians, and 20 nonmusicians listened to the recordings from Experiment 1 and rated the perceived tension continuously. Granger causality analyses were conducted to investigate predictive relations among melodic expectation, expressive timing, and perceived tension. Although melodic expectation, as modeled by IDyOM, modestly predicted perceived tension for all participant groups, neither of its components, information content or entropy, was Granger causal. In contrast, expressive timing was a strong predictor and was Granger causal. However, because melodic expectation was also predictive of expressive timing, our results outline a complete chain of influence from predictability of melodic structure via expressive performance timing to perceived musical tension. (PsycINFO Database Record" }, { "pmid": "15462633", "title": "The role of melodic and temporal cues in perceiving musical meter.", "abstract": "A number of different cues allow listeners to perceive musical meter. Three experiments examined effects of melodic and temporal accents on perceived meter in excerpts from folk songs scored in 6/8 or 3/4 meter. Participants matched excerpts with 1 of 2 metrical drum accompaniments. Melodic accents included contour change, melodic leaps, registral extreme, melodic repetition, and harmonic rhythm. Two experiments with isochronous melodies showed that contour change and melodic repetition predicted judgments. For longer melodies in the 2nd experiment, variables predicted judgments best at the beginning of excerpts. The final experiment, with rhythmically varied melodies, showed that temporal accents, tempo, and contour change were the strongest predictors of meter. The authors' findings suggest that listeners combine multiple melodic and temporal features to perceive musical meter." }, { "pmid": "22352419", "title": "Familiarity overrides complexity in rhythm perception: a cross-cultural comparison of American and Turkish listeners.", "abstract": "Despite the ubiquity of dancing and synchronized movement to music, relatively few studies have examined cognitive representations of musical rhythm and meter among listeners from contrasting cultures. We aimed to disentangle the contributions of culture-general and culture-specific influences by examining American and Turkish listeners' detection of temporal disruptions (varying in size from 50-250 ms in duration) to three types of stimuli: simple rhythms found in both American and Turkish music, complex rhythms found only in Turkish music, and highly complex rhythms that are rare in all cultures. Americans were most accurate when detecting disruptions to the simple rhythm. However, they performed less accurately but comparably in both the complex and highly complex conditions. By contrast, Turkish participants performed accurately and indistinguishably in both simple and complex conditions. However, they performed less accurately in the unfamiliar, highly complex condition. 
Together, these experiments implicate a crucial role of culture-specific listening experience and acquired musical knowledge in rhythmic pattern perception." }, { "pmid": "15660851", "title": "Metrical categories in infancy and adulthood.", "abstract": "Intrinsic perceptual biases for simple duration ratios are thought to constrain the organization of rhythmic patterns in music. We tested that hypothesis by exposing listeners to folk melodies differing in metrical structure (simple or complex duration ratios), then testing them on alterations that preserved or violated the original metrical structure. Simple meters predominate in North American music, but complex meters are common in many other musical cultures. In Experiment 1, North American adults rated structure-violating alterations as less similar to the original version than structure-preserving alterations for simple-meter patterns but not for complex-meter patterns. In Experiment 2, adults of Bulgarian or Macedonian origin provided differential ratings to structure-violating and structure-preserving alterations in complex- as well as simple-meter contexts. In Experiment 3, 6-month-old infants responded differentially to structure-violating and structure-preserving alterations in both metrical contexts. These findings imply that the metrical biases of North American adults reflect enculturation processes rather than processing predispositions for simple meters." }, { "pmid": "16105946", "title": "Tuning in to musical rhythms: infants learn more readily than adults.", "abstract": "Domain-general tuning processes may guide the acquisition of perceptual knowledge in infancy. Here, we demonstrate that 12-month-old infants show an adult-like, culture-specific pattern of responding to musical rhythms, in contrast to the culture-general responding that is evident at 6 months of age. Nevertheless, brief exposure to foreign music enables 12-month-olds, but not adults, to perceive rhythmic distinctions in foreign musical contexts. These findings may indicate a sensitive period early in life for acquiring rhythm in particular or socially and biologically important structures more generally." }, { "pmid": "25295018", "title": "Predictive uncertainty in auditory sequence processing.", "abstract": "Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). 
Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music." }, { "pmid": "27383617", "title": "Rhythm histograms and musical meter: A corpus study of Malian percussion music.", "abstract": "Studies of musical corpora have given empirical grounding to the various features that characterize particular musical styles and genres. Palmer & Krumhansl (1990) found that in Western classical music the likeliest places for a note to occur are the most strongly accented beats in a measure, and this was also found in subsequent studies using both Western classical and folk music corpora (Huron & Ommen, 2006; Temperley, 2010). We present a rhythmic analysis of a corpus of 15 performances of percussion music from Bamako, Mali. In our corpus, the relative frequency of note onsets in a given metrical position does not correspond to patterns of metrical accent, though there is a stable relationship between onset frequency and metrical position. The implications of this non-congruence between simple statistical likelihood and metrical structure for the ways in which meter and metrical accent may be learned and understood are discussed, along with importance of cross-cultural studies for psychological research." }, { "pmid": "8637596", "title": "Emergence of simple-cell receptive field properties by learning a sparse code for natural images.", "abstract": "The receptive fields of simple cells in mammalian primary visual cortex can be characterized as being spatially localized, oriented and bandpass (selective to structure at different spatial scales), comparable to the basis functions of wavelet transforms. One approach to understanding such response properties of visual neurons has been to consider their relationship to the statistical structure of natural images in terms of efficient coding. Along these lines, a number of studies have attempted to train unsupervised learning algorithms on natural images in the hope of developing receptive fields with similar properties, but none has succeeded in producing a full set that spans the image space and contains all three of the above properties. Here we investigate the proposal that a coding strategy that maximizes sparseness is sufficient to account for these properties. We show that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex. The resulting sparse image code provides a more efficient representation for later stages of processing because it possesses a higher degree of statistical independence among its outputs." }, { "pmid": "22414591", "title": "Tracking of pitch probabilities in congenital amusia.", "abstract": "Auditory perception involves not only hearing a series of sounds but also making predictions about future ones. 
For typical listeners, these predictions are formed on the basis of long-term schematic knowledge, gained over a lifetime of exposure to the auditory environment. Individuals with a developmental disorder known as congenital amusia show marked difficulties with music perception and production. The current study investigated whether these difficulties can be explained, either by a failure to internalise the statistical regularities present in music, or by a failure to consciously access this information. Two versions of a melodic priming paradigm were used to probe participants' abilities to form melodic pitch expectations, in an implicit and an explicit manner. In the implicit version (Experiment 1), participants made speeded, forced-choice discriminations concerning the timbre of a cued target note. In the explicit version (Experiment 2), participants used a 1-7 rating scale to indicate the degree to which the pitch of the cued target note was expected or unexpected. Target notes were chosen to have high or low probability in the context of the melody, based on the predictions of a computational model of melodic expectation. Analysis of the data from the implicit task revealed a melodic priming effect in both amusic and control participants whereby both groups showed faster responses to high probability than low probability notes rendered in the same timbre as the context. However, analysis of the data from the explicit task revealed that amusic participants were significantly worse than controls at using explicit ratings to differentiate between high and low probability events in a melodic context. Taken together, findings from the current study make an important contribution in demonstrating that amusic individuals track melodic pitch probabilities at an implicit level despite an impairment, relative to controls, when required to make explicit judgments in this regard. However the unexpected finding that amusics nevertheless are able to use explicit ratings to distinguish between high and low probability notes (albeit not as well as controls) makes a similarly important contribution in revealing a sensitivity to musical structure that has not previously been demonstrated in these individuals." }, { "pmid": "23707539", "title": "Electrophysiological correlates of melodic processing in congenital amusia.", "abstract": "Music listening involves using previously internalized regularities to process incoming musical structures. A condition known as congenital amusia is characterized by musical difficulties, notably in the detection of gross musical violations. However, there has been increasing evidence that individuals with the disorder show preserved musical ability when probed using implicit methods. To further characterize the degree to which amusic individuals show evidence of latent sensitivity to musical structure, particularly in the context of stimuli that are ecologically valid, electrophysiological recordings were taken from a sample of amusic and control participants as they listened to real melodies. To encourage them to pay attention to the music, participants were asked to detect occasional notes in a different timbre. 
Using a computational model of auditory expectation to identify points of varying levels of expectedness in these melodies (in units of information content (IC), a measure which has an inverse relationship with probability), ERP analysis investigated the extent to which the amusic brain differs from that of controls when processing notes of high IC (low probability) as compared to low IC ones (high probability). The data revealed a novel effect that was highly comparable in both groups: Notes with high IC reliably elicited a delayed P2 component relative to notes with low IC, suggesting that amusic individuals, like controls, found these notes more difficult to evaluate. However, notes with high IC were also characterized by an early frontal negativity in controls that was attenuated in amusic individuals. A correlation of this early negative effect with the ability to make accurate note expectedness judgments (previous data collected from a subset of the current sample) was shown to be present in typical individuals but compromised in individuals with amusia: a finding in line with evidence of a close relationship between the amplitude of such a response and explicit knowledge of musical deviance." }, { "pmid": "2148588", "title": "Mental representations for musical meter.", "abstract": "Investigations of the psychological representation for musical meter provided evidence for an internalized hierarchy from 3 sources: frequency distributions in musical compositions, goodness-of-fit judgments of temporal patterns in metrical contexts, and memory confusions in discrimination judgments. The frequency with which musical events occurred in different temporal locations differentiates one meter from another and coincides with music-theoretic predictions of accent placement. Goodness-of-fit judgments for events presented in metrical contexts indicated a multileveled hierarchy of relative accent strength, with finer differentiation among hierarchical levels by musically experienced than inexperienced listeners. Memory confusions of temporal patterns in a discrimination task were characterized by the same hierarchy of inferred accent strength. These findings suggest mental representations for structural regularities underlying musical meter that influence perceiving, remembering, and composing music." }, { "pmid": "21180358", "title": "The role of expectation and probabilistic learning in auditory boundary perception: a model comparison.", "abstract": "Grouping and boundary perception are central to many aspects of sensory processing in cognition. We present a comparative study of recently published computational models of boundary perception in music. In doing so, we make three contributions. First, we hypothesise a relationship between expectation and grouping in auditory perception, and introduce a novel information-theoretic model of perceptual segmentation to test the hypothesis. Although we apply the model to musical melody, it is applicable in principle to sequential grouping in other areas of cognition. Second, we address a methodological consideration in the analysis of ambiguous stimuli that produce different percepts between individuals. We propose and demonstrate a solution to this problem, based on clustering of participants prior to analysis. Third, we conduct the first comparative analysis of probabilistic-learning and rule-based models of perceptual grouping in music. 
In spite of having only unsupervised exposure to music, the model performs comparably to rule-based models based on expert musical knowledge, supporting a role for probabilistic learning in perceptual segmentation of music." }, { "pmid": "22847872", "title": "Auditory expectation: the information dynamics of music perception and cognition.", "abstract": "Following in a psychological and musicological tradition beginning with Leonard Meyer, and continuing through David Huron, we present a functional, cognitive account of the phenomenon of expectation in music, grounded in computational, probabilistic modeling. We summarize a range of evidence for this approach, from psychology, neuroscience, musicology, linguistics, and creativity studies, and argue that simulating expectation is an important part of understanding a broad range of human faculties, in music and beyond." }, { "pmid": "10195184", "title": "Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects.", "abstract": "We describe a model of visual processing in which feedback connections from a higher- to a lower-order visual cortical area carry predictions of lower-level neural activities, whereas the feedforward connections carry the residual errors between the predictions and the actual lower-level activities. When exposed to natural images, a hierarchical network of model neurons implementing such a model developed simple-cell-like receptive fields. A subset of neurons responsible for carrying the residual errors showed endstopping and other extra-classical receptive-field effects. These results suggest that rather than being exclusively feedforward phenomena, nonclassical surround effects in the visual cortex may also result from cortico-cortical feedback as a consequence of the visual system using an efficient hierarchical strategy for encoding natural images." }, { "pmid": "26124105", "title": "Statistical universals reveal the structures and functions of human music.", "abstract": "Music has been called \"the universal language of mankind.\" Although contemporary theories of music evolution often invoke various musical universals, the existence of such universals has been disputed for decades and has never been empirically demonstrated. Here we combine a music-classification scheme with statistical analyses, including phylogenetic comparative methods, to examine a well-sampled global set of 304 music recordings. Our analyses reveal no absolute universals but strong support for many statistical universals that are consistent across all nine geographic regions sampled. These universals include 18 musical features that are common individually as well as a network of 10 features that are commonly associated with one another. They span not only features related to pitch and rhythm that are often cited as putative universals but also rarely cited domains including performance style and social context. These cross-cultural structural regularities of human music may relate to roles in facilitating group coordination and cohesion, as exemplified by the universal tendency to sing, play percussion instruments, and dance to simple, repetitive music in groups. Our findings highlight the need for scientists studying music evolution to expand the range of musical cultures and musical features under consideration. The statistical universals we identified represent important candidates for future investigation." 
}, { "pmid": "16495999", "title": "Efficient auditory coding.", "abstract": "The auditory neural code must serve a wide range of auditory tasks that require great sensitivity in time and frequency and be effective over the diverse array of sounds present in natural acoustic environments. It has been suggested that sensory systems might have evolved highly efficient coding strategies to maximize the information conveyed to the brain while minimizing the required energy and neural resources. Here we show that, for natural sounds, the complete acoustic waveform can be represented efficiently with a nonlinear model based on a population spike code. In this model, idealized spikes encode the precise temporal positions and magnitudes of underlying acoustic features. We find that when the features are optimized for coding either natural sounds or speech, they show striking similarities to time-domain cochlear filter estimates, have a frequency-bandwidth dependence similar to that of auditory nerve fibres, and yield significantly greater coding efficiency than conventional signal representations. These results indicate that the auditory code might approach an information theoretic optimum and that the acoustic structure of speech might be adapted to the coding capacity of the mammalian auditory system." }, { "pmid": "25324813", "title": "Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music.", "abstract": "Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning is manifested through the brain's Bayesian minimization of the error between the input to the brain and the brain's prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (\"rhythm\") and the brain's anticipatory structuring of music (\"meter\"). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain's general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms." }, { "pmid": "24740381", "title": "Syncopation, body-movement and pleasure in groove music.", "abstract": "Moving to music is an essential human pleasure particularly related to musical groove. Structurally, music associated with groove is often characterised by rhythmic complexity in the form of syncopation, frequently observed in musical styles such as funk, hip-hop and electronic dance music. 
Structural complexity has been related to positive affect in music more broadly, but the function of syncopation in eliciting pleasure and body-movement in groove is unknown. Here we report results from a web-based survey which investigated the relationship between syncopation and ratings of wanting to move and experienced pleasure. Participants heard funk drum-breaks with varying degrees of syncopation and audio entropy, and rated the extent to which the drum-breaks made them want to move and how much pleasure they experienced. While entropy was found to be a poor predictor of wanting to move and pleasure, the results showed that medium degrees of syncopation elicited the most desire to move and the most pleasure, particularly for participants who enjoy dancing to music. Hence, there is an inverted U-shaped relationship between syncopation, body-movement and pleasure, and syncopation seems to be an important structural factor in embodied and affective responses to groove." } ]
JMIR Medical Informatics
28487265
PMC5442348
10.2196/medinform.7235
Effective Information Extraction Framework for Heterogeneous Clinical Reports Using Online Machine Learning and Controlled Vocabularies
Background: Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to take user feedback for improving the extraction algorithm in real time. Objective: Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. Methods: A clinical information extraction system, IDEAL-X, has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Results: Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports, each combining a history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. Conclusions: IDEAL-X adopts a unique online machine learning–based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve; thus, it is highly adaptable.
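The interactive-then-batch workflow described in the abstract (review documents one at a time until prediction accuracy reaches a user-acceptable threshold, then batch-process the remainder) can be illustrated with a short sketch. This is only an illustrative sketch under assumed interfaces, not the IDEAL-X implementation: the `predict` and `review` callables, the rolling-window accuracy check, and the threshold value are hypothetical choices introduced here.

```python
from collections import deque

def interactive_then_batch(docs, predict, review, accept_threshold=0.95, window=20):
    """Review documents one at a time; once the rolling agreement between the
    model's predictions and the reviewer's corrections reaches the threshold,
    trust the model and batch-process the remaining documents."""
    recent = deque(maxlen=window)   # outcomes of the most recent reviews
    results = []
    for i, doc in enumerate(docs):
        guess = predict(doc)
        truth = review(doc, guess)              # human-corrected value
        recent.append(guess == truth)
        results.append(truth)
        if len(recent) == window and sum(recent) / window >= accept_threshold:
            results.extend(predict(d) for d in docs[i + 1:])   # switch to batch mode
            break
    return results

# Toy usage with stand-in callables (a real system would call the learned model
# and present each prediction to a human reviewer).
docs = ["report 1", "report 2", "report 3", "report 4"]
out = interactive_then_batch(docs, predict=lambda d: "normal",
                             review=lambda d, g: "normal",
                             accept_threshold=0.5, window=2)
```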
Related Work
A number of research efforts have been made in different fields of medical information extraction. Successful systems include caTIES [5], MedEx [6], MedLEE [7], cTAKES [8], MetaMap [9], HITEx [10], and so on. These methods take either a rule-based approach, a traditional machine learning–based approach, or a combination of both.
Different online learning algorithms have been studied and developed for classification tasks [11], but their direct application to information extraction has not been studied. Especially in the clinical environment, the effectiveness of these algorithms is yet to be examined. Several pioneering projects have used learning processes that involve user interaction and certain elements of IDEAL-X. I2E2 is an early rule-based interactive information extraction system [12]. It is limited by its restriction to a predefined feature set. Amilcare [13,14] is adaptable to different domains. Each domain requires an initial training that can be retrained on the basis of the user's revisions. Its algorithm (LP)2 is able to generalize and induce symbolic rules. RapTAT [15] is most similar to IDEAL-X in its goals. It preannotates text interactively to accelerate the annotation process. It uses a multinomial naïve Bayesian algorithm for classification but does not appear to use contextual information beyond previously found values in its search process. This may limit its ability to extract certain value types.
Different from online machine learning but related is active learning [16,17]; it assumes the ability to retrieve labels for the most informative data points while involving the users in the annotation process. DUALIST [18] allows users to select system-populated rules for feature annotation to support information extraction. Other example applications in health care informatics include word sense disambiguation [19] and phenotyping [20]. Active learning usually requires comprehending the entire corpus in order to pick the most useful data point. However, in a clinical environment, data arrive in a streaming fashion over time, which limits our ability to choose data points. Hence, an online learning approach is more suitable.
IDEAL-X adopts the Hidden Markov Model for its compatibility with online learning, and for its efficiency and scalability. We will also describe a broader set of contextual information used by the learning algorithm to facilitate extraction of values of all types.
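To make the document-at-a-time online-learning loop described in this section more concrete, the following is a minimal sketch of an extractor whose internal statistics are updated immediately from each user correction, optionally constrained by a controlled vocabulary. It is a toy frequency-count model written for illustration only; IDEAL-X itself is reported to use a Hidden Markov Model, and the class, field, and helper names here are hypothetical.

```python
from collections import defaultdict

class OnlineExtractor:
    """Toy document-at-a-time extractor: it predicts a field value from crude
    context features and is updated immediately from user corrections.
    (Illustrative only; not the HMM used by IDEAL-X.)"""

    def __init__(self, vocabulary=None):
        # counts[field][token][value] -> how often `value` co-occurred with `token`
        self.counts = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
        self.vocabulary = vocabulary or {}   # optional controlled vocabulary per field

    def _features(self, doc_text, field):
        # crude context features: tokens of the first line mentioning the field name
        for line in doc_text.splitlines():
            if field.lower() in line.lower():
                return line.lower().split()
        return []

    def predict(self, doc_text, field):
        scores = defaultdict(int)
        for tok in self._features(doc_text, field):
            for value, c in self.counts[field][tok].items():
                scores[value] += c
        allowed = self.vocabulary.get(field)          # restrict to controlled vocabulary
        candidates = {v: s for v, s in scores.items() if not allowed or v in allowed}
        return max(candidates, key=candidates.get) if candidates else None

    def update(self, doc_text, field, corrected_value):
        # the user's correction becomes the training signal for later documents
        for tok in self._features(doc_text, field):
            self.counts[field][tok][corrected_value] += 1


# Document-at-a-time loop: predict, let the user correct, learn from the correction.
extractor = OnlineExtractor(vocabulary={"ejection fraction": {"normal", "reduced"}})
for report in ["Echo today shows a normal ejection fraction.",
               "Severely reduced ejection fraction on echo."]:
    guess = extractor.predict(report, "ejection fraction")
    corrected = "reduced" if "reduced" in report else "normal"   # stand-in for user review
    extractor.update(report, "ejection fraction", corrected)
```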
[ "20442142", "20064797", "15187068", "20819853", "11825149", "16872495", "24431336", "23364851", "23851443", "23665099", "17329723", "21508414" ]
[ { "pmid": "20442142", "title": "caTIES: a grid based system for coding and retrieval of surgical pathology reports and tissue specimens in support of translational research.", "abstract": "The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs." }, { "pmid": "20064797", "title": "MedEx: a medication information extraction system for clinical narratives.", "abstract": "Medication information is one of the most important types of clinical data in electronic medical records. It is critical for healthcare safety and quality, as well as for clinical research that uses electronic medical record data. However, medication data are often recorded in clinical notes as free-text. As such, they are not accessible to other computerized applications that rely on coded data. We describe a new natural language processing system (MedEx), which extracts medication information from clinical notes. MedEx was initially developed using discharge summaries. An evaluation using a data set of 50 discharge summaries showed it performed well on identifying not only drug names (F-measure 93.2%), but also signature information, such as strength, route, and frequency, with F-measures of 94.5%, 93.9%, and 96.0% respectively. We then applied MedEx unchanged to outpatient clinic visit notes. It performed similarly with F-measures over 90% on a set of 25 clinic visit notes." }, { "pmid": "15187068", "title": "Automated encoding of clinical documents based on natural language processing.", "abstract": "OBJECTIVE\nThe aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method.\n\n\nMETHODS\nAn existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. 
Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts.\n\n\nRESULTS\nRecall of the system for UMLS coding of all terms was .77 (95% CI.72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91.\n\n\nCONCLUSION\nExtraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval." }, { "pmid": "20819853", "title": "Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications.", "abstract": "We aim to build and evaluate an open-source natural language processing system for information extraction from electronic medical record clinical free-text. We describe and evaluate our system, the clinical Text Analysis and Knowledge Extraction System (cTAKES), released open-source at http://www.ohnlp.org. The cTAKES builds on existing open-source technologies-the Unstructured Information Management Architecture framework and OpenNLP natural language processing toolkit. Its components, specifically trained for the clinical domain, create rich linguistic and semantic annotations. Performance of individual components: sentence boundary detector accuracy=0.949; tokenizer accuracy=0.949; part-of-speech tagger accuracy=0.936; shallow parser F-score=0.924; named entity recognizer and system-level evaluation F-score=0.715 for exact and 0.824 for overlapping spans, and accuracy for concept mapping, negation, and status attributes for exact and overlapping spans of 0.957, 0.943, 0.859, and 0.580, 0.939, and 0.839, respectively. Overall performance is discussed against five applications. The cTAKES annotations are the foundation for methods and modules for higher-level semantic processing of clinical free-text." }, { "pmid": "11825149", "title": "Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program.", "abstract": "The UMLS Metathesaurus, the largest thesaurus in the biomedical domain, provides a representation of biomedical knowledge consisting of concepts classified by semantic type and both hierarchical and non-hierarchical relationships among the concepts. This knowledge has proved useful for many applications including decision support systems, management of patient records, information retrieval (IR) and data mining. Gaining effective access to the knowledge is critical to the success of these applications. This paper describes MetaMap, a program developed at the National Library of Medicine (NLM) to map biomedical text to the Metathesaurus or, equivalently, to discover Metathesaurus concepts referred to in text. MetaMap uses a knowledge intensive approach based on symbolic, natural language processing (NLP) and computational linguistic techniques. 
Besides being applied for both IR and data mining applications, MetaMap is one of the foundations of NLM's Indexing Initiative System which is being applied to both semi-automatic and fully automatic indexing of the biomedical literature at the library." }, { "pmid": "16872495", "title": "Extracting principal diagnosis, co-morbidity and smoking status for asthma research: evaluation of a natural language processing system.", "abstract": "BACKGROUND\nThe text descriptions in electronic medical records are a rich source of information. We have developed a Health Information Text Extraction (HITEx) tool and used it to extract key findings for a research study on airways disease.\n\n\nMETHODS\nThe principal diagnosis, co-morbidity and smoking status extracted by HITEx from a set of 150 discharge summaries were compared to an expert-generated gold standard.\n\n\nRESULTS\nThe accuracy of HITEx was 82% for principal diagnosis, 87% for co-morbidity, and 90% for smoking status extraction, when cases labeled \"Insufficient Data\" by the gold standard were excluded.\n\n\nCONCLUSION\nWe consider the results promising, given the complexity of the discharge summaries and the extraction tasks." }, { "pmid": "24431336", "title": "Assisted annotation of medical free text using RapTAT.", "abstract": "OBJECTIVE\nTo determine whether assisted annotation using interactive training can reduce the time required to annotate a clinical document corpus without introducing bias.\n\n\nMATERIALS AND METHODS\nA tool, RapTAT, was designed to assist annotation by iteratively pre-annotating probable phrases of interest within a document, presenting the annotations to a reviewer for correction, and then using the corrected annotations for further machine learning-based training before pre-annotating subsequent documents. Annotators reviewed 404 clinical notes either manually or using RapTAT assistance for concepts related to quality of care during heart failure treatment. Notes were divided into 20 batches of 19-21 documents for iterative annotation and training.\n\n\nRESULTS\nThe number of correct RapTAT pre-annotations increased significantly and annotation time per batch decreased by ~50% over the course of annotation. Annotation rate increased from batch to batch for assisted but not manual reviewers. Pre-annotation F-measure increased from 0.5 to 0.6 to >0.80 (relative to both assisted reviewer and reference annotations) over the first three batches and more slowly thereafter. Overall inter-annotator agreement was significantly higher between RapTAT-assisted reviewers (0.89) than between manual reviewers (0.85).\n\n\nDISCUSSION\nThe tool reduced workload by decreasing the number of annotations needing to be added and helping reviewers to annotate at an increased rate. Agreement between the pre-annotations and reference standard, and agreement between the pre-annotations and assisted annotations, were similar throughout the annotation process, which suggests that pre-annotation did not introduce bias.\n\n\nCONCLUSIONS\nPre-annotations generated by a tool capable of interactive training can reduce the time required to create an annotated document corpus by up to 50%." 
}, { "pmid": "23364851", "title": "Applying active learning to supervised word sense disambiguation in MEDLINE.", "abstract": "OBJECTIVES\nThis study was to assess whether active learning strategies can be integrated with supervised word sense disambiguation (WSD) methods, thus reducing the number of annotated samples, while keeping or improving the quality of disambiguation models.\n\n\nMETHODS\nWe developed support vector machine (SVM) classifiers to disambiguate 197 ambiguous terms and abbreviations in the MSH WSD collection. Three different uncertainty sampling-based active learning algorithms were implemented with the SVM classifiers and were compared with a passive learner (PL) based on random sampling. For each ambiguous term and each learning algorithm, a learning curve that plots the accuracy computed from the test set as a function of the number of annotated samples used in the model was generated. The area under the learning curve (ALC) was used as the primary metric for evaluation.\n\n\nRESULTS\nOur experiments demonstrated that active learners (ALs) significantly outperformed the PL, showing better performance for 177 out of 197 (89.8%) WSD tasks. Further analysis showed that to achieve an average accuracy of 90%, the PL needed 38 annotated samples, while the ALs needed only 24, a 37% reduction in annotation effort. Moreover, we analyzed cases where active learning algorithms did not achieve superior performance and identified three causes: (1) poor models in the early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements.\n\n\nCONCLUSIONS\nThis study demonstrated that integrating active learning strategies with supervised WSD methods could effectively reduce annotation cost and improve the disambiguation models." }, { "pmid": "23851443", "title": "Applying active learning to high-throughput phenotyping algorithms for electronic health records data.", "abstract": "OBJECTIVES\nGeneralizable, high-throughput phenotyping methods based on supervised machine learning (ML) algorithms could significantly accelerate the use of electronic health records data for clinical and translational research. However, they often require large numbers of annotated samples, which are costly and time-consuming to review. We investigated the use of active learning (AL) in ML-based phenotyping algorithms.\n\n\nMETHODS\nWe integrated an uncertainty sampling AL approach with support vector machines-based phenotyping algorithms and evaluated its performance using three annotated disease cohorts including rheumatoid arthritis (RA), colorectal cancer (CRC), and venous thromboembolism (VTE). We investigated performance using two types of feature sets: unrefined features, which contained at least all clinical concepts extracted from notes and billing codes; and a smaller set of refined features selected by domain experts. The performance of the AL was compared with a passive learning (PL) approach based on random sampling.\n\n\nRESULTS\nOur evaluation showed that AL outperformed PL on three phenotyping tasks. When unrefined features were used in the RA and CRC tasks, AL reduced the number of annotated samples required to achieve an area under the curve (AUC) score of 0.95 by 68% and 23%, respectively. AL also achieved a reduction of 68% for VTE with an optimal AUC of 0.70 using refined features. 
As expected, refined features improved the performance of phenotyping classifiers and required fewer annotated samples.\n\n\nCONCLUSIONS\nThis study demonstrated that AL can be useful in ML-based phenotyping methods. Moreover, AL and feature engineering based on domain knowledge could be combined to develop efficient and generalizable phenotyping methods." }, { "pmid": "23665099", "title": "Aggregate risk score based on markers of inflammation, cell stress, and coagulation is an independent predictor of adverse cardiovascular outcomes.", "abstract": "OBJECTIVES\nThis study sought to determine an aggregate, pathway-specific risk score for enhanced prediction of death and myocardial infarction (MI).\n\n\nBACKGROUND\nActivation of inflammatory, coagulation, and cellular stress pathways contribute to atherosclerotic plaque rupture. We hypothesized that an aggregate risk score comprised of biomarkers involved in these different pathways-high-sensitivity C-reactive protein (CRP), fibrin degradation products (FDP), and heat shock protein 70 (HSP70) levels-would be a powerful predictor of death and MI.\n\n\nMETHODS\nSerum levels of CRP, FDP, and HSP70 were measured in 3,415 consecutive patients with suspected or confirmed coronary artery disease (CAD) undergoing cardiac catheterization. Survival analyses were performed with models adjusted for established risk factors.\n\n\nRESULTS\nMedian follow-up was 2.3 years. Hazard ratios (HRs) for all-cause death and MI based on cutpoints were as follows: CRP ≥3.0 mg/l, HR: 1.61; HSP70 >0.625 ng/ml, HR; 2.26; and FDP ≥1.0 μg/ml, HR: 1.62 (p < 0.0001 for all). An aggregate biomarker score between 0 and 3 was calculated based on these cutpoints. Compared with the group with a 0 score, HRs for all-cause death and MI were 1.83, 3.46, and 4.99 for those with scores of 1, 2, and 3, respectively (p for each: <0.001). Annual event rates were 16.3% for the 4.2% of patients with a score of 3 compared with 2.4% in 36.4% of patients with a score of 0. The C statistic and net reclassification improved (p < 0.0001) with the addition of the biomarker score.\n\n\nCONCLUSIONS\nAn aggregate score based on serum levels of CRP, FDP, and HSP70 is a predictor of future risk of death and MI in patients with suspected or known CAD." }, { "pmid": "17329723", "title": "A novel hybrid approach to automated negation detection in clinical radiology reports.", "abstract": "OBJECTIVE\nNegation is common in clinical documents and is an important source of poor precision in automated indexing systems. Previous research has shown that negated terms may be difficult to identify if the words implying negations (negation signals) are more than a few words away from them. We describe a novel hybrid approach, combining regular expression matching with grammatical parsing, to address the above limitation in automatically detecting negations in clinical radiology reports.\n\n\nDESIGN\nNegations are classified based upon the syntactical categories of negation signals, and negation patterns, using regular expression matching. Negated terms are then located in parse trees using corresponding negation grammar.\n\n\nMEASUREMENTS\nA classification of negations and their corresponding syntactical and lexical patterns were developed through manual inspection of 30 radiology reports and validated on a set of 470 radiology reports. 
Another 120 radiology reports were randomly selected as the test set on which a modified Delphi design was used by four physicians to construct the gold standard.\n\n\nRESULTS\nIn the test set of 120 reports, there were a total of 2,976 noun phrases, of which 287 were correctly identified as negated (true positives), along with 23 undetected true negations (false negatives) and 4 mistaken negations (false positives). The hybrid approach identified negated phrases with sensitivity of 92.6% (95% CI 90.9-93.4%), positive predictive value of 98.6% (95% CI 96.9-99.4%), and specificity of 99.87% (95% CI 99.7-99.9%).\n\n\nCONCLUSION\nThis novel hybrid approach can accurately locate negated concepts in clinical radiology reports not only when in close proximity to, but also at a distance from, negation signals." }, { "pmid": "21508414", "title": "A study of machine-learning-based approaches to extract clinical entities and their assertions from discharge summaries.", "abstract": "OBJECTIVE\nThe authors' goal was to develop and evaluate machine-learning-based approaches to extracting clinical entities-including medical problems, tests, and treatments, as well as their asserted status-from hospital discharge summaries written using natural language. This project was part of the 2010 Center of Informatics for Integrating Biology and the Bedside/Veterans Affairs (VA) natural-language-processing challenge.\n\n\nDESIGN\nThe authors implemented a machine-learning-based named entity recognition system for clinical text and systematically evaluated the contributions of different types of features and ML algorithms, using a training corpus of 349 annotated notes. Based on the results from training data, the authors developed a novel hybrid clinical entity extraction system, which integrated heuristic rule-based modules with the ML-base named entity recognition module. The authors applied the hybrid system to the concept extraction and assertion classification tasks in the challenge and evaluated its performance using a test data set with 477 annotated notes.\n\n\nMEASUREMENTS\nStandard measures including precision, recall, and F-measure were calculated using the evaluation script provided by the Center of Informatics for Integrating Biology and the Bedside/VA challenge organizers. The overall performance for all three types of clinical entities and all six types of assertions across 477 annotated notes were considered as the primary metric in the challenge.\n\n\nRESULTS AND DISCUSSION\nSystematic evaluation on the training set showed that Conditional Random Fields outperformed Support Vector Machines, and semantic information from existing natural-language-processing systems largely improved performance, although contributions from different types of features varied. The authors' hybrid entity extraction system achieved a maximum overall F-score of 0.8391 for concept extraction (ranked second) and 0.9313 for assertion classification (ranked fourth, but not statistically different than the first three systems) on the test data set in the challenge." } ]
Materials
null
PMC5449064
10.3390/ma5122465
A Novel Fractional Order Model for the Dynamic Hysteresis of Piezoelectrically Actuated Fast Tool Servo
The main contribution of this paper is the development of a linearized model for describing the dynamic hysteresis behaviors of piezoelectrically actuated fast tool servo (FTS). A linearized hysteresis force model is proposed and mathematically described by a fractional order differential equation. Combining the dynamic modeling of the FTS mechanism, a linearized fractional order dynamic hysteresis (LFDH) model for the piezoelectrically actuated FTS is established. The unique features of the LFDH model could be summarized as follows: (a) It could well describe the rate-dependent hysteresis due to its intrinsic characteristics of frequency-dependent nonlinear phase shifts and amplitude modulations; (b) The linearization scheme of the LFDH model would make it easier to implement the inverse dynamic control on piezoelectrically actuated micro-systems. To verify the effectiveness of the proposed model, a series of experiments are conducted. The toolpaths of the FTS for creating two typical micro-functional surfaces involving various harmonic components with different frequencies and amplitudes are scaled and employed as command signals for the piezoelectric actuator. The modeling errors in the steady state are less than ±2.5% within the full span range which is much smaller than certain state-of-the-art modeling methods, demonstrating the efficiency and superiority of the proposed model for modeling dynamic hysteresis effects. Moreover, it indicates that the piezoelectrically actuated micro systems would be more suitably described as a fractional order dynamic system.
2. A Brief Review of Related Work
Mrad and Hu and Hu et al. extended the classical Preisach model to describe the rate-dependent behaviors of hysteresis by using an explicit weighting function of the average change rate of the input signal [21,22,23]. To enable the Preisach model to represent the dynamic behaviors of a controlled PEA, Yu et al. modified the weighting function to depend on the variation rate of the input signal; to avoid ill-behaved responses caused by large variations of the input signal, an adjustment function of the input variation rate was introduced, which had to be fitted from experimental data [24]. Recently, various rate-dependent Prandtl–Ishlinskii (PI) elementary operators have been introduced to model dynamic hysteresis effects. Ang et al. proposed a modified dynamic PI model in which the rate-dependent hysteresis was modeled by rate-dependent weighting values derived from the linear slope model of the hysteresis curve [25,26]. Janaideh et al. introduced a dynamic threshold as a function of the input variation rate; the relationship between the threshold and the variation rate of the input signal takes a logarithmic form to describe the essential characteristics of the hysteresis [27,28,29]. In both the generalized Preisach model and the PI model, the hysteresis loops were modeled as the sum of a number of elementary operators, and the rate-dependent behaviors were further described by modified dynamic weighting values, which were often functions of the derivative of the input signal. The main disadvantage of these modeling methods is that they have a large number of parameters to be identified, which may limit their application in real-time control.
Besides the well-known Preisach and PI models, neural network (NN) based methods have also been extensively employed to model dynamic hysteresis effects. Dong et al. employed a feedforward NN to model the hysteresis of the PEA, using the variation rate to construct the expanded input space [30]. Zhang and Tan proposed a parallel hybrid model for the rate-dependent hysteresis: a neural submodel was established to simulate the static hysteresis loop, while a submodel based on first-order differential operators with time delays was employed to describe the dynamics of the hysteresis [31]. However, NN-based modeling has inherent defects, which can be summarized as follows: (a) there are no universal rules to optimally determine the structure of the NN; (b) NNs suffer from overfitting and from getting trapped in local optima [32]; (c) the capacities of fitting and prediction cannot be well balanced.
Some other novel mathematical models for dynamic hysteresis have been proposed. For instance, by transforming the multi-valued mapping of hysteresis into a one-to-one mapping, Deng and Tan proposed a nonlinear auto-regressive moving average with exogenous inputs (NARMAX) model to describe the dynamic hysteresis [33]. Similarly, Wong et al. formulated the modeling as a regression process and proposed an online updating least squares support vector machine (LS-SVM) model and a relevance vector machine (RVM) model to capture the dynamic hysteretic behaviors [32]. Nevertheless, a compromise must be made between modeling accuracy and updating time, which makes these approaches difficult to apply under high-frequency working conditions. Rakotondrabe et al. modeled the dynamic hysteresis as a combination of the static Bouc-Wen model and a second-order linear dynamic part [34]. In [35] and [36], Gu and Zhu proposed an ellipse-based hysteresis model in which the effects of the frequency and amplitude of the input signal were captured by adjusting the major and minor axes and the orientation of the ellipse. However, the model parameters were difficult to determine such that the dynamic hysteresis characteristics were well described and predicted, and the ability to describe responses to input signals with multiple frequency components was limited.
Fractional order calculus (FOC) theory, which is a generalization of conventional calculus, has found rapidly increasing application in various fields [37,38,39]. It is widely believed that FOC can describe a real process more accurately and more flexibly than classical methods [38,39,40]. A typical application of FOC is the description of the dynamic properties of visco-elastic materials [41,42]. Motivated by fractional order models for visco-elastic materials, Sunny et al. proposed two models to describe the resistance-strain hysteresis of a conductive polymer sample by combining a series of fractional/integer order functions [43]. However, both models contained too many parameters to be identified, and the hysteresis phenomenon considered was different from that of PEAs. Guyomar et al. described ferroelectric hysteresis dynamics based on fractional order derivatives covering a wide frequency bandwidth [44,45]. In this method, the fractional order derivative term was employed to represent the viscous-like energy loss, and the derivative order was fixed at 0.5. Although the fixed order exhibits the unique characteristics of fractional calculus, it significantly decreases the flexibility of the model and hinders the application of this method. As in the work presented by Sunny et al., the hysteresis between the electrical polarization and the mechanical strain was also quite different from that of the PEA. Nevertheless, all these results have demonstrated the potential of fractional order models for describing both static and dynamic hysteresis behaviors, and they provide a fresh perspective on this topic.
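Because the fractional order derivative is the central ingredient of the models discussed above, a small numerical illustration may be useful. The sketch below evaluates the Grünwald-Letnikov approximation of an order-alpha derivative of a uniformly sampled signal, which is one standard discrete-time scheme for fractional order terms; the step size, test signal, and the choice alpha = 0.5 are arbitrary illustration values and are not taken from the paper.

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative of a
    uniformly sampled signal f with step h (truncated to the available history)."""
    n = len(f)
    # Recursive binomial weights: w[k] = (-1)^k * C(alpha, k)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    d = np.zeros(n)
    for i in range(n):
        # weighted sum over f[i], f[i-1], ..., f[0]
        d[i] = np.dot(w[: i + 1], f[i::-1]) / h**alpha
    return d

# Example: half-order (alpha = 0.5) derivative of sin(t); as alpha -> 1 the
# result approaches cos(t), and as alpha -> 0 it approaches sin(t) itself.
h = 0.001
t = np.arange(0.0, 2.0 * np.pi, h)
half_derivative = gl_fractional_derivative(np.sin(t), alpha=0.5, h=h)
```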
[ "20815625" ]
[ { "pmid": "20815625", "title": "High-speed tracking control of piezoelectric actuators using an ellipse-based hysteresis model.", "abstract": "In this paper, an ellipse-based mathematic model is developed to characterize the rate-dependent hysteresis in piezoelectric actuators. Based on the proposed model, an expanded input space is constructed to describe the multivalued hysteresis function H[u](t) by a multiple input single output (MISO) mapping Gamma:R(2)-->R. Subsequently, the inverse MISO mapping Gamma(-1)(H[u](t),H[u](t);u(t)) is proposed for real-time hysteresis compensation. In controller design, a hybrid control strategy combining a model-based feedforward controller and a proportional integral differential (PID) feedback loop is used for high-accuracy and high-speed tracking control of piezoelectric actuators. The real-time feedforward controller is developed to cancel the rate-dependent hysteresis based on the inverse hysteresis model, while the PID controller is used to compensate for the creep, modeling errors, and parameter uncertainties. Finally, experiments with and without hysteresis compensation are conducted and the experimental results are compared. The experimental results show that the hysteresis compensation in the feedforward path can reduce the hysteresis-caused error by up to 88% and the tracking performance of the hybrid controller is greatly improved in high-speed tracking control applications, e.g., the root-mean-square tracking error is reduced to only 0.34% of the displacement range under the input frequency of 100 Hz." } ]
Frontiers in Neuroscience
28701911
PMC5487436
10.3389/fnins.2017.00350
An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data
This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.
3.7. Summary of results and comparison with related workTable 3 and Figure 7 summarize the results of this work for all data sets. More specifically, Figure 7A shows how the classification accuracy of an SNN improves as a function of the percentage of input events. This seems to be consistent with all data sets, both synthetically generated (Latency and Poisson encoding) and from a DVS sensor, and is in accordance with previously published studies (Neil and Liu, 2014; Diehl et al., 2015; Stromatias et al., 2015b). With Fast-Poker-DVS data set, there is a decrease in the performance in the last 20% of the input events due to the deformation of the card symbol when it disappears. Figure 7B presents the classification accuracy as a function of the absolute number of input events, in log scale, for the different data sets. This information is useful because in neuromorphic systems the overall energy consumption depends on the total number of events that are going to be processed (Stromatias et al., 2013; Merolla et al., 2014; Neil and Liu, 2014) and this might have an impact on deciding which data set to use based on the energy and latency constraints Figure 9, presents the network latency for each data set. We identify network latency as the time lapsed from the first input spike to the first output spike.Figure 9The mean and standard deviation of the classification latency of the SNNs for each data set.Table 5 presents a comparison of the current work with results in the literature on the MNIST data set and SNNs. The current state-of-the-art results come from a spiking CNN with 7 layers and max-pooling achieving a score of 99.44% and from a 4 layer spiking FC network achieving a score of 98.64% (Diehl et al., 2015). Both approaches were trained offline using frames and backpropagation and then mapped the network parameters to an SNN. However, even though this approach works very well with Poisson spike-trains or direct use of pixels, performance drops significantly with real DVS data. In addition, a direct comparison is not fair because the focus of this paper was to develop a classifier that works with both synthetic and DVS data and not to train a complete neural network with multiple layers.Table 5Comparison of classification accuracies (CA) of SNNs on the MNIST data set.ArchitectureNeural codingLearning-typeLearning-ruleCA (%)Spiking RBM (Neftci et al., 2015)PoissonUnsupervisedEvent-Based CD91.9FC (2 layer network) (Querlioz et al., 2013)PoissonUnsupervisedSTDP93.5FC (4 layer network) (O'Connor et al., 2013)PoissonUnsupervisedCD94.1FC (2 layer network) (Diehl and Cook, 2015)PoissonUnsupervisedSTDP95.0Synaptic Sampling Machine (3 layer network) (Neftci et al., 2016)PoissonUnsupervisedEvent-Based CD95.6FC (4 layer network) (this work(O'Connor et al., 2013))PoissonSupervisedStochastic GD97.25FC (4 layer network) (O'Connor and Welling, 2016)–SupervisedFractional SGD97.8FC (4 layer network) (Hunsberger and Eliasmith, 2015)Not reportedSupervisedBackprop soft LIF neurons98.37FC (4 layer network) (Diehl et al., 2015)PoissonSupervisedStochastic GD98.64CNN (Kheradpisheh et al., 2016)LatencyUnsupervisedSTDP98.4CNN (Diehl et al., 2015)PoissonSupervisedStochastic GD99.14Sparsely Connected Network (×64) (Esser et al., 2015)PoissonSupervisedBackprop99.42CNN (Rueckauer et al., 2016)PoissonSupervisedStochastic GD99.44CNN (this work)LatencySupervisedStochastic GD98.42CNN (this work)PoissonSupervisedStochastic GD98.20Table 6 gathers the results in literature using the N-MNIST data set and SNN. 
The best classification accuracy reported is 98.66%, obtained with a 3-layer FC network (Lee et al., 2016). Among spiking CNNs, this work reports the best classification accuracy to date, 97.77%. Again, the focus of this paper is not to beat the state-of-the-art classification accuracy (no optimization was done to improve performance), but to provide a valid SNN classifier training method with a negligible classifier loss compared with the frame-based classification accuracy.

Table 6: Comparison of classification accuracies (CA) of SNNs on the N-MNIST data set.
Architecture | Preprocessing | Learning type | Learning rule | CA (%)
CNN (Orchard et al., 2015b) | None | Unsupervised | HFirst | 71.15
FC (2 layer network) (Cohen et al., 2016) | None | Supervised | OPIUM (van Schaik and Tapson, 2015) | 92.87
CNN (Neil and Liu, 2016) | Centering | Supervised | – | 95.72
FC (3 layer network) (Lee et al., 2016) | None | Supervised | Backpropagation | 98.66
CNN (this work) | None | Supervised | SGD | 97.77

Finally, Table 7 shows the literature results for the 40-card Fast-Poker-DVS data set. With this work, we demonstrate that 100% classification accuracy is obtained using the leave-one-out cross-validation (LOOCV) method.

Table 7: Comparison of classification accuracies (CA) of SNNs on the 40 cards Fast-Poker-DVS data set.
Architecture | Learning type | Learning rule | CA (%)
CNN (Pérez-Carrasco et al., 2013) | Supervised | Backprop | 90.1 − 91.6
CNN (Orchard et al., 2015b) | Unsupervised | HFirst | 97.5 ± 3.5
CNN (Lagorce et al., 2016) | Supervised | HOTS | 100
CNN (this work) | Supervised | Stochastic GD | 100
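The latency figures summarized above can be reproduced with a computation along the following lines; this is a sketch assuming per-sample lists of input and output spike timestamps, and the data containers are hypothetical rather than taken from the authors' code.

```python
# Illustrative sketch (hypothetical data containers): mean and standard deviation of
# classification latency, defined as first output spike time minus first input spike time.
import numpy as np

def classification_latency(input_times, output_times):
    return min(output_times) - min(input_times)

def latency_stats(samples):
    """samples: iterable of (input_spike_times, output_spike_times) pairs, one per test sample."""
    latencies = [classification_latency(i, o) for i, o in samples if len(o) > 0]
    return float(np.mean(latencies)), float(np.std(latencies))
```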
[ "22386501", "27199646", "1317971", "26941637", "23197532", "16764513", "16873662", "18292226", "26017442", "27877107", "27853419", "17305422", "25104385", "24574952", "25873857", "27445650", "24115919", "26635513", "26353184", "24051730", "25462637", "26733794", "26217169" ]
[ { "pmid": "22386501", "title": "Extraction of temporally correlated features from dynamic vision sensors with spike-timing-dependent plasticity.", "abstract": "A biologically inspired approach to learning temporally correlated patterns from a spiking silicon retina is presented. Spikes are generated from the retina in response to relative changes in illumination at the pixel level and transmitted to a feed-forward spiking neural network. Neurons become sensitive to patterns of pixels with correlated activation times, in a fully unsupervised scheme. This is achieved using a special form of Spike-Timing-Dependent Plasticity which depresses synapses that did not recently contribute to the post-synaptic spike activation, regardless of their activation time. Competitive learning is implemented with lateral inhibition. When tested with real-life data, the system is able to extract complex and overlapping temporally correlated features such as car trajectories on a freeway, after only 10 min of traffic learning. Complete trajectories can be learned with a 98% detection rate using a second layer, still with unsupervised learning, and the system may be used as a car counter. The proposed neural network is extremely robust to noise and it can tolerate a high degree of synaptic and neuronal variability with little impact on performance. Such results show that a simple biologically inspired unsupervised learning scheme is capable of generating selectivity to complex meaningful events on the basis of relatively little sensory experience." }, { "pmid": "27199646", "title": "Skimming Digits: Neuromorphic Classification of Spike-Encoded Images.", "abstract": "The growing demands placed upon the field of computer vision have renewed the focus on alternative visual scene representations and processing paradigms. Silicon retinea provide an alternative means of imaging the visual environment, and produce frame-free spatio-temporal data. This paper presents an investigation into event-based digit classification using N-MNIST, a neuromorphic dataset created with a silicon retina, and the Synaptic Kernel Inverse Method (SKIM), a learning method based on principles of dendritic computation. As this work represents the first large-scale and multi-class classification task performed using the SKIM network, it explores different training patterns and output determination methods necessary to extend the original SKIM method to support multi-class problems. Making use of SKIM networks applied to real-world datasets, implementing the largest hidden layer sizes and simultaneously training the largest number of output neurons, the classification system achieved a best-case accuracy of 92.87% for a network containing 10,000 hidden layer neurons. These results represent the highest accuracies achieved against the dataset to date and serve to validate the application of the SKIM method to event-based visual classification tasks. Additionally, the study found that using a square pulse as the supervisory training signal produced the highest accuracy for most output determination methods, but the results also demonstrate that an exponential pattern is better suited to hardware implementations as it makes use of the simplest output determination method based on the maximum value." }, { "pmid": "1317971", "title": "Hebbian depression of isolated neuromuscular synapses in vitro.", "abstract": "Modulation of synaptic efficacy may depend on the temporal correlation between pre- and postsynaptic activities. 
At isolated neuromuscular synapses in culture, repetitive postsynaptic application of acetylcholine pulses alone or in the presence of asynchronous presynaptic activity resulted in immediate and persistent synaptic depression, whereas synchronous pre- and postsynaptic coactivation had no effect. This synaptic depression was a result of a reduction of evoked transmitter release, but induction of the depression requires a rise in postsynaptic cytosolic calcium concentration. Thus, Hebbian modulation operates at isolated peripheral synapses in vitro, and transsynaptic retrograde interaction appears to be an underlying mechanism." }, { "pmid": "26941637", "title": "Unsupervised learning of digit recognition using spike-timing-dependent plasticity.", "abstract": "In order to understand how the mammalian neocortex is performing computations, two things are necessary; we need to have a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Therefore, in recent years there is an increasing interest in how spiking neural networks (SNN) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs which use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to a SNN. We present a SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks." }, { "pmid": "23197532", "title": "A large-scale model of the functioning brain.", "abstract": "A central challenge for cognitive and systems neuroscience is to relate the incredibly complex behavior of animals to the equally complex activity of their brains. Recently described, large-scale neural models have not bridged this gap between neural activity and biological function. In this work, we present a 2.5-million-neuron model of the brain (called \"Spaun\") that bridges this gap by exhibiting many different behaviors. The model is presented only with visual image sequences, and it draws all of its responses with a physically modeled arm. Although simplified, the model captures many aspects of neuroanatomy, neurophysiology, and psychological behavior, which we demonstrate via eight diverse tasks." 
}, { "pmid": "16764513", "title": "A fast learning algorithm for deep belief nets.", "abstract": "We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind." }, { "pmid": "16873662", "title": "Reducing the dimensionality of data with neural networks.", "abstract": "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such \"autoencoder\" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data." }, { "pmid": "18292226", "title": "Large-scale model of mammalian thalamocortical systems.", "abstract": "The understanding of the structural and dynamic complexity of mammalian brains is greatly facilitated by computer simulations. We present here a detailed large-scale thalamocortical model based on experimental measures in several mammalian species. The model spans three anatomical scales. (i) It is based on global (white-matter) thalamocortical anatomy obtained by means of diffusion tensor imaging (DTI) of a human brain. (ii) It includes multiple thalamic nuclei and six-layered cortical microcircuitry based on in vitro labeling and three-dimensional reconstruction of single neurons of cat visual cortex. (iii) It has 22 basic types of neurons with appropriate laminar distribution of their branching dendritic trees. The model simulates one million multicompartmental spiking neurons calibrated to reproduce known types of responses recorded in vitro in rats. It has almost half a billion synapses with appropriate receptor kinetics, short-term plasticity, and long-term dendritic spike-timing-dependent synaptic plasticity (dendritic STDP). The model exhibits behavioral regimes of normal brain activity that were not explicitly built-in but emerged spontaneously as the result of interactions among anatomical and dynamic processes. We describe spontaneous activity, sensitivity to changes in individual neurons, emergence of waves and rhythms, and functional connectivity on different scales." 
}, { "pmid": "26017442", "title": "Deep learning.", "abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech." }, { "pmid": "27877107", "title": "Training Deep Spiking Neural Networks Using Backpropagation.", "abstract": "Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations." }, { "pmid": "27853419", "title": "Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation.", "abstract": "Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. 
A new set of vision benchmarks for spike-based neural processing are now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarksand that uses digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance. Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field." }, { "pmid": "17305422", "title": "Unsupervised learning of visual features through spike timing dependent plasticity.", "abstract": "Spike timing dependent plasticity (STDP) is a learning rule that modifies synaptic strength as a function of the relative timing of pre- and postsynaptic spikes. When a neuron is repeatedly presented with similar inputs, STDP is known to have the effect of concentrating high synaptic weights on afferents that systematically fire early, while postsynaptic spike latencies decrease. Here we use this learning rule in an asynchronous feedforward spiking neural network that mimics the ventral visual pathway and shows that when the network is presented with natural images, selectivity to intermediate-complexity visual features emerges. Those features, which correspond to prototypical patterns that are both salient and consistently present in the images, are highly informative and enable robust object recognition, as demonstrated on various classification tasks. Taken together, these results show that temporal codes may be a key to understanding the phenomenal processing speed achieved by the visual system and that STDP can lead to fast and selective responses." }, { "pmid": "25104385", "title": "Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.", "abstract": "Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. 
The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts." }, { "pmid": "24574952", "title": "Event-driven contrastive divergence for spiking neuromorphic systems.", "abstract": "Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train a RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality." }, { "pmid": "27445650", "title": "Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines.", "abstract": "Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. 
The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware." }, { "pmid": "24115919", "title": "Real-time classification and sensor fusion with a spiking deep belief network.", "abstract": "Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input." }, { "pmid": "26635513", "title": "Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.", "abstract": "Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches." 
}, { "pmid": "26353184", "title": "HFirst: A Temporal Approach to Object Recognition.", "abstract": "This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal-winner-take-all rather than more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase effectiveness of the approach by achieving the highest reported accuracy to date (97.5% ± 3.5%) for a previously published four class card pip recognition task and an accuracy of 84.9% ± 1.9% for a new more difficult 36 class character recognition task." }, { "pmid": "24051730", "title": "Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate coding and coincidence processing--application to feedforward ConvNets.", "abstract": "Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given \"frame rate.\" Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or \"temporal contrast.\" The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to \"reality.\" These events can be processed \"as they flow\" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules." 
}, { "pmid": "25462637", "title": "Deep learning in neural networks: an overview.", "abstract": "In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks." }, { "pmid": "26733794", "title": "Poker-DVS and MNIST-DVS. Their History, How They Were Made, and Other Details.", "abstract": "This article reports on two databases for event-driven object recognition using a Dynamic Vision Sensor (DVS). The first, which we call Poker-DVS and is being released together with this article, was obtained by browsing specially made poker card decks in front of a DVS camera for 2-4 s. Each card appeared on the screen for about 20-30 ms. The poker pips were tracked and isolated off-line to constitute the 131-recording Poker-DVS database. The second database, which we call MNIST-DVS and which was released in December 2013, consists of a set of 30,000 DVS camera recordings obtained by displaying 10,000 moving symbols from the standard MNIST 70,000-picture database on an LCD monitor for about 2-3 s each. Each of the 10,000 symbols was displayed at three different scales, so that event-driven object recognition algorithms could easily be tested for different object sizes. This article tells the story behind both databases, covering, among other aspects, details of how they work and the reasons for their creation. We provide not only the databases with corresponding scripts, but also the scripts and data used to generate the figures shown in this article (as Supplementary Material)." }, { "pmid": "26217169", "title": "Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.", "abstract": "Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. 
Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time." } ]
JMIR Public Health and Surveillance
28630032
PMC5495967
10.2196/publichealth.7157
What Are People Tweeting About Zika? An Exploratory Study Concerning Its Symptoms, Treatment, Transmission, and Prevention
BackgroundIn order to harness what people are tweeting about Zika, there needs to be a computational framework that leverages machine learning techniques to recognize relevant Zika tweets and, further, categorize these into disease-specific categories to address specific societal concerns related to the prevention, transmission, symptoms, and treatment of Zika virus.ObjectiveThe purpose of this study was to determine the relevancy of the tweets and what people were tweeting about the 4 disease characteristics of Zika: symptoms, transmission, prevention, and treatment.MethodsA combination of natural language processing and machine learning techniques was used to determine what people were tweeting about Zika. Specifically, a two-stage classifier system was built to find relevant tweets about Zika, and then the tweets were categorized into 4 disease categories. Tweets in each disease category were then examined using latent Dirichlet allocation (LDA) to determine the 5 main tweet topics for each disease characteristic.ResultsOver 4 months, 1,234,605 tweets were collected. The number of tweets by males and females was similar (28.47% [351,453/1,234,605] and 23.02% [284,207/1,234,605], respectively). The classifier performed well on the training and test data for relevancy (F1 score=0.87 and 0.99, respectively) and disease characteristics (F1 score=0.79 and 0.90, respectively). Five topics for each category were found and discussed, with a focus on the symptoms category.ConclusionsWe demonstrate how categories of discussion on Twitter about an epidemic can be discovered so that public health officials can understand specific societal concerns within the disease-specific categories. Our two-stage classifier was able to identify relevant tweets to enable more specific analysis, including the specific aspects of Zika that were being discussed as well as misinformation being expressed. Future studies can capture sentiments and opinions on epidemic outbreaks like Zika virus in real time, which will likely inform efforts to educate the public at large.
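As a rough illustration of the two-stage classifier system described in this abstract (relevance filtering followed by categorization into disease characteristics), the sketch below uses generic scikit-learn components; the TF-IDF features, the linear SVM, and the toy labeled examples are assumptions rather than the study's exact configuration.

```python
# Illustrative sketch of a two-stage tweet classifier: stage 1 keeps Zika-relevant tweets,
# stage 2 assigns a disease characteristic (symptoms, transmission, prevention, treatment).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def train_stage(texts, labels):
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(texts, labels)
    return model

def two_stage_predict(relevance_clf, category_clf, tweet):
    if relevance_clf.predict([tweet])[0] != "relevant":
        return "irrelevant"
    return category_clf.predict([tweet])[0]

# Toy annotated examples standing in for the real labeled tweet corpus:
relevance_clf = train_stage(["zika fever and rash today", "great soccer game tonight"],
                            ["relevant", "irrelevant"])
category_clf = train_stage(["zika fever and rash today", "use mosquito repellent daily"],
                           ["symptoms", "prevention"])
print(two_stage_predict(relevance_clf, category_clf, "my zika rash will not go away"))
```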
Related Works

A study by Oyeyemi et al [10] concerning misinformation about Ebola on Twitter found that 44.0% (248/564) of the tweets about Ebola were retweeted at least once, with 38.3% (95/248) of those tweets being scientifically accurate, whereas 58.9% (146/248) were inaccurate. Furthermore, most of the tweets containing misinformation were never corrected. Another study about Ebola by Tran and Lee [4] found that the first reported incident of a doctor with Ebola had more impact and received more attention than any other incident, showing that people pay more attention and react more strongly to a new issue.

Majumder et al attempted to estimate the basic R0 and Robs for Zika using HealthMap and Google Trends [11]. R0, the basic reproduction number, is the expected number of new infections generated by the first infected individual in a disease-free population. Robs is the observed number of secondary cases per infected individual. Their results indicate that the ranges for Robs were comparable between the traditional method and the novel method. However, traditional methods produced higher R0 estimates than the HealthMap and Google Trends data. This indicates that digital surveillance methods can estimate transmission parameters in real time in the absence of traditional methods.

Another study collected tweets on Zika for 3 months [12]. They found that citizens were more concerned with long-term issues than with short-term issues such as fever and rash. Using hierarchical clustering and word co-occurrence analysis, they found underlying themes related to immediate effects, such as the spread of Zika, while long-term effects had themes such as pregnancy. One issue with this paper was that the authors never employed experts to check the relevance of the tweets with respect to these topics, which is a common problem in mining social media data.

A study by Glowacki et al [13] collected tweets during an hour-long live CDC Twitter chat. For the topic analysis, they only included words used in more than 4 messages and found that the 10-topic solution best explained the themes. Some of the themes were the virology of Zika, its spread, consequences for infants and pregnant women, sexual transmission, and symptoms. This was a curated study in which only tweets to and from the CDC were explored, whereas the aim of our larger study was to determine what the general public was discussing about Zika.

A study by Fu et al [14] analyzed tweets from May 1, 2015 to April 2, 2016 and found 5 themes using topic modeling: (1) government, private and public sector, and general public response to the outbreak; (2) transmission routes; (3) societal impacts of the outbreak; (4) case reports; and (5) pregnancy and microcephaly. This study did not check for noise within the social media data. Moreover, the computational analysis was limited to 3 days of data, which may not reflect the themes in the larger dataset.

In many of these studies, the need to check the performance of the system, as well as to conduct a post hoc error analysis of the generalizability of the method, is overlooked.
We address this in our study by employing machine learning techniques on an annotated dataset, as well as a post hoc error analysis on a test dataset, to ensure the generalizability of our system.

Figure 1: Block diagram of the pragmatic function-oriented content retrieval using a hierarchical supervised classification technique, followed by deeper analysis for characteristics of disease content.

In this study, an exploratory analysis focused on finding important subcategories of discussion topics from Zika-related tweets was performed. Specifically, we addressed 4 key characteristics of Zika: symptoms, transmission, treatment, and prevention. Using the system described in Figure 1, the following research questions were addressed:

R1. Dataset Distribution Analysis: What proportion of male and female users tweeted about Zika, what were the polarities of the tweets by male and female users, and what were the proportions of tweets that discussed topics related to the different disease characteristics (symptoms, transmission, treatment, and prevention)?

R2. Classification Performance Analysis: What was the agreement among annotators' labels that were used as the ground truth in this study, what was the classification performance in detecting tweets relevant to Zika, and how well were the classifiers able to distinguish between tweets on the different disease characteristics?

R3. Topical Analysis: What were the main discussion topics in each of these categories, and what were the most persistent concerns or misconceptions regarding the Zika virus?
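The topical analysis addressed in R3 can be illustrated with a short latent Dirichlet allocation sketch applied to the tweets of a single disease-characteristic category; the scikit-learn implementation and preprocessing choices below are assumptions and may differ from the study's own LDA configuration.

```python
# Illustrative sketch of the topic analysis step (R3): latent Dirichlet allocation with
# 5 topics over the tweets of one disease-characteristic category (e.g., symptoms).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def top_topic_words(tweets, n_topics=5, n_words=8):
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(tweets)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(counts)
    vocab = vectorizer.get_feature_names_out()
    # For each topic, return the n_words terms with the highest weights.
    return [[vocab[i] for i in topic.argsort()[-n_words:][::-1]] for topic in lda.components_]

# Hypothetical usage: for words in top_topic_words(symptom_tweets): print(words)
```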
[ "26965962", "25315514", "27251981", "27544795", "27566874", "23092060" ]
[ { "pmid": "26965962", "title": "Zika virus infection-the next wave after dengue?", "abstract": "Zika virus was initially discovered in east Africa about 70 years ago and remained a neglected arboviral disease in Africa and Southeast Asia. The virus first came into the limelight in 2007 when it caused an outbreak in Micronesia. In the ensuing decade, it spread widely in other Pacific islands, after which its incursion into Brazil in 2015 led to a widespread epidemic in Latin America. In most infected patients the disease is relatively benign. Serious complications include Guillain-Barré syndrome and congenital infection which may lead to microcephaly and maculopathy. Aedes mosquitoes are the main vectors, in particular, Ae. aegypti. Ae. albopictus is another potential vector. Since the competent mosquito vectors are highly prevalent in most tropical and subtropical countries, introduction of the virus to these areas could readily result in endemic transmission of the disease. The priorities of control include reinforcing education of travellers to and residents of endemic areas, preventing further local transmission by vectors, and an integrated vector management programme. The container habitats of Ae. aegypti and Ae. albopictus means engagement of the community and citizens is of utmost importance to the success of vector control." }, { "pmid": "27251981", "title": "Utilizing Nontraditional Data Sources for Near Real-Time Estimation of Transmission Dynamics During the 2015-2016 Colombian Zika Virus Disease Outbreak.", "abstract": "BACKGROUND\nApproximately 40 countries in Central and South America have experienced local vector-born transmission of Zika virus, resulting in nearly 300,000 total reported cases of Zika virus disease to date. Of the cases that have sought care thus far in the region, more than 70,000 have been reported out of Colombia.\n\n\nOBJECTIVE\nIn this paper, we use nontraditional digital disease surveillance data via HealthMap and Google Trends to develop near real-time estimates for the basic (R) and observed (Robs) reproductive numbers associated with Zika virus disease in Colombia. We then validate our results against traditional health care-based disease surveillance data.\n\n\nMETHODS\nCumulative reported case counts of Zika virus disease in Colombia were acquired via the HealthMap digital disease surveillance system. Linear smoothing was conducted to adjust the shape of the HealthMap cumulative case curve using Google search data. Traditional surveillance data on Zika virus disease were obtained from weekly Instituto Nacional de Salud (INS) epidemiological bulletin publications. The Incidence Decay and Exponential Adjustment (IDEA) model was used to estimate R0 and Robs for both data sources.\n\n\nRESULTS\nUsing the digital (smoothed HealthMap) data, we estimated a mean R0 of 2.56 (range 1.42-3.83) and a mean Robs of 1.80 (range 1.42-2.30). The traditional (INS) data yielded a mean R0 of 4.82 (range 2.34-8.32) and a mean Robs of 2.34 (range 1.60-3.31).\n\n\nCONCLUSIONS\nAlthough modeling using the traditional (INS) data yielded higher R estimates than the digital (smoothed HealthMap) data, modeled ranges for Robs were comparable across both data sources. As a result, the narrow range of possible case projections generated by the traditional (INS) data was largely encompassed by the wider range produced by the digital (smoothed HealthMap) data. 
Thus, in the absence of traditional surveillance data, digital surveillance data can yield similar estimates for key transmission parameters and should be utilized in other Zika virus-affected countries to assess outbreak dynamics in near real time." }, { "pmid": "27544795", "title": "Identifying the public's concerns and the Centers for Disease Control and Prevention's reactions during a health crisis: An analysis of a Zika live Twitter chat.", "abstract": "The arrival of the Zika virus in the United States caused much concern among the public because of its ease of transmission and serious consequences for pregnant women and their newborns. We conducted a text analysis to examine original tweets from the public and responses from the Centers for Disease Control and Prevention (CDC) during a live Twitter chat hosted by the CDC. Both the public and the CDC expressed concern about the spread of Zika virus, but the public showed more concern about the consequences it had for women and babies, whereas the CDC focused more on symptoms and education." }, { "pmid": "27566874", "title": "How people react to Zika virus outbreaks on Twitter? A computational content analysis.", "abstract": "Zika-related Twitter incidence peaked after the World Health Organization declared an emergency. Five themes were identified from Zika-related Twitter content: (1) societal impact of the outbreak; (2) government, public and private sector, and general public responses to the outbreak; (3) pregnancy and microcephaly: negative health consequences related to pregnant women and babies; (4) transmission routes; and (5) case reports. User-generated contents sites were preferred direct information channels rather than those of the government authorities." }, { "pmid": "23092060", "title": "Interrater reliability: the kappa statistic.", "abstract": "The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued use of percent agreement due to its inability to account for chance agreement. He introduced the Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, the kappa can range from -1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen's suggested interpretation may be too lenient for health related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested." } ]
BMC Medical Informatics and Decision Making
28673289
PMC5496182
10.1186/s12911-017-0498-1
Semantic relatedness and similarity of biomedical terms: examining the effects of recency, size, and section of biomedical publications on the performance of word2vec
BackgroundUnderstanding semantic relatedness and similarity between biomedical terms has a great impact on a variety of applications such as biomedical information retrieval, information extraction, and recommender systems. The objective of this study is to examine word2vec's ability to derive semantic relatedness and similarity between biomedical terms from large publication data. Specifically, we focus on the effects of recency, size, and section of biomedical publication data on the performance of word2vec.MethodsWe download abstracts of 18,777,129 articles from PubMed and 766,326 full-text articles from PubMed Central (PMC). The datasets are preprocessed and grouped into subsets by recency, size, and section. Word2vec models are trained on these subsets. Cosine similarities between biomedical terms obtained from the word2vec models are compared against reference standards. The performance of models trained on different subsets is compared to examine recency, size, and section effects.ResultsModels trained on recent datasets did not boost the performance. Models trained on larger datasets identified more pairs of biomedical terms than models trained on smaller datasets in the relatedness task (from 368 at the 10% level to 494 at the 100% level) and the similarity task (from 374 at the 10% level to 491 at the 100% level). The model trained on abstracts produced results that have higher correlations with the reference standards than the one trained on article bodies (i.e., 0.65 vs. 0.62 in the similarity task and 0.66 vs. 0.59 in the relatedness task). However, the latter identified more pairs of biomedical terms than the former (i.e., 344 vs. 498 in the similarity task and 339 vs. 503 in the relatedness task).ConclusionsIncreasing the size of the dataset does not always enhance performance. Increasing the size of datasets can result in the identification of more relations between biomedical terms even though it does not guarantee better precision. As summaries of research articles, abstracts excel in accuracy but lose in coverage of identifiable relations compared with article bodies.
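As a rough illustration of the evaluation pipeline in this abstract (training word2vec on a publication subset, scoring term pairs by cosine similarity, and correlating the scores with a reference standard), the sketch below uses gensim; the hyperparameters and data handling are assumptions, not the study's exact settings.

```python
# Illustrative sketch of the evaluation loop: train word2vec on tokenized abstracts and
# correlate model cosine similarities with a human-rated reference standard.
from gensim.models import Word2Vec
from scipy.stats import spearmanr

def evaluate_subset(sentences, reference_pairs):
    """sentences: list of token lists; reference_pairs: [(term1, term2, human_score), ...]."""
    model = Word2Vec(sentences, vector_size=200, window=5, min_count=5, sg=1)
    model_scores, human_scores = [], []
    for t1, t2, score in reference_pairs:
        if t1 in model.wv and t2 in model.wv:          # only pairs the model can identify
            model_scores.append(float(model.wv.similarity(t1, t2)))
            human_scores.append(score)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho, len(model_scores)                      # correlation and coverage
```

Coverage is returned alongside the correlation because, as in the study, a model can only be scored on the term pairs it actually identifies.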
Related workIn this section, we first briefly introduce word2vec and then survey the related work that used word2vec on biomedical publications. These studies primarily focused on the effects of architectures and parameter settings on experimental results. A few empirical studies were identified on how to configure the method to get better performance.
[ "16875881", "19649320", "27195695", "25160253", "27531100" ]
[ { "pmid": "16875881", "title": "Measures of semantic similarity and relatedness in the biomedical domain.", "abstract": "Measures of semantic similarity between concepts are widely used in Natural Language Processing. In this article, we show how six existing domain-independent measures can be adapted to the biomedical domain. These measures were originally based on WordNet, an English lexical database of concepts and relations. In this research, we adapt these measures to the SNOMED-CT ontology of medical concepts. The measures include two path-based measures, and three measures that augment path-based measures with information content statistics from corpora. We also derive a context vector measure based on medical corpora that can be used as a measure of semantic relatedness. These six measures are evaluated against a newly created test bed of 30 medical concept pairs scored by three physicians and nine medical coders. We find that the medical coders and physicians differ in their ratings, and that the context vector measure correlates most closely with the physicians, while the path-based measures and one of the information content measures correlates most closely with the medical coders. We conclude that there is a role both for more flexible measures of relatedness based on information derived from corpora, as well as for measures that rely on existing ontological structures." }, { "pmid": "19649320", "title": "Semantic similarity in biomedical ontologies.", "abstract": "In recent years, ontologies have become a mainstream topic in biomedical research. When biological entities are described using a common schema, such as an ontology, they can be compared by means of their annotations. This type of comparison is called semantic similarity, since it assesses the degree of relatedness between two entities by the similarity in meaning of their annotations. The application of semantic similarity to biomedical ontologies is recent; nevertheless, several studies have been published in the last few years describing and evaluating diverse approaches. Semantic similarity has become a valuable tool for validating the results drawn from biomedical studies such as gene clustering, gene expression data analysis, prediction and validation of molecular interactions, and disease gene prioritization. We review semantic similarity measures applied to biomedical ontologies and propose their classification according to the strategies they employ: node-based versus edge-based and pairwise versus groupwise. We also present comparative assessment studies and discuss the implications of their results. We survey the existing implementations of semantic similarity measures, and we describe examples of applications to biomedical research. This will clarify how biomedical researchers can benefit from semantic similarity measures and help them choose the approach most suitable for their studies.Biomedical ontologies are evolving toward increased coverage, formality, and integration, and their use for annotation is increasingly becoming a focus of both effort by biomedical experts and application of automated annotation procedures to create corpora of higher quality and completeness than are currently available. Given that semantic similarity measures are directly dependent on these evolutions, we can expect to see them gaining more relevance and even becoming as essential as sequence similarity is today in biomedical research." 
}, { "pmid": "27195695", "title": "Identifying Liver Cancer and Its Relations with Diseases, Drugs, and Genes: A Literature-Based Approach.", "abstract": "In biomedicine, scientific literature is a valuable source for knowledge discovery. Mining knowledge from textual data has become an ever important task as the volume of scientific literature is growing unprecedentedly. In this paper, we propose a framework for examining a certain disease based on existing information provided by scientific literature. Disease-related entities that include diseases, drugs, and genes are systematically extracted and analyzed using a three-level network-based approach. A paper-entity network and an entity co-occurrence network (macro-level) are explored and used to construct six entity specific networks (meso-level). Important diseases, drugs, and genes as well as salient entity relations (micro-level) are identified from these networks. Results obtained from the literature-based literature mining can serve to assist clinical applications." }, { "pmid": "25160253", "title": "Exploring the application of deep learning techniques on medical text corpora.", "abstract": "With the rapidly growing amount of biomedical literature it becomes increasingly difficult to find relevant information quickly and reliably. In this study we applied the word2vec deep learning toolkit to medical corpora to test its potential for improving the accessibility of medical knowledge. We evaluated the efficiency of word2vec in identifying properties of pharmaceuticals based on mid-sized, unstructured medical text corpora without any additional background knowledge. Properties included relationships to diseases ('may treat') or physiological processes ('has physiological effect'). We evaluated the relationships identified by word2vec through comparison with the National Drug File - Reference Terminology (NDF-RT) ontology. The results of our first evaluation were mixed, but helped us identify further avenues for employing deep learning technologies in medical information retrieval, as well as using them to complement curated knowledge captured in ontologies and taxonomies." }, { "pmid": "27531100", "title": "Corpus domain effects on distributional semantic modeling of medical terms.", "abstract": "MOTIVATION\nAutomatically quantifying semantic similarity and relatedness between clinical terms is an important aspect of text mining from electronic health records, which are increasingly recognized as valuable sources of phenotypic information for clinical genomics and bioinformatics research. A key obstacle to development of semantic relatedness measures is the limited availability of large quantities of clinical text to researchers and developers outside of major medical centers. Text from general English and biomedical literature are freely available; however, their validity as a substitute for clinical domain to represent semantics of clinical terms remains to be demonstrated.\n\n\nRESULTS\nWe constructed neural network representations of clinical terms found in a publicly available benchmark dataset manually labeled for semantic similarity and relatedness. Similarity and relatedness measures computed from text corpora in three domains (Clinical Notes, PubMed Central articles and Wikipedia) were compared using the benchmark as reference. 
We found that measures computed from full text of biomedical articles in PubMed Central repository (rho = 0.62 for similarity and 0.58 for relatedness) are on par with measures computed from clinical reports (rho = 0.60 for similarity and 0.57 for relatedness). We also evaluated the use of neural network based relatedness measures for query expansion in a clinical document retrieval task and a biomedical term word sense disambiguation task. We found that, with some limitations, biomedical articles may be used in lieu of clinical reports to represent the semantics of clinical terms and that distributional semantic methods are useful for clinical and biomedical natural language processing applications.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe software and reference standards used in this study to evaluate semantic similarity and relatedness measures are publicly available as detailed in the article.\n\n\nCONTACT\[email protected] information: Supplementary data are available at Bioinformatics online." } ]
JMIR Public Health and Surveillance
28642212
PMC5500778
10.2196/publichealth.6577
Filtering Entities to Optimize Identification of Adverse Drug Reaction From Social Media: How Can the Number of Words Between Entities in the Messages Help?
Background: With the increasing popularity of Web 2.0 applications, social media has made it possible for individuals to post messages on adverse drug reactions. In such online conversations, patients discuss their symptoms, medical history, and diseases. These disorders may correspond to adverse drug reactions (ADRs) or any other medical condition. Therefore, methods must be developed to distinguish between false positives and true ADR declarations. Objective: The aim of this study was to investigate a method for filtering out disorder terms that did not correspond to adverse events, using the distance (as a number of words) between the drug term and the disorder or symptom term in the post. We hypothesized that the shorter the distance between the disorder name and the drug, the higher the probability that the pair is an ADR. Methods: We analyzed a corpus of 648 messages corresponding to a total of 1654 (drug, disorder) pairs from 5 French forums using Gaussian mixture models and an expectation-maximization (EM) algorithm. Results: The distribution of the distances between the drug term and the disorder term enabled the filtering of 50.03% (733/1465) of the disorders that were not ADRs. Our filtering strategy achieved a precision of 95.8% and a recall of 50.0%. Conclusions: This study suggests that the distance between terms can be used to identify false positives, thereby improving ADR detection in social media.
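As an illustration of the distance-based filtering described in the abstract above, the following Python sketch fits a two-component Gaussian mixture (estimated with EM, here via scikit-learn) over drug-disorder word distances and keeps the pairs assigned to the short-distance component. It is not the authors' implementation; the candidate pairs, the 0.5 threshold, and the function name are hypothetical.

```python
# A minimal, hypothetical sketch (not the authors' code): fit a two-component
# Gaussian mixture with EM over drug-disorder word distances and keep the
# pairs assigned to the short-distance component, mirroring the hypothesis
# that shorter distances indicate true ADR mentions. Requires scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture

def filter_by_distance(candidate_pairs, keep_threshold=0.5):
    """candidate_pairs: list of (drug, disorder, word_distance) tuples."""
    distances = np.array([[d] for _, _, d in candidate_pairs], dtype=float)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(distances)
    short = int(np.argmin(gmm.means_.ravel()))            # short-distance component
    posterior_short = gmm.predict_proba(distances)[:, short]
    return [pair for pair, p in zip(candidate_pairs, posterior_short) if p >= keep_threshold]

# Invented example pairs: (drug, disorder, word distance in the post)
pairs = [("paroxetine", "nausea", 3), ("ibuprofen", "asthma", 27),
         ("doliprane", "rash", 5), ("metformin", "diabetes", 22)]
print(filter_by_distance(pairs))
```

Labeling the component with the smaller mean as the "likely ADR" cluster is one simple way to operationalize the study's distance hypothesis; other cutoffs on the fitted mixture could be used instead.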
Related Work: The current technological challenges include the difficulty for text mining algorithms to interpret patient lay vocabulary [23]. After reviewing multiple approaches, Sarker et al [9] concluded that, following data collection, filtering was a real challenge. Filtering methods are likely to aid the ADR detection process by removing most irrelevant information. Based on our review of prior research, two types of filtering methods can be used: semantic approaches and statistical approaches. Semantic filtering relies on semantic information, for example negation rules and vocabularies, to identify messages that do not correspond to an ADR declaration. Liu et al [24] developed negation rules and incorporated linguistic and medical knowledge bases into their algorithms to filter out negated ADRs, and then removed drug indications and non- and unreported cases in the FAERS (FDA's Adverse Event Reporting System) database. In their use case of 1822 discussions about beta blockers, 71% of the related medical events were adverse drug events, 20% were drug indications, and 9% were negated adverse drug events. Powell et al [25] developed "Social Media Listening," a tool to augment postmarketing safety surveillance. This tool consisted of the removal of questionable Internet pharmacy advertisements (named "Junk"), posts in which a drug was discussed (named "Mention"), posts in which a potential event was discussed (called "Proto-AE"), and any type of medical interaction description (called "Health System Interaction"). Their study revealed that only 26% of the considered posts contained relevant information. The distribution of post classifications by social media source varied considerably among drugs. Between 11% (7/63) and 50.5% (100/198) of the posts contained Proto-AEs (between 3.2% (4/123) and 33.64% (726/2158) for over-the-counter products). The final step was a manual evaluation. The second type of filtering was based on statistical approaches using topic models [26]. Yang et al [27] used latent Dirichlet allocation probabilistic modeling [28] to filter topics and thereby reduce the dataset to a cluster of posts evoking an ADR declaration. This method was evaluated by comparison against 4 benchmark methods (example adaption for text categorization [EAT], positive examples and negative examples labeling heuristics [PNLH], active semisupervised clustering based two-stage text classification [ACTC], and Laplacian SVM) and the calculation of F scores (the harmonic mean of precision and recall) on ADR posts. These 4 methods were improved by the use of this approach, with F score gains ranging from 1.94% to 6.14%. Sarker and Gonzalez [16] improved their ADR detection method by using different features for filtering. These features were selected using leave-one-out classification scores and were evaluated with accuracy and F scores. The features were based on n-grams (accuracy 82.6%, F score 0.654), tf-idf values for semantic types (accuracy 82.6%, F score 0.652), sentence polarity (accuracy 84.0%, F score 0.669), positive or negative outcome (accuracy 83.9%, F score 0.665), ADR lexicon match (accuracy 83.5%, F score 0.659), sentiment analysis of posts (accuracy 82.0%), and filtering by topics (accuracy 83.7%, F score 0.670) for filtering posts without mention of ADRs. Using all features for the filtering process yielded an accuracy of 83.6% and an F score of 0.678. Bian et al [29] used SVMs to filter noise in tweets.
Their motivation for classifying tweets arose from the fact that most posts were not associated with ADRs; thus, filtering out nonrelevant posts was crucial. Wei et al [30] performed automatic chemical-disease relation extraction on a corpus of PubMed articles. Their process was divided into two subtasks. The first was a disease named entity recognition (DNER) subtask based on 1500 PubMed titles and abstracts. The second was a chemical-induced disease (CID) relation extraction subtask (on the same corpus as the DNER subtask). Chemicals and diseases were described using the Medical Subject Headings (MeSH) controlled vocabulary. They evaluated several approaches and obtained an average precision, recall, and standard F score of 78.99%, 74.81%, and 76.03%, respectively, for the DNER step, and an average F score of 43.37% for the CID step. The best result for the CID step was obtained by combining two SVM approaches.
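Several of the reviewed systems filter posts with supervised classifiers built on n-gram and tf-idf features. The following minimal sketch, which is not any of the cited implementations, shows how such a filter could be assembled with scikit-learn; the example posts and labels are invented for illustration.

```python
# A minimal, hypothetical sketch of the n-gram/tf-idf + SVM style of post
# filtering used by several of the reviewed systems (not any cited system).
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

posts = [
    "this drug gave me terrible headaches and nausea",      # mentions an ADR
    "where can i buy this medication online cheaply",       # noise
    "started the treatment last week , dizzy ever since",   # mentions an ADR
    "my doctor prescribed it for hypertension last month",  # drug mention only
]
labels = [1, 0, 1, 0]  # 1 = potential ADR post, 0 = post to be filtered out

adr_filter = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("svm", LinearSVC()),
])
adr_filter.fit(posts, labels)
print(adr_filter.predict(["been on it two days and now i have a rash"]))
```

In practice such a classifier would be trained on thousands of annotated posts and combined with the semantic or distance-based filters described above rather than used on its own.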
[ "7791255", "9002492", "22549283", "25005606", "20658130", "12608885", "25895907", "25720841", "26271492", "26163365", "21820083", "25451103", "24559132", "24304185", "25151493", "25755127", "26147850", "26518315", "26798054", "25688695", "26994911", "20679242" ]
[ { "pmid": "7791255", "title": "Incidence of adverse drug events and potential adverse drug events. Implications for prevention. ADE Prevention Study Group.", "abstract": "OBJECTIVES\nTo assess incidence and preventability of adverse drug events (ADEs) and potential ADEs. To analyze preventable events to develop prevention strategies.\n\n\nDESIGN\nProspective cohort study.\n\n\nPARTICIPANTS\nAll 4031 adult admissions to a stratified random sample of 11 medical and surgical units in two tertiary care hospitals over a 6-month period. Units included two medical and three surgical intensive care units and four medical and two surgical general care units.\n\n\nMAIN OUTCOME MEASURES\nAdverse drug events and potential ADEs.\n\n\nMETHODS\nIncidents were detected by stimulated self-report by nurses and pharmacists and by daily review of all charts by nurse investigators. Incidents were subsequently classified by two independent reviewers as to whether they represented ADEs or potential ADEs and as to severity and preventability.\n\n\nRESULTS\nOver 6 months, 247 ADEs and 194 potential ADEs were identified. Extrapolated event rates were 6.5 ADEs and 5.5 potential ADEs per 100 nonobstetrical admissions, for mean numbers per hospital per year of approximately 1900 ADEs and 1600 potential ADEs. Of all ADEs, 1% were fatal (none preventable), 12% life-threatening, 30% serious, and 57% significant. Twenty-eight percent were judged preventable. Of the life-threatening and serious ADEs, 42% were preventable, compared with 18% of significant ADEs. Errors resulting in preventable ADEs occurred most often at the stages of ordering (56%) and administration (34%); transcription (6%) and dispensing errors (4%) were less common. Errors were much more likely to be intercepted if the error occurred earlier in the process: 48% at the ordering stage vs 0% at the administration stage.\n\n\nCONCLUSION\nAdverse drug events were common and often preventable; serious ADEs were more likely to be preventable. Most resulted from errors at the ordering stage, but many also occurred at the administration stage. Prevention strategies should target both stages of the drug delivery process." }, { "pmid": "9002492", "title": "Adverse drug events in hospitalized patients. Excess length of stay, extra costs, and attributable mortality.", "abstract": "OBJECTIVE\nTo determine the excess length of stay, extra costs, and mortality attributable to adverse drug events (ADEs) in hospitalized patients.\n\n\nDESIGN\nMatched case-control study.\n\n\nSETTING\nThe LDS Hospital, a tertiary care health care institution.\n\n\nPATIENTS\nAll patients admitted to LDS Hospital from January 1, 1990, to December 31, 1993, were eligible. Cases were defined as patients with ADEs that occurred during hospitalization; controls were selected according to matching variables in a stepwise fashion.\n\n\nMETHODS\nControls were matched to cases on primary discharge diagnosis related group (DRG), age, sex, acuity, and year of admission; varying numbers of controls were matched to each case. Matching was successful for 71% of the cases, leading to 1580 cases and 20,197 controls.\n\n\nMAIN OUTCOME MEASURES\nCrude and attributable mortality, crude and attributable length of stay, and cost of hospitalization.\n\n\nRESULTS\nADEs complicated 2.43 per 100 admissions to the LDS Hospital during the study period. The crude mortality rates for the cases and matched controls were 3.5% and 1.05%, respectively (P<.001). 
The mean length of hospital stay significantly differed between the cases and matched controls (7.69 vs 4.46 days; P<.001) as did the mean cost of hospitalization ($10,010 vs $5355; P<.001). The extra length of hospital stay attributable to an ADE was 1.74 days (P<.001). The excess cost of hospitalization attributable to an ADE was $2013 (P<.001). A linear regression analysis for length of stay and cost controlling for all matching variables revealed that the occurrence of an ADE was associated with increased length of stay of 1.91 days and an increased cost of $2262 (P<.001). In a similar logistic regression analysis for mortality, the increased risk of death among patients experiencing an ADE was 1.88 (95% confidence interval, 1.54-2.22; P<.001).\n\n\nCONCLUSION\nThe attributable lengths of stay and costs of hospitalization for ADEs are substantial. An ADE is associated with a significantly prolonged length of stay, increased economic burden, and an almost 2-fold increased risk of death." }, { "pmid": "22549283", "title": "Novel data-mining methodologies for adverse drug event discovery and analysis.", "abstract": "An important goal of the health system is to identify new adverse drug events (ADEs) in the postapproval period. Datamining methods that can transform data into meaningful knowledge to inform patient safety have proven essential for this purpose. New opportunities have emerged to harness data sources that have not been used within the traditional framework. This article provides an overview of recent methodological innovations and data sources used to support ADE discovery and analysis." }, { "pmid": "25005606", "title": "The influence of social networking sites on health behavior change: a systematic review and meta-analysis.", "abstract": "OBJECTIVE\nOur aim was to evaluate the use and effectiveness of interventions using social networking sites (SNSs) to change health behaviors.\n\n\nMATERIALS AND METHODS\nFive databases were scanned using a predefined search strategy. Studies were included if they focused on patients/consumers, involved an SNS intervention, had an outcome related to health behavior change, and were prospective. Studies were screened by independent investigators, and assessed using Cochrane's 'risk of bias' tool. Randomized controlled trials were pooled in a meta-analysis.\n\n\nRESULTS\nThe database search retrieved 4656 citations; 12 studies (7411 participants) met the inclusion criteria. Facebook was the most utilized SNS, followed by health-specific SNSs, and Twitter. Eight randomized controlled trials were combined in a meta-analysis. A positive effect of SNS interventions on health behavior outcomes was found (Hedges' g 0.24; 95% CI 0.04 to 0.43). There was considerable heterogeneity (I(2) = 84.0%; T(2) = 0.058) and no evidence of publication bias.\n\n\nDISCUSSION\nTo the best of our knowledge, this is the first meta-analysis evaluating the effectiveness of SNS interventions in changing health-related behaviors. Most studies evaluated multi-component interventions, posing problems in isolating the specific effect of the SNS. Health behavior change theories were seldom mentioned in the included articles, but two particularly innovative studies used 'network alteration', showing a positive effect. 
Overall, SNS interventions appeared to be effective in promoting changes in health-related behaviors, and further research regarding the application of these promising tools is warranted.\n\n\nCONCLUSIONS\nOur study showed a positive effect of SNS interventions on health behavior-related outcomes, but there was considerable heterogeneity. Protocol registration The protocol for this systematic review is registered at http://www.crd.york.ac.uk/PROSPERO with the number CRD42013004140." }, { "pmid": "20658130", "title": "Motives for reporting adverse drug reactions by patient-reporters in the Netherlands.", "abstract": "AIM\nThe aim of this study was to quantify the reasons and opinions of patients who reported adverse drug reactions (ADRs) in the Netherlands to a pharmacovigilance centre.\n\n\nMETHOD\nA web-based questionnaire was sent to 1370 patients who had previously reported an ADR to a pharmacovigilance centre. The data were analysed using descriptive statistics, χ(2) tests and Spearman's correlation coefficients.\n\n\nRESULTS\nThe response rate was 76.5% after one reminder. The main reasons for patients to report ADRs were to share their experiences (89% agreed or strongly agreed), the severity of the reaction (86% agreed or strongly agreed to the statement), worries about their own situation (63.2% agreed or strongly agreed) and the fact the ADR was not mentioned in the patient information leaflet (57.6% agreed or strongly agreed). Of the patient-responders, 93.8% shared the opinion that reporting an ADR can prevent harm to other people, 97.9% believed that reporting contributes to research and knowledge, 90.7% stated that they felt responsible for reporting an ADR and 92.5% stated that they will report a possible ADR once again in the future.\n\n\nCONCLUSION\nThe main motives for patients to report their ADRs to a pharmacovigilance centre were the severity of the ADR and their need to share experiences. The high level of response to the questionnaire shows that patients are involved when it comes to ADRs and that they are also willing to share their motivations for and opinions about the reporting of ADRs with a pharmacovigilance centre." }, { "pmid": "12608885", "title": "Consumer adverse drug reaction reporting: a new step in pharmacovigilance?", "abstract": "The direct reporting of adverse drug reactions by patients is becoming an increasingly important topic for discussion in the world of pharmacovigilance. At this time, few countries accept consumer reports. We present an overview of experiences with consumer reporting in various countries of the world. The potential contribution of patient reports of adverse drug reactions is discussed, both in terms of their qualitative and quantitative contribution. The crucial question is one of whether patient reports will increase the number and quality of the reports submitted and/or lead to a more timely detection of signals of possible adverse reactions, thus contributing to an enhancement of the existing methods of drug safety monitoring. To date, the data available are insufficient to establish such added value." }, { "pmid": "25895907", "title": "A new source of data for public health surveillance: Facebook likes.", "abstract": "BACKGROUND\nInvestigation into personal health has become focused on conditions at an increasingly local level, while response rates have declined and complicated the process of collecting data at an individual level. 
Simultaneously, social media data have exploded in availability and have been shown to correlate with the prevalence of certain health conditions.\n\n\nOBJECTIVE\nFacebook likes may be a source of digital data that can complement traditional public health surveillance systems and provide data at a local level. We explored the use of Facebook likes as potential predictors of health outcomes and their behavioral determinants.\n\n\nMETHODS\nWe performed principal components and regression analyses to examine the predictive qualities of Facebook likes with regard to mortality, diseases, and lifestyle behaviors in 214 counties across the United States and 61 of 67 counties in Florida. These results were compared with those obtainable from a demographic model. Health data were obtained from both the 2010 and 2011 Behavioral Risk Factor Surveillance System (BRFSS) and mortality data were obtained from the National Vital Statistics System.\n\n\nRESULTS\nFacebook likes added significant value in predicting most examined health outcomes and behaviors even when controlling for age, race, and socioeconomic status, with model fit improvements (adjusted R(2)) of an average of 58% across models for 13 different health-related metrics over basic sociodemographic models. Small area data were not available in sufficient abundance to test the accuracy of the model in estimating health conditions in less populated markets, but initial analysis using data from Florida showed a strong model fit for obesity data (adjusted R(2)=.77).\n\n\nCONCLUSIONS\nFacebook likes provide estimates for examined health outcomes and health behaviors that are comparable to those obtained from the BRFSS. Online sources may provide more reliable, timely, and cost-effective county-level data than that obtainable from traditional public health surveillance systems as well as serve as an adjunct to those systems." }, { "pmid": "25720841", "title": "Utilizing social media data for pharmacovigilance: A review.", "abstract": "OBJECTIVE\nAutomatic monitoring of Adverse Drug Reactions (ADRs), defined as adverse patient outcomes caused by medications, is a challenging research problem that is currently receiving significant attention from the medical informatics community. In recent years, user-posted data on social media, primarily due to its sheer volume, has become a useful resource for ADR monitoring. Research using social media data has progressed using various data sources and techniques, making it difficult to compare distinct systems and their performances. In this paper, we perform a methodical review to characterize the different approaches to ADR detection/extraction from social media, and their applicability to pharmacovigilance. In addition, we present a potential systematic pathway to ADR monitoring from social media.\n\n\nMETHODS\nWe identified studies describing approaches for ADR detection from social media from the Medline, Embase, Scopus and Web of Science databases, and the Google Scholar search engine. Studies that met our inclusion criteria were those that attempted to extract ADR information posted by users on any publicly available social media platform. We categorized the studies according to different characteristics such as primary ADR detection approach, size of corpus, data source(s), availability, and evaluation criteria.\n\n\nRESULTS\nTwenty-two studies met our inclusion criteria, with fifteen (68%) published within the last two years. 
However, publicly available annotated data is still scarce, and we found only six studies that made the annotations used publicly available, making system performance comparisons difficult. In terms of algorithms, supervised classification techniques to detect posts containing ADR mentions, and lexicon-based approaches for extraction of ADR mentions from texts have been the most popular.\n\n\nCONCLUSION\nOur review suggests that interest in the utilization of the vast amounts of available social media data for ADR monitoring is increasing. In terms of sources, both health-related and general social media data have been used for ADR detection-while health-related sources tend to contain higher proportions of relevant data, the volume of data from general social media websites is significantly higher. There is still very limited amount of annotated data publicly available , and, as indicated by the promising results obtained by recent supervised learning approaches, there is a strong need to make such data available to the research community." }, { "pmid": "26271492", "title": "Systematic review on the prevalence, frequency and comparative value of adverse events data in social media.", "abstract": "AIM\nThe aim of this review was to summarize the prevalence, frequency and comparative value of information on the adverse events of healthcare interventions from user comments and videos in social media.\n\n\nMETHODS\nA systematic review of assessments of the prevalence or type of information on adverse events in social media was undertaken. Sixteen databases and two internet search engines were searched in addition to handsearching, reference checking and contacting experts. The results were sifted independently by two researchers. Data extraction and quality assessment were carried out by one researcher and checked by a second. The quality assessment tool was devised in-house and a narrative synthesis of the results followed.\n\n\nRESULTS\nFrom 3064 records, 51 studies met the inclusion criteria. The studies assessed over 174 social media sites with discussion forums (71%) being the most popular. The overall prevalence of adverse events reports in social media varied from 0.2% to 8% of posts. Twenty-nine studies compared the results from searching social media with using other data sources to identify adverse events. There was general agreement that a higher frequency of adverse events was found in social media and that this was particularly true for 'symptom' related and 'mild' adverse events. Those adverse events that were under-represented in social media were laboratory-based and serious adverse events.\n\n\nCONCLUSIONS\nReports of adverse events are identifiable within social media. However, there is considerable heterogeneity in the frequency and type of events reported, and the reliability or validity of the data has not been thoroughly evaluated." }, { "pmid": "26163365", "title": "Adverse Drug Reaction Identification and Extraction in Social Media: A Scoping Review.", "abstract": "BACKGROUND\nThe underreporting of adverse drug reactions (ADRs) through traditional reporting channels is a limitation in the efficiency of the current pharmacovigilance system. 
Patients' experiences with drugs that they report on social media represent a new source of data that may have some value in postmarketing safety surveillance.\n\n\nOBJECTIVE\nA scoping review was undertaken to explore the breadth of evidence about the use of social media as a new source of knowledge for pharmacovigilance.\n\n\nMETHODS\nDaubt et al's recommendations for scoping reviews were followed. The research questions were as follows: How can social media be used as a data source for postmarketing drug surveillance? What are the available methods for extracting data? What are the different ways to use these data? We queried PubMed, Embase, and Google Scholar to extract relevant articles that were published before June 2014 and with no lower date limit. Two pairs of reviewers independently screened the selected studies and proposed two themes of review: manual ADR identification (theme 1) and automated ADR extraction from social media (theme 2). Descriptive characteristics were collected from the publications to create a database for themes 1 and 2.\n\n\nRESULTS\nOf the 1032 citations from PubMed and Embase, 11 were relevant to the research question. An additional 13 citations were added after further research on the Internet and in reference lists. Themes 1 and 2 explored 11 and 13 articles, respectively. Ways of approaching the use of social media as a pharmacovigilance data source were identified.\n\n\nCONCLUSIONS\nThis scoping review noted multiple methods for identifying target data, extracting them, and evaluating the quality of medical information from social media. It also showed some remaining gaps in the field. Studies related to the identification theme usually failed to accurately assess the completeness, quality, and reliability of the data that were analyzed from social media. Regarding extraction, no study proposed a generic approach to easily adding a new site or data source. Additional studies are required to precisely determine the role of social media in the pharmacovigilance system." }, { "pmid": "21820083", "title": "Identifying potential adverse effects using the web: a new approach to medical hypothesis generation.", "abstract": "Medical message boards are online resources where users with a particular condition exchange information, some of which they might not otherwise share with medical providers. Many of these boards contain a large number of posts and contain patient opinions and experiences that would be potentially useful to clinicians and researchers. We present an approach that is able to collect a corpus of medical message board posts, de-identify the corpus, and extract information on potential adverse drug effects discussed by users. Using a corpus of posts to breast cancer message boards, we identified drug event pairs using co-occurrence statistics. We then compared the identified drug event pairs with adverse effects listed on the package labels of tamoxifen, anastrozole, exemestane, and letrozole. Of the pairs identified by our system, 75-80% were documented on the drug labels. Some of the undocumented pairs may represent previously unidentified adverse drug effects." }, { "pmid": "25451103", "title": "Portable automatic text classification for adverse drug reaction detection via multi-corpus training.", "abstract": "OBJECTIVE\nAutomatic detection of adverse drug reaction (ADR) mentions from text has recently received significant interest in pharmacovigilance research. 
Current research focuses on various sources of text-based information, including social media-where enormous amounts of user posted data is available, which have the potential for use in pharmacovigilance if collected and filtered accurately. The aims of this study are: (i) to explore natural language processing (NLP) approaches for generating useful features from text, and utilizing them in optimized machine learning algorithms for automatic classification of ADR assertive text segments; (ii) to present two data sets that we prepared for the task of ADR detection from user posted internet data; and (iii) to investigate if combining training data from distinct corpora can improve automatic classification accuracies.\n\n\nMETHODS\nOne of our three data sets contains annotated sentences from clinical reports, and the two other data sets, built in-house, consist of annotated posts from social media. Our text classification approach relies on generating a large set of features, representing semantic properties (e.g., sentiment, polarity, and topic), from short text nuggets. Importantly, using our expanded feature sets, we combine training data from different corpora in attempts to boost classification accuracies.\n\n\nRESULTS\nOur feature-rich classification approach performs significantly better than previously published approaches with ADR class F-scores of 0.812 (previously reported best: 0.770), 0.538 and 0.678 for the three data sets. Combining training data from multiple compatible corpora further improves the ADR F-scores for the in-house data sets to 0.597 (improvement of 5.9 units) and 0.704 (improvement of 2.6 units) respectively.\n\n\nCONCLUSIONS\nOur research results indicate that using advanced NLP techniques for generating information rich features from text can significantly improve classification accuracies over existing benchmarks. Our experiments illustrate the benefits of incorporating various semantic features such as topics, concepts, sentiments, and polarities. Finally, we show that integration of information from compatible corpora can significantly improve classification performance. This form of multi-corpus training may be particularly useful in cases where data sets are heavily imbalanced (e.g., social media data), and may reduce the time and costs associated with the annotation of data in the future." }, { "pmid": "24559132", "title": "A pipeline to extract drug-adverse event pairs from multiple data sources.", "abstract": "BACKGROUND\nPharmacovigilance aims to uncover and understand harmful side-effects of drugs, termed adverse events (AEs). Although the current process of pharmacovigilance is very systematic, the increasing amount of information available in specialized health-related websites as well as the exponential growth in medical literature presents a unique opportunity to supplement traditional adverse event gathering mechanisms with new-age ones.\n\n\nMETHOD\nWe present a semi-automated pipeline to extract associations between drugs and side effects from traditional structured adverse event databases, enhanced by potential drug-adverse event pairs mined from user-comments from health-related websites and MEDLINE abstracts. The pipeline was tested using a set of 12 drugs representative of two previous studies of adverse event extraction from health-related websites and MEDLINE abstracts.\n\n\nRESULTS\nTesting the pipeline shows that mining non-traditional sources helps substantiate the adverse event databases. 
The non-traditional sources not only contain the known AEs, but also suggest some unreported AEs for drugs which can then be analyzed further.\n\n\nCONCLUSION\nA semi-automated pipeline to extract the AE pairs from adverse event databases as well as potential AE pairs from non-traditional sources such as text from MEDLINE abstracts and user-comments from health-related websites is presented." }, { "pmid": "24304185", "title": "Analysis of patients' narratives posted on social media websites on benfluorex's (Mediator® ) withdrawal in France.", "abstract": "WHAT IS KNOWN AND OBJECTIVE\nWebsites and discussion lists on health issues are among the most popular resources on the Web. Use experience reported on social media websites may provide useful information on drugs and their adverse reactions (ADRs). Clear communication on the benefit/harm balance of drugs is important to inform proper use of drugs. Some data have shown that communication (advisories or warnings) is difficult. This study aimed to explore the Internet as a source of data on patients' perception of risk associated with benfluorex and the impact of wider media coverage.\n\n\nMETHODS\nThree French websites were selected: Doctissimo, Atoute.org considered the best-known and visited website in France for health questions and Vivelesrondes (Long live the Tubbies) for overweight people. Three periods were chosen: (1) before November 2009 (i.e. before benfluorex withdrawal), (2) between November 2009 and November 2010 (when the risk of valvulopathy with benfluorex appeared in social media) and (3) after November 2010.\n\n\nRESULTS AND DISCUSSION\nTwo hundred twenty initial postings were analysed. These lead to 660 secondary postings which were analysed separately. In period 1, 114 initial postings were analysed, mostly concerning efficacy of the drug (72%). In period 2, 42 initial postings were analysed involving mainly ADRs or warnings (73%). In period 3, 64 initial postings were analysed; most frequent expressing anger directed at the healthcare system (58%) and anxiety about cardiovascular ADRs (30%). Online consumer postings showed that there were drastic changes in consumers' perceptions following media coverage.\n\n\nWHAT IS NEW AND CONCLUSION\nThis study suggests that analysis of website data can inform on drug ADRs. Social media are important for communicating information on drug ADRs and for assessing consumer behaviour and their risk perception." }, { "pmid": "25151493", "title": "Text mining for adverse drug events: the promise, challenges, and state of the art.", "abstract": "Text mining is the computational process of extracting meaningful information from large amounts of unstructured text. It is emerging as a tool to leverage underutilized data sources that can improve pharmacovigilance, including the objective of adverse drug event (ADE) detection and assessment. This article provides an overview of recent advances in pharmacovigilance driven by the application of text mining, and discusses several data sources-such as biomedical literature, clinical narratives, product labeling, social media, and Web search logs-that are amenable to text mining for pharmacovigilance. Given the state of the art, it appears text mining can be applied to extract useful ADE-related information from multiple textual sources. 
Nonetheless, further research is required to address remaining technical challenges associated with the text mining methodologies, and to conclusively determine the relative contribution of each textual source to improving pharmacovigilance." }, { "pmid": "25755127", "title": "Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features.", "abstract": "OBJECTIVE\nSocial media is becoming increasingly popular as a platform for sharing personal health-related information. This information can be utilized for public health monitoring tasks, particularly for pharmacovigilance, via the use of natural language processing (NLP) techniques. However, the language in social media is highly informal, and user-expressed medical concepts are often nontechnical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and thus far, advanced machine learning-based NLP techniques have been underutilized. Our objective is to design a machine learning-based approach to extract mentions of adverse drug reactions (ADRs) from highly informal text in social media.\n\n\nMETHODS\nWe introduce ADRMine, a machine learning-based concept extraction system that uses conditional random fields (CRFs). ADRMine utilizes a variety of features, including a novel feature for modeling words' semantic similarities. The similarities are modeled by clustering words based on unsupervised, pretrained word representation vectors (embeddings) generated from unlabeled user posts in social media using a deep learning technique.\n\n\nRESULTS\nADRMine outperforms several strong baseline systems in the ADR extraction task by achieving an F-measure of 0.82. Feature analysis demonstrates that the proposed word cluster features significantly improve extraction performance.\n\n\nCONCLUSION\nIt is possible to extract complex medical concepts, with relatively high performance, from informal, user-generated content. Our approach is particularly scalable, suitable for social media mining, as it relies on large volumes of unlabeled data, thus diminishing the need for large, annotated training data sets." }, { "pmid": "26147850", "title": "Social media and pharmacovigilance: A review of the opportunities and challenges.", "abstract": "Adverse drug reactions come at a considerable cost on society. Social media are a potentially invaluable reservoir of information for pharmacovigilance, yet their true value remains to be fully understood. In order to realize the benefits social media holds, a number of technical, regulatory and ethical challenges remain to be addressed. We outline these key challenges identifying relevant current research and present possible solutions." }, { "pmid": "26518315", "title": "A research framework for pharmacovigilance in health social media: Identification and evaluation of patient adverse drug event reports.", "abstract": "Social media offer insights of patients' medical problems such as drug side effects and treatment failures. Patient reports of adverse drug events from social media have great potential to improve current practice of pharmacovigilance. However, extracting patient adverse drug event reports from social media continues to be an important challenge for health informatics research. In this study, we develop a research framework with advanced natural language processing techniques for integrated and high-performance patient reported adverse drug event extraction. 
The framework consists of medical entity extraction for recognizing patient discussions of drug and events, adverse drug event extraction with shortest dependency path kernel based statistical learning method and semantic filtering with information from medical knowledge bases, and report source classification to tease out noise. To evaluate the proposed framework, a series of experiments were conducted on a test bed encompassing about postings from major diabetes and heart disease forums in the United States. The results reveal that each component of the framework significantly contributes to its overall effectiveness. Our framework significantly outperforms prior work." }, { "pmid": "26798054", "title": "Social Media Listening for Routine Post-Marketing Safety Surveillance.", "abstract": "INTRODUCTION\nPost-marketing safety surveillance primarily relies on data from spontaneous adverse event reports, medical literature, and observational databases. Limitations of these data sources include potential under-reporting, lack of geographic diversity, and time lag between event occurrence and discovery. There is growing interest in exploring the use of social media ('social listening') to supplement established approaches for pharmacovigilance. Although social listening is commonly used for commercial purposes, there are only anecdotal reports of its use in pharmacovigilance. Health information posted online by patients is often publicly available, representing an untapped source of post-marketing safety data that could supplement data from existing sources.\n\n\nOBJECTIVES\nThe objective of this paper is to describe one methodology that could help unlock the potential of social media for safety surveillance.\n\n\nMETHODS\nA third-party vendor acquired 24 months of publicly available Facebook and Twitter data, then processed the data by standardizing drug names and vernacular symptoms, removing duplicates and noise, masking personally identifiable information, and adding supplemental data to facilitate the review process. The resulting dataset was analyzed for safety and benefit information.\n\n\nRESULTS\nIn Twitter, a total of 6,441,679 Medical Dictionary for Regulatory Activities (MedDRA(®)) Preferred Terms (PTs) representing 702 individual PTs were discussed in the same post as a drug compared with 15,650,108 total PTs representing 946 individual PTs in Facebook. Further analysis revealed that 26 % of posts also contained benefit information.\n\n\nCONCLUSION\nSocial media listening is an important tool to augment post-marketing safety surveillance. Much work remains to determine best practices for using this rapidly evolving data source." }, { "pmid": "25688695", "title": "Filtering big data from social media--Building an early warning system for adverse drug reactions.", "abstract": "OBJECTIVES\nAdverse drug reactions (ADRs) are believed to be a leading cause of death in the world. Pharmacovigilance systems are aimed at early detection of ADRs. With the popularity of social media, Web forums and discussion boards become important sources of data for consumers to share their drug use experience, as a result may provide useful information on drugs and their adverse reactions. In this study, we propose an automated ADR related posts filtering mechanism using text classification methods. In real-life settings, ADR related messages are highly distributed in social media, while non-ADR related messages are unspecific and topically diverse. 
It is expensive to manually label a large amount of ADR related messages (positive examples) and non-ADR related messages (negative examples) to train classification systems. To mitigate this challenge, we examine the use of a partially supervised learning classification method to automate the process.\n\n\nMETHODS\nWe propose a novel pharmacovigilance system leveraging a Latent Dirichlet Allocation modeling module and a partially supervised classification approach. We select drugs with more than 500 threads of discussion, and collect all the original posts and comments of these drugs using an automatic Web spidering program as the text corpus. Various classifiers were trained by varying the number of positive examples and the number of topics. The trained classifiers were applied to 3000 posts published over 60 days. Top-ranked posts from each classifier were pooled and the resulting set of 300 posts was reviewed by a domain expert to evaluate the classifiers.\n\n\nRESULTS\nCompare to the alternative approaches using supervised learning methods and three general purpose partially supervised learning methods, our approach performs significantly better in terms of precision, recall, and the F measure (the harmonic mean of precision and recall), based on a computational experiment using online discussion threads from Medhelp.\n\n\nCONCLUSIONS\nOur design provides satisfactory performance in identifying ADR related posts for post-marketing drug surveillance. The overall design of our system also points out a potentially fruitful direction for building other early warning systems that need to filter big data from social media networks." }, { "pmid": "26994911", "title": "Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task.", "abstract": "Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task--a result that approaches the human inter-annotator agreement (0.8875)--and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. 
Additionally, another novel aspect of our evaluation is to test each participating system's ability to return real-time results: the average response time for each team's DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machining learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/." }, { "pmid": "20679242", "title": "Discovery of drug mode of action and drug repositioning from transcriptional responses.", "abstract": "A bottleneck in drug discovery is the identification of the molecular targets of a compound (mode of action, MoA) and of its off-target effects. Previous approaches to elucidate drug MoA include analysis of chemical structures, transcriptional responses following treatment, and text mining. Methods based on transcriptional responses require the least amount of information and can be quickly applied to new compounds. Available methods are inefficient and are not able to support network pharmacology. We developed an automatic and robust approach that exploits similarity in gene expression profiles following drug treatment, across multiple cell lines and dosages, to predict similarities in drug effect and MoA. We constructed a \"drug network\" of 1,302 nodes (drugs) and 41,047 edges (indicating similarities between pair of drugs). We applied network theory, partitioning drugs into groups of densely interconnected nodes (i.e., communities). These communities are significantly enriched for compounds with similar MoA, or acting on the same pathway, and can be used to identify the compound-targeted biological pathways. New compounds can be integrated into the network to predict their therapeutic and off-target effects. Using this network, we correctly predicted the MoA for nine anticancer compounds, and we were able to discover an unreported effect for a well-known drug. We verified an unexpected similarity between cyclin-dependent kinase 2 inhibitors and Topoisomerase inhibitors. We discovered that Fasudil (a Rho-kinase inhibitor) might be \"repositioned\" as an enhancer of cellular autophagy, potentially applicable to several neurodegenerative disorders. Our approach was implemented in a tool (Mode of Action by NeTwoRk Analysis, MANTRA, http://mantra.tigem.it)." } ]
BMC Medical Informatics and Decision Making
28699564
PMC5506580
10.1186/s12911-017-0464-y
Detecting clinically relevant new information in clinical notes across specialties and settings
Background: Automated methods for identifying clinically relevant new versus redundant information in electronic health record (EHR) clinical notes are useful for clinicians and researchers involved in patient care and clinical research, respectively. We evaluated methods to automatically identify clinically relevant new information in clinical notes, and compared the quantity of redundant information across specialties and clinical settings. Methods: Statistical language models augmented with semantic similarity measures were evaluated as a means to detect and quantify clinically relevant new and redundant information over longitudinal clinical notes for a given patient. A corpus of 591 progress notes over 40 inpatient admissions was annotated for new information longitudinally by physicians to generate a reference standard. Note redundancy between various specialties was evaluated on 71,021 outpatient notes and 64,695 inpatient notes from 500 solid organ transplant patients (April 2015 through August 2015). Results: Our best method achieved a performance of 0.87 recall, 0.62 precision, and 0.72 F-measure. Compared to the baseline, the addition of semantic similarity metrics improved recall but otherwise resulted in similar performance. While outpatient and inpatient notes had relatively similar levels of high redundancy (61% and 68%, respectively), redundancy differed by author specialty, with mean redundancy of 75%, 66%, 57%, and 55% observed in pediatric, internal medicine, psychiatry, and surgical notes, respectively. Conclusions: Automated techniques with statistical language models for detecting redundant versus clinically relevant new information in clinical notes do not improve with the addition of semantic similarity measures. While levels of redundancy seem relatively similar in the inpatient and ambulatory settings at Fairview Health Services, clinical note redundancy appears to vary significantly across medical specialties.
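To make the language-model idea in the abstract concrete, the sketch below scores sentences of a target note with a smoothed bigram model trained on a patient's prior notes, treating low-probability sentences as candidate new information. It is a simplified stand-in for the paper's method, without the semantic similarity augmentation, and the example notes are invented.

```python
# A simplified, hypothetical sketch (not the paper's implementation): score
# sentences of a target note with an add-one-smoothed bigram language model
# built from a patient's prior notes; low average log-probability suggests
# candidate new information.
import math
from collections import Counter

def train_bigram_lm(prior_text):
    tokens = prior_text.lower().split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams) + 1

    def avg_logprob(sentence):
        toks = sentence.lower().split()
        pairs = list(zip(toks, toks[1:]))
        if not pairs:
            return 0.0
        lp = sum(math.log((bigrams[p] + 1) / (unigrams[p[0]] + vocab_size)) for p in pairs)
        return lp / len(pairs)

    return avg_logprob

prior_notes = "patient stable on lisinopril . blood pressure well controlled . continue current regimen ."
score = train_bigram_lm(prior_notes)
for sentence in ["blood pressure well controlled .",
                 "new onset chest pain overnight , ecg ordered ."]:
    # lower score -> less predictable from prior notes -> more likely new information
    print(round(score(sentence), 2), sentence)
```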
Related work: A number of approaches for quantifying redundancy in clinical notes have previously been reported. For example, Weir et al. manually reviewed 1,891 notes in the Salt Lake City Veterans Affairs (VA) health care system and found that approximately 20% of notes contained copied text [11]. With respect to automated methods, 167,076 progress notes for 1,479 patients from the VA Computerized Patient Record System (CPRS) were examined using pair-wise comparison of all patient documents to identify matches of at least 40 consecutive word sequences in two documents; 9% of progress notes were found to contain copied text [12]. Wrenn et al. used global alignment to quantify the percentage of redundant information in a collection of 1,670 inpatient notes (including sign-out, progress, admission, and discharge notes) and found an average of 78% and 54% redundant information in sign-out and progress notes, respectively [13]. More recently, Cohen et al. used the Smith-Waterman text alignment algorithm to quantify redundancy in terms of both word and semantic concept repetition [8]. They found that corpus redundancy had a negative impact on the quality of text mining and topic modeling, and suggested that redundancy of the corpus must be accounted for when applying subsequent text-mining techniques in many secondary clinical applications [8]. Other work has examined techniques using a modification of classic global alignment with a sliding window and lexical normalization [14]. This work demonstrated a cyclic pattern in the quantity of redundant information longitudinally in ambulatory clinical notes for a given patient, and showed that the overall amount of redundant information increases over time. Subsequently, statistical language models were used to identify and visualize relevant new information by highlighting text [15]. Work quantifying information redundancy between consecutive notes also demonstrates that, in most cases, clinicians tend to copy information exclusively from the most recent note. The new information proportion (i.e., the complement of the redundant information percentage) also appears to be a helpful metric for directing clinicians or researchers to notes, or information within notes, that is clinically significant [16]. Moreover, categorizing clinically relevant new information by semantic type group [17] (e.g., medication, problem/disease, laboratory) can potentially improve information navigation for specific event types [10].
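As a rough analogue of the alignment-based redundancy measures cited above (global alignment and Smith-Waterman), the following sketch estimates the share of a note copied forward from the most recent prior note using Python's difflib. It is not any of the cited implementations, and the example notes are invented.

```python
# A rough, hypothetical analogue (not the cited implementations) of the
# alignment-based redundancy measures discussed above: estimate the share of
# a note copied forward from the most recent prior note with difflib.
from difflib import SequenceMatcher

def redundancy(prior_note, new_note):
    a = prior_note.lower().split()
    b = new_note.lower().split()
    matcher = SequenceMatcher(a=a, b=b, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    # fraction of the new note's tokens that align with the prior note
    return matched / max(len(b), 1)

prior = "assessment and plan : continue metoprolol , monitor creatinine daily"
new = "assessment and plan : continue metoprolol , monitor creatinine daily , started warfarin for atrial fibrillation"
r = redundancy(prior, new)
print(f"redundancy: {r:.2f}  new information proportion: {1 - r:.2f}")
```

True alignment algorithms penalize gaps and reordering more carefully than difflib's matching blocks, so this should be read only as a lightweight approximation of the measures described in the cited work.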
[ "16720812", "20399309", "23304398", "21292706", "25882031", "25717418", "12695797", "20064801", "22195227", "23920658", "11604736", "15120654", "12835272", "16875881", "20351894", "22195148", "16221939", "25954591", "25954438", "20442139", "25954438" ]
[ { "pmid": "23304398", "title": "A qualitative analysis of EHR clinical document synthesis by clinicians.", "abstract": "Clinicians utilize electronic health record (EHR) systems during time-constrained patient encounters where large amounts of clinical text must be synthesized at the point of care. Qualitative methods may be an effective approach for uncovering cognitive processes associated with the synthesis of clinical documents within EHR systems. We utilized a think-aloud protocol and content analysis with the goal of understanding cognitive processes and barriers involved as medical interns synthesized patient clinical documents in an EHR system to accomplish routine clinical tasks. Overall, interns established correlations of significance and meaning between problem, symptom and treatment concepts to inform hypotheses generation and clinical decision-making. Barriers identified with synthesizing EHR documents include difficulty searching for patient data, poor readability, redundancy, and unfamiliar specialized terms. Our study can inform recommendations for future designs of EHR clinical document user interfaces to aid clinicians in providing improved patient care." }, { "pmid": "21292706", "title": "Use of electronic clinical documentation: time spent and team interactions.", "abstract": "OBJECTIVE\nTo measure the time spent authoring and viewing documentation and to study patterns of usage in healthcare practice.\n\n\nDESIGN\nAudit logs for an electronic health record were used to calculate rates, and social network analysis was applied to ascertain usage patterns. Subjects comprised all care providers at an urban academic medical center who authored or viewed electronic documentation.\n\n\nMEASUREMENT\nRate and time of authoring and viewing clinical documentation, and associations among users were measured.\n\n\nRESULTS\nUsers spent 20-103 min per day authoring notes and 7-56 min per day viewing notes, with physicians spending less than 90 min per day total. About 16% of attendings' notes, 8% of residents' notes, and 38% of nurses' notes went unread by other users, and, overall, 16% of notes were never read by anyone. Viewing of notes dropped quickly with the age of the note, but notes were read at a low but measurable rate, even after 2 years. Most healthcare teams (77%) included a nurse, an attending, and a resident, and those three users' groups were the first to write notes during an admission. Limitations The limitations were restriction to a single academic medical center and use of log files without direct observation.\n\n\nCONCLUSIONS\nCare providers spend a significant amount of time viewing and authoring notes. Many notes are never read, and rates of usage vary significantly by author and viewer. While the rate of viewing a note drops quickly with its age, even after 2 years inpatient notes are still viewed." }, { "pmid": "25882031", "title": "Automated methods for the summarization of electronic health records.", "abstract": "OBJECTIVES\nThis review examines work on automated summarization of electronic health record (EHR) data and in particular, individual patient record summarization. 
We organize the published research and highlight methodological challenges in the area of EHR summarization implementation.\n\n\nTARGET AUDIENCE\nThe target audience for this review includes researchers, designers, and informaticians who are concerned about the problem of information overload in the clinical setting as well as both users and developers of clinical summarization systems.\n\n\nSCOPE\nAutomated summarization has been a long-studied subject in the fields of natural language processing and human-computer interaction, but the translation of summarization and visualization methods to the complexity of the clinical workflow is slow moving. We assess work in aggregating and visualizing patient information with a particular focus on methods for detecting and removing redundancy, describing temporality, determining salience, accounting for missing data, and taking advantage of encoded clinical knowledge. We identify and discuss open challenges critical to the implementation and use of robust EHR summarization systems." }, { "pmid": "25717418", "title": "Longitudinal analysis of new information types in clinical notes.", "abstract": "It is increasingly recognized that redundant information in clinical notes within electronic health record (EHR) systems is ubiquitous, significant, and may negatively impact the secondary use of these notes for research and patient care. We investigated several automated methods to identify redundant versus relevant new information in clinical reports. These methods may provide a valuable approach to extract clinically pertinent information and further improve the accuracy of clinical information extraction systems. In this study, we used UMLS semantic types to extract several types of new information, including problems, medications, and laboratory information. Automatically identified new information highly correlated with manual reference standard annotations. Methods to identify different types of new information can potentially help to build up more robust information extraction systems for clinical researchers as well as aid clinicians and researchers in navigating clinical notes more effectively and quickly identify information pertaining to changes in health states." }, { "pmid": "12695797", "title": "Direct text entry in electronic progress notes. An evaluation of input errors.", "abstract": "OBJECTIVES\nIt is not uncommon that the introduction of a new technology fixes old problems while introducing new ones. The Veterans Administration recently implemented a comprehensive electronic medical record system (CPRS) to support provider order entry. Progress notes are entered directly by clinicians, primarily through keyboard input. Due to concerns that there may be significant, invisible disruptions to information flow, this study was conducted to formally examine the incidence and characteristics of input errors in the electronic patient record.\n\n\nMETHODS\nSixty patient charts were randomly selected from all 2,301 inpatient admissions during a 5-month period. A panel of clinicians with informatics backgrounds developed the review criteria. After establishing inter-rater reliability, two raters independently reviewed 1,891 notes for copying, copying errors, inconsistent text, inappropriate object insertion and signature issues.\n\n\nRESULTS\nOverall, 60% of patients reviewed had one or more input-related errors averaging 7.8 errors per patient. About 20% of notes showed evidence of copying, with an average of 1.01 error per copied note. 
Copying another clinician's note and making changes had the highest risk of error. Templating resulted in large amounts of blank spaces. Overall, MDs make more errors than other clinicians even after controlling for the number of notes.\n\n\nCONCLUSIONS\nMoving towards a more progressive model for the electronic medical record, where actions are recorded only once, history and physical information is encoded for use later, and note generation is organized around problems, would greatly minimize the potential for error." }, { "pmid": "20064801", "title": "Quantifying clinical narrative redundancy in an electronic health record.", "abstract": "OBJECTIVE\nAlthough electronic notes have advantages compared to handwritten notes, they take longer to write and promote information redundancy in electronic health records (EHRs). We sought to quantify redundancy in clinical documentation by studying collections of physician notes in an EHR.\n\n\nDESIGN AND METHODS\nWe implemented a retrospective design to gather all electronic admission, progress, resident signout and discharge summary notes written during 100 randomly selected patient admissions within a 6 month period. We modified and applied a Levenshtein edit-distance algorithm to align and compare the documents written for each of the 100 admissions. We then identified and measured the amount of text duplicated from previous notes. Finally, we manually reviewed the content that was conserved between note types in a subsample of notes.\n\n\nMEASUREMENTS\nWe measured the amount of new information in a document, which was calculated as the number of words that did not match with previous documents divided by the length, in words, of the document. Results are reported as the percentage of information in a document that had been duplicated from previously written documents.\n\n\nRESULTS\nSignout and progress notes proved to be particularly redundant, with an average of 78% and 54% information duplicated from previous documents respectively. There was also significant information duplication between document types (eg, from an admission note to a progress note).\n\n\nCONCLUSION\nThe study established the feasibility of exploring redundancy in the narrative record with a known sequence alignment algorithm used frequently in the field of bioinformatics. The findings provide a foundation for studying the usefulness and risks of redundancy in the EHR." }, { "pmid": "22195227", "title": "Evaluating measures of redundancy in clinical texts.", "abstract": "Although information redundancy has been reported as an important problem for clinicians when using electronic health records and clinical reports, measuring redundancy in clinical text has not been extensively investigated. We evaluated several automated techniques to quantify the redundancy in clinical documents using an expert-derived reference standard consisting of outpatient clinical documents. The technique that resulted in the best correlation (82%) with human ratings consisted a modified dynamic programming alignment algorithm over a sliding window augmented with a) lexical normalization and b) stopword removal. When this method was applied to the overall outpatient record, we found that overall information redundancy in clinical notes increased over time and that mean document redundancy scores for individual patient documents appear to have cyclical patterns corresponding to clinical events. 
These results show that outpatient documents have large amounts of redundant information and that development of effective redundancy measures warrants additional investigation." }, { "pmid": "23920658", "title": "Navigating longitudinal clinical notes with an automated method for detecting new information.", "abstract": "Automated methods to detect new information in clinical notes may be valuable for navigating and using information in these documents for patient care. Statistical language models were evaluated as a means to quantify new information over longitudinal clinical notes for a given patient. The new information proportion (NIP) in target notes decreased logarithmically with increasing numbers of previous notes to create the language model. For a given patient, the amount of new information had cyclic patterns. Higher NIP scores correlated with notes having more new information often with clinically significant events, and lower NIP scores indicated notes with less new information. Our analysis also revealed \"copying and pasting\" to be widely used in generating clinical notes by copying information from the most recent historical clinical notes forward. These methods can potentially aid clinicians in finding notes with more clinically relevant new information and in reviewing notes more purposefully which may increase the efficiency of clinicians in delivering patient care." }, { "pmid": "11604736", "title": "Aggregating UMLS semantic types for reducing conceptual complexity.", "abstract": "The conceptual complexity of a domain can make it difficult for users of information systems to comprehend and interact with the knowledge embedded in those systems. The Unified Medical Language System (UMLS) currently integrates over 730,000 biomedical concepts from more than fifty biomedical vocabularies. The UMLS semantic network reduces the complexity of this construct by grouping concepts according to the semantic types that have been assigned to them. For certain purposes, however, an even smaller and coarser-grained set of semantic type groupings may be desirable. In this paper, we discuss our approach to creating such a set. We present six basic principles, and then apply those principles in aggregating the existing 134 semantic types into a set of 15 groupings. We present some of the difficulties we encountered and the consequences of the decisions we have made. We discuss some possible uses of the semantic groups, and we conclude with implications for future work." }, { "pmid": "15120654", "title": "Towards the development of a conceptual distance metric for the UMLS.", "abstract": "The objective of this work is to investigate the feasibility of conceptual similarity metrics in the framework of the Unified Medical Language System (UMLS). We have investigated an approach based on the minimum number of parent links between concepts, and evaluated its performance relative to human expert estimates on three sets of concepts for three terminologies within the UMLS (i.e., MeSH, ICD9CM, and SNOMED). The resulting quantitative metric enables computer-based applications that use decision thresholds and approximate matching criteria. The proposed conceptual matching supports problem solving and inferencing (using high-level, generic concepts) based on readily available data (typically represented as low-level, specific concepts). 
Through the identification of semantically similar concepts, conceptual matching also enables reasoning in the absence of exact, or even approximate, lexical matching. Finally, conceptual matching is relevant for terminology development and maintenance, machine learning research, decision support system development, and data mining research in biomedical informatics and other fields." }, { "pmid": "12835272", "title": "Investigating semantic similarity measures across the Gene Ontology: the relationship between sequence and annotation.", "abstract": "MOTIVATION\nMany bioinformatics data resources not only hold data in the form of sequences, but also as annotation. In the majority of cases, annotation is written as scientific natural language: this is suitable for humans, but not particularly useful for machine processing. Ontologies offer a mechanism by which knowledge can be represented in a form capable of such processing. In this paper we investigate the use of ontological annotation to measure the similarities in knowledge content or 'semantic similarity' between entries in a data resource. These allow a bioinformatician to perform a similarity measure over annotation in an analogous manner to those performed over sequences. A measure of semantic similarity for the knowledge component of bioinformatics resources should afford a biologist a new tool in their repertoire of analyses.\n\n\nRESULTS\nWe present the results from experiments that investigate the validity of using semantic similarity by comparison with sequence similarity. We show a simple extension that enables a semantic search of the knowledge held within sequence databases.\n\n\nAVAILABILITY\nSoftware available from http://www.russet.org.uk." }, { "pmid": "16875881", "title": "Measures of semantic similarity and relatedness in the biomedical domain.", "abstract": "Measures of semantic similarity between concepts are widely used in Natural Language Processing. In this article, we show how six existing domain-independent measures can be adapted to the biomedical domain. These measures were originally based on WordNet, an English lexical database of concepts and relations. In this research, we adapt these measures to the SNOMED-CT ontology of medical concepts. The measures include two path-based measures, and three measures that augment path-based measures with information content statistics from corpora. We also derive a context vector measure based on medical corpora that can be used as a measure of semantic relatedness. These six measures are evaluated against a newly created test bed of 30 medical concept pairs scored by three physicians and nine medical coders. We find that the medical coders and physicians differ in their ratings, and that the context vector measure correlates most closely with the physicians, while the path-based measures and one of the information content measures correlates most closely with the medical coders. We conclude that there is a role both for more flexible measures of relatedness based on information derived from corpora, as well as for measures that rely on existing ontological structures." }, { "pmid": "20351894", "title": "UMLS-Interface and UMLS-Similarity : open source software for measuring paths and semantic similarity.", "abstract": "A number of computational measures for determining semantic similarity between pairs of biomedical concepts have been developed using various standards and programming platforms. 
In this paper, we introduce two new open-source frameworks based on the Unified Medical Language System (UMLS). These frameworks consist of the UMLS-Similarity and UMLS-Interface packages. UMLS-Interface provides path information about UMLS concepts. UMLS-Similarity calculates the semantic similarity between UMLS concepts using several previously developed measures and can be extended to include new measures. We validate the functionality of these frameworks by reproducing the results from previous work. Our frameworks constitute a significant contribution to the field of biomedical Natural Language Processing by providing a common development and testing platform for semantic similarity measures based on the UMLS." }, { "pmid": "22195148", "title": "Knowledge-based method for determining the meaning of ambiguous biomedical terms using information content measures of similarity.", "abstract": "In this paper, we introduce a novel knowledge-based word sense disambiguation method that determines the sense of an ambiguous word in biomedical text using semantic similarity or relatedness measures. These measures quantify the degree of similarity between concepts in the Unified Medical Language System (UMLS). The objective of this work was to develop a method that can disambiguate terms in biomedical text by exploiting similarity information extracted from the UMLS and to evaluate the efficacy of information content-based semantic similarity measures, which augment path-based information with probabilities derived from biomedical corpora. We show that information content-based measures obtain a higher disambiguation accuracy than path-based measures because they weight the path based on where it exists in the taxonomy coupled with the probability of the concepts occurring in a corpus of text." }, { "pmid": "16221939", "title": "HL7 Clinical Document Architecture, Release 2.", "abstract": "Clinical Document Architecture, Release One (CDA R1), became an American National Standards Institute (ANSI)-approved HL7 Standard in November 2000, representing the first specification derived from the Health Level 7 (HL7) Reference Information Model (RIM). CDA, Release Two (CDA R2), became an ANSI-approved HL7 Standard in May 2005 and is the subject of this article, where the focus is primarily on how the standard has evolved since CDA R1, particularly in the area of semantic representation of clinical events. CDA is a document markup standard that specifies the structure and semantics of a clinical document (such as a discharge summary or progress note) for the purpose of exchange. A CDA document is a defined and complete information object that can include text, images, sounds, and other multimedia content. It can be transferred within a message and can exist independently, outside the transferring message. CDA documents are encoded in Extensible Markup Language (XML), and they derive their machine processable meaning from the RIM, coupled with terminology. The CDA R2 model is richly expressive, enabling the formal representation of clinical statements (such as observations, medication administrations, and adverse events) such that they can be interpreted and acted upon by a computer. On the other hand, CDA R2 offers a low bar for adoption, providing a mechanism for simply wrapping a non-XML document with the CDA header or for creating a document with a structured header and sections containing only narrative content. 
The intent is to facilitate widespread adoption, while providing a mechanism for incremental semantic interoperability." }, { "pmid": "25954591", "title": "Application of HL7/LOINC Document Ontology to a University-Affiliated Integrated Health System Research Clinical Data Repository.", "abstract": "Fairview Health Services is an affiliated integrated health system partnering with the University of Minnesota to establish a secure research-oriented clinical data repository that includes large numbers of clinical documents. Standardization of clinical document names and associated attributes is essential for their exchange and secondary use. The HL7/LOINC Document Ontology (DO) was developed to provide a standard representation of clinical document attributes with a multi-axis structure. In this study, we evaluated the adequacy of DO to represent documents in the clinical data repository from legacy and current EHR systems across community and academic practice sites. The results indicate that a large portion of repository data items can be mapped to the current DO ontology but that document attributes do not always link consistently with DO axes and additional values for certain axes, particularly \"Setting\" and \"Role\" are needed for better coverage. To achieve a more comprehensive representation of clinical documents, more effort on algorithms, DO value sets, and data governance over clinical document attributes is needed." }, { "pmid": "25954438", "title": "Using language models to identify relevant new information in inpatient clinical notes.", "abstract": "Redundant information in clinical notes within electronic health record (EHR) systems is ubiquitous and may negatively impact the use of these notes by clinicians, and, potentially, the efficiency of patient care delivery. Automated methods to identify redundant versus relevant new information may provide a valuable tool for clinicians to better synthesize patient information and navigate to clinically important details. In this study, we investigated the use of language models for identification of new information in inpatient notes, and evaluated our methods using expert-derived reference standards. The best method achieved precision of 0.743, recall of 0.832 and F1-measure of 0.784. The average proportion of redundant information was similar between inpatient and outpatient progress notes (76.6% (SD=17.3%) and 76.7% (SD=14.0%), respectively). Advanced practice providers tended to have higher rates of redundancy in their notes compared to physicians. Future investigation includes the addition of semantic components and visualization of new information." }, { "pmid": "20442139", "title": "An overview of MetaMap: historical perspective and recent advances.", "abstract": "MetaMap is a widely available program providing access to the concepts in the unified medical language system (UMLS) Metathesaurus from biomedical text. This study reports on MetaMap's evolution over more than a decade, concentrating on those features arising out of the research needs of the biomedical informatics community both within and outside of the National Library of Medicine. Such features include the detection of author-defined acronyms/abbreviations, the ability to browse the Metathesaurus for concepts even tenuously related to input text, the detection of negation in situations in which the polarity of predications is important, word sense disambiguation (WSD), and various technical and algorithmic features. 
Near-term plans for MetaMap development include the incorporation of chemical name recognition and enhanced WSD." }, { "pmid": "25954438", "title": "Using language models to identify relevant new information in inpatient clinical notes.", "abstract": "Redundant information in clinical notes within electronic health record (EHR) systems is ubiquitous and may negatively impact the use of these notes by clinicians, and, potentially, the efficiency of patient care delivery. Automated methods to identify redundant versus relevant new information may provide a valuable tool for clinicians to better synthesize patient information and navigate to clinically important details. In this study, we investigated the use of language models for identification of new information in inpatient notes, and evaluated our methods using expert-derived reference standards. The best method achieved precision of 0.743, recall of 0.832 and F1-measure of 0.784. The average proportion of redundant information was similar between inpatient and outpatient progress notes (76.6% (SD=17.3%) and 76.7% (SD=14.0%), respectively). Advanced practice providers tended to have higher rates of redundancy in their notes compared to physicians. Future investigation includes the addition of semantic components and visualization of new information." } ]
Frontiers in Neuroscience
28769745
PMC5513987
10.3389/fnins.2017.00406
Sums of Spike Waveform Features for Motor Decoding
Traditionally, the key step before decoding motor intentions from cortical recordings is spike sorting, the process of identifying which neuron was responsible for an action potential. Recently, researchers have started investigating approaches to decoding which omit the spike sorting step, by directly using information about action potentials' waveform shapes in the decoder, though this approach is not yet widespread. Particularly, one recent approach involves computing the moments of waveform features and using these moment values as inputs to decoders. This computationally inexpensive approach was shown to be comparable in accuracy to traditional spike sorting. In this study, we use offline data recorded from two Rhesus monkeys to further validate this approach. We also modify this approach by using sums of exponentiated features of spikes, rather than moments. Our results show that using waveform feature sums facilitates significantly higher hand movement reconstruction accuracy than using waveform feature moments, though the magnitudes of differences are small. We find that using the sums of one simple feature, the spike amplitude, allows better offline decoding accuracy than traditional spike sorting by expert (correlation of 0.767, 0.785 vs. 0.744, 0.738, respectively, for two monkeys, average 16% reduction in mean-squared-error), as well as unsorted threshold crossings (0.746, 0.776; average 9% reduction in mean-squared-error). Our results suggest that the sums-of-features framework has potential as an alternative to both spike sorting and using unsorted threshold crossings, if developed further. Also, we present data comparing sorted vs. unsorted spike counts in terms of offline decoding accuracy. Traditional sorted spike counts do not include waveforms that do not match any template (“hash”), but threshold crossing counts do include this hash. On our data and in previous work, hash contributes to decoding accuracy. Thus, using the comparison between sorted spike counts and threshold crossing counts to evaluate the benefit of sorting is confounded by the presence of hash. We find that when the comparison is controlled for hash, performing sorting is better than not. These results offer a new perspective on the question of to sort or not to sort.
Related work

Besides the work of Ventura and Todorova (2015), on which this work is based, other studies have used waveform features for decoding. Chen et al. (2012) and Kloosterman et al. (2014) used spike waveform features to decode from hippocampal recordings. Their approach is based on the spatial-temporal Poisson process, in which a Poisson process describes spike arrival times and a random vector describes the waveform features of each spike. Later, Deng et al. (2015) presented a marked point-process decoder that uses waveform features as the marks of the point process and tested it on hippocampal recordings; this approach is similar in spirit to the spatial-temporal Poisson process. The primary difference between these approaches and ours lies in how time is segmented. The spatial-temporal Poisson process and the marked point process operate on single spikes, which requires a high refresh rate and somewhat more sophisticated Bayesian inference. Our approach works on time bins, which allows lower refresh rates and compatibility with the relatively simple Kalman and Wiener filters. However, operating on time bins requires some way to summarize the waveform shape information of all the spikes that occurred during a bin, hence Ventura and Todorova's moments and our sums (a simple illustrative sketch of such binned feature sums is given at the end of this section). These statistics entail their own assumptions (linearity of tuning, stationarity of waveform shape, etc.) and approximations (using a finite number of moments or sums).

Todorova et al. (2014) decoded motor intent from threshold crossing counts and the spike-amplitude waveform feature. Their model was non-parametric and was fitted using the expectation-maximization scheme of Ventura (2009). Because of the non-parametric model, decoding required a computationally expensive particle filter. This drawback motivated the search for a more computationally efficient method, the result of which is the waveform feature moments framework of Ventura and Todorova (2015).

Also related is earlier work by Ventura on spike sorting using motor tuning information (Ventura, 2009) and on sorting entire spike trains to take advantage of history information (Ventura and Gerkin, 2012). These ideas are similar to waveform feature decoding in that they also combine spike shape information and neural tuning, but they differ in that their goal is spike sorting.
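To make the binned-sums idea above concrete, the following minimal Python sketch computes, for each time bin and channel, the sums of powers of one waveform feature (spike amplitude) and fits a simple ridge-regularized linear (Wiener-style) decoder to kinematics. It is an illustration only: the function names, the ridge fit, and the synthetic data are assumptions made for this sketch, not the implementation, features, or parameters used in this study.

    # Minimal sketch (not the authors' code): per-bin sums of exponentiated
    # spike-amplitude features feeding a simple linear (Wiener-style) decoder.
    # All names, parameters, and the synthetic data below are illustrative.
    import numpy as np

    def bin_feature_sums(spike_times, spike_amps, bin_edges, max_power=3):
        """Return an (n_bins, max_power) array whose [t, k-1] entry is the sum of
        amplitude**k over all spikes of this channel falling in time bin t."""
        n_bins = len(bin_edges) - 1
        sums = np.zeros((n_bins, max_power))
        bin_idx = np.digitize(spike_times, bin_edges) - 1
        valid = (bin_idx >= 0) & (bin_idx < n_bins)
        for b, a in zip(bin_idx[valid], spike_amps[valid]):
            for k in range(1, max_power + 1):
                sums[b, k - 1] += a ** k
        return sums

    def fit_wiener(X, Y, ridge=1e-3):
        """Ridge-regularized least squares from binned features X to kinematics Y."""
        X1 = np.hstack([X, np.ones((X.shape[0], 1))])          # add intercept column
        return np.linalg.solve(X1.T @ X1 + ridge * np.eye(X1.shape[1]), X1.T @ Y)

    # Toy usage with synthetic data (four fake channels, 100 ms bins over 10 s).
    rng = np.random.default_rng(0)
    bin_edges = np.linspace(0.0, 10.0, 101)
    channels = []
    for _ in range(4):
        t = np.sort(rng.uniform(0.0, 10.0, size=500))          # spike times (s)
        a = rng.normal(60.0, 10.0, size=t.size)                # spike amplitudes (uV)
        channels.append(bin_feature_sums(t, a, bin_edges))
    X = np.hstack(channels)                                    # (100, 4 * 3) feature sums
    Y = rng.normal(size=(X.shape[0], 2))                       # stand-in 2-D hand velocity
    W = fit_wiener(X, Y)
    Y_hat = np.hstack([X, np.ones((X.shape[0], 1))]) @ W       # decoded kinematics

In an actual decoder the feature sums would be computed per electrode from recorded threshold crossings and fed to the decoder of choice (e.g., a Kalman filter) in place of the ridge regression used here.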
[ "24739786", "14624244", "21775782", "25504690", "19349143", "25380335", "19721186", "24089403", "10221571", "28066170", "21919788", "12728268", "27097901", "26133797", "23742213", "17670985", "25082508", "18085990", "19548802", "22529350", "25380335", "7173942", "16354382" ]
[ { "pmid": "24739786", "title": "Restoring sensorimotor function through intracortical interfaces: progress and looming challenges.", "abstract": "The loss of a limb or paralysis resulting from spinal cord injury has devastating consequences on quality of life. One approach to restoring lost sensory and motor abilities in amputees and patients with tetraplegia is to supply them with implants that provide a direct interface with the CNS. Such brain-machine interfaces might enable a patient to exert voluntary control over a prosthetic or robotic limb or over the electrically induced contractions of paralysed muscles. A parallel interface could convey sensory information about the consequences of these movements back to the patient. Recent developments in the algorithms that decode motor intention from neuronal activity and in approaches to convey sensory feedback by electrically stimulating neurons, using biomimetic and adaptation-based approaches, have shown the promise of invasive interfaces with sensorimotor cortices, although substantial challenges remain." }, { "pmid": "14624244", "title": "Learning to control a brain-machine interface for reaching and grasping by primates.", "abstract": "Reaching and grasping in primates depend on the coordination of neural activity in large frontoparietal ensembles. Here we demonstrate that primates can learn to reach and grasp virtual objects by controlling a robot arm through a closed-loop brain-machine interface (BMIc) that uses multiple mathematical models to extract several motor parameters (i.e., hand position, velocity, gripping force, and the EMGs of multiple arm muscles) from the electrical activity of frontoparietal neuronal ensembles. As single neurons typically contribute to the encoding of several motor parameters, we observed that high BMIc accuracy required recording from large neuronal ensembles. Continuous BMIc operation by monkeys led to significant improvements in both model predictions and behavioral performance. Using visual feedback, monkeys succeeded in producing robot reach-and-grasp movements even when their arms did not move. Learning to operate the BMIc was paralleled by functional reorganization in multiple cortical areas, suggesting that the dynamic properties of the BMIc were incorporated into motor and sensory cortical representations." }, { "pmid": "21775782", "title": "Long-term stability of neural prosthetic control signals from silicon cortical arrays in rhesus macaque motor cortex.", "abstract": "Cortically-controlled prosthetic systems aim to help disabled patients by translating neural signals from the brain into control signals for guiding prosthetic devices. Recent reports have demonstrated reasonably high levels of performance and control of computer cursors and prosthetic limbs, but to achieve true clinical viability, the long-term operation of these systems must be better understood. In particular, the quality and stability of the electrically-recorded neural signals require further characterization. Here, we quantify action potential changes and offline neural decoder performance over 382 days of recording from four intracortical arrays in three animals. Action potential amplitude decreased by 2.4% per month on average over the course of 9.4, 10.4, and 31.7 months in three animals. During most time periods, decoder performance was not well correlated with action potential amplitude (p > 0.05 for three of four arrays). 
In two arrays from one animal, action potential amplitude declined by an average of 37% over the first 2 months after implant. However, when using simple threshold-crossing events rather than well-isolated action potentials, no corresponding performance loss was observed during this time using an offline decoder. One of these arrays was effectively used for online prosthetic experiments over the following year. Substantial short-term variations in waveforms were quantified using a wireless system for contiguous recording in one animal, and compared within and between days for all three animals. Overall, this study suggests that action potential amplitude declines more slowly than previously supposed, and performance can be maintained over the course of multiple years when decoding from threshold-crossing events rather than isolated action potentials. This suggests that neural prosthetic systems may provide high performance over multiple years in human clinical trials." }, { "pmid": "25504690", "title": "Comparison of spike sorting and thresholding of voltage waveforms for intracortical brain-machine interface performance.", "abstract": "OBJECTIVE\nFor intracortical brain-machine interfaces (BMIs), action potential voltage waveforms are often sorted to separate out individual neurons. If these neurons contain independent tuning information, this process could increase BMI performance. However, the sorting of action potentials ('spikes') requires high sampling rates and is computationally expensive. To explicitly define the difference between spike sorting and alternative methods, we quantified BMI decoder performance when using threshold-crossing events versus sorted action potentials.\n\n\nAPPROACH\nWe used data sets from 58 experimental sessions from two rhesus macaques implanted with Utah arrays. Data were recorded while the animals performed a center-out reaching task with seven different angles. For spike sorting, neural signals were sorted into individual units by using a mixture of Gaussians to cluster the first four principal components of the waveforms. For thresholding events, spikes that simply crossed a set threshold were retained. We decoded the data offline using both a Naïve Bayes classifier for reaching direction and a linear regression to evaluate hand position.\n\n\nMAIN RESULTS\nWe found the highest performance for thresholding when placing a threshold between -3 and -4.5 × Vrms. Spike sorted data outperformed thresholded data for one animal but not the other. The mean Naïve Bayes classification accuracy for sorted data was 88.5% and changed by 5% on average when data were thresholded. The mean correlation coefficient for sorted data was 0.92, and changed by 0.015 on average when thresholded.\n\n\nSIGNIFICANCE\nFor prosthetics applications, these results imply that when thresholding is used instead of spike sorting, only a small amount of performance may be lost. The utilization of threshold-crossing events may significantly extend the lifetime of a device because these events are often still detectable once single neurons are no longer isolated." }, { "pmid": "19349143", "title": "Methods for estimating neural firing rates, and their application to brain-machine interfaces.", "abstract": "Neural spike trains present analytical challenges due to their noisy, spiking nature. Many studies of neuroscientific and neural prosthetic importance rely on a smoothed, denoised estimate of a spike train's underlying firing rate. 
Numerous methods for estimating neural firing rates have been developed in recent years, but to date no systematic comparison has been made between them. In this study, we review both classic and current firing rate estimation techniques. We compare the advantages and drawbacks of these methods. Then, in an effort to understand their relevance to the field of neural prostheses, we also apply these estimators to experimentally gathered neural data from a prosthetic arm-reaching paradigm. Using these estimates of firing rate, we apply standard prosthetic decoding algorithms to compare the performance of the different firing rate estimators, and, perhaps surprisingly, we find minimal differences. This study serves as a review of available spike train smoothers and a first quantitative comparison of their performance for brain-machine interfaces." }, { "pmid": "25380335", "title": "Spike train SIMilarity Space (SSIMS): a framework for single neuron and ensemble data analysis.", "abstract": "Increased emphasis on circuit level activity in the brain makes it necessary to have methods to visualize and evaluate large-scale ensemble activity beyond that revealed by raster-histograms or pairwise correlations. We present a method to evaluate the relative similarity of neural spiking patterns by combining spike train distance metrics with dimensionality reduction. Spike train distance metrics provide an estimate of similarity between activity patterns at multiple temporal resolutions. Vectors of pair-wise distances are used to represent the intrinsic relationships between multiple activity patterns at the level of single units or neuronal ensembles. Dimensionality reduction is then used to project the data into concise representations suitable for clustering analysis as well as exploratory visualization. Algorithm performance and robustness are evaluated using multielectrode ensemble activity data recorded in behaving primates. We demonstrate how spike train SIMilarity space (SSIMS) analysis captures the relationship between goal directions for an eight-directional reaching task and successfully segregates grasp types in a 3D grasping task in the absence of kinematic information. The algorithm enables exploration of virtually any type of neural spiking (time series) data, providing similarity-based clustering of neural activity states with minimal assumptions about potential information encoding models." }, { "pmid": "19721186", "title": "Control of a brain-computer interface without spike sorting.", "abstract": "Two rhesus monkeys were trained to move a cursor using neural activity recorded with silicon arrays of 96 microelectrodes implanted in the primary motor cortex. We have developed a method to extract movement information from the recorded single and multi-unit activity in the absence of spike sorting. By setting a single threshold across all channels and fitting the resultant events with a spline tuning function, a control signal was extracted from this population using a Bayesian particle-filter extraction algorithm. The animals achieved high-quality control comparable to the performance of decoding schemes based on sorted spikes. Our results suggest that even the simplest signal processing is sufficient for high-quality neuroprosthetic control." }, { "pmid": "24089403", "title": "Bayesian decoding using unsorted spikes in the rat hippocampus.", "abstract": "A fundamental task in neuroscience is to understand how neural ensembles represent information. 
Population decoding is a useful tool to extract information from neuronal populations based on the ensemble spiking activity. We propose a novel Bayesian decoding paradigm to decode unsorted spikes in the rat hippocampus. Our approach uses a direct mapping between spike waveform features and covariates of interest and avoids accumulation of spike sorting errors. Our decoding paradigm is nonparametric, encoding model-free for representing stimuli, and extracts information from all available spikes and their waveform features. We apply the proposed Bayesian decoding algorithm to a position reconstruction task for freely behaving rats based on tetrode recordings of rat hippocampal neuronal activity. Our detailed decoding analyses demonstrate that our approach is efficient and better utilizes the available information in the nonsortable hash than the standard sorting-based decoding algorithm. Our approach can be adapted to an online encoding/decoding framework for applications that require real-time decoding, such as brain-machine interfaces." }, { "pmid": "10221571", "title": "A review of methods for spike sorting: the detection and classification of neural action potentials.", "abstract": "The detection of neural spike activity is a technical challenge that is a prerequisite for studying many types of brain function. Measuring the activity of individual neurons accurately can be difficult due to large amounts of background noise and the difficulty in distinguishing the action potentials of one neuron from those of others in the local area. This article reviews algorithms and methods for detecting and classifying action potentials, a problem commonly referred to as spike sorting. The article first discusses the challenges of measuring neural activity and the basic issues of signal detection and classification. It reviews and illustrates algorithms and techniques that have been applied to many of the problems in spike sorting and discusses the advantages and limitations of each and the applicability of these methods for different types of experimental demands. The article is written both for the physiologist wanting to use simple methods that will improve experimental yield and minimize the selection biases of traditional techniques and for those who want to apply or extend more sophisticated algorithms to meet new experimental challenges." }, { "pmid": "28066170", "title": "An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces.", "abstract": "Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. 
Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well." }, { "pmid": "21919788", "title": "Adaptive decoding for brain-machine interfaces through Bayesian parameter updates.", "abstract": "Brain-machine interfaces (BMIs) transform the activity of neurons recorded in motor areas of the brain into movements of external actuators. Representation of movements by neuronal populations varies over time, during both voluntary limb movements and movements controlled through BMIs, due to motor learning, neuronal plasticity, and instability in recordings. To ensure accurate BMI performance over long time spans, BMI decoders must adapt to these changes. We propose the Bayesian regression self-training method for updating the parameters of an unscented Kalman filter decoder. This novel paradigm uses the decoder's output to periodically update its neuronal tuning model in a Bayesian linear regression. We use two previously known statistical formulations of Bayesian linear regression: a joint formulation, which allows fast and exact inference, and a factorized formulation, which allows the addition and temporary omission of neurons from updates but requires approximate variational inference. To evaluate these methods, we performed offline reconstructions and closed-loop experiments with rhesus monkeys implanted cortically with microwire electrodes. Offline reconstructions used data recorded in areas M1, S1, PMd, SMA, and PP of three monkeys while they controlled a cursor using a handheld joystick. The Bayesian regression self-training updates significantly improved the accuracy of offline reconstructions compared to the same decoder without updates. We performed 11 sessions of real-time, closed-loop experiments with a monkey implanted in areas M1 and S1. These sessions spanned 29 days. The monkey controlled the cursor using the decoder with and without updates. The updates maintained control accuracy and did not require information about monkey hand movements, assumptions about desired movements, or knowledge of the intended movement goals as training signals. These results indicate that Bayesian regression self-training can maintain BMI control accuracy over long periods, making clinical neuroprosthetics more viable." }, { "pmid": "27097901", "title": "Extracellular voltage threshold settings can be tuned for optimal encoding of movement and stimulus parameters.", "abstract": "OBJECTIVE\nA traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. 
To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1).\n\n\nAPPROACH\nWe record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings.\n\n\nMAIN RESULTS\nThe optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography.\n\n\nSIGNIFICANCE\nHow neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue." }, { "pmid": "26133797", "title": "Single-unit activity, threshold crossings, and local field potentials in motor cortex differentially encode reach kinematics.", "abstract": "A diversity of signals can be recorded with extracellular electrodes. It remains unclear whether different signal types convey similar or different information and whether they capture the same or different underlying neural phenomena. Some researchers focus on spiking activity, while others examine local field potentials, and still others posit that these are fundamentally the same signals. We examined the similarities and differences in the information contained in four signal types recorded simultaneously from multielectrode arrays implanted in primary motor cortex: well-isolated action potentials from putative single units, multiunit threshold crossings, and local field potentials (LFPs) at two distinct frequency bands. We quantified the tuning of these signal types to kinematic parameters of reaching movements. We found 1) threshold crossing activity is not a proxy for single-unit activity; 2) when examined on individual electrodes, threshold crossing activity more closely resembles LFP activity at frequencies between 100 and 300 Hz than it does single-unit activity; 3) when examined across multiple electrodes, threshold crossing activity and LFP integrate neural activity at different spatial scales; and 4) LFP power in the \"beta band\" (between 10 and 40 Hz) is a reliable indicator of movement onset but does not encode kinematic features on an instant-by-instant basis. 
These results show that the diverse signals recorded from extracellular electrodes provide somewhat distinct and complementary information. It may be that these signal types arise from biological phenomena that are partially distinct. These results also have practical implications for harnessing richer signals to improve brain-machine interface control." }, { "pmid": "23742213", "title": "Computing loss of efficiency in optimal Bayesian decoders given noisy or incomplete spike trains.", "abstract": "We investigate Bayesian methods for optimal decoding of noisy or incompletely-observed spike trains. Information about neural identity or temporal resolution may be lost during spike detection and sorting, or spike times measured near the soma may be corrupted with noise due to stochastic membrane channel effects in the axon. We focus on neural encoding models in which the (discrete) neural state evolves according to stimulus-dependent Markovian dynamics. Such models are sufficiently flexible that we may incorporate realistic stimulus encoding and spiking dynamics, but nonetheless permit exact computation via efficient hidden Markov model forward-backward methods. We analyze two types of signal degradation. First, we quantify the information lost due to jitter or downsampling in the spike-times. Second, we quantify the information lost when knowledge of the identities of different spiking neurons is corrupted. In each case the methods introduced here make it possible to quantify the dependence of the information loss on biophysical parameters such as firing rate, spike jitter amplitude, spike observation noise, etc. In particular, decoders that model the probability distribution of spike-neuron assignments significantly outperform decoders that use only the most likely spike assignments, and are ignorant of the posterior spike assignment uncertainty." }, { "pmid": "17670985", "title": "Predicting movement from multiunit activity.", "abstract": "Previous studies have shown that intracortical activity can be used to operate prosthetic devices such as an artificial limb. Previously used neuronal signals were either the activity of tens to hundreds of spiking neurons, which are difficult to record for long periods of time, or local field potentials, which are highly correlated with each other. Here, we show that by estimating multiunit activity (MUA), the superimposed activity of many neurons around a microelectrode, and using a small number of electrodes, an accurate prediction of the upcoming movement is obtained. Compared with single-unit spikes, single MUA recordings are obtained more easily and the recordings are more stable over time. Compared with local field potentials, pairs of MUA recordings are considerably less redundant. Compared with any other intracortical signal, single MUA recordings are more informative. MUA is informative even in the absence of spikes. By combining information from multielectrode recordings from the motor cortices of monkeys that performed either discrete prehension or continuous tracing movements, we demonstrate that predictions based on multichannel MUA are superior to those based on either spikes or local field potentials. These results demonstrate that considerable information is retained in the superimposed activity of multiple neurons, and therefore suggest that neurons within the same locality process similar information. 
They also illustrate that complex movements can be predicted using relatively simple signal processing without the detection of spikes and, thus, hold the potential to greatly expedite the development of motor-cortical prosthetic devices." }, { "pmid": "25082508", "title": "To sort or not to sort: the impact of spike-sorting on neural decoding performance.", "abstract": "OBJECTIVE\nBrain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity.\n\n\nAPPROACH\nWe present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step.\n\n\nMAIN RESULTS\nDiscarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrodes voltage crossings: spike-sorting is useful. The four waveform based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior.\n\n\nSIGNIFICANCE\nOur results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding." }, { "pmid": "18085990", "title": "Spike train decoding without spike sorting.", "abstract": "We propose a novel paradigm for spike train decoding, which avoids entirely spike sorting based on waveform measurements. This paradigm directly uses the spike train collected at recording electrodes from thresholding the bandpassed voltage signal. Our approach is a paradigm, not an algorithm, since it can be used with any of the current decoding algorithms, such as population vector or likelihood-based algorithms. Based on analytical results and an extensive simulation study, we show that our paradigm is comparable to, and sometimes more efficient than, the traditional approach based on well-isolated neurons and that it remains efficient even when all electrodes are severely corrupted by noise, a situation that would render spike sorting particularly difficult. Our paradigm will also save time and computational effort, both of which are crucially important for successful operation of real-time brain-machine interfaces. 
Indeed, in place of the lengthy spike-sorting task of the traditional approach, it involves an exact expectation EM algorithm that is fast enough that it could also be left to run during decoding to capture potential slow changes in the states of the neurons." }, { "pmid": "19548802", "title": "Automatic spike sorting using tuning information.", "abstract": "Current spike sorting methods focus on clustering neurons' characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes' identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only." }, { "pmid": "22529350", "title": "Accurately estimating neuronal correlation requires a new spike-sorting paradigm.", "abstract": "Neurophysiology is increasingly focused on identifying coincident activity among neurons. Strong inferences about neural computation are made from the results of such studies, so it is important that these results be accurate. However, the preliminary step in the analysis of such data, the assignment of spike waveforms to individual neurons (\"spike-sorting\"), makes a critical assumption which undermines the analysis: that spikes, and hence neurons, are independent. We show that this assumption guarantees that coincident spiking estimates such as correlation coefficients are biased. We also show how to eliminate this bias. Our solution involves sorting spikes jointly, which contrasts with the current practice of sorting spikes independently of other spikes. This new \"ensemble sorting\" yields unbiased estimates of coincident spiking, and permits more data to be analyzed with confidence, improving the quality and quantity of neurophysiological inferences. These results should be of interest outside the context of neuronal correlations studies. Indeed, simultaneous recording of many neurons has become the rule rather than the exception in experiments, so it is essential to spike sort correctly if we are to make valid inferences about any properties of, and relationships between, neurons." 
}, { "pmid": "25380335", "title": "Spike train SIMilarity Space (SSIMS): a framework for single neuron and ensemble data analysis.", "abstract": "Increased emphasis on circuit level activity in the brain makes it necessary to have methods to visualize and evaluate large-scale ensemble activity beyond that revealed by raster-histograms or pairwise correlations. We present a method to evaluate the relative similarity of neural spiking patterns by combining spike train distance metrics with dimensionality reduction. Spike train distance metrics provide an estimate of similarity between activity patterns at multiple temporal resolutions. Vectors of pair-wise distances are used to represent the intrinsic relationships between multiple activity patterns at the level of single units or neuronal ensembles. Dimensionality reduction is then used to project the data into concise representations suitable for clustering analysis as well as exploratory visualization. Algorithm performance and robustness are evaluated using multielectrode ensemble activity data recorded in behaving primates. We demonstrate how spike train SIMilarity space (SSIMS) analysis captures the relationship between goal directions for an eight-directional reaching task and successfully segregates grasp types in a 3D grasping task in the absence of kinematic information. The algorithm enables exploration of virtually any type of neural spiking (time series) data, providing similarity-based clustering of neural activity states with minimal assumptions about potential information encoding models." }, { "pmid": "16354382", "title": "Bayesian population decoding of motor cortical activity using a Kalman filter.", "abstract": "Effective neural motor prostheses require a method for decoding neural activity representing desired movement. In particular, the accurate reconstruction of a continuous motion signal is necessary for the control of devices such as computer cursors, robots, or a patient's own paralyzed limbs. For such applications, we developed a real-time system that uses Bayesian inference techniques to estimate hand motion from the firing rates of multiple neurons. In this study, we used recordings that were previously made in the arm area of primary motor cortex in awake behaving monkeys using a chronically implanted multielectrode microarray. Bayesian inference involves computing the posterior probability of the hand motion conditioned on a sequence of observed firing rates; this is formulated in terms of the product of a likelihood and a prior. The likelihood term models the probability of firing rates given a particular hand motion. We found that a linear gaussian model could be used to approximate this likelihood and could be readily learned from a small amount of training data. The prior term defines a probabilistic model of hand kinematics and was also taken to be a linear gaussian model. Decoding was performed using a Kalman filter, which gives an efficient recursive method for Bayesian inference when the likelihood and prior are linear and gaussian. In off-line experiments, the Kalman filter reconstructions of hand trajectory were more accurate than previously reported results. The resulting decoding algorithm provides a principled probabilistic model of motor-cortical coding, decodes hand motion in real time, provides an estimate of uncertainty, and is straightforward to implement. Additionally the formulation unifies and extends previous models of neural coding while providing insights into the motor-cortical code." 
} ]
Scientific Reports
28720794
PMC5515977
10.1038/s41598-017-05988-5
Percolation-theoretic bounds on the cache size of nodes in mobile opportunistic networks
The node buffer size has a large influence on the performance of Mobile Opportunistic Networks (MONs). This is mainly because each node should temporarily cache packets to deal with the intermittently connected links. In this paper, we study fundamental bounds on node buffer size below which the network system cannot achieve the expected performance in terms of transmission delay and packet delivery ratio. Given the condition that each link has the same probability p to be active in the next time slot when the link is inactive and q to be inactive when the link is active, there exists a critical value p_c from a percolation perspective. If p > p_c, the network is in the supercritical case, where we found that there is an achievable upper bound on the buffer size of nodes, independent of the inactive probability q. When p < p_c, the network is in the subcritical case, and there exists a closed-form solution for buffer occupation, which is independent of the size of the network.
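The link dynamics assumed in the abstract form a simple two-state Markov chain: an inactive link becomes active with probability p in the next slot, and an active link becomes inactive with probability q. The short Python sketch below is not part of the paper; it simulates one such link and compares the empirical fraction of active slots against the chain's stationary value p/(p+q), with arbitrary example parameters.

```python
import random

def simulate_link(p, q, steps=100_000, seed=0):
    """Simulate one link as a two-state Markov chain.

    p: probability an inactive link becomes active in the next slot
    q: probability an active link becomes inactive in the next slot
    Returns the empirical fraction of slots in which the link was active.
    """
    rng = random.Random(seed)
    active = False
    active_slots = 0
    for _ in range(steps):
        if active:
            active_slots += 1
            if rng.random() < q:
                active = False
        else:
            if rng.random() < p:
                active = True
    return active_slots / steps

if __name__ == "__main__":
    p, q = 0.3, 0.5                      # illustrative values only
    empirical = simulate_link(p, q)
    stationary = p / (p + q)             # stationary probability of being active
    print(f"empirical: {empirical:.3f}, stationary p/(p+q): {stationary:.3f}")
```

This only illustrates the per-link model; the paper's percolation-theoretic buffer bounds concern the network-wide behaviour built on top of it.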
Related Works
MONs are a natural evolution of traditional mobile ad hoc networks. In MONs, the links are intermittently connected due to node mobility and power on/off; mobile nodes communicate with each other opportunistically and route packets in a store-carry-and-forward style. Over the past several years, much effort has been expended to improve the performance of opportunistic routing algorithms in terms of reducing the forwarding delay or increasing the packet delivery ratio. Some valuable results have been achieved that provide theoretical guidance for performance optimization; we review them in detail below.
Cache-aware Opportunistic Routing Algorithms
Considering the limited buffer size of portable devices, cache-aware solutions are particularly important in MONs. A. Balasubramanian et al. treat MONs routing as a resource allocation problem and turn the forwarding metric into per-packet utilities that incorporate two factors: the expected contribution of a packet if it were replicated, and the packet size. The utility is the ratio of the former over the latter, and it determines how a packet should be replicated in the system17. To deal with short contacts and fragmented bundles, M. J. Pitkanen and J. Ott18 integrated application-level erasure coding on top of existing protocols. They used Reed-Solomon codes to divide single bundles into multiple blocks and observed that the block redundancy increased the cache hit rate and reduced the response latency. S. Kaveevivitchai and H. Esaki proposed a message deletion strategy for a multi-copy routing scheme19. They employed extra nodes deployed at the system's hot regions to relay acknowledgement (ACK) messages, and copies matching the IDs of ACK messages are dropped from the buffer. A. T. Prodhan et al. proposed TBR20, which ranks messages by their TTL, hop count and number of copies. A node will delete the copy of a message if it receives a higher-priority message and its buffer is full. Recently, D. Pan et al.21 developed a comprehensive cache scheduling algorithm that integrates different aspects of storage management, including queue strategy, buffer replacement and redundancy deletion.
Performance Analysis of Cache-aware Opportunistic Routing Algorithms
Existing analytical results mainly focus on metrics such as the flooding time13, 22, 23, network diameter14 and delay-capacity tradeoff15, in which the buffer size of nodes is usually assumed to be unlimited. Several works discuss the congestion issue with the epidemic algorithm. For example, A. Krifa et al.24 proposed an efficient buffer management policy by modeling the relationship between the number of copies and the mean delivery delay/rate: when a new packet copy arrives at a node whose buffer is full, the node drops the packet with the minimal marginal utility value. G. Zhang and Y. Liu employed revenue management and dynamic programming to study the congestion management strategy of MONs25. Given a class of utility functions, they showed that an arriving packet should be accepted, and that this policy is optimal, if and only if the value of the benefit function is greater than that of the cost function. The authors of26 evaluated the impact of buffer size on the efficiency of four kinds of routing algorithms and observed that these protocols reacted differently to the increase of buffer size in mobile vehicle networks.
Generally speaking, both Epidemic and MaxProp benefit from increased buffer size on all nodes (i.e., the mobile and terminal nodes), whereas PROPHET and SprayWait show no significant improvement when only the buffer size of terminal nodes increases. X. Zhuo et al.27 explored the influence of contact duration on data forwarding performance. To maximize the delivery rate, they formulated the data replication problem as a mixed integer program, subject to the storage constraint.
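A recurring pattern in the buffer-management work surveyed above is utility-based eviction: each cached packet is assigned a utility score (for example, its expected delivery contribution divided by its size), and when the buffer overflows the packet with the lowest utility is dropped. The sketch below is a simplified, hypothetical illustration of that pattern rather than the algorithm of any cited paper; the utility function and the capacity values are placeholders.

```python
import heapq

class UtilityBuffer:
    """Toy buffer that evicts the lowest-utility packet when capacity is exceeded."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.heap = []  # min-heap of (utility, packet_id, size)

    def insert(self, packet_id, size, expected_contribution):
        # Placeholder utility: expected delivery contribution per byte of buffer space.
        utility = expected_contribution / size
        heapq.heappush(self.heap, (utility, packet_id, size))
        self.used += size
        # Evict lowest-utility packets (possibly the new one) until the buffer fits.
        while self.used > self.capacity and self.heap:
            victim_utility, victim_id, victim_size = heapq.heappop(self.heap)
            self.used -= victim_size
            print(f"dropped {victim_id} (utility={victim_utility:.2e})")

buf = UtilityBuffer(capacity_bytes=3000)
buf.insert("a", 1500, expected_contribution=0.9)
buf.insert("b", 1000, expected_contribution=0.2)
buf.insert("c", 1200, expected_contribution=0.8)  # overflow: "b" has the lowest utility and is dropped
```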
[]
[]
Scientific Reports
28729710
PMC5519723
10.1038/s41598-017-05778-z
Learning a Health Knowledge Graph from Electronic Medical Records
Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high-quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records, and maximum likelihood estimation of three probabilistic models (logistic regression, a naive Bayes classifier and a Bayesian network using noisy OR gates) was used to automatically construct knowledge graphs. A graph of disease-symptom relationships was elicited from the learned parameters, and the constructed knowledge graphs were evaluated and validated, with permission, against Google's manually constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high-quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high-quality knowledge graph reaching a precision of 0.85 at a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).
Related work
In recent work, Finlayson et al. quantify the relatedness of 1 million concepts by computing their co-occurrence in free-text notes in the EMR, releasing a “graph of medicine”22. Sondhi et al. measure the distance between mentions of two concepts within a clinical note for determination of edge-strength in the resulting graph23. Goodwin et al. use natural language processing to incorporate the belief state of the physician for assertions in the medical record, which is complementary to and could be used together with our approach24. Importantly, whereas the aforementioned works consider purely associative relations between medical concepts, our methodology models more complex relationships, and our evaluation focuses on whether the proposed algorithms can derive known causal relations between diseases and symptoms.
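As a concrete illustration of the noisy OR gate mentioned in the abstract, and of how it goes beyond the purely associative co-occurrence statistics discussed above, the snippet below computes the probability of a symptom given a set of present parent diseases. The disease weights and leak term are invented values for illustration only, not parameters learned in the study.

```python
from math import prod

def noisy_or(present_diseases, weights, leak=0.01):
    """P(symptom = 1 | diseases) under a noisy OR gate.

    weights[d] is the probability that disease d alone causes the symptom;
    `leak` accounts for causes not modelled explicitly.
    """
    failure = (1.0 - leak) * prod(1.0 - weights[d] for d in present_diseases)
    return 1.0 - failure

# Hypothetical parameters for illustration only.
weights = {"influenza": 0.7, "pneumonia": 0.4}
print(noisy_or(["influenza"], weights))               # symptom probability with one parent active
print(noisy_or(["influenza", "pneumonia"], weights))  # probability increases as more parents are active
```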
[ "17098763", "3295316", "1762578", "2695783", "7719792", "4552594", "19159006", "25977789", "18171485", "12123149", "27441408" ]
[ { "pmid": "17098763", "title": "Googling for a diagnosis--use of Google as a diagnostic aid: internet based study.", "abstract": "OBJECTIVE\nTo determine how often searching with Google (the most popular search engine on the world wide web) leads doctors to the correct diagnosis.\n\n\nDESIGN\nInternet based study using Google to search for diagnoses; researchers were blind to the correct diagnoses.\n\n\nSETTING\nOne year's (2005) diagnostic cases published in the case records of the New England Journal of Medicine.\n\n\nCASES\n26 cases from the New England Journal of Medicine; management cases were excluded.\n\n\nMAIN OUTCOME MEASURE\nPercentage of correct diagnoses from Google searches (compared with the diagnoses as published in the New England Journal of Medicine).\n\n\nRESULTS\nGoogle searches revealed the correct diagnosis in 15 (58%, 95% confidence interval 38% to 77%) cases.\n\n\nCONCLUSION\nAs internet access becomes more readily available in outpatient clinics and hospital wards, the web is rapidly becoming an important clinical tool for doctors. The use of web based searching may help doctors to diagnose difficult cases." }, { "pmid": "3295316", "title": "DXplain. An evolving diagnostic decision-support system.", "abstract": "DXplain is an evolving computer-based diagnostic decision-support system designed for use by the physician who has no computer expertise. DXplain accepts a list of clinical manifestations and then proposes diagnostic hypotheses. The program explains and justifies its interpretations and provides access to a knowledge base concerning the differential diagnosis of the signs and symptoms. DXplain was developed with the support and cooperation of the American Medical Association. The system is distributed to the medical community through AMA/NET--a nationwide computer communications network sponsored by the American Medical Association--and through the Massachusetts General Hospital Continuing Education Network. A key element in the distribution of DXplain is the planned collaboration with its physician-users whose comments, criticisms, and suggestions will play an important role in modifying and enhancing the knowledge base." }, { "pmid": "1762578", "title": "Probabilistic diagnosis using a reformulation of the INTERNIST-1/QMR knowledge base. I. The probabilistic model and inference algorithms.", "abstract": "In Part I of this two-part series, we report the design of a probabilistic reformulation of the Quick Medical Reference (QMR) diagnostic decision-support tool. We describe a two-level multiply connected belief-network representation of the QMR knowledge base of internal medicine. In the belief-network representation of the QMR knowledge base, we use probabilities derived from the QMR disease profiles, from QMR imports of findings, and from National Center for Health Statistics hospital-discharge statistics. We use a stochastic simulation algorithm for inference on the belief network. This algorithm computes estimates of the posterior marginal probabilities of diseases given a set of findings. In Part II of the series, we compare the performance of QMR to that of our probabilistic system on cases abstracted from continuing medical education materials from Scientific American Medicine. In addition, we analyze empirically several components of the probabilistic model and simulation algorithm." 
}, { "pmid": "2695783", "title": "Use of the Quick Medical Reference (QMR) program as a tool for medical education.", "abstract": "The original goal of the INTERNIST-1 project, as formulated in the early 1970s, was to develop an expert consultant program for diagnosis in general internal medicine. By the early 1980s, it was recognized that the most valuable product of the project was its medical knowledge base (KB). The INTERNIST-1/QMR KB comprehensively summarizes information contained in the medical literature regarding diagnosis of disorders seen in internal medicine. The QMR program was developed to enable its users to exploit the contents of the INTERNIST-1/QMR KB in educationally, clinically, and computationally useful ways. Utilizing commonly available microcomputers, the program operates at three levels--as an electronic textbook, as an intermediate level spreadsheet for the combination and exploration of simple diagnostic concepts, and as an expert consultant program. The electronic textbook contains an average of 85 findings and 8 associated disorders relevant to the diagnosis of approximately 600 disorders in internal medicine. Inverting the disease profiles creates extensive differential diagnosis lists for the over 4250 patient findings known to the system. Unlike a standard printed medical textbook, the QMR knowledge base can be manipulated \"on the fly\" to format displays that match the information needs of users. Preliminary use of the program for education of medical students and medical house officers at several sites has met with an enthusiastic response." }, { "pmid": "7719792", "title": "Medical diagnostic decision support systems--past, present, and future: a threaded bibliography and brief commentary.", "abstract": "Articles about medical diagnostic decision support (MDDS) systems often begin with a disclaimer such as, \"despite many years of research and millions of dollars of expenditures on medical diagnostic systems, none is in widespread use at the present time.\" While this statement remains true in the sense that no single diagnostic system is in widespread use, it is misleading with regard to the state of the art of these systems. Diagnostic systems, many simple and some complex, are now ubiquitous, and research on MDDS systems is growing. The nature of MDDS systems has diversified over time. The prospects for adoption of large-scale diagnostic systems are better now than ever before, due to enthusiasm for implementation of the electronic medical record in academic, commercial, and primary care settings. Diagnostic decision support systems have become an established component of medical technology. This paper provides a review and a threaded bibliography for some of the important work on MDDS systems over the years from 1954 to 1993." }, { "pmid": "4552594", "title": "Computer-aided diagnosis of acute abdominal pain.", "abstract": "This paper reports a controlled prospective unselected real-time comparison of human and computer-aided diagnosis in a series of 304 patients suffering from abdominal pain of acute onset.The computing system's overall diagnostic accuracy (91.8%) was significantly higher than that of the most senior member of the clinical team to see each case (79.6%). It is suggested as a result of these studies that the provision of such a system to aid the clinician is both feasible in a real-time clinical setting, and likely to be of practical value, albeit in a small percentage of cases." 
}, { "pmid": "25977789", "title": "Building the graph of medicine from millions of clinical narratives.", "abstract": "Electronic health records (EHR) represent a rich and relatively untapped resource for characterizing the true nature of clinical practice and for quantifying the degree of inter-relatedness of medical entities such as drugs, diseases, procedures and devices. We provide a unique set of co-occurrence matrices, quantifying the pairwise mentions of 3 million terms mapped onto 1 million clinical concepts, calculated from the raw text of 20 million clinical notes spanning 19 years of data. Co-frequencies were computed by means of a parallelized annotation, hashing, and counting pipeline that was applied over clinical notes from Stanford Hospitals and Clinics. The co-occurrence matrix quantifies the relatedness among medical concepts which can serve as the basis for many statistical tests, and can be used to directly compute Bayesian conditional probabilities, association rules, as well as a range of test statistics such as relative risks and odds ratios. This dataset can be leveraged to quantitatively assess comorbidity, drug-drug, and drug-disease patterns for a range of clinical, epidemiological, and financial applications." }, { "pmid": "18171485", "title": "A decision-analytic approach to define poor prognosis patients: a case study for non-seminomatous germ cell cancer patients.", "abstract": "BACKGROUND\nClassification systems may be useful to direct more aggressive treatment to cancer patients with a relatively poor prognosis. The definition of 'poor prognosis' often lacks a formal basis. We propose a decision analytic approach to weigh benefits and harms explicitly to define the treatment threshold for more aggressive treatment. This approach is illustrated by a case study in advanced testicular cancer, where patients with a high risk of mortality under standard treatment may be eligible for high-dose chemotherapy with stem cell support, which is currently defined by the IGCC classification.\n\n\nMETHODS\nWe used published literature to estimate the benefit and harm of high-dose chemotherapy (HD-CT) versus standard-dose chemotherapy (SD-CT) for patients with advanced non-seminomatous germ cell cancer. Benefit and harm were defined as the reduction and increase in absolute risk of mortality due to HD-CT respectively. Harm included early and late treatment related death, and treatment related morbidity (weighted by 'utility').\n\n\nRESULTS\nWe considered a conservative and an optimistic benefit of 30 and 40% risk reduction respectively. We estimated the excess treatment related mortality at 2%. When treatment related morbidity was taken into account, the harm of HD-CT increased to 5%. With a relative benefit of 30% and harm of 2 or 5%, HD-CT might be beneficial for patients with over 7 or 17% risk of cancer specific mortality with SD chemotherapy, while with a relative benefit of 40% HD-CT was beneficial over 5 and 12.5% risk respectively. Compared to the IGCC classification 14% of the patients would receive more aggressive treatment, and 2% less intensive treatment.\n\n\nCONCLUSION\nBenefit and harm can be used to define 'poor prognosis' explicitly for non-seminomatous germ cell cancer patients who are considered for high-dose chemotherapy. This approach can readily be adapted to new results and extended to other cancers to define candidates for more aggressive treatments." 
}, { "pmid": "12123149", "title": "A simple algorithm for identifying negated findings and diseases in discharge summaries.", "abstract": "Narrative reports in medical records contain a wealth of information that may augment structured data for managing patient information and predicting trends in diseases. Pertinent negatives are evident in text but are not usually indexed in structured databases. The objective of the study reported here was to test a simple algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent. We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases. We compared NegEx against a baseline algorithm that has a limited set of negation phrases and a simpler notion of scope. In a test of 1235 findings and diseases in 1000 sentences taken from discharge summaries indexed by physicians, NegEx had a specificity of 94.5% (versus 85.3% for the baseline), a positive predictive value of 84.5% (versus 68.4% for the baseline) while maintaining a reasonable sensitivity of 77.8% (versus 88.3% for the baseline). We conclude that with little implementation effort a simple regular expression algorithm for determining whether a finding or disease is absent can identify a large portion of the pertinent negatives from discharge summaries." }, { "pmid": "27441408", "title": "Population-Level Prediction of Type 2 Diabetes From Claims Data and Analysis of Risk Factors.", "abstract": "We present a new approach to population health, in which data-driven predictive models are learned for outcomes such as type 2 diabetes. Our approach enables risk assessment from readily available electronic claims data on large populations, without additional screening cost. Proposed model uncovers early and late-stage risk factors. Using administrative claims, pharmacy records, healthcare utilization, and laboratory results of 4.1 million individuals between 2005 and 2009, an initial set of 42,000 variables were derived that together describe the full health status and history of every individual. Machine learning was then used to methodically enhance predictive variable set and fit models predicting onset of type 2 diabetes in 2009-2011, 2010-2012, and 2011-2013. We compared the enhanced model with a parsimonious model consisting of known diabetes risk factors in a real-world environment, where missing values are common and prevalent. Furthermore, we analyzed novel and known risk factors emerging from the model at different age groups at different stages before the onset. Parsimonious model using 21 classic diabetes risk factors resulted in area under ROC curve (AUC) of 0.75 for diabetes prediction within a 2-year window following the baseline. The enhanced model increased the AUC to 0.80, with about 900 variables selected as predictive (p < 0.0001 for differences between AUCs). Similar improvements were observed for models predicting diabetes onset 1-3 years and 2-4 years after baseline. The enhanced model improved positive predictive value by at least 50% and identified novel surrogate risk factors for type 2 diabetes, such as chronic liver disease (odds ratio [OR] 3.71), high alanine aminotransferase (OR 2.26), esophageal reflux (OR 1.85), and history of acute bronchitis (OR 1.45). 
Liver risk factors emerge later in the process of diabetes development compared with obesity-related factors such as hypertension and high hemoglobin A1c. In conclusion, population-level risk prediction for type 2 diabetes using readily available administrative data is feasible and has better prediction performance than classical diabetes risk prediction algorithms on very large populations with missing data. The new model enables intervention allocation at national scale quickly and accurately and recovers potentially novel risk factors at different stages before the disease onset." } ]
GigaScience
28327936
PMC5530313
10.1093/gigascience/giw013
GUIdock-VNC: using a graphical desktop sharing system to provide a browser-based interface for containerized software
Background: Software container technology such as Docker can be used to package and distribute bioinformatics workflows consisting of multiple software implementations and dependencies. However, Docker is a command line–based tool, and many bioinformatics pipelines consist of components that require a graphical user interface. Results: We present a container tool called GUIdock-VNC that uses a graphical desktop sharing system to provide a browser-based interface for containerized software. GUIdock-VNC uses the Virtual Network Computing protocol to render the graphics within most commonly used browsers. We also present a minimal image builder that can add our proposed graphical desktop sharing system to any Docker package, with the end result that any Docker package can be run using a graphical desktop within a browser. In addition, GUIdock-VNC uses the OAuth2 authentication protocol when deployed on the cloud. Conclusions: As a proof-of-concept, we demonstrated the utility of GUIdock-noVNC in gene network inference. We benchmarked our container implementation on various operating systems and showed that our solution creates minimal overhead.
Related work
Software containers and Docker
A software container packages an application with everything it needs to run, including supporting libraries and system resources. Containers differ from traditional virtual machines (VMs) in that the resources of the operating system (OS), and not the hardware, are virtualized. In addition, multiple containers share a single OS kernel, thus saving considerable resources over multiple VMs. Linux has supported OS-level virtualization for several years. Docker (http://www.docker.com/) is an open source project that provides tools to set up and deploy Linux software containers. While Docker can run natively on Linux hosts, a small Linux VM is necessary to provide the virtualization services on Mac OS and Windows systems. On non-Linux systems, a single Docker container consists of a mini-VM, the Docker software layer, and the software container. However, multiple Docker containers can share the same mini-VM, saving considerable resources over using multiple individual VMs. Recently, support for OS-level virtualization has been added to Windows and the Macintosh operating system (Mac OS). Beta versions of Docker for both Windows and Mac OS are now available that allow Docker to run natively. These beta versions also allow native Windows and Mac OS software to be containerized and deployed in a similar manner [7]. Docker containers therefore provide a convenient and lightweight method for deploying open source workflows on multiple platforms.
GUIdock-X11
Although Docker provides a container with the original software environment, the host system, where the container software is executed, is responsible for rendering graphics. Our previous work, GUIdock-X11 [3], is one solution for bridging graphical information between the user and Docker containers using the common X11 graphics interface. GUIdock-X11 passes the container X11 commands to a host X11 client, which renders the GUI. Security is handled by encrypting the commands through secure shell (ssh) tunneling. We demonstrated the use of GUIdock-X11 [3] for systems biology applications, including Bioconductor packages written in R, C++, and Fortran, as well as Cytoscape, a standalone Java-based application with a graphical user interface. Neither Windows nor Mac OS uses X11 natively to render its graphics. Additional software such as MobaXterm [8] or socat [9] is needed to emulate X11 and locally render the graphics commands exported by the Docker container. However, a major advantage of the X11 method is that the commands to render the graphics, and not the graphics themselves, are transmitted, potentially reducing the total bandwidth required. Table 1 summarizes the differences between GUIdock-VNC and our previous work, GUIdock-X11.
Table 1: Comparison between GUIdock-X11 and GUIdock-VNC
Feature | GUIdock-X11 | GUIdock-VNC
Can be deployed on phones/tablets? | No | Yes
Security | ssh-tunnel | OAuth2
Bandwidth | Low | Low to medium
Cloud integration difficulty | Medium | Simple
Dockerfile setup | Manual editing | Automatic conversion of base Docker images
Case study: inference of gene networks
The inference of gene networks is a fundamental challenge in systems biology. We use gene network inference as a case study to demonstrate that GUIdock-X11 and GUIdock-VNC can be used to yield reproducible results from bioinformatics workflows. We have previously developed inference methods using a regression-based framework, in which we searched for candidate regulators (i.e., parent nodes) for each target gene [10–12].
Our methods are implemented in R, C++, and Fortran, and the implementation is available as a Bioconductor package called networkBMA (http://bioconductor.org/packages/release/bioc/html/networkBMA.html) [13]. To visualize the resulting gene networks, we previously developed a Cytoscape app called CyNetworkBMA (http://apps.cytoscape.org/apps/cynetworkbma) [14]. Cytoscape is a Java-based stand-alone application with a GUI for analyzing and visualizing graphs and networks [15–17]. Our app, CyNetworkBMA [14], integrates our networkBMA Bioconductor package into Cytoscape, allowing the user to directly visualize the gene networks inferred by networkBMA using the Cytoscape utilities. The integration of multiple pieces of software, each with its own dependencies, makes CyNetworkBMA an ideal proof-of-concept application for illustrating the utility of GUIdock-VNC.
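To make the deployment workflow described in this section more concrete, the sketch below shows how a browser-accessible container of this kind might be launched from Python. The image name and the web port are placeholders (noVNC-style setups often expose a port such as 6080), so every identifier here is an assumption rather than the actual GUIdock-VNC invocation.

```python
import subprocess
import webbrowser

def launch_gui_container(image="example/guidock-vnc-demo", host_port=6080, container_port=6080):
    """Start a containerized GUI application and open its browser-based desktop.

    The image name and ports are illustrative placeholders; a real deployment
    would substitute the published image and its documented web port.
    """
    subprocess.run(
        ["docker", "run", "-d", "--rm",
         "-p", f"{host_port}:{container_port}",
         image],
        check=True,
    )
    # The containerized desktop is then rendered by any modern browser.
    webbrowser.open(f"http://localhost:{host_port}")

if __name__ == "__main__":
    launch_gui_container()
```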
[ "26913191", "22084118", "22898396", "24742092", "14597658", "17947979", "25485619", "20308593" ]
[ { "pmid": "26913191", "title": "BioShaDock: a community driven bioinformatics shared Docker-based tools registry.", "abstract": "Linux container technologies, as represented by Docker, provide an alternative to complex and time-consuming installation processes needed for scientific software. The ease of deployment and the process isolation they enable, as well as the reproducibility they permit across environments and versions, are among the qualities that make them interesting candidates for the construction of bioinformatic infrastructures, at any scale from single workstations to high throughput computing architectures. The Docker Hub is a public registry which can be used to distribute bioinformatic software as Docker images. However, its lack of curation and its genericity make it difficult for a bioinformatics user to find the most appropriate images needed. BioShaDock is a bioinformatics-focused Docker registry, which provides a local and fully controlled environment to build and publish bioinformatic software as portable Docker images. It provides a number of improvements over the base Docker registry on authentication and permissions management, that enable its integration in existing bioinformatic infrastructures such as computing platforms. The metadata associated with the registered images are domain-centric, including for instance concepts defined in the EDAM ontology, a shared and structured vocabulary of commonly used terms in bioinformatics. The registry also includes user defined tags to facilitate its discovery, as well as a link to the tool description in the ELIXIR registry if it already exists. If it does not, the BioShaDock registry will synchronize with the registry to create a new description in the Elixir registry, based on the BioShaDock entry metadata. This link will help users get more information on the tool such as its EDAM operations, input and output types. This allows integration with the ELIXIR Tools and Data Services Registry, thus providing the appropriate visibility of such images to the bioinformatics community." }, { "pmid": "22084118", "title": "Construction of regulatory networks using expression time-series data of a genotyped population.", "abstract": "The inference of regulatory and biochemical networks from large-scale genomics data is a basic problem in molecular biology. The goal is to generate testable hypotheses of gene-to-gene influences and subsequently to design bench experiments to confirm these network predictions. Coexpression of genes in large-scale gene-expression data implies coregulation and potential gene-gene interactions, but provide little information about the direction of influences. Here, we use both time-series data and genetics data to infer directionality of edges in regulatory networks: time-series data contain information about the chronological order of regulatory events and genetics data allow us to map DNA variations to variations at the RNA level. We generate microarray data measuring time-dependent gene-expression levels in 95 genotyped yeast segregants subjected to a drug perturbation. We develop a Bayesian model averaging regression algorithm that incorporates external information from diverse data types to infer regulatory networks from the time-series and genetics data. Our algorithm is capable of generating feedback loops. We show that our inferred network recovers existing and novel regulatory relationships. 
Following network construction, we generate independent microarray data on selected deletion mutants to prospectively test network predictions. We demonstrate the potential of our network to discover de novo transcription-factor binding sites. Applying our construction method to previously published data demonstrates that our method is competitive with leading network construction algorithms in the literature." }, { "pmid": "22898396", "title": "Integrating external biological knowledge in the construction of regulatory networks from time-series expression data.", "abstract": "BACKGROUND\nInference about regulatory networks from high-throughput genomics data is of great interest in systems biology. We present a Bayesian approach to infer gene regulatory networks from time series expression data by integrating various types of biological knowledge.\n\n\nRESULTS\nWe formulate network construction as a series of variable selection problems and use linear regression to model the data. Our method summarizes additional data sources with an informative prior probability distribution over candidate regression models. We extend the Bayesian model averaging (BMA) variable selection method to select regulators in the regression framework. We summarize the external biological knowledge by an informative prior probability distribution over the candidate regression models.\n\n\nCONCLUSIONS\nWe demonstrate our method on simulated data and a set of time-series microarray experiments measuring the effect of a drug perturbation on gene expression levels, and show that it outperforms leading regression-based methods in the literature." }, { "pmid": "24742092", "title": "Fast Bayesian inference for gene regulatory networks using ScanBMA.", "abstract": "BACKGROUND\nGenome-wide time-series data provide a rich set of information for discovering gene regulatory relationships. As genome-wide data for mammalian systems are being generated, it is critical to develop network inference methods that can handle tens of thousands of genes efficiently, provide a systematic framework for the integration of multiple data sources, and yield robust, accurate and compact gene-to-gene relationships.\n\n\nRESULTS\nWe developed and applied ScanBMA, a Bayesian inference method that incorporates external information to improve the accuracy of the inferred network. In particular, we developed a new strategy to efficiently search the model space, applied data transformations to reduce the effect of spurious relationships, and adopted the g-prior to guide the search for candidate regulators. Our method is highly computationally efficient, thus addressing the scalability issue with network inference. The method is implemented as the ScanBMA function in the networkBMA Bioconductor software package.\n\n\nCONCLUSIONS\nWe compared ScanBMA to other popular methods using time series yeast data as well as time-series simulated data from the DREAM competition. We found that ScanBMA produced more compact networks with a greater proportion of true positives than the competing methods. Specifically, ScanBMA generally produced more favorable areas under the Receiver-Operating Characteristic and Precision-Recall curves than other regression-based methods and mutual-information based methods. In addition, ScanBMA is competitive with other network inference methods in terms of running time." 
}, { "pmid": "14597658", "title": "Cytoscape: a software environment for integrated models of biomolecular interaction networks.", "abstract": "Cytoscape is an open source software project for integrating biomolecular interaction networks with high-throughput expression data and other molecular states into a unified conceptual framework. Although applicable to any system of molecular components and interactions, Cytoscape is most powerful when used in conjunction with large databases of protein-protein, protein-DNA, and genetic interactions that are increasingly available for humans and model organisms. Cytoscape's software Core provides basic functionality to layout and query the network; to visually integrate the network with expression profiles, phenotypes, and other molecular states; and to link the network to databases of functional annotations. The Core is extensible through a straightforward plug-in architecture, allowing rapid development of additional computational analyses and features. Several case studies of Cytoscape plug-ins are surveyed, including a search for interaction pathways correlating with changes in gene expression, a study of protein complexes involved in cellular recovery to DNA damage, inference of a combined physical/functional interaction network for Halobacterium, and an interface to detailed stochastic/kinetic gene regulatory models." }, { "pmid": "17947979", "title": "Integration of biological networks and gene expression data using Cytoscape.", "abstract": "Cytoscape is a free software package for visualizing, modeling and analyzing molecular and genetic interaction networks. This protocol explains how to use Cytoscape to analyze the results of mRNA expression profiling, and other functional genomics and proteomics experiments, in the context of an interaction network obtained for genes of interest. Five major steps are described: (i) obtaining a gene or protein network, (ii) displaying the network using layout algorithms, (iii) integrating with gene expression and other functional attributes, (iv) identifying putative complexes and functional modules and (v) identifying enriched Gene Ontology annotations in the network. These steps provide a broad sample of the types of analyses performed by Cytoscape." }, { "pmid": "25485619", "title": "A comprehensive transcriptional portrait of human cancer cell lines.", "abstract": "Tumor-derived cell lines have served as vital models to advance our understanding of oncogene function and therapeutic responses. Although substantial effort has been made to define the genomic constitution of cancer cell line panels, the transcriptome remains understudied. Here we describe RNA sequencing and single-nucleotide polymorphism (SNP) array analysis of 675 human cancer cell lines. We report comprehensive analyses of transcriptome features including gene expression, mutations, gene fusions and expression of non-human sequences. Of the 2,200 gene fusions catalogued, 1,435 consist of genes not previously found in fusions, providing many leads for further investigation. We combine multiple genome and transcriptome features in a pathway-based approach to enhance prediction of response to targeted therapeutics. Our results provide a valuable resource for studies that use cancer cell lines." 
}, { "pmid": "20308593", "title": "Revealing strengths and weaknesses of methods for gene network inference.", "abstract": "Numerous methods have been developed for inferring gene regulatory networks from expression data, however, both their absolute and comparative performance remain poorly understood. In this paper, we introduce a framework for critical performance assessment of methods for gene network inference. We present an in silico benchmark suite that we provided as a blinded, community-wide challenge within the context of the DREAM (Dialogue on Reverse Engineering Assessment and Methods) project. We assess the performance of 29 gene-network-inference methods, which have been applied independently by participating teams. Performance profiling reveals that current inference methods are affected, to various degrees, by different types of systematic prediction errors. In particular, all but the best-performing method failed to accurately infer multiple regulatory inputs (combinatorial regulation) of genes. The results of this community-wide experiment show that reliable network inference from gene expression data remains an unsolved problem, and they indicate potential ways of network reconstruction improvements." } ]
Journal of the Association for Information Science and Technology
28758138
PMC5530597
10.1002/asi.23063
Author Name Disambiguation for PubMed
Log analysis shows that PubMed users frequently use author names in queries for retrieving scientific literature. However, author name ambiguity may lead to irrelevant retrieval results. To improve the PubMed user experience with author name queries, we designed an author name disambiguation system consisting of similarity estimation and agglomerative clustering. A machine-learning method was employed to score the features for disambiguating a pair of papers with ambiguous names. These features enable the computation of pairwise similarity scores to estimate the probability of a pair of papers belonging to the same author, which drives an agglomerative clustering algorithm regulated by two factors: name compatibility and probability level. With transitivity violation correction, high-precision author clustering is achieved by focusing on minimizing false-positive pairing. Disambiguation performance is evaluated with manual verification of random samples of pairs from clustering results. When compared with a state-of-the-art system, our evaluation shows that among all the pairs, the lumping error rate drops from 10.1% to 2.2% for our system, while the splitting error rate rises from 1.8% to 7.7%. This results in an overall error rate of 9.9%, compared with 11.9% for the state-of-the-art method. Other evaluations based on gold standard data also show the increase in accuracy of our clustering. We attribute the performance improvement to the machine-learning method driven by a large-scale training set and the clustering algorithm regulated by a name compatibility scheme preferring precision. With integration of the author name disambiguation system into the PubMed search engine, the overall click-through rate of PubMed users on author name query results improved from 34.9% to 36.9%.
Related Work
Due to the limitations of manual authorship management, numerous recent name disambiguation studies focus on automatic techniques for large-scale literature systems. To process large-scale data efficiently, it is necessary to define the scope (block) of author name disambiguation appropriately, minimizing the computation cost without losing significant clustering opportunities. A block is a set of name variants that are considered candidates for possible identity. Several blocking methods for handling name variants have been discussed (On, Lee, Kang, & Mitra, 2005) with the aim of finding an appropriate block size. Collective entity resolution (Bhattacharya & Getoor, 2007) shows that clustering quality can be improved, at the cost of additional computational complexity, by examining clustering beyond block division. In our work, a block (namespace) consists of citations sharing a common last name and first-name initial. Normally, disambiguation methods estimate authorship relationships within the same block by clustering citations with high intercitation similarity while separating citations with low similarity. The intercitation similarities are estimated from different features. Some systems use coauthor information alone (On et al., 2005; Kang, Na, Lee, Jung, Kim, et al., 2009). Some combine various citation features (Han, Zha, & Giles, 2005; Torvik, Weeber, Swanson, & Smalheiser, 2005; Soler, 2007; Yin et al., 2007), and some combine citation features with disambiguating heuristics based on predefined patterns (Cota, Ferreira, Nascimento, Gonçalves, & Laender, 2010). Besides conventional citation information, some works (Song et al., 2007; Yang, Peng, Jiang, Lee, & Ho, 2008; Bernardi & Le, 2011) also exploit topic models to obtain features. McRae-Spencer and Shadbolt (2006) include self-citation information as features, and Levin et al. (2012) extend the feature set to citation relationships between articles. Similar to some previous works, the features in our machine-learning method are also created from PubMed metadata that are strongly indicative of author identity. However, instead of using the name to be disambiguated as part of the similarity profile, we use author name compatibility to restrict the clustering process. Numerous methods have been developed to convert authorship features into an intercitation distance. In unsupervised learning (Mann & Yarowsky, 2003; Ferreira, Veloso, Gonçalves, & Laender, 2010), extraction patterns based on biographic data and recurring coauthorship patterns are employed to create positive training data. Supervised machine-learning approaches require training data sets to learn the feature weighting (Han, Giles, Zha, Li, & Tsioutsiouliklis, 2004; On et al., 2005; Torvik et al., 2005; Treeratpituk & Giles, 2009). Some supervised methods (Han et al., 2004; On et al., 2005; Huang et al., 2006) train kernel-based classifiers such as SVMs on the training data to find the optimal separating hyperplane and perform feature selection as well. In Treeratpituk and Giles (2009), a Random Forest approach is shown to benefit from variable importance in feature selection, which helps it outperform other techniques. We use a supervised machine-learning method to characterize the same-author relationship with large training data sets. Our positive training data are assembled based on the assumption that a rare author name generally represents the same author throughout, similar to the construction of matching data in Torvik and Smalheiser (2009).
However, our training data sets are much larger, and the positive data are filtered with a name compatibility check and a publication year check to minimize false-positive pairs. Typical methods for computing authorship probability include computing similarity based on term frequency (On et al., 2005; Treeratpituk & Giles, 2009) and estimating the prior probability from random samples (Soler, 2007; Torvik & Smalheiser, 2009). In our work, the similarity profile for a citation pair is converted to a pairwise probability with the pool adjacent violators (PAV) algorithm. In this process, both name frequency and a prior estimate of the proportion of positive pairs play an important role. Name disambiguation is implemented by assigning citations to a named author based on the relationships among citations, usually during a clustering process. Clustering algorithms used for this purpose include agglomerative clustering with intercitation distance (Mann & Yarowsky, 2003; Culotta, Kanani, Hall, Wick, & McCallum, 2007; Song et al., 2007; Kang et al., 2009; Torvik & Smalheiser, 2009), K-way spectral clustering (Han et al., 2005), boosted trees (Wang, Berzins, Hicks, Melkers, Xiao, et al., 2012), affinity propagation (Fan, Wang, Pu, Zhou, & Lv, 2011), quasi-cliques (On, Elmacioglu, Lee, Kang, & Pei, 2006), latent topic models (Shu, Long, & Meng, 2009), and density-based spatial clustering of applications with noise (DBSCAN) (Huang et al., 2006). To minimize the impact of data noise on clustering quality, transitivity violations of pairwise relationships are corrected in DBSCAN (Huang et al., 2006) and with a triplet correction scheme in Torvik and Smalheiser (2009). To correct transitivity violations, we apply a correction scheme similar to that of Torvik and Smalheiser (2009), with stronger boosting for false-negative pairs. As in previous work (Mann & Yarowsky, 2003; Torvik & Smalheiser, 2009), an unsupervised agglomerative clustering algorithm disambiguates citations within a namespace. As pointed out before (Mann & Yarowsky, 2003; Tang, 2012), clustering the most similar citations first helps create a high-quality clustering result. In addition to ordering the clustering by similarity level, our clustering process is regulated by another ordering scheme based on name compatibility, which schedules merges between clusters with closer name information at earlier stages.
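The clustering pipeline summarized above (block citations by last name and first initial, score citation pairs with a learned model, then agglomeratively merge the most probable pairs first) can be sketched in a few lines. The code below is a simplified illustration under those assumptions: the pairwise scorer is a stub standing in for the trained model, and the name-compatibility ordering and transitivity-violation correction described in the text are omitted.

```python
from itertools import combinations

def block_key(author_name):
    """Blocking key: last name plus first-name initial (assumes 'Last, First' strings)."""
    last, first = author_name.lower().split(",", 1)
    return f"{last.strip()} {first.strip()[0]}"

def cluster_block(citations, pair_probability, threshold=0.9):
    """Greedy agglomerative clustering within one block: merge the most probable pairs first."""
    clusters = {i: {i} for i in range(len(citations))}   # cluster id -> member indices
    assignment = {i: i for i in range(len(citations))}   # citation index -> cluster id
    scored_pairs = sorted(
        ((pair_probability(citations[i], citations[j]), i, j)
         for i, j in combinations(range(len(citations)), 2)),
        reverse=True,
    )
    for prob, i, j in scored_pairs:
        if prob < threshold:
            break                                         # remaining pairs are even less probable
        ci, cj = assignment[i], assignment[j]
        if ci != cj:                                      # merge the two clusters
            clusters[ci] |= clusters[cj]
            for member in clusters.pop(cj):
                assignment[member] = ci
    return list(clusters.values())

def toy_probability(c1, c2):
    """Stub standing in for the learned pairwise model (here: shared-coauthor count)."""
    shared = len(set(c1["coauthors"]) & set(c2["coauthors"]))
    return min(1.0, 0.5 * shared)

print(block_key("Smith, John"))                           # 'smith j'
citations = [
    {"coauthors": {"lee", "kim"}},
    {"coauthors": {"lee", "park"}},
    {"coauthors": {"garcia"}},
]
print(cluster_block(citations, toy_probability, threshold=0.5))
```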
[ "17971238" ]
[ { "pmid": "17971238", "title": "PubMed related articles: a probabilistic topic-based model for content similarity.", "abstract": "BACKGROUND\nWe present a probabilistic topic-based model for content similarity called pmra that underlies the related article search feature in PubMed. Whether or not a document is about a particular topic is computed from term frequencies, modeled as Poisson distributions. Unlike previous probabilistic retrieval models, we do not attempt to estimate relevance-but rather our focus is \"relatedness\", the probability that a user would want to examine a particular document given known interest in another. We also describe a novel technique for estimating parameters that does not require human relevance judgments; instead, the process is based on the existence of MeSH in MEDLINE.\n\n\nRESULTS\nThe pmra retrieval model was compared against bm25, a competitive probabilistic model that shares theoretical similarities. Experiments using the test collection from the TREC 2005 genomics track shows a small but statistically significant improvement of pmra over bm25 in terms of precision.\n\n\nCONCLUSION\nOur experiments suggest that the pmra model provides an effective ranking algorithm for related article search." } ]
Nutrients
28653995
PMC5537777
10.3390/nu9070657
NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment
Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson’s disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson’s disease patients.
1.1. Related Work
While there have not been any dedicated drink image recognition systems, there have been multiple approaches to food image recognition in the past, and we will briefly mention the most important ones here. In 2009, an extensive food image and video dataset was built to encourage further research in the field: the Pittsburgh Fast-Food Image Dataset (PFID), containing 4545 still images, 606 stereo image pairs, 303 360° food videos and 27 eating videos of 101 different food items, such as “chicken nuggets” and “cheese pizza” [4]. Unfortunately, this dataset focuses only on fast-food items, not on foods in general. The authors provided the results of two baseline recognition methods tested on the PFID dataset, both using an SVM (Support Vector Machine) classifier to differentiate between the learned features; they achieved a classification accuracy of 11% with the color histogram method and 24% with the bag-of-SIFT-features method. The latter method counts the occurrences of local image features described by the popular SIFT (Scale-Invariant Feature Transform) descriptor [5]. These two methods were chosen based on their popularity in computer vision applications, but the low classification accuracy showed that food image recognition is a challenging computer vision task, requiring a more complex feature representation. In the same year, a food image recognition system that uses the multiple kernel learning method was introduced, which tested different feature extractors, and their combination, on a self-acquired dataset [6]. This proved to be a step in the right direction, as the authors achieved an accuracy of 26% to 38% for the individual features they used and an accuracy of 61.34% when these features were combined; the features included color, texture and SIFT information. Upon conducting a real-world test on 166 food images taken with mobile phones, the authors reported a lower classification accuracy of 37.35%, which was due to factors like occlusion, noise and additional items being present in the real-world images. The fact that the combination of features performed better than the individual features further hinted at the need for a more in-depth representation of the food images. The following year, the pairwise local features method, which applies the specifics of food images to their recognition, was presented [7]. This method analyzes the ingredient relations in the food image, such as the relations between bread and meat in a sandwich, by computing pairwise statistics between the local features. The authors performed an evaluation of their algorithm on the PFID dataset and achieved an accuracy of 19% to 28%, depending on which measure they employed in the pairwise local features method. However, they also noted that the dataset had narrowly defined food classes, and after joining them into 7 classes, they reported an accuracy of 69% to 78%. This further confirmed the limitations of food image recognition approaches of that time: if a food image recognition algorithm achieved a high classification accuracy, it was only because the food classes were very general (e.g., “chicken”). In 2014, another approach was presented that uses an optimized bag-of-features model for food image recognition [8]. The authors tested 14 different color and texture descriptors for this model and found that the HSV-SIFT descriptor provided the best result. This descriptor describes the local textures in all three color channels of the HSV color space.
The model was tested on a food image dataset that was built for the project Type 1 Diabetes Self-Management and Carbohydrate Counting: A Computer Vision Based Approach (GoCARB) [9], within the scope of which a food recognition system for diabetes patients was constructed. The authors achieved an accuracy of 77.80%, which was considerably higher than that of previous approaches. All of the previously described solutions are based on manually defined feature extractors that rely on specific features, such as color or texture, to recognize the entire range of food images. Furthermore, the images used in the recognition systems presented in these solutions were taken under strict conditions, containing only one food dish per image and often perfectly cropped. The images that contained multiple items were manually segmented and annotated, so the final inputs for these hand-crafted recognition systems were always ideally prepared images. The results from these research works are therefore not indicative of general real-world performance due to the same problems with real-world images as listed above. These issues show that hand-crafted approaches are not ideal for a task as complex as food image recognition, where the best approach appears to be a complex combination of a large number of features. This is why deep convolutional neural networks, which automatically learn appropriate image features, have achieved the best results in the field. Deep neural networks can also learn to disregard surrounding noise with sufficient training data, eliminating the need for perfect image cropping. Another approach for the image segmentation task is to train a neural network that performs semantic segmentation, which directly assigns class labels to each region of the input image [10,11]. Furthermore, deep neural networks can be trained in such a way that they perform both object detection and recognition in the same network [12,13]. In 2014, Kawano et al. used deep convolutional neural networks to complement hand-crafted image features [14] and achieved a 72.26% accuracy on the University of Electro-Communications Food 100 (UEC-FOOD100) dataset that was made publicly available in 2012 [15]; this was the highest accuracy on the dataset at that time. Also in 2014, a larger version of the UEC-FOOD100 dataset was introduced, the University of Electro-Communications Food 256 (UEC-FOOD256), which contains 256 as opposed to 100 food classes [16]; while UEC-FOOD100 is composed of mostly Japanese food dishes, UEC-FOOD256 expands on this dataset with some international dishes. At that time, another food image dataset was made publicly available: the Food-101 dataset. This dataset contains 101,000 images of 101 different food items, and the authors used the popular random forest method for the recognition task, with which they achieved an accuracy of 50.76% [17]. They reported that while this result outperformed other hand-crafted efforts, it could not match the accuracy that deep learning approaches provided. This was further confirmed by subsequently published work, such as that of Kagaya et al., who tested both food detection and food recognition using deep convolutional neural networks on a self-acquired dataset and achieved encouraging results: a classification accuracy of 73.70% for the recognition task and 93.80% for the detection task [18]. In 2015, Yanai et al.
improved on the best UEC-FOOD100 result, again with deep convolutional neural networks, this time with pre-training on the ImageNet dataset [19]. The accuracy they achieved was 78.77% [20]. A few months later, Christodoulidis et al. presented their own food recognition system based on deep convolutional neural networks, with which they achieved an accuracy of 84.90% on a self-acquired and manually-annotated dataset [21].
In 2016, Singla et al. used the famous GoogLeNet deep learning architecture [22], which is described in Section 2.2, on two datasets of food images, collected using cameras and combined with images from existing image datasets and social media. With a pre-trained model, they reported a recognition accuracy of 83.60% and a detection accuracy of 99.20% [23]. Also in 2016, Liu et al. achieved similarly encouraging results on the UEC-FOOD100, UEC-FOOD256 and Food-101 datasets by using an optimized convolution technique in their neural network architecture [24], which allowed them to reach accuracies of 76.30%, 54.70% and 77.40%, respectively. Furthermore, Tanno et al. introduced DeepFoodCam, a smartphone food image recognition application that uses deep convolutional neural networks with a focus on recognition speed [25]. Another food image dataset was made publicly available in that year: the University of Milano-Bicocca 2016 (UNIMIB2016) dataset [26]. This dataset is composed of images of 1027 food trays from an Italian canteen, containing a total of 3616 food instances divided into 73 food classes. The authors tested a combined segmentation and recognition deep convolutional neural network model on this dataset and achieved an accuracy of 78.30%. Finally, in 2016, Hassannejad et al. achieved the current best classification accuracy values of 81.45% on the UEC-FOOD100 dataset, 76.17% on the UEC-FOOD256 dataset and 88.28% on the Food-101 dataset [27]. All three results were obtained by using a deep neural network model based on the Google architecture Inception; this architecture is the basis for the previously-mentioned GoogLeNet.
Deep learning thus appears to be a very promising approach in the field of food image recognition. Previous deep learning studies reported high classification accuracies, confirming the viability of the approach, but they focused on smaller food image datasets, often limited to 100 food items or fewer. Moreover, none of these solutions recognize drink images. In this paper, we present a solution that addresses these issues. We developed a new deep convolutional neural network architecture called NutriNet and trained it on images acquired from web searches for individual food and drink items. With this architecture, we achieved a higher classification accuracy than most of the results presented above and found that, on our recognition dataset, it performs better than AlexNet, the deep learning architecture it is based on; the results are described in depth in Section 3. Additionally, we developed an online training component that automatically fine-tunes the deep learning image recognition model upon receiving new images from users, thus increasing the number of recognizable items and the classification accuracy over time.
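The incremental fine-tuning idea behind such an online training component can be pictured in a few lines of PyTorch. This is an illustration of the general mechanism only, not the actual NutriNet training code: a pretrained torchvision AlexNet stands in for the recognition model, the newly received user images are assumed to be arranged in an ImageFolder directory, and the hyperparameters are placeholders.

# Illustrative fine-tuning loop for newly received images (not the NutriNet code).
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

def fine_tune(new_images_dir, num_classes, epochs=2, lr=1e-4, device="cpu"):
    tfm = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    data = datasets.ImageFolder(new_images_dir, transform=tfm)
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
    # Swap the final fully-connected layer so the output matches the current label set,
    # which is how new food and drink classes can be added over time.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    model.to(device).train()

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model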
The online training is described in Section 2.3.
By solving the computer vision problem of recognizing food and drink items from images, we hope to ease the burden of dietary assessment. Our recognition system is therefore integrated into the PD Nutrition application for the dietary assessment of Parkinson’s disease patients [28], which is being developed in the scope of the project mHealth Platform for Parkinson’s Disease Management (PD_manager) [29]. In practice, the system works in the following way: Parkinson’s disease patients take an image of food or drink items using a smartphone camera, and our system performs recognition on this image using deep convolutional neural networks. The result is a food or drink label, which is then matched against a database of nutritional information, thus providing the patients with an automatic solution for food logging and dietary assessment.
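Conceptually, the inference side of this workflow reduces to a short routine: classify the photo with the trained network and look the predicted label up in a nutritional table. The sketch below is only an outline of that flow; the model object, class-name list and nutrition table are hypothetical placeholders rather than the actual PD Nutrition application code.

# Sketch of the recognition-to-food-logging flow (illustrative placeholders only).
import torch
from PIL import Image
from torchvision import transforms

def log_meal(photo_path, model, class_names, nutrition_table):
    tfm = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    image = tfm(Image.open(photo_path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        label = class_names[model(image).argmax(dim=1).item()]
    # Match the predicted label against a nutritional-information table,
    # e.g. {"espresso": {"kcal": 2, "carbohydrates_g": 0.3}, ...}.
    return {"item": label, "nutrition": nutrition_table.get(label)}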
[ "26017442", "14449617", "25014934", "28114043" ]
[ { "pmid": "26017442", "title": "Deep learning.", "abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech." }, { "pmid": "25014934", "title": "A food recognition system for diabetic patients based on an optimized bag-of-features model.", "abstract": "Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition, based on the bag-of-features (BoF) model. An extensive technical investigation was conducted for the identification and optimization of the best performing components involved in the BoF architecture, as well as the estimation of the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5000 food images was created and organized into 11 classes. The optimized system computes dense local features, using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10000 visual words by using the hierarchical k-means clustering and finally classifies the food images with a linear support vector machine classifier. The system achieved classification accuracy of the order of 78%, thus proving the feasibility of the proposed approach in a very challenging image dataset." }, { "pmid": "28114043", "title": "Food Recognition: A New Dataset, Experiments, and Results.", "abstract": "We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using also several visual descriptors. We achieve about 79% of food and tray recognition accuracy using convolutional-neural-networks-based features. The dataset, as well as the benchmark framework, are available to the research community." } ]
Frontiers in Psychology
28824478
PMC5541010
10.3389/fpsyg.2017.01255
Toward Studying Music Cognition with Information Retrieval Techniques: Lessons Learned from the OpenMIIR Initiative
As an emerging sub-field of music information retrieval (MIR), music imagery information retrieval (MIIR) aims to retrieve information from brain activity recorded during music cognition–such as listening to or imagining music pieces. This is a highly inter-disciplinary endeavor that requires expertise in MIR as well as cognitive neuroscience and psychology. The OpenMIIR initiative strives to foster collaborations between these fields to advance the state of the art in MIIR. As a first step, electroencephalography (EEG) recordings of music perception and imagination have been made publicly available, enabling MIR researchers to easily test and adapt their existing approaches for music analysis like fingerprinting, beat tracking or tempo estimation on this new kind of data. This paper reports on first results of MIIR experiments using these OpenMIIR datasets and points out how these findings could drive new research in cognitive neuroscience.
3. Related work
Retrieval based on brain wave recordings is still a very young and largely unexplored domain. EEG signals have been used to recognize emotions induced by music perception (Lin et al., 2009; Cabredo et al., 2012) and to distinguish perceived rhythmic stimuli (Stober et al., 2014). It has been shown that oscillatory neural activity in the gamma frequency band (20–60 Hz) is sensitive to accented tones in a rhythmic sequence (Snyder and Large, 2005) and that oscillations in the beta band (20–30 Hz) increase in anticipation of strong tones in a non-isochronous sequence (Fujioka et al., 2009, 2012; Iversen et al., 2009). While listening to rhythmic sequences, the magnitude of steady-state evoked potentials (SSEPs), i.e., neural oscillations entrained to the stimulus, changes for frequencies related to the metrical structure of the rhythm as a sign of entrainment to beat and meter (Nozaradan et al., 2011, 2012).
EEG studies by Geiser et al. (2009) have further shown that perturbations of the rhythmic pattern lead to distinguishable electrophysiological responses, commonly referred to as Event-Related Potentials (ERPs). This effect appears to be independent of the listener's level of musical proficiency. Furthermore, Vlek et al. (2011) showed that imagined auditory accents imposed on top of a steady metronome click can be recognized from ERPs. However, as is usual in ERP analysis, dealing with noise in the EEG signal and reducing the impact of unrelated brain activity requires averaging the brain responses recorded for many events. In contrast, retrieval scenarios usually only consider single trials. Nevertheless, findings from ERP studies can guide the design of single-trial approaches, as demonstrated in subsection 4.1.
EEG has also been successfully used to distinguish perceived melodies. In a study conducted by Schaefer et al. (2011), 10 participants listened to 7 short melody clips with a length between 3.26 and 4.36 s. For single-trial classification, each stimulus was presented for a total of 140 trials in randomized back-to-back sequences of all stimuli. Using a quadratically regularized linear logistic-regression classifier with 10-fold cross-validation, the authors were able to successfully classify the ERPs of single trials. Within subjects, the accuracy varied between 25% and 70%. Applying the same classification scheme across participants, they obtained between 35% and 53% accuracy. In a further analysis, they combined all trials from all subjects and stimuli into a grand-average ERP. Using singular-value decomposition, they obtained a fronto-central component that explained 23% of the total signal variance. The related time courses showed significant differences between stimuli that were strong enough for cross-participant classification. Furthermore, a correlation with the stimulus envelopes of up to 0.48 was observed, with the highest value over all stimuli at a time lag of 70–100 ms.
Results from fMRI studies by Herholz et al. (2012) and Halpern et al. (2004) provide strong evidence that perception and imagination of music share common processes in the brain, which is beneficial for training MIIR systems. As Hubbard (2010) concludes in his review of the literature on auditory imagery, “auditory imagery preserves many structural and temporal properties of auditory stimuli” and “involves many of the same brain areas as auditory perception”. This is also underlined by Schaefer (2011, p.
142) whose “most important conclusion is that there is a substantial amount of overlap between the two tasks [music perception and imagination], and that ‘internally’ creating a perceptual experience uses functionalities of ‘normal’ perception.” Thus, brain signals recorded while listening to a music piece could serve as reference data for a retrieval system in order to detect salient elements in the signal that could be expected during imagination as well.
A recent meta-analysis by Schaefer et al. (2013) summarized evidence that EEG is capable of detecting brain activity during the imagination of music. Most notably, encouraging preliminary results for recognizing purely imagined music fragments from EEG recordings were reported in Schaefer et al. (2009), where 4 out of 8 participants produced imagery that was classifiable (in a binary comparison) with an accuracy between 70% and 90% after 11 trials.
Another closely related field of research is the reconstruction of auditory stimuli from EEG recordings. Deng et al. (2013) observed that EEG recorded during listening to natural speech contains traces of the speech amplitude envelope. They used ICA and a source localization technique to enhance the strength of this signal and to successfully identify heard sentences. Applying their technique to imagined speech, they reported statistically significant single-sentence classification performance for 2 of 8 subjects, with performance increasing when several sentences were combined for a longer trial duration.
More recently, O'Sullivan et al. (2015) proposed a method for decoding attentional selection in a cocktail party environment from single-trial EEG recordings of approximately one minute in length. In their experiment, 40 subjects were presented with 2 classic works of fiction at the same time (each one to a different ear) for 30 trials. In order to determine which of the 2 stimuli a subject attended to, they reconstructed both stimulus envelopes from the recorded EEG. To this end, they trained two different decoders per trial using a linear regression approach: one to reconstruct the attended stimulus and the other to reconstruct the unattended one. This resulted in 60 decoders per subject. These decoders were then averaged in a leave-one-out cross-validation scheme. During testing, each decoder would predict the stimulus with the best reconstruction from the EEG, using the Pearson correlation of the envelopes as a measure of quality. Using subject-specific decoders averaged from 29 training trials, the prediction of the attended stimulus decoder was correct for 89% of the trials, whereas the mean accuracy of the unattended stimulus decoder was 78.9%. Alternatively, using a grand-average decoding method that combined the decoders from every other subject and every other trial, they obtained mean accuracies of 82% and 75%, respectively.
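The linear stimulus-reconstruction strategy described above can be summarized in a short sketch: a ridge-regression decoder maps time-lagged EEG channels to the stimulus amplitude envelope, and competing candidate stimuli are ranked by the Pearson correlation between their envelopes and the reconstruction. This is a simplified illustration of the general approach, not the cited authors' implementation; the array shapes, the lag range and the regularization strength are assumptions.

# Minimal stimulus-reconstruction sketch (illustrative; not the cited implementations).
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

def lagged_features(eeg, max_lag):
    """Stack time-lagged copies of each channel; eeg has shape (n_samples, n_channels)."""
    n, c = eeg.shape
    feats = np.zeros((n, c * (max_lag + 1)))
    for lag in range(max_lag + 1):
        feats[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
    return feats

def train_decoder(train_eeg, train_envelope, max_lag=25, alpha=1.0):
    """Fit a decoder on time-aligned EEG and stimulus envelope from training data."""
    return Ridge(alpha=alpha).fit(lagged_features(train_eeg, max_lag), train_envelope)

def which_stimulus(decoder, test_eeg, envelope_a, envelope_b, max_lag=25):
    """Decide which of two candidate envelopes a single EEG trial tracks more closely."""
    reconstruction = decoder.predict(lagged_features(test_eeg, max_lag))
    r_a, _ = pearsonr(reconstruction, envelope_a)
    r_b, _ = pearsonr(reconstruction, envelope_b)
    return "A" if r_a > r_b else "B"

In an attended-versus-unattended setup like the one described above, one such decoder would be trained per trial and the resulting decoders averaged over the remaining trials in a leave-one-out scheme before testing.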
[ "23787338", "18254699", "24650597", "23297754", "19673759", "22302818", "19100973", "24431986", "15178179", "24239590", "22360595", "23558170", "20192565", "19673755", "9950738", "19081384", "21945275", "21753000", "23223281", "24429136", "23298753", "20541612", "15922164", "21353631" ]
[ { "pmid": "23787338", "title": "Representation learning: a review and new perspectives.", "abstract": "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning." }, { "pmid": "18254699", "title": "Representational power of restricted boltzmann machines and deep belief networks.", "abstract": "Deep belief networks (DBN) are generative neural network models with many layers of hidden explanatory factors, recently introduced by Hinton, Osindero, and Teh (2006) along with a greedy layer-wise unsupervised learning algorithm. The building block of a DBN is a probabilistic model called a restricted Boltzmann machine (RBM), used to represent one layer of the model. Restricted Boltzmann machines are interesting because inference is easy in them and because they have been successfully used as building blocks for training deeper models. We first prove that adding hidden units yields strictly improved modeling power, while a second theorem shows that RBMs are universal approximators of discrete distributions. We then study the question of whether DBNs with more layers are strictly more powerful in terms of representational power. This suggests a new and less greedy criterion for training RBMs within DBNs." }, { "pmid": "24650597", "title": "Neural portraits of perception: reconstructing face images from evoked brain activity.", "abstract": "Recent neuroimaging advances have allowed visual experience to be reconstructed from patterns of brain activity. While neural reconstructions have ranged in complexity, they have relied almost exclusively on retinotopic mappings between visual input and activity in early visual cortex. However, subjective perceptual information is tied more closely to higher-level cortical regions that have not yet been used as the primary basis for neural reconstructions. Furthermore, no reconstruction studies to date have reported reconstructions of face images, which activate a highly distributed cortical network. Thus, we investigated (a) whether individual face images could be accurately reconstructed from distributed patterns of neural activity, and (b) whether this could be achieved even when excluding activity within occipital cortex. Our approach involved four steps. (1) Principal component analysis (PCA) was used to identify components that efficiently represented a set of training faces. (2) The identified components were then mapped, using a machine learning algorithm, to fMRI activity collected during viewing of the training faces. (3) Based on activity elicited by a new set of test faces, the algorithm predicted associated component scores. 
(4) Finally, these scores were transformed into reconstructed images. Using both objective and subjective validation measures, we show that our methods yield strikingly accurate neural reconstructions of faces even when excluding occipital cortex. This methodology not only represents a novel and promising approach for investigating face perception, but also suggests avenues for reconstructing 'offline' visual experiences-including dreams, memories, and imagination-which are chiefly represented in higher-level cortical areas." }, { "pmid": "23297754", "title": "Reducing and meta-analysing estimates from distributed lag non-linear models.", "abstract": "BACKGROUND\nThe two-stage time series design represents a powerful analytical tool in environmental epidemiology. Recently, models for both stages have been extended with the development of distributed lag non-linear models (DLNMs), a methodology for investigating simultaneously non-linear and lagged relationships, and multivariate meta-analysis, a methodology to pool estimates of multi-parameter associations. However, the application of both methods in two-stage analyses is prevented by the high-dimensional definition of DLNMs.\n\n\nMETHODS\nIn this contribution we propose a method to synthesize DLNMs to simpler summaries, expressed by a reduced set of parameters of one-dimensional functions, which are compatible with current multivariate meta-analytical techniques. The methodology and modelling framework are implemented in R through the packages dlnm and mvmeta.\n\n\nRESULTS\nAs an illustrative application, the method is adopted for the two-stage time series analysis of temperature-mortality associations using data from 10 regions in England and Wales. R code and data are available as supplementary online material.\n\n\nDISCUSSION AND CONCLUSIONS\nThe methodology proposed here extends the use of DLNMs in two-stage analyses, obtaining meta-analytical estimates of easily interpretable summaries from complex non-linear and delayed associations. The approach relaxes the assumptions and avoids simplifications required by simpler modelling approaches." }, { "pmid": "19673759", "title": "Beta and gamma rhythms in human auditory cortex during musical beat processing.", "abstract": "We examined beta- (approximately 20 Hz) and gamma- (approximately 40 Hz) band activity in auditory cortices by means of magnetoencephalography (MEG) during passive listening to a regular musical beat with occasional omission of single tones. The beta activity decreased after each tone, followed by an increase, thus forming a periodic modulation synchronized with the stimulus. The beta decrease was absent after omissions. In contrast, gamma-band activity showed a peak after tone and omission, suggesting underlying endogenous anticipatory processes. We propose that auditory beta and gamma oscillations have different roles in musical beat encoding and auditory-motor interaction." }, { "pmid": "22302818", "title": "Internalized timing of isochronous sounds is represented in neuromagnetic β oscillations.", "abstract": "Moving in synchrony with an auditory rhythm requires predictive action based on neurodynamic representation of temporal information. Although it is known that a regular auditory rhythm can facilitate rhythmic movement, the neural mechanisms underlying this phenomenon remain poorly understood. In this experiment using human magnetoencephalography, 12 young healthy adults listened passively to an isochronous auditory rhythm without producing rhythmic movement. 
We hypothesized that the dynamics of neuromagnetic beta-band oscillations (~20 Hz)-which are known to reflect changes in an active status of sensorimotor functions-would show modulations in both power and phase-coherence related to the rate of the auditory rhythm across both auditory and motor systems. Despite the absence of an intention to move, modulation of beta amplitude as well as changes in cortico-cortical coherence followed the tempo of sound stimulation in auditory cortices and motor-related areas including the sensorimotor cortex, inferior-frontal gyrus, supplementary motor area, and the cerebellum. The time course of beta decrease after stimulus onset was consistent regardless of the rate or regularity of the stimulus, but the time course of the following beta rebound depended on the stimulus rate only in the regular stimulus conditions such that the beta amplitude reached its maximum just before the occurrence of the next sound. Our results suggest that the time course of beta modulation provides a mechanism for maintaining predictive timing, that beta oscillations reflect functional coordination between auditory and motor systems, and that coherence in beta oscillations dynamically configure the sensorimotor networks for auditory-motor coupling." }, { "pmid": "19100973", "title": "Early electrophysiological correlates of meter and rhythm processing in music perception.", "abstract": "The two main characteristics of temporal structuring in music are meter and rhythm. The present experiment investigated the event-related potentials (ERP) of these two structural elements with a focus on differential effects of attended and unattended processing. The stimulus material consisted of an auditory rhythm presented repetitively to subjects in which metrical and rhythmical changes as well as pitch changes were inserted. Subjects were to detect and categorize either temporal changes (attended condition) or pitch changes (unattended condition). Furthermore, we compared a group of long-term trained subjects (musicians) to non-musicians. As expected, behavioural data revealed that trained subjects performed significantly better than untrained subjects. This effect was mainly due to the better detection of the meter deviants. Rhythm as well as meter changes elicited an early negative deflection compared to standard tones in the attended processing condition, while in the unattended processing condition only the rhythm change elicited this negative deflection. Both effects were found across all experimental subjects with no difference between the two groups. Thus, our data suggest that meter and rhythm perception could differ with respect to the time course of processing and lend credence to the notion of different neurophysiological processes underlying the auditory perception of rhythm and meter in music. Furthermore, the data indicate that non-musicians are as proficient as musicians when it comes to rhythm perception, suggesting that correct rhythm perception is crucial not only for musicians but for every individual." }, { "pmid": "24431986", "title": "MEG and EEG data analysis with MNE-Python.", "abstract": "Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals generated by neuronal activity in the brain. Using these signals to characterize and locate neural activation in the brain is a challenge that requires expertise in physics, signal processing, statistics, and numerical methods. 
As part of the MNE software suite, MNE-Python is an open-source software package that addresses this challenge by providing state-of-the-art algorithms implemented in Python that cover multiple methods of data preprocessing, source localization, statistical analysis, and estimation of functional connectivity between distributed brain regions. All algorithms and utility functions are implemented in a consistent manner with well-documented interfaces, enabling users to create M/EEG data analysis pipelines by writing Python scripts. Moreover, MNE-Python is tightly integrated with the core Python libraries for scientific comptutation (NumPy, SciPy) and visualization (matplotlib and Mayavi), as well as the greater neuroimaging ecosystem in Python via the Nibabel package. The code is provided under the new BSD license allowing code reuse, even in commercial products. Although MNE-Python has only been under heavy development for a couple of years, it has rapidly evolved with expanded analysis capabilities and pedagogical tutorials because multiple labs have collaborated during code development to help share best practices. MNE-Python also gives easy access to preprocessed datasets, helping users to get started quickly and facilitating reproducibility of methods by other researchers. Full documentation, including dozens of examples, is available at http://martinos.org/mne." }, { "pmid": "15178179", "title": "Behavioral and neural correlates of perceived and imagined musical timbre.", "abstract": "The generality of findings implicating secondary auditory areas in auditory imagery was tested by using a timbre imagery task with fMRI. Another aim was to test whether activity in supplementary motor area (SMA) seen in prior studies might have been related to subvocalization. Participants with moderate musical background were scanned while making similarity judgments about the timbre of heard or imagined musical instrument sounds. The critical control condition was a visual imagery task. The pattern of judgments in perceived and imagined conditions was similar, suggesting that perception and imagery access similar cognitive representations of timbre. As expected, judgments of heard timbres, relative to the visual imagery control, activated primary and secondary auditory areas with some right-sided asymmetry. Timbre imagery also activated secondary auditory areas relative to the visual imagery control, although less strongly, in accord with previous data. Significant overlap was observed in these regions between perceptual and imagery conditions. Because the visual control task resulted in deactivation of auditory areas relative to a silent baseline, we interpret the timbre imagery effect as a reversal of that deactivation. Despite the lack of an obvious subvocalization component to timbre imagery, some activity in SMA was observed, suggesting that SMA may have a more general role in imagery beyond any motor component." }, { "pmid": "24239590", "title": "On the interpretation of weight vectors of linear models in multivariate neuroimaging.", "abstract": "The increase in spatiotemporal resolution of neuroimaging devices is accompanied by a trend towards more powerful multivariate analysis methods. Often it is desired to interpret the outcome of these methods with respect to the cognitive processes under study. 
Here we discuss which methods allow for such interpretations, and provide guidelines for choosing an appropriate analysis for a given experimental goal: For a surgeon who needs to decide where to remove brain tissue it is most important to determine the origin of cognitive functions and associated neural processes. In contrast, when communicating with paralyzed or comatose patients via brain-computer interfaces, it is most important to accurately extract the neural processes specific to a certain mental state. These equally important but complementary objectives require different analysis methods. Determining the origin of neural processes in time or space from the parameters of a data-driven model requires what we call a forward model of the data; such a model explains how the measured data was generated from the neural sources. Examples are general linear models (GLMs). Methods for the extraction of neural information from data can be considered as backward models, as they attempt to reverse the data generating process. Examples are multivariate classifiers. Here we demonstrate that the parameters of forward models are neurophysiologically interpretable in the sense that significant nonzero weights are only observed at channels the activity of which is related to the brain process under study. In contrast, the interpretation of backward model parameters can lead to wrong conclusions regarding the spatial or temporal origin of the neural signals of interest, since significant nonzero weights may also be observed at channels the activity of which is statistically independent of the brain process under study. As a remedy for the linear case, we propose a procedure for transforming backward models into forward models. This procedure enables the neurophysiological interpretation of the parameters of linear backward models. We hope that this work raises awareness for an often encountered problem and provides a theoretical basis for conducting better interpretable multivariate neuroimaging analyses." }, { "pmid": "22360595", "title": "Neuronal correlates of perception, imagery, and memory for familiar tunes.", "abstract": "We used fMRI to investigate the neuronal correlates of encoding and recognizing heard and imagined melodies. Ten participants were shown lyrics of familiar verbal tunes; they either heard the tune along with the lyrics, or they had to imagine it. In a subsequent surprise recognition test, they had to identify the titles of tunes that they had heard or imagined earlier. The functional data showed substantial overlap during melody perception and imagery, including secondary auditory areas. During imagery compared with perception, an extended network including pFC, SMA, intraparietal sulcus, and cerebellum showed increased activity, in line with the increased processing demands of imagery. Functional connectivity of anterior right temporal cortex with frontal areas was increased during imagery compared with perception, indicating that these areas form an imagery-related network. Activity in right superior temporal gyrus and pFC was correlated with the subjective rating of imagery vividness. Similar to the encoding phase, the recognition task recruited overlapping areas, including inferior frontal cortex associated with memory retrieval, as well as left middle temporal gyrus. 
The results present new evidence for the cortical network underlying goal-directed auditory imagery, with a prominent role of the right pFC both for the subjective impression of imagery vividness and for on-line mental monitoring of imagery-related activity in auditory areas." }, { "pmid": "23558170", "title": "Neural decoding of visual imagery during sleep.", "abstract": "Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement." }, { "pmid": "20192565", "title": "Auditory imagery: empirical findings.", "abstract": "The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear)." }, { "pmid": "19673755", "title": "Top-down control of rhythm perception modulates early auditory responses.", "abstract": "Our perceptions are shaped by both extrinsic stimuli and intrinsic interpretation. The perceptual experience of a simple rhythm, for example, depends upon its metrical interpretation (where one hears the beat). Such interpretation can be altered at will, providing a model to study the interaction of endogenous and exogenous influences in the cognitive organization of perception. Using magnetoencephalography (MEG), we measured brain responses evoked by a repeating, rhythmically ambiguous phrase (two tones followed by a rest). In separate trials listeners were instructed to impose different metrical organizations on the rhythm by mentally placing the downbeat on either the first or the second tone. 
Since the stimulus was invariant, differences in brain activity between the two conditions should relate to endogenous metrical interpretation. Metrical interpretation influenced early evoked neural responses to tones, specifically in the upper beta range (20-30 Hz). Beta response was stronger (by 64% on average) when a tone was imagined to be the beat, compared to when it was not. A second experiment established that the beta increase closely resembles that due to physical accents, and thus may represent the genesis of a subjective accent. The results demonstrate endogenous modulation of early auditory responses, and suggest a unique role for the beta band in linking of endogenous and exogenous processing. Given the suggested role of beta in motor processing and long-range intracortical coordination, it is hypothesized that the motor system influences metrical interpretation of sound, even in the absence of overt movement." }, { "pmid": "9950738", "title": "Independent component analysis using an extended infomax algorithm for mixed subgaussian and supergaussian sources.", "abstract": "An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able blindly to separate mixed signals with sub- and supergaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and supergaussian regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient by Amari (1998), and uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and supergaussian regimes. We demonstrate that the extended infomax algorithm is able to separate 20 sources with a variety of source distributions easily. Applied to high-dimensional data from electroencephalographic recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker electrical signals that arise from sources in the brain." }, { "pmid": "19081384", "title": "Visual image reconstruction from human brain activity using a combination of multiscale local image decoders.", "abstract": "Perceptual experience consists of an enormous number of possible states. Previous fMRI studies have predicted a perceptual state by classifying brain activity into prespecified categories. Constraint-free visual image reconstruction is more challenging, as it is impractical to specify brain activity for all possible images. In this study, we reconstructed visual images by combining local image bases of multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binary-contrast, 10 x 10-patch images (2(100) possible states) were accurately reconstructed without any image prior on a single trial or volume basis by measuring brain activity only for several hundred random images. Reconstruction was also used to identify the presented image among millions of candidates. The results suggest that our approach provides an effective means to read out complex perceptual states from brain activity while discovering information representation in multivoxel patterns." 
}, { "pmid": "21945275", "title": "Reconstructing visual experiences from brain activity evoked by natural movies.", "abstract": "Quantitative modeling of human brain activity can provide crucial insights about cortical representations [1, 2] and can form the basis for brain decoding devices [3-5]. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity [6-8]. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow [9], so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy [10, 11] encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology." }, { "pmid": "21753000", "title": "Tagging the neuronal entrainment to beat and meter.", "abstract": "Feeling the beat and meter is fundamental to the experience of music. However, how these periodicities are represented in the brain remains largely unknown. Here, we test whether this function emerges from the entrainment of neurons resonating to the beat and meter. We recorded the electroencephalogram while participants listened to a musical beat and imagined a binary or a ternary meter on this beat (i.e., a march or a waltz). We found that the beat elicits a sustained periodic EEG response tuned to the beat frequency. Most importantly, we found that meter imagery elicits an additional frequency tuned to the corresponding metric interpretation of this beat. These results provide compelling evidence that neural entrainment to beat and meter can be captured directly in the electroencephalogram. More generally, our results suggest that music constitutes a unique context to explore entrainment phenomena in dynamic cognitive processing at the level of neural networks." }, { "pmid": "23223281", "title": "Selective neuronal entrainment to the beat and meter embedded in a musical rhythm.", "abstract": "Fundamental to the experience of music, beat and meter perception refers to the perception of periodicities while listening to music occurring within the frequency range of musical tempo. Here, we explored the spontaneous building of beat and meter hypothesized to emerge from the selective entrainment of neuronal populations at beat and meter frequencies. The electroencephalogram (EEG) was recorded while human participants listened to rhythms consisting of short sounds alternating with silences to induce a spontaneous perception of beat and meter. We found that the rhythmic stimuli elicited multiple steady state-evoked potentials (SS-EPs) observed in the EEG spectrum at frequencies corresponding to the rhythmic pattern envelope. 
Most importantly, the amplitude of the SS-EPs obtained at beat and meter frequencies were selectively enhanced even though the acoustic energy was not necessarily predominant at these frequencies. Furthermore, accelerating the tempo of the rhythmic stimuli so as to move away from the range of frequencies at which beats are usually perceived impaired the selective enhancement of SS-EPs at these frequencies. The observation that beat- and meter-related SS-EPs are selectively enhanced at frequencies compatible with beat and meter perception indicates that these responses do not merely reflect the physical structure of the sound envelope but, instead, reflect the spontaneous emergence of an internal representation of beat, possibly through a mechanism of selective neuronal entrainment within a resonance frequency range. Taken together, these results suggest that musical rhythms constitute a unique context to gain insight on general mechanisms of entrainment, from the neuronal level to individual level." }, { "pmid": "24429136", "title": "Attentional Selection in a Cocktail Party Environment Can Be Decoded from Single-Trial EEG.", "abstract": "How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus-reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus it would be extremely useful for research in many populations if stimulus-reconstruction was effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain-computer interfaces." }, { "pmid": "23298753", "title": "Shared processing of perception and imagery of music in decomposed EEG.", "abstract": "The current work investigates the brain activation shared between perception and imagery of music as measured with electroencephalography (EEG). Meta-analyses of four separate EEG experiments are presented, each focusing on perception and imagination of musical sound, with differing levels of stimulus complexity. Imagination and perception of simple accented metronome trains, as manifested in the clock illusion, as well as monophonic melodies are discussed, as well as more complex rhythmic patterns and ecologically natural music stimuli. By decomposing the data with principal component analysis (PCA), similar component distributions are found to explain most of the variance in each experiment. 
All data sets show a fronto-central and a more central component as the largest sources of variance, fitting with projections seen for the network of areas contributing to the N1/P2 complex. We expanded on these results using tensor decomposition. This allows us to add in the tasks to find shared activation, but does not make assumptions of independence or orthogonality and calculates the relative strengths of these components for each task. The components found in the PCA were shown to be further decomposable into parts that load primarily on to the perception or imagery task, or both, thereby adding more detail. It is shown that the frontal and central components have multiple parts that are differentially active during perception and imagination. A number of possible interpretations of these results are discussed, taking into account the different stimulus materials and measurement conditions." }, { "pmid": "20541612", "title": "Name that tune: decoding music from the listening brain.", "abstract": "In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different musical fragments of about three seconds, both individually and cross-participants, using only time domain information (the event-related potential, ERP). The best individual results are 70% correct in a seven-class problem while using single trials, and when using multiple trials we achieve 100% correct after six presentations of the stimulus. When classifying across participants, a maximum rate of 53% was reached, supporting a general representation of each musical fragment over participants. While for some music stimuli the amplitude envelope correlated well with the ERP, this was not true for all stimuli. Aspects of the stimulus that may contribute to the differences between the EEG responses to the pieces of music are discussed." }, { "pmid": "15922164", "title": "Gamma-band activity reflects the metric structure of rhythmic tone sequences.", "abstract": "Relatively little is known about the dynamics of auditory cortical rhythm processing using non-invasive methods, partly because resolving responses to events in patterns is difficult using long-latency auditory neuroelectric responses. We studied the relationship between short-latency gamma-band (20-60 Hz) activity (GBA) and the structure of rhythmic tone sequences. We show that induced (non-phase-locked) GBA predicts tone onsets and persists when expected tones are omitted. Evoked (phase-locked) GBA occurs in response to tone onsets with approximately 50 ms latency, and is strongly diminished during tone omissions. These properties of auditory GBA correspond with perception of meter in acoustic sequences and provide evidence for the dynamic allocation of attention to temporally structured auditory sequences." }, { "pmid": "21353631", "title": "Shared mechanisms in perception and imagery of auditory accents.", "abstract": "OBJECTIVE\nAn auditory rhythm can be perceived as a sequence of accented (loud) and non-accented (soft) beats or it can be imagined. Subjective rhythmization refers to the induction of accenting patterns during the presentation of identical auditory pulses at an isochronous rate. It can be an automatic process, but it can also be voluntarily controlled. 
We investigated whether imagined accents can be decoded from brain signals on a single-trial basis, and if there is information shared between perception and imagery in the contrast of accents and non-accents.\n\n\nMETHODS\nTen subjects perceived and imagined three different metric patterns (two-, three-, and four-beat) superimposed on a steady metronome while electroencephalography (EEG) measurements were made. Shared information between perception and imagery EEG is investigated by means of principal component analysis and by means of single-trial classification.\n\n\nRESULTS\nClassification of accented from non-accented beats was possible with an average accuracy of 70% for perception and 61% for imagery data. Cross-condition classification yielded significant performance above chance level for a classifier trained on perception and tested on imagery data (up to 66%), and vice versa (up to 60%).\n\n\nCONCLUSIONS\nResults show that detection of imagined accents is possible and reveal similarity in brain signatures relevant to distinction of accents from non-accents in perception and imagery.\n\n\nSIGNIFICANCE\nOur results support the idea of shared mechanisms in perception and imagery for auditory processing. This is relevant for a number of clinical settings, most notably by elucidating the basic mechanisms of rhythmic auditory cuing paradigms, e.g. as used in motor rehabilitation or therapy for Parkinson's disease. As a novel Brain-Computer Interface (BCI) paradigm, our results imply a reduction of the necessary BCI training in healthy subjects and in patients." } ]
BMC Medical Informatics and Decision Making
28789686
PMC5549299
10.1186/s12911-017-0512-7
Developing a cardiovascular disease risk factor annotated corpus of Chinese electronic medical records
Background: Cardiovascular disease (CVD) has become the leading cause of death in China, and most of the cases can be prevented by controlling risk factors. The goal of this study was to build a corpus of CVD risk factor annotations based on Chinese electronic medical records (CEMRs). This corpus is intended to be used to develop a risk factor information extraction system that, in turn, can be applied as a foundation for the further study of the progress of risk factors and CVD.
Results: We designed a light annotation task to capture CVD risk factors with indicators, temporal attributes and assertions that were explicitly or implicitly displayed in the records. The task included: 1) preparing data; 2) creating guidelines for capturing annotations (these were created with the help of clinicians); 3) proposing an annotation method including building the guidelines draft, training the annotators and updating the guidelines, and corpus construction. Meanwhile, we proposed some creative annotation guidelines: (1) the under-threshold medical examination values were annotated for our purpose of studying the progress of risk factors and CVD; (2) possible and negative risk factors were concerned for the same reason, and we created assertions for annotations; (3) we added four temporal attributes to CVD risk factors in CEMRs for constructing long term variations. Then, a risk factor annotated corpus based on de-identified discharge summaries and progress notes from 600 patients was developed. Built with the help of clinicians, this corpus has an inter-annotator agreement (IAA) F1-measure of 0.968, indicating a high reliability.
Conclusion: To the best of our knowledge, this is the first annotated corpus concerning CVD risk factors in CEMRs and the guidelines for capturing CVD risk factor annotations from CEMRs were proposed. The obtained document-level annotations can be applied in future studies to monitor risk factors and CVD over the long term.
Related works based on CEMRs
Wang et al. [16] focused on recognizing and normalizing the names of symptoms in traditional Chinese medicine EMRs. To make these judgements, their system used a set of manually annotated clinical symptom names. Jiang et al. [14] proposed a complete annotation scheme for building a corpus of word segmentation and part-of-speech (POS) annotations from CEMRs. Yang et al. [11] focused on designing an annotation scheme and constructing a corpus of named entities and entity relationships from CEMRs; they formulated an annotation specification and built a corpus based on 992 medical discharge summaries and progress notes. Lei [17] and Lei et al. [18] focused on recognizing named entities in Chinese medical discharge summaries. They classified the entities into four categories: clinical problems, procedures, labs, and medications. Finally, they annotated an entity corpus based on CEMRs. Xu et al. [19] studied a joint model that performed segmentation and named entity recognition in Chinese discharge summaries and built a set of 336 annotated Chinese discharge summaries. Wang et al. [20] researched the extraction of tumor-related information from Chinese-language operation notes of patients with hepatic carcinomas and annotated a corpus containing 961 entities. He et al. [21] proposed a comprehensive corpus of syntactic and semantic annotations from Chinese clinical texts.
Despite the similar intent of these works, the extraction of CVD risk factors from CEMRs has not yet been studied. Meanwhile, far fewer corpora are accessible for information extraction (IE) tasks in the biomedical field than for general-domain extraction, even though such corpora are essential for building IE systems. Constructing a CVD risk factor annotated corpus is therefore both a necessary and fundamental task. Moreover, unlike annotation tasks for general texts that require less specialized knowledge, annotation in the biomedical field requires linguists to work with the assistance of medical experts.
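For readers unfamiliar with the CRF-based named entity recognition used in several of the studies cited above, the toy sketch below tags each character of a Chinese clinical sentence with a BIO label using a linear-chain CRF. It is a deliberately simplified illustration, not a reproduction of any cited system: the sklearn-crfsuite package, the character-level features, and the two example sentences with their labels are all our own assumptions.

# Toy character-level CRF tagger (illustrative only; requires sklearn-crfsuite).
import sklearn_crfsuite

def char_features(sentence, i):
    """Simple features for the i-th character of a sentence (a plain string)."""
    return {
        "char": sentence[i],
        "is_digit": sentence[i].isdigit(),
        "prev_char": sentence[i - 1] if i > 0 else "<BOS>",
        "next_char": sentence[i + 1] if i + 1 < len(sentence) else "<EOS>",
    }

def to_features(sentences):
    return [[char_features(s, i) for i in range(len(s))] for s in sentences]

# Hypothetical training data: per-character BIO labels, e.g. B-PROBLEM/I-PROBLEM
# for a clinical problem mention and B-LAB/I-LAB for a laboratory test.
train_sentences = ["患者高血压十年", "血糖控制可"]
train_labels = [
    ["O", "O", "B-PROBLEM", "I-PROBLEM", "I-PROBLEM", "O", "O"],
    ["B-LAB", "I-LAB", "O", "O", "O"],
]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(to_features(train_sentences), train_labels)
print(crf.predict(to_features(["患者高血压"])))

A real system of the kind cited above would add much richer features (word segmentation, part-of-speech tags, section information) and train on hundreds of annotated notes rather than two toy sentences.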
[ "24070769", "20089162", "24347408", "23934949", "24486562", "17947624", "19390096", "20819854", "20819855", "21685143", "23564629", "23872518", "26210362", "26004790", "25147248", "15684123" ]
[ { "pmid": "24070769", "title": "Supervised methods for symptom name recognition in free-text clinical records of traditional Chinese medicine: an empirical study.", "abstract": "Clinical records of traditional Chinese medicine (TCM) are documented by TCM doctors during their routine diagnostic work. These records contain abundant knowledge and reflect the clinical experience of TCM doctors. In recent years, with the modernization of TCM clinical practice, these clinical records have begun to be digitized. Data mining (DM) and machine learning (ML) methods provide an opportunity for researchers to discover TCM regularities buried in the large volume of clinical records. There has been some work on this problem. Existing methods have been validated on a limited amount of manually well-structured data. However, the contents of most fields in the clinical records are unstructured. As a result, the previous methods verified on the well-structured data will not work effectively on the free-text clinical records (FCRs), and the FCRs are, consequently, required to be structured in advance. Manually structuring the large volume of TCM FCRs is time-consuming and labor-intensive, but the development of automatic methods for the structuring task is at an early stage. Therefore, in this paper, symptom name recognition (SNR) in the chief complaints, which is one of the important tasks to structure the FCRs of TCM, is carefully studied. The SNR task is reasonably treated as a sequence labeling problem, and several fundamental and practical problems in the SNR task are studied, such as how to adapt a general sequence labeling strategy for the SNR task according to the domain-specific characteristics of the chief complaints and which sequence classifier is more appropriate to solve the SNR task. To answer these questions, a series of elaborate experiments were performed, and the results are explained in detail." }, { "pmid": "20089162", "title": "Automatic symptom name normalization in clinical records of traditional Chinese medicine.", "abstract": "BACKGROUND\nIn recent years, Data Mining technology has been applied more than ever before in the field of traditional Chinese medicine (TCM) to discover regularities from the experience accumulated in the past thousands of years in China. Electronic medical records (or clinical records) of TCM, containing larger amount of information than well-structured data of prescriptions extracted manually from TCM literature such as information related to medical treatment process, could be an important source for discovering valuable regularities of TCM. However, they are collected by TCM doctors on a day to day basis without the support of authoritative editorial board, and owing to different experience and background of TCM doctors, the same concept might be described in several different terms. Therefore, clinical records of TCM cannot be used directly to Data Mining and Knowledge Discovery. 
This paper focuses its attention on the phenomena of \"one symptom with different names\" and investigates a series of metrics for automatically normalizing symptom names in clinical records of TCM.\n\n\nRESULTS\nA series of extensive experiments were performed to validate the metrics proposed, and they have shown that the hybrid similarity metrics integrating literal similarity and remedy-based similarity are more accurate than the others which are based on literal similarity or remedy-based similarity alone, and the highest F-Measure (65.62%) of all the metrics is achieved by hybrid similarity metric VSM+TFIDF+SWD.\n\n\nCONCLUSIONS\nAutomatic symptom name normalization is an essential task for discovering knowledge from clinical data of TCM. The problem is introduced for the first time by this paper. The results have verified that the investigated metrics are reasonable and accurate, and the hybrid similarity metrics are much better than the metrics based on literal similarity or remedy-based similarity alone." }, { "pmid": "24347408", "title": "A comprehensive study of named entity recognition in Chinese clinical text.", "abstract": "OBJECTIVE\nNamed entity recognition (NER) is one of the fundamental tasks in natural language processing. In the medical domain, there have been a number of studies on NER in English clinical notes; however, very limited NER research has been carried out on clinical notes written in Chinese. The goal of this study was to systematically investigate features and machine learning algorithms for NER in Chinese clinical text.\n\n\nMATERIALS AND METHODS\nWe randomly selected 400 admission notes and 400 discharge summaries from Peking Union Medical College Hospital in China. For each note, four types of entity-clinical problems, procedures, laboratory test, and medications-were annotated according to a predefined guideline. Two-thirds of the 400 notes were used to train the NER systems and one-third for testing. We investigated the effects of different types of feature including bag-of-characters, word segmentation, part-of-speech, and section information, and different machine learning algorithms including conditional random fields (CRF), support vector machines (SVM), maximum entropy (ME), and structural SVM (SSVM) on the Chinese clinical NER task. All classifiers were trained on the training dataset and evaluated on the test set, and micro-averaged precision, recall, and F-measure were reported.\n\n\nRESULTS\nOur evaluation on the independent test set showed that most types of feature were beneficial to Chinese NER systems, although the improvements were limited. The system achieved the highest performance by combining word segmentation and section information, indicating that these two types of feature complement each other. When the same types of optimized feature were used, CRF and SSVM outperformed SVM and ME. More specifically, SSVM achieved the highest performance of the four algorithms, with F-measures of 93.51% and 90.01% for admission notes and discharge summaries, respectively." 
}, { "pmid": "23934949", "title": "Joint segmentation and named entity recognition using dual decomposition in Chinese discharge summaries.", "abstract": "OBJECTIVE\nIn this paper, we focus on three aspects: (1) to annotate a set of standard corpus in Chinese discharge summaries; (2) to perform word segmentation and named entity recognition in the above corpus; (3) to build a joint model that performs word segmentation and named entity recognition.\n\n\nDESIGN\nTwo independent systems of word segmentation and named entity recognition were built based on conditional random field models. In the field of natural language processing, while most approaches use a single model to predict outputs, many works have proved that performance of many tasks can be improved by exploiting combined techniques. Therefore, in this paper, we proposed a joint model using dual decomposition to perform both the two tasks in order to exploit correlations between the two tasks. Three sets of features were designed to demonstrate the advantage of the joint model we proposed, compared with independent models, incremental models and a joint model trained on combined labels.\n\n\nMEASUREMENTS\nMicro-averaged precision (P), recall (R), and F-measure (F) were used to evaluate results.\n\n\nRESULTS\nThe gold standard corpus is created using 336 Chinese discharge summaries of 71 355 words. The framework using dual decomposition achieved 0.2% improvement for segmentation and 1% improvement for recognition, compared with each of the two tasks alone.\n\n\nCONCLUSIONS\nThe joint model is efficient and effective in both segmentation and recognition compared with the two individual tasks. The model achieved encouraging results, demonstrating the feasibility of the two tasks." }, { "pmid": "24486562", "title": "Extracting important information from Chinese Operation Notes with natural language processing methods.", "abstract": "Extracting information from unstructured clinical narratives is valuable for many clinical applications. Although natural Language Processing (NLP) methods have been profoundly studied in electronic medical records (EMR), few studies have explored NLP in extracting information from Chinese clinical narratives. In this study, we report the development and evaluation of extracting tumor-related information from operation notes of hepatic carcinomas which were written in Chinese. Using 86 operation notes manually annotated by physicians as the training set, we explored both rule-based and supervised machine-learning approaches. Evaluating on unseen 29 operation notes, our best approach yielded 69.6% in precision, 58.3% in recall and 63.5% F-score." }, { "pmid": "17947624", "title": "Identifying patient smoking status from medical discharge records.", "abstract": "The authors organized a Natural Language Processing (NLP) challenge on automatically determining the smoking status of patients from information found in their discharge records. This challenge was issued as a part of the i2b2 (Informatics for Integrating Biology to the Bedside) project, to survey, facilitate, and examine studies in medical language understanding for clinical narratives. This article describes the smoking challenge, details the data and the annotation process, explains the evaluation metrics, discusses the characteristics of the systems developed for the challenge, presents an analysis of the results of received system runs, draws conclusions about the state of the art, and identifies directions for future research. 
A total of 11 teams participated in the smoking challenge. Each team submitted up to three system runs, providing a total of 23 submissions. The submitted system runs were evaluated with microaveraged and macroaveraged precision, recall, and F-measure. The systems submitted to the smoking challenge represented a variety of machine learning and rule-based algorithms. Despite the differences in their approaches to smoking status identification, many of these systems provided good results. There were 12 system runs with microaveraged F-measures above 0.84. Analysis of the results highlighted the fact that discharge summaries express smoking status using a limited number of textual features (e.g., \"smok\", \"tobac\", \"cigar\", Social History, etc.). Many of the effective smoking status identifiers benefit from these features." }, { "pmid": "19390096", "title": "Recognizing obesity and comorbidities in sparse data.", "abstract": "In order to survey, facilitate, and evaluate studies of medical language processing on clinical narratives, i2b2 (Informatics for Integrating Biology to the Bedside) organized its second challenge and workshop. This challenge focused on automatically extracting information on obesity and fifteen of its most common comorbidities from patient discharge summaries. For each patient, obesity and any of the comorbidities could be Present, Absent, or Questionable (i.e., possible) in the patient, or Unmentioned in the discharge summary of the patient. i2b2 provided data for, and invited the development of, automated systems that can classify obesity and its comorbidities into these four classes based on individual discharge summaries. This article refers to obesity and comorbidities as diseases. It refers to the categories Present, Absent, Questionable, and Unmentioned as classes. The task of classifying obesity and its comorbidities is called the Obesity Challenge. The data released by i2b2 was annotated for textual judgments reflecting the explicitly reported information on diseases, and intuitive judgments reflecting medical professionals' reading of the information presented in discharge summaries. There were very few examples of some disease classes in the data. The Obesity Challenge paid particular attention to the performance of systems on these less well-represented classes. A total of 30 teams participated in the Obesity Challenge. Each team was allowed to submit two sets of up to three system runs for evaluation, resulting in a total of 136 submissions. The submissions represented a combination of rule-based and machine learning approaches. Evaluation of system runs shows that the best predictions of textual judgments come from systems that filter the potentially noisy portions of the narratives, project dictionaries of disease names onto the remaining text, apply negation extraction, and process the text through rules. Information on disease-related concepts, such as symptoms and medications, and general medical knowledge help systems infer intuitive judgments on the diseases." }, { "pmid": "20819854", "title": "Extracting medication information from clinical text.", "abstract": "The Third i2b2 Workshop on Natural Language Processing Challenges for Clinical Records focused on the identification of medications, their dosages, modes (routes) of administration, frequencies, durations, and reasons for administration in discharge summaries. This challenge is referred to as the medication challenge. 
For the medication challenge, i2b2 released detailed annotation guidelines along with a set of annotated discharge summaries. Twenty teams representing 23 organizations and nine countries participated in the medication challenge. The teams produced rule-based, machine learning, and hybrid systems targeted to the task. Although rule-based systems dominated the top 10, the best performing system was a hybrid. Of all medication-related fields, durations and reasons were the most difficult for all systems to detect. While medications themselves were identified with better than 0.75 F-measure by all of the top 10 systems, the best F-measure for durations and reasons were 0.525 and 0.459, respectively. State-of-the-art natural language processing systems go a long way toward extracting medication names, dosages, modes, and frequencies. However, they are limited in recognizing duration and reason fields and would benefit from future research." }, { "pmid": "20819855", "title": "Community annotation experiment for ground truth generation for the i2b2 medication challenge.", "abstract": "OBJECTIVE\nWithin the context of the Third i2b2 Workshop on Natural Language Processing Challenges for Clinical Records, the authors (also referred to as 'the i2b2 medication challenge team' or 'the i2b2 team' for short) organized a community annotation experiment.\n\n\nDESIGN\nFor this experiment, the authors released annotation guidelines and a small set of annotated discharge summaries. They asked the participants of the Third i2b2 Workshop to annotate 10 discharge summaries per person; each discharge summary was annotated by two annotators from two different teams, and a third annotator from a third team resolved disagreements.\n\n\nMEASUREMENTS\nIn order to evaluate the reliability of the annotations thus produced, the authors measured community inter-annotator agreement and compared it with the inter-annotator agreement of expert annotators when both the community and the expert annotators generated ground truth based on pooled system outputs. For this purpose, the pool consisted of the three most densely populated automatic annotations of each record. The authors also compared the community inter-annotator agreement with expert inter-annotator agreement when the experts annotated raw records without using the pool. Finally, they measured the quality of the community ground truth by comparing it with the expert ground truth.\n\n\nRESULTS AND CONCLUSIONS\nThe authors found that the community annotators achieved comparable inter-annotator agreement to expert annotators, regardless of whether the experts annotated from the pool. Furthermore, the ground truth generated by the community obtained F-measures above 0.90 against the ground truth of the experts, indicating the value of the community as a source of high-quality ground truth even on intricate and domain-specific annotation tasks." }, { "pmid": "21685143", "title": "2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text.", "abstract": "The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. 
i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate." }, { "pmid": "23564629", "title": "Evaluating temporal relations in clinical text: 2012 i2b2 Challenge.", "abstract": "BACKGROUND\nThe Sixth Informatics for Integrating Biology and the Bedside (i2b2) Natural Language Processing Challenge for Clinical Records focused on the temporal relations in clinical narratives. The organizers provided the research community with a corpus of discharge summaries annotated with temporal information, to be used for the development and evaluation of temporal reasoning systems. 18 teams from around the world participated in the challenge. During the workshop, participating teams presented comprehensive reviews and analysis of their systems, and outlined future research directions suggested by the challenge contributions.\n\n\nMETHODS\nThe challenge evaluated systems on the information extraction tasks that targeted: (1) clinically significant events, including both clinical concepts such as problems, tests, treatments, and clinical departments, and events relevant to the patient's clinical timeline, such as admissions, transfers between departments, etc; (2) temporal expressions, referring to the dates, times, durations, or frequencies phrases in the clinical text. The values of the extracted temporal expressions had to be normalized to an ISO specification standard; and (3) temporal relations, between the clinical events and temporal expressions. Participants determined pairs of events and temporal expressions that exhibited a temporal relation, and identified the temporal relation between them.\n\n\nRESULTS\nFor event detection, statistical machine learning (ML) methods consistently showed superior performance. While ML and rule based methods seemed to detect temporal expressions equally well, the best systems overwhelmingly adopted a rule based approach for value normalization. For temporal relation classification, the systems using hybrid approaches that combined ML and heuristics based methods produced the best results." }, { "pmid": "23872518", "title": "Annotating temporal information in clinical narratives.", "abstract": "Temporal information in clinical narratives plays an important role in patients' diagnosis, treatment and prognosis. In order to represent narrative information accurately, medical natural language processing (MLP) systems need to correctly identify and interpret temporal information. To promote research in this area, the Informatics for Integrating Biology and the Bedside (i2b2) project developed a temporally annotated corpus of clinical narratives. This corpus contains 310 de-identified discharge summaries, with annotations of clinical events, temporal expressions and temporal relations. This paper describes the process followed for the development of this corpus and discusses annotation guideline development, annotation methodology, and corpus quality." 
}, { "pmid": "26210362", "title": "Identifying risk factors for heart disease over time: Overview of 2014 i2b2/UTHealth shared task Track 2.", "abstract": "The second track of the 2014 i2b2/UTHealth natural language processing shared task focused on identifying medical risk factors related to Coronary Artery Disease (CAD) in the narratives of longitudinal medical records of diabetic patients. The risk factors included hypertension, hyperlipidemia, obesity, smoking status, and family history, as well as diabetes and CAD, and indicators that suggest the presence of those diseases. In addition to identifying the risk factors, this track of the 2014 i2b2/UTHealth shared task studied the presence and progression of the risk factors in longitudinal medical records. Twenty teams participated in this track, and submitted 49 system runs for evaluation. Six of the top 10 teams achieved F1 scores over 0.90, and all 10 scored over 0.87. The most successful system used a combination of additional annotations, external lexicons, hand-written rules and Support Vector Machines. The results of this track indicate that identification of risk factors and their progression over time is well within the reach of automated systems." }, { "pmid": "26004790", "title": "Annotating risk factors for heart disease in clinical narratives for diabetic patients.", "abstract": "The 2014 i2b2/UTHealth natural language processing shared task featured a track focused on identifying risk factors for heart disease (specifically, Cardiac Artery Disease) in clinical narratives. For this track, we used a \"light\" annotation paradigm to annotate a set of 1304 longitudinal medical records describing 296 patients for risk factors and the times they were present. We designed the annotation task for this track with the goal of balancing annotation load and time with quality, so as to generate a gold standard corpus that can benefit a clinically-relevant task. We applied light annotation procedures and determined the gold standard using majority voting. On average, the agreement of annotators with the gold standard was above 0.95, indicating high reliability. The resulting document-level annotations generated for each record in each longitudinal EMR in this corpus provide information that can support studies of progression of heart disease risk factors in the included patients over time. These annotations were used in the Risk Factor track of the 2014 i2b2/UTHealth shared task. Participating systems achieved a mean micro-averaged F1 measure of 0.815 and a maximum F1 measure of 0.928 for identifying these risk factors in patient records." }, { "pmid": "25147248", "title": "Evaluating the state of the art in disorder recognition and normalization of the clinical narrative.", "abstract": "OBJECTIVE\nThe ShARe/CLEF eHealth 2013 Evaluation Lab Task 1 was organized to evaluate the state of the art on the clinical text in (i) disorder mention identification/recognition based on Unified Medical Language System (UMLS) definition (Task 1a) and (ii) disorder mention normalization to an ontology (Task 1b). Such a community evaluation has not been previously executed. Task 1a included a total of 22 system submissions, and Task 1b included 17. Most of the systems employed a combination of rules and machine learners.\n\n\nMATERIALS AND METHODS\nWe used a subset of the Shared Annotated Resources (ShARe) corpus of annotated clinical text--199 clinical notes for training and 99 for testing (roughly 180 K words in total). 
We provided the community with the annotated gold standard training documents to build systems to identify and normalize disorder mentions. The systems were tested on a held-out gold standard test set to measure their performance.\n\n\nRESULTS\nFor Task 1a, the best-performing system achieved an F1 score of 0.75 (0.80 precision; 0.71 recall). For Task 1b, another system performed best with an accuracy of 0.59.\n\n\nDISCUSSION\nMost of the participating systems used a hybrid approach by supplementing machine-learning algorithms with features generated by rules and gazetteers created from the training data and from external resources.\n\n\nCONCLUSIONS\nThe task of disorder normalization is more challenging than that of identification. The ShARe corpus is available to the community as a reference standard for future studies." }, { "pmid": "15684123", "title": "Agreement, the f-measure, and reliability in information retrieval.", "abstract": "Information retrieval studies that involve searching the Internet or marking phrases usually lack a well-defined number of negative cases. This prevents the use of traditional interrater reliability metrics like the kappa statistic to assess the quality of expert-generated gold standards. Such studies often quantify system performance as precision, recall, and F-measure, or as agreement. It can be shown that the average F-measure among pairs of experts is numerically identical to the average positive specific agreement among experts and that kappa approaches these measures as the number of negative cases grows large. Positive specific agreement-or the equivalent F-measure-may be an appropriate way to quantify interrater reliability and therefore to assess the reliability of a gold standard in these studies." } ]
Frontiers in Genetics
28848600
PMC5552671
10.3389/fgene.2017.00104
A Novel Efficient Graph Model for the Multiple Longest Common Subsequences (MLCS) Problem
Searching for the Multiple Longest Common Subsequences (MLCS) of multiple sequences is a classical NP-hard problem that arises in many applications. One of the most effective exact approaches to the MLCS problem is based on the dominant point graph, a kind of directed acyclic graph (DAG). However, the time and space efficiency of the leading dominant point graph based approaches is still unsatisfactory: constructing the dominant point graph used by these approaches requires a huge amount of time and space, which hinders the application of these approaches to large-scale and long sequences. To address this issue, in this paper we propose a new time and space efficient graph model, called the Leveled-DAG, for the MLCS problem. The Leveled-DAG can promptly eliminate, during construction, all the nodes in the graph that cannot contribute to the construction of the MLCS. At any moment, only the current level and some previously generated nodes in the graph need to be kept in memory, which greatly reduces memory consumption. Moreover, the final graph contains only one node, in which all of the desired MLCS are stored; thus, no additional operations for searching the MLCS are needed. Experiments are conducted on real biological sequences with different numbers and lengths, and the proposed algorithm is compared with three state-of-the-art algorithms. The experimental results show that the time and space needed by the Leveled-DAG approach are smaller than those of the compared algorithms, especially on large-scale and long sequences.
2.1. Preliminaries and related workFirst of all, let Σ denote the alphabet of the sequences, i.e., a finite set of symbols. For example, the alphabet of DNA sequences is Σ = {A, C, G, T}.Definition 1. Let Σ denote the alphabet and s = c1c2…cn be a sequence of length n with each symbol ci ∈ Σ, for i = 1, 2, ⋯ , n. The i-th symbol of s is denoted by s[i] = ci. If a sequence s′ is obtained by deleting zero or more symbols (not necessarily consecutive) from s, i.e., s′ = ci1ci2…cik satisfying 1 ≤ i1 < i2 < ⋯ < ik ≤ n, then s′ is called a length-k subsequence of s.Definition 2. Given d sequences s1, s2, …, sd on Σ, a sequence s′ = ci1ci2…cik is called a Longest Common Subsequence (LCS) of these d sequences if (1) it is a subsequence of each of the d sequences, and (2) it is a longest such subsequence. Generally, the LCS of multiple sequences is not unique. For example, given the three DNA sequences ACTAGTGC, TGCTAGCA and CATGCGAT, there exist two LCSs of length 4, namely CAGC and CTGC. The multiple longest common subsequences (MLCS) problem is to find all the longest common subsequences of three or more sequences.Many algorithms have been proposed for the MLCS problem in the past decades. According to the models on which they are based, the existing MLCS algorithms can be classified into two categories: the dynamic programming based approaches and the dominant point based approaches. Next, we give a brief introduction to each of the two approaches.2.1.1. Dynamic programming based approachesThe classical approaches for the MLCS problem are based on dynamic programming (Sankoff, 1972; Smith and Waterman, 1981). Given d sequences s1, s2, …, sd of lengths n1, n2, …, nd, respectively, these approaches recursively construct a score table T having n1 × n2 × … × nd cells, in which the cell T[i1, i2, …, id] records the length of the MLCS of the prefixes s1[1…i1], s2[1…i2], …, sd[1…id]. Specifically, T[i1, i2, …, id] can be computed recursively by the following formula:

(1)   T[i_1, i_2, \ldots, i_d] =
      \begin{cases}
        0 & \text{if } \exists j\,(1 \le j \le d),\ i_j = 0 \\
        T[i_1 - 1, \ldots, i_d - 1] + 1 & \text{if } s_1[i_1] = s_2[i_2] = \cdots = s_d[i_d] \\
        \max(\bar{T}) & \text{otherwise}
      \end{cases}

where \bar{T} = \{ T[i_1 - 1, i_2, \ldots, i_d],\, T[i_1, i_2 - 1, \ldots, i_d],\, \ldots,\, T[i_1, i_2, \ldots, i_d - 1] \}. Once the score table T is constructed, the MLCS can be collected by tracing back from the last cell T[n1, n2, …, nd] to the first cell T[0, 0, …, 0]. Figure 1A shows the score table T of the two sequences s1 = ACTAGCTA and s2 = TCAGGTAT. The MLCS of these two sequences, namely TAGTA and CAGTA, can be found by tracing back from T[8, 8] to T[0, 0], as shown in Figure 1B.Figure 1(A) The score table of two DNA sequences ACTAGCTA and TCAGGTAT. (B) Constructing the MLCS from the score table, where the shaded cells correspond to dominant points.Obviously, both the time and space complexity of the dynamic programming based approaches for an MLCS problem with d sequences of length n are O(n^d) (Hsu and Du, 1984). Many methods have been proposed to improve the efficiency (Hirschberg, 1977; Apostolico et al., 1992; Masek and Paterson, 1980; Rick et al., 1994). However, as d and n increase, all of these approaches remain too inefficient for practical use.
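To make the recurrence in Formula (1) concrete, the following is a minimal Python sketch of the two-sequence special case: it fills the score table T and traces back one LCS. This is our illustration only, not code from any of the cited works; the function name and the tie-breaking rule used in the traceback are arbitrary choices.

```python
# Minimal sketch of the dynamic programming recurrence in Formula (1),
# specialized to two sequences: fill the score table T, then trace back one LCS.
# The example sequences are the ones used in Figure 1 above.

def lcs_dp(s1, s2):
    n1, n2 = len(s1), len(s2)
    # T[i][j] = length of an LCS of the prefixes s1[:i] and s2[:j]
    T = [[0] * (n2 + 1) for _ in range(n1 + 1)]
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            if s1[i - 1] == s2[j - 1]:
                T[i][j] = T[i - 1][j - 1] + 1
            else:
                T[i][j] = max(T[i - 1][j], T[i][j - 1])
    # Trace back from T[n1][n2] to T[0][0] to recover one (of possibly several) LCS.
    i, j, lcs = n1, n2, []
    while i > 0 and j > 0:
        if s1[i - 1] == s2[j - 1]:
            lcs.append(s1[i - 1])
            i, j = i - 1, j - 1
        elif T[i - 1][j] >= T[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return T[n1][n2], "".join(reversed(lcs))

print(lcs_dp("ACTAGCTA", "TCAGGTAT"))
# -> (5, ...) where the second element is one of the two LCSs, TAGTA or CAGTA.
```

For d sequences the same recurrence applies, but the table grows to n1 × n2 × … × nd cells, which is exactly the exponential blow-up that motivates the dominant point based approaches described next.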
2.1.2. Dominant point based approachesIn order to reduce the time and space complexity of the dynamic programming based approaches, many other methods have been proposed, among which the dominant point based approaches are the most efficient ones to date. Before discussing the dominant point based approaches, some related definitions are introduced first:Definition 3. Given d sequences s1, s2, …, sd on Σ, a vector p = (p1, p2, …, pd) is called a match point of the d sequences if s1[p1] = s2[p2] = … = sd[pd] = δ, i.e., δ is a common symbol appearing at position pi of sequence si for i = 1, 2, ⋯ , d. The corresponding symbol δ of match point p is denoted by C(p).Definition 4. Given two match points p = (p1, p2, …, pd) and q = (q1, q2, …, qd) of d sequences, we say that: (1) p = q if and only if pi = qi (1 ≤ i ≤ d). (2) p dominates q (or q is dominated by p), denoted by p ≼ q, if pi ≤ qi for each i (1 ≤ i ≤ d) and pj < qj for some j (1 ≤ j ≤ d). (3) p strongly dominates q (or q is strongly dominated by p), denoted by p ≺ q, if pi < qi for each i (1 ≤ i ≤ d). (4) q is a successor of p (or p is a precursor of q) if p ≺ q and there is no match point r such that p ≺ r ≺ q and C(q) = C(r). Note that one match point can have at most |Σ| successors, with each successor corresponding to one symbol in Σ.Definition 5. The level of a match point p = (p1, p2, …, pd) is defined to be L(p) = T[p1, p2, …, pd], where T is the score table computed by Formula (1). A match point p is called a k-dominant point (k-dominant for short) if and only if: (1) L(p) = k; and (2) there is no other match point q such that L(q) = k and q ≼ p. All the k-dominants form a set Dk.The motivation of the dominant point based approaches is to reduce the time and space complexity of the basic dynamic programming based methods. The key idea is based on the observation that only the dominant points can contribute to the construction of the MLCS (as shown in Figure 1B, the shaded cells correspond to the dominant points). Since the number of dominant points can be much smaller than the number of all cells in the score table T, a dominant point approach that identifies only the dominant points, without filling the whole score table, can greatly reduce the time and space complexity.The search space of the dominant point based approaches can be organized into a directed acyclic graph (DAG): a node in the DAG represents a match point, while an edge 〈p, q〉 represents that q is a successor of p, i.e., p ≺ q and L(q) = L(p) + 1. Initially, the DAG contains only a source node (0, 0, …, 0) with no incoming edges and an end node (∞, ∞, …, ∞) with no outgoing edges. The DAG is then constructed level by level as follows: at first, let the level k = 0 and D0 = {(0, 0, …, 0)}; then, with a forward iteration procedure, the (k + 1)-dominants Dk + 1 are computed from the k-dominants Dk, a procedure denoted by Dk → Dk + 1. Specifically, each node in Dk is expanded by generating all of its |Σ| successors, and then a pruning operation called Minima is performed to remove the duplicated and dominated successors, so that only the dominant ones are retained in Dk + 1. Once all the nodes in the graph have been expanded, the whole DAG is constructed, in which a longest path from the source node to the end node corresponds to an LCS; thus, the MLCS problem becomes that of finding all the longest paths from the source node to the end node. In the following, we will use a simple example to illustrate the above procedure.Example 1. Finding the MLCS of the sequences ACTAGCTA and TCAGGTAT with the dominant point based approach, as shown in Figure 2.Step 0. Set the source node (0, 0) and the end node (∞, ∞).Step 1. Construct the nodes in level 1. For symbol A, the components of match point (1, 3) are the first positions of A in the two input sequences from the beginning.
Thus, node A(1, 3) is a successor of the source node corresponding to symbol A in level 1. Similarly, nodes C(2, 2), G(5, 4), and T(3, 1) are also successors of the source node, corresponding to symbols C, G and T, respectively. Among these four nodes in level 1, the dominated node G(5, 4) is found and deleted (using the Minima operation), as shown in gray in Figure 2. The remaining three dominant nodes form D1 = {A(1, 3), C(2, 2), T(3, 1)}.Step 2. Construct the nodes in level 2. For each node in D1, e.g., for A(1, 3) ∈ D1, symbol A with match point (4, 7) is the first common symbol A in the two sequences after symbol A with match point (1, 3) (i.e., after node A(1, 3) ∈ D1). Thus, node A(4, 7) is a successor of A(1, 3) corresponding to symbol A in level 2. Similarly, nodes T(3, 6) and G(5, 4) are also successors of A(1, 3), corresponding to symbols T and G, respectively, in level 2. In the same way, node C(2, 2) in level 1 generates three successors A(4, 3), G(5, 4) and T(3, 6) in level 2, and node T(3, 1) in level 1 generates four successors A(4, 3), C(6, 2), G(5, 4), and T(7, 6) in level 2. Note that some nodes appear more than once. Among these ten nodes in level 2, the duplicated nodes A(4, 3), G(5, 4) (deleted twice) and T(3, 6) are found and deleted (using Minima), as shown in black in level 2. In addition, all dominated nodes (4, 7), (5, 4), and (7, 6) are found and deleted (using Minima), as shown in gray in level 2. The remaining dominant points form the set D2 = {T(3, 6), A(4, 3), C(6, 2)}, which forms the final level 2 of the graph. Note that if a node has no successors, the end node is taken to be its only successor.Step 3. Repeat the above construction process level by level until the whole DAG is constructed.Figure 2The DAG of the two sequences ACTAGCTA and TCAGGTAT constructed by the general dominant point based algorithms, in which the black and gray nodes will be eliminated by the Minima operation.It can be seen from the above example that the dominant point based approaches have two main drawbacks: (1) there are a huge number of duplicated and dominated nodes in each level, which consumes a great deal of memory; and (2) all of these duplicated and dominated nodes must be deleted, and finding them in each level requires a large number of pairwise comparisons of d-dimensional vectors (each such comparison in turn requires d pairwise comparisons of integers). Thus, the deletion of duplicated and dominated nodes across all levels is very time consuming.
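To make the dominance test of Definition 4 and the Minima pruning step concrete, here is a minimal Python sketch written directly from the definitions above; it is our illustration only, not the optimized pruning used by FAST-LCS or the other algorithms surveyed below, and the function names are arbitrary.

```python
# Minimal sketch of the dominance relation (Definition 4) and the Minima step:
# remove duplicated and dominated match points from one candidate level.

def dominates(p, q):
    """True if match point p dominates q: p <= q componentwise and p != q."""
    return all(pi <= qi for pi, qi in zip(p, q)) and any(pi < qi for pi, qi in zip(p, q))

def minima(points):
    """Remove duplicates, then keep only the non-dominated match points."""
    unique = list(set(points))
    return [q for q in unique
            if not any(dominates(p, q) for p in unique if p != q)]

# Candidate level-1 match points from Example 1 (symbols omitted for brevity):
level1 = [(1, 3), (2, 2), (3, 1), (5, 4)]
print(minima(level1))   # keeps (1, 3), (2, 2), (3, 1); the dominated point (5, 4) is pruned
```

Each call to dominates costs d integer comparisons, and this naive minima compares every pair of points in a level, which is precisely the pairwise-comparison cost identified as a drawback above.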
Hunt and Szymanski (1977) proposed the first dominant point based algorithm for two sequences, with time complexity O((r + n)logn), where r is the number of nodes in the DAG and n is the length of the two sequences. Afterwards, to further improve the efficiency, a variety of dominant point based LCS/MLCS algorithms were presented. Korkin (2001) proposed the first parallel MLCS algorithm, with time complexity O(|Σ||D|), where |D| is the number of dominants in the graph. Chen et al. (2006) presented an efficient MLCS algorithm, FAST-LCS, for DNA sequences; it introduced a novel data structure called the successor table to obtain the successors of nodes in constant time and used a pruning operation to eliminate the non-dominant nodes in each level. Wang et al. (2011) proposed the algorithm Quick-DPAR to improve on the FAST-LCS algorithm; it uses a divide-and-conquer strategy to eliminate the non-dominant nodes, which is very suitable for parallelization, and its parallelized version, Quick-DPPAR, was reported to achieve a near-linear speedup over the serial algorithm. Li et al. (2012) and Yang et al. (2010) made efforts to develop efficient parallel algorithms for the LCS problem on GPUs and for the MLCS problem on a cloud platform, respectively. Unfortunately, the approach of Yang et al. (2010) is not suitable for MLCS problems with many sequences due to its large synchronization costs. Recently, Li et al. (2016b,a) proposed two algorithms based on dominant points, PTop-MLCS and RLP-MLCS; these algorithms use a novel graph model called the Non-redundant Common Subsequence Graph (NCSG), which can greatly reduce the number of redundant nodes during processing, and adopt a two-pass topological sorting procedure to find the MLCS. The authors claimed that the time and space complexity of their algorithms is linear in the number of nodes in the NCSG.In practice, for MLCS problems with a large number of sequences, the traditional algorithms usually need a long time and a large amount of space to find the optimal solution (the complete MLCS). To address this issue, approximate algorithms have been investigated that quickly produce a suboptimal solution (a partial MLCS) and gradually improve it when given more time, until an optimal one is found. Yang et al. (2013) proposed an approximate algorithm, Pro-MLCS, as well as its efficient parallelization based on the dominant point model. Pro-MLCS can find an approximate solution quickly, taking only around 3% of the entire running time, and then progressively generates better solutions until obtaining the optimal one. More recently, Yang et al. (2014) proposed two further approximate algorithms, SA-MLCS and SLA-MLCS. SA-MLCS uses an iterative beam widening search strategy to reduce space usage during the iterative process of finding better solutions. Based on SA-MLCS, SLA-MLCS, a space-bounded algorithm, is developed to keep space usage from exceeding the available memory.
[ "28187279", "17217522", "4500555", "7265238", "25400485" ]
[ { "pmid": "28187279", "title": "Next-Generation Sequencing of Circulating Tumor DNA for Early Cancer Detection.", "abstract": "Curative therapies are most successful when cancer is diagnosed and treated at an early stage. We advocate that technological advances in next-generation sequencing of circulating, tumor-derived nucleic acids hold promise for addressing the challenge of developing safe and effective cancer screening tests." }, { "pmid": "17217522", "title": "A fast parallel algorithm for finding the longest common sequence of multiple biosequences.", "abstract": "BACKGROUND\nSearching for the longest common sequence (LCS) of multiple biosequences is one of the most fundamental tasks in bioinformatics. In this paper, we present a parallel algorithm named FAST_LCS to speedup the computation for finding LCS.\n\n\nRESULTS\nA fast parallel algorithm for LCS is presented. The algorithm first constructs a novel successor table to obtain all the identical pairs and their levels. It then obtains the LCS by tracing back from the identical character pairs at the last level. Effective pruning techniques are developed to significantly reduce the computational complexity. Experimental results on gene sequences in the tigr database show that our algorithm is optimal and much more efficient than other leading LCS algorithms.\n\n\nCONCLUSION\nWe have developed one of the fastest parallel LCS algorithms on an MPP parallel computing model. For two sequences X and Y with lengths n and m, respectively, the memory required is max{4*(n+1)+4*(m+1), L}, where L is the number of identical character pairs. The time complexity is O(L) for sequential execution, and O(|LCS(X, Y)|) for parallel execution, where |LCS(X, Y)| is the length of the LCS of X and Y. For n sequences X1, X2, ..., Xn, the time complexity is O(L) for sequential execution, and O(|LCS(X1, X2, ..., Xn)|) for parallel execution. Experimental results support our analysis by showing significant improvement of the proposed method over other leading LCS algorithms." }, { "pmid": "4500555", "title": "Matching sequences under deletion-insertion constraints.", "abstract": "Given two finite sequences, we wish to find the longest common subsequences satisfying certain deletion/insertion constraints. Consider two successive terms in the desired subsequence. The distance between their positions must be the same in the two original sequences for all but a limited number of such pairs of successive terms. Needleman and Wunsch gave an algorithm for finding longest common subsequences without constraints. This is improved from the viewpoint of computational economy. An economical algorithm is then elaborated for finding subsequences satisfying deletion/insertion constraints. This result is useful in the study of genetic homology based on nucleotide or amino-acid sequences." }, { "pmid": "25400485", "title": "A Space-Bounded Anytime Algorithm for the Multiple Longest Common Subsequence Problem.", "abstract": "The multiple longest common subsequence (MLCS) problem, related to the identification of sequence similarity, is an important problem in many fields. As an NP-hard problem, its exact algorithms have difficulty in handling large-scale data and time- and space-efficient algorithms are required in real-world applications. To deal with time constraints, anytime algorithms have been proposed to generate good solutions with a reasonable time. However, there exists little work on space-efficient MLCS algorithms. 
In this paper, we formulate the MLCS problem into a graph search problem and present two space-efficient anytime MLCS algorithms, SA-MLCS and SLA-MLCS. SA-MLCS uses an iterative beam widening search strategy to reduce space usage during the iterative process of finding better solutions. Based on SA-MLCS, SLA-MLCS, a space-bounded algorithm, is developed to avoid space usage from exceeding available memory. SLA-MLCS uses a replacing strategy when SA-MLCS reaches a given space bound. Experimental results show SA-MLCS and SLA-MLCS use an order of magnitude less space and time than the state-of-the-art approximate algorithm MLCS-APP while finding better solutions. Compared to the state-of-the-art anytime algorithm Pro-MLCS, SA-MLCS and SLA-MLCS can solve an order of magnitude larger size instances. Furthermore, SLA-MLCS can find much better solutions than SA-MLCS on large size instances." } ]
JMIR Medical Informatics
28760726
PMC5556254
10.2196/medinform.7140
Triaging Patient Complaints: Monte Carlo Cross-Validation of Six Machine Learning Classifiers
BackgroundUnsolicited patient complaints can be a useful service recovery tool for health care organizations. Some patient complaints contain information that may necessitate further action on the part of the health care organization and/or the health care professional. Current approaches depend on the manual processing of patient complaints, which can be costly, slow, and challenging in terms of scalability.ObjectiveThe aim of this study was to evaluate automatic patient triage, which can potentially improve response time and provide much-needed scale, thereby enhancing opportunities to encourage physicians to self-regulate.MethodsWe implemented a comparison of several well-known machine learning classifiers to detect whether a complaint was associated with a physician or his/her medical practice. We compared these classifiers using a real-life dataset containing 14,335 patient complaints associated with 768 physicians that was extracted from patient complaints collected by the Patient Advocacy Reporting System developed at Vanderbilt University and associated institutions. We conducted a 10-split Monte Carlo cross-validation to validate our results.ResultsWe achieved an accuracy of 82% and an F-score of 81% in correctly classifying patient complaints, with sensitivity and specificity of 0.76 and 0.87, respectively.ConclusionsWe demonstrate that natural language processing methods based on modeling patient complaint text can be effective in identifying those patient complaints requiring physician action.
Related WorkThe bulk of the textual artifacts in health care can be found in two main sources: clinical and nonclinical. Clinical textual artifacts are largely entries in the medical chart, comments on the case, or physician notes. Medical chart notes tend to be consciously made and well structured, whereas case comments and physician notes focus on treatment (including diagnoses) of the patient. Nonclinical textual artifacts include unsolicited patient feedback and often revolve around complaints. The text is variable, may contain abbreviations, and may extend beyond the actual treatment or diagnosis.Previous research has focused on clinical textual artifacts [8]. Recent research demonstrates the possibility of applying natural language processing (NLP) to electronic medical records to identify postoperative complications [9]. Bejan and Denny [10] showed how to identify treatment relationships in clinical text using a supervised learning system that is able to predict whether or not a treatment relation exists between any two medical concepts mentioned in the clinical notes.Cui et al [11] explored a large number of consumer health questions. For each question, they selected a smaller set of the most relevant concepts, adopting the idea of the term frequency-inverse document frequency (TF-IDF) metric. Instead of computing the TF-IDF based on terms, they used concept unique identifiers. Their results indicate that we can infer more information from patient comments than commonly thought. However, questions are short and limited, whereas patient complaints are rich and elaborate.Sakai et al [12] concluded that how risk assessment and classification is configured is often a decisive intervention in the reorganization of the work process in emergency services. They demonstrated that the textual analysis of feedback provided by nurses can expose the sentiment and feelings of the emergency workers and help improve the outcomes.Temporal information in discharge summaries has been successfully used [13] to classify encounters, enabling the placement of data within a structure that provides a foundational representation on which further reasoning, including the addition of domain knowledge, can be accomplished.Additional research [14] extended the clinical Text Analysis and Knowledge Extraction System (cTAKES) with simplified feature extraction and the development of both rule-based and machine learning-based document classifiers. The resulting system, the Yale cTAKES Extensions (YTEX), can help classify radiology reports containing findings suggestive of hepatic decompensation. A recent systematic literature review of 85 articles focusing on the secondary use of structured patient records showed that electronic health record data structuring methods are often described ambiguously and may lack a clear definition [15].
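As a concrete illustration of the concept-level TF-IDF idea attributed to Cui et al [11], here is a minimal Python sketch in which each document is represented as a bag of concept unique identifiers (CUIs) rather than words. The documents, the CUIs, and the smoothing variant are invented for illustration and are not taken from their pipeline.

```python
# Minimal sketch: TF-IDF computed over concept unique identifiers (CUIs)
# instead of terms, then used to keep a small set of "most relevant" concepts.
# The documents and CUIs below are invented examples.
import math
from collections import Counter

docs = [
    ["C0020538", "C0011849", "C0020538"],   # e.g., hypertension, diabetes, hypertension
    ["C0011849", "C0004096"],               # e.g., diabetes, asthma
    ["C0020538", "C0004096", "C0004096"],   # e.g., hypertension, asthma, asthma
]

def concept_tfidf(doc, corpus):
    tf = Counter(doc)
    n_docs = len(corpus)
    scores = {}
    for concept, count in tf.items():
        df = sum(1 for d in corpus if concept in d)          # document frequency
        idf = math.log((1 + n_docs) / (1 + df)) + 1          # one common smoothed-idf variant
        scores[concept] = (count / len(doc)) * idf
    return scores

# Keep only the top-k concepts per document as its most relevant concepts.
for doc in docs:
    ranked = sorted(concept_tfidf(doc, docs).items(), key=lambda kv: -kv[1])
    print(ranked[:2])
```

Ranking concepts this way retains identifiers that are frequent within a document but comparatively rare across the collection, which is the sense in which a smaller set of the most relevant concepts is selected.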
[ "21226384", "17971689", "25916627", "11367775", "24195197", "1635463", "21862746", "16169282", "21622934", "25991152", "12668687", "20808728", "16119262" ]
[ { "pmid": "21226384", "title": "Best practices for basic and advanced skills in health care service recovery: a case study of a re-admitted patient.", "abstract": "BACKGROUND\nService recovery refers to an organizations entire process for facilitating resolution of dissatisfactions, whether or not visible to patients and families. Patients are an important resource for reporting miscommunications, provider inattention, rudeness, or delays, especially if they perceive a connection to misdiagnosis or failed treatment. Health systems that encourage patients to be \"the eyes and ears\" of individual and team performance capitalize on a rich source of data for quality improvement and risk prevention. Effective service recovery requires organizations (1) to learn about negative perceptions and experiences and (2) to create an infrastructure that supports staff's ability to respond. Service recovery requires the exercise of both basic and advanced skills. We term certain skills as advanced because of the significant variation in their use or endorsement among 30 health care organizations in the United States.\n\n\nBEST PRACTICES FOR BASIC SERVICE RECOVERY\nOn the basis of our work with the 30 organizations, a mnemonic, HEARD, incorporates best practices for basic service recovery processes: Hearing the person's concern; Empathizing with the person raising the issue; Acknowledging, expressing appreciation to the person for sharing, and Apologizing when warranted; Responding to the problem, setting time lines and expectations for follow-up; and Documenting or Delegating the documentation to the appropriate person.\n\n\nBEST PRACTICES FOR ADVANCED SERVICE RECOVERY\nImpartiality, chain of command, setting boundaries, and Documentation represent four advanced service recovery skills critical for addressing challenging situations.\n\n\nCONCLUSION\nUsing best practices in service recovery enables the organization to do its best to make right what patients and family members experience as wrong." }, { "pmid": "17971689", "title": "A complementary approach to promoting professionalism: identifying, measuring, and addressing unprofessional behaviors.", "abstract": "Vanderbilt University School of Medicine (VUSM) employs several strategies for teaching professionalism. This article, however, reviews VUSM's alternative, complementary approach: identifying, measuring, and addressing unprofessional behaviors. The key to this alternative approach is a supportive infrastructure that includes VUSM leadership's commitment to addressing unprofessional/disruptive behaviors, a model to guide intervention, supportive institutional policies, surveillance tools for capturing patients' and staff members' allegations, review processes, multilevel training, and resources for addressing disruptive behavior.Our model for addressing disruptive behavior focuses on four graduated interventions: informal conversations for single incidents, nonpunitive \"awareness\" interventions when data reveal patterns, leader-developed action plans if patterns persist, and imposition of disciplinary processes if the plans fail. Every physician needs skills for conducting informal interventions with peers; therefore, these are taught throughout VUSM's curriculum. Physician leaders receive skills training for conducting higher-level interventions. No single strategy fits every situation, so we teach a balance beam approach to understanding and weighing the pros and cons of alternative intervention-related communications. 
Understanding common excuses, rationalizations, denials, and barriers to change prepares physicians to appropriately, consistently, and professionally address the real issues. Failing to address unprofessional behavior simply promotes more of it. Besides being the right thing to do, addressing unprofessional behavior can yield improved staff satisfaction and retention, enhanced reputation, professionals who model the curriculum as taught, improved patient safety and risk-management experience, and better, more productive work environments." }, { "pmid": "25916627", "title": "Patient Complaints and Adverse Surgical Outcomes.", "abstract": "One factor that affects surgical team performance is unprofessional behavior exhibited by the surgeon, which may be observed by patients and families and reported to health care organizations in the form of spontaneous complaints. The objective of this study was to assess the relationship between patient complaints and adverse surgical outcomes. A retrospective cohort study used American College of Surgeons National Surgical Quality Improvement Program data from an academic medical center and included 10 536 patients with surgical procedures performed by 66 general and vascular surgeons. The number of complaints for a surgeon was correlated with surgical occurrences (P < .01). Surgeons with more patient complaints had a greater rate of surgical occurrences if the surgeon's aggregate preoperative risk was higher (β = .25, P < .05), whereas there was no statistically significant relationship between patient complaints and surgical occurrences for surgeons with lower aggregate perioperative risk (β = -.20, P = .77)." }, { "pmid": "11367775", "title": "The role of complaint management in the service recovery process.", "abstract": "BACKGROUND\nPatient satisfaction and retention can be influenced by the development of an effective service recovery program that can identify complaints and remedy failure points in the service system. Patient complaints provide organizations with an opportunity to resolve unsatisfactory situations and to track complaint data for quality improvement purposes.\n\n\nSERVICE RECOVERY\nService recovery is an important and effective customer retention tool. One way an organization can ensure repeat business is by developing a strong customer service program that includes service recovery as an essential component. The concept of service recovery involves the service provider taking responsive action to \"recover\" lost or dissatisfied customers and convert them into satisfied customers. Service recovery has proven to be cost-effective in other service industries.\n\n\nTHE COMPLAINT MANAGEMENT PROCESS\nThe complaint management process involves six steps that organizations can use to influence effective service recovery: (1) encourage complaints as a quality improvement tool; (2) establish a team of representatives to handle complaints; (3) resolve customer problems quickly and effectively; (4) develop a complaint database; (5) commit to identifying failure points in the service system; and (6) track trends and use information to improve service processes.\n\n\nSUMMARY AND CONCLUSIONS\nCustomer retention is enhanced when an organization can reclaim disgruntled patients through the development of effective service recovery programs. Health care organizations can become more customer oriented by taking advantage of the information provided by patient complaints, increasing patient satisfaction and retention in the process." 
}, { "pmid": "24195197", "title": "An intervention model that promotes accountability: peer messengers and patient/family complaints.", "abstract": "BACKGROUND\nPatients and their families are well positioned to partner with health care organizations to help identify unsafe and dissatisfying behaviors and performance. A peer messenger process was designed by the Center for Professional and Patient Advocacy at Vanderbilt University Medical Center (Nashville, Tennessee) to address \"high-risk\" physicians identified through analysis of unsolicited patient complaints, a proxy for risk of lawsuits.\n\n\nMETHODS\nThis retrospective, descriptive study used peer messenger debriefing results from data-driven interventions at 16 geographically disparate community (n = 7) and academic (n = 9) medical centers in the United States. Some 178 physicians served as peer messengers, conducting interventions from 2005, through 2009 on 373 physicians identified as high risk.\n\n\nRESULTS\nMost (97%) of the high-risk physicians received the feedback professionally, and 64% were \"Responders.\" Responders' risk scores improved at least 15%, where Nonresponders' scores worsened (17%) or remained unchanged (19%) (p < or = .001). Responders were more often physicians practicing in medicine and surgery than emergency medicine physicians, had longer organizational tenures, and engaged in lengthier first-time intervention meetings with messengers. Years to achieve responder status correlated positively with initial communication-related complaints (r = .32, p < .001), but all complaint categories were equally likely to change over time.\n\n\nCONCLUSIONS\nPeer messengers, recognized by leaders and appropriately supported with ongoing training, high-quality data, and evidence of positive outcomes, are willing to intervene with colleagues over an extended period of time. The physician peer messenger process reduces patient complaints and is adaptable to addressing unnecessary variation in other quality/safety metrics." }, { "pmid": "1635463", "title": "Natural language processing and semantical representation of medical texts.", "abstract": "For medical records, the challenge for the present decade is Natural Language Processing (NLP) of texts, and the construction of an adequate Knowledge Representation. This article describes the components of an NLP system, which is currently being developed in the Geneva Hospital, and within the European Community's AIM programme. They are: a Natural Language Analyser, a Conceptual Graphs Builder, a Data Base Storage component, a Query Processor, a Natural Language Generator and, in addition, a Translator, a Diagnosis Encoding System and a Literature Indexing System. Taking advantage of a closed domain of knowledge, defined around a medical specialty, a method called proximity processing has been developed. In this situation no parser of the initial text is needed, and the system is based on semantical information of near words in sentences. The benefits are: easy implementation, portability between languages, robustness towards badly-formed sentences, and a sound representation using conceptual graphs." 
}, { "pmid": "21862746", "title": "Automated identification of postoperative complications within an electronic medical record using natural language processing.", "abstract": "CONTEXT\nCurrently most automated methods to identify patient safety occurrences rely on administrative data codes; however, free-text searches of electronic medical records could represent an additional surveillance approach.\n\n\nOBJECTIVE\nTo evaluate a natural language processing search-approach to identify postoperative surgical complications within a comprehensive electronic medical record.\n\n\nDESIGN, SETTING, AND PATIENTS\nCross-sectional study involving 2974 patients undergoing inpatient surgical procedures at 6 Veterans Health Administration (VHA) medical centers from 1999 to 2006.\n\n\nMAIN OUTCOME MEASURES\nPostoperative occurrences of acute renal failure requiring dialysis, deep vein thrombosis, pulmonary embolism, sepsis, pneumonia, or myocardial infarction identified through medical record review as part of the VA Surgical Quality Improvement Program. We determined the sensitivity and specificity of the natural language processing approach to identify these complications and compared its performance with patient safety indicators that use discharge coding information.\n\n\nRESULTS\nThe proportion of postoperative events for each sample was 2% (39 of 1924) for acute renal failure requiring dialysis, 0.7% (18 of 2327) for pulmonary embolism, 1% (29 of 2327) for deep vein thrombosis, 7% (61 of 866) for sepsis, 16% (222 of 1405) for pneumonia, and 2% (35 of 1822) for myocardial infarction. Natural language processing correctly identified 82% (95% confidence interval [CI], 67%-91%) of acute renal failure cases compared with 38% (95% CI, 25%-54%) for patient safety indicators. Similar results were obtained for venous thromboembolism (59%, 95% CI, 44%-72% vs 46%, 95% CI, 32%-60%), pneumonia (64%, 95% CI, 58%-70% vs 5%, 95% CI, 3%-9%), sepsis (89%, 95% CI, 78%-94% vs 34%, 95% CI, 24%-47%), and postoperative myocardial infarction (91%, 95% CI, 78%-97%) vs 89%, 95% CI, 74%-96%). Both natural language processing and patient safety indicators were highly specific for these diagnoses.\n\n\nCONCLUSION\nAmong patients undergoing inpatient surgical procedures at VA medical centers, natural language processing analysis of electronic medical records to identify postoperative complications had higher sensitivity and lower specificity compared with patient safety indicators based on discharge coding." }, { "pmid": "16169282", "title": "A temporal constraint structure for extracting temporal information from clinical narrative.", "abstract": "INTRODUCTION\nTime is an essential element in medical data and knowledge which is intrinsically connected with medical reasoning tasks. Many temporal reasoning mechanisms use constraint-based approaches. Our previous research demonstrates that electronic discharge summaries can be modeled as a simple temporal problem (STP).\n\n\nOBJECTIVE\nTo categorize temporal expressions in clinical narrative text and to propose and evaluate a temporal constraint structure designed to model this temporal information and to support the implementation of higher-level temporal reasoning.\n\n\nMETHODS\nA corpus of 200 random discharge summaries across 18 years was applied in a grounded approach to construct a representation structure. 
Then, a subset of 100 discharge summaries was used to tally the frequency of each identified time category and the percentage of temporal expressions modeled by the structure. Fifty random expressions were used to assess inter-coder agreement.\n\n\nRESULTS\nSix main categories of temporal expressions were identified. The constructed temporal constraint structure models time over which an event occurs by constraining its starting time and ending time. It includes a set of fields for the endpoint(s) of an event, anchor information, qualitative and metric temporal relations, and vagueness. In 100 discharge summaries, 1961 of 2022 (97%) identified temporal expressions were effectively modeled using the temporal constraint structure. Inter-coder evaluation of 50 expressions yielded exact match in 90%, partial match with trivial differences in 8%, partial match with large differences in 2%, and total mismatch in 0%.\n\n\nCONCLUSION\nThe proposed temporal constraint structure embodies a sufficient and successful implementation method to encode the diversity of temporal information in discharge summaries. Placing data within the structure provides a foundational representation upon which further reasoning, including the addition of domain knowledge and other post-processing to implement an STP, can be accomplished." }, { "pmid": "21622934", "title": "The Yale cTAKES extensions for document classification: architecture and application.", "abstract": "BACKGROUND\nOpen-source clinical natural-language-processing (NLP) systems have lowered the barrier to the development of effective clinical document classification systems. Clinical natural-language-processing systems annotate the syntax and semantics of clinical text; however, feature extraction and representation for document classification pose technical challenges.\n\n\nMETHODS\nThe authors developed extensions to the clinical Text Analysis and Knowledge Extraction System (cTAKES) that simplify feature extraction, experimentation with various feature representations, and the development of both rule and machine-learning based document classifiers. The authors describe and evaluate their system, the Yale cTAKES Extensions (YTEX), on the classification of radiology reports that contain findings suggestive of hepatic decompensation.\n\n\nRESULTS AND DISCUSSION\nThe F(1)-Score of the system for the retrieval of abdominal radiology reports was 96%, and was 79%, 91%, and 95% for the presence of liver masses, ascites, and varices, respectively. The authors released YTEX as open source, available at http://code.google.com/p/ytex." }, { "pmid": "25991152", "title": "Secondary use of structured patient data: interim results of a systematic review.", "abstract": "In addition to patient care, EHR data are increasingly in demand for secondary purposes, e.g. administration, research and enterprise resource planning. We conducted a systematic literature review and subsequent analysis of 85 articles focusing on the secondary use of structured patient records. We grounded the analysis on how patient records have been structured, how these structures have been evaluated and what are the main results achieved from the secondary use viewpoint. We conclude that secondary use requires complete and interoperable patient records, which in turn depend on better alignment of primary and secondary users' needs and benefits." 
}, { "pmid": "12668687", "title": "The role of domain knowledge in automating medical text report classification.", "abstract": "OBJECTIVE\nTo analyze the effect of expert knowledge on the inductive learning process in creating classifiers for medical text reports.\n\n\nDESIGN\nThe authors converted medical text reports to a structured form through natural language processing. They then inductively created classifiers for medical text reports using varying degrees and types of expert knowledge and different inductive learning algorithms. The authors measured performance of the different classifiers as well as the costs to induce classifiers and acquire expert knowledge.\n\n\nMEASUREMENTS\nThe measurements used were classifier performance, training-set size efficiency, and classifier creation cost.\n\n\nRESULTS\nExpert knowledge was shown to be the most significant factor affecting inductive learning performance, outweighing differences in learning algorithms. The use of expert knowledge can affect comparisons between learning algorithms. This expert knowledge may be obtained and represented separately as knowledge about the clinical task or about the data representation used. The benefit of the expert knowledge is more than that of inductive learning itself, with less cost to obtain.\n\n\nCONCLUSION\nFor medical text report classification, expert knowledge acquisition is more significant to performance and more cost-effective to obtain than knowledge discovery. Building classifiers should therefore focus more on acquiring knowledge from experts than trying to learn this knowledge inductively." }, { "pmid": "20808728", "title": "Regularization Paths for Generalized Linear Models via Coordinate Descent.", "abstract": "We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems while the penalties include ℓ(1) (the lasso), ℓ(2) (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods." }, { "pmid": "16119262", "title": "Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy.", "abstract": "Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a compact set of superior features at very low cost. We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminate analysis) and four different data sets (handwritten digits, arrhythmia, NCI cancer cell lines, and lymphoma tissues). The results confirm that mRMR leads to promising improvement on feature selection and classification accuracy." 
} ]
JMIR mHealth and uHealth
28778851
PMC5562934
10.2196/mhealth.7521
Feature-Free Activity Classification of Inertial Sensor Data With Machine Vision Techniques: Method, Development, and Evaluation
Background: Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additional hand-crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development.

Objective: The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This is achieved by removing the need for deep DSP skills through the use of transfer learning, that is, by reusing a pretrained convolutional neural network (CNN) developed for machine vision purposes for an exercise classification effort. The new method should simply require researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network.

Methods: We applied a CNN, an established machine vision technique, to the task of ED. TensorFlow, a high-level framework for machine learning, was used to meet the infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), were collected. The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification on the same dataset was also performed using the more conventional approach of feature extraction followed by random forest classification.

Results: With the collected dataset and the proposed method, the different exercises could be recognized with 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED.

Conclusions: The high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises are sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds.
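To make the image-generation idea above concrete, the following is a minimal, hedged sketch (not the authors' exact pipeline) of how a labeled window of IMU samples could be rendered as a bare time-series plot and filed into a per-label folder, ready for retraining an image classifier. The folder layout, window length, and channel count are illustrative assumptions.

```python
# Illustrative sketch (not the authors' exact pipeline): render a window of
# IMU samples as a bare time-series plot and save it into a folder named
# after the exercise label, so a generic image classifier can later be
# retrained on the resulting directory structure.
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt
import numpy as np


def save_window_as_image(window: np.ndarray, label: str, index: int,
                         out_dir: str = "plots") -> Path:
    """window: (n_samples, n_channels) array of accelerometer/gyroscope values."""
    target = Path(out_dir) / label
    target.mkdir(parents=True, exist_ok=True)

    fig, ax = plt.subplots(figsize=(3, 3), dpi=100)
    ax.plot(window)          # one line per channel
    ax.set_axis_off()        # keep only the waveform morphology
    path = target / f"{label}_{index:04d}.png"
    fig.savefig(path, bbox_inches="tight", pad_inches=0)
    plt.close(fig)
    return path


if __name__ == "__main__":
    # Fake 5 s window at 100 Hz with 6 channels (3 accel + 3 gyro) as a stand-in.
    rng = np.random.default_rng(0)
    fake_window = rng.standard_normal((500, 6)).cumsum(axis=0)
    print(save_window_as_image(fake_window, label="squat", index=0))
```

Stripping the axes keeps only the waveform shape, which is the visual cue the abstract suggests the retrained CNN exploits.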
Related Work
The three main topics in this section are as follows: (1) a brief overview of the current human activity recognition (HAR) and exercise detection (ED) literature, (2) an account of some of the newer advances in the field that use neural networks for parts of the feature discovery and reduction process, and (3) an introduction to transfer learning, highlighting its benefits in terms of time and resource savings and its suitability for working with smaller datasets.

Activity Classification for Inertial Sensor Data
Over the past 15 years, inertial sensors have become increasingly ubiquitous due to their presence in mobile phones and wearable activity trackers [2]. This has enabled countless applications in the monitoring of human activity and performance, spanning general HAR, gait analysis, the military field, the medical field, and exercise recognition and analysis [3-6]. Across all these application spaces, there are common challenges that must be overcome and common steps that must be implemented to successfully create functional motion classification systems.

Human activity recognition with wearable sensors usually pertains to the detection of gross motor movements such as walking, jogging, cycling, swimming, and sleeping [5,7]. In this field of motion tracking with inertial sensors, the key challenges are often considered to be (1) the selection of the attributes to be measured; (2) the construction of a portable, unobtrusive, and inexpensive data acquisition system; (3) the design of feature extraction and inference methods; (4) the collection of data under realistic conditions; (5) the flexibility to support new users without the need for retraining the system; and (6) the implementation in mobile devices meeting energy and processing requirements [3,7]. With the ever-increasing computational power and battery life of mobile devices, many of these challenges are becoming easier to overcome.

Whereas system functionality is dependent on hardware constraints, the accuracy, sensitivity, and specificity of HAR systems rely most heavily on building large, balanced, labeled datasets; the identification of strong features for classification; and the selection of the best machine learning method for each application [3,8-10]. Investigating the best features and machine learning methods for each HAR application requires an individual or team appropriately skilled in signal processing and machine learning, as well as a large amount of time. They must understand how to compute time-domain, frequency-domain, and time-frequency domain features from inertial sensor data and train and evaluate multiple machine learning methods (eg, random forests [11], support vector machines [12], k-nearest neighbors [13], and logistic regression [14]) with such features [3-5]. This means that those who may be most interested in the output of inertial sensor-based activity recognition systems (eg, medical professionals, exercise professionals, and biomechanists) are unable to design and create such systems without significant engagement with machine learning experts [4].

The above challenges in system design and implementation are replicated in activity recognition pertaining to more specific or acute movements. In the past decade, there has been a vast amount of work on the detection and quantification of specific rehabilitation and strength and conditioning exercises [15-17].
Such work has also endeavored to detect aberrant exercise technique and the specific mistakes that system users make while exercising, which can increase their chance of injury or decrease their body’s beneficial adaptation to the exercise stimulus [17,18]. The key steps in the development of such systems have recently been outlined as (1) inertial sensor data collection, (2) data preprocessing, (3) feature extraction, and (4) classification (Figure 1) [4]. Whereas the first step can generally be completed by exercise professionals (eg, physiotherapists and strength and conditioning coaches), the remaining steps require skills beyond those included in the training of such experts. Similarly, when analyzing gait with wearable sensors, feature extraction and classification have been highlighted as essential in the development of each application [19,20]. This again limits the type of professional who can create such systems and the rate at which hypotheses for new systems can be tested.

Figure 1. Steps involved in the development of an inertial measurement unit (IMU)-based exercise classification system.

Neural Networks and Activity Recognition
In the past few years, CNNs have been applied in a variety of manners to HAR, in the fields of both ambient and wearable sensing. Mo et al applied a novel approach utilizing machine vision methods to recognize twelve daily living tasks with the Microsoft Kinect. Rather than extract features from the Kinect data streams, they generated 144×48 images using 48 successive frames of skeleton data, 15×3 joint position coordinates, and 11×3×3 joint rotation matrices. These images were then used as input to a multilayer CNN, which automatically extracted features that were fed into a multilayer perceptron for classification [21]. Stefic and Patras utilized CNNs to extract areas of gaze fixation in raw image training data as participants watched videos of multiple activities [22]. This produced strong results in identifying salient regions of images that were then used for action recognition. Ma et al also combined a variety of CNNs to complete tasks such as segmenting hands and objects from first-person camera images and then using these segmented images and motion images to train an action-based and a motion-based CNN [23]. This novel use of CNNs allowed an increase in activity recognition rates of 6.6%, on average. These research efforts demonstrated the power of utilizing CNNs in multiple ways for HAR.

Research utilizing CNNs for HAR with wearable inertial sensors has also been published recently. Zeng et al implemented a method based on CNNs that captures the local dependency and scale invariance of an inertial sensor signal [24]. This allows features for activity recognition to be identified automatically. The motivation for developing this method was the difficulty of identifying strong features for HAR. Yang et al also highlighted the challenge and importance of identifying strong features for HAR [25]. They likewise employed CNNs for feature learning from raw inertial sensor signals. The strength of CNNs in HAR was again demonstrated here, as their use in this circumstance outperformed other HAR algorithms, on multiple datasets, that relied on heuristic hand-crafting of features or shallow learning architectures for feature learning.
Radu et al also recently demonstrated that using CNNs to identify discriminative features for HAR from multiple sensor inputs across various mobile phones and smartwatches, which have different sampling rates, data generation models, and sensitivities, outperforms classic methods of identifying such features [26]. The implementation of such feature learning techniques with CNNs is clearly beneficial but is complex and may not be suitable for HAR system developers without strong experience in machine learning and DSP. From a CNN perspective, these results are interesting and suggest significant scope for further exploration by machine learning researchers. However, for the purposes of this paper, their inclusion serves both to succinctly acknowledge that CNNs have been applied to HAR previously and to distinguish the present approach, which seeks to use well-developed CNN platforms tailored for machine vision tasks in a transfer learning context for HAR, using basic time series plots as the only user-created features.

Transfer Learning in Machine Vision
Deep learning-based machine vision techniques are used in many disciplines, from speech, video, and audio processing [27] through to HAR [21] and cancer research [28].

Training deep neural networks is a time-consuming and resource-intensive task, requiring not only specialized hardware (graphics processing unit [GPU]) but also large datasets of labeled data. Unlike other machine learning techniques, however, once the training work is completed, querying the resulting models to predict results on new data is fast. In addition, trained networks can be repurposed for other specific uses that need not be known in advance of the initial training [29]. This arises from the generalized vision capabilities that can emerge with suitable training. More precisely, each layer of the network learns a number of features from the input data, and that knowledge is refined through iterations. In fact, the learning that happens at different layers seems to be nonspecific to the dataset: the first few layers identify simple edges, subsequent layers identify boundaries and shapes, and the last few layers move toward object identification. These learned visual operators are applicable to other sets of data [30]. Transfer learning, then, is the generic name given to a classification effort in which a pretrained network is reused for a task for which it was not specifically trained. Deep learning frameworks such as Caffe [31] and TensorFlow can make use of pretrained networks, many of which have been made available by researchers in repositories such as the Caffe Model Zoo, available in its GitHub repository.

Retraining requires only a fraction of the time that a full training session would need (min/h instead of weeks) and, more importantly in many cases, allows for the use of much smaller datasets. An example of this is the Inception model provided by Google, whose engineers reportedly spent several weeks training on ImageNet [32] (a dataset of over 14 million images in over 2 thousand categories) using multiple GPUs and the TensorFlow framework. In their example [33], on the order of 3500 pictures of flowers in 5 different categories are used to retrain the generic model, producing a model with a fair accuracy rating on new data. In fact, during the retraining stage, the network is left almost intact.
The final classifier is the only part that is fully replaced, and “bottlenecks” (the outputs of the layer before the final one) are calculated to integrate the new training data into the already “cognizant” network. After that, the last layer is trained to work with the new classification categories. This happens in image batches of a size that can be adapted to the needs of the new dataset (alongside other hyperparameters such as learning rate and number of training steps).

Each step of the training process outputs values for training accuracy, validation accuracy, and cross entropy. A large difference between training and validation accuracy can indicate potential “overfitting” of the data, which can be a problem especially with small datasets, whereas the cross entropy is a loss function that provides an indication of how the training is progressing (decreasing values are expected).
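As a companion to the retraining workflow described above, here is a minimal transfer-learning sketch. It follows the same idea (keep the pretrained network almost intact and train only a new final classifier) but uses the Keras API of a recent TensorFlow 2.x release rather than the original Inception retrain.py script behind the flowers example; the directory layout, image size, and hyperparameters are placeholder assumptions.

```python
# Minimal transfer-learning sketch (assumes a recent TensorFlow 2.x install).
# The pretrained convolutional base stays frozen; only a new softmax head is
# trained on top of its pooled "bottleneck" features. Paths and hyperparameters
# are illustrative, not taken from the paper.
import tensorflow as tf

IMG_SIZE = (299, 299)   # InceptionV3's expected input resolution
NUM_CLASSES = 5         # e.g., five exercise categories

# Images are expected in plots/<label>/*.png, one folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "plots", validation_split=0.2, subset="training", seed=1,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "plots", validation_split=0.2, subset="validation", seed=1,
    image_size=IMG_SIZE, batch_size=32)

# Pretrained base with its classification top removed; weights stay frozen.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=IMG_SIZE + (3,))
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # InceptionV3 expects [-1, 1]
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # new final layer
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Only the new classification head is trained here.
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

During model.fit, the per-epoch training and validation accuracies play the same diagnostic role described above: a widening gap suggests overfitting on a small dataset, while a steadily decreasing loss (cross entropy) indicates that training is progressing.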
[ "27782290", "19342767", "25004153", "21258659", "25246403", "24807526", "21816105", "21889629", "22520559", "16212968", "22438763", "26017442", "24579166", "22498149", "19936042" ]
[ { "pmid": "27782290", "title": "Technology in Rehabilitation: Evaluating the Single Leg Squat Exercise with Wearable Inertial Measurement Units.", "abstract": "BACKGROUND\nThe single leg squat (SLS) is a common lower limb rehabilitation exercise. It is also frequently used as an evaluative exercise to screen for an increased risk of lower limb injury. To date athlete / patient SLS technique has been assessed using expensive laboratory equipment or subjective clinical judgement; both of which are not without shortcomings. Inertial measurement units (IMUs) may offer a low cost solution for the objective evaluation of athlete / patient SLS technique.\n\n\nOBJECTIVES\nThe aims of this study were to determine if in combination or in isolation IMUs positioned on the lumbar spine, thigh and shank are capable of: (a) distinguishing between acceptable and aberrant SLS technique; (b) identifying specific deviations from acceptable SLS technique.\n\n\nMETHODS\nEighty-three healthy volunteers participated (60 males, 23 females, age: 24.68 + / - 4.91 years, height: 1.75 + / - 0.09 m, body mass: 76.01 + / - 13.29 kg). All participants performed 10 SLSs on their left leg. IMUs were positioned on participants' lumbar spine, left shank and left thigh. These were utilized to record tri-axial accelerometer, gyroscope and magnetometer data during all repetitions of the SLS. SLS technique was labelled by a Chartered Physiotherapist using an evaluation framework. Features were extracted from the labelled sensor data. These features were used to train and evaluate a variety of random-forests classifiers that assessed SLS technique.\n\n\nRESULTS\nA three IMU system was moderately successful in detecting the overall quality of SLS performance (77 % accuracy, 77 % sensitivity and 78 % specificity). A single IMU worn on the shank can complete the same analysis with 76 % accuracy, 75 % sensitivity and 76 % specificity. Single sensors also produce competitive classification scores relative to multi-sensor systems in identifying specific deviations from acceptable SLS technique.\n\n\nCONCLUSIONS\nA single IMU positioned on the shank can differentiate between acceptable and aberrant SLS technique with moderate levels of accuracy. It can also capably identify specific deviations from optimal SLS performance. IMUs may offer a low cost solution for the objective evaluation of SLS performance. Additionally, the classifiers described may provide useful input to an exercise biofeedback application." }, { "pmid": "19342767", "title": "Activity identification using body-mounted sensors--a review of classification techniques.", "abstract": "With the advent of miniaturized sensing technology, which can be body-worn, it is now possible to collect and store data on different aspects of human movement under the conditions of free living. This technology has the potential to be used in automated activity profiling systems which produce a continuous record of activity patterns over extended periods of time. Such activity profiling systems are dependent on classification algorithms which can effectively interpret body-worn sensor data and identify different activities. This article reviews the different techniques which have been used to classify normal activities and/or identify falls from body-worn sensor data. The review is structured according to the different analytical techniques and illustrates the variety of approaches which have previously been applied in this field. 
Although significant progress has been made in this important area, there is still significant scope for further work, particularly in the application of advanced classification techniques to problems involving many different activities." }, { "pmid": "25004153", "title": "Wearable electronics and smart textiles: a critical review.", "abstract": "Electronic Textiles (e-textiles) are fabrics that feature electronics and interconnections woven into them, presenting physical flexibility and typical size that cannot be achieved with other existing electronic manufacturing techniques. Components and interconnections are intrinsic to the fabric and thus are less visible and not susceptible of becoming tangled or snagged by surrounding objects. E-textiles can also more easily adapt to fast changes in the computational and sensing requirements of any specific application, this one representing a useful feature for power management and context awareness. The vision behind wearable computing foresees future electronic systems to be an integral part of our everyday outfits. Such electronic devices have to meet special requirements concerning wearability. Wearable systems will be characterized by their ability to automatically recognize the activity and the behavioral status of their own user as well as of the situation around her/him, and to use this information to adjust the systems' configuration and functionality. This review focuses on recent advances in the field of Smart Textiles and pays particular attention to the materials and their manufacturing process. Each technique shows advantages and disadvantages and our aim is to highlight a possible trade-off between flexibility, ergonomics, low power consumption, integration and eventually autonomy." }, { "pmid": "25246403", "title": "Properties of AdeABC and AdeIJK efflux systems of Acinetobacter baumannii compared with those of the AcrAB-TolC system of Escherichia coli.", "abstract": "Acinetobacter baumannii contains RND-family efflux systems AdeABC and AdeIJK, which pump out a wide range of antimicrobial compounds, as judged from the MIC changes occurring upon deletion of the responsible genes. However, these studies may miss changes because of the high backgrounds generated by the remaining pumps and by β-lactamases, and it is unclear how the activities of these pumps compare quantitatively with those of the well-studied AcrAB-TolC system of Escherichia coli. We expressed adeABC and adeIJK of A. baumannii, as well as E. coli acrAB, in an E. coli host from which acrAB was deleted. The A. baumannii pumps were functional in E. coli, and the MIC changes that were observed largely confirmed the substrate range already reported, with important differences. Thus, the AdeABC system pumped out all β-lactams, an activity that was often missed in deletion studies. When the expression level of the pump genes was adjusted to a similar level for a comparison with AcrAB-TolC, we found that both A. baumannii efflux systems pumped out a wide range of compounds, but AdeABC was less effective than AcrAB-TolC in the extrusion of lipophilic β-lactams, novobiocin, and ethidium bromide, although it was more effective at tetracycline efflux. AdeIJK was remarkably more effective than a similar level of AcrAB-TolC in the efflux of β-lactams, novobiocin, and ethidium bromide, although it was less so in the efflux of erythromycin. 
These results thus allow us to compare these efflux systems on a quantitative basis, if we can assume that the heterologous systems are fully functional in the E. coli host." }, { "pmid": "24807526", "title": "Study on the impact of partition-induced dataset shift on k-fold cross-validation.", "abstract": "Cross-validation is a very commonly employed technique used to evaluate classifier performance. However, it can potentially introduce dataset shift, a harmful factor that is often not taken into account and can result in inaccurate performance estimation. This paper analyzes the prevalence and impact of partition-induced covariate shift on different k-fold cross-validation schemes. From the experimental results obtained, we conclude that the degree of partition-induced covariate shift depends on the cross-validation scheme considered. In this way, worse schemes may harm the correctness of a single-classifier performance estimation and also increase the needed number of repetitions of cross-validation to reach a stable performance estimation." }, { "pmid": "21816105", "title": "Random forests for verbal autopsy analysis: multisite validation study using clinical diagnostic gold standards.", "abstract": "BACKGROUND\nComputer-coded verbal autopsy (CCVA) is a promising alternative to the standard approach of physician-certified verbal autopsy (PCVA), because of its high speed, low cost, and reliability. This study introduces a new CCVA technique and validates its performance using defined clinical diagnostic criteria as a gold standard for a multisite sample of 12,542 verbal autopsies (VAs).\n\n\nMETHODS\nThe Random Forest (RF) Method from machine learning (ML) was adapted to predict cause of death by training random forests to distinguish between each pair of causes, and then combining the results through a novel ranking technique. We assessed quality of the new method at the individual level using chance-corrected concordance and at the population level using cause-specific mortality fraction (CSMF) accuracy as well as linear regression. We also compared the quality of RF to PCVA for all of these metrics. We performed this analysis separately for adult, child, and neonatal VAs. We also assessed the variation in performance with and without household recall of health care experience (HCE).\n\n\nRESULTS\nFor all metrics, for all settings, RF was as good as or better than PCVA, with the exception of a nonsignificantly lower CSMF accuracy for neonates with HCE information. With HCE, the chance-corrected concordance of RF was 3.4 percentage points higher for adults, 3.2 percentage points higher for children, and 1.6 percentage points higher for neonates. The CSMF accuracy was 0.097 higher for adults, 0.097 higher for children, and 0.007 lower for neonates. Without HCE, the chance-corrected concordance of RF was 8.1 percentage points higher than PCVA for adults, 10.2 percentage points higher for children, and 5.9 percentage points higher for neonates. The CSMF accuracy was higher for RF by 0.102 for adults, 0.131 for children, and 0.025 for neonates.\n\n\nCONCLUSIONS\nWe found that our RF Method outperformed the PCVA method in terms of chance-corrected concordance and CSMF accuracy for adult and child VA with and without HCE and for neonatal VA without HCE. It is also preferable to PCVA in terms of time and cost. Therefore, we recommend it as the technique of choice for analyzing past and current verbal autopsies." 
}, { "pmid": "21889629", "title": "Support vector machines in water quality management.", "abstract": "Support vector classification (SVC) and regression (SVR) models were constructed and applied to the surface water quality data to optimize the monitoring program. The data set comprised of 1500 water samples representing 10 different sites monitored for 15 years. The objectives of the study were to classify the sampling sites (spatial) and months (temporal) to group the similar ones in terms of water quality with a view to reduce their number; and to develop a suitable SVR model for predicting the biochemical oxygen demand (BOD) of water using a set of variables. The spatial and temporal SVC models rendered grouping of 10 monitoring sites and 12 sampling months into the clusters of 3 each with misclassification rates of 12.39% and 17.61% in training, 17.70% and 26.38% in validation, and 14.86% and 31.41% in test sets, respectively. The SVR model predicted water BOD values in training, validation, and test sets with reasonably high correlation (0.952, 0.909, and 0.907) with the measured values, and low root mean squared errors of 1.53, 1.44, and 1.32, respectively. The values of the performance criteria parameters suggested for the adequacy of the constructed models and their good predictive capabilities. The SVC model achieved a data reduction of 92.5% for redesigning the future monitoring program and the SVR model provided a tool for the prediction of the water BOD using set of a few measurable variables. The performance of the nonlinear models (SVM, KDA, KPLS) was comparable and these performed relatively better than the corresponding linear methods (DA, PLS) of classification and regression modeling." }, { "pmid": "22520559", "title": "A review of wearable sensors and systems with application in rehabilitation.", "abstract": "The aim of this review paper is to summarize recent developments in the field of wearable sensors and systems that are relevant to the field of rehabilitation. The growing body of work focused on the application of wearable technology to monitor older adults and subjects with chronic conditions in the home and community settings justifies the emphasis of this review paper on summarizing clinical applications of wearable technology currently undergoing assessment rather than describing the development of new wearable sensors and systems. A short description of key enabling technologies (i.e. sensor technology, communication technology, and data analysis techniques) that have allowed researchers to implement wearable systems is followed by a detailed description of major areas of application of wearable technology. Applications described in this review paper include those that focus on health and wellness, safety, home rehabilitation, assessment of treatment efficacy, and early detection of disorders. The integration of wearable and ambient sensors is discussed in the context of achieving home monitoring of older adults and subjects with chronic conditions. Future work required to advance the field toward clinical deployment of wearable sensors and systems is discussed." }, { "pmid": "16212968", "title": "Classification of gait patterns in the time-frequency domain.", "abstract": "This paper describes the classification of gait patterns among descending stairs, ascending stairs and level walking activities using accelerometers arranged in antero-posterior and vertical direction on the shoulder of a garment. 
Gait patterns in continuous accelerometer records were classified in two steps. In the first step, direct spatial correlation of discrete dyadic wavelet coefficients was applied to separate the segments of gait patterns in the continuous accelerometer record. Compared to the reference system, averaged absolute error 0.387 s for ascending stairs and 0.404 s for descending stairs were achieved. The overall sensitivity and specificity of ascending stairs were 98.79% and 99.52%, and those of descending stairs were 97.35% and 99.62%. In the second step, powers of wavelet coefficients of 2 s time duration from separated segments of vertical and antero-posterior acceleration signals were used as features in classification. Our results proved a reliable technique of measuring gait patterns during physical activity." }, { "pmid": "22438763", "title": "Gait analysis using wearable sensors.", "abstract": "Gait analysis using wearable sensors is an inexpensive, convenient, and efficient manner of providing useful information for multiple health-related applications. As a clinical tool applied in the rehabilitation and diagnosis of medical conditions and sport activities, gait analysis using wearable sensors shows great prospects. The current paper reviews available wearable sensors and ambulatory gait analysis methods based on the various wearable sensors. After an introduction of the gait phases, the principles and features of wearable sensors used in gait analysis are provided. The gait analysis methods based on wearable sensors is divided into gait kinematics, gait kinetics, and electromyography. Studies on the current methods are reviewed, and applications in sports, rehabilitation, and clinical diagnosis are summarized separately. With the development of sensor technology and the analysis method, gait analysis using wearable sensors is expected to play an increasingly important role in clinical applications." }, { "pmid": "26017442", "title": "Deep learning.", "abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech." }, { "pmid": "24579166", "title": "A deep learning architecture for image representation, visual interpretability and automated basal-cell carcinoma cancer detection.", "abstract": "This paper presents and evaluates a deep learning architecture for automated basal cell carcinoma cancer detection that integrates (1) image representation learning, (2) image classification and (3) result interpretability. 
A novel characteristic of this approach is that it extends the deep learning architecture to also include an interpretable layer that highlights the visual patterns that contribute to discriminate between cancerous and normal tissues patterns, working akin to a digital staining which spotlights image regions important for diagnostic decisions. Experimental evaluation was performed on set of 1,417 images from 308 regions of interest of skin histopathology slides, where the presence of absence of basal cell carcinoma needs to be determined. Different image representation strategies, including bag of features (BOF), canonical (discrete cosine transform (DCT) and Haar-based wavelet transform (Haar)) and proposed learned-from-data representations, were evaluated for comparison. Experimental results show that the representation learned from a large histology image data set has the best overall performance (89.4% in F-measure and 91.4% in balanced accuracy), which represents an improvement of around 7% over canonical representations and 3% over the best equivalent BOF representation." }, { "pmid": "22498149", "title": "Physiotherapist agreement when visually rating movement quality during lower extremity functional screening tests.", "abstract": "OBJECTIVES\nTo investigate physiotherapist agreement in rating movement quality during lower extremity functional tests using two visual rating methods and physiotherapists with differing clinical experience.\n\n\nDESIGN\nClinical measurement.\n\n\nPARTICIPANTS\nSix healthy individuals were rated by 44 physiotherapists. These raters were in three groups (inexperienced, novice, experienced).\n\n\nMAIN MEASURES\nVideo recordings of all six individuals performing four lower extremity functional tests were visually rated (dichotomous or ordinal scale) using two rating methods (overall or segment) on two occasions separated by 3-4 weeks. Intra and inter-rater agreement for physiotherapists was determined using overall percentage agreement (OPA) and the first order agreement coefficient (AC1).\n\n\nRESULTS\nIntra-rater agreement for overall and segment methods ranged from slight to almost perfect (OPA: 29-96%, AC1: 0.01 to 0.96). AC1 agreement was better in the experienced group (84-99% likelihood) and for dichotomous rating (97-100% likelihood). Inter-rater agreement ranged from fair to good (OPA: 45-79%; AC1: 0.22-0.71). AC1 agreement was not influenced by clinical experience but was again better using dichotomous rating.\n\n\nCONCLUSIONS\nPhysiotherapists' visual rating of movement quality during lower extremity functional tests resulted in slight to almost perfect intra-rater agreement and fair to good inter-rater agreement. Agreement improved with increased level of clinical experience and use of dichotomous rating." } ]
Frontiers in Neurorobotics
28883790
PMC5573722
10.3389/fnbot.2017.00043
Impedance Control for Robotic Rehabilitation: A Robust Markovian Approach
Human-robot interaction plays an important role in rehabilitation robotics, and impedance control has been used to regulate the interaction forces between the robot actuator and human limbs. Series elastic actuators (SEAs) have been an efficient solution in the design of this kind of robotic application. Standard implementations of impedance control with SEAs require an internal force control loop to guarantee the desired impedance output. However, nonlinearities and uncertainties hamper such a guarantee of an accurate force level in this human-robot interaction. This paper addresses the dependence of the impedance control performance on the force control and proposes a control approach that improves the force control robustness. A unified model of the human-robot system was developed that represents the ankle impedance by second-order dynamics subject to uncertainties in the stiffness, damping, and inertia parameters. Fixed, resistive, and passive operation modes of the robotic system were defined, and the transition probabilities among the modes were modeled through a Markov chain. A robust regulator for Markovian jump linear systems was used in the design of the force control. Experimental results show that the approach improves the impedance control performance. For comparison purposes, a standard H∞ force controller based on the fixed operation mode was also designed. The Markovian control approach outperformed the H∞ control when all operation modes were taken into account.
5.1. Related work
The impedance control configuration used is based on Hogan (1985) and is aimed at regulating the dynamical behavior at the interaction port through variables that do not depend on the environment. The actuator together with the controller is modeled as an impedance, Zr, with velocity (angular) inputs and force (torque) outputs. The environment is considered an admittance, Ye, at the interaction port. Colgate and Hogan (1988, 1989) presented sufficient conditions for determining the stability of two coupled systems and explained how two physically coupled systems with passive port functions Zr and Ye can guarantee stability. These concepts have been useful for the implementation of interaction controls for almost three decades. The stability of the two coupled systems is given by the eigenvalues of Zr and Ye, and the performance is evaluated through the impedance Zr.

Buerger and Hogan (2007) described a methodology in which an interaction control is designed for a robot module used for rehabilitation purposes. They considered an environment with restricted uncertain characteristics; therefore, the admittance is rewritten as Ye(s) = Yn(s) + W(s)Δ(s). The authors also used a second-order dynamics to model the stiffness, damping, and inertia of the human parameters. Complementary stability for interacting systems was defined, where stability is determined by an environment subject to uncertainties. Therefore, a coupled stability problem is considered a robust stability problem.

Regarding the human modeling, the dynamic properties of the lower limbs and muscular activities vary considerably among subjects. This is relevant since SRPAR has been designed for users who suffer from diseases that affect the human motor control system, e.g., stroke and other conditions that cause hemiplegia. Typically, such diseases change stiffness and damping in the ankle and knee joints, producing spasticity or hypertonia (Lin et al., 2006; Chow et al., 2012). Therefore, the development of a control strategy that guarantees a safe interaction between patient and platform, particularly in view of the uncertainties related to the human being, is fundamental.

Li et al. (2017) and Pan et al. (2017) proposed adaptive control schemes for SEA-driven robots. They considered two operation modes in the adaptation process, namely robot-in-charge and human-in-charge, which are closely related to the passive and resistive operation modes, respectively, proposed in this paper. However, their control adaptation is based on changes in the desired position input of the SEA controller and on the estimation of coordinate accelerations through nonlinear filtering.

In human-robot interaction control systems, the efficiency of the force actuator operation deserves special attention. Although SEAs are characterized by a low output impedance, an important requirement for improving such efficiency is the achievement of a precise output torque, proportional to the desired input. Pratt (2002), Au et al. (2006), Kong et al. (2009), Mehling and O'Malley (2014), and dos Santos et al. (2015) developed force controllers for ankle actuators using SEAs. In this paper, we proposed a force control methodology that can deal with system uncertainties and guarantee robust mean square stability. Similar performance was obtained across the different tests performed. Accuracies of 98.14% for the resistive mode and 92.47% for the passive mode were obtained in the pure stiffness configuration.
In the stiffness-damping configuration, with Kv = 15 and Bv = 5, the accuracy obtained in the resistive case was 97.47% for stiffness and 97.2% for damping, and in the passive case it was 93.94% for stiffness and 94% for damping.

In contrast, using a fixed-gain control approach based on H∞ synthesis, the performance was not consistent across operation modes. We showed that this strategy can guarantee coupled stability; nevertheless, the force control performance decreases when the system is in the passive operation mode. This is reflected in the impedance control accuracy for the pure stiffness configuration, which falls from 90.4% in the resistive mode to 46.67% in the passive mode.
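For readers unfamiliar with the formulation alluded to above, the following LaTeX snippet sketches the standard way the second-order ankle dynamics and the uncertain environment admittance are usually written. The notation is generic, and the norm bound on Δ is a typical assumption rather than a value taken from the paper.

```latex
% Hedged reconstruction of the standard formulation referenced above;
% notation is generic, not copied from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

Second-order ankle dynamics with uncertain inertia, damping, and stiffness:
\begin{equation}
  \tau_h(t) = I_h\,\ddot{\theta}(t) + B_h\,\dot{\theta}(t) + K_h\,\theta(t).
\end{equation}

Seen from the interaction port, the limb behaves as an admittance
(torque in, angular velocity out):
\begin{equation}
  Y_e(s) = \frac{s\,\Theta(s)}{\mathcal{T}_h(s)}
         = \frac{s}{I_h s^{2} + B_h s + K_h},
\end{equation}
which, following the uncertain-environment description above, is split into a
nominal part plus a weighted perturbation,
\begin{equation}
  Y_e(s) = Y_n(s) + W(s)\,\Delta(s),
  \qquad \lVert \Delta \rVert_{\infty} \le 1 \ \text{(typical assumption)},
\end{equation}
so that coupled stability of the robot impedance $Z_r$ with $Y_e$ becomes a
robust stability problem.

\end{document}
```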
[ "12048669", "24396811", "22325644", "1088404", "19004697", "27766187", "16571398", "12098155", "21708707" ]
[ { "pmid": "12048669", "title": "Risks of falls in subjects with multiple sclerosis.", "abstract": "OBJECTIVES\nTo quantify fall risk among patients with multiple sclerosis (MS) and to report the importance of variables associated with falls.\n\n\nDESIGN\nRetrospective case-control study design with a 2-group sample of convenience.\n\n\nSETTING\nA hospital and home settings in Italy.\n\n\nPARTICIPANTS\nA convenience sample of 50 people with MS divided into 2 groups according to their reports of falls.\n\n\nINTERVENTIONS\nNot applicable.\n\n\nMAIN OUTCOME MEASURE\nSubjects were assessed with questionnaires for cognitive ability and were measured on their ability to maintain balance, to walk, and to perform daily life activities. Data regarding patients' strength, spasticity, and transfer skills impairment were also collected.\n\n\nRESULTS\nNo statistical differences were found between groups of fallers and nonfallers using variables pertaining to years after onset, age, gender, and Mini-Mental State Examination. Near statistically significant differences were found in activities of daily living and transfer skills (P<.05). Three variables were associated with fall status: balance, ability to walk, and use of a cane (P<.01). Those variables were analyzed using a logistic regression. The model was able to predict fallers with a sensitivity of 90.9% and a specificity of 58.8%.\n\n\nCONCLUSIONS\nVariables pertaining to balance skills, gait impairment, and use of a cane differed between fallers and nonfallers groups and the incidence of those variables can be used as a predictive model to quantify fall risk in patients suffering from MS. These findings emphasize the multifactorial nature of falls in this patient population. Assessment of different aspects of motor impairment and the accurate determination of factors contributing to falls are necessary for individual patient management and therapy and for the development of a prevention program for falls." }, { "pmid": "24396811", "title": "Robot-assisted Therapy in Stroke Rehabilitation.", "abstract": "Research into rehabilitation robotics has grown rapidly and the number of therapeutic rehabilitation robots has expanded dramatically during the last two decades. Robotic rehabilitation therapy can deliver high-dosage and high-intensity training, making it useful for patients with motor disorders caused by stroke or spinal cord disease. Robotic devices used for motor rehabilitation include end-effector and exoskeleton types; herein, we review the clinical use of both types. One application of robot-assisted therapy is improvement of gait function in patients with stroke. Both end-effector and the exoskeleton devices have proven to be effective complements to conventional physiotherapy in patients with subacute stroke, but there is no clear evidence that robotic gait training is superior to conventional physiotherapy in patients with chronic stroke or when delivered alone. In another application, upper limb motor function training in patients recovering from stroke, robot-assisted therapy was comparable or superior to conventional therapy in patients with subacute stroke. With end-effector devices, the intensity of therapy was the most important determinant of upper limb motor recovery. However, there is insufficient evidence for the use of exoskeleton devices for upper limb motor function in patients with stroke. 
For rehabilitation of hand motor function, either end-effector and exoskeleton devices showed similar or additive effects relative to conventional therapy in patients with chronic stroke. The present evidence supports the use of robot-assisted therapy for improving motor function in stroke patients as an additional therapeutic intervention in combination with the conventional rehabilitation therapies. Nevertheless, there will be substantial opportunities for technical development in near future." }, { "pmid": "22325644", "title": "Coactivation of ankle muscles during stance phase of gait in patients with lower limb hypertonia after acquired brain injury.", "abstract": "OBJECTIVE\nExamine (1) coactivation between tibialis anterior (TA) and medial gastrocnemius (MG) muscles during stance phase of gait in patients with moderate-to-severe resting hypertonia after stroke or traumatic brain injury (TBI) and (2) the relationship between coactivation and stretch velocity-dependent increase in MG activity.\n\n\nMETHODS\nGait and surface EMG were recorded from patients with stroke or TBI (11 each) and corresponding healthy controls (n=11) to determine the magnitude and duration of TA-MG coactivation. The frequency and gain of positive (>0) and significant positive (p<0.05) EMG-lengthening velocity (EMG-LV) slope in MG were related to coactivation parameters.\n\n\nRESULTS\nThe magnitude of coactivation was increased on the more-affected (MA) side, whereas the duration was prolonged on the less-affected (LA) side of both stroke and TBI patients. The difference reached significance during the initial and late double support. The magnitude of coactivation positively correlated with the gain of significant positive EMG-LV slope in TBI patients.\n\n\nCONCLUSIONS\nIncreased coactivation between TA and MG during initial and late double support is a unique feature of gait in stroke and TBI patients with muscle hypertonia.\n\n\nSIGNIFICANCE\nIncreased coactivation may represent an adaptation to compensate for impaired stability during step transition after stroke and TBI." }, { "pmid": "1088404", "title": "Experience from a multicentre stroke register: a preliminary report.", "abstract": "In collaboration with 15 centres in 10 countries of Africa, Asia, and Europe, WHO started a pilot study of a community-based stroke register, with standardized methods. Preliminary data were obtained on 6395 new cases of stroke in defined study communities, from May 1971 to September 1974. Information on incidence rates, clinical profiles, diagnosis, management, and course and prognosis for these patients is given." }, { "pmid": "27766187", "title": "Summary of Human Ankle Mechanical Impedance During Walking.", "abstract": "The human ankle joint plays a critical role during walking and understanding the biomechanical factors that govern ankle behavior and provides fundamental insight into normal and pathologically altered gait. Previous researchers have comprehensively studied ankle joint kinetics and kinematics during many biomechanical tasks, including locomotion; however, only recently have researchers been able to quantify how the mechanical impedance of the ankle varies during walking. The mechanical impedance describes the dynamic relationship between the joint position and the joint torque during perturbation, and is often represented in terms of stiffness, damping, and inertia. 
The purpose of this short communication is to unify the results of the first two studies measuring ankle mechanical impedance in the sagittal plane during walking, where each study investigated differing regions of the gait cycle. Rouse et al. measured ankle impedance from late loading response to terminal stance, where Lee et al. quantified ankle impedance from pre-swing to early loading response. While stiffness component of impedance increases significantly as the stance phase of walking progressed, the change in damping during the gait cycle is much less than the changes observed in stiffness. In addition, both stiffness and damping remained low during the swing phase of walking. Future work will focus on quantifying impedance during the \"push off\" region of stance phase, as well as measurement of these properties in the coronal plane." }, { "pmid": "16571398", "title": "The relation between ankle impairments and gait velocity and symmetry in people with stroke.", "abstract": "OBJECTIVE\nTo identify the most important factor among the ankle impairments on gait velocity and symmetry in stroke patients.\n\n\nDESIGN\nCross-sectional, descriptive analysis of convenience sample.\n\n\nSETTING\nPatients from outpatient rehabilitation and neurovascular neurology departments in medical centers and municipal hospitals in Taiwan.\n\n\nPARTICIPANTS\nSixty-eight subjects with hemiparesis poststroke with the ability to walk independently.\n\n\nINTERVENTIONS\nNot applicable.\n\n\nMAIN OUTCOME MEASURES\nMaximal isometric strength of plantarflexors and dorsiflexors were examined by a handheld dynamometer. Spasticity index, slope magnitudes between electromyographic activities, and muscle lengthening velocity of gastrocnemius during lengthening period of stance phases were measured to represent the dynamic spasticity. Passive stiffness of pantarflexors was indicated by degrees of dorsiflexion range that were less than normative values. Position error was measured by the degree of proprioceptive deficits of ankle joint by evaluating the joint position sense. Gait velocity, symmetry, and other gait parameters were measured by the GAITRite system.\n\n\nRESULTS\nRegression analyses revealed that the dorsiflexors strength was the most important factor for gait velocity and temporal symmetry (R(2)=.30 for gait velocity, P<.001; R(2)=.36 for temporal asymmetry, P<.001). Dynamic spasticity was the most important determinant for gait spatial symmetry (R(2)=.53, P<.001).\n\n\nCONCLUSIONS\nGait velocity and temporal asymmetry are mainly affected by the dorsiflexors strength, whereas dynamic spasticity of plantarflexors influenced the degree of spatial gait asymmetry in our patients who were able to walk outdoors. Treatment aiming to improve different aspects of gait performance should emphasize on different ankle impairments." 
}, { "pmid": "12098155", "title": "Robot-assisted movement training compared with conventional therapy techniques for the rehabilitation of upper-limb motor function after stroke.", "abstract": "OBJECTIVE\nTo compare the effects of robot-assisted movement training with conventional techniques for the rehabilitation of upper-limb motor function after stroke.\n\n\nDESIGN\nRandomized controlled trial, 6-month follow-up.\n\n\nSETTING\nA Department of Veterans Affairs rehabilitation research and development center.\n\n\nPARTICIPANTS\nConsecutive sample of 27 subjects with chronic hemiparesis (>6mo after cerebrovascular accident) randomly allocated to group.\n\n\nINTERVENTIONS\nAll subjects received twenty-four 1-hour sessions over 2 months. Subjects in the robot group practiced shoulder and elbow movements while assisted by a robot manipulator. Subjects in the control group received neurodevelopmental therapy (targeting proximal upper limb function) and 5 minutes of exposure to the robot in each session.\n\n\nMAIN OUTCOME MEASURES\nFugl-Meyer assessment of motor impairment, FIMtrade mark instrument, and biomechanic measures of strength and reaching kinematics. Clinical evaluations were performed by a therapist blinded to group assignments.\n\n\nRESULTS\nCompared with the control group, the robot group had larger improvements in the proximal movement portion of the Fugl-Meyer test after 1 month of treatment (P<.05) and also after 2 months of treatment (P<.05). The robot group had larger gains in strength (P<.02) and larger increases in reach extent (P<.01) after 2 months of treatment. At the 6-month follow-up, the groups no longer differed in terms of the Fugl-Meyer test (P>.30); however, the robot group had larger improvements in the FIM (P<.04).\n\n\nCONCLUSIONS\nCompared with conventional treatment, robot-assisted movements had advantages in terms of clinical and biomechanical measures. Further research into the use of robotic manipulation for motor rehabilitation is justified." }, { "pmid": "21708707", "title": "Low impedance walking robots.", "abstract": "For both historical and technological reasons, most robots, including those meant to mimic animals or operate in natural environments,3 use actuators and control systems that have high (stiff) mechanical impedance. By contrast, most animals exhibit low (soft) impedance. While a robot's stiff joints may be programmed to closely imitate the recorded motion of an animal's soft joints, any unexpected position disturbances will generate reactive forces and torques much higher for the robot than for the animal. The dual of this is also true: while an animal will react to a force disturbance by significantly yielding position, a typical robot will greatly resist.These differences cause three deleterious effects for high impedance robots. First, the higher forces may cause damage to the robot or to its environment (which is particularly important if that environment includes people). Second, the robot must acquire very precise information about its position relative to the environment so as to minimize its velocity upon impact. 
Third, many of the self-stabilizing effects of natural dynamics are \"shorted out\"4 by the robot's high impedance, so that stabilization requires more effort from the control system.Over the past 5 yr, our laboratory has designed a series of walking robots based on \"Series-Elastic Actuators\" and \"Virtual Model Control.\" Using these two techniques, we have been able to build low-impedance walking robots that are both safe and robust, that operate blindly without any model of upcoming terrain, and that add minimal control effort in parallel to their self-stabilizing passive dynamics. We have discovered that it is possible to achieve surprisingly effective ambulation from rather simple mechanisms and control systems. After describing the historical and technological motivations for our approach, this paper gives an overview of our methods and shows some of the results we have obtained." } ]
Frontiers in Neurorobotics
28900394
PMC5581837
10.3389/fnbot.2017.00046
Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision
We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
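A minimal sketch of the entropy criterion described in the abstract above, assuming an 8-bit grayscale frame held in a NumPy array; the 4.5-bit threshold and the function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (in bits) of a grayscale image's intensity histogram."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def classify_view(gray, threshold=4.5):
    """Low entropy -> likely a single object (candidate landmark);
    high entropy -> likely several objects (treat as a potential obstacle)."""
    h = image_entropy(gray)
    return ("candidate_landmark" if h < threshold else "potential_obstacle"), h
```

In the proposed scheme, a low-entropy detection would additionally be checked against the landmarks stored in the visual topological map before being accepted; otherwise the detected object is treated as an obstacle and an avoidance maneuver is started.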
Related work
Our proposal involves the use of visual graphs, in which each node stores images associated with landmarks, and the arcs represent the paths that the UAV must follow to reach the next node. Therefore, these graphs can be used to generate the best path for an UAV to reach a specific destination, as has been suggested in other works. Practically every traditional method used in ground robots for trajectory planning has been considered for aerial ones (Goerzen et al., 2010). Some of those methods use graph-like models and generally they use algorithms such as improved versions of the classic A* (MacAllister et al., 2013; Zhan et al., 2014) and Rapidly-exploring Random Tree Star (RRT) (Noreen et al., 2016) or reinforcement learning (RL) (Sharma and Taylor, 2012) for planning. RL is even used by methods that consider the path-planning task in cooperative multi-vehicle systems (Wang and Phillips, 2014), in which coordinated maneuvers are required (Lopez-Guede and Graña, 2015). On the other hand, the obstacle avoidance task is also addressed here, which is particularly important in the UAV domain because a collision in flight surely implies danger and the partial or total destruction of the vehicle. Thus, the Collision Avoidance System (CAS) (Albaker and Rahim, 2009; Pham et al., 2015) is a fundamental part of control systems. Its goal is to allow UAVs to operate safely within the non-segregated civil and military airspace on a routine basis. Basically, the CAS must detect and predict traffic conflicts in order to perform an avoidance maneuver to avoid a possible collision. Specific approaches are usually defined for outdoor or indoor vehicles. Predefined collision avoidance based on sets of rules and protocols is mainly used outdoors (Bilimoria et al., 1996) although classic methods such as artificial potential fields are also employed (Gudmundsson, 2016). These and other conventional well-known techniques in wheeled and legged robots are also considered for being used in UAVs (Bhavesh, 2015). Since most of the current UAVs have monocular onboard cameras as their main source of information, several computer vision techniques are used. A combination of the Canny edge detector and the Hough transform is used to identify corridors and staircases for trajectory planning (Bills et al., 2011). Also, feature point detectors such as SURF (Aguilar et al., 2017) and SIFT (Al-Kaff et al., 2017) are used to analyze the images and to determine collision-free trajectories. However, the most usual technique is optic flow (Zufferey and Floreano, 2006; Zufferey et al., 2006; Beyeler et al., 2007; Green and Oh, 2008; Sagar and Visser, 2014; Bhavesh, 2015; Simpson and Sabo, 2016). Sometimes optic flow is combined with artificial neural networks (Oh et al., 2004) or other computer vision techniques (Soundararaj et al., 2009). Some of those techniques (de Croon, 2012) are based on the analysis of image textures for estimating the possible number of objects that are captured. Finally, as in other scopes of research, deep learning techniques are also being used to explore alternatives to the traditional approaches (Yang et al., 2017).
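As a rough illustration of the visual-graph idea opening this related-work section, the sketch below encodes a visual topological map as a weighted graph whose nodes are landmark identifiers (each of which would, in practice, also store the landmark's reference images) and runs Dijkstra's algorithm to obtain a route between two landmarks. All node names and costs are invented for illustration and are not taken from the paper.

```python
import heapq

# Hypothetical visual topological map: node -> {neighbor: traversal cost}.
VISUAL_MAP = {
    "entrance": {"corridor_A": 1.0},
    "corridor_A": {"entrance": 1.0, "lab_door": 2.5, "stairs": 1.5},
    "lab_door": {"corridor_A": 2.5, "target_desk": 1.0},
    "stairs": {"corridor_A": 1.5},
    "target_desk": {"lab_door": 1.0},
}

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: returns (total cost, ordered list of landmarks to visit)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, w in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + w, neighbor, path + [neighbor]))
    return float("inf"), []

print(shortest_route(VISUAL_MAP, "entrance", "target_desk"))
# -> (4.5, ['entrance', 'corridor_A', 'lab_door', 'target_desk'])
```

The returned sequence of landmarks is the kind of node-to-node itinerary that, in the paper's setting, the vision system would then follow by detecting each landmark in turn.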
[ "28481277", "10607637" ]
[ { "pmid": "28481277", "title": "Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs.", "abstract": "One of the most challenging problems in the domain of autonomous aerial vehicles is the designing of a robust real-time obstacle detection and avoidance system. This problem is complex, especially for the micro and small aerial vehicles, that is due to the Size, Weight and Power (SWaP) constraints. Therefore, using lightweight sensors (i.e., Digital camera) can be the best choice comparing with other sensors; such as laser or radar.For real-time applications, different works are based on stereo cameras in order to obtain a 3D model of the obstacles, or to estimate their depth. Instead, in this paper, a method that mimics the human behavior of detecting the collision state of the approaching obstacles using monocular camera is proposed. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During the Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles, then extracts the obstacles that have the probability of getting close toward the UAV. Secondly, by comparing the area ratio of the obstacle and the position of the UAV, the method decides if the detected obstacle may cause a collision. Finally, by estimating the obstacle 2D position in the image and combining with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated by performing real indoor and outdoor flights, and the obtained results show the accuracy of the proposed algorithm compared with other related works." }, { "pmid": "10607637", "title": "Internal models for motor control and trajectory planning.", "abstract": "A number of internal model concepts are now widespread in neuroscience and cognitive science. These concepts are supported by behavioral, neurophysiological, and imaging data; furthermore, these models have had their structures and functions revealed by such data. In particular, a specific theory on inverse dynamics model learning is directly supported by unit recordings from cerebellar Purkinje cells. Multiple paired forward inverse models describing how diverse objects and environments can be controlled and learned separately have recently been proposed. The 'minimum variance model' is another major recent advance in the computational theory of motor control. This model integrates two furiously disputed approaches on trajectory planning, strongly suggesting that both kinematic and dynamic internal models are utilized in movement planning and control." } ]
Research Synthesis Methods
28677322
PMC5589498
10.1002/jrsm.1252
An exploration of crowdsourcing citation screening for systematic reviews
Systematic reviews are increasingly used to inform health care decisions, but are expensive to produce. We explore the use of crowdsourcing (distributing tasks to untrained workers via the web) to reduce the cost of screening citations. We used Amazon Mechanical Turk as our platform and 4 previously conducted systematic reviews as examples. For each citation, workers answered 4 or 5 questions that were equivalent to the eligibility criteria. We aggregated responses from multiple workers into an overall decision to include or exclude the citation using 1 of 9 algorithms and compared the performance of these algorithms to the corresponding decisions of trained experts. The most inclusive algorithm (designating a citation as relevant if any worker did) identified 95% to 99% of the citations that were ultimately included in the reviews while excluding 68% to 82% of irrelevant citations. Other algorithms increased the fraction of irrelevant articles excluded at some cost to the inclusion of relevant studies. Crowdworkers completed screening in 4 to 17 days, costing $460 to $2220, a cost reduction of up to 88% compared to trained experts. Crowdsourcing may represent a useful approach to reducing the cost of identifying literature for systematic reviews.
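As a concrete illustration of how worker responses can be aggregated into a single screening decision, the sketch below implements two simple rules in the spirit of those compared in the abstract: the most inclusive rule (include a citation if any worker judged it relevant) and a majority-vote rule. This is a minimal sketch, not the authors' exact nine algorithms; the data layout is an assumption for illustration.

```python
from collections import Counter

def aggregate_any(worker_votes):
    """Most inclusive rule: include the citation if any worker voted 'include'."""
    return "include" if "include" in worker_votes else "exclude"

def aggregate_majority(worker_votes):
    """Majority rule: include only if more than half the workers voted 'include'."""
    counts = Counter(worker_votes)
    return "include" if counts["include"] > len(worker_votes) / 2 else "exclude"

# Hypothetical responses from five crowdworkers for one citation.
votes = ["exclude", "include", "exclude", "exclude", "include"]
print(aggregate_any(votes))       # 'include'  (favors sensitivity/recall)
print(aggregate_majority(votes))  # 'exclude'  (favors specificity)
```

The trade-off reported in the abstract, higher recall of relevant studies versus more irrelevant citations passed on to experts, corresponds directly to the choice between such aggregation rules.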
2.1 Related work
Over the past decade, crowdsourcing has become an established methodology across a diverse set of domains.6 Indeed, researchers have demonstrated the promise of harnessing the “wisdom of the crowd” with respect to everything from conducting user studies7 to aiding disaster relief.8, 9 Perhaps most relevant to the task of citation screening for systematic reviews, crowdsourcing has also been used extensively to collect relevance judgements to build and evaluate information retrieval (IR) systems.10 In such efforts, workers are asked to determine how relevant retrieved documents are to a given query. In the context of IR system evaluation, crowdsourcing has now been established as a reliable, low cost means of acquiring “gold standard” relevance judgements.11 Using crowdsourcing to acquire assessments of the relevance of articles with respect to systematic reviews is thus a natural extension of this prior work. However, the notion of “relevance” is stricter here than in general IR tasks, because of a well‐defined set of inclusion criteria (codified in the specific questions). A related line of work concerns “citizen science” initiatives.12 These enlist interested remote, distributed individuals, usually volunteers, to contribute to a problem by completing small tasks. A prominent example of this is the Galaxy Zoo project,13 in which crowdworkers were tasked with classifying galaxies by their morphological features. This project has been immensely successful, in turn demonstrating that having laypeople volunteer to perform scientific tasks is an efficient, scalable approach. While we have used paid workers in the present work, we believe that in light of the nature of systematic reviews, recruiting volunteer workers (citizen scientists) may represent a promising future direction. Indeed, members of the Cochrane collaboration have investigated leveraging volunteers to identify randomized controlled trials.14 This project has been remarkable in its success; over 200 000 articles have now been labeled as being randomized controlled trials (or not). Noel‐Stor et al. of the Cochrane collaboration have also explored harnessing distributed workers to screen a small set of 250 citations for a diagnostic test accuracy review (Noel‐Stor, 2013). In this case, however, 92% of the workers had some knowledge of the subject matter, which contrasts with the use of laypeople in our project. The above work has demonstrated that crowdsourcing is a useful approach generally, and for some large‐scale scientific tasks specifically. However, as far as we are aware, ours is the first study to investigate crowdsourcing the citation screening step of specific systematic reviews to laypersons.
[ "10517715", "25588314", "23305843", "24236626", "19586682", "19755348" ]
[ { "pmid": "25588314", "title": "Using text mining for study identification in systematic reviews: a systematic review of current approaches.", "abstract": "BACKGROUND\nThe large and growing number of published studies, and their increasing rate of publication, makes the task of identifying relevant studies in an unbiased way for inclusion in systematic reviews both complex and time consuming. Text mining has been offered as a potential solution: through automating some of the screening process, reviewer time can be saved. The evidence base around the use of text mining for screening has not yet been pulled together systematically; this systematic review fills that research gap. Focusing mainly on non-technical issues, the review aims to increase awareness of the potential of these technologies and promote further collaborative research between the computer science and systematic review communities.\n\n\nMETHODS\nFive research questions led our review: what is the state of the evidence base; how has workload reduction been evaluated; what are the purposes of semi-automation and how effective are they; how have key contextual problems of applying text mining to the systematic review field been addressed; and what challenges to implementation have emerged? We answered these questions using standard systematic review methods: systematic and exhaustive searching, quality-assured data extraction and a narrative synthesis to synthesise findings.\n\n\nRESULTS\nThe evidence base is active and diverse; there is almost no replication between studies or collaboration between research teams and, whilst it is difficult to establish any overall conclusions about best approaches, it is clear that efficiencies and reductions in workload are potentially achievable. On the whole, most suggested that a saving in workload of between 30% and 70% might be possible, though sometimes the saving in workload is accompanied by the loss of 5% of relevant studies (i.e. a 95% recall).\n\n\nCONCLUSIONS\nUsing text mining to prioritise the order in which items are screened should be considered safe and ready for use in 'live' reviews. The use of text mining as a 'second screener' may also be used cautiously. The use of text mining to eliminate studies automatically should be considered promising, but not yet fully proven. In highly technical/clinical areas, it may be used with a high degree of confidence; but more developmental and evaluative work is needed in other disciplines." }, { "pmid": "24236626", "title": "Modernizing the systematic review process to inform comparative effectiveness: tools and methods.", "abstract": "Systematic reviews are being increasingly used to inform all levels of healthcare, from bedside decisions to policy-making. Since they are designed to minimize bias and subjectivity, they are a preferred option to assess the comparative effectiveness and safety of healthcare interventions. However, producing systematic reviews and keeping them up-to-date is becoming increasingly onerous for three reasons. First, the body of biomedical literature is expanding exponentially with no indication of slowing down. Second, as systematic reviews gain wide acceptance, they are also being used to address more complex questions (e.g., evaluating the comparative effectiveness of many interventions together rather than focusing only on pairs of interventions). Third, the standards for performing systematic reviews have become substantially more rigorous over time. 
To address these challenges, we must carefully prioritize the questions that should be addressed by systematic reviews and optimize the processes of research synthesis. In addition to reducing the workload involved in planning and conducting systematic reviews, we also need to make efforts to increase the transparency, reliability and validity of the review process; these aims can be grouped under the umbrella of 'modernization' of the systematic review process." }, { "pmid": "19586682", "title": "A new dawn for citizen science.", "abstract": "A citizen scientist is a volunteer who collects and/or processes data as part of a scientific enquiry. Projects that involve citizen scientists are burgeoning, particularly in ecology and the environmental sciences, although the roots of citizen science go back to the very beginnings of modern science itself." }, { "pmid": "19755348", "title": "Systematic review: charged-particle radiation therapy for cancer.", "abstract": "BACKGROUND\nRadiation therapy with charged particles can potentially deliver maximum doses while minimizing irradiation of surrounding tissues, and it may be more effective or less harmful than other forms of radiation therapy.\n\n\nPURPOSE\nTo review evidence about the benefits and harms of charged-particle radiation therapy for patients with cancer.\n\n\nDATA SOURCES\nMEDLINE (inception to 11 July 2009) was searched for publications in English, German, French, Italian, and Japanese. Web sites of manufacturers, treatment centers, and professional organizations were searched for relevant information.\n\n\nSTUDY SELECTION\nFour reviewers identified studies of any design that described clinical outcomes or adverse events in 10 or more patients with cancer treated with charged-particle radiation therapy.\n\n\nDATA EXTRACTION\nThe 4 reviewers extracted study, patient, and treatment characteristics; clinical outcomes; and adverse events for nonoverlapping sets of articles. A fifth reviewer verified data on comparative studies.\n\n\nDATA SYNTHESIS\nCurrently, 7 centers in the United States have facilities for particle (proton)-beam irradiation, and at least 4 are under construction, each costing between $100 and $225 million. In 243 eligible articles, charged-particle radiation therapy was used alone or in combination with other interventions for common (for example, lung, prostate, or breast) or uncommon (for example, skull-base tumors or uveal melanomas) types of cancer. Of 243 articles, 185 were single-group retrospective studies. Eight randomized and 9 nonrandomized clinical trials compared treatments with or without charged particles. No comparative study reported statistically significant or important differences in overall or cancer-specific survival or in total serious adverse events.\n\n\nLIMITATION\nFew studies directly compared treatments with or without particle irradiation.\n\n\nCONCLUSION\nEvidence on the comparative effectiveness and safety of charged-particle radiation therapy in cancer is needed to assess the benefits, risks, and costs of treatment alternatives." } ]
Scientific Reports
28924196
PMC5603591
10.1038/s41598-017-12141-9
Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
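The joint estimation described in this abstract can be illustrated with a generic iterative-reprojection loop for a single 2D slice: reconstruct using the current shift estimates, reproject the estimate, re-estimate each projection's detector shift against its reprojection, and repeat. The sketch below is schematic only and is closer in spirit to the earlier reprojection schemes surveyed in the related work than to the authors' specific algorithm; it assumes integer detector shifts, ignores angle errors, and uses scikit-image's radon/iradon as stand-in forward and inverse operators.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.transform import radon, iradon

def estimate_shift_1d(measured, modeled):
    """Integer detector shift s such that `measured` is approximately `modeled`
    displaced by s pixels, found by 1D cross-correlation."""
    m = measured - measured.mean()
    r = modeled - modeled.mean()
    corr = np.correlate(m, r, mode="full")
    return int(np.argmax(corr)) - (len(r) - 1)

def align_and_reconstruct(sinogram, theta, n_iter=10):
    """Jointly refine per-angle detector shifts and the object for one slice.
    sinogram: (n_detector, n_angles) array of misaligned projections;
    theta: projection angles in degrees."""
    n_angles = sinogram.shape[1]
    shifts = np.zeros(n_angles)
    recon = None
    for _ in range(n_iter):
        # 1) apply the current shift estimates and reconstruct
        corrected = np.column_stack(
            [nd_shift(sinogram[:, i], -shifts[i], order=1) for i in range(n_angles)]
        )
        recon = iradon(corrected, theta=theta)
        # 2) reproject the current object estimate
        reproj = radon(recon, theta=theta)
        # 3) re-estimate each projection's total shift against its reprojection
        for i in range(n_angles):
            shifts[i] = estimate_shift_1d(sinogram[:, i], reproj[:, i])
    return recon, shifts
```

The convergence behavior compared in the paper hinges on how tightly the alignment update is coupled to the reconstruction update; the loop above shows only the basic alternating structure.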
Related Work
A wide range of approaches for projection alignment are employed in electron tomography14. The most common approach is to use cross-correlation between projections acquired at adjacent rotation angles15–18, or correlation of vertical variations in the mass of the sample19. However, two features in 3D space can have their apparent separation change in projections as a function of rotation angle, leading to ambiguities as to which feature dominates the cross-correlation. These ambiguities exponentiate as the number of features increases, or as the rotation angles between projection images are widened. As a result, while cross-correlation alignment can remove frame-to-frame “jitter” in tomographic datasets, it cannot be relied upon to find a common rotation axis for a complete set of projections20 (a minimal sketch of this pairwise scheme is given after this section). When specimens are mounted within semi-transparent capillary holders, one can use high-contrast capillary edges to correct for jitter21. An alternative approach is to place fiducial markers such as small gold beads22 or silica spheres23 on the specimen mount or directly on the specimen, and identify them either manually or automatically24,25; their positions can then be used to correct for alignment errors26. This approach is quite successful, and is frequently employed; however, it comes at the cost of adding objects that can complicate sample preparation, obscure specimen features in certain projections, and add material that may complicate analytical methods such as the analysis of fluorescent x-rays. For those situations where the addition of fiducial marker materials is problematic, one can instead use a variety of feature detection schemes to identify marker positions intrinsic to the specimen, after which various alignment procedures are applied27–30 including the use of bundle adjustment31. Among these natural feature selection schemes are object corner detection32, wavelet-based detection33, Canny edge detection34, feature curvature detection35, and a common-line approach for registration of features in Fourier space36. One can use Markov random fields37 or the Scale-Invariant Feature Transform (SIFT)38,39 to refine the correspondence of features throughout the tomographic dataset. Finally, in the much simpler case of very sparse and local features, one can simply fit sinusoidal curves onto features in the sinogram representation of a set of line projections, and shift projections onto the fitted sinusoid curve40. These methods are widely employed with good success in electron tomography, where the mean free path for inelastic scattering is often in the 100–200 nm range even for biological specimens with low atomic number, so that it is rare to study samples thicker than about 1 μm (high angle dark field can allow scanning transmission electron microscopes, or STEMs, to image somewhat thicker materials41,42). The situation can be more challenging in nanotomography with x-ray microscopes, where the great penetration of x-rays means that samples tens of micrometers or more in size can be studied43–48. 
This freedom to work with larger specimens means that while feature-based alignment can still be employed for imaging thin specimens with low contrast10,43, in STEM tomography and especially in hard x-ray nanotomography it becomes increasingly challenging to track fiducials or intrinsic features due to the overlap of a large number of features in depth as seen from any one projection. In all of the above techniques, the primary strategy is to perform an alignment that is as accurate as possible before tomographic reconstruction. In the last few decades, a new set of automatic alignment techniques have been introduced based on a “bootstrap” process49, now commonly referred to as “iterative reprojection”. These techniques attempt to achieve simultaneous alignment and reconstruction through an iterative refinement process. They are based on the fact that the measurement process (forward model) and object reconstruction (inverse model) should be consistent only for a correct alignment geometry. Already in its initial implementation for electron tomography49, a multiscale approach of the method has been used, where first a downsampled version of the projections is used to generate a lower resolution 3D object reconstruction for a first pass of alignment; this first pass with a smaller dataset can align large features in images and works quickly, after which one can improve the alignment at higher resolution until finally the full resolution dataset is used49,50. A variation on this approach is to generate a low-quality object reconstruction from a single projection and then use all the remaining projections for another object reconstruction, and to then align these 3D objects51. One can use projection cross-correlation and the common-line approach for an initial alignment so as to improve convergence times52. Within an optimization framework, iterative reprojection can incorporate a variety of criteria to seek optimal alignment parameters, including contrast maximization in the overall image49 or in sub-tomogram features53, cross-correlation of reprojection and original projection images54, and for cost function reduction a quasi-Newton distance minimization55 or a Levenberg-Marquardt distance minimization56. While initially developed for electron nanotomography, iterative reprojection schemes have also been applied in x-ray microscopy10,57 and with commercial x-ray microtomography systems58,59. As was noted above, all of these prior approaches used variations on Algorithm 1, whereas our approach using Algorithm 2 produces faster convergence rates and more robust reconstructions, and can yield better accuracy especially in the case of tomograms with a limited set of projection angles.
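The pairwise cross-correlation alignment discussed at the start of this related-work section can be sketched in a few lines. The snippet below is an illustrative sketch only (not code from any cited package): it estimates the translation between adjacent tilt images by phase correlation and chains the pairwise offsets across the series, which removes frame-to-frame jitter but, as noted above, cannot by itself recover a common rotation axis.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Integer (row, col) translation that registers `mov` onto `ref`,
    estimated by phase correlation of the two projection images."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross /= np.abs(cross) + 1e-12            # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    # wrap peaks in the upper half of the range to negative shifts
    dims = np.array(corr.shape)
    peak[peak > dims // 2] -= dims[peak > dims // 2]
    return tuple(int(p) for p in peak)

def jitter_align(projections):
    """Chain pairwise shifts between adjacent tilt images into cumulative offsets
    relative to the first projection (frame-to-frame 'jitter' correction)."""
    offsets = [(0, 0)]
    for prev, cur in zip(projections[:-1], projections[1:]):
        dy, dx = phase_correlation_shift(prev, cur)
        offsets.append((offsets[-1][0] + dy, offsets[-1][1] + dx))
    return offsets
```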
[ "22852697", "23556845", "5492997", "5070894", "18238264", "22155289", "20888968", "4449573", "20382542", "22108985", "6382732", "27577781", "6836293", "24971982", "21075783", "26433028", "2063493", "10425746", "19397789", "17651988", "20117216", "16542854", "11356060", "12051900", "7571119", "17855124", "22338691", "24582712", "24416264", "10945329", "10620151", "14699066", "20864997", "24457289", "8785010", "21930024", "16137829", "15629657", "18045320", "18243972", "25320994", "28244442", "28120881", "25178011", "18197224", "28244451", "24365918", "25723934", "26846188" ]
[ { "pmid": "22852697", "title": "An instrument for 3D x-ray nano-imaging.", "abstract": "We present an instrument dedicated to 3D scanning x-ray microscopy, allowing a sample to be precisely scanned through a beam while the angle of x-ray incidence can be changed. The position of the sample is controlled with respect to the beam-defining optics by laser interferometry. The instrument achieves a position stability better than 10 nm standard deviation. The instrument performance is assessed using scanning x-ray diffraction microscopy and we demonstrate a resolution of 18 nm in 2D imaging of a lithographic test pattern while the beam was defined by a pinhole of 3 μm in diameter. In 3D on a test object of copper interconnects of a microprocessor, a resolution of 53 nm is achieved." }, { "pmid": "23556845", "title": "Compact prototype apparatus for reducing the circle of confusion down to 40 nm for x-ray nanotomography.", "abstract": "We have constructed a compact prototype apparatus for active correction of circle of confusion during rotational motion. Our system combines fiber optic interferometry as a sensing element, the reference cylinder along with the nanopositioning system, and a robust correction algorithm. We demonstrate dynamic correction of run-out errors down to 40 nm; the resolution is limited by ambient environment and accuracy of correcting nanopositioners. Our approach provides a compact solution for in-vacuum scanning nanotomography x-ray experiments with a potential to reach sub-nm level of correction." }, { "pmid": "18238264", "title": "Maximum likelihood reconstruction for emission tomography.", "abstract": "Previous models for emission tomography (ET) do not distinguish the physics of ET from that of transmission tomography. We give a more accurate general mathematical model for ET where an unknown emission density lambda = lambda(x, y, z) generates, and is to be reconstructed from, the number of counts n(*)(d) in each of D detector units d. Within the model, we give an algorithm for determining an estimate lambdainsertion mark of lambda which maximizes the probability p(n(*)|lambda) of observing the actual detector count data n(*) over all possible densities lambda. Let independent Poisson variables n(b) with unknown means lambda(b), b = 1, ..., B represent the number of unobserved emissions in each of B boxes (pixels) partitioning an object containing an emitter. Suppose each emission in box b is detected in detector unit d with probability p(b, d), d = 1, ..., D with p(b,d) a one-step transition matrix, assumed known. We observe the total number n(*) = n(*)(d) of emissions in each detector unit d and want to estimate the unknown lambda = lambda(b), b = 1, ..., B. For each lambda, the observed data n(*) has probability or likelihood p(n(*)|lambda). The EM algorithm of mathematical statistics starts with an initial estimate lambda(0) and gives the following simple iterative procedure for obtaining a new estimate lambdainsertion mark(new), from an old estimate lambdainsertion mark(old), to obtain lambdainsertion mark(k), k = 1, 2, ..., lambdainsertion mark(new)(b)= lambdainsertion mark(old)(b)Sum of (n(*)p(b,d) from d=1 to D/Sum of lambdainsertion mark()old(b('))p(b('),d) from b(')=1 to B), b=1,...B." 
}, { "pmid": "22155289", "title": "Automatic alignment and reconstruction of images for soft X-ray tomography.", "abstract": "Soft X-ray tomography (SXT) is a powerful imaging technique that generates quantitative, 3D images of the structural organization of whole cells in a near-native state. SXT is also a high-throughput imaging technique. At the National Center for X-ray Tomography (NCXT), specimen preparation and image collection for tomographic reconstruction of a whole cell require only minutes. Aligning and reconstructing the data, however, take significantly longer. Here we describe a new component of the high throughput computational pipeline used for processing data at the NCXT. We have developed a new method for automatic alignment of projection images that does not require fiducial markers or manual interaction with the software. This method has been optimized for SXT data sets, which routinely involve full rotation of the specimen. This software gives users of the NCXT SXT instrument a new capability - virtually real-time initial 3D results during an imaging experiment, which can later be further refined. The new code, Automatic Reconstruction 3D (AREC3D), is also fast, reliable, and robust. The fundamental architecture of the code is also adaptable to high performance GPU processing, which enables significant improvements in speed and fidelity." }, { "pmid": "20888968", "title": "Alignment of cryo-electron tomography datasets.", "abstract": "Data acquisition of cryo-electron tomography (CET) samples described in previous chapters involves relatively imprecise mechanical motions: the tilt series has shifts, rotations, and several other distortions between projections. Alignment is the procedure of correcting for these effects in each image and requires the estimation of a projection model that describes how points from the sample in three-dimensions are projected to generate two-dimensional images. This estimation is enabled by finding corresponding common features between images. This chapter reviews several software packages that perform alignment and reconstruction tasks completely automatically (or with minimal user intervention) in two main scenarios: using gold fiducial markers as high contrast features or using relevant biological structures present in the image (marker-free). In particular, we emphasize the key decision points in the process that users should focus on in order to obtain high-resolution reconstructions." }, { "pmid": "20382542", "title": "Automatic coarse-alignment for TEM tilt series of rod-shaped specimens collected with a full angular range.", "abstract": "An automatic coarse-alignment method for a tilt series of rod-shaped specimen collected with a full angular range (from alpha=-90 degrees to +90 degrees, alpha is the tilt angle of the specimen) is presented; this method is based on a cross-correlation method and uses the outline of the specimen shape. Both the rotational angle of the tilt axis and translational value of each image can be detected in the images without the use of markers. This method is performed on the basis of the assumption that the images taken at alpha=-90 degrees and alpha=+90 degrees are symmetric about the tilt axis. In this study, a carbon rod on which gold particles have been deposited is used as a test specimen for the demonstration. 
This method can be used as an automatic coarse-alignment method prior to the application of a highly accurate alignment method because the alignment procedure can be performed automatically except for the initial setup of some parameters." }, { "pmid": "22108985", "title": "Phase tomography from x-ray coherent diffractive imaging projections.", "abstract": "Coherent diffractive imaging provides accurate phase projections that can be tomographically combined to yield detailed quantitative 3D reconstructions with a resolution that is not limited by imaging optics. We present robust algorithms for post-processing and alignment of these tomographic phase projections. A simple method to remove undesired constant and linear phase terms on the reconstructions is given. Also, we provide an algorithm for automatic alignment of projections that has good performance even for samples with no fiducial markers. Currently applied to phase projections, this alignment algorithm has proven to be robust and should also be useful for lens-based tomography techniques that pursue nanoscale 3D imaging. Lastly, we provide a method for tomographic reconstruction that works on phase projections that are known modulo 2π, such that the phase unwrapping step is avoided. We demonstrate the performance of these algorithms by 3D imaging of bacteria population in legume root-nodule cells." }, { "pmid": "6382732", "title": "Three-dimensional reconstruction of imperfect two-dimensional crystals.", "abstract": "An outline is given of the general methodology of 3D reconstruction on the basis of correlation averages of the 2D projections: this hybrid real space/Fourier space approach substantially alleviates one of the most serious limitations on obtaining high resolution 3D structures, namely crystal distortions. The paper discusses some of the technical problems involved, namely optimisation of tilt increments, a posteriori tilt angle determination, extraction of lattice line data from averaged unit cells, and stain/protein boundary determination. The approach is illustrated by application to a 2D crystal from a bacterial cell envelope." }, { "pmid": "27577781", "title": "Runout error correction in tomographic reconstruction by intensity summation method.", "abstract": "An alignment method for correction of the axial and radial runout errors of the rotation stage in X-ray phase-contrast computed tomography has been developed. Only intensity information was used, without extra hardware or complicated calculation. Notably, the method, as demonstrated herein, can utilize the halo artifact to determine displacement." }, { "pmid": "6836293", "title": "Electron microscope tomography: transcription in three dimensions.", "abstract": "Three-dimensional reconstruction of an asymmetric biological ultrastructure has been achieved by tomographic analysis of electron micrographs of sections tilted on a goniometer specimen stage. Aligned micrographs could be displayed as red-green three-dimensional movies. The techniques have been applied to portions of in situ transcription units of a Balbiani ring in the polytene chromosomes of the midge Chironomus tentans. Current data suggest a DNA compaction of about 8 to 1 in a transcription unit. Nascent ribonucleoprotein granules display an imperfect sixfold helical arrangement around the chromatin axis." 
}, { "pmid": "24971982", "title": "Hard X-ray nanotomography beamline 7C XNI at PLS-II.", "abstract": "The synchrotron-based hard X-ray nanotomography beamline, named 7C X-ray Nano Imaging (XNI), was recently established at Pohang Light Source II. This beamline was constructed primarily for full-field imaging of the inner structures of biological and material samples. The beamline normally provides 46 nm resolution for still images and 100 nm resolution for tomographic images, with a 40 µm field of view. Additionally, for large-scale application, it is capable of a 110 µm field of view with an intermediate resolution." }, { "pmid": "21075783", "title": "An automatic method of detecting and tracking fiducial markers for alignment in electron tomography.", "abstract": "We presented an automatic method for detecting and tracking colloidal gold fiducial markers for alignment in electron tomography (ET). The second-order derivative of direction was used to detect a fiducial marker accurately. The detection was optimized to be selective to the size of fiducial markers. A preliminary tracking result from the normalized correlation coefficient was refined using the detector. A constraint model considering the relationship among the fiducial markers on different images was developed for removing outlier. The three-dimensional positions of the detected fiducial markers and the projection parameters of tilt images were calculated for post process. The accuracy of detection and tracking results was evaluated from the residues by the software IMOD. Application on transmission electron microscopic images also indicated that the presented method could provide a useful approach to automatic alignment in ET." }, { "pmid": "26433028", "title": "A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.", "abstract": "Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET. Our scheme has two main contributions: First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to solve the parameter dependence of marker diameter and marker number. Second, we propose a novel algorithm to solve the tracking of fiducial markers by reducing the tracking problem to an incomplete point set registration problem. Because a global optimization of a point set registration occurs, the result of our tracking is independent of the initial image position in the tilt series, allowing for the robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method can achieve an accurate tracking, almost identical to the current best one in IMOD with half automatic scheme. 
Furthermore, our scheme is fully automatic, depends on fewer parameters (only requires a gross value of the marker diameter) and does not require any manual interaction, providing the possibility of automatic batch processing of electron tomographic reconstruction." }, { "pmid": "2063493", "title": "Alignment of tomographic projections using an incomplete set of fiducial markers.", "abstract": "Reconstruction of three-dimensional images using tomography requires that the projections be aligned along a common rotational axis. We present here a solution to the problem of alignment for single-axis tomography using fiducial markers. The algorithm is based on iterative linearization of the projection equations and is least-squares-optimized by a linear least-squares solution instead of a gradient search. The algorithm does not require markers to be available in every projection, and initial estimates are unnecessary. Program execution is robust, fast, and can quickly align large data sets containing 256 or more projections." }, { "pmid": "10425746", "title": "Automatic acquisition of fiducial markers and alignment of images in tilt series for electron tomography.", "abstract": "Three-dimensional reconstruction of a section of biological tissue by electron tomography requires precise alignment of a series of two-dimensional images of the section made at numerous successive tilt angles. Gold beads on or in the section serve as fiducial markers. A scheme is described that automatically detects the position of these markers and indexes them from image to image. The resulting set of position vectors are arranged in a matrix representation of the tilt geometry and, by inversion, alignment information is obtained. The scheme is convenient, requires little operator time and provides an accuracy of < 2 pixels RMS. A tilt series of 60-70 images can be aligned in approximately 30 min on any modern desktop computer." }, { "pmid": "19397789", "title": "Marker-free image registration of electron tomography tilt-series.", "abstract": "BACKGROUND\nTilt series are commonly used in electron tomography as a means of collecting three-dimensional information from two-dimensional projections. A common problem encountered is the projection alignment prior to 3D reconstruction. Current alignment techniques usually employ gold particles or image derived markers to correctly align the images. When these markers are not present, correlation between adjacent views is used to align them. However, sequential pairwise correlation is prone to bias and the resulting alignment is not always optimal.\n\n\nRESULTS\nIn this paper we introduce an algorithm to find regions of the tilt series which can be tracked within a subseries of the tilt series. These regions act as landmarks allowing the determination of the alignment parameters. We show our results with synthetic data as well as experimental cryo electron tomography.\n\n\nCONCLUSION\nOur algorithm is able to correctly align a single-tilt tomographic series without the help of fiducial markers thanks to the detection of thousands of small image patches that can be tracked over a short number of images in the series." }, { "pmid": "17651988", "title": "Fiducial-less alignment of cryo-sections.", "abstract": "Cryo-electron tomography of vitreous sections is currently the most promising technique for visualizing arbitrary regions of eukaryotic cells or tissue at molecular resolution. 
Despite significant progress in the sample preparation techniques over the past few years, the three dimensional reconstruction using electron tomography is not as simple as in plunge frozen samples for various reasons, but mainly due to the effects of irradiation on the sections and the resulting poor alignment. Here, we present a new algorithm, which can provide a useful three-dimensional marker model after investigation of hundreds to thousands of observations calculated using local cross-correlation throughout the tilt series. The observations are chosen according to their coherence to a particular model and assigned to virtual markers. Through this type of measurement a merit figure can be calculated, precisely estimating the quality of the reconstruction. The merit figures of this alignment method are comparable to those obtained with plunge frozen samples using fiducial gold markers. An additional advantage of the algorithm is the implicit detection of areas in the sections that behave as rigid bodies and can thus be properly reconstructed." }, { "pmid": "20117216", "title": "Alignator: a GPU powered software package for robust fiducial-less alignment of cryo tilt-series.", "abstract": "The robust alignment of tilt-series collected for cryo-electron tomography in the absence of fiducial markers, is a problem that, especially for tilt-series of vitreous sections, still represents a significant challenge. Here we present a complete software package that implements a cross-correlation-based procedure that tracks similar image features that are present in several micrographs and explores them implicitly as substitutes for fiducials like gold beads and quantum dots. The added value compared to previous approaches, is that the algorithm explores a huge number of random positions, which are tracked on several micrographs, while being able to identify trace failures, using a cross-validation procedure based on the 3D marker model of the tilt-series. Furthermore, this method allows the reliable identification of areas which behave as a rigid body during the tilt-series and hence addresses specific difficulties for the alignment of vitreous sections, by correcting practical caveats. The resulting alignments can attain sub-pixel precision at the local level and is able to yield a substantial number of usable tilt-series (around 60%). In principle, the algorithm has the potential to run in a fully automated fashion, and could be used to align any tilt-series directly from the microscope. Finally, we have significantly improved the user interface and implemented the source code on the graphics processing unit (GPU) to accelerate the computations." }, { "pmid": "16542854", "title": "Transform-based backprojection for volume reconstruction of large format electron microscope tilt series.", "abstract": "Alignment of the individual images of a tilt series is a critical step in obtaining high-quality electron microscope reconstructions. We report on general methods for producing good alignments, and utilizing the alignment data in subsequent reconstruction steps. Our alignment techniques utilize bundle adjustment. Bundle adjustment is the simultaneous calculation of the position of distinguished markers in the object space and the transforms of these markers to their positions in the observed images, along the bundle of particle trajectories along which the object is projected to each EM image. 
Bundle adjustment techniques are general enough to encompass the computation of linear, projective or nonlinear transforms for backprojection, and can compensate for curvilinear trajectories through the object, sample warping, and optical aberration. We will also report on new reconstruction codes and describe our results using these codes." }, { "pmid": "11356060", "title": "Multiphase method for automatic alignment of transmission electron microscope images using markers.", "abstract": "In order to successfully perform the 3D reconstruction in electron tomography, transmission electron microscope images must be accurately aligned or registered. So far, the problem is solved by either manually showing the corresponding fiducial markers from the set of images or automatically using simple correlation between the images on several rotations and scales. The present solutions, however, share the problem of being inefficient and/or inaccurate. We therefore propose a method in which the registration is automated using conventional colloidal gold particles as reference markers between images. We approach the problem from the computer vision viewpoint; hence, the alignment problem is divided into several subproblems: (1) finding initial matches from successive images, (2) estimating the epipolar geometry between consecutive images, (3) finding and localizing the gold particles with subpixel accuracy in each image, (4) predicting the probable matching gold particles using the epipolar constraint and its uncertainty, (5) matching and tracking the gold beads through the tilt series, and (6) optimizing the transformation parameters for the whole image set. The results show not only the reliability of the suggested method but also a high level of accuracy in alignment, since practically all the visible gold markers can be used." }, { "pmid": "12051900", "title": "Automatic alignment of transmission electron microscope tilt series without fiducial markers.", "abstract": "Accurate image alignment is needed for computing three-dimensional reconstructions from transmission electron microscope tilt series. So far, the best results have been obtained by using colloidal gold beads as fiducial markers. If their use has not been possible for some reason, the only option has been the automatic cross-correlation-based registration methods. However, the latter methods are inaccurate and, as we will show, inappropriate for the whole problem. Conversely, we propose a novel method that uses the actual 3D motion model but works without any fiducial markers in the images. The method is based on matching and tracking some interest points of the intensity surface by first solving the underlying geometrical constraint of consecutive images in the tilt series. The results show that our method is near the gold marker alignment in the level of accuracy and hence opens the way for new opportunities in the analysis of electron tomography reconstructions, especially when markers cannot be used." }, { "pmid": "7571119", "title": "A marker-free alignment method for electron tomography.", "abstract": "In electron tomography of biological specimens, fiducial markers are normally used to achieve accurate alignment of the input projections. We address the problem of alignment of projections from objects that are freely supported and do not permit the use of markers. To this end we present a new alignment algorithm for single-axis tilt geometry based on the principle of Fourier-space common lines. 
An iterative scheme has been developed to overcome the noise-sensitivity of the common-line method. This algorithm was used to align a data set that was not amenable to alignment with fiducial markers." }, { "pmid": "17855124", "title": "Markov random field based automatic image alignment for electron tomography.", "abstract": "We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets." }, { "pmid": "22338691", "title": "TXM-Wizard: a program for advanced data collection and evaluation in full-field transmission X-ray microscopy.", "abstract": "Transmission X-ray microscopy (TXM) has been well recognized as a powerful tool for non-destructive investigation of the three-dimensional inner structure of a sample with spatial resolution down to a few tens of nanometers, especially when combined with synchrotron radiation sources. Recent developments of this technique have presented a need for new tools for both system control and data analysis. Here a software package developed in MATLAB for script command generation and analysis of TXM data is presented. The first toolkit, the script generator, allows automating complex experimental tasks which involve up to several thousand motor movements. The second package was designed to accomplish computationally intense tasks such as data processing of mosaic and mosaic tomography datasets; dual-energy contrast imaging, where data are recorded above and below a specific X-ray absorption edge; and TXM X-ray absorption near-edge structure imaging datasets. Furthermore, analytical and iterative tomography reconstruction algorithms were implemented. The compiled software package is freely available." }, { "pmid": "24582712", "title": "A marker-free automatic alignment method based on scale-invariant features.", "abstract": "In electron tomography, alignment accuracy is critical for high-resolution reconstruction. However, the automatic alignment of a tilt series without fiducial markers remains a challenge. Here, we propose a new alignment method based on Scale-Invariant Feature Transform (SIFT) for marker-free alignment. 
The method covers the detection and localization of interest points (features), feature matching, feature tracking and optimization of projection parameters. The proposed method implements a highly reliable matching strategy and tracking model to detect a huge number of feature tracks. Furthermore, an incremental bundle adjustment method is devised to tolerate noise data and ensure the accurate estimation of projection parameters. Our method was evaluated with a number of experimental data, and the results exhibit an improved alignment accuracy comparable with current fiducial marker alignment and subsequent higher resolution of tomography." }, { "pmid": "24416264", "title": "Image alignment for tomography reconstruction from synchrotron X-ray microscopic images.", "abstract": "A synchrotron X-ray microscope is a powerful imaging apparatus for taking high-resolution and high-contrast X-ray images of nanoscale objects. A sufficient number of X-ray projection images from different angles is required for constructing 3D volume images of an object. Because a synchrotron light source is immobile, a rotational object holder is required for tomography. At a resolution of 10 nm per pixel, the vibration of the holder caused by rotating the object cannot be disregarded if tomographic images are to be reconstructed accurately. This paper presents a computer method to compensate for the vibration of the rotational holder by aligning neighboring X-ray images. This alignment process involves two steps. The first step is to match the \"projected feature points\" in the sequence of images. The matched projected feature points in the x-θ plane should form a set of sine-shaped loci. The second step is to fit the loci to a set of sine waves to compute the parameters required for alignment. The experimental results show that the proposed method outperforms two previously proposed methods, Xradia and SPIDER. The developed software system can be downloaded from the URL, http://www.cs.nctu.edu.tw/~chengchc/SCTA or http://goo.gl/s4AMx." }, { "pmid": "10945329", "title": "Computed tomography of cryogenic biological specimens based on X-ray microscopic images.", "abstract": "Soft X-ray microscopy employs the photoelectric absorption contrast between water and protein in the 2.34-4.38 nm wavelength region to visualize protein structures down to 30 nm size without any staining methods. Due to the large depth of focus of the Fresnel zone plates used as X-ray objectives, computed tomography based on the X-ray microscopic images can be used to reconstruct the local linear absorption coefficient inside the three-dimensional specimen volume. High-resolution X-ray images require a high specimen radiation dose, and a series of images taken at different viewing angles is needed for computed tomography. Therefore, cryo microscopy is necessary to preserve the structural integrity of hydrated biological specimens during image acquisition. The cryo transmission X-ray microscope at the electron storage ring BESSY I (Berlin) was used to obtain a tilt series of images of the frozen-hydrated green alga Chlamydomonas reinhardtii. The living specimens were inserted into borosilicate glass capillaries and, in this first experiment, rapidly cooled by plunging into liquid nitrogen. The capillary specimen holders allow image acquisition over the full angular range of 180 degrees. 
The reconstruction shows for the first time details down to 60 nm size inside a frozen-hydrated biological specimen and conveys a clear impression of the internal structures. This technique is expected to be applicable to a wide range of biological specimens, such as the cell nucleus. It offers the possibility of imaging the three-dimensional structure of hydrated biological specimens close to their natural living state." }, { "pmid": "10620151", "title": "Soft X-ray microscopy with a cryo scanning transmission X-ray microscope: II. Tomography.", "abstract": "Using a cryo scanning transmission X-ray microscope (Maser, et al. (2000) Soft X-ray microscopy with a cryo scanning transmission X-ray microscope: I. Instrumentation, imaging and spectroscopy. J. Microsc. 197, 68-79), we have obtained tomographic data-sets of frozen hydrated mouse 3T3 fibroblasts. The ice thickness was several micrometres throughout the reconstruction volume, precluding cryo electron tomography. Projections were acquired within the depth of focus of the focusing optics, and the three-dimensional reconstruction was obtained using an algebraic reconstruction technique. In this first demonstration, 100 nm lateral and 250 nm longitudinal resolution was obtained in images of unlabelled cells, with potential for substantial further gains in resolution. Future efforts towards tomography of spectroscopically highlighted subcellular components in whole cells are discussed." }, { "pmid": "14699066", "title": "X-ray tomography generates 3-D reconstructions of the yeast, saccharomyces cerevisiae, at 60-nm resolution.", "abstract": "We examined the yeast, Saccharomyces cerevisiae, using X-ray tomography and demonstrate unique views of the internal structural organization of these cells at 60-nm resolution. Cryo X-ray tomography is a new imaging technique that generates three-dimensional (3-D) information of whole cells. In the energy range of X-rays used to examine cells, organic material absorbs approximately an order of magnitude more strongly than water. This produces a quantifiable natural contrast in fully hydrated cells and eliminates the need for chemical fixatives or contrast enhancement reagents to visualize cellular structures. Because proteins can be localized in the X-ray microscope using immunogold labeling protocols (Meyer-Ilse et al., 2001. J. Microsc. 201, 395-403), tomography enables 3-D molecular localization. The time required to collect the data for each cell shown here was <15 min and has recently been reduced to 3 min, making it possible to examine numerous yeast and to collect statistically significant high-resolution data. In this video essay, we show examples of 3-D tomographic reconstructions of whole yeast and demonstrate the power of this technology to obtain quantifiable information from whole, hydrated cells." }, { "pmid": "20864997", "title": "Ptychographic X-ray computed tomography at the nanoscale.", "abstract": "X-ray tomography is an invaluable tool in biomedical imaging. It can deliver the three-dimensional internal structure of entire organisms as well as that of single cells, and even gives access to quantitative information, crucially important both for medical applications and for basic research. Most frequently such information is based on X-ray attenuation. Phase contrast is sometimes used for improved visibility but remains significantly harder to quantify.
Here we describe an X-ray computed tomography technique that generates quantitative high-contrast three-dimensional electron density maps from phase contrast information without reverting to assumptions of a weak phase object or negligible absorption. This method uses a ptychographic coherent imaging approach to record tomographic data sets, exploiting both the high penetration power of hard X-rays and the high sensitivity of lensless imaging. As an example, we present images of a bone sample in which structures on the 100 nm length scale such as the osteocyte lacunae and the interconnective canalicular network are clearly resolved. The recovered electron density map provides a contrast high enough to estimate nanoscale bone density variations of less than one per cent. We expect this high-resolution tomography technique to provide invaluable information for both the life and materials sciences." }, { "pmid": "24457289", "title": "X-ray ptychographic computed tomography at 16 nm isotropic 3D resolution.", "abstract": "X-ray ptychography is a scanning variant of coherent diffractive imaging with the ability to image large fields of view at high resolution. It further allows imaging of non-isolated specimens and can produce quantitative mapping of the electron density distribution in 3D when combined with computed tomography. The method does not require imaging lenses, which makes it dose efficient and suitable to multi-keV X-rays, where efficient photon counting, pixelated detectors are available. Here we present the first highly resolved quantitative X-ray ptychographic tomography of an extended object yielding 16 nm isotropic 3D resolution recorded at 2 Å wavelength. This first-of-its-kind demonstration paves the way for ptychographic X-ray tomography to become a promising method for X-ray imaging of representative sample volumes at unmatched resolution, opening tremendous potential for characterizing samples in materials science and biology by filling the resolution gap between electron microscopy and other X-ray imaging techniques." }, { "pmid": "8785010", "title": "Alignment of electron tomographic series by correlation without the use of gold particles.", "abstract": "Electron tomography requires that a tilt series of micrographs be aligned and that the orientation of the tilt axis be known. This has been done conveniently with gold markers applied to the surface of a specimen to provide easily accessible information on the orientation of each tilt projection. Where gold markers are absent, another approach to alignment must be used. A method is presented here for aligning tilt projections without the use of markers, utilizing correlation methods. The technique is iterative, drawing principally on the work of Dengler [Ultramicroscopy 30 (1989) 337], and consists of computing a low resolution back projection image from which computed tomographic projections can be generated. These in turn serve as reference images for the next alignment of the tomographic series. An initial alignment must be made before the first back projection, and this is done following the method of Guckenberger [Ultramicroscopy 9 (1982) 167] for translational alignment and by common lines analysis [Liu et al., Ultramicroscopy 58 (1995) 393] for identification of the tilt axis. Four tomographic series of a biological nature were aligned and analyzed, and the method has proven to be both accurate and reproducible for the data presented here." 
}, { "pmid": "21930024", "title": "Refinement procedure for the image alignment in high-resolution electron tomography.", "abstract": "High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is the case in particular for ultra-high resolution where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A tomographic-reconstruction based and marker-free method is proposed, which uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis it provides the required correlation over a large tilt angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive to the reference of marker-based procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data." }, { "pmid": "16137829", "title": "Accurate marker-free alignment with simultaneous geometry determination and reconstruction of tilt series in electron tomography.", "abstract": "An image alignment method for electron tomography is presented which is based on cross-correlation techniques and which includes a simultaneous refinement of the tilt geometry. A coarsely aligned tilt series is iteratively refined with a procedure consisting of two steps for each cycle: area matching and subsequent geometry correction. The first step, area matching, brings into register equivalent specimen regions in all images of the tilt series. It determines four parameters of a linear two-dimensional transformation, not just translation and rotation as is done during the preceding coarse alignment with conventional methods. The refinement procedure also differs from earlier methods in that the alignment references are now computed from already aligned images by reprojection of a backprojected volume. The second step, geometry correction, refines the initially inaccurate estimates of the geometrical parameters, including the direction of the tilt axis, a tilt angle offset, and the inclination of the specimen with respect to the support film or specimen holder. The correction values serve as an indicator for the progress of the refinement. For each new iteration, the correction values are used to compute an updated set of geometry parameters by a least squares fit. Model calculations show that it is essential to refine the geometrical parameters as well as the accurate alignment of the images to obtain a faithful map of the original structure." }, { "pmid": "15629657", "title": "Unified 3-D structure and projection orientation refinement using quasi-Newton algorithm.", "abstract": "We describe an algorithm for simultaneous refinement of a three-dimensional (3-D) density map and of the orientation parameters of two-dimensional (2-D) projections that are used to reconstruct this map. The application is in electron microscopy, where the 3-D structure of a protein has to be determined from a set of 2-D projections collected at random but initially unknown angles. 
The design of the algorithm is based on the assumption that initial low resolution approximation of the density map and reasonable guesses for orientation parameters are available. Thus, the algorithm is applicable in final stages of the structure refinement, when the quality of the results is of main concern. We define the objective function to be minimized in real space and solve the resulting nonlinear optimization problem using a Quasi-Newton algorithm. We calculate analytical derivatives with respect to density distribution and the finite difference approximations of derivatives with respect to orientation parameters. We demonstrate that calculation of derivatives is robust with respect to noise in the data. This is due to the fact that noise is annihilated by the back-projection operations. Our algorithm is distinguished from other orientation refinement methods (i) by the simultaneous update of the density map and orientation parameters resulting in a highly efficient computational scheme and (ii) by the high quality of the results produced by a direct minimization of the discrepancy between the 2-D data and the projected views of the reconstructed 3-D structure. We demonstrate the speed and accuracy of our method by using simulated data." }, { "pmid": "18045320", "title": "Software image alignment for X-ray microtomography with submicrometre resolution using a SEM-based X-ray microscope.", "abstract": "Improved X-ray sources and optics now enable X-ray imaging resolution down to approximately 50 nm for laboratory-based X-ray microscopy systems. This offers the potential for submicrometre resolution in tomography; however, achieving this resolution presents challenges due to system stability. We describe the use of software methods to enable submicrometre resolution of approximately 560 nm. This is a very high resolution for a modest laboratory-based point-projection X-ray tomography system. The hardware is based on a scanning electron microscope, and benefits from inline X-ray phase contrast to improve visibility of fine features. Improving the resolution achievable with the system enables it to be used to address a greater range of samples." }, { "pmid": "18243972", "title": "A fast sinc function gridding algorithm for fourier inversion in computer tomography.", "abstract": "The Fourier inversion method for reconstruction of images in computerized tomography has not been widely used owing to the perceived difficulty of interpolating from polar or other measurement grids to the Cartesian grid required for fast numerical Fourier inversion. Although the Fourier inversion method is recognized as being computationally faster than the back-projection method for parallel ray projection data, the artifacts resulting from inaccurate interpolation have generally limited application of the method. This paper presents a computationally efficient gridding algorithm which can be used with direct Fourier transformation to achieve arbitrarily small artifact levels. The method has potential for application to other measurement geometries such as fan-beam projections and diffraction tomography and NMR imaging." }, { "pmid": "25320994", "title": "Reliable method for calculating the center of rotation in parallel-beam tomography.", "abstract": "High-throughput processing of parallel-beam X-ray tomography at synchrotron facilities is lacking a reliable and robust method to determine the center of rotation in an automated fashion, i.e. without the need for a human scorer. 
Well-known techniques based on center of mass calculation, image registration, or reconstruction evaluation work well under favourable conditions but they fail in cases where samples are larger than field of view, when the projections show low signal-to-noise, or when optical defects dominate the contrast. Here we propose an alternative technique which is based on the Fourier analysis of the sinogram. Our technique shows excellent performance particularly on challenging data." }, { "pmid": "28244442", "title": "A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography.", "abstract": "This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy from the evaluation of synthetic data with various noise ratios. It is further validated with experimental data of four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNN has also great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided." }, { "pmid": "28120881", "title": "Alignment Solution for CT Image Reconstruction using Fixed Point and Virtual Rotation Axis.", "abstract": "Since X-ray tomography is now widely adopted in many different areas, it becomes more crucial to find a robust routine of handling tomographic data to get better quality of reconstructions. Though there are several existing techniques, it seems helpful to have a more automated method to remove the possible errors that hinder clearer image reconstruction. Here, we proposed an alternative method and new algorithm using the sinogram and the fixed point. An advanced physical concept of Center of Attenuation (CA) was also introduced to figure out how this fixed point is applied to the reconstruction of image having errors we categorized in this article. Our technique showed a promising performance in restoring images having translation and vertical tilt errors." }, { "pmid": "25178011", "title": "TomoPy: a framework for the analysis of synchrotron tomographic data.", "abstract": "Analysis of tomographic datasets at synchrotron light sources (including X-ray transmission tomography, X-ray fluorescence microscopy and X-ray diffraction tomography) is becoming progressively more challenging due to the increasing data acquisition rates that new technologies in X-ray sources and detectors enable. The next generation of synchrotron facilities that are currently under design or construction throughout the world will provide diffraction-limited X-ray sources and are expected to boost the current data rates by several orders of magnitude, stressing the need for the development and integration of efficient analysis tools. Here an attempt to provide a collaborative framework for the analysis of synchrotron tomographic data that has the potential to unify the effort of different facilities and beamlines performing similar tasks is described in detail. 
The proposed Python-based framework is open-source, platform- and data-format-independent, has multiprocessing capability and supports procedural programming that many researchers prefer. This collaborative platform could affect all major synchrotron facilities where new effort is now dedicated to developing new tools that can be deployed at the facility for real-time processing, as well as distributed to users for off-site data processing." }, { "pmid": "18197224", "title": "Efficient subpixel image registration algorithms.", "abstract": "Three new algorithms for 2D translation image registration to within a small fraction of a pixel that use nonlinear optimization and matrix-multiply discrete Fourier transforms are compared. These algorithms can achieve registration with an accuracy equivalent to that of the conventional fast Fourier transform upsampling approach in a small fraction of the computation time and with greatly reduced memory requirements. Their accuracy and computation time are compared for the purpose of evaluating a translation-invariant error metric." }, { "pmid": "28244451", "title": "XDesign: an open-source software package for designing X-ray imaging phantoms and experiments.", "abstract": "The development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications." }, { "pmid": "24365918", "title": "The Bionanoprobe: hard X-ray fluorescence nanoprobe with cryogenic capabilities.", "abstract": "Hard X-ray fluorescence microscopy is one of the most sensitive techniques for performing trace elemental analysis of biological samples such as whole cells and tissues. Conventional sample preparation methods usually involve dehydration, which removes cellular water and may consequently cause structural collapse, or invasive processes such as embedding. Radiation-induced artifacts may also become an issue, particularly as the spatial resolution increases beyond the sub-micrometer scale. To allow imaging under hydrated conditions, close to the `natural state', as well as to reduce structural radiation damage, the Bionanoprobe (BNP) has been developed, a hard X-ray fluorescence nanoprobe with cryogenic sample environment and cryo transfer capabilities, dedicated to studying trace elements in frozen-hydrated biological systems. The BNP is installed at an undulator beamline at sector 21 of the Advanced Photon Source. It provides a spatial resolution of 30 nm for two-dimensional fluorescence imaging. In this first demonstration the instrument design and motion control principles are described, the instrument performance is quantified, and the first results obtained with the BNP on frozen-hydrated whole cells are reported." }, { "pmid": "25723934", "title": "Pushing the limits: an instrument for hard X-ray imaging below 20 nm.", "abstract": "Hard X-ray microscopy is a prominent tool suitable for nanoscale-resolution non-destructive imaging of various materials used in different areas of science and technology. 
With an ongoing effort to push the 2D/3D imaging resolution down to 10 nm in the hard X-ray regime, both the fabrication of nano-focusing optics and the stability of the microscope using those optics become extremely challenging. In this work a microscopy system designed and constructed to accommodate multilayer Laue lenses as nanofocusing optics is presented. The developed apparatus has been thoroughly characterized in terms of resolution and stability followed by imaging experiments at a synchrotron facility. Drift rates of ∼2 nm h(-1) accompanied by 13 nm × 33 nm imaging resolution at 11.8 keV are reported." }, { "pmid": "26846188", "title": "Multimodality hard-x-ray imaging of a chromosome with nanoscale spatial resolution.", "abstract": "We developed a scanning hard x-ray microscope using a new class of x-ray nano-focusing optic called a multilayer Laue lens and imaged a chromosome with nanoscale spatial resolution. The combination of the hard x-ray's superior penetration power, high sensitivity to elemental composition, high spatial-resolution and quantitative analysis creates a unique tool with capabilities that other microscopy techniques cannot provide. Using this microscope, we simultaneously obtained absorption-, phase-, and fluorescence-contrast images of Pt-stained human chromosome samples. The high spatial-resolution of the microscope and its multi-modality imaging capabilities enabled us to observe the internal ultra-structures of a thick chromosome without sectioning it." } ]
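Several of the reference abstracts above describe, in different variants, the same practical pipeline: coarse cross-correlation pre-alignment of the tilt projections, subpixel drift estimation, rotation-axis calibration, and a filtered or gridrec reconstruction. The sketch below is an illustrative outline of that pipeline only, not code from any of the cited works; it assumes the open-source TomoPy and scikit-image packages, and every function choice and parameter (for example `upsample_factor` and the center-search tolerance `tol`) is an assumption based on those libraries' public APIs.

```python
# Illustrative sketch only (assumed TomoPy / scikit-image APIs; not taken from the cited papers).
import numpy as np
import tomopy
from skimage.registration import phase_cross_correlation


def align_and_reconstruct(proj, flat, dark, theta):
    """proj: projections with shape (n_angles, n_rows, n_cols); theta: angles in radians."""
    # Flat-/dark-field normalization and conversion to line integrals.
    data = tomopy.normalize(proj, flat, dark)
    data = tomopy.minus_log(data)

    # Coarse drift/jitter correction: register each projection to its (already
    # corrected) predecessor using DFT-upsampled subpixel cross-correlation.
    for i in range(1, data.shape[0]):
        shift, _, _ = phase_cross_correlation(data[i - 1], data[i], upsample_factor=100)
        # Apply the returned shift (nearest-pixel here, for simplicity) to re-register frame i.
        data[i] = np.roll(data[i], tuple(np.round(shift).astype(int)), axis=(0, 1))

    # Calibrate the rotation axis, then reconstruct with the gridrec algorithm.
    center = tomopy.find_center(data, theta, init=data.shape[2] / 2.0, tol=0.5)
    return tomopy.recon(data, theta, center=center, algorithm='gridrec')
```

In practice, the marker-based, feature-tracking, sinogram-analysis, and CNN-based approaches described in the abstracts above replace this simple neighbor-to-neighbor correlation step with more robust models of the projection geometry; the sketch is only meant to make the overall workflow concrete.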
Toxics
29051407
PMC5606636
10.3390/toxics4010001
Farmers’ Exposure to Pesticides: Toxicity Types and Ways of Prevention
Synthetic pesticides are extensively used in agriculture to control harmful pests and prevent crop yield losses or product damage. Because of high biological activity and, in certain cases, long persistence in the environment, pesticides may cause undesirable effects to human health and to the environment. Farmers are routinely exposed to high levels of pesticides, usually much greater than those of consumers. Farmers’ exposure mainly occurs during the preparation and application of the pesticide spray solutions and during the cleaning-up of spraying equipment. Farmers who mix, load, and spray pesticides can be exposed to these chemicals due to spills and splashes, direct spray contact as a result of faulty or missing protective equipment, or even drift. However, farmers can be also exposed to pesticides even when performing activities not directly related to pesticide use. Farmers who perform manual labor in areas treated with pesticides can face major exposure from direct spray, drift from neighboring fields, or by contact with pesticide residues on the crop or soil. This kind of exposure is often underestimated. The dermal and inhalation routes of entry are typically the most common routes of farmers’ exposure to pesticides. Dermal exposure during usual pesticide handling takes place in body areas that remain uncovered by protective clothing, such as the face and the hands. Farmers’ exposure to pesticides can be reduced through less use of pesticides and through the correct use of the appropriate type of personal protective equipment in all stages of pesticide handling.
5. Pesticide-Related Work Tasks
Pesticide use is typically associated with three basic stages: (i) mixing and loading the pesticide product, (ii) application of the spray solution, and (iii) clean-up of the spraying equipment. Mixing and loading are the tasks associated with the greatest intensity of pesticide exposure, because during this stage farmers handle the concentrated product and, therefore, often face high-exposure events (e.g., spills). However, the total exposure during application may exceed that incurred during mixing and loading, because application typically takes more time. Pesticide drift is a permanent hazard as well, since it occurs even in the most careful applications and can increase the risk of detrimental effects on users and the environment [28]. There is also evidence that cleaning the equipment after spraying can be an important source of exposure. The level of operator exposure depends on the type of spraying equipment used. Hand spraying with wide-area spray nozzles (used when large areas need to be treated) is associated with greater operator exposure than spraying with narrowly focused nozzles. When pesticides are applied with tractors, spraying equipment mounted directly on the tractor is associated with higher operator exposure than equipment attached to a trailer. Pesticide deposition on different parts of the operator's body may vary widely owing to differences in individual work habits. Several studies of body contamination in pesticide applicators have shown that the hands and forearms receive the greatest contamination during the preparation and application of pesticides, although other body parts, such as the thighs, chest, and back, may also be significantly contaminated.
Clean-up of the spraying equipment is an important task in pesticide use. The time devoted to cleaning may account for a considerable part of the basic stages of pesticide handling [29,30], and despite considerable variation among farm workers, equipment cleaning has been found to contribute greatly to workers' daily dermal exposure [29]. Unexpected events, such as spills and splashes, are also a major source of dermal contamination for pesticide applicators, and the exposure from such events can result in significant acute and long-term health effects [30]. Spills and splashes usually occur during mixing, loading, and application, but may also occur during equipment clean-up [29]. Farmers (or farm workers) who prepare the spray solutions and apply pesticides have received most of the research attention thus far, yet farmers re-entering sprayed fields may also face pesticide exposure, sometimes at significant levels [31,32]. Re-entry workers may even face greater exposure than applicators, possibly because safety training and the use of PPE are less common among them and their duration of exposure may be longer [31,32,33]. Re-entry exposure can become a serious problem if farm workers enter treated fields soon after pesticide application [34]. Overexposure events caused by spray drift from neighboring fields, each involving groups of workers, have also been documented as inadvertent sources of farmers' exposure to pesticides [35].
[ "25096494", "24462777", "23664459", "17017381", "7713022", "10414791", "24287863", "21655127", "18666136", "11246121", "21217838", "26296070", "25666658", "11884232", "12826473", "24993452", "23820846", "24106643", "16175199", "19022871", "11385639", "15551369", "11675623", "17560990", "19967906", "15657814", "18784953", "12363182", "10793284", "17822819", "8239718", "10204668", "12505907" ]
[ { "pmid": "25096494", "title": "Global trends of research on emerging contaminants in the environment and humans: a literature assimilation.", "abstract": "Available literature data on five typical groups of emerging contaminants (EMCs), i.e., chlorinated paraffins (CPs), dechlorane plus and related compounds (DPs), hexabromocyclododecanes (HBCDs), phthalate esters, and pyrethroids, accumulated between 2003 and 2013 were assimilated. Research efforts were categorized by environmental compartments and countries, so that global trends of research on EMCs and data gaps can be identified. The number of articles on the target EMCs ranged from 126 to 1,379 between 2003 and 2013. The numbers of articles on CPs, DPs, HBCDs, and pyrethroids largely followed the sequence of biota > sediment ≥ air > water ≥ soil > human tissue, whereas the sequence for phthalate esters was water > sediment > soil > human tissue ≥ biota ≥ air. Comprehensive studies on the target EMCs in biological samples and human tissues have been conducted worldwide. However, investigations into the occurrence of the target EMCs in soil of background areas and water are still scarce. Finally, developed and moderately developed countries, such as the USA, China, Canada, Japan, and Germany, were the main contributors to the global research efforts on EMCs, suggesting that economic prosperity may be one of the main factors propelling scientific research on EMCs." }, { "pmid": "24462777", "title": "Emerging pollutants in the environment: present and future challenges in biomonitoring, ecological risks and bioremediation.", "abstract": "Emerging pollutants reach the environment from various anthropogenic sources and are distributed throughout environmental matrices. Although great advances have been made in the detection and analysis of trace pollutants during recent decades, due to the continued development and refinement of specific techniques, a wide array of undetected contaminants of emerging environmental concern need to be identified and quantified in various environmental components and biological tissues. These pollutants may be mobile and persistent in air, water, soil, sediments and ecological receptors even at low concentrations. Robust data on their fate and behaviour in the environment, as well as on threats to ecological and human health, are still lacking. Moreover, the ecotoxicological significance of some emerging micropollutants remains largely unknown, because satisfactory data to determine their risk often do not exist. This paper discusses the fate, behaviour, (bio)monitoring, environmental and health risks associated with emerging chemical (pharmaceuticals, endocrine disruptors, hormones, toxins, among others) and biological (bacteria, viruses) micropollutants in soils, sediments, groundwater, industrial and municipal wastewaters, aquaculture effluents, and freshwater and marine ecosystems, and highlights new horizons for their (bio)removal. Our study aims to demonstrate the imperative need to boost research and innovation for new and cost-effective treatment technologies, in line with the uptake, mode of action and consequences of each emerging contaminant. We also address the topic of innovative tools for the evaluation of the effects of toxicity on human health and for the prediction of microbial availability and degradation in the environment. Additionally, we consider the development of (bio)sensors to perform environmental monitoring in real-time mode. 
This needs to address multiple species, along with a more effective exploitation of specialised microbes or enzymes capable of degrading endocrine disruptors and other micropollutants. In practical terms, the outcomes of these activities will build up the knowledge base and develop solutions to fill the significant innovation gap faced worldwide." }, { "pmid": "17017381", "title": "Risk assessment and management of occupational exposure to pesticides in agriculture.", "abstract": "Nearly 50% of the world labour force is employed in agriculture. Over the last 50 years, agriculture has deeply changed with a massive utilisation of pesticides and fertilisers to enhance crop protection and production, food quality and food preservation. Pesticides are also increasingly employed for public health purposes and for domestic use. Pesticide are unique chemicals as they are intrinsically toxic for several biological targets, are deliberately spread into the environment, and their toxicity has a limited species selectivity. Pesticide toxicity depends on the compound family and is generally greater for the older compounds; in humans, they are responsible for acute poisonings as well as for long term health effects, including cancer and adverse effects on reproduction. Due to their intrinsic toxicity, in most countries a specific and complex legislation prescribes a thorough risk assessment process for pesticides prior to their entrance to the market (pre-marketing risk assessment). The post-marketing risk assessment takes place during the use of pesticides and aims at assessing the risk for exposed operators. The results of the risk assessment are the base for the health surveillance of exposed workers. Occupational exposure to pesticides in agriculture concerns product distributors, mixers and loaders, applicators, bystanders, and rural workers re-entering the fields shortly after treatment. Assessing and managing the occupational health risks posed by the use of pesticides in agriculture is a complex but essential task for occupational health specialists and toxicologists. In spite of the economic and social importance of agriculture, the health protection of agricultural workforce has been overlooked for too many years, causing an heavy tribute paid in terms of avoidable diseases, human sufferance, and economic losses. Particularly in the developing countries, where agricultural work is one of the predominant job, a sustainable model of development calls for more attention to occupational risks in agriculture. The experience of many countries has shown that prevention of health risk caused by pesticides is technically feasible and economically rewarding for the individuals and the whole community. A proper risk assessment and management of pesticide use is an essential component of this preventative" }, { "pmid": "7713022", "title": "Evaluating health risks from occupational exposure to pesticides and the regulatory response.", "abstract": "In this study, we used measurements of occupational exposures to pesticides in agriculture to evaluate health risks and analyzed how the federal regulatory program is addressing these risks. Dose estimates developed by the State of California from measured occupational exposures to 41 pesticides were compared to standard indices of acute toxicity (LD50) and chronic effects (reference dose). Lifetime cancer risks were estimated using cancer potencies. 
Estimated absorbed daily doses for mixers, loaders, and applicators of pesticides ranged from less than 0.0001% to 48% of the estimated human LD50 values, and doses for 10 of 40 pesticides exceeded 1% of the estimated human LD50 values. Estimated lifetime absorbed daily doses ranged from 0.1% to 114,000% of the reference doses developed by the U.S. Environmental Protection Agency, and doses for 13 of 25 pesticides were above them. Lifetime cancer risks ranged from 1 per million to 1700 per million, and estimates for 12 of 13 pesticides were above 1 per million. Similar results were obtained for field workers and flaggers. For the pesticides examined, exposures pose greater risks of chronic effects than acute effects. Exposure reduction measures, including use of closed mixing systems and personal protective equipment, significantly reduced exposures. Proposed regulations rely primarily on requirements for personal protective equipment and use restrictions to protect workers. Chronic health risks are not considered in setting these requirements. Reviews of pesticides by the federal pesticide regulatory program have had little effect on occupational risks. Policy strategies that offer immediate protection for workers and that are not dependent on extensive review of individual pesticides should be pursued." }, { "pmid": "10414791", "title": "Risk assessment and management of occupational exposure to pesticides.", "abstract": "Occupational exposure to pesticides in agriculture and public health applications may cause acute and long-term health effects. Prevention of adverse effects in the users requires actions to be undertaken in the pre-marketing and post-marketing phase of these products. The pre-marketing preventive actions are primary responsibility of industry and the public administration. Admission of pesticide use (registration) is carried out by considering the toxicological properties of each pesticide (hazard identification), determining the dose-response relationship (NOEL identification), assessing or predicting the exposure level in the various scenarios of their use, and characterising the risk. The decision about admission takes into consideration the balance between risks and benefits. The post-marketing preventive activities consist of the promotion of a proper risk management at the workplace. Such a management includes the risk assessment of the specific conditions of use, the adoption of proper work practices, and the health surveillance of the workers. Each country should develop an adequate National Plan for Prevention of Pesticide Risk which allocates different roles and tasks at the central, regional and local level." }, { "pmid": "24287863", "title": "Occupational pesticide exposures and respiratory health.", "abstract": "Pesticides have been widely used to control pest and pest-related diseases in agriculture, fishery, forestry and the food industry. In this review, we identify a number of respiratory symptoms and diseases that have been associated with occupational pesticide exposures. Impaired lung function has also been observed among people occupationally exposed to pesticides. There was strong evidence for an association between occupational pesticide exposure and asthma, especially in agricultural occupations. In addition, we found suggestive evidence for a link between occupational pesticide exposure and chronic bronchitis or COPD. There was inconclusive evidence for the association between occupational pesticide exposure and lung cancer. 
Better control of pesticide uses and enforcement of safety behaviors, such as using personal protection equipment (PPE) in the workplace, are critical for reducing the risk of developing pesticide-related symptoms and diseases. Educational training programs focusing on basic safety precautions and proper uses of personal protection equipment (PPE) are possible interventions that could be used to control the respiratory diseases associated with pesticide exposure in occupational setting." }, { "pmid": "21655127", "title": "Pesticide exposure, safety issues, and risk assessment indicators.", "abstract": "Pesticides are widely used in agricultural production to prevent or control pests, diseases, weeds, and other plant pathogens in an effort to reduce or eliminate yield losses and maintain high product quality. Although pesticides are developed through very strict regulation processes to function with reasonable certainty and minimal impact on human health and the environment, serious concerns have been raised about health risks resulting from occupational exposure and from residues in food and drinking water. Occupational exposure to pesticides often occurs in the case of agricultural workers in open fields and greenhouses, workers in the pesticide industry, and exterminators of house pests. Exposure of the general population to pesticides occurs primarily through eating food and drinking water contaminated with pesticide residues, whereas substantial exposure can also occur in or around the home. Regarding the adverse effects on the environment (water, soil and air contamination from leaching, runoff, and spray drift, as well as the detrimental effects on wildlife, fish, plants, and other non-target organisms), many of these effects depend on the toxicity of the pesticide, the measures taken during its application, the dosage applied, the adsorption on soil colloids, the weather conditions prevailing after application, and how long the pesticide persists in the environment. Therefore, the risk assessment of the impact of pesticides either on human health or on the environment is not an easy and particularly accurate process because of differences in the periods and levels of exposure, the types of pesticides used (regarding toxicity and persistence), and the environmental characteristics of the areas where pesticides are usually applied. Also, the number of the criteria used and the method of their implementation to assess the adverse effects of pesticides on human health could affect risk assessment and would possibly affect the characterization of the already approved pesticides and the approval of the new compounds in the near future. Thus, new tools or techniques with greater reliability than those already existing are needed to predict the potential hazards of pesticides and thus contribute to reduction of the adverse effects on human health and the environment. On the other hand, the implementation of alternative cropping systems that are less dependent on pesticides, the development of new pesticides with novel modes of action and improved safety profiles, and the improvement of the already used pesticide formulations towards safer formulations (e.g., microcapsule suspensions) could reduce the adverse effects of farming and particularly the toxic effects of pesticides. 
In addition, the use of appropriate and well-maintained spraying equipment along with taking all precautions that are required in all stages of pesticide handling could minimize human exposure to pesticides and their potential adverse effects on the environment." }, { "pmid": "18666136", "title": "Acute pesticide poisoning among agricultural workers in the United States, 1998-2005.", "abstract": "BACKGROUND\nApproximately 75% of pesticide usage in the United States occurs in agriculture. As such, agricultural workers are at greater risk of pesticide exposure than non-agricultural workers. However, the magnitude, characteristics and trend of acute pesticide poisoning among agricultural workers are unknown.\n\n\nMETHODS\nWe identified acute pesticide poisoning cases in agricultural workers between the ages of 15 and 64 years that occurred from 1998 to 2005. The California Department of Pesticide Regulation and the SENSOR-Pesticides program provided the cases. Acute occupational pesticide poisoning incidence rates (IR) for those employed in agriculture were calculated, as were incidence rate ratios (IRR) among agricultural workers relative to non-agricultural workers.\n\n\nRESULTS\nOf the 3,271 cases included in the analysis, 2,334 (71%) were employed as farmworkers. The remaining cases were employed as processing/packing plant workers (12%), farmers (3%), and other miscellaneous agricultural workers (19%). The majority of cases had low severity illness (N = 2,848, 87%), while 402 (12%) were of medium severity and 20 (0.6%) were of high severity. One case was fatal. Rates of illness among various agricultural worker categories were highly variable but all, except farmers, showed risk for agricultural workers greater than risk for non-agricultural workers by an order of magnitude or more. Also, the rate among female agricultural workers was almost twofold higher compared to males.\n\n\nCONCLUSION\nThe findings from this study suggest that acute pesticide poisoning in the agricultural industry continues to be an important problem. These findings reinforce the need for heightened efforts to better protect farmworkers from pesticide exposure." }, { "pmid": "11246121", "title": "Pesticide use in developing countries.", "abstract": "Chemical pesticides have been a boon to equatorial, developing nations in their efforts to eradicate insect-borne, endemic diseases, to produce adequate food and to protect forests, plantations and fibre (wood, cotton, clothing, etc.). Controversy exists over the global dependence on such agents, given their excessive use/misuse, their volatility, long-distance transport and eventual environmental contamination in colder climates. Many developing countries are in transitional phases with migration of the agricultural workforce to urban centres in search of better-paying jobs, leaving fewer people responsible for raising traditional foods for themselves and for the new, industrialized workforce. Capable of growing two or three crops per year, these same countries are becoming \"breadbaskets\" for the world, exporting nontraditional agricultural produce to regions having colder climates and shorter growing seasons, thereby earning much needed international trade credits. To attain these goals, there has been increased reliance on chemical pesticides. Many older, nonpatented, more toxic, environmentally persistent and inexpensive chemicals are used extensively in developing nations, creating serious acute health problems and local and global environmental contamination. 
There is growing public concern in these countries that no one is aware of the extent of pesticide residue contamination on local, fresh produce purchased daily or of potential, long-term, adverse health effects on consumers. Few developing nations have a clearly expressed \"philosophy\" concerning pesticides. There is a lack of rigorous legislation and regulations to control pesticides as well as training programs for personnel to inspect and monitor use and to initiate training programs for pesticide consumers." }, { "pmid": "26296070", "title": "Overuse or underuse? An observation of pesticide use in China.", "abstract": "Pesticide use has experienced a dramatic increase worldwide, especially in China, where a wide variety of pesticides are used in large amounts by farmers to control crop pests. While Chinese farmers are often criticized for pesticide overuse, this study shows the coexistence of overuse and underuse of pesticide based on the survey data of pesticide use in rice, cotton, maize, and wheat production in three provinces in China. A novel index amount approach is proposed to convert the amount of multiple pesticides used to control the same pest into an index amount of a referenced pesticide. We compare the summed index amount with the recommended dosage range of the referenced pesticide to classify whether pesticides are overused or underused. Using this new approach, the following main results were obtained. Pesticide overuse and underuse coexist after examining a total of 107 pesticides used to control up to 54 crop pests in rice, cotton, maize, and wheat production. In particular, pesticide overuse in more than half of the total cases for 9 crop pest species is detected. In contrast, pesticide underuse accounts for more than 20% of the total cases for 11 pests. We further indicate that the lack of knowledge and information on pesticide use and pest control among Chinese farmers may cause the coexistence of pesticide overuse and underuse. Our analysis provides indirect evidence that the commercialized agricultural extension system in China probably contributes to the coexistence of overuse and underuse. To improve pesticide use, it is urgent to reestablish the monitoring and forecasting system regarding pest control in China." }, { "pmid": "25666658", "title": "An approach to the identification and regulation of endocrine disrupting pesticides.", "abstract": "Recent decades have seen an increasing interest in chemicals that interact with the endocrine system and have the potential to alter the normal function of this system in humans and wildlife. Chemicals that produce adverse effects caused by interaction with endocrine systems are termed Endocrine Disrupters (EDs). This interest has led regulatory authorities around the world (including the European Union) to consider whether potential endocrine disrupters should be identified and assessed for effects on human health and wildlife and what harmonised criteria could be used for such an assessment. This paper reviews the results of a study whereby toxicity data relating to human health effects of 98 pesticides were assessed for endocrine disruption potential using a number of criteria including the Specific Target Organ Toxicity for repeat exposure (STOT-RE) guidance values used in the European Classification, Labelling and Packaging (CLP) Regulation. 
Of the pesticides assessed, 27% required further information in order to make a more definitive assessment, 14% were considered to be endocrine disrupters, more or less likely to pose a risk, and 59% were considered not to be endocrine disrupters." }, { "pmid": "11884232", "title": "Effects of currently used pesticides in assays for estrogenicity, androgenicity, and aromatase activity in vitro.", "abstract": "Twenty-four pesticides were tested for interactions with the estrogen receptor (ER) and the androgen receptor (AR) in transactivation assays. Estrogen-like effects on MCF-7 cell proliferation and effects on CYP19 aromatase activity in human placental microsomes were also investigated. Pesticides (endosulfan, methiocarb, methomyl, pirimicarb, propamocarb, deltamethrin, fenpropathrin, dimethoate, chlorpyriphos, dichlorvos, tolchlofos-methyl, vinclozolin, iprodion, fenarimol, prochloraz, fosetyl-aluminum, chlorothalonil, daminozid, paclobutrazol, chlormequat chlorid, and ethephon) were selected according to their frequent use in Danish greenhouses. In addition, the metabolite mercaptodimethur sulfoxide, the herbicide tribenuron-methyl, and the organochlorine dieldrin, were included. Several of the pesticides, dieldrin, endosulfan, methiocarb, and fenarimol, acted both as estrogen agonists and androgen antagonists. Prochloraz reacted as both an estrogen and an androgen antagonist. Furthermore, fenarimol and prochloraz were potent aromatase inhibitors while endosulfan was a weak inhibitor. Hence, these three pesticides possess at least three different ways to potentially disturb sex hormone actions. In addition, chlorpyrifos, deltamethrin, tolclofos-methyl, and tribenuron-methyl induced weak responses in one or both estrogenicity assays. Upon cotreatment with 17beta-estradiol, the response was potentiated by endosulfan in the proliferation assay and by pirimicarb, propamocarb, and daminozid in the ER transactivation assay. Vinclozolin reacted as a potent AR antagonist and dichlorvos as a very weak one. Methomyl, pirimicarb, propamocarb, and iprodion weakly stimulated aromatase activity. Although the potencies of the pesticides to react as hormone agonists or antagonists are low compared to the natural ligands, the integrated response in the organism might be amplified by the ability of the pesticides to act via several mechanism and the frequent simultaneous exposure to several pesticides." }, { "pmid": "12826473", "title": "Large effects from small exposures. I. Mechanisms for endocrine-disrupting chemicals with estrogenic activity.", "abstract": "Information concerning the fundamental mechanisms of action of both natural and environmental hormones, combined with information concerning endogenous hormone concentrations, reveals how endocrine-disrupting chemicals with estrogenic activity (EEDCs) can be active at concentrations far below those currently being tested in toxicological studies. Using only very high doses in toxicological studies of EEDCs thus can dramatically underestimate bioactivity. Specifically: a) The hormonal action mechanisms and the physiology of delivery of EEDCs predict with accuracy the low-dose ranges of biological activity, which have been missed by traditional toxicological testing. b) Toxicology assumes that it is valid to extrapolate linearly from high doses over a very wide dose range to predict responses at doses within the physiological range of receptor occupancy for an EEDC; however, because receptor-mediated responses saturate, this assumption is invalid. 
c) Furthermore, receptor-mediated responses can first increase and then decrease as dose increases, contradicting the assumption that dose-response relationships are monotonic. d) Exogenous estrogens modulate a system that is physiologically active and thus is already above threshold, contradicting the traditional toxicological assumption of thresholds for endocrine responses to EEDCs. These four fundamental issues are problematic for risk assessment methods used by regulatory agencies, because they challenge the traditional use of extrapolation from high-dose testing to predict responses at the much lower environmentally relevant doses. These doses are within the range of current exposures to numerous chemicals in wildlife and humans. These problems are exacerbated by the fact that the type of positive and negative controls appropriate to the study of endocrine responses are not part of traditional toxicological testing and are frequently omitted, or when present, have been misinterpreted." }, { "pmid": "24993452", "title": "Delayed and time-cumulative toxicity of imidacloprid in bees, ants and termites.", "abstract": "Imidacloprid, one of the most commonly used insecticides, is highly toxic to bees and other beneficial insects. The regulatory challenge to determine safe levels of residual pesticides can benefit from information about the time-dependent toxicity of this chemical. Using published toxicity data for imidacloprid for several insect species, we construct time-to-lethal-effect toxicity plots and fit temporal power-law scaling curves to the data. The level of toxic exposure that results in 50% mortality after time t is found to scale as t(1.7) for ants, from t(1.6) to t(5) for honeybees, and from t(1.46) to t(2.9) for termites. We present a simple toxicological model that can explain t(2) scaling. Extrapolating the toxicity scaling for honeybees to the lifespan of winter bees suggests that imidacloprid in honey at 0.25 μg/kg would be lethal to a large proportion of bees nearing the end of their life." }, { "pmid": "23820846", "title": "Human skin in vitro permeation of bentazon and isoproturon formulations with or without protective clothing suit.", "abstract": "Skin exposures to chemicals may lead, through percutaneous permeation, to a significant increase in systemic circulation. Skin is the primary route of entry during some occupational activities, especially in agriculture. To reduce skin exposures, the use of personal protective equipment (PPE) is recommended. PPE efficiency is characterized as the time until products permeate through material (lag time, Tlag). Both skin and PPE permeations are assessed using similar in vitro methods; the diffusion cell system. Flow-through diffusion cells were used in this study to assess the permeation of two herbicides, bentazon and isoproturon, as well as four related commercial formulations (Basagran(®), Basamais(®), Arelon(®) and Matara(®)). Permeation was measured through fresh excised human skin, protective clothing suits (suits) (Microchem(®) 3000, AgriSafe Pro(®), Proshield(®) and Microgard(®) 2000 Plus Green), and a combination of skin and suits. Both herbicides, tested by itself or as an active ingredient in formulations, permeated readily through human skin and tested suits (Tlag < 2 h). High permeation coefficients were obtained regardless of formulations or tested membranes, except for Microchem(®) 3000. Short Tlag, were observed even when skin was covered with suits, except for Microchem(®) 3000. 
Kp values tended to decrease when suits covered the skin (except when Arelon(®) was applied to skin covered with AgriSafe Pro and Microgard(®) 2000), suggesting that Tlag alone is insufficient in characterizing suits. To better estimate human skin permeations, in vitro experiments should not only use human skin but also consider the intended use of the suit, i.e., the active ingredient concentrations and type of formulations, which significantly affect skin permeation." }, { "pmid": "24106643", "title": "Dermal exposure associated with occupational end use of pesticides and the role of protective measures.", "abstract": "BACKGROUND\nOccupational end users of pesticides may experience bodily absorption of the pesticide products they use, risking possible health effects. The purpose of this paper is to provide a guide for researchers, practitioners, and policy makers working in the field of agricultural health or other areas where occupational end use of pesticides and exposure issues are of interest.\n\n\nMETHODS\nThis paper characterizes the health effects of pesticide exposure, jobs associated with pesticide use, pesticide-related tasks, absorption of pesticides through the skin, and the use of personal protective equipment (PPE) for reducing exposure.\n\n\nCONCLUSIONS\nAlthough international and national efforts to reduce pesticide exposure through regulatory means should continue, it is difficult in the agricultural sector to implement engineering or system controls. It is clear that use of PPE does reduce dermal pesticide exposure but compliance among the majority of occupationally exposed pesticide end users appears to be poor. More research is needed on higher-order controls to reduce pesticide exposure and to understand the reasons for poor compliance with PPE and identify effective training methods." }, { "pmid": "16175199", "title": "Pesticide contamination of workers in vineyards in France.", "abstract": "In order to build tools to quantify exposure to pesticides of farmers included into epidemiological studies, we performed a field study in Bordeaux vineyards during the 2001 and 2002 treatment seasons to identify parameters related to external contamination of workers. In total, 37 treatment days were observed in tractor operators corresponding to 65 mixing operations, 71 spraying operations and 26 equipment cleaning. In all, four operators with backpack sprayers and seven re-entry workers were also monitored. We performed both detailed observations of treatment characteristics on the whole day and pesticide measurements of external contamination (dermal and inhalation) for each operation. The median dermal contamination was 40.5 mg of active ingredient per day for tractor operators, 68.8 mg for backpack sprayers and 1.3 mg for vineyard workers. Most of the contamination was observed on the hands (49% and 56.2% for mixing and spraying, respectively). The median contribution of respiratory route in the total contamination was 1.1%. A cleaning operation resulted in a 4.20 mg dermal contamination intermediate between a mixing (2.85 mg) and a spraying operation (6.13 mg). Farm owners experienced higher levels than workers and lower contaminations were observed in larger farms. The contamination increased with the number of spraying phases and when equipment cleaning was performed. Types of equipment influenced significantly the daily contamination, whereas personal protective equipment only resulted in a limited decrease of contamination." 
}, { "pmid": "19022871", "title": "Exposure to pesticides in open-field farming in France.", "abstract": "OBJECTIVES\nIdentification of parameters associated with measured pesticide exposure of farmers in open-field farming in France.\n\n\nMETHODS\nOpen-field volunteer farmers were monitored during 1 day use of the herbicide isoproturon on wheat and/or barley during the winters 2001 (n = 9) or 2002 (n = 38) under usual conditions of work. The whole-body method was used to assess potential dermal exposure using coveralls and cotton gloves. Mixing-loading and application tasks were assessed separately with 12 different body areas (hands, arms, forearms, legs, chest, back and thighs) measured for each task (mixing-loading and application separately).\n\n\nRESULTS\nDaily potential dermal exposure to isoproturon ranged from 2.0 to 567.8 mg (median = 57.8 mg) in 47 farmers. Exposure during mixing-loading tasks accounted for 13.9-98.1% of the total exposure (median = 74.8%). For mixing-loading, hands and forearms were the most contaminated body areas accounting for an average of 64 and 14%, respectively. For application, hands were also the most contaminated part of the body, accounting for an average of 57%, and thighs, forearms and chest or back were in the same range as one another, 3-10%. No correlations were observed between potential dermal exposure and area sprayed, duration of spraying or size of the farm. However, a significant relationship was observed between exposure and the type of spraying equipment, with a rear-mounted sprayer leading to a higher exposure level than trailer sprayers. Technical problems, particularly the unplugging of nozzles, and the numbers mixing-loading or application tasks performed were also significantly related with higher levels of exposure.\n\n\nCONCLUSIONS\nThe main results obtained in this study on a large number of observation days are as follows: (i) the mixing-loading step was the most contaminated task in open field accounting for two-thirds of the total daily exposure, (ii) no positive correlation was noted with classically used pesticide-related parameters: farm area, area sprayed and duration of application and (iii) relevant parameters were the type of spraying equipment, the type and number of tasks and technical problems or cases of overflowing." }, { "pmid": "11385639", "title": "Nested case-control analysis of high pesticide exposure events from the Agricultural Health Study.", "abstract": "BACKGROUND\nA nested case-control analysis of high pesticide exposure events (HPEEs) was conducted using the Iowa farmers enrolled in the Agricultural Health Study (AHS).\n\n\nMETHODS\nIn the 12 months of the study, 36 of the 5,970 farmer applicators randomly chosen from the AHS cohort (six per 1,000 farmer applicators per year) met our definition of an HPEE, by reporting \"an incident with fertilizers, weed killers, or other pesticides that caused an unusually high personal exposure\" resulting in physical symptoms or a visit to a health care provider or hospital. Eligibility criteria were met by 25 HPEE cases and 603 randomly selected controls.\n\n\nRESULTS\nSignificant risk factors for an HPEE included: poor financial condition of the farm which limited the purchase of rollover protective structures OR = 4.6 (1.5-16.6), and having a high score on a risk acceptance scale OR = 3.8 (1.4-11.2). 
Other non-significant factors were also identified.\n\n\nCONCLUSIONS\nThe limited statistical power of this study necessitates replication of these analyses with a larger sample. Nonetheless, the observed elevated odds ratios of an HPEE provide hypotheses for future studies that may lead to preventive action." }, { "pmid": "15551369", "title": "Health symptoms and exposure to organophosphate pesticides in farmworkers.", "abstract": "BACKGROUND\nFew studies have examined the relationship between reported health symptoms and exposure to organophosphate (OP) pesticides.\n\n\nMETHODS\nFisher's exact test was used to assess the relationship between self-reported health symptoms and indicators of exposure to OP pesticides in 211 farmworkers in Eastern Washington.\n\n\nRESULTS\nThe health symptoms most commonly reported included headaches (50%), burning eyes (39%), pain in muscles, joints, or bones (35%), a rash or itchy skin (25%), and blurred vision (23%). Exposure to pesticides was prevalent. The proportion of detectable samples of various pesticide residues in house and vehicle dust was weakly associated with reporting certain health symptoms, particularly burning eyes and shortness of breath. No significant associations were found between reporting health symptoms and the proportion of detectable urinary pesticide metabolites.\n\n\nCONCLUSIONS\nCertain self-reported health symptoms in farmworkers may be associated with indicators of exposure to pesticides. Longitudinal studies with more precise health symptom data are needed to explore this relationship further." }, { "pmid": "11675623", "title": "Determining the probability of pesticide exposures among migrant farmworkers: results from a feasibility study.", "abstract": "BACKGROUND\nMigrant and seasonal farmworkers are exposed to pesticides through their work with crops and livestock. Because workers are usually unaware of the pesticides applied, specific pesticide exposures cannot be determined by interviews. We conducted a study to determine the feasibility of identifying probable pesticide exposures based on work histories.\n\n\nMETHODS\nThe study included 162 farm workers in seven states. Interviewers obtained a lifetime work history including the crops, tasks, months, and locations worked. We investigated the availability of survey data on pesticide use for crops and livestock in the seven pilot states. Probabilities of use for pesticide types (herbicides, insecticides, fungicides, etc.) and specific chemicals were calculated from the available data for two farm workers. The work histories were chosen to illustrate how the quality of the pesticide use information varied across crops, states, and years.\n\n\nRESULTS\nFor most vegetable and fruit crops there were regional pesticide use data in the late 1970s, no data in the 1980s, and state-specific data every other year in the 1990s. Annual use surveys for cotton and potatoes began in the late 1980s. For a few crops, including asparagus, broccoli, lettuce, strawberries, plums, and Christmas trees, there were no federal data or data from the seven states before the 1990s.\n\n\nCONCLUSIONS\nWe conclude that identifying probable pesticide exposures is feasible in some locations. However, the lack of pesticide use data before the 1990s for many crops will limit the quality of historic exposure assessment for most workers." 
}, { "pmid": "19967906", "title": "Use of engineering controls and personal protective equipment by certified pesticide applicators.", "abstract": "A convenience survey of 702 certified pesticide applicators was conducted in three states to assess the use of 16 types of engineering controls and 13 types of personal protective equipment (PPE). Results showed that 8 out of 16 engineering devices were adopted by more than 50% of the respondents. The type of crop, size of agricultural operation, and the type of pesticide application equipment were found to influence the adoption of engineering controls. Applicators working on large farms, users of boom and hydraulic sprayers, and growers of field crops were more likely to use engineering devices. Respondents reported a high level of PPE use, with chemical-resistant gloves showing the highest level of compliance. An increase in pesticide applicators wearing appropriate headgear was reported. The majority of respondents did not wear less PPE simply because they used engineering controls. Those who did modify their PPE choices when employing engineering controls used tractors with enclosed cabs and/or were vegetable growers." }, { "pmid": "15657814", "title": "Evaluation of skin and respiratory doses and urinary excretion of alkylphosphates in workers exposed to dimethoate during treatment of olive trees.", "abstract": "This article describes a study of exposure to dimethoate during spraying of olive trees in Viterbo province in central Italy. Airborne concentrations of dimethoate were in the range 1.5 to 56.7 nmol/m(3). Total skin contamination was in the range 228.4 to 3200.7 nmol/d and averaged 96.0% +/- 3.6% of the total potential dose. Cotton garments afforded less skin protection than waterproof ones, which were in turn associated with higher skin contamination than disposable Tyvek overalls. Total potential doses and estimated absorbed doses, including their maxima, were below the acceptable daily intake of dimethoate, which is 43.6 nmol/kg body weight (b.w.). Urinary excretion of alkylphosphates was significantly higher than in the general population, increasing with exposure and usually showing a peak in the urine sample collected after treatment. Metabolite concentrations were influenced by the type of individual protection used: minimum levels were associated with the closed cabin and maximum levels with absence of any respiratory or hand protection. Urinary alkylphosphates showed a good correlation with estimated absorbed doses and are confirmed as sensitive biologic indicators of exposure to phosphoric esters." }, { "pmid": "18784953", "title": "Operative modalities and exposure to pesticides during open field treatments among a group of agricultural subcontractors.", "abstract": "This paper reports the results of a field study of occupational pesticide exposure (respiratory and dermal) among a group of Italian agricultural subcontractors. These workers consistently use pesticides during much of the year, thus resulting in a high exposure risk. Ten complete treatments were monitored during spring/summer. Pesticides that were applied included azinphos-methyl, dicamba, dimethoate, terbuthylazine, and alachlor. Several observations were made on worker operative modalities and the use of personal protective equipment (PPE) during work. Total potential and actual exposure ranged from 14 to 5700 microg and from 0.04 to 4600 microg, respectively. 
Dermal exposure contributed substantially more than inhalation to the total exposure (93.9-100%). Hand contamination ranged from 0.04 to 4600 microg and was the major contributor to dermal exposure. Penetration through specific protective garments was less than 2.4% in all cases, although penetration through general work clothing was as high as 26.8%. The risk evaluation, based on comparison between acceptable daily intake and total absorbed doses, demonstrates that it is presumable to expect possible health effects for workers regularly operating without PPE and improper tractors. Comparisons between exposure levels and operative modalities highlighted that complete PPE and properly equipped tractors contributed to a significant reduction in total exposure to pesticides during agricultural activities. In conclusion, monitored agricultural subcontractors presented very different levels of pesticide exposure due to the high variability of operative modalities and use of PPE. These results indicate the need to critically evaluate the efficacy of training programs required for obtaining a pesticide license in Italy." }, { "pmid": "12363182", "title": "Fluorescent tracer evaluation of chemical protective clothing during pesticide applications in central Florida citrus groves.", "abstract": "Chemical protective clothing (CPC) is often recommended as a method of exposure mitigation among pesticide applicators. This study evaluated four CPC regimens (cotton work shirts and work pants, cotton/polyester coveralls, and two non-woven garments) during 33 airblast applications of the organophosphorus insecticide ethion in central Florida citrus groves. CPC performance was determined by measurement of fluorescent tracer deposition on skin surfaces beneath garments with a video imaging analysis instrument (VITAE system), and by alpha-cellulose patches placed outside and beneath the garments. Non-woven coveralls allowed significantly greater exposure than did traditional woven garments, primarily because of design factors (e.g., large sleeve and neck openings). The greatest exposure occurred on the forearms beneath the non-woven garments. Fabric penetration was detected for all test garments; 5% to 7% of the ethion measured outside the garments was found beneath the garments. The clothing materials tested were not chemically resistant under these field conditions. Exposurepathways that would probably be undetected by the patch technique were characterized effectively with fluorescent tracers and video imaging analysis. However, the patch technique was more sensitive in detecting fabric penetration. CPC garments have been improved since this study was conducted, but performance testing under field conditions is not widespread. Workers conducting airblast applications would be better protected by closed cab systems or any technology that places an effective barrier between the worker and the pesticide spray." }, { "pmid": "10793284", "title": "Effectiveness of interventions in reducing pesticide overexposure and poisonings.", "abstract": "OBJECTIVE\nThe objective of this paper was to review the effectiveness of interventions to reduce pesticide overexposure and poisonings in worker populations.\n\n\nMETHODS\nWe used the Cochrane Collaboration search strategy to search the following databases for articles that tested the effectiveness of interventions in reducing human pesticide exposure or poisonings: MEDLINE, EMBASE, and Occupational Safety and Health (NIOSHTC). 
Interventions considered included comparisons of pesticide application methods, pesticide mixing methods, worker education, biological monitoring programs, personal protective equipment (PPE) use, pesticide substitutions, and legislation. The outcomes of interest included biological monitoring measures or personal exposure monitoring indicating a reduction of pesticide exposure, observed increased use of PPE, reduction in lost workdays, and where possible, evidence of changes in pesticide poisoning rates as identified by registries and population surveys. Studies were reviewed in depth with special attention to size and study design.\n\n\nRESULTS\nMost studies evaluated exposure during differing configurations of PPE or during different mixing or handling methods. Most studies were small field tests of protective equipment involving less than 20 workers. Some studies examined biological indices of exposure such as cholinesterase or urinary metabolites. Studies showed that PPE was effective in reducing exposure. No controlled studies were found that addressed reducing pesticide poisonings.\n\n\nCONCLUSIONS\nChanges in application procedures, packaging, mixing, use of personal protective equipment, and biological monitoring reduced pesticide exposure under controlled conditions. Cholinesterase monitoring can identify workers with a higher risk of overexposure. Most techniques were not tested in actual worksite programs. Interventions should be examined for their ability to reduce pesticide overexposure in actual working populations. No controlled evaluations of large legislative initiatives were found." }, { "pmid": "17822819", "title": "Derivation of single layer clothing penetration factors from the pesticide handlers exposure database.", "abstract": "Quantitative characterization of the penetration of chemical residues through various types and configurations of clothing is an important underpinning of mitigation strategies to reduce dermal exposure to occupational cohorts. The objective of the evaluation presented herein is the development of pesticide clothing penetration (or conversely protection) factors for single layer clothing (i.e., long-sleeved shirt, long pants; gloves are not included) based on dermal exposure monitoring data (passive dosimetry) included in the Environmental Protection Agency's Pesticide Handlers Exposure Database (PHED). The analysis of penetration per replicate was conducted by comparison of the inside and outside (total deposition), expressed as mug/cm(2), for each replicate pair of dermal dosimeters. Clothing penetration was investigated as a function of job classification, dosimetry sampling method, body part, application method, and type of formulation. Grand mean single layer clothing penetration values for patch (n=2029) and whole-body (n=100) dosimeter samples from PHED were 12.12 (SE=0.33; SD=15.02) and 8.21 (SE=1.01; SD=10.14) percent, respectively. Linear regression was used to evaluate clothing penetration as a function of outer dosimeter loading. The regression analysis supports the hypothesis that single layer clothing penetration increases with decreasing outer dosimeter loading." }, { "pmid": "8239718", "title": "Protection afforded greenhouse pesticide applicators by coveralls: a field test.", "abstract": "Applicators of chlorpyrifos, fluvalinate, and ethazol to ornamentals in a Florida greenhouse were monitored for exposure in a replicated experiment. 
Pesticide exposure was assessed, using pads placed inside and outside three types of protective coveralls. Potential total body accumulation rates, excluding hands, as calculated from outside pads, depended strongly upon the rate at which pesticide left the spray nozzles. When these total body rates were normalized for spray rate, the mean results, in mg-deposited/kg-sprayed, ranged from 166 to 1126, depending upon the compound applied and the application device. Overall penetration of pesticide through a disposable synthetic coverall was 3 +/- 1% for chlorpyrifos and fluvalinate, and 35 +/- 9% for ethazol. Penetration through a reusable treated twill coverall was 19 +/- 6% for chlorpyrifos, 22 +/- 13% for fluvalinate, and 38 +/- 5% for ethazol." }, { "pmid": "10204668", "title": "Exposure of farmers to phosmet, a swine insecticide.", "abstract": "OBJECTIVES\nThe goal of this study was to measure dermal and inhalation exposures to phosmet during application to animals and to identify what determinants of exposure influence the exposure levels.\n\n\nMETHODS\nTen farmers were monitored using dermal patches, gloves, and air sampling media during normal activities of applying phosmet to pigs for insect control. Exposures were measured on the clothing (outer), under the clothing (inner), on the hands, and in the air. Possible exposure determinants were identified, and a questionnaire on work practices was administered.\n\n\nRESULTS\nThe geometric mean of the outer exposure measurements was 79 microg/h, whereas the geometric mean of the inner exposure measurements was 6 microg/h. The geometric mean for hand exposure was 534 microg/h, and the mean air concentration was 0.2 microg/m3. Glove use was associated with the hand and total dermal exposure levels, but no other determinant was associated with any of the exposure measures. The average penetration through the clothing was 54%, which dropped to 8% when the farmers wearing short sleeves were excluded. The farmers reported an average of 40 hours a year performing insecticide-related tasks.\n\n\nCONCLUSIONS\nFarmers who applied phosmet to animals had measurable exposures, but the levels were lower than what has been seen in other pesticide applications. Inhalation exposures were insignificant when compared with dermal exposures, which came primarily from the hands. Clothing, particularly gloves, provided substantial protection from exposures. No other exposure determinant was identified." }, { "pmid": "12505907", "title": "Determination of potential dermal and inhalation operator exposure to malathion in greenhouses with the whole body dosimetry method.", "abstract": "One of the steps during the authorization process of plant protection products (PPP) in the European Union is to evaluate the safety of the operator. For this purpose, information on the probable levels of operator exposure during the proposed uses of the PPP is required. These levels can be estimated by using existing mathematical models or from field study data. However, the existing models have several shortcomings, including the lack of data for operator exposure levels during spray applications by hand lance, especially in greenhouses. The present study monitored the potential dermal and inhalation operator exposure from hand-held lance applications of malathion on greenhouse tomatoes at low and high spraying pressures. The methodology for monitoring potential exposure was based on the whole body dosimetry method. 
Inhalation exposure was monitored using personal air pumps and XAD-2 sampling tubes. For the monitoring of hand exposure, cotton gloves were used in two trials and rubber gloves in another three. The total volumes of spray solution contaminating the body of the operator were 25.37 and 35.83 ml/h, corresponding to 0.05 and 0.07% of the applied spray solution, respectively, in the case of low pressure knapsack applications and from 160.76 to 283.45 ml/h, corresponding to 0.09-0.19% of the spray solution applied, in the case of hand lance applications with tractor-generated high pressure. Counts on gloves depended on the absorbance/repellency of the glove material. The potential inhalation exposures were estimated at 0.07 and 0.09 ml/h in the case of low pressure knapsack applications, based on a ventilation rate of 25 l/min. Both potential dermal operator exposure (excluding hands) and potential inhalation exposure were increased by a factor of approximately 7 when the application pressure was increased from 3 to 18 bar in greenhouse trials with a tractor-assisted hand lance, the rest of the application conditions being very similar." } ]
Scientific Reports
28959011
PMC5620056
10.1038/s41598-017-12569-z
Discriminative Scale Learning (DiScrn): Applications to Prostate Cancer Detection from MRI and Needle Biopsies
There has been recent substantial interest in extracting sub-visual features from medical images for improved disease characterization compared to what might be achievable via visual inspection alone. Features such as Haralick and Gabor can provide a multi-scale representation of the original image by extracting measurements across differently sized neighborhoods. While these multi-scale features are effective, the process of extracting them from large-scale digital pathology images is computationally expensive. Moreover, for different problems, different scales and neighborhood sizes may be more or less important, and thus many of the extracted features may end up being redundant. In this paper, we present a Discriminative Scale learning (DiScrn) approach that attempts to automatically identify the distinctive scales at which features are able to best separate cancerous from non-cancerous regions on both radiologic and digital pathology tissue images. To evaluate its efficacy, our approach was employed to detect the presence and extent of prostate cancer on a total of 60 MRI and digitized histopathology images. Compared to a multi-scale feature analysis approach invoking features across all scales, DiScrn achieved 66% computational efficiency while also achieving comparable or even better classifier performance.
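To make concrete how such texture features yield a multi-scale representation by computing measurements over differently sized neighborhoods, and why doing so at every candidate scale is costly, the following is a minimal Python sketch using scikit-image's Gabor filter and gray-level co-occurrence utilities. The frequencies, window sizes, and summary statistics here are illustrative placeholders, not the settings used in this paper.

import numpy as np
from skimage.filters import gabor
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19 naming

def multiscale_texture_features(patch_u8, frequencies=(0.1, 0.2, 0.4), win_sizes=(5, 11, 21)):
    """Multi-scale Gabor and co-occurrence (Haralick-style) features for one uint8 patch.
    Each Gabor frequency and each co-occurrence window size acts as one 'scale'."""
    feats = []
    for f in frequencies:                                   # Gabor responses, one frequency per scale
        real, imag = gabor(patch_u8.astype(float), frequency=f)
        feats += [real.mean(), real.std(), imag.mean(), imag.std()]
    c0, c1 = patch_u8.shape[0] // 2, patch_u8.shape[1] // 2
    for w in win_sizes:                                     # co-occurrence statistics per window size
        h = w // 2                                          # assumes the patch is larger than the window
        win = patch_u8[c0 - h:c0 + h + 1, c1 - h:c1 + h + 1]
        glcm = graycomatrix(win, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        feats += [graycoprops(glcm, p).mean() for p in ("contrast", "homogeneity", "energy")]
    return np.asarray(feats)

patch = (np.random.default_rng(0).random((32, 32)) * 255).astype(np.uint8)  # toy patch
print(multiscale_texture_features(patch).shape)             # 3*4 Gabor + 3*3 GLCM = 21 features

Repeating such computations for every pixel of a whole-slide image, across all candidate scales, is precisely the cost that motivates restricting extraction to a small set of discriminative scales.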
Related Work and Brief Overview of DiScrn
Scale selection has been a key research issue in the computer vision community since the 1990s15. Early investigations in scale selection were based on identifying scale-invariant locations of interest10,13,16,17. Although the idea of locating high-interest points is appealing, it is not feasible for applications where every image pixel needs to be investigated, e.g., scenarios where one is attempting to identify the spatial location of cancer presence on a radiographic image. In these settings, identifying a single, most discriminating scale for each individual image pixel is computationally untenable. To address this challenge, Wang et al.18 presented a scale learning approach for finding the most discriminative scales for Local Binary Patterns (LBP) for prostate cancer detection on T2W MRI. While a number of recent papers have focused on computer-assisted and radiomic analysis of prostate cancer from MRI19,20, these approaches typically involve extracting a number of different texture features (Haralick co-occurrence, Gabor filter, and LBP texture features) to define “signatures” for the cancerous and non-cancerous classes. Similarly, some researchers have taken a computer-based feature analysis approach to detecting and grading prostate cancer from digitized prostate pathology images using shape, morphologic, and texture-based features2,6,21–23. However, with all these approaches, features are typically extracted either at a single scale or across multiple scales, and feature selection is then employed to identify the most discriminating scales2,3.
In this paper we present a new generalized discriminative scale learning (DiScrn) framework that can be applied across an arbitrary number of feature scales. The conventional dissimilarity measurement for multi-scale features is to assign a uniform weight to each scale. Building on this weighting idea, DiScrn invokes a scale selection scheme that retains the scales associated with large weights and ignores those with relatively trivial weights. Figure 1 illustrates the pipeline of the new DiScrn approach. It consists of two stages: training and testing. At each stage, we first perform superpixel detection on each image to cluster homogeneous neighboring pixels, which greatly reduces the overall computational cost of the approach. At the training stage, we sample an equal number of positive and negative pixels from each of the labeled training images via the superpixel-based approach. We subsequently extract four types of multi-scale features for each pixel: local binary patterns (LBP)12, Gabor wavelets (Gabor)8, Haralick9, and Pyramid Histogram of Visual Words (PHOW)24. The discriminability of these features has been previously and substantively demonstrated for medical images2,3. For each feature type, the corresponding most discriminating scales are independently learned via the DiScrn algorithm.
Figure 1. Pipeline of the new DiScrn approach. At the training stage, superpixel detection is performed. An equal number of positive and negative pixels based on the superpixels (see details in Section III.D) are selected during the training phase. Up to N different textural features are extracted at various scales for each sampled pixel. For each feature class, its most discriminating scales are learned via DiScrn. Subsequently, a cancer/non-cancer classifier is trained only with the features extracted at the learned scales.
At the testing stage, with superpixels detected on a test image, the features and the corresponding scales identified during the learning phase are used to create new image representations. Exhaustive labeling over the entire input image is performed to generate a probability map reflecting the likelihood of cancerous and non-cancerous regions. Majority voting within each superpixel is finally applied to smooth the generated probability map.
DiScrn differs from traditional feature selection approaches25–28 in that it specifically aims to select the most discriminative feature scales, whereas traditional feature selection aims to directly select the most discriminating subset of features. Both can reduce the number of features and therefore the computational burden associated with feature extraction. However, only DiScrn guarantees that only the most predictive feature scales are used for subsequent feature extraction during the testing phase, which is particularly beneficial when feature extraction is parallelized. Once the DiScrn approach has been applied, texture features are extracted only at the learned scales for both classifier training and subsequent detection. In particular, cancerous regions are detected via exhaustive classification over the entire input image, yielding a statistical probability heatmap in which locations with higher probabilities correspond to cancerous regions. Majority voting within each superpixel is finally applied to smooth the generated probability map. To evaluate the performance of DiScrn, testing is performed on multi-site datasets (MRI and histopathology).
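As a complement to the description above, the following is a minimal, end-to-end sketch of a DiScrn-style pipeline in Python, assuming scikit-image (SLIC superpixels, multi-radius LBP) and scikit-learn (random forest classifier). The Fisher-score-style ranking used here to weight and retain scales is only a stand-in for the DiScrn weight-learning step, and the images, masks, and parameter values are synthetic placeholders rather than the authors' data or code.

import numpy as np
from skimage.segmentation import slic
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

RADII = (1, 2, 3, 5, 8)                       # candidate LBP scales (radii); illustrative only

def lbp_stack(img, radii=RADII):
    """Per-pixel multi-scale LBP codes: shape (H, W, number of scales)."""
    return np.stack([local_binary_pattern(img, P=8 * r, R=r, method="uniform")
                     for r in radii], axis=-1)

def balanced_sample(mask, n_per_class, rng):
    """Sample equal numbers of cancer (1) and non-cancer (0) pixel indices."""
    pos = np.flatnonzero(mask.ravel() == 1)
    neg = np.flatnonzero(mask.ravel() == 0)
    return np.concatenate([rng.choice(pos, n_per_class, replace=False),
                           rng.choice(neg, n_per_class, replace=False)])

def select_scales(X, y, keep=2):
    """Toy scale weighting: rank scales by a Fisher-like score and keep the top ones
    (a stand-in for the DiScrn weight-learning step)."""
    scores = []
    for s in range(X.shape[1]):
        f = X[:, s]
        num = abs(f[y == 1].mean() - f[y == 0].mean())
        den = f[y == 1].std() + f[y == 0].std() + 1e-8
        scores.append(num / den)
    return np.argsort(scores)[::-1][:keep]

rng = np.random.default_rng(0)
train_img = (rng.random((128, 128)) * 255).astype(np.uint8)   # placeholder training image
train_mask = (rng.random((128, 128)) > 0.5).astype(int)       # placeholder cancer mask (1 = cancer)

# Training: multi-scale features, balanced sampling, scale learning, classifier fitting
feats = lbp_stack(train_img)                                  # (H, W, S) features at all candidate scales
idx = balanced_sample(train_mask, 500, rng)
X_tr = feats.reshape(-1, feats.shape[-1])[idx]
y_tr = train_mask.ravel()[idx]
kept = select_scales(X_tr, y_tr, keep=2)                      # learned discriminative scales
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr[:, kept], y_tr)

# Testing: features at learned scales only, per-pixel probability map, superpixel majority voting
test_img = (rng.random((128, 128)) * 255).astype(np.uint8)
test_feats = lbp_stack(test_img, radii=[RADII[k] for k in kept])
prob = clf.predict_proba(test_feats.reshape(-1, len(kept)))[:, 1].reshape(test_img.shape)
sp = slic(test_img, n_segments=200, compactness=0.1, channel_axis=None)  # grayscale; scikit-image >= 0.19
smoothed = np.zeros_like(prob)
for label in np.unique(sp):
    m = sp == label
    smoothed[m] = float((prob[m] > 0.5).mean() > 0.5)         # majority vote within each superpixel

The point this sketch illustrates is that, at test time, features are extracted only at the scales retained during training, which is where the computational savings reported for DiScrn come from.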
[ "20570758", "17948727", "18812252", "20443509", "20493759", "19164079", "21988838", "17482752", "16350920", "22214541", "24875018", "25203987", "16119262", "22641706", "21911913", "21255974", "21626933", "22337003", "21960175", "23294985" ]
[ { "pmid": "20570758", "title": "A boosted Bayesian multiresolution classifier for prostate cancer detection from digitized needle biopsies.", "abstract": "Diagnosis of prostate cancer (CaP) currently involves examining tissue samples for CaP presence and extent via a microscope, a time-consuming and subjective process. With the advent of digital pathology, computer-aided algorithms can now be applied to disease detection on digitized glass slides. The size of these digitized histology images (hundreds of millions of pixels) presents a formidable challenge for any computerized image analysis program. In this paper, we present a boosted Bayesian multiresolution (BBMR) system to identify regions of CaP on digital biopsy slides. Such a system would serve as an important preceding step to a Gleason grading algorithm, where the objective would be to score the invasiveness and severity of the disease. In the first step, our algorithm decomposes the whole-slide image into an image pyramid comprising multiple resolution levels. Regions identified as cancer via a Bayesian classifier at lower resolution levels are subsequently examined in greater detail at higher resolution levels, thereby allowing for rapid and efficient analysis of large images. At each resolution level, ten image features are chosen from a pool of over 900 first-order statistical, second-order co-occurrence, and Gabor filter features using an AdaBoost ensemble method. The BBMR scheme, operating on 100 images obtained from 58 patients, yielded: 1) areas under the receiver operating characteristic curve (AUC) of 0.84, 0.83, and 0.76, respectively, at the lowest, intermediate, and highest resolution levels and 2) an eightfold savings in terms of computational time compared to running the algorithm directly at full (highest) resolution. The BBMR model outperformed (in terms of AUC): 1) individual features (no ensemble) and 2) a random forest classifier ensemble obtained by bagging multiple decision tree classifiers. The apparent drop-off in AUC at higher image resolutions is due to lack of fine detail in the expert annotation of CaP and is not an artifact of the classifier. The implicit feature selection done via the AdaBoost component of the BBMR classifier reveals that different classes and types of image features become more relevant for discriminating between CaP and benign areas at different image resolutions." }, { "pmid": "17948727", "title": "Multifeature prostate cancer diagnosis and Gleason grading of histological images.", "abstract": "We present a study of image features for cancer diagnosis and Gleason grading of the histological images of prostate. In diagnosis, the tissue image is classified into the tumor and nontumor classes. In Gleason grading, which characterizes tumor aggressiveness, the image is classified as containing a low- or high-grade tumor. The image sets used in this paper consisted of 367 and 268 color images for the diagnosis and Gleason grading problems, respectively, and were captured from representative areas of hematoxylin and eosin-stained tissue retrieved from tissue microarray cores or whole sections. The primary contribution of this paper is to aggregate color, texture, and morphometric cues at the global and histological object levels for classification. Features representing different visual cues were combined in a supervised learning framework. 
We compared the performance of Gaussian, k-nearest neighbor, and support vector machine classifiers together with the sequential forward feature selection algorithm. On diagnosis, using a five-fold cross-validation estimate, an accuracy of 96.7% was obtained. On Gleason grading, the achieved accuracy of classification into low- and high-grade classes was 81.0%." }, { "pmid": "18812252", "title": "A qualitative and a quantitative analysis of an auto-segmentation module for prostate cancer.", "abstract": "PURPOSE\nThis work describes the clinical validation of an automatic segmentation algorithm in CT-based radiotherapy planning for prostate cancer patients.\n\n\nMATERIAL AND METHODS\nThe validated auto-segmentation algorithm (Smart Segmentation, version 1.0.05) is a rule-based algorithm using anatomical reference points and organ-specific segmentation methods, developed by Varian Medical Systems (Varian Medical Systems iLab, Baden, Switzerland). For the qualitative analysis, 39 prostate patients are analysed by six clinicians. Clinicians are asked to rate the auto-segmented organs (prostate, bladder, rectum and femoral heads) and to indicate the number of slices to correct. For the quantitative analysis, seven radiation oncologists are asked to contour seven prostate patients. The individual clinician contour variations are compared to the automatic contours by means of surface and volume statistics, calculating the relative volume errors and both the volume and slice-by-slice degree of support, a statistical metric developed for the purposes of this validation.\n\n\nRESULTS\nThe mean time needed for the automatic module to contour the four structures is about one minute on a standard computer. The qualitative evaluation using a score with four levels (\"not acceptable\", \"acceptable\", \"good\" and \"excellent\") shows that the mean score for the automatically contoured prostate is \"good\"; the bladder scores between \"excellent\" and \"good\"; the rectum scores between \"acceptable\" and \"not acceptable\". Using the concept of surface and volume degree of support, the degree of support given to the automatic module is comparable to the relative agreement among the clinicians for prostate and bladder. The slice-by-slice analysis of the surface degree of support pinpointed the areas of disagreement among the clinicians as well as between the clinicians and the automatic module.\n\n\nCONCLUSION\nThe efficiency and the limits of the automatic module are investigated with both a qualitative and a quantitative analysis. In general, with efficient correction tools at hand, the use of this auto-segmentation module will lead to a time gain for the prostate and the bladder; with the present version of the algorithm, modelling of the rectum still needs improvement. For the quantitative validation, the concept of relative volume error and degree of support proved very useful." }, { "pmid": "20443509", "title": "Supervised and unsupervised methods for prostate cancer segmentation with multispectral MRI.", "abstract": "PURPOSE\nMagnetic resonance imaging (MRI) has been proposed as a promising alternative to transrectal ultrasound for the detection and localization of prostate cancer and fusing the information from multispectral MR images is currently an active research area. 
In this study, the goal is to develop automated methods that combine the pharmacokinetic parameters derived from dynamic contrast enhanced (DCE) MRI with quantitative T2 MRI and diffusion weighted imaging (DWI) in contrast to most of the studies which were performed with human readers. The main advantages of the automated methods are that the observer variability is removed and easily reproducible results can be efficiently obtained when the methods are applied to a test data. The goal is also to compare the performance of automated supervised and unsupervised methods for prostate cancer localization with multispectral MRI.\n\n\nMETHODS\nThe authors use multispectral MRI data from 20 patients with biopsy-confirmed prostate cancer patients, and the image set consists of parameters derived from T2, DWI, and DCE-MRI. The authors utilize large margin classifiers for prostate cancer segmentation and compare them to an unsupervised method the authors have previously developed. The authors also develop thresholding schemes to tune support vector machines (SVMs) and their probabilistic counterparts, relevance vector machines (RVMs), for an improved performance with respect to a selected criterion. Moreover, the authors apply a thresholding method to make the unsupervised fuzzy Markov random fields method fully automatic.\n\n\nRESULTS\nThe authors have developed a supervised machine learning method that performs better than the previously developed unsupervised method and, additionally, have found that there is no significant difference between the SVM and RVM segmentation results. The results also show that the proposed methods for threshold selection can be used to tune the automated segmentation methods to optimize results for certain criteria such as accuracy or sensitivity. The test results of the automated algorithms indicate that using multispectral MRI improves prostate cancer segmentation performance when compared to single MR images, a result similar to the human reader studies that were performed before.\n\n\nCONCLUSIONS\nThe automated methods presented here can help diagnose and detect prostate cancer, and improve segmentation results. For that purpose, multispectral MRI provides better information about cancer and normal regions in the prostate when compared to methods that use single MRI techniques; thus, the different MRI measurements provide complementary information in the automated methods. Moreover, the use of supervised algorithms in such automated methods remain a good alternative to the use of unsupervised algorithms." }, { "pmid": "20493759", "title": "High-throughput detection of prostate cancer in histological sections using probabilistic pairwise Markov models.", "abstract": "In this paper we present a high-throughput system for detecting regions of carcinoma of the prostate (CaP) in HSs from radical prostatectomies (RPs) using probabilistic pairwise Markov models (PPMMs), a novel type of Markov random field (MRF). At diagnostic resolution a digitized HS can contain 80Kx70K pixels - far too many for current automated Gleason grading algorithms to process. However, grading can be separated into two distinct steps: (1) detecting cancerous regions and (2) then grading these regions. The detection step does not require diagnostic resolution and can be performed much more quickly. 
Thus, we introduce a CaP detection system capable of analyzing an entire digitized whole-mount HS (2x1.75cm(2)) in under three minutes (on a desktop computer) while achieving a CaP detection sensitivity and specificity of 0.87 and 0.90, respectively. We obtain this high-throughput by tailoring the system to analyze the HSs at low resolution (8microm per pixel). This motivates the following algorithm: (Step 1) glands are segmented, (Step 2) the segmented glands are classified as malignant or benign, and (Step 3) the malignant glands are consolidated into continuous regions. The classification of individual glands leverages two features: gland size and the tendency for proximate glands to share the same class. The latter feature describes a spatial dependency which we model using a Markov prior. Typically, Markov priors are expressed as the product of potential functions. Unfortunately, potential functions are mathematical abstractions, and constructing priors through their selection becomes an ad hoc procedure, resulting in simplistic models such as the Potts. Addressing this problem, we introduce PPMMs which formulate priors in terms of probability density functions, allowing the creation of more sophisticated models. To demonstrate the efficacy of our CaP detection system and assess the advantages of using a PPMM prior instead of the Potts, we alternately incorporate both priors into our algorithm and rigorously evaluate system performance, extracting statistics from over 6000 simulations run across 40 RP specimens. Perhaps the most indicative result is as follows: at a CaP sensitivity of 0.87 the accompanying false positive rates of the system when alternately employing the PPMM and Potts priors are 0.10 and 0.20, respectively." }, { "pmid": "19164079", "title": "Prostate cancer segmentation with simultaneous estimation of Markov random field parameters and class.", "abstract": "Prostate cancer is one of the leading causes of death from cancer among men in the United States. Currently, high-resolution magnetic resonance imaging (MRI) has been shown to have higher accuracy than trans-rectal ultrasound (TRUS) when used to ascertain the presence of prostate cancer. As MRI can provide both morphological and functional images for a tissue of interest, some researchers are exploring the uses of multispectral MRI to guide prostate biopsies and radiation therapy. However, success with prostate cancer localization based on current imaging methods has been limited due to overlap in feature space of benign and malignant tissues using any one MRI method and the interobserver variability. In this paper, we present a new unsupervised segmentation method for prostate cancer detection, using fuzzy Markov random fields (fuzzy MRFs) for the segmentation of multispectral MR prostate images. Typically, both hard and fuzzy MRF models have two groups of parameters to be estimated: the MRF parameters and class parameters for each pixel in the image. To date, these two parameters have been treated separately, and estimated in an alternating fashion. In this paper, we develop a new method to estimate the parameters defining the Markovian distribution of the measured data, while performing the data clustering simultaneously. We perform computer simulations on synthetic test images and multispectral MR prostate datasets to demonstrate the efficacy and efficiency of the proposed method and also provide a comparison with some of the commonly used methods." 
}, { "pmid": "21988838", "title": "Optimization of complex cancer morphology detection using the SIVQ pattern recognition algorithm.", "abstract": "For personalization of medicine, increasingly clinical and demographic data are integrated into nomograms for prognostic use, while molecular biomarkers are being developed to add independent diagnostic, prognostic, or management information. In a number of cases in surgical pathology, morphometric quantitation is already performed manually or semi-quantitatively, with this effort contributing to diagnostic workup. Digital whole slide imaging, coupled with emerging image analysis algorithms, offers great promise as an adjunctive tool for the surgical pathologist in areas of screening, quality assurance, consistency, and quantitation. We have recently reported such an algorithm, SIVQ (Spatially Invariant Vector Quantization), which avails itself of the geometric advantages of ring vectors for pattern matching, and have proposed a number of potential applications. One key test, however, remains the need for demonstration and optimization of SIVQ for discrimination between foreground (neoplasm- malignant epithelium) and background (normal parenchyma, stroma, vessels, inflammatory cells). Especially important is the determination of relative contributions of each key SIVQ matching parameter with respect to the algorithm's overall detection performance. Herein, by combinatorial testing of SIVQ ring size, sub-ring number, and inter-ring wobble parameters, in the setting of a morphologically complex bladder cancer use case, we ascertain the relative contributions of each of these parameters towards overall detection optimization using urothelial carcinoma as a use case, providing an exemplar by which this algorithm and future histology-oriented pattern matching tools may be validated and subsequently, implemented broadly in other appropriate microscopic classification settings." }, { "pmid": "17482752", "title": "Computer-aided diagnosis of prostate cancer with emphasis on ultrasound-based approaches: a review.", "abstract": "This paper reviews the state of the art in computer-aided diagnosis of prostate cancer and focuses, in particular, on ultrasound-based techniques for detection of cancer in prostate tissue. The current standard procedure for diagnosis of prostate cancer, i.e., ultrasound-guided biopsy followed by histopathological analysis of tissue samples, is invasive and produces a high rate of false negatives resulting in the need for repeated trials. It is against these backdrops that the search for new methods to diagnose prostate cancer continues. Image-based approaches (such as MRI, ultrasound and elastography) represent a major research trend for diagnosis of prostate cancer. Due to the integration of ultrasound imaging in the current clinical procedure for detection of prostate cancer, we specifically provide a more detailed review of methodologies that use ultrasound RF-spectrum parameters, B-scan texture features and Doppler measures for prostate tissue characterization. We present current and future directions of research aimed at computer-aided detection of prostate cancer and conclude that ultrasound is likely to play an important role in the field." }, { "pmid": "16350920", "title": "Automated detection of prostatic adenocarcinoma from high-resolution ex vivo MRI.", "abstract": "Prostatic adenocarcinoma is the most commonly occurring cancer among men in the United States, second only to skin cancer. 
Currently, the only definitive method to ascertain the presence of prostatic cancer is by trans-rectal ultrasound (TRUS) directed biopsy. Owing to the poor image quality of ultrasound, the accuracy of TRUS is only 20%-25%. High-resolution magnetic resonance imaging (MRI) has been shown to have a higher accuracy of prostate cancer detection compared to ultrasound. Consequently, several researchers have been exploring the use of high resolution MRI in performing prostate biopsies. Visual detection of prostate cancer, however, continues to be difficult owing to its apparent lack of shape, and the fact that several malignant and benign structures have overlapping intensity and texture characteristics. In this paper, we present a fully automated computer-aided detection (CAD) system for detecting prostatic adenocarcinoma from 4 Tesla ex vivo magnetic resonance (MR) imagery of the prostate. After the acquired MR images have been corrected for background inhomogeneity and nonstandardness, novel three-dimensional (3-D) texture features are extracted from the 3-D MRI scene. A Bayesian classifier then assigns each image voxel a \"likelihood\" of malignancy for each feature independently. The \"likelihood\" images generated in this fashion are then combined using an optimally weighted feature combination scheme. Quantitative evaluation was performed by comparing the CAD results with the manually ascertained ground truth for the tumor on the MRI. The tumor labels on the MR slices were determined manually by an expert by visually registering the MR slices with the corresponding regions on the histology slices. We evaluated our CAD system on a total of 33 two-dimensional (2-D) MR slices from five different 3-D MR prostate studies. Five slices from two different glands were used for training. Our feature combination scheme was found to outperform the individual texture features, and also other popularly used feature combination methods, including AdaBoost, ensemble averaging, and majority voting. Further, in several instances our CAD system performed better than the experts in terms of accuracy, the expert segmentations being determined solely from visual inspection of the MRI data. In addition, the intrasystem variability (changes in CAD accuracy with changes in values of system parameters) was significantly lower than the corresponding intraobserver and interobserver variability. CAD performance was found to be very similar for different training sets. Future work will focus on extending the methodology to guide high-resolution MRI-assisted in vivo prostate biopsies." }, { "pmid": "22214541", "title": "GuiTope: an application for mapping random-sequence peptides to protein sequences.", "abstract": "BACKGROUND\nRandom-sequence peptide libraries are a commonly used tool to identify novel ligands for binding antibodies, other proteins, and small molecules. It is often of interest to compare the selected peptide sequences to the natural protein binding partners to infer the exact binding site or the importance of particular residues. The ability to search a set of sequences for similarity to a set of peptides may sometimes enable the prediction of an antibody epitope or a novel binding partner. We have developed a software application designed specifically for this task.\n\n\nRESULTS\nGuiTope provides a graphical user interface for aligning peptide sequences to protein sequences. 
All alignment parameters are accessible to the user including the ability to specify the amino acid frequency in the peptide library; these frequencies often differ significantly from those assumed by popular alignment programs. It also includes a novel feature to align di-peptide inversions, which we have found improves the accuracy of antibody epitope prediction from peptide microarray data and shows utility in analyzing phage display datasets. Finally, GuiTope can randomly select peptides from a given library to estimate a null distribution of scores and calculate statistical significance.\n\n\nCONCLUSIONS\nGuiTope provides a convenient method for comparing selected peptide sequences to protein sequences, including flexible alignment parameters, novel alignment features, ability to search a database, and statistical significance of results. The software is available as an executable (for PC) at http://www.immunosignature.com/software and ongoing updates and source code will be available at sourceforge.net." }, { "pmid": "24875018", "title": "Co-occurring gland angularity in localized subgraphs: predicting biochemical recurrence in intermediate-risk prostate cancer patients.", "abstract": "Quantitative histomorphometry (QH) refers to the application of advanced computational image analysis to reproducibly describe disease appearance on digitized histopathology images. QH thus could serve as an important complementary tool for pathologists in interrogating and interpreting cancer morphology and malignancy. In the US, annually, over 60,000 prostate cancer patients undergo radical prostatectomy treatment. Around 10,000 of these men experience biochemical recurrence within 5 years of surgery, a marker for local or distant disease recurrence. The ability to predict the risk of biochemical recurrence soon after surgery could allow for adjuvant therapies to be prescribed as necessary to improve long term treatment outcomes. The underlying hypothesis with our approach, co-occurring gland angularity (CGA), is that in benign or less aggressive prostate cancer, gland orientations within local neighborhoods are similar to each other but are more chaotically arranged in aggressive disease. By modeling the extent of the disorder, we can differentiate surgically removed prostate tissue sections from (a) benign and malignant regions and (b) more and less aggressive prostate cancer. For a cohort of 40 intermediate-risk (mostly Gleason sum 7) surgically cured prostate cancer patients where half suffered biochemical recurrence, the CGA features were able to predict biochemical recurrence with 73% accuracy. Additionally, for 80 regions of interest chosen from the 40 studies, corresponding to both normal and cancerous cases, the CGA features yielded a 99% accuracy. CGAs were shown to be statistically signicantly ([Formula: see text]) better at predicting BCR compared to state-of-the-art QH methods and postoperative prostate cancer nomograms." }, { "pmid": "25203987", "title": "Supervised multi-view canonical correlation analysis (sMVCCA): integrating histologic and proteomic features for predicting recurrent prostate cancer.", "abstract": "In this work, we present a new methodology to facilitate prediction of recurrent prostate cancer (CaP) following radical prostatectomy (RP) via the integration of quantitative image features and protein expression in the excised prostate. 
Creating a fused predictor from high-dimensional data streams is challenging because the classifier must 1) account for the \"curse of dimensionality\" problem, which hinders classifier performance when the number of features exceeds the number of patient studies and 2) balance potential mismatches in the number of features across different channels to avoid classifier bias towards channels with more features. Our new data integration methodology, supervised Multi-view Canonical Correlation Analysis (sMVCCA), aims to integrate infinite views of highdimensional data to provide more amenable data representations for disease classification. Additionally, we demonstrate sMVCCA using Spearman's rank correlation which, unlike Pearson's correlation, can account for nonlinear correlations and outliers. Forty CaP patients with pathological Gleason scores 6-8 were considered for this study. 21 of these men revealed biochemical recurrence (BCR) following RP, while 19 did not. For each patient, 189 quantitative histomorphometric attributes and 650 protein expression levels were extracted from the primary tumor nodule. The fused histomorphometric/proteomic representation via sMVCCA combined with a random forest classifier predicted BCR with a mean AUC of 0.74 and a maximum AUC of 0.9286. We found sMVCCA to perform statistically significantly (p < 0.05) better than comparative state-of-the-art data fusion strategies for predicting BCR. Furthermore, Kaplan-Meier analysis demonstrated improved BCR-free survival prediction for the sMVCCA-fused classifier as compared to histology or proteomic features alone." }, { "pmid": "16119262", "title": "Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy.", "abstract": "Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a compact set of superior features at very low cost. We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminate analysis) and four different data sets (handwritten digits, arrhythmia, NCI cancer cell lines, and lymphoma tissues). The results confirm that mRMR leads to promising improvement on feature selection and classification accuracy." }, { "pmid": "22641706", "title": "SLIC superpixels compared to state-of-the-art superpixel methods.", "abstract": "Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. 
Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation." }, { "pmid": "21911913", "title": "A least-squares framework for Component Analysis.", "abstract": "Over the last century, Component Analysis (CA) methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Canonical Correlation Analysis (CCA), Locality Preserving Projections (LPP), and Spectral Clustering (SC) have been extensively used as a feature extraction step for modeling, classification, visualization, and clustering. CA techniques are appealing because many can be formulated as eigen-problems, offering great potential for learning linear and nonlinear representations of data in closed-form. However, the eigen-formulation often conceals important analytic and computational drawbacks of CA techniques, such as solving generalized eigen-problems with rank deficient matrices (e.g., small sample size problem), lacking intuitive interpretation of normalization factors, and understanding commonalities and differences between CA methods. This paper proposes a unified least-squares framework to formulate many CA methods. We show how PCA, LDA, CCA, LPP, SC, and its kernel and regularized extensions correspond to a particular instance of least-squares weighted kernel reduced rank regression (LS--WKRRR). The LS-WKRRR formulation of CA methods has several benefits: 1) provides a clean connection between many CA techniques and an intuitive framework to understand normalization factors; 2) yields efficient numerical schemes to solve CA techniques; 3) overcomes the small sample size problem; 4) provides a framework to easily extend CA methods. We derive weighted generalizations of PCA, LDA, SC, and CCA, and several new CA techniques." }, { "pmid": "21255974", "title": "Determining histology-MRI slice correspondences for defining MRI-based disease signatures of prostate cancer.", "abstract": "Mapping the spatial disease extent in a certain anatomical organ/tissue from histology images to radiological images is important in defining the disease signature in the radiological images. One such scenario is in the context of men with prostate cancer who have had pre-operative magnetic resonance imaging (MRI) before radical prostatectomy. For these cases, the prostate cancer extent from ex vivo whole-mount histology is to be mapped to in vivo MRI. The need for determining radiology-image-based disease signatures is important for (a) training radiologist residents and (b) for constructing an MRI-based computer aided diagnosis (CAD) system for disease detection in vivo. However, a prerequisite for this data mapping is the determination of slice correspondences (i.e. indices of each pair of corresponding image slices) between histological and magnetic resonance images. The explicit determination of such slice correspondences is especially indispensable when an accurate 3D reconstruction of the histological volume cannot be achieved because of (a) the limited tissue slices with unknown inter-slice spacing, and (b) obvious histological image artifacts (tissue loss or distortion). In the clinic practice, the histology-MRI slice correspondences are often determined visually by experienced radiologists and pathologists working in unison, but this procedure is laborious and time-consuming. 
We present an iterative method to automatically determine slice correspondence between images from histology and MRI via a group-wise comparison scheme, followed by 2D and 3D registration. The image slice correspondences obtained using our method were compared with the ground truth correspondences determined via consensus of multiple experts over a total of 23 patient studies. In most instances, the results of our method were very close to the results obtained via visual inspection by these experts." }, { "pmid": "21626933", "title": "Elastic registration of multimodal prostate MRI and histology via multiattribute combined mutual information.", "abstract": "PURPOSE\nBy performing registration of preoperative multiprotocol in vivo magnetic resonance (MR) images of the prostate with corresponding whole-mount histology (WMH) sections from postoperative radical prostatectomy specimens, an accurate estimate of the spatial extent of prostate cancer (CaP) on in vivo MR imaging (MRI) can be retrospectively established. This could allow for definition of quantitative image-based disease signatures and lead to development of classifiers for disease detection on multiprotocol in vivo MRI. Automated registration of MR and WMH images of the prostate is complicated by dissimilar image intensities, acquisition artifacts, and nonlinear shape differences.\n\n\nMETHODS\nThe authors present a method for automated elastic registration of multiprotocol in vivo MRI and WMH sections of the prostate. The method, multiattribute combined mutual information (MACMI), leverages all available multiprotocol image data to drive image registration using a multivariate formulation of mutual information.\n\n\nRESULTS\nElastic registration using the multivariate MI formulation is demonstrated for 150 corresponding sets of prostate images from 25 patient studies with T2-weighted and dynamic-contrast enhanced MRI and 85 image sets from 15 studies with an additional functional apparent diffusion coefficient MRI series. Qualitative results of MACMI evaluation via visual inspection suggest that an accurate delineation of CaP extent on MRI is obtained. Results of quantitative evaluation on 150 clinical and 20 synthetic image sets indicate improved registration accuracy using MACMI compared to conventional pairwise mutual information-based approaches.\n\n\nCONCLUSIONS\nThe authors' approach to the registration of in vivo multiprotocol MRI and ex vivo WMH of the prostate using MACMI is unique, in that (1) information from all available image protocols is utilized to drive the registration with histology, (2) no additional, intermediate ex vivo radiology or gross histology images need be obtained in addition to the routinely acquired in vivo MRI series, and (3) no corresponding anatomical landmarks are required to be identified manually or automatically on the images." }, { "pmid": "22337003", "title": "Central gland and peripheral zone prostate tumors have significantly different quantitative imaging signatures on 3 Tesla endorectal, in vivo T2-weighted MR imagery.", "abstract": "PURPOSE\nTo identify and evaluate textural quantitative imaging signatures (QISes) for tumors occurring within the central gland (CG) and peripheral zone (PZ) of the prostate, respectively, as seen on in vivo 3 Tesla (T) endorectal T2-weighted (T2w) MRI.\n\n\nMATERIALS AND METHODS\nThis study used 22 preoperative prostate MRI data sets (16 PZ, 6 CG) acquired from men with confirmed prostate cancer (CaP) and scheduled for radical prostatectomy (RP). 
The prostate region-of-interest (ROI) was automatically delineated on T2w MRI, following which it was corrected for intensity-based acquisition artifacts. An expert pathologist manually delineated the dominant tumor regions on ex vivo sectioned and stained RP specimens as well as identified each of the studies as either a CG or PZ CaP. A nonlinear registration scheme was used to spatially align and then map CaP extent from the ex vivo RP sections onto the corresponding MRI slices. A total of 110 texture features were then extracted on a per-voxel basis from all T2w MRI data sets. An information theoretic feature selection procedure was then applied to identify QISes comprising T2w MRI textural features specific to CG and PZ CaP, respectively. The QISes for CG and PZ CaP were evaluated by means of Quadratic Discriminant Analysis (QDA) on a per-voxel basis against the ground truth for CaP on T2w MRI, mapped from corresponding histology.\n\n\nRESULTS\nThe QDA classifier yielded an area under the Receiver Operating characteristic curve of 0.86 for the CG CaP studies, and 0.73 for the PZ CaP studies over 25 runs of randomized three-fold cross-validation. By comparison, the accuracy of the QDA classifier was significantly lower when (a) using all 110 texture features (with no feature selection applied), as well as (b) a randomly selected combination of texture features.\n\n\nCONCLUSION\nCG and PZ prostate cancers have significantly differing textural quantitative imaging signatures on T2w endorectal in vivo MRI." }, { "pmid": "21960175", "title": "Multimodal wavelet embedding representation for data combination (MaWERiC): integrating magnetic resonance imaging and spectroscopy for prostate cancer detection.", "abstract": "Recently, both Magnetic Resonance (MR) Imaging (MRI) and Spectroscopy (MRS) have emerged as promising tools for detection of prostate cancer (CaP). However, due to the inherent dimensionality differences in MR imaging and spectral information, quantitative integration of T(2) weighted MRI (T(2)w MRI) and MRS for improved CaP detection has been a major challenge. In this paper, we present a novel computerized decision support system called multimodal wavelet embedding representation for data combination (MaWERiC) that employs, (i) wavelet theory to extract 171 Haar wavelet features from MRS and 54 Gabor features from T(2)w MRI, (ii) dimensionality reduction to individually project wavelet features from MRS and T(2)w MRI into a common reduced Eigen vector space, and (iii), a random forest classifier for automated prostate cancer detection on a per voxel basis from combined 1.5 T in vivo MRI and MRS. A total of 36 1.5 T endorectal in vivo T(2)w MRI and MRS patient studies were evaluated per voxel by MaWERiC using a three-fold cross validation approach over 25 iterations. Ground truth for evaluation of results was obtained by an expert radiologist annotations of prostate cancer on a per voxel basis who compared each MRI section with corresponding ex vivo wholemount histology sections with the disease extent mapped out on histology. 
Results suggest that MaWERiC based MRS T(2)w meta-classifier (mean AUC, μ = 0.89 ± 0.02) significantly outperformed (i) a T(2)w MRI (using wavelet texture features) classifier (μ = 0.55 ± 0.02), (ii) a MRS (using metabolite ratios) classifier (μ = 0.77 ± 0.03), (iii) a decision fusion classifier obtained by combining individual T(2)w MRI and MRS classifier outputs (μ = 0.85 ± 0.03), and (iv) a data combination method involving a combination of metabolic MRS and MR signal intensity features (μ = 0.66 ± 0.02)." }, { "pmid": "23294985", "title": "Multi-kernel graph embedding for detection, Gleason grading of prostate cancer via MRI/MRS.", "abstract": "Even though 1 in 6 men in the US, in their lifetime are expected to be diagnosed with prostate cancer (CaP), only 1 in 37 is expected to die on account of it. Consequently, among many men diagnosed with CaP, there has been a recent trend to resort to active surveillance (wait and watch) if diagnosed with a lower Gleason score on biopsy, as opposed to seeking immediate treatment. Some researchers have recently identified imaging markers for low and high grade CaP on multi-parametric (MP) magnetic resonance (MR) imaging (such as T2 weighted MR imaging (T2w MRI) and MR spectroscopy (MRS)). In this paper, we present a novel computerized decision support system (DSS), called Semi Supervised Multi Kernel Graph Embedding (SeSMiK-GE), that quantitatively combines structural, and metabolic imaging data for distinguishing (a) benign versus cancerous, and (b) high- versus low-Gleason grade CaP regions from in vivo MP-MRI. A total of 29 1.5Tesla endorectal pre-operative in vivo MP MRI (T2w MRI, MRS) studies from patients undergoing radical prostatectomy were considered in this study. Ground truth for evaluation of the SeSMiK-GE classifier was obtained via annotation of disease extent on the pre-operative imaging by visually correlating the MRI to the ex vivo whole mount histologic specimens. The SeSMiK-GE framework comprises of three main modules: (1) multi-kernel learning, (2) semi-supervised learning, and (3) dimensionality reduction, which are leveraged for the construction of an integrated low dimensional representation of the different imaging and non-imaging MRI protocols. Hierarchical classifiers for diagnosis and Gleason grading of CaP are then constructed within this unified low dimensional representation. Step 1 of the hierarchical classifier employs a random forest classifier in conjunction with the SeSMiK-GE based data representation and a probabilistic pairwise Markov Random Field algorithm (which allows for imposition of local spatial constraints) to yield a voxel based classification of CaP presence. The CaP region of interest identified in Step 1 is then subsequently classified as either high or low Gleason grade CaP in Step 2. Comparing SeSMiK-GE with unimodal T2w MRI, MRS classifiers and a commonly used feature concatenation (COD) strategy, yielded areas (AUC) under the receiver operative curve (ROC) of (a) 0.89±0.09 (SeSMiK), 0.54±0.18 (T2w MRI), 0.61±0.20 (MRS), and 0.64±0.23 (COD) for distinguishing benign from CaP regions, and (b) 0.84±0.07 (SeSMiK),0.54±0.13 (MRI), 0.59±0.19 (MRS), and 0.62±0.18 (COD) for distinguishing high and low grade CaP using a leave one out cross-validation strategy, all evaluations being performed on a per voxel basis. 
Our results suggest that following further rigorous validation, SeSMiK-GE could be developed into a powerful diagnostic and prognostic tool for detection and grading of CaP in vivo and in helping to determine the appropriate treatment option. Identifying low grade disease in vivo might allow CaP patients to opt for active surveillance rather than immediately opt for aggressive therapy such as radical prostatectomy." } ]
Frontiers in Plant Science
29033961
PMC5625571
10.3389/fpls.2017.01680
Fast High Resolution Volume Carving for 3D Plant Shoot Reconstruction
Volume carving is a well-established method for visual hull reconstruction and has been successfully applied in plant phenotyping, especially for 3D reconstruction of small plants and seeds. When imaging larger plants at still relatively high spatial resolution (≤1 mm), well-known implementations become slow or have prohibitively large memory needs. Here we present and evaluate a computationally efficient algorithm for volume carving, allowing e.g., 3D reconstruction of plant shoots. It combines a well-known multi-grid representation called “Octree” with an efficient image region integration scheme called “Integral image.” Speedup with respect to less efficient octree implementations is about two orders of magnitude, due to the introduced refinement strategy “Mark and refine.” Speedup is about a factor of 1.6 compared to a highly optimized GPU implementation using equidistant voxel grids, even without using any parallelization. We demonstrate the application of this method for trait derivation of banana and maize plants.
1.1. Related work
Measuring plant geometry from single view-point 2D images often suffers from insufficient information, especially when plant organs occlude each other (self-occlusion). In order to obtain more detailed information and recover the plant's 3D geometric structure, volume carving is a well-established method to generate 3D point clouds of plant shoots (Koenderink et al., 2009; Golbach et al., 2015; Klodt and Cremers, 2015), seeds (Roussel et al., 2015, 2016; Jahnke et al., 2016), and roots (Clark et al., 2011; Zheng et al., 2011; Topp et al., 2013). Volume carving can be applied in high-throughput scenarios (Golbach et al., 2015): for the reconstruction of relatively simple plant structures like tomato seedlings, image reconstruction takes ~25–60 ms, based on a well thought-out camera geometry using 10 cameras and a suitably low voxel resolution of 240 × 240 × 300 voxels at 0.25 mm voxel width. Short reconstruction times are achieved by precomputing voxel-to-pixel projections for each of the fully calibrated cameras. However, precomputing lookup tables is not feasible for high voxel resolutions due to storage restrictions (Ladikos et al., 2008). Current implementations popular in plant sciences suffer from high computational complexity when voxel resolutions are high. We therefore implemented and tested a fast and reliable volume carving algorithm based on octrees (cf. Klodt and Cremers, 2015) and integral images (cf. Veksler, 2003), and investigated different refinement strategies. This work summarizes and extends our findings presented in Embgenbroich (2015). Visual hull reconstruction via volume carving is a well-known shape-from-silhouette technique (Martin and Aggarwal, 1983; Potmesil, 1987; Laurentini, 1994) and has found many applications. Octrees as a multi-grid approach and integral images for reliable and fast foreground testing have also been used successfully with volume carving in medical applications (Ladikos et al., 2008) and human pose reconstruction (Kanaujia et al., 2013). Real-time applications at 512³ voxel resolution have been achieved where suitable caching strategies on GPUs can be applied, e.g., for video conferencing (Waizenegger et al., 2009). Here we demonstrate that even higher spatial resolutions are achievable on consumer computer hardware without prohibitively large computational cost. Subsequent octree-voxel-based processing allows extraction of plant structural features suitable for phenotypic trait derivation.
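To make the combination of octree refinement and integral-image foreground testing concrete, the following Python sketch outlines the core carving loop under simplifying assumptions: binary silhouette masks and calibrated 3×4 pinhole projection matrices are assumed to be given, the per-view test uses the bounding box of the projected cube corners (a conservative approximation), and all function and variable names (carve, box_sum, cams) are illustrative choices rather than the published implementation.

```python
# Illustrative sketch only: octree volume carving with integral-image
# silhouette tests. Assumes binary masks and 3x4 projection matrices P.
import numpy as np

def integral_image(mask):
    """Summed-area table with an extra zero row/column for easy box sums."""
    ii = np.zeros((mask.shape[0] + 1, mask.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(mask.astype(np.int64), axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Foreground pixel count inside rows [r0, r1) x cols [c0, c1)."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def project_corners(P, center, half):
    """Pixel coordinates of the 8 corners of an axis-aligned cube."""
    signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                      for sy in (-1, 1) for sz in (-1, 1)], dtype=float)
    corners = np.c_[center + signs * half, np.ones(8)]   # homogeneous, 8 x 4
    uvw = corners @ P.T
    return uvw[:, :2] / uvw[:, 2:3]                      # (x, y) per corner

def child_centers(center, half):
    """Centers of the 8 octree children of a cube."""
    q = half / 2.0
    return [center + np.array([sx, sy, sz]) * q
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]

def carve(center, half, cams, depth, max_depth, occupied):
    """Keep, discard, or refine a cube by testing it against every silhouette."""
    for P, ii, shape in cams:                            # shape = (rows, cols)
        uv = project_corners(P, center, half)
        c0, r0 = np.floor(uv.min(axis=0)).astype(int)
        c1, r1 = np.ceil(uv.max(axis=0)).astype(int) + 1
        r0, c0 = max(r0, 0), max(c0, 0)
        r1, c1 = min(r1, shape[0]), min(c1, shape[1])
        if r1 <= r0 or c1 <= c0 or box_sum(ii, r0, c0, r1, c1) == 0:
            return                                       # empty in one view: carve away
    if depth == max_depth:
        occupied.append((center.copy(), half))           # keep as occupied leaf voxel
        return
    for child in child_centers(center, half):            # otherwise refine
        carve(child, half / 2.0, cams, depth + 1, max_depth, occupied)

# Usage (hypothetical data): cams = [(P, integral_image(mask), mask.shape), ...]
# occupied = []; carve(np.zeros(3), 0.5, cams, 0, 8, occupied)
```

This recursion only conveys the basic idea; the reported two-orders-of-magnitude speedup additionally relies on the paper's "mark and refine" strategy and calibration details, which are not reproduced here.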
[ "27853175", "25801304", "24139902", "25501589", "26535051", "24721154", "23451789", "22074787", "21284859", "27663410", "21869096", "25774205", "27547208", "22553969", "27375628", "23580618", "17388907" ]
[ { "pmid": "27853175", "title": "Salinity tolerance loci revealed in rice using high-throughput non-invasive phenotyping.", "abstract": "High-throughput phenotyping produces multiple measurements over time, which require new methods of analyses that are flexible in their quantification of plant growth and transpiration, yet are computationally economic. Here we develop such analyses and apply this to a rice population genotyped with a 700k SNP high-density array. Two rice diversity panels, indica and aus, containing a total of 553 genotypes, are phenotyped in waterlogged conditions. Using cubic smoothing splines to estimate plant growth and transpiration, we identify four time intervals that characterize the early responses of rice to salinity. Relative growth rate, transpiration rate and transpiration use efficiency (TUE) are analysed using a new association model that takes into account the interaction between treatment (control and salt) and genetic marker. This model allows the identification of previously undetected loci affecting TUE on chromosome 11, providing insights into the early responses of rice to salinity, in particular into the effects of salinity on plant growth and transpiration." }, { "pmid": "25801304", "title": "Phytotyping(4D) : a light-field imaging system for non-invasive and accurate monitoring of spatio-temporal plant growth.", "abstract": "Integrative studies of plant growth require spatially and temporally resolved information from high-throughput imaging systems. However, analysis and interpretation of conventional two-dimensional images is complicated by the three-dimensional nature of shoot architecture and by changes in leaf position over time, termed hyponasty. To solve this problem, Phytotyping(4D) uses a light-field camera that simultaneously provides a focus image and a depth image, which contains distance information about the object surface. Our automated pipeline segments the focus images, integrates depth information to reconstruct the three-dimensional architecture, and analyses time series to provide information about the relative expansion rate, the timing of leaf appearance, hyponastic movement, and shape for individual leaves and the whole rosette. Phytotyping(4D) was calibrated and validated using discs of known sizes, and plants tilted at various orientations. Information from this analysis was integrated into the pipeline to allow error assessment during routine operation. To illustrate the utility of Phytotyping(4D) , we compare diurnal changes in Arabidopsis thaliana wild-type Col-0 and the starchless pgm mutant. Compared to Col-0, pgm showed very low relative expansion rate in the second half of the night, a transiently increased relative expansion rate at the onset of light period, and smaller hyponastic movement including delayed movement after dusk, both at the level of the rosette and individual leaves. Our study introduces light-field camera systems as a tool to accurately measure morphological and growth-related features in plants." }, { "pmid": "24139902", "title": "Field high-throughput phenotyping: the new crop breeding frontier.", "abstract": "Constraints in field phenotyping capability limit our ability to dissect the genetics of quantitative traits, particularly those related to yield and stress tolerance (e.g., yield potential as well as increased drought, heat tolerance, and nutrient efficiency, etc.). 
The development of effective field-based high-throughput phenotyping platforms (HTPPs) remains a bottleneck for future breeding advances. However, progress in sensors, aeronautics, and high-performance computing are paving the way. Here, we review recent advances in field HTPPs, which should combine at an affordable cost, high capacity for data recording, scoring and processing, and non-invasive remote sensing methods, together with automated environmental data collection. Laboratory analyses of key plant parts may complement direct phenotyping under field conditions. Improvements in user-friendly data management together with a more powerful interpretation of results should increase the use of field HTPPs, therefore increasing the efficiency of crop genetic improvement to meet the needs of future generations." }, { "pmid": "25501589", "title": "Dissecting the phenotypic components of crop plant growth and drought responses based on high-throughput image analysis.", "abstract": "Significantly improved crop varieties are urgently needed to feed the rapidly growing human population under changing climates. While genome sequence information and excellent genomic tools are in place for major crop species, the systematic quantification of phenotypic traits or components thereof in a high-throughput fashion remains an enormous challenge. In order to help bridge the genotype to phenotype gap, we developed a comprehensive framework for high-throughput phenotype data analysis in plants, which enables the extraction of an extensive list of phenotypic traits from nondestructive plant imaging over time. As a proof of concept, we investigated the phenotypic components of the drought responses of 18 different barley (Hordeum vulgare) cultivars during vegetative growth. We analyzed dynamic properties of trait expression over growth time based on 54 representative phenotypic features. The data are highly valuable to understand plant development and to further quantify growth and crop performance features. We tested various growth models to predict plant biomass accumulation and identified several relevant parameters that support biological interpretation of plant growth and stress tolerance. These image-based traits and model-derived parameters are promising for subsequent genetic mapping to uncover the genetic basis of complex agronomic traits. Taken together, we anticipate that the analytical framework and analysis results presented here will be useful to advance our views of phenotypic trait components underlying plant development and their responses to environmental cues." }, { "pmid": "26535051", "title": "Digital imaging of root traits (DIRT): a high-throughput computing and collaboration platform for field-based root phenomics.", "abstract": "BACKGROUND\nPlant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate the progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. 
Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual low-throughput.\n\n\nDESCRIPTION\nHere, we present an open-source phenomics platform \"DIRT\", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute \"commons\" enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size.\n\n\nCONCLUSION\nDIRT is an automated high-throughput computing and collaboration platform for field based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots. It enables scientists to store, manage and share crop root images with metadata and compute RSA traits from thousands of images in parallel. It makes high-throughput RSA trait computation available to the community with just a few button clicks. As such it enables plant scientists to spend more time on science rather than on technology. All stored and computed data is easily accessible to the public and broader scientific community. We hope that easy data accessibility will attract new tool developers and spur creative data usage that may even be applied to other fields of science." }, { "pmid": "24721154", "title": "Rapid determination of leaf area and plant height by using light curtain arrays in four species with contrasting shoot architecture.", "abstract": "BACKGROUND\nLight curtain arrays (LC), a recently introduced phenotyping method, yield a binary data matrix from which a shoot silhouette is reconstructed. We addressed the accuracy and applicability of LC in assessing leaf area and maximum height (base to the highest leaf tip) in a phenotyping platform. LC were integrated to an automated routine for positioning, allowing in situ measurements. Two dicotyledonous (rapeseed, tomato) and two monocotyledonous (maize, barley) species with contrasting shoot architecture were investigated. To evaluate if averaging multiple view angles helps in resolving self-overlaps, we acquired a data set by rotating plants every 10° for 170°. To test how rapid these measurements can be without loss of information, we evaluated nine scanning speeds. Leaf area of overlapping plants was also estimated to assess the possibility to scale this method for plant stands.\n\n\nRESULTS\nThe relation between measured and calculated maximum height was linear and nearly the same for all species. Linear relations were also found between plant leaf area and calculated pixel area. However, the regression slope was different between monocotyledonous and dicotyledonous species. Increasing the scanning speed stepwise from 0.9 to 23.4 m s-1 did not affect the estimation of maximum height. Instead, the calculated pixel area was inversely proportional to scanning speed. 
The estimation of plant leaf area by means of calculated pixel area became more accurate by averaging consecutive silhouettes and/or increasing the angle between them. Simulations showed that decreasing plant distance gradually from 20 to 0 cm, led to underestimation of plant leaf area owing to overlaps. This underestimation was more important for large plants of dicotyledonous species and for small plants of monocotyledonous ones.\n\n\nCONCLUSIONS\nLC offer an accurate estimation of plant leaf area and maximum height, while the number of consecutive silhouettes that needs to be averaged is species-dependent. A constant scanning speed is important for leaf area estimations by using LC. Simulations of the effect of varying plant spacing gave promising results for method application in sets of partly overlapping plants, which applies also to field conditions during and after canopy closure for crops sown in rows." }, { "pmid": "23451789", "title": "Future scenarios for plant phenotyping.", "abstract": "With increasing demand to support and accelerate progress in breeding for novel traits, the plant research community faces the need to accurately measure increasingly large numbers of plants and plant parameters. The goal is to provide quantitative analyses of plant structure and function relevant for traits that help plants better adapt to low-input agriculture and resource-limited environments. We provide an overview of the inherently multidisciplinary research in plant phenotyping, focusing on traits that will assist in selecting genotypes with increased resource use efficiency. We highlight opportunities and challenges for integrating noninvasive or minimally invasive technologies into screening protocols to characterize plant responses to environmental challenges for both controlled and field experimentation. Although technology evolves rapidly, parallel efforts are still required because large-scale phenotyping demands accurate reporting of at least a minimum set of information concerning experimental protocols, data management schemas, and integration with modeling. The journey toward systematic plant phenotyping has only just begun." }, { "pmid": "22074787", "title": "Phenomics--technologies to relieve the phenotyping bottleneck.", "abstract": "Global agriculture is facing major challenges to ensure global food security, such as the need to breed high-yielding crops adapted to future climates and the identification of dedicated feedstock crops for biofuel production (biofuel feedstocks). Plant phenomics offers a suite of new technologies to accelerate progress in understanding gene function and environmental responses. This will enable breeders to develop new agricultural germplasm to support future agricultural production. In this review we present plant physiology in an 'omics' perspective, review some of the new high-throughput and high-resolution phenotyping tools and discuss their application to plant biology, functional genomics and crop breeding." }, { "pmid": "21284859", "title": "Accurate inference of shoot biomass from high-throughput images of cereal plants.", "abstract": "With the establishment of advanced technology facilities for high throughput plant phenotyping, the problem of estimating plant biomass of individual plants from their two dimensional images is becoming increasingly important. The approach predominantly cited in literature is to estimate the biomass of a plant as a linear function of the projected shoot area of plants in the images. 
However, the estimation error from this model, which is solely a function of projected shoot area, is large, prohibiting accurate estimation of the biomass of plants, particularly for the salt-stressed plants. In this paper, we propose a method based on plant specific weight for improving the accuracy of the linear model and reducing the estimation bias (the difference between actual shoot dry weight and the value of the shoot dry weight estimated with a predictive model). For the proposed method in this study, we modeled the plant shoot dry weight as a function of plant area and plant age. The data used for developing our model and comparing the results with the linear model were collected from a completely randomized block design experiment. A total of 320 plants from two bread wheat varieties were grown in a supported hydroponics system in a greenhouse. The plants were exposed to two levels of hydroponic salt treatments (NaCl at 0 and 100 mM) for 6 weeks. Five harvests were carried out. Each time 64 randomly selected plants were imaged and then harvested to measure the shoot fresh weight and shoot dry weight. The results of statistical analysis showed that with our proposed method, most of the observed variance can be explained, and moreover only a small difference between actual and estimated shoot dry weight was obtained. The low estimation bias indicates that our proposed method can be used to estimate biomass of individual plants regardless of what variety the plant is and what salt treatment has been applied. We validated this model on an independent set of barley data. The technique presented in this paper may extend to other plants and types of stresses." }, { "pmid": "27663410", "title": "phenoSeeder - A Robot System for Automated Handling and Phenotyping of Individual Seeds.", "abstract": "The enormous diversity of seed traits is an intriguing feature and critical for the overwhelming success of higher plants. In particular, seed mass is generally regarded to be key for seedling development but is mostly approximated by using scanning methods delivering only two-dimensional data, often termed seed size. However, three-dimensional traits, such as the volume or mass of single seeds, are very rarely determined in routine measurements. Here, we introduce a device named phenoSeeder, which enables the handling and phenotyping of individual seeds of very different sizes. The system consists of a pick-and-place robot and a modular setup of sensors that can be versatilely extended. Basic biometric traits detected for individual seeds are two-dimensional data from projections, three-dimensional data from volumetric measures, and mass, from which seed density is also calculated. Each seed is tracked by an identifier and, after phenotyping, can be planted, sorted, or individually stored for further evaluation or processing (e.g. in routine seed-to-plant tracking pipelines). By investigating seeds of Arabidopsis (Arabidopsis thaliana), rapeseed (Brassica napus), and barley (Hordeum vulgare), we observed that, even for apparently round-shaped seeds of rapeseed, correlations between the projected area and the mass of seeds were much weaker than between volume and mass. This indicates that simple projections may not deliver good proxies for seed mass. 
Although throughput is limited, we expect that automated seed phenotyping on a single-seed basis can contribute valuable information for applications in a wide range of wild or crop species, including seed classification, seed sorting, and assessment of seed quality." }, { "pmid": "21869096", "title": "Volumetric descriptions of objects from multiple views.", "abstract": "Occluding contours from an image sequence with view-point specifications determine a bounding volume approximating the object generating the contours. The initial creation and continual refinement of the approximation requires a volumetric representation that facilitates modification yet is descriptive of surface detail. The ``volume segment'' representation presented in this paper is one such representation." }, { "pmid": "25774205", "title": "The leaf angle distribution of natural plant populations: assessing the canopy with a novel software tool.", "abstract": "BACKGROUND\nThree-dimensional canopies form complex architectures with temporally and spatially changing leaf orientations. Variations in canopy structure are linked to canopy function and they occur within the scope of genetic variability as well as a reaction to environmental factors like light, water and nutrient supply, and stress. An important key measure to characterize these structural properties is the leaf angle distribution, which in turn requires knowledge on the 3-dimensional single leaf surface. Despite a large number of 3-d sensors and methods only a few systems are applicable for fast and routine measurements in plants and natural canopies. A suitable approach is stereo imaging, which combines depth and color information that allows for easy segmentation of green leaf material and the extraction of plant traits, such as leaf angle distribution.\n\n\nRESULTS\nWe developed a software package, which provides tools for the quantification of leaf surface properties within natural canopies via 3-d reconstruction from stereo images. Our approach includes a semi-automatic selection process of single leaves and different modes of surface characterization via polygon smoothing or surface model fitting. Based on the resulting surface meshes leaf angle statistics are computed on the whole-leaf level or from local derivations. We include a case study to demonstrate the functionality of our software. 48 images of small sugar beet populations (4 varieties) have been analyzed on the base of their leaf angle distribution in order to investigate seasonal, genotypic and fertilization effects on leaf angle distributions. We could show that leaf angle distributions change during the course of the season with all varieties having a comparable development. Additionally, different varieties had different leaf angle orientation that could be separated in principle component analysis. In contrast nitrogen treatment had no effect on leaf angles.\n\n\nCONCLUSIONS\nWe show that a stereo imaging setup together with the appropriate image processing tools is capable of retrieving the geometric leaf surface properties of plants and canopies. Our software package provides whole-leaf statistics but also a local estimation of leaf angles, which may have great potential to better understand and quantify structural canopy traits for guided breeding and optimized crop management." 
}, { "pmid": "27547208", "title": "Identification of Water Use Strategies at Early Growth Stages in Durum Wheat from Shoot Phenotyping and Physiological Measurements.", "abstract": "Modern imaging technology provides new approaches to plant phenotyping for traits relevant to crop yield and resource efficiency. Our objective was to investigate water use strategies at early growth stages in durum wheat genetic resources using shoot imaging at the ScreenHouse phenotyping facility combined with physiological measurements. Twelve durum landraces from different pedoclimatic backgrounds were compared to three modern check cultivars in a greenhouse pot experiment under well-watered (75% plant available water, PAW) and drought (25% PAW) conditions. Transpiration rate was analyzed for the underlying main morphological (leaf area duration) and physiological (stomata conductance) factors. Combining both morphological and physiological regulation of transpiration, four distinct water use types were identified. Most landraces had high transpiration rates either due to extensive leaf area (area types) or both large leaf areas together with high stomata conductance (spender types). All modern cultivars were distinguished by high stomata conductance with comparatively compact canopies (conductance types). Only few landraces were water saver types with both small canopy and low stomata conductance. During early growth, genotypes with large leaf area had high dry-matter accumulation under both well-watered and drought conditions compared to genotypes with compact stature. However, high stomata conductance was the basis to achieve high dry matter per unit leaf area, indicating high assimilation capacity as a key for productivity in modern cultivars. We conclude that the identified water use strategies based on early growth shoot phenotyping combined with stomata conductance provide an appropriate framework for targeted selection of distinct pre-breeding material adapted to different types of water limited environments." }, { "pmid": "22553969", "title": "A novel mesh processing based technique for 3D plant analysis.", "abstract": "BACKGROUND\nIn recent years, imaging based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to comparative simplicity. However, 3D mesh analysis provides the tremendous potential to accurately estimate specific morphological features cross-sectionally and monitor them over-time.\n\n\nRESULT\nIn this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameters estimation, and plant organs tracking over time. 
The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error on the morphological parameters estimated.\n\n\nCONCLUSION\nBy directly comparing our automated mesh based quantitative data with manual measurements of individual stem height, leaf width and leaf length, we obtained the mean absolute errors of 9.34%, 5.75%, 8.78%, and correlation coefficients 0.88, 0.96, and 0.95 respectively. The temporal matching of leaves was accurate in 95% of the cases and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features." }, { "pmid": "27375628", "title": "3D Surface Reconstruction of Plant Seeds by Volume Carving: Performance and Accuracies.", "abstract": "We describe a method for 3D reconstruction of plant seed surfaces, focusing on small seeds with diameters as small as 200 μm. The method considers robotized systems allowing single seed handling in order to rotate a single seed in front of a camera. Even though such systems feature high position repeatability, at sub-millimeter object scales, camera pose variations have to be compensated. We do this by robustly estimating the tool center point from each acquired image. 3D reconstruction can then be performed by a simple shape-from-silhouette approach. In experiments we investigate runtimes, theoretically achievable accuracy, experimentally achieved accuracy, and show as a proof of principle that the proposed method is well sufficient for 3D seed phenotyping purposes." }, { "pmid": "23580618", "title": "3D phenotyping and quantitative trait locus mapping identify core regions of the rice genome controlling root architecture.", "abstract": "Identification of genes that control root system architecture in crop plants requires innovations that enable high-throughput and accurate measurements of root system architecture through time. We demonstrate the ability of a semiautomated 3D in vivo imaging and digital phenotyping pipeline to interrogate the quantitative genetic basis of root system growth in a rice biparental mapping population, Bala × Azucena. We phenotyped >1,400 3D root models and >57,000 2D images for a suite of 25 traits that quantified the distribution, shape, extent of exploration, and the intrinsic size of root networks at days 12, 14, and 16 of growth in a gellan gum medium. From these data we identified 89 quantitative trait loci, some of which correspond to those found previously in soil-grown plants, and provide evidence for genetic tradeoffs in root growth allocations, such as between the extent and thoroughness of exploration. We also developed a multivariate method for generating and mapping central root architecture phenotypes and used it to identify five major quantitative trait loci (r(2) = 24-37%), two of which were not identified by our univariate analysis. Our imaging and analytical platform provides a means to identify genes with high potential for improving root traits and agronomic qualities of crops." 
}, { "pmid": "17388907", "title": "Dynamics of seedling growth acclimation towards altered light conditions can be quantified via GROWSCREEN: a setup and procedure designed for rapid optical phenotyping of different plant species.", "abstract": "Using a novel setup, we assessed how fast growth of Nicotiana tabacum seedlings responds to alterations in the light regime and investigated whether starch-free mutants of Arabidopsis thaliana show decreased growth potential at an early developmental stage. Leaf area and relative growth rate were measured based on pictures from a camera automatically placed above an array of 120 seedlings. Detection of total seedling leaf area was performed via global segmentation of colour images for preset thresholds of the parameters hue, saturation and value. Dynamic acclimation of relative growth rate towards altered light conditions occurred within 1 d in N. tabacum exposed to high nutrient availability, but not in plants exposed to low nutrient availability. Increased leaf area was correlated with an increase in shoot fresh and dry weight as well as root growth in N. tabacum. Relative growth rate was shown to be a more appropriate parameter than leaf area for detection of dynamic growth acclimation. Clear differences in leaf growth activity were also observed for A. thaliana. As growth responses are generally most flexible in early developmental stages, the procedure described here is an important step towards standardized protocols for rapid detection of the effects of changes in internal (genetic) and external (environmental) parameters regulating plant growth." } ]
Frontiers in Neuroscience
29056897
PMC5635061
10.3389/fnins.2017.00550
Kernel-Based Relevance Analysis with Enhanced Interpretability for Detection of Brain Activity Patterns
We introduce Enhanced Kernel-based Relevance Analysis (EKRA), which aims to support the automatic identification of brain activity patterns using electroencephalographic recordings. EKRA is a data-driven strategy that incorporates two kernel functions to take advantage of the available joint information associating neural responses with a given stimulus condition. To this end, a Centered Kernel Alignment functional is adjusted to learn the linear projection that best discriminates the input feature set, optimizing the required free parameters automatically. Our approach is carried out in two scenarios: (i) feature selection by computing a relevance vector from extracted neural features to facilitate the physiological interpretation of a given brain activity task, and (ii) enhanced feature selection that performs an additional transformation of relevant features aiming to improve the overall identification accuracy. Accordingly, we provide an alternative feature relevance analysis strategy that improves system performance while favoring data interpretability. For validation purposes, EKRA is tested on two well-known brain activity tasks: motor imagery discrimination and epileptic seizure detection. The obtained results show that the EKRA approach estimates a relevant representation space extracted from the provided supervised information, emphasizing the salient input features. As a result, our proposal outperforms state-of-the-art methods regarding brain activity discrimination accuracy, with the benefit of an enhanced physiological interpretation of the task at hand.
1.1. Related work
There are two alternative approaches to addressing the problem of the large amount of EEG data (Naeem et al., 2009): (i) Channel selection, which aims to choose a subset of electrodes contributing the most to the desired performance. Besides avoiding the redundancy of non-focal/unnecessary channels, this procedure makes visual EEG monitoring more practical when the number of needed channels becomes very few (Alotaiby et al., 2015). A significant disadvantage of decreasing the number of EEG channels is the unrealistic assumption that the cortical activity reflected in an EEG signal originates only from the immediate vicinity of the recording electrode (Haufe et al., 2014). (ii) Dimensionality reduction, which projects the original feature space into a smaller representation space, aiming to reduce the overwhelming number of extracted features (Birjandtalab et al., 2017).
Although either approach to dimensionality reduction can be performed separately, there is a growing interest in jointly minimizing the number of channels and features to be handled by the classification algorithms (Martinez-Leon et al., 2015). According to the way the input data points are mapped into a lower-dimensional space, dimensionality reduction methods can be categorized as linear or non-linear. The former approaches (like Principal Component Analysis (Zajacova et al., 2015), Discriminant and Common Spatial Patterns (Liao et al., 2007; Zhang et al., 2015), and Spatio-Spectral Decomposition) are popular choices for either EEG representation case (channels or features), with the benefits of computational efficiency, numerical stability, and denoising capability. Nevertheless, they face a deficiency: the feature spaces extracted from EEG signals can exhibit significant and complex variations regarding the nonlinearity and sparsity of the underlying manifolds, which can hardly be encoded by linear decompositions (Sturm et al., 2016). Moreover, based on their contribution to a linear regression model, linear dimensionality reduction methods usually select the most compact and relevant set of features, which might not be the best option for a non-linear classifier (Adeli et al., 2017).
In turn, non-linear mappings can more precisely preserve the information about the local neighborhoods of data points by introducing either locally linearized structures or pairwise distances along the subtle non-linear manifold, attempting to unfold complex high-dimensional data into separable groups (Lee and Verleysen, 2007). Among machine learning approaches to dimensionality reduction, kernel-based analysis is promising because of the following properties (Chu et al., 2011): (i) kernel methods apply a non-linear mapping to a higher-dimensional space where the originally non-linear data become linear or near-linear; (ii) the kernel trick decreases the computational complexity of high-dimensional data, as the parameter evaluation domain is reduced from the explicit feature space to the kernel space. In practice, an open issue is the definition of a kernel transformation that matches the type of nonlinearity present in the application at hand (Zimmer et al., 2015). Nevertheless, increasing effort is being devoted to metric learning that allows a kernel to adjust the importance of individual features for the task under consideration, usually exploiting a given amount of supervisory information (Hurtado-Rincón et al., 2016). Hence, kernel-based relevance analysis can use the estimated weights to highlight the features or dimensions that are most relevant for improving the classification performance (Brockmeier et al., 2014).
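As a rough illustration of the kernel alignment idea underlying this kind of relevance analysis, the sketch below computes the empirical Centered Kernel Alignment (CKA) between a Gaussian feature kernel and a label (delta) kernel, and scores each feature by the drop in alignment when it is removed. This leave-one-feature-out scoring is a simplified proxy for exposition only; EKRA itself learns a linear projection by optimizing the CKA functional, and the function names and the choice of an RBF kernel here are our assumptions, not the paper's implementation.

```python
# Illustrative sketch only: empirical Centered Kernel Alignment (CKA) and a
# simple leave-one-feature-out relevance score; not the EKRA optimization.
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian kernel matrix over the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def center_kernel(K):
    """Double-center a kernel matrix: H K H with H = I - (1/n) 11^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def cka(K, L):
    """Centered kernel alignment between two kernel matrices (in [0, 1])."""
    Kc, Lc = center_kernel(K), center_kernel(L)
    return float(np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc)))

def feature_relevance(X, y, sigma=1.0):
    """Drop in alignment with the label kernel when each feature is removed."""
    L = (y[:, None] == y[None, :]).astype(float)            # delta kernel on labels
    base = cka(rbf_kernel(X, sigma), L)
    relevance = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xj = np.delete(X, j, axis=1)
        relevance[j] = base - cka(rbf_kernel(Xj, sigma), L)  # larger drop = more relevant
    return relevance

# Usage (hypothetical data): X is (trials x features), y holds class labels.
# rel = feature_relevance(X, y, sigma=np.median(np.std(X, axis=0)))
```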
[ "28120883", "24967316", "24165805", "28161592", "17475513", "24684447", "27138114", "20348000", "27103525", "25168571", "25799903", "23844021", "18269986", "12574475", "25003816", "24302929", "17518278", "25977685", "22438708", "25769276", "16235818", "27746229", "22563146", "19304486", "25802861", "26584486", "11761077", "28194613", "25953813", "20659825", "22347153", "27666975", "26277421", "25008538" ]
[ { "pmid": "28120883", "title": "Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson's Disease.", "abstract": "Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for the tasks, in which the performance is heavily dependent on the feature selection techniques, like the diagnosis of neurodegenerative diseases. Parkinson's disease (PD) is one of the most common neurodegenerative disorders, which progresses slowly while affects the quality of life dramatically. In this paper, we use the data acquired from multi-modal neuroimaging data to diagnose PD by investigating the brain regions, known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions, specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods." }, { "pmid": "24967316", "title": "Methods of EEG signal features extraction using linear analysis in frequency and time-frequency domains.", "abstract": "Technically, a feature represents a distinguishing property, a recognizable measurement, and a functional component obtained from a section of a pattern. Extracted features are meant to minimize the loss of important information embedded in the signal. In addition, they also simplify the amount of resources needed to describe a huge set of data accurately. This is necessary to minimize the complexity of implementation, to reduce the cost of information processing, and to cancel the potential need to compress the information. More recently, a variety of methods have been widely used to extract the features from EEG signals, among these methods are time frequency distributions (TFD), fast fourier transform (FFT), eigenvector methods (EM), wavelet transform (WT), and auto regressive method (ARM), and so on. In general, the analysis of EEG signal has been the subject of several studies, because of its ability to yield an objective mode of recording brain stimulation which is widely used in brain-computer interface researches with application in medical diagnosis and rehabilitation engineering. The purposes of this paper, therefore, shall be discussing some conventional methods of EEG feature extraction methods, comparing their performances for specific task, and finally, recommending the most suitable method for feature extraction based on performance." }, { "pmid": "24165805", "title": "Automatic feature selection of motor imagery EEG signals using differential evolution and learning automata.", "abstract": "Brain-computer interfacing (BCI) has been the most researched technology in neuroprosthesis in the last two decades. Feature extractors and classifiers play an important role in BCI research for the generation of suitable control signals to drive an assistive device. 
Due to the high dimensionality of feature vectors in practical BCI systems, implantation of efficient feature selection algorithms has been an integral area of research in the past decade. This article proposes an efficient feature selection technique, realized by means of an evolutionary algorithm, which attempts to overcome some of the shortcomings of several state-of-the-art approaches in this field. The outlined scheme produces a subset of salient features which improves the classification accuracy while maintaining a trade-off with the computational speed of the complete scheme. For this purpose, an efficient memetic algorithm has also been proposed for the optimization purpose. Extensive experimental validations have been conducted on two real-world datasets to establish the efficacy of our approach. We have compared our approach to existing algorithms and have established the superiority of our algorithm to the rest." }, { "pmid": "28161592", "title": "Automated seizure detection using limited-channel EEG and non-linear dimension reduction.", "abstract": "Electroencephalography (EEG) is an essential component in evaluation of epilepsy. However, full-channel EEG signals recorded from 18 to 23 electrodes on the scalp is neither wearable nor computationally effective. This paper presents advantages of both channel selection and nonlinear dimension reduction for accurate automatic seizure detection. We first extract the frequency domain features from the full-channel EEG signals. Then, we use a random forest algorithm to determine which channels contribute the most in discriminating seizure from non-seizure events. Next, we apply a non-linear dimension reduction technique to capture the relationship among data elements and map them in low dimension. Finally, we apply a KNN classifier technique to discriminate between seizure and non-seizure events. The experimental results for 23 patients show that our proposed approach outperforms other techniques in terms of accuracy. It also visualizes long-term data in 2D to enhance physician cognition of occurrence and disease progression." }, { "pmid": "17475513", "title": "The non-invasive Berlin Brain-Computer Interface: fast acquisition of effective performance in untrained subjects.", "abstract": "Brain-Computer Interface (BCI) systems establish a direct communication channel from the brain to an output device. These systems use brain signals recorded from the scalp, the surface of the cortex, or from inside the brain to enable users to control a variety of applications. BCI systems that bypass conventional motor output pathways of nerves and muscles can provide novel control options for paralyzed patients. One classical approach to establish EEG-based control is to set up a system that is controlled by a specific EEG feature which is known to be susceptible to conditioning and to let the subjects learn the voluntary control of that feature. In contrast, the Berlin Brain-Computer Interface (BBCI) uses well established motor competencies of its users and a machine learning approach to extract subject-specific patterns from high-dimensional features optimized for detecting the user's intent. Thus the long subject training is replaced by a short calibration measurement (20 min) and machine learning (1 min). 
We report results from a study in which 10 subjects, who had no or little experience with BCI feedback, controlled computer applications by voluntary imagination of limb movements: these intentions led to modulations of spontaneous brain activity specifically, somatotopically matched sensorimotor 7-30 Hz rhythms were diminished over pericentral cortices. The peak information transfer rate was above 35 bits per minute (bpm) for 3 subjects, above 23 bpm for two, and above 12 bpm for 3 subjects, while one subject could achieve no BCI control. Compared to other BCI systems which need longer subject training to achieve comparable results, we propose that the key to quick efficiency in the BBCI system is its flexibility due to complex but physiologically meaningful features and its adaptivity which respects the enormous inter-subject variability." }, { "pmid": "24684447", "title": "Neural decoding with kernel-based metric learning.", "abstract": "In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus-exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking. We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potentials. The experimental paradigm consists of decoding the location of tactile stimulation on the forepaws of anesthetized rats. We show that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve nonlinear dimensionality reduction methods for exploratory neural analysis." }, { "pmid": "27138114", "title": "Impact of the reference choice on scalp EEG connectivity estimation.", "abstract": "OBJECTIVE\nSeveral scalp EEG functional connectivity studies, mostly clinical, seem to overlook the reference electrode impact. The subsequent interpretation of brain connectivity is thus often biased by the choice of a non-neutral reference. This study aims at systematically investigating these effects.\n\n\nAPPROACH\nAs EEG reference, we examined the vertex electrode (Cz), the digitally linked mastoids (DLM), the average reference (AVE), and the reference electrode standardization technique (REST). As a connectivity metric, we used the imaginary part of the coherency. We tested simulated and real data (eyes-open resting state) by evaluating the influence of electrode density, the effect of head model accuracy in the REST transformation, and the impact on the characterization of the topology of functional networks from graph analysis.\n\n\nMAIN RESULTS\nSimulations demonstrated that REST significantly reduced the distortion of connectivity patterns when compared to AVE, Cz, and DLM references. 
Moreover, the availability of high-density EEG systems and an accurate knowledge of the head model are crucial elements to improve REST performance, with the individual realistic head model being preferable to the standard realistic head model. For real data, a systematic change of the spatial pattern of functional connectivity depending on the chosen reference was also observed. The distortion of connectivity patterns was larger for the Cz reference, and progressively decreased when using the DLM, the AVE, and the REST. Strikingly, we also showed that network attributes derived from graph analysis, i.e. node degree and local efficiency, are significantly influenced by the EEG reference choice.\n\n\nSIGNIFICANCE\nOverall, this study highlights that significant differences arise in scalp EEG functional connectivity and graph network properties, in dependence on the chosen reference. We hope that our study will convey the message that caution should be used when interpreting and comparing results obtained from different laboratories using different reference schemes." }, { "pmid": "20348000", "title": "Kernel regression for fMRI pattern prediction.", "abstract": "This paper introduces two kernel-based regression schemes to decode or predict brain states from functional brain scans as part of the Pittsburgh Brain Activity Interpretation Competition (PBAIC) 2007, in which our team was awarded first place. Our procedure involved image realignment, spatial smoothing, detrending of low-frequency drifts, and application of multivariate linear and non-linear kernel regression methods: namely kernel ridge regression (KRR) and relevance vector regression (RVR). RVR is based on a Bayesian framework, which automatically determines a sparse solution through maximization of marginal likelihood. KRR is the dual-form formulation of ridge regression, which solves regression problems with high dimensional data in a computationally efficient way. Feature selection based on prior knowledge about human brain function was also used. Post-processing by constrained deconvolution and re-convolution was used to furnish the prediction. This paper also contains a detailed description of how prior knowledge was used to fine tune predictions of specific \"feature ratings,\" which we believe is one of the key factors in our prediction accuracy. The impact of pre-processing was also evaluated, demonstrating that different pre-processing may lead to significantly different accuracies. Although the original work was aimed at the PBAIC, many techniques described in this paper can be generally applied to any fMRI decoding works to increase the prediction accuracy." }, { "pmid": "27103525", "title": "EEG-directed connectivity from posterior brain regions is decreased in dementia with Lewy bodies: a comparison with Alzheimer's disease and controls.", "abstract": "Directed information flow between brain regions might be disrupted in dementia with Lewy bodies (DLB) and relate to the clinical syndrome of DLB. To investigate this hypothesis, resting-state electroencephalography recordings were obtained in patients with probable DLB and Alzheimer's disease (AD), and controls (N = 66 per group, matched for age and gender). Phase transfer entropy was used to measure directed connectivity in the groups for the theta, alpha, and beta frequency band. A posterior-to-anterior phase transfer entropy gradient, with occipital channels driving the frontal channels, was found in controls in all frequency bands. 
This posterior-to-anterior gradient was largely lost in DLB in the alpha band (p < 0.05). In the beta band, posterior brain regions were less driving in information flow in AD than in DLB and controls. In conclusion, the common posterior-to-anterior pattern of directed connectivity in controls is disturbed in DLB patients in the alpha band, and in AD patients in the beta band. Disrupted alpha band-directed connectivity may underlie the clinical syndrome of DLB and differentiate between DLB and AD." }, { "pmid": "25168571", "title": "Identification and monitoring of brain activity based on stochastic relevance analysis of short-time EEG rhythms.", "abstract": "BACKGROUND\nThe extraction of physiological rhythms from electroencephalography (EEG) data and their automated analyses are extensively studied in clinical monitoring, to find traces of interictal/ictal states of epilepsy.\n\n\nMETHODS\nBecause brain wave rhythms in normal and interictal/ictal events, differently influence neuronal activity, our proposed methodology measures the contribution of each rhythm. These contributions are measured in terms of their stochastic variability and are extracted from a Short Time Fourier Transform to highlight the non-stationary behavior of the EEG data. Then, we performed a variability-based relevance analysis by handling the multivariate short-time rhythm representation within a subspace framework. This maximizes the usability of the input information and preserves only the data that contribute to the brain activity classification. For neural activity monitoring, we also developed a new relevance rhythm diagram that qualitatively evaluates the rhythm variability throughout long time periods in order to distinguish events with different neuronal activities.\n\n\nRESULTS\nEvaluations were carried out over two EEG datasets, one of which was recorded in a noise-filled environment. The method was evaluated for three different classification problems, each of which addressed a different interpretation of a medical problem. We perform a blinded study of 40 patients using the support-vector machine classifier cross-validation scheme. The obtained results show that the developed relevance analysis was capable of accurately differentiating normal, ictal and interictal activities.\n\n\nCONCLUSIONS\nThe proposed approach provides the reliable identification of traces of interictal/ictal states of epilepsy. The introduced relevance rhythm diagrams of physiological rhythms provides effective means of monitoring epileptic seizures; additionally, these diagrams are easily implemented and provide simple clinical interpretation. The developed variability-based relevance analysis can be translated to other monitoring applications involving time-variant biomedical data." }, { "pmid": "25799903", "title": "Wavelet-based EEG processing for computer-aided seizure detection and epilepsy diagnosis.", "abstract": "Electroencephalography (EEG) is an important tool for studying the human brain activity and epileptic processes in particular. EEG signals provide important information about epileptogenic networks that must be analyzed and understood before the initiation of therapeutic procedures. Very small variations in EEG signals depict a definite type of brain abnormality. The challenge is to design and develop signal processing algorithms which extract this subtle information and use it for diagnosis, monitoring and treatment of patients with epilepsy. 
This paper presents a review of wavelet techniques for computer-aided seizure detection and epilepsy diagnosis with an emphasis on research reported during the past decade. A multiparadigm approach based on the integration of wavelets, nonlinear dynamics and chaos theory, and neural networks advanced by Adeli and associates is the most effective method for automated EEG-based diagnosis of epilepsy." }, { "pmid": "23844021", "title": "Comparison of sensor selection mechanisms for an ERP-based brain-computer interface.", "abstract": "A major barrier for a broad applicability of brain-computer interfaces (BCIs) based on electroencephalography (EEG) is the large number of EEG sensor electrodes typically used. The necessity for this results from the fact that the relevant information for the BCI is often spread over the scalp in complex patterns that differ depending on subjects and application scenarios. Recently, a number of methods have been proposed to determine an individual optimal sensor selection. These methods have, however, rarely been compared against each other or against any type of baseline. In this paper, we review several selection approaches and propose one additional selection criterion based on the evaluation of the performance of a BCI system using a reduced set of sensors. We evaluate the methods in the context of a passive BCI system that is designed to detect a P300 event-related potential and compare the performance of the methods against randomly generated sensor constellations. For a realistic estimation of the reduced system's performance we transfer sensor constellations found on one experimental session to a different session for evaluation. We identified notable (and unanticipated) differences among the methods and could demonstrate that the best method in our setup is able to reduce the required number of sensors considerably. Though our application focuses on EEG data, all presented algorithms and evaluation schemes can be transferred to any binary classification task on sensor arrays." }, { "pmid": "18269986", "title": "Principal component analysis-enhanced cosine radial basis function neural network for robust epilepsy and seizure detection.", "abstract": "A novel principal component analysis (PCA)-enhanced cosine radial basis function neural network classifier is presented. The two-stage classifier is integrated with the mixed-band wavelet-chaos methodology, developed earlier by the authors, for accurate and robust classification of electroencephalogram (EEGs) into healthy, ictal, and interictal EEGs. A nine-parameter mixed-band feature space discovered in previous research for effective EEG representation is used as input to the two-stage classifier. In the first stage, PCA is employed for feature enhancement. The rearrangement of the input space along the principal components of the data improves the classification accuracy of the cosine radial basis function neural network (RBFNN) employed in the second stage significantly. The classification accuracy and robustness of the classifier are validated by extensive parametric and sensitivity analysis. The new wavelet-chaos-neural network methodology yields high EEG classification accuracy (96.6%) and is quite robust to changes in training data with a low standard deviation of 1.4%. For epilepsy diagnosis, when only normal and interictal EEGs are considered, the classification accuracy of the proposed model is 99.3%. 
This statistic is especially remarkable because even the most highly trained neurologists do not appear to be able to detect interictal EEGs more than 80% of the times." }, { "pmid": "12574475", "title": "Functional properties of brain areas associated with motor execution and imagery.", "abstract": "Imagining motor acts is a cognitive task that engages parts of the executive motor system. While motor imagery has been intensively studied using neuroimaging techniques, most studies lack behavioral observations. Here, we used functional MRI to compare the functional neuroanatomy of motor execution and imagery using a task that objectively assesses imagery performance. With surface electromyographic monitoring within a scanner, 10 healthy subjects performed sequential finger-tapping movements according to visually presented number stimuli in either a movement or an imagery mode of performance. We also examined effects of varied and fixed stimulus types that differ in stimulus dependency of the task. Statistical parametric mapping revealed movement-predominant activity, imagery-predominant activity, and activity common to both movement and imagery modes of performance (movement-and-imagery activity). The movement-predominant activity included the primary sensory and motor areas, parietal operculum, and anterior cerebellum that had little imagery-related activity (-0.1 ~ 0.1%), and the caudal premotor areas and area 5 that had mild-to-moderate imagery-related activity (0.2 ~ 0.7%). Many frontoparietal areas and posterior cerebellum demonstrated movement-and-imagery activity. Imagery-predominant areas included the precentral sulcus at the level of middle frontal gyrus and the posterior superior parietal cortex/precuneus. Moreover, activity of the superior precentral sulcus and intraparietal sulcus areas, predominantly on the left, was associated with accuracy of the imagery task performance. Activity of the inferior precentral sulcus (area 6/44) showed stimulus-type effect particularly for the imagery mode. A time-course analysis of activity suggested a functional gradient, which was characterized by a more \"executive\" or more \"imaginative\" property in many areas related to movement and/or imagery. The results from the present study provide new insights into the functional neuroanatomy of motor imagery, including the effects of imagery performance and stimulus-dependency on brain activity." }, { "pmid": "25003816", "title": "Dimensionality reduction for the analysis of brain oscillations.", "abstract": "Neuronal oscillations have been shown to be associated with perceptual, motor and cognitive brain operations. While complex spatio-temporal dynamics are a hallmark of neuronal oscillations, they also represent a formidable challenge for the proper extraction and quantification of oscillatory activity with non-invasive recording techniques such as EEG and MEG. In order to facilitate the study of neuronal oscillations we present a general-purpose pre-processing approach, which can be applied for a wide range of analyses including but not restricted to inverse modeling and multivariate single-trial classification. The idea is to use dimensionality reduction with spatio-spectral decomposition (SSD) instead of the commonly and almost exclusively used principal component analysis (PCA). The key advantage of SSD lies in selecting components explaining oscillations-related variance instead of just any variance as in the case of PCA. 
For the validation of SSD pre-processing we performed extensive simulations with different inverse modeling algorithms and signal-to-noise ratios. In all these simulations SSD invariably outperformed PCA often by a large margin. Moreover, using a database of multichannel EEG recordings from 80 subjects we show that pre-processing with SSD significantly increases the performance of single-trial classification of imagined movements, compared to the classification with PCA pre-processing or without any dimensionality reduction. Our simulations and analysis of real EEG experiments show that, while not being supervised, the SSD algorithm is capable of extracting components primarily relating to the signal of interest often using as little as 20% of the data variance, instead of > 90% variance as in case of PCA. Given its ease of use, absence of supervision, and capability to efficiently reduce the dimensionality of multivariate EEG/MEG data, we advocate the application of SSD pre-processing for the analysis of spontaneous and induced neuronal oscillations in normal subjects and patients." }, { "pmid": "24302929", "title": "Common spatio-time-frequency patterns for motor imagery-based brain machine interfaces.", "abstract": "For efficient decoding of brain activities in analyzing brain function with an application to brain machine interfacing (BMI), we address a problem of how to determine spatial weights (spatial patterns), bandpass filters (frequency patterns), and time windows (time patterns) by utilizing electroencephalogram (EEG) recordings. To find these parameters, we develop a data-driven criterion that is a natural extension of the so-called common spatial patterns (CSP) that are known to be effective features in BMI. We show that the proposed criterion can be optimized by an alternating procedure to achieve fast convergence. Experiments demonstrate that the proposed method can effectively extract discriminative features for a motor imagery-based BMI." }, { "pmid": "17518278", "title": "Combining spatial filters for the classification of single-trial EEG in a finger movement task.", "abstract": "Brain-computer interface (BCI) is to provide a communication channel that translates human intention reflected by a brain signal such as electroencephalogram (EEG) into a control signal for an output device. In recent years, the event-related desynchronization (ERD) and movement-related potentials (MRPs) are utilized as important features in motor related BCI system, and the common spatial patterns (CSP) algorithm has shown to be very useful for ERD-based classification. However, as MRPs are slow nonoscillatory EEG potential shifts, CSP is not an appropriate approach for MRPs-based classification. Here, another spatial filtering algorithm, discriminative spatial patterns (DSP), is newly introduced for better extraction of the difference in the amplitudes of MRPs, and it is integrated with CSP to extract the features from the EEG signals recorded during voluntary left versus right finger movement tasks. A support vector machines (SVM) based framework is designed as the classifier for the features. The results show that, for MRPs and ERD features, the combined spatial filters can realize the single-trial EEG classification better than anyone of DSP and CSP alone does. Thus, we propose an EEG-based BCI system with the two feature sets, one based on CSP (ERD) and the other based on DSP (MRPs), classified by SVM." 
}, { "pmid": "25977685", "title": "Feature Selection Applying Statistical and Neurofuzzy Methods to EEG-Based BCI.", "abstract": "This paper presents an investigation aimed at drastically reducing the processing burden required by motor imagery brain-computer interface (BCI) systems based on electroencephalography (EEG). In this research, the focus has moved from the channel to the feature paradigm, and a 96% reduction of the number of features required in the process has been achieved maintaining and even improving the classification success rate. This way, it is possible to build cheaper, quicker, and more portable BCI systems. The data set used was provided within the framework of BCI Competition III, which allows it to compare the presented results with the classification accuracy achieved in the contest. Furthermore, a new three-step methodology has been developed which includes a feature discriminant character calculation stage; a score, order, and selection phase; and a final feature selection step. For the first stage, both statistics method and fuzzy criteria are used. The fuzzy criteria are based on the S-dFasArt classification algorithm which has shown excellent performance in previous papers undertaking the BCI multiclass motor imagery problem. The score, order, and selection stage is used to sort the features according to their discriminant nature. Finally, both order selection and Group Method Data Handling (GMDH) approaches are used to choose the most discriminant ones." }, { "pmid": "22438708", "title": "Brain computer interfaces, a review.", "abstract": "A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or 'locked in' by neurological neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with the artifacts in the control signals and improve the performance. Fourth, the review studies some mathematic algorithms used in the feature extraction and classification steps which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices." }, { "pmid": "25769276", "title": "Hand-in-hand advances in biomedical engineering and sensorimotor restoration.", "abstract": "BACKGROUND\nLiving in a multisensory world entails the continuous sensory processing of environmental information in order to enact appropriate motor routines. 
The interaction between our body and our brain is the crucial factor for achieving such sensorimotor integration ability. Several clinical conditions dramatically affect the constant body-brain exchange, but the latest developments in biomedical engineering provide promising solutions for overcoming this communication breakdown.\n\n\nNEW METHOD\nThe ultimate technological developments succeeded in transforming neuronal electrical activity into computational input for robotic devices, giving birth to the era of the so-called brain-machine interfaces. Combining rehabilitation robotics and experimental neuroscience the rise of brain-machine interfaces into clinical protocols provided the technological solution for bypassing the neural disconnection and restore sensorimotor function.\n\n\nRESULTS\nBased on these advances, the recovery of sensorimotor functionality is progressively becoming a concrete reality. However, despite the success of several recent techniques, some open issues still need to be addressed.\n\n\nCOMPARISON WITH EXISTING METHOD(S)\nTypical interventions for sensorimotor deficits include pharmaceutical treatments and manual/robotic assistance in passive movements. These procedures achieve symptoms relief but their applicability to more severe disconnection pathologies is limited (e.g. spinal cord injury or amputation).\n\n\nCONCLUSIONS\nHere we review how state-of-the-art solutions in biomedical engineering are continuously increasing expectances in sensorimotor rehabilitation, as well as the current challenges especially with regards to the translation of the signals from brain-machine interfaces into sensory feedback and the incorporation of brain-machine interfaces into daily activities." }, { "pmid": "16235818", "title": "Artificial neural network based epileptic detection using time-domain and frequency-domain features.", "abstract": "Electroencephalogram (EEG) signal plays an important role in the diagnosis of epilepsy. The long-term EEG recordings of an epileptic patient obtained from the ambulatory recording systems contain a large volume of EEG data. Detection of the epileptic activity requires a time consuming analysis of the entire length of the EEG data by an expert. The traditional methods of analysis being tedious, many automated diagnostic systems for epilepsy have emerged in recent years. This paper discusses an automated diagnostic method for epileptic detection using a special type of recurrent neural network known as Elman network. The experiments are carried out by using time-domain as well as frequency-domain features of the EEG signal. Experimental results show that Elman network yields epileptic detection accuracy rates as high as 99.6% with a single input feature which is better than the results obtained by using other types of neural networks with two and more input features." }, { "pmid": "27746229", "title": "Interpretable deep neural networks for single-trial EEG classification.", "abstract": "BACKGROUND\nIn cognitive neuroscience the potential of deep neural networks (DNNs) for solving complex classification tasks is yet to be fully exploited. The most limiting factor is that DNNs as notorious 'black boxes' do not provide insight into neurophysiological phenomena underlying a decision. Layer-wise relevance propagation (LRP) has been introduced as a novel method to explain individual network decisions.\n\n\nNEW METHOD\nWe propose the application of DNNs with LRP for the first time for EEG data analysis. 
Through LRP the single-trial DNN decisions are transformed into heatmaps indicating each data point's relevance for the outcome of the decision.\n\n\nRESULTS\nDNN achieves classification accuracies comparable to those of CSP-LDA. In subjects with low performance subject-to-subject transfer of trained DNNs can improve the results. The single-trial LRP heatmaps reveal neurophysiologically plausible patterns, resembling CSP-derived scalp maps. Critically, while CSP patterns represent class-wise aggregated information, LRP heatmaps pinpoint neural patterns to single time points in single trials.\n\n\nCOMPARISON WITH EXISTING METHOD(S)\nWe compare the classification performance of DNNs to that of linear CSP-LDA on two data sets related to motor-imaginary BCI.\n\n\nCONCLUSION\nWe have demonstrated that DNN is a powerful non-linear tool for EEG analysis. With LRP a new quality of high-resolution assessment of neural activity can be reached. LRP is a potential remedy for the lack of interpretability of DNNs that has limited their utility in neuroscientific applications. The extreme specificity of the LRP-derived heatmaps opens up new avenues for investigating neural activity underlying complex perception or decision-related processes." }, { "pmid": "22563146", "title": "A tunable support vector machine assembly classifier for epileptic seizure detection.", "abstract": "Automating the detection of epileptic seizures could reduce the significant human resources necessary for the care of patients suffering from intractable epilepsy and offer improved solutions for closed-loop therapeutic devices such as implantable electrical stimulation systems. While numerous detection algorithms have been published, an effective detector in the clinical setting remains elusive. There are significant challenges facing seizure detection algorithms. The epilepsy EEG morphology can vary widely among the patient population. EEG recordings from the same patient can change over time. EEG recordings can be contaminated with artifacts that often resemble epileptic seizure activity. In order for an epileptic seizure detector to be successful, it must be able to adapt to these different challenges. In this study, a novel detector is proposed based on a support vector machine assembly classifier (SVMA). The SVMA consists of a group of SVMs each trained with a different set of weights between the seizure and non-seizure data and the user can selectively control the output of the SVMA classifier. The algorithm can improve the detection performance compared to traditional methods by providing an effective tuning strategy for specific patients. The proposed algorithm also demonstrates a clear advantage over threshold tuning. When compared with the detection performances reported by other studies using the publicly available epilepsy dataset hosted by the University of BONN, the proposed SVMA detector achieved the best total accuracy of 98.72%. These results demonstrate the efficacy of the proposed SVMA detector and its potential in the clinical setting." }, { "pmid": "19304486", "title": "Epileptic seizure detection in EEGs using time-frequency analysis.", "abstract": "The detection of recorded epileptic seizure activity in EEG segments is crucial for the localization and classification of epileptic seizures. However, since seizure evolution is typically a dynamic and nonstationary process and the signals are composed of multiple frequencies, visual and conventional frequency-based methods have limited application. 
In this paper, we demonstrate the suitability of the time-frequency (t-f) analysis to classify EEG segments for epileptic seizures, and we compare several methods for t-f analysis of EEGs. Short-time Fourier transform and several t-f distributions are used to calculate the power spectrum density (PSD) of each segment. The analysis is performed in three stages: 1) t-f analysis and calculation of the PSD of each EEG segment; 2) feature extraction, measuring the signal segment fractional energy on specific t-f windows; and 3) classification of the EEG segment (existence of epileptic seizure or not), using artificial neural networks. The methods are evaluated using three classification problems obtained from a benchmark EEG dataset, and qualitative and quantitative results are presented." }, { "pmid": "25802861", "title": "Simultaneous channel and feature selection of fused EEG features based on Sparse Group Lasso.", "abstract": "Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are firstly obtained, which include the power spectrum, time-domain statistics, AR model, and the wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses the logistical regression model with Sparse Group Lasso penalized function. The model is fitted on the training data, and parameter estimation is obtained by modified blockwise coordinate descent and coordinate gradient descent method. The best parameters and feature subset are selected by using a 10-fold cross-validation. Finally, the test data is classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on the data used from international BCI Competition IV reached 84.72%." }, { "pmid": "26584486", "title": "Tracking Neural Modulation Depth by Dual Sequential Monte Carlo Estimation on Point Processes for Brain-Machine Interfaces.", "abstract": "Classic brain-machine interface (BMI) approaches decode neural signals from the brain responsible for achieving specific motor movements, which subsequently command prosthetic devices. Brain activities adaptively change during the control of the neuroprosthesis in BMIs, where the alteration of the preferred direction and the modulation of the gain depth are observed. The static neural tuning models have been limited by fixed codes, resulting in a decay of decoding performance over the course of the movement and subsequent instability in motor performance. To achieve stable performance, we propose a dual sequential Monte Carlo adaptive point process method, which models and decodes the gradually changing modulation depth of individual neuron over the course of a movement. We use multichannel neural spike trains from the primary motor cortex of a monkey trained to perform a target pursuit task using a joystick. 
Our results show that our computational approach successfully tracks the neural modulation depth over time with better goodness-of-fit than classic static neural tuning models, resulting in smaller errors between the true kinematics and the estimations in both simulated and real data. Our novel decoding approach suggests that the brain may employ such strategies to achieve stable motor output, i.e., plastic neural tuning is a feature of neural systems. BMI users may benefit from this adaptive algorithm to achieve more complex and controlled movement outcomes." }, { "pmid": "11761077", "title": "A method to standardize a reference of scalp EEG recordings to a point at infinity.", "abstract": "The effect of an active reference in EEG recording is one of the oldest technical problems in EEG practice. In this paper, a method is proposed to approximately standardize the reference of scalp EEG recordings to a point at infinity. This method is based on the fact that the use of scalp potentials to determine the neural electrical activities or their equivalent sources does not depend on the reference, so we may approximately reconstruct the equivalent sources from scalp EEG recordings with a scalp point or average reference. Then the potentials referenced at infinity are approximately reconstructed from the equivalent sources. As a point at infinity is far from all the possible neural sources, this method may be considered as a reference electrode standardization technique (REST). The simulation studies performed with assumed neural sources included effects of electrode number, volume conductor model and noise on the performance of REST, and the significance of REST in EEG temporal analysis. The results showed that REST is potentially very effective for the most important superficial cortical region and the standardization could be especially important in recovering the temporal information of EEG recordings." }, { "pmid": "28194613", "title": "Is the Surface Potential Integral of a Dipole in a Volume Conductor Always Zero? A Cloud Over the Average Reference of EEG and ERP.", "abstract": "Currently, average reference is one of the most widely adopted references in EEG and ERP studies. The theoretical assumption is the surface potential integral of a volume conductor being zero, thus the average of scalp potential recordings might be an approximation of the theoretically desired zero reference. However, such a zero integral assumption has been proved only for a spherical surface. In this short communication, three counter-examples are given to show that the potential integral over the surface of a dipole in a volume conductor may not be zero. It depends on the shape of the conductor and the orientation of the dipole. This fact on one side means that average reference is not a theoretical 'gold standard' reference, and on the other side reminds us that the practical accuracy of average reference is not only determined by the well-known electrode array density and its coverage but also intrinsically by the head shape. It means that reference selection still is a fundamental problem to be fixed in various EEG and ERP studies." }, { "pmid": "25953813", "title": "Long-Term BMI Trajectories and Health in Older Adults: Hierarchical Clustering of Functional Curves.", "abstract": "OBJECTIVE\nThis project contributes to the emerging research that aims to identify distinct body mass index (BMI) trajectory types in the population. 
We identify clusters of long-term BMI curves among older adults and determine how the clusters differ with respect to initial health.\n\n\nMETHOD\nHealth and Retirement Study cohort (N = 9,893) with BMI information collected in up to 10 waves (1992-2010) is analyzed using a powerful cutting-edge approach: hierarchical clustering of BMI functions estimated via the Principal Analysis by Conditional Expectations (PACE) algorithm.\n\n\nRESULTS\nThree BMI trajectory clusters emerged for each gender: stable, gaining, and losing. The initial health of the gaining and stable groups in both genders was comparable; the losing cluster experienced significantly poorer health at baseline.\n\n\nDISCUSSION\nBMI trajectories among older adults cluster into distinct types in both genders, and the clusters vary substantially in initial health. Weight loss but not gain is associated with poor initial health in this age group." }, { "pmid": "20659825", "title": "Automated real-time epileptic seizure detection in scalp EEG recordings using an algorithm based on wavelet packet transform.", "abstract": "A novel wavelet-based algorithm for real-time detection of epileptic seizures using scalp EEG is proposed. In a moving-window analysis, the EEG from each channel is decomposed by wavelet packet transform. Using wavelet coefficients from seizure and nonseizure references, a patient-specific measure is developed to quantify the separation between seizure and nonseizure states for the frequency range of 1-30 Hz. Utilizing this measure, a frequency band representing the maximum separation between the two states is determined and employed to develop a normalized index, called combined seizure index (CSI). CSI is derived for each epoch of every EEG channel based on both rhythmicity and relative energy of that epoch as well as consistency among different channels. Increasing significantly during ictal states, CSI is inspected using one-sided cumulative sum test to generate proper channel alarms. Analyzing alarms from all channels, a seizure alarm is finally generated. The algorithm was tested on scalp EEG recordings from 14 patients, totaling approximately 75.8 h with 63 seizures. Results revealed a high sensitivity of 90.5%, a false detection rate of 0.51 h(-1) and a median detection delay of 7 s. The algorithm could also lateralize the focus side for patients with temporal lobe epilepsy." }, { "pmid": "22347153", "title": "BCI Competition IV - Data Set I: Learning Discriminative Patterns for Self-Paced EEG-Based Motor Imagery Detection.", "abstract": "Detecting motor imagery activities versus non-control in brain signals is the basis of self-paced brain-computer interfaces (BCIs), but also poses a considerable challenge to signal processing due to the complex and non-stationary characteristics of motor imagery as well as non-control. This paper presents a self-paced BCI based on a robust learning mechanism that extracts and selects spatio-spectral features for differentiating multiple EEG classes. It also employs a non-linear regression and post-processing technique for predicting the time-series of class labels from the spatio-spectral features. The method was validated in the BCI Competition IV on Dataset I where it produced the lowest prediction error of class labels continuously. This report also presents and discusses analysis of the method using the competition data set." 
}, { "pmid": "27666975", "title": "An approach to EEG-based emotion recognition using combined feature extraction method.", "abstract": "EEG signal has been widely used in emotion recognition. However, too many channels and extracted features are used in the current EEG-based emotion recognition methods, which lead to the complexity of these methods This paper studies on feature extraction on EEG-based emotion recognition model to overcome those disadvantages, and proposes an emotion recognition method based on empirical mode decomposition (EMD) and sample entropy. The proposed method first employs EMD strategy to decompose EEG signals only containing two channels into a series of intrinsic mode functions (IMFs). The first 4 IMFs are selected to calculate corresponding sample entropies and then to form feature vectors. These vectors are fed into support vector machine classifier for training and testing. The average accuracy of the proposed method is 94.98% for binary-class tasks and the best accuracy achieves 93.20% for the multi-class task on DEAP database, respectively. The results indicate that the proposed method is more suitable for emotion recognition than several methods of comparison." }, { "pmid": "26277421", "title": "Optimizing spatial patterns with sparse filter bands for motor-imagery based brain-computer interface.", "abstract": "BACKGROUND\nCommon spatial pattern (CSP) has been most popularly applied to motor-imagery (MI) feature extraction for classification in brain-computer interface (BCI) application. Successful application of CSP depends on the filter band selection to a large degree. However, the most proper band is typically subject-specific and can hardly be determined manually.\n\n\nNEW METHOD\nThis study proposes a sparse filter band common spatial pattern (SFBCSP) for optimizing the spatial patterns. SFBCSP estimates CSP features on multiple signals that are filtered from raw EEG data at a set of overlapping bands. The filter bands that result in significant CSP features are then selected in a supervised way by exploiting sparse regression. A support vector machine (SVM) is implemented on the selected features for MI classification.\n\n\nRESULTS\nTwo public EEG datasets (BCI Competition III dataset IVa and BCI Competition IV IIb) are used to validate the proposed SFBCSP method. Experimental results demonstrate that SFBCSP help improve the classification performance of MI.\n\n\nCOMPARISON WITH EXISTING METHODS\nThe optimized spatial patterns by SFBCSP give overall better MI classification accuracy in comparison with several competing methods.\n\n\nCONCLUSIONS\nThe proposed SFBCSP is a potential method for improving the performance of MI-based BCI." }, { "pmid": "25008538", "title": "A framework for optimal kernel-based manifold embedding of medical image data.", "abstract": "Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. 
In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim to generate the most optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images." } ]
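Several of the reference abstracts above (the CSP/DSP combination, the sparse filter band CSP, and the spatio-time-frequency pattern entries) build on the common spatial patterns (CSP) algorithm for motor-imagery EEG. The following is a minimal sketch of the core CSP step, assuming band-pass-filtered trials are already available as NumPy arrays; the trial-covariance averaging, normalisation, and function names are illustrative choices and not the exact procedures used in those papers.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Compute CSP spatial filters from two classes of band-pass-filtered
    EEG trials, each shaped (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))       # normalise by total power
        return np.mean(covs, axis=0)

    c_a, c_b = mean_cov(trials_a), mean_cov(trials_b)
    # Generalised eigenvalue problem: C_a w = lambda (C_a + C_b) w
    eigvals, eigvecs = eigh(c_a, c_a + c_b)
    order = np.argsort(eigvals)                # ascending eigenvalues
    # Keep filters from both ends of the spectrum (most discriminative)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks]                   # shape (n_channels, 2 * n_pairs)

def log_var_features(trials, filters):
    """Project trials through the CSP filters and use log-variance features."""
    feats = []
    for x in trials:
        z = filters.T @ x
        v = np.var(z, axis=1)
        feats.append(np.log(v / v.sum()))
    return np.array(feats)
```

The log-variance features returned by `log_var_features` would typically be fed to a simple classifier (e.g., LDA or an SVM), which is the pattern most of the cited BCI studies follow.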
Scientific Reports
29042602
PMC5645324
10.1038/s41598-017-13640-5
The optimal window size for analysing longitudinal networks
The time interval between two snapshots is referred to as the window size. A given longitudinal network can be analysed from various actor-level perspectives, such as exploring how actors change their degree centrality values or participation statistics over time. Determining the optimal window size for the analysis of a given longitudinal network from different actor-level perspectives is a well-researched network science problem. Many researchers have attempted to develop a solution to this problem by considering different approaches; however, to date, no comprehensive and well-acknowledged solution that can be applied to various longitudinal networks has been found. We propose a novel approach to this problem that involves determining the correct window size when a given longitudinal network is analysed from different actor-level perspectives. The approach is based on the concept of actor-level dynamicity, which captures variability in the structural behaviours of actors in a given longitudinal network. The approach is applied to four real-world, variable-sized longitudinal networks to determine their optimal window sizes. The optimal window length for each network, determined using the approach proposed in this paper, is further evaluated via time series and data mining methods to validate its optimality. Implications of this approach are discussed in this article.
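To make the notion of a window size concrete, the sketch below aggregates a timestamped edge list into consecutive snapshot graphs of a chosen window length and derives a per-actor degree time series, which is the kind of actor-level perspective the abstract refers to. The function and variable names are illustrative and this is not the paper's implementation.

```python
import networkx as nx
from collections import defaultdict

def snapshots(edges, window):
    """Aggregate timestamped edges (u, v, t) into consecutive snapshot graphs,
    each covering a half-open interval [t0 + k*window, t0 + (k+1)*window)."""
    t0 = min(t for _, _, t in edges)
    buckets = defaultdict(nx.Graph)
    for u, v, t in edges:
        buckets[int((t - t0) // window)].add_edge(u, v)
    return [buckets[k] for k in sorted(buckets)]

def degree_series(edges, window):
    """Per-actor degree sequence across snapshots (0 when the actor is absent)."""
    graphs = snapshots(edges, window)
    actors = {n for g in graphs for n in g.nodes}
    return {a: [g.degree(a) if a in g else 0 for g in graphs] for a in actors}

# Example: weekly snapshots of an hourly-timestamped contact list (hypothetical data)
# series = degree_series(contact_edges, window=7 * 24 * 3600)
```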
Related work

The temporal sampling of a longitudinal network is often performed opportunistically20, depending on a wide range of factors. Timmons and Preacher12 identified some of these factors, including the type of social network, competing objectives, processes and measurements, the planned time horizon of the study, the availability of funds, logistic and organisational constraints, the availability and expected behaviour of study participants, and the desired accuracy and precision of study outcomes. Most theoretical and methodological approaches to defining optimal sliding windows for dynamic networks focus on the aggregation of links into time-window graphs18. This emphasis affects observation bias and the accuracy and significance of analyses, because dynamic network processes (e.g., the formation and dissolution of ties) can begin or end during inter-event periods21. Longitudinal networks are also analysed under the assumption that more sampling generates better results15,22 or, in the case of randomised clinical trials, with a sliding window size that maximises the efficiency of estimating treatment effects23. Statistical frameworks such as the separable temporal exponential random graph model24 can also be used by relating the timing of network snapshots to the precision of parameter estimates.

The aforementioned approaches to determining an appropriate or optimal time window for analysing longitudinal networks suffer from inherent limitations. For example, Timmons and Preacher12 found deteriorating outcomes in studies using more sampling windows and suggested that researchers consider the trade-off between precision and sampling time. Statistical frameworks, on the other hand, are parameter-dependent and only work when applied to small networks with a few hundred actors. Recent studies therefore focus on empirical analysis, comparing network statistics of temporal aggregations, or graph metrics over time, against threshold values to determine appropriate or meaningful sampling window sizes. For example, the Temporal Window in Network (TWIN) algorithm developed by Sulo, Berger-Wolf and Grossman17 analyses, as a function of the sampling window size, the compression ratio and the variance of time series of graph metrics computed over a series of snapshots built from temporal edges. A window size is considered appropriate when the difference between the variance and the compression ratio for that window size is smaller than or equal to a user-defined threshold. Soundarajan et al.25 defined another algorithm that identifies variable-length aggregation intervals by detecting a 'structurally mature graph', which represents the stability of a network with respect to network statistics. A detailed study by Caceres and Berger-Wolf20 illustrates this windowing problem with respect to different formalisations and initial approaches to identifying the optimal resolution of edge aggregation, including their respective advantages and limitations.
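The TWIN-style selection rule described above can be sketched as follows: for each candidate window size, build the snapshot series (reusing the same bucketing idea as the snapshot sketch earlier), compute a time series of some graph metric, and flag a window size when its variance and a compression ratio differ by no more than a threshold. In this sketch the metric (average degree), the compression-ratio stand-in (how strongly the window aggregates the raw timeline), the variance rescaling, and the threshold are all illustrative assumptions that simplify the published algorithm.

```python
import numpy as np
import networkx as nx
from collections import defaultdict

def metric_series(edges, window,
                  metric=lambda g: np.mean([d for _, d in g.degree()])):
    """Time series of a graph metric over consecutive snapshots of one window size."""
    t0 = min(t for *_, t in edges)
    buckets = defaultdict(nx.Graph)
    for u, v, t in edges:
        buckets[int((t - t0) // window)].add_edge(u, v)
    return np.array([metric(buckets[k]) for k in sorted(buckets)])

def candidate_windows(edges, windows, threshold=0.1):
    """TWIN-style screen: flag window sizes whose rescaled metric variance and
    timeline compression ratio differ by at most `threshold`."""
    timestamps = {t for *_, t in edges}
    variances, compressions = [], []
    for w in windows:
        series = metric_series(edges, w)
        variances.append(np.var(series))
        # crude stand-in for TWIN's compression ratio: fraction of the raw
        # timeline collapsed away by aggregating into snapshots
        compressions.append(1.0 - len(series) / len(timestamps))
    v = np.array(variances)
    v = (v - v.min()) / ((v.max() - v.min()) or 1.0)   # rescale variances to [0, 1]
    c = np.array(compressions)
    return [w for w, dv in zip(windows, np.abs(v - c)) if dv <= threshold]
```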
[ "24179229", "15190252", "19120606", "26609742", "20691543", "24443639", "27942416", "24141695" ]
[ { "pmid": "24179229", "title": "Structural and functional brain networks: from connections to cognition.", "abstract": "How rich functionality emerges from the invariant structural architecture of the brain remains a major mystery in neuroscience. Recent applications of network theory and theoretical neuroscience to large-scale brain networks have started to dissolve this mystery. Network analyses suggest that hierarchical modular brain networks are particularly suited to facilitate local (segregated) neuronal operations and the global integration of segregated functions. Although functional networks are constrained by structural connections, context-sensitive integration during cognition tasks necessarily entails a divergence between structural and functional networks. This degenerate (many-to-one) function-structure mapping is crucial for understanding the nature of brain networks. The emergence of dynamic functional networks from static structural connections calls for a formal (computational) approach to neuronal information processing that may resolve this dialectic between structure and function." }, { "pmid": "15190252", "title": "Evidence for dynamically organized modularity in the yeast protein-protein interaction network.", "abstract": "In apparently scale-free protein-protein interaction networks, or 'interactome' networks, most proteins interact with few partners, whereas a small but significant proportion of proteins, the 'hubs', interact with many partners. Both biological and non-biological scale-free networks are particularly resistant to random node removal but are extremely sensitive to the targeted removal of hubs. A link between the potential scale-free topology of interactome networks and genetic robustness seems to exist, because knockouts of yeast genes encoding hubs are approximately threefold more likely to confer lethality than those of non-hubs. Here we investigate how hubs might contribute to robustness and other cellular properties for protein-protein interactions dynamically regulated both in time and in space. We uncovered two types of hub: 'party' hubs, which interact with most of their partners simultaneously, and 'date' hubs, which bind their different partners at different times or locations. Both in silico studies of network connectivity and genetic interactions described in vivo support a model of organized modularity in which date hubs organize the proteome, connecting biological processes--or modules--to each other, whereas party hubs function inside modules." }, { "pmid": "19120606", "title": "Ecological networks--beyond food webs.", "abstract": "1. A fundamental goal of ecological network research is to understand how the complexity observed in nature can persist and how this affects ecosystem functioning. This is essential for us to be able to predict, and eventually mitigate, the consequences of increasing environmental perturbations such as habitat loss, climate change, and invasions of exotic species. 2. Ecological networks can be subdivided into three broad types: 'traditional' food webs, mutualistic networks and host-parasitoid networks. There is a recent trend towards cross-comparisons among network types and also to take a more mechanistic, as opposed to phenomenological, perspective. For example, analysis of network configurations, such as compartments, allows us to explore the role of co-evolution in structuring mutualistic networks and host-parasitoid networks, and of body size in food webs. 3. 
Research into ecological networks has recently undergone a renaissance, leading to the production of a new catalogue of evermore complete, taxonomically resolved, and quantitative data. Novel topological patterns have been unearthed and it is increasingly evident that it is the distribution of interaction strengths and the configuration of complexity, rather than just its magnitude, that governs network stability and structure. 4. Another significant advance is the growing recognition of the importance of individual traits and behaviour: interactions, after all, occur between individuals. The new generation of high-quality networks is now enabling us to move away from describing networks based on species-averaged data and to start exploring patterns based on individuals. Such refinements will enable us to address more general ecological questions relating to foraging theory and the recent metabolic theory of ecology. 5. We conclude by suggesting a number of 'dead ends' and 'fruitful avenues' for future research into ecological networks." }, { "pmid": "26609742", "title": "The Importance of Temporal Design: How Do Measurement Intervals Affect the Accuracy and Efficiency of Parameter Estimates in Longitudinal Research?", "abstract": "The timing (spacing) of assessments is an important component of longitudinal research. The purpose of the present study is to determine methods of timing the collection of longitudinal data that provide better parameter recovery in mixed effects nonlinear growth modeling. A simulation study was conducted, varying function type, as well as the number of measurement occasions, in order to examine the effect of timing on the accuracy and efficiency of parameter estimates. The number of measurement occasions was associated with greater efficiency for all functional forms and was associated with greater accuracy for the intrinsically nonlinear functions. In general, concentrating measurement occasions toward the left or at the extremes was associated with increased efficiency when estimating the intercepts of intrinsically linear functions, and concentrating values where the curvature of the function was greatest generally resulted in the best recovery for intrinsically nonlinear functions. Results from this study can be used in conjunction with theory to improve the design of longitudinal research studies. In addition, an R program is provided for researchers to run customized simulations to identify optimal sampling schedules for their own research." }, { "pmid": "20691543", "title": "Use of cigarettes to improve affect and depressive symptoms in a longitudinal study of adolescents.", "abstract": "Smoking to alleviate negative affect or improve physiological functioning (i.e., self-medication) is one explanation for the association between depression and smoking in adolescents. This study tests whether using cigarettes to improve mood or physiological functioning is associated with the onset, and change over time, of elevated depressive symptoms. Data were drawn from the Nicotine Dependence in Teens study which followed 1293 participants initially aged 12-13 years in Montreal, Canada every three months for five years. The subsample included 662 adolescents who had been current smokers (reported smoking during the previous three months) at any point during the study. Survival analysis was used to test whether self-medication scores predicted onset of elevated depressive symptoms. 
Changes over time in depressive symptom scores relative to self-medication scores were modeled in growth curve analyses controlling for sex and number of cigarettes smoked per week. Smokers who reported higher self-medication scores had higher depressive symptom scores. The interaction between self-medication scores and the acceleration rate in depressive symptom scores was significant and negative, suggesting that participants with higher self-medication scores had decelerated rates of change in depression over time compared to participants with lower self-medication scores. Smoking appears to be ineffective at reducing depressive symptoms. These findings are consistent with a version of the Positive Resource Model that suggests that smoking will not lower depressive symptoms, but could slow the rate of change over time. Alternatively, the perceived positive benefits may be the result of alleviation of symptoms of withdrawal and craving resulting from abstinence. The self-medication scale may identify a population at risk of increased levels of depressive symptoms." }, { "pmid": "24443639", "title": "A Separable Model for Dynamic Networks.", "abstract": "Models of dynamic networks - networks that evolve over time - have manifold applications. We develop a discrete-time generative model for social network evolution that inherits the richness and flexibility of the class of exponential-family random graph models. The model - a Separable Temporal ERGM (STERGM) - facilitates separable modeling of the tie duration distributions and the structural dynamics of tie formation. We develop likelihood-based inference for the model, and provide computational algorithms for maximum likelihood estimation. We illustrate the interpretability of the model in analyzing a longitudinal network of friendship ties within a school." }, { "pmid": "27942416", "title": "Ckmeans.1d.dp: Optimal k-means Clustering in One Dimension by Dynamic Programming.", "abstract": "The heuristic k-means algorithm, widely used for cluster analysis, does not guarantee optimality. We developed a dynamic programming algorithm for optimal one-dimensional clustering. The algorithm is implemented as an R package called Ckmeans.1d.dp. We demonstrate its advantage in optimality and runtime over the standard iterative k-means algorithm." }, { "pmid": "24141695", "title": "Quantifying the effect of temporal resolution on time-varying networks.", "abstract": "Time-varying networks describe a wide array of systems whose constituents and interactions evolve over time. They are defined by an ordered stream of interactions between nodes, yet they are often represented in terms of a sequence of static networks, each aggregating all edges and nodes present in a time interval of size Δt. In this work we quantify the impact of an arbitrary Δt on the description of a dynamical process taking place upon a time-varying network. We focus on the elementary random walk, and put forth a simple mathematical framework that well describes the behavior observed on real datasets. The analytical description of the bias introduced by time integrating techniques represents a step forward in the correct characterization of dynamical processes on time-varying graphs." } ]
Scientific Reports
29079836
PMC5660213
10.1038/s41598-017-13923-x
Small Molecule Accurate Recognition Technology (SMART) to Enhance Natural Products Research
Various algorithms comparing 2D nuclear magnetic resonance (NMR) spectra have been explored for their ability to dereplicate natural products as well as determine molecular structures. However, spectroscopic artefacts, solvent effects, and the interactive effect of functional group(s) on chemical shifts combine to hinder their effectiveness. Here, we leveraged Non-Uniform Sampling (NUS) 2D NMR techniques and deep Convolutional Neural Networks (CNNs) to create a tool, SMART, that can assist in natural products discovery efforts. First, an NUS heteronuclear single quantum coherence (HSQC) NMR pulse sequence was adapted to a state-of-the-art NMR instrument and the data reconstruction methods were optimized; second, a deep CNN with contrastive loss was trained on a database containing over 2,054 HSQC spectra as the training set. To demonstrate the utility of SMART, several newly isolated compounds were automatically located alongside their known analogues in the embedded clustering space, thereby streamlining the discovery pipeline for new natural products.
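The abstract describes training a deep CNN with a contrastive loss so that HSQC spectra of structurally related compounds end up close together in the learned embedding space. As a hedged illustration only (not the SMART implementation, whose architecture and hyperparameters are not given here), the short NumPy sketch below shows the standard pairwise contrastive loss; the names `z1`, `z2`, `same`, and the `margin` value are illustrative assumptions.

```python
# Hedged illustration (not the SMART code): the pairwise contrastive loss that
# such an embedding CNN is typically trained with. `z1` and `z2` are embedding
# vectors of two HSQC spectra, `same` is True if they belong to the same
# compound family, and `margin` is an assumed hyperparameter.
import numpy as np

def contrastive_loss(z1, z2, same, margin=1.0):
    d = np.linalg.norm(z1 - z2)              # Euclidean distance in the embedding space
    if same:
        return 0.5 * d ** 2                  # pull same-family spectra together
    return 0.5 * max(0.0, margin - d) ** 2   # push different-family spectra at least `margin` apart
```

Minimizing a loss of this kind over many labelled pairs is what allows newly acquired spectra to be placed near their known analogues in the clustering space.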
Related workAgain, the aforementioned grid-cell approaches28 are similar to ours in that the shifted grid positions can be thought of as corresponding to the first layer of convolutions, which have small receptive fields (like grid cells) and are shifted across the input space like shifted grids. Also, our approach uses layers of convolutions that can capture multi-scale similarities. The grid-cell approaches, however, use hand-designed features (i.e., counts of peaks within each grid cell), and the similarities are computed with simple distance measures. In particular, PLSI and LSR are linear techniques applied to hand-designed features. Furthermore, other representations, for example the 'tree-based' method59, also rely on data structures designed by the researcher. Our approach, using deep networks and gradient descent, allows higher-level, nonlinear features to be learned in the service of the task. This is similar to modern approaches in computer vision, a field that since 2012 has shifted away from hand-designed features toward deep networks with learned features, leading to orders-of-magnitude better performance. Just as deep networks applied to computer vision tasks have learned to deal with common problems, such as recognizing objects and faces under different lighting conditions and poses, our CNN pattern recognition-based method can overcome solvent effects, instrumental artefacts, and weak signal issues. It is difficult to directly compare Wolfram et al.'s results to ours because they used a much smaller dataset (132 compounds) from 10 well-separated families, which is not enough data to train a deep network. To further compare our approach with other NMR pattern recognition approaches, we generated precision-recall curves (Fig. 5) using SMART trained with the SMART5 and SMART10 databases (Fig. 6). Considering SMART as a search engine, precision-recall curves help evaluate SMART's ability to find the most relevant chemical structures while taking into account the non-relevant compounds that are retrieved. In our approach to HSQC spectrum recognition/retrieval, precision is the percentage of correct compounds among the total number retrieved, while recall is the percentage of all relevant compounds that are actually retrieved. Therefore, higher precision indicates a lower false positive rate, and higher recall indicates a lower false negative rate. The precision-recall curves of our approach show high precision peaks at low recall rates, suggesting that SMART retrieves at least some relevant structures within the first 10–20% of compounds returned, indicating that SMART returns accurate chemical structures. To compare this to a linear embedding, we performed PCA on the SMART5 and SMART10 databases separately. The precision-recall curves of those PCA embeddings are much worse than those obtained with the CNN (Fig. 5); an illustrative sketch of this retrieval evaluation is given after the figure captions below.
Figure 5. Precision-recall curves measured across 10-fold validation for different dimensions (dim) of embeddings. (a) and (b) Mean precision-recall curves on test HSQC spectra for the SMART5 and SMART10 datasets, respectively. (c) and (d) Mean precision-recall curves with error curves (grey) for SMART5 and SMART10, respectively. (e) and (f) Mean precision-recall curves for SMART5 and SMART10 clustered by Principal Component Analysis (PCA) without use of the CNN. AUC: area under the curve.
Figure 6. Distribution in the Training Dataset of Numbers of Families Containing Different Numbers of Individual Compounds.
The SMART5 training set contains 238 compound subfamilies, giving rise to 2,054 HSQC spectra in total (blue and green). The SMART10 training set contains 69 compound subfamilies and is composed of 911 HSQC spectra in total (green only).
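To make the retrieval evaluation described above concrete, here is a minimal sketch (Python/NumPy, not the published SMART code) of how precision-recall curves can be computed for an embedding-based search engine: each spectrum serves as a query, spectra from the same compound family count as relevant, and precision@k and recall@k are averaged over queries. The array names `embeddings` and `labels` are assumed placeholders; the same routine can be run on CNN embeddings and on PCA projections to reproduce the kind of comparison shown in Fig. 5.

```python
# Minimal sketch (assumed names, not the authors' code): precision-recall
# evaluation of an embedding space used for spectral retrieval.
import numpy as np

def retrieval_precision_recall(embeddings, labels):
    """Average precision@k and recall@k, treating each spectrum as a query and
    every other spectrum with the same family label as a relevant hit."""
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)
    # Pairwise Euclidean distances between all embedded spectra.
    dists = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    precisions, recalls = [], []
    for q in range(n):
        order = np.argsort(dists[q])
        order = order[order != q]                  # exclude the query itself
        relevant = labels[order] == labels[q]      # same compound family = relevant
        if relevant.sum() == 0:                    # singleton family: nothing to retrieve
            continue
        hits = np.cumsum(relevant)
        k = np.arange(1, len(order) + 1)
        precisions.append(hits / k)                # precision@k
        recalls.append(hits / relevant.sum())      # recall@k
    return np.mean(precisions, axis=0), np.mean(recalls, axis=0)

# Toy usage: random vectors stand in for CNN (or PCA) embeddings of HSQC spectra.
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 10))
fam = rng.integers(0, 20, size=200)
precision, recall = retrieval_precision_recall(emb, fam)
```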
[ "26852623", "24149839", "26284661", "26284660", "20179874", "23291908", "22481242", "21538743", "25901905", "21796515", "22331404", "21773916", "21314130", "18052244", "26017442", "25462637", "9887531", "19635374", "24140622", "22663190", "25036668", "24047259", "24484201", "24256462", "23772699", "24957203", "24660966", "17790723", "22924493", "11922791", "15787510", "21488639", "22742732", "8520220", "25871261" ]
[ { "pmid": "26852623", "title": "Natural Products as Sources of New Drugs from 1981 to 2014.", "abstract": "This contribution is a completely updated and expanded version of the four prior analogous reviews that were published in this journal in 1997, 2003, 2007, and 2012. In the case of all approved therapeutic agents, the time frame has been extended to cover the 34 years from January 1, 1981, to December 31, 2014, for all diseases worldwide, and from 1950 (earliest so far identified) to December 2014 for all approved antitumor drugs worldwide. As mentioned in the 2012 review, we have continued to utilize our secondary subdivision of a \"natural product mimic\", or \"NM\", to join the original primary divisions and the designation \"natural product botanical\", or \"NB\", to cover those botanical \"defined mixtures\" now recognized as drug entities by the U.S. FDA (and similar organizations). From the data presented in this review, the utilization of natural products and/or their novel structures, in order to discover and develop the final drug entity, is still alive and well. For example, in the area of cancer, over the time frame from around the 1940s to the end of 2014, of the 175 small molecules approved, 131, or 75%, are other than \"S\" (synthetic), with 85, or 49%, actually being either natural products or directly derived therefrom. In other areas, the influence of natural product structures is quite marked, with, as expected from prior information, the anti-infective area being dependent on natural products and their structures. We wish to draw the attention of readers to the rapidly evolving recognition that a significant number of natural product drugs/leads are actually produced by microbes and/or microbial interactions with the \"host from whence it was isolated\", and therefore it is considered that this area of natural product research should be expanded significantly." }, { "pmid": "24149839", "title": "MS/MS-based networking and peptidogenomics guided genome mining revealed the stenothricin gene cluster in Streptomyces roseosporus.", "abstract": "Most (75%) of the anti-infectives that save countless lives and enormously improve quality of life originate from microbes found in nature. Herein, we described a global visualization of the detectable molecules produced from a single microorganism, which we define as the 'molecular network' of that organism, followed by studies to characterize the cellular effects of antibacterial molecules. We demonstrate that Streptomyces roseosporus produces at least four non-ribosomal peptide synthetase-derived molecular families and their gene subnetworks (daptomycin, arylomycin, napsamycin and stenothricin) were identified with different modes of action. A number of previously unreported analogs involving truncation, glycosylation, hydrolysis and biosynthetic intermediates and/or shunt products were also captured and visualized by creation of a map through MS/MS networking. The diversity of antibacterial compounds produced by S. roseosporus highlights the importance of developing new approaches to characterize the molecular capacity of an organism in a more global manner. This allows one to more deeply interrogate the biosynthetic capacities of microorganisms with the goal to streamline the discovery pipeline for biotechnological applications in agriculture and medicine. This is a contribution to a special issue to honor Chris Walsh's amazing career." 
}, { "pmid": "20179874", "title": "NMR of natural products at the 'nanomole-scale'.", "abstract": "Over the last decade, dramatic improvements in mass-sensitivity of NMR through low-volume tube probes and capillary probes, coupled with cryogenically cooled radiofrequency (rf) coils and preamplifier components, have provided chemists with new capabilities for exploration of submilligram natural product samples. These innovations led to an approximate 20-fold increase in mass sensitivity compared with conventional NMR instrumentation at the same field. Now, full characterization by 1D and 2D NMR of natural products--some available only in vanishingly small amounts, down to approximately 1 nanomole--can be achieved in reasonable time-frames. In this Highlight, some recent applications of the new NMR methodology to the area of natural products discovery are discussed, along with a perspective of practical limitations and potential future applications in new areas." }, { "pmid": "23291908", "title": "Using NMR to identify and characterize natural products.", "abstract": "Over the past 28 years there have been several thousand publications describing the use of 2D NMR to identify and characterize natural products. During this time period, the amount of sample needed for this purpose has decreased from the 20-50 mg range to under 1 mg. This has been due to both improvements in NMR hardware and methodology. This review will focus on mainly methodology improvements, particularly in pulse sequences, acquisition and processing methods which are particularly relevant to natural product research, with lesser discussion of hardware improvements." }, { "pmid": "22481242", "title": "Sparse sampling methods in multidimensional NMR.", "abstract": "Although the discrete Fourier transform played an enabling role in the development of modern NMR spectroscopy, it suffers from a well-known difficulty providing high-resolution spectra from short data records. In multidimensional NMR experiments, so-called indirect time dimensions are sampled parametrically, with each instance of evolution times along the indirect dimensions sampled via separate one-dimensional experiments. The time required to conduct multidimensional experiments is directly proportional to the number of indirect evolution times sampled. Despite remarkable advances in resolution with increasing magnetic field strength, multiple dimensions remain essential for resolving individual resonances in NMR spectra of biological macromolecues. Conventional Fourier-based methods of spectrum analysis limit the resolution that can be practically achieved in the indirect dimensions. Nonuniform or sparse data collection strategies, together with suitable non-Fourier methods of spectrum analysis, enable high-resolution multidimensional spectra to be obtained. Although some of these approaches were first employed in NMR more than two decades ago, it is only relatively recently that they have been widely adopted. Here we describe the current practice of sparse sampling methods and prospects for further development of the approach to improve resolution and sensitivity and shorten experiment time in multidimensional NMR. While sparse sampling is particularly promising for multidimensional NMR, the basic principles could apply to other forms of multidimensional spectroscopy." 
}, { "pmid": "25901905", "title": "Sensitivity of nonuniform sampling NMR.", "abstract": "Many information-rich multidimensional experiments in nuclear magnetic resonance spectroscopy can benefit from a signal-to-noise ratio (SNR) enhancement of up to about 2-fold if a decaying signal in an indirect dimension is sampled with nonconsecutive increments, termed nonuniform sampling (NUS). This work provides formal theoretical results and applications to resolve major questions about the scope of the NUS enhancement. First, we introduce the NUS Sensitivity Theorem in which any decreasing sampling density applied to any exponentially decaying signal always results in higher sensitivity (SNR per square root of measurement time) than uniform sampling (US). Several cases will illustrate this theorem and show that even conservative applications of NUS improve sensitivity by useful amounts. Next, we turn to a serious limitation of uniform sampling: the SNR by US decreases for extending evolution times, and thus total experimental times, beyond 1.26T2 (T2 = signal decay constant). Thus, SNR and resolution cannot be simultaneously improved by extending US beyond 1.26T2. We find that NUS can eliminate this constraint, and we introduce the matched NUS SNR Theorem: an exponential sampling density matched to the signal decay always improves the SNR with additional evolution time. Though proved for a specific case, broader classes of NUS densities also improve SNR with evolution time. Applications of these theoretical results are given for a soluble plant natural product and a solid tripeptide (u-(13)C,(15)N-MLF). These formal results clearly demonstrate the inadequacies of applying US to decaying signals in indirect nD-NMR dimensions, supporting a broader adoption of NUS." }, { "pmid": "21796515", "title": "Applications of non-uniform sampling and processing.", "abstract": "Modern high-field NMR instruments provide unprecedented resolution. To make use of the resolving power in multidimensional NMR experiment standard linear sampling through the indirect dimensions to the maximum optimal evolution times (~1.2T (2)) is not practical because it would require extremely long measurement times. Thus, alternative sampling methods have been proposed during the past 20 years. Originally, random nonlinear sampling with an exponentially decreasing sampling density was suggested, and data were transformed with a maximum entropy algorithm (Barna et al., J Magn Reson 73:69-77, 1987). Numerous other procedures have been proposed in the meantime. It has become obvious that the quality of spectra depends crucially on the sampling schedules and the algorithms of data reconstruction. Here we use the forward maximum entropy (FM) reconstruction method to evaluate several alternate sampling schedules. At the current stage, multidimensional NMR spectra that do not have a serious dynamic range problem, such as triple resonance experiments used for sequential assignments, are readily recorded and faithfully reconstructed using non-uniform sampling. Thus, these experiments can all be recorded non-uniformly to utilize the power of modern instruments. On the other hand, for spectra with a large dynamic range, such as 3D and 4D NOESYs, choosing optimal sampling schedules and the best reconstruction method is crucial if one wants to recover very weak peaks. Thus, this chapter is focused on selecting the best sampling schedules and processing methods for high-dynamic range spectra." 
}, { "pmid": "22331404", "title": "Application of iterative soft thresholding for fast reconstruction of NMR data non-uniformly sampled with multidimensional Poisson Gap scheduling.", "abstract": "The fast Fourier transformation has been the gold standard for transforming data from time to frequency domain in many spectroscopic methods, including NMR. While reliable, it has as a drawback that it requires a grid of uniformly sampled data points. This needs very long measuring times for sampling in multidimensional experiments in all indirect dimensions uniformly and even does not allow reaching optimal evolution times that would match the resolution power of modern high-field instruments. Thus, many alternative sampling and transformation schemes have been proposed. Their common challenges are the suppression of the artifacts due to the non-uniformity of the sampling schedules, the preservation of the relative signal amplitudes, and the computing time needed for spectra reconstruction. Here we present a fast implementation of the Iterative Soft Thresholding approach (istHMS) that can reconstruct high-resolution non-uniformly sampled NMR data up to four dimensions within a few hours and make routine reconstruction of high-resolution NUS 3D and 4D spectra convenient. We include a graphical user interface for generating sampling schedules with the Poisson-Gap method and an estimation of optimal evolution times based on molecular properties. The performance of the approach is demonstrated with the reconstruction of non-uniformly sampled medium and high-resolution 3D and 4D protein spectra acquired with sampling densities as low as 0.8%. The method presented here facilitates acquisition, reconstruction and use of multidimensional NMR spectra at otherwise unreachable spectral resolution in indirect dimensions." }, { "pmid": "21773916", "title": "Data sampling in multidimensional NMR: fundamentals and strategies.", "abstract": "Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (free induction decay) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform, the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records were already well understood, and despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. In this chapter we review the fundamentals of uniform and nonuniform sampling methods in one- and multidimensional NMR." 
}, { "pmid": "21314130", "title": "Hierarchical alignment and full resolution pattern recognition of 2D NMR spectra: application to nematode chemical ecology.", "abstract": "Nuclear magnetic resonance (NMR) is the most widely used nondestructive technique in analytical chemistry. In recent years, it has been applied to metabolic profiling due to its high reproducibility, capacity for relative and absolute quantification, atomic resolution, and ability to detect a broad range of compounds in an untargeted manner. While one-dimensional (1D) (1)H NMR experiments are popular in metabolic profiling due to their simplicity and fast acquisition times, two-dimensional (2D) NMR spectra offer increased spectral resolution as well as atomic correlations, which aid in the assignment of known small molecules and the structural elucidation of novel compounds. Given the small number of statistical analysis methods for 2D NMR spectra, we developed a new approach for the analysis, information recovery, and display of 2D NMR spectral data. We present a native 2D peak alignment algorithm we term HATS, for hierarchical alignment of two-dimensional spectra, enabling pattern recognition (PR) using full-resolution spectra. Principle component analysis (PCA) and partial least squares (PLS) regression of full resolution total correlation spectroscopy (TOCSY) spectra greatly aid the assignment and interpretation of statistical pattern recognition results by producing back-scaled loading plots that look like traditional TOCSY spectra but incorporate qualitative and quantitative biological information of the resonances. The HATS-PR methodology is demonstrated here using multiple 2D TOCSY spectra of the exudates from two nematode species: Pristionchus pacificus and Panagrellus redivivus. We show the utility of this integrated approach with the rapid, semiautomated assignment of small molecules differentiating the two species and the identification of spectral regions suggesting the presence of species-specific compounds. These results demonstrate that the combination of 2D NMR spectra with full-resolution statistical analysis provides a platform for chemical and biological studies in cellular biochemistry, metabolomics, and chemical ecology." }, { "pmid": "18052244", "title": "Toward more reliable 13C and 1H chemical shift prediction: a systematic comparison of neural-network and least-squares regression based approaches.", "abstract": "The efficacy of neural network (NN) and partial least-squares (PLS) methods is compared for the prediction of NMR chemical shifts for both 1H and 13C nuclei using very large databases containing millions of chemical shifts. The chemical structure description scheme used in this work is based on individual atoms rather than functional groups. The performances of each of the methods were optimized in a systematic manner described in this work. Both of the methods, least-squares and neural network analyses, produce results of a very similar quality, but the least-squares algorithm is approximately 2--3 times faster." }, { "pmid": "26017442", "title": "Deep learning.", "abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. 
Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech." }, { "pmid": "25462637", "title": "Deep learning in neural networks: an overview.", "abstract": "In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks." }, { "pmid": "9887531", "title": "Curacin D, an antimitotic agent from the marine cyanobacterium Lyngbya majuscula.", "abstract": "Curacin D is a novel brine shrimp toxic metabolite isolated from a Virgin Islands collection of the marine cyanobacterium Lyngbya majuscula. Structure elucidation of curacin D was accomplished through multidimensional NMR, GC/MS, and comparisons with curacin A. Curacin D provides new insights into structure-activity relationships in this natural product class as well as some aspects of the likely biosynthetic pathway of the curacins." }, { "pmid": "19635374", "title": "Automatic compatibility tests of HSQC NMR spectra with proposed structures of chemical compounds.", "abstract": "A recently introduced similarity measure is extended here for comparing two-dimensional spectra. Its applicability is demonstrated with heteronuclear single-quantum correlation (HSQC) NMR spectra. For testing the compatibility of a spectrum with the proposed chemical structure, first, the spectrum is predicted on the basis of that structure and then, the proposed comparison algorithm is applied. In this context, the topics of optimization are peak picking, signal intensity measures, and optimizing the parameters of the two-dimensional comparison method. The performance is analyzed with a test set of 289 structures of organic compounds and their HSQC and (1)H NMR spectra. The results obtained with HSQC spectra are better than those achieved using the previously described one-dimensional similarity test with (1)H NMR spectra alone." }, { "pmid": "24140622", "title": "Sampling scheme and compressed sensing applied to solid-state NMR spectroscopy.", "abstract": "We describe the incorporation of non-uniform sampling (NUS) compressed sensing (CS) into oriented sample (OS) solid-state NMR for stationary aligned samples and magic angle spinning (MAS) Solid-state NMR for unoriented 'powder' samples. Both simulated and experimental results indicate that 25-33% of a full linearly sampled data set is required to reconstruct two- and three-dimensional solid-state NMR spectra with high fidelity. A modest increase in signal-to-noise ratio accompanies the reconstruction." 
}, { "pmid": "22663190", "title": "Polyhydroxylated steroidal glycosides from Paris polyphylla.", "abstract": "Three new steroidal saponins, parisyunnanosides G-I (1-3), one new C(21) steroidal glycoside, parisyunnanoside J (4), and three known compounds, padelaoside B (5), pinnatasterone (6), and 20-hydroxyecdyson (7), were isolated from the rhizomes of Paris polyphylla Smith var. yunnanensis. Compounds 1 and 3 have unique trisdesmoside structures that include a C-21 β-d-galactopyranose moiety. All compounds were evaluated for their cytotoxicity against human CCRF leukemia cells." }, { "pmid": "25036668", "title": "Anti-Inflammatory Spirostanol and Furostanol Saponins from Solanum macaonense.", "abstract": "Eight new spirostanol saponins, macaosides A-H (1-8), and 10 new furostanol saponins, macaosides I-R (9-18), together with six known spirostanol compounds (19-24) were isolated from Solanum macaonense. The structures of the new compounds were determined from their spectroscopic data, and the compounds were tested for in vitro antineutrophilic inflammatory activity. It was found that both immediate inflammation responses including superoxide anion generation and elastase release were significantly inhibited by treatment with compounds 20, 21, and 24 (superoxide anion generation: IC50 7.0, 7.6, 4.0 μM; elastase release: IC50 3.7, 4.4, 1.0 μM, respectively). However, compounds 1 and 4 exhibited effects on the inhibition of elastase release only, with IC50 values of 3.2 and 4.2 μM, respectively, while 19 was active against superoxide anion generation only, with an IC50 value of 6.1 μM. Accordingly, spirostanols may be promising lead compounds for further neutrophilic inflammatory disease studies." }, { "pmid": "24047259", "title": "Anti-inflammatory asterosaponins from the starfish Astropecten monacanthus.", "abstract": "Four new asterosaponins, astrosteriosides A-D (1-3 and 5), and two known compounds, psilasteroside (4) and marthasteroside B (6), were isolated from the MeOH extract of the edible Vietnamese starfish Astropecten monacanthus. Their structures were elucidated by chemical and spectroscopic methods including FTICRMS and 1D and 2D NMR experiments. The effects of the extracts and isolated compounds on pro-inflammatory cytokines were evaluated by measuring the production of IL-12 p40, IL-6, and TNF-α in LPS-stimulated bone marrow-derived dendritic cells. Compounds 1, 5, and 6 exhibited potent anti-inflammatory activity comparable to that of the positive control. Further studies are required to confirm efficacy in vivo and the mechanism of effects. Such potent anti-inflammatory activities render compounds 1, 5, and 6 important materials for further applications including complementary inflammation remedies and/or functional foods and nutraceuticals." }, { "pmid": "24484201", "title": "Antihyperglycemic glucosylated coumaroyltyramine derivatives from Teucrium viscidum.", "abstract": "Eight new glucosylated coumaroyltyramine derivatives, teuvissides A-H (1-8), were isolated from whole plants of Teucrium viscidum. Their structures were elucidated using spectroscopic data and chemical methods. The antihyperglycemic activities of these compounds were evaluated in HepG2 cells and 3T3-L1 adipocytes, and all of the isolates elicited different levels of glucose consumption at a concentration of 2.0 μM. 
Teuvissides A (1), B (2), and F (6) induced 2.2-, 2.1-, and 2.2-fold changes, respectively, in the levels of glucose consumption in HepG2 cells and 2.5-, 2.1-, and 2.3-fold changes, respectively, in 3T3-L1 adipocytes relative to the basal levels." }, { "pmid": "24256462", "title": "Limonoids from Aphanamixis polystachya and their antifeedant activity.", "abstract": "Eight new aphanamixoid-type aphanamixoids (C-J, 1-8) and six new prieurianin-type limonoids, aphanamixoids K-P (9-14), along with 10 known terpenoids were isolated from Aphanamixis polystachya, and their structures were established by spectroscopic data analysis. Among the new limonoids, 13 compounds exhibited antifeedant activity against the generalist Helicoverpa armigera, a plant-feeding insect, at various concentration levels. In particular, compounds 1, 4, and 5 showed potent activities with EC50 values of 0.017, 0.008, and 0.012 μmol/cm(2), respectively. On the basis of a preliminary structure-activity relationship analysis, some potential active sites in the aphanamixoid-type limonoid molecules are proposed." }, { "pmid": "23772699", "title": "Bioactive terpenoids from the fruits of Aphanamixis grandifolia.", "abstract": "From the fruits of the tropical tree Aphanamixis grandifolia, five new evodulone-type limonoids, aphanalides I-M (1-5), one new apo-tirucallane-type triterpenoid, polystanin E (6), and three new chain-like diterpenoids, nemoralisins A-C (7-9), along with 12 known compounds were identified. The absolute configurations were determined by a combination of single-crystal X-ray diffraction studies, Mo2(OAc)4-induced electronic circular dichroism (ECD) data, the Mosher ester method, and calculated ECD data. The cytotoxicities of all the isolates and the insecticidal activities of the limonoids were evaluated." }, { "pmid": "24957203", "title": "Uralsaponins M-Y, antiviral triterpenoid saponins from the roots of Glycyrrhiza uralensis.", "abstract": "Thirteen new oleanane-type triterpenoid saponins, uralsaponins M-Y (1-13), and 15 known analogues (14-28) were isolated from the roots of Glycyrrhiza uralensis Fisch. The structures of 1-13 were identified on the basis of extensive NMR and MS data analyses. The sugar residues were identified by gas chromatography and ion chromatography coupled with pulsed amperometric detection after hydrolysis. Saponins containing a galacturonic acid (1-3) or xylose (5) residue are reported from Glycyrrhiza species for the first time. Compounds 1, 7, 8, and 24 exhibited good inhibitory activities against the influenza virus A/WSN/33 (H1N1) in MDCK cells with IC50 values of 48.0, 42.7, 39.6, and 49.1 μM, respectively, versus 45.6 μM of the positive control oseltamivir phosphate. In addition, compounds 24 and 28 showed anti-HIV activities with IC50 values of 29.5 and 41.7 μM, respectively." }, { "pmid": "24660966", "title": "Anti-inflammatory diterpenoids from the roots of Euphorbia ebracteolata.", "abstract": "Thirteen diterpenoids (1-13), including two new norditerpene lactones (1-2) and eight new rosane diterpenoids (3-10), were isolated from the roots of Euphorbia ebracteolata. The structures were determined by 1D and 2D NMR, HRESIMS, and electronic circular dichroism (ECD). The ECD-based empirical rule for α,β-unsaturated-γ-lactones was applied to determine the absolute configurations of 1 and 2. 
Compounds 7, 10, and 13 exhibited significant inhibition of nitric oxide production in RAW 264.7 lipopolysaccharide-induced macrophages, with IC50 values of 2.44, 2.76, and 1.02 μM, respectively." }, { "pmid": "22924493", "title": "Viequeamide A, a cytotoxic member of the kulolide superfamily of cyclic depsipeptides from a marine button cyanobacterium.", "abstract": "The viequeamides, a family of 2,2-dimethyl-3-hydroxy-7-octynoic acid (Dhoya)-containing cyclic depsipeptides, were isolated from a shallow subtidal collection of a \"button\" cyanobacterium (Rivularia sp.) from near the island of Vieques, Puerto Rico. Planar structures of the two major compounds, viequeamide A (1) and viequeamide B (2), were elucidated by 2D-NMR spectroscopy and mass spectrometry, whereas absolute configurations were determined by traditional hydrolysis, derivative formation, and chromatography in comparison with standards. In addition, a series of related minor metabolites, viequeamides C-F (3-6), were characterized by HRMS fragmentation methods. Viequeamide A was found to be highly toxic to H460 human lung cancer cells (IC(50) = 60 ± 10 nM), whereas the mixture of B-F was inactive. From a broader perspective, the viequeamides help to define a \"superfamily\" of related cyanobacterial natural products, the first of which to be discovered was kulolide. Within the kulolide superfamily, a wide variation in biological properties is observed, and the reported producing strains are also highly divergent, giving rise to several intriguing questions about structure-activity relationships and the evolutionary origins of this metabolite class." }, { "pmid": "11922791", "title": "Somocystinamide A, a novel cytotoxic disulfide dimer from a Fijian marine cyanobacterial mixed assemblage.", "abstract": "[reaction: see text] Bioassay-guided investigation of the extract from a Lyngbya majuscula/Schizothrix sp. assemblage of marine cyanobacteria led to the discovery of somocystinamide A (1), an extraordinary disulfide dimer of mixed PKS/NRPS biosynthetic origin. Somocystinamide A (1) was highly acid-sensitive, rapidly and completely converting to a characterizable derivative (2). Compound 1 exhibits significant cytotoxicity against mouse neuro-2a neuroblastoma cells (IC50 = 1.4 microg/mL), whereas 2 has no activity." }, { "pmid": "15787510", "title": "Isolation of swinholide A and related glycosylated derivatives from two field collections of marine cyanobacteria.", "abstract": "[structure: see text] Chemical investigation of two field collections of marine cyanobacteria has led to the discovery of two new cytotoxic natural products, ankaraholides A (2) and B (3), along with the known compound swinholide A (1). Since swinholide-type compounds were previously localized to the heterotrophic bacteria of sponges, these findings raise intriguing questions about their true metabolic source." }, { "pmid": "21488639", "title": "Cytotoxic veraguamides, alkynyl bromide-containing cyclic depsipeptides from the marine cyanobacterium cf. Oscillatoria margaritifera.", "abstract": "A family of cancer cell cytotoxic cyclodepsipeptides, veraguamides A-C (1-3) and H-L (4-8), were isolated from a collection of cf. Oscillatoria margaritifera obtained from the Coiba National Park, Panama, as part of the Panama International Cooperative Biodiversity Group program. 
The planar structure of veraguamide A (1) was deduced by 2D NMR spectroscopy and mass spectrometry, whereas the structures of 2-8 were mainly determined by a combination of 1H NMR and MS2/MS3 techniques. These new compounds are analogous to the mollusk-derived kulomo'opunalide natural products, with two of the veraguamides (C and H) containing the same terminal alkyne moiety. However, four veraguamides, A, B, K, and L, also feature an alkynyl bromide, a functionality that has been previously observed in only one other marine natural product, jamaicamide A. Veraguamide A showed potent cytotoxicity to the H-460 human lung cancer cell line (LD50=141 nM)." }, { "pmid": "22742732", "title": "Naphthomycins L-N, ansamycin antibiotics from Streptomyces sp. CS.", "abstract": "Previous analyses of the naphthomycin biosynthetic gene cluster and a comparison with known naphthomycin-type products from Streptomyces sp. CS have suggested that new products can be found from this strain. In this study, screening by LC-MS of Streptomyces sp. CS products formed under different culture conditions revealed several unknown peaks in the product spectra of extracts derived from oatmeal medium cultures. Three new naphthomycins, naphthomycins L (1), M (2), and N (3), and the known naphthomycins A (4), E (5), and D (6) were obtained. The structures were elucidated using spectroscopic data from 1D and 2D NMR and HRESIMS experiments." }, { "pmid": "8520220", "title": "NMRPipe: a multidimensional spectral processing system based on UNIX pipes.", "abstract": "The NMRPipe system is a UNIX software environment of processing, graphics, and analysis tools designed to meet current routine and research-oriented multidimensional processing requirements, and to anticipate and accommodate future demands and developments. The system is based on UNIX pipes, which allow programs running simultaneously to exchange streams of data under user control. In an NMRPipe processing scheme, a stream of spectral data flows through a pipeline of processing programs, each of which performs one component of the overall scheme, such as Fourier transformation or linear prediction. Complete multidimensional processing schemes are constructed as simple UNIX shell scripts. The processing modules themselves maintain and exploit accurate records of data sizes, detection modes, and calibration information in all dimensions, so that schemes can be constructed without the need to explicitly define or anticipate data sizes or storage details of real and imaginary channels during processing. The asynchronous pipeline scheme provides other substantial advantages, including high flexibility, favorable processing speeds, choice of both all-in-memory and disk-bound processing, easy adaptation to different data formats, simpler software development and maintenance, and the ability to distribute processing tasks on multi-CPU computers and computer networks." }, { "pmid": "25871261", "title": "Polycyclic Polyprenylated Acylphloroglucinol Congeners Possessing Diverse Structures from Hypericum henryi.", "abstract": "Polycyclic polyprenylated acylphloroglucinols (PPAPs) are a class of hybrid natural products sharing the mevalonate/methylerythritol phosphate and polyketide biosynthetic pathways and showing considerable structural and bioactive diversity. In a systematic phytochemical investigation of Hypericum henryi, 40 PPAP-type derivatives, including the new compounds hyphenrones G-Q, were obtained. 
These compounds represent 12 different structural types, including four unusual skeletons exemplified by 5, 8, 10, and 17. The 12 different core structures found are explicable in terms of their biosynthetic origin. The structure of a known PPAP, perforatumone, was revised to hyphenrone A (5) by NMR spectroscopic and biomimetic synthesis methods. Several compounds exhibited inhibitory activities against acetylcholinesterase and human tumor cell lines. This study deals with the structural diversity, function, and biogenesis of natural PPAPs." } ]
Scientific Reports
29093451
PMC5665979
10.1038/s41598-017-12884-5
Sleep Benefits Memory for Semantic Category Structure While Preserving Exemplar-Specific Information
Semantic memory encompasses knowledge about both the properties that typify concepts (e.g. robins, like all birds, have wings) as well as the properties that individuate conceptually related items (e.g. robins, in particular, have red breasts). We investigate the impact of sleep on new semantic learning using a property inference task in which both kinds of information are initially acquired equally well. Participants learned about three categories of novel objects possessing some properties that were shared among category exemplars and others that were unique to an exemplar, with exposure frequency varying across categories. In Experiment 1, memory for shared properties improved and memory for unique properties was preserved across a night of sleep, while memory for both feature types declined over a day awake. In Experiment 2, memory for shared properties improved across a nap, but only for the lower-frequency category, suggesting a prioritization of weakly learned information early in a sleep period. The increase was significantly correlated with amount of REM, but was also observed in participants who did not enter REM, suggesting involvement of both REM and NREM sleep. The results provide the first evidence that sleep improves memory for the shared structure of object categories, while simultaneously preserving object-unique information.
Other related workThere have been a few prior studies finding effects of sleep on semantic memory45,51, but they have focused on integrating new information with existing semantic memory networks, not on learning an entirely novel domain, as in our study. One study that did look at novel conceptual learning found retention of memory for category exemplars, as well as retention of the ability to generalize to novel exemplars and never-seen prototypes, across a night of sleep but not a day awake in a dot pattern categorization task52. This study is consistent with ours in suggesting a benefit of overnight sleep for both unique and shared structure. They did not find any reliable above-baseline improvements, but there was numerical improvement in novel exemplar accuracy similar in magnitude to our shared feature effects, and statistical power may have been lower due to a between-subjects design and fewer subjects. Another study found increased generalization in an object categorization task over the course of an afternoon delay in both nap and wake conditions, with no difference between the two53. The task assessed memory or inference for the locations of faces, where some locations were predicted by feature rules (e.g. faces at one location were all young, stout, and had no headwear) and other locations had no rules. Overall memory for studied faces decreased over time, but more so for faces at locations without feature rules, suggesting a benefit due to shared structure at the rule locations. While there are many differences between this paradigm and ours, our findings suggest the possibility that the forgetting and lack of sleep-wake differences observed may be due to averaging across all memories instead of focusing on the weaker memories; our Experiment 2 findings averaged across frequency are qualitatively similar to the findings from this study.
[ "16797219", "7624455", "14599236", "22775499", "23589831", "24813468", "17500644", "17488206", "20035789", "26856905", "25498222", "15576888", "19926780", "23055117", "18274266", "18391183", "12061756", "1519015", "2265922", "21967958", "21742389", "26227582", "17085038", "21335017", "17449637", "28056104", "14534586", "12819785", "20176120", "25447376", "27481220", "20046194", "23755173", "16913948", "22110606", "19189775", "25174663", "23962141", "28211489", "27046022", "24068804", "28288833", "25129237", "27881854", "22998869", "15470431" ]
[ { "pmid": "16797219", "title": "Theory-based Bayesian models of inductive learning and reasoning.", "abstract": "Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of the world. Traditional accounts of induction emphasize either the power of statistical learning, or the importance of strong constraints from structured domain knowledge, intuitive theories or schemas. We argue that both components are necessary to explain the nature, use and acquisition of human knowledge, and we introduce a theory-based Bayesian framework for modeling inductive learning and reasoning as statistical inferences over structured knowledge representations." }, { "pmid": "7624455", "title": "Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory.", "abstract": "Damage to the hippocampal system disrupts recent memory but leaves remote memory intact. The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical synapses change a little on each reinstatement, and that remote memory is based on accumulated neocortical changes. Models that learn via changes to connections help explain this organization. These models discover the structure in ensembles of items if learning of each item is gradual and interleaved with learning about other items. This suggests that the neocortex learns slowly to discover the structure in ensembles of experiences. The hippocampal system permits rapid learning of new items without disrupting this structure, and reinstatement of new memories interleaves them with others to integrate them into structured neocortical memory systems." }, { "pmid": "14599236", "title": "Modeling hippocampal and neocortical contributions to recognition memory: a complementary-learning-systems approach.", "abstract": "The authors present a computational neural-network model of how the hippocampus and medial temporal lobe cortex (MTLC) contribute to recognition memory. The hippocampal component contributes by recalling studied details. The MTLC component cannot support recall, but one can extract a scalar familiarity signal from MTLC that tracks how well a test item matches studied items. The authors present simulations that establish key differences in the operating characteristics of the hippocampal-recall and MTLC-familiarity signals and identify several manipulations (e.g., target-lure similarity, interference) that differentially affect the 2 signals. They also use the model to address the stochastic relationship between recall and familiarity and the effects of partial versus complete hippocampal lesions on recognition." }, { "pmid": "22775499", "title": "Generalization through the recurrent interaction of episodic memories: a model of the hippocampal system.", "abstract": "In this article, we present a perspective on the role of the hippocampal system in generalization, instantiated in a computational model called REMERGE (recurrency and episodic memory results in generalization). 
We expose a fundamental, but neglected, tension between prevailing computational theories that emphasize the function of the hippocampus in pattern separation (Marr, 1971; McClelland, McNaughton, & O'Reilly, 1995), and empirical support for its role in generalization and flexible relational memory (Cohen & Eichenbaum, 1993; Eichenbaum, 1999). Our account provides a means by which to resolve this conflict, by demonstrating that the basic representational scheme envisioned by complementary learning systems theory (McClelland et al., 1995), which relies upon orthogonalized codes in the hippocampus, is compatible with efficient generalization-as long as there is recurrence rather than unidirectional flow within the hippocampal circuit or, more widely, between the hippocampus and neocortex. We propose that recurrent similarity computation, a process that facilitates the discovery of higher-order relationships between a set of related experiences, expands the scope of classical exemplar-based models of memory (e.g., Nosofsky, 1984) and allows the hippocampus to support generalization through interactions that unfold within a dynamically created memory space." }, { "pmid": "23589831", "title": "About sleep's role in memory.", "abstract": "Over more than a century of research has established the fact that sleep benefits the retention of memory. In this review we aim to comprehensively cover the field of \"sleep and memory\" research by providing a historical perspective on concepts and a discussion of more recent key findings. Whereas initial theories posed a passive role for sleep enhancing memories by protecting them from interfering stimuli, current theories highlight an active role for sleep in which memories undergo a process of system consolidation during sleep. Whereas older research concentrated on the role of rapid-eye-movement (REM) sleep, recent work has revealed the importance of slow-wave sleep (SWS) for memory consolidation and also enlightened some of the underlying electrophysiological, neurochemical, and genetic mechanisms, as well as developmental aspects in these processes. Specifically, newer findings characterize sleep as a brain state optimizing memory consolidation, in opposition to the waking brain being optimized for encoding of memories. Consolidation originates from reactivation of recently encoded neuronal memory representations, which occur during SWS and transform respective representations for integration into long-term memory. Ensuing REM sleep may stabilize transformed memories. While elaborated with respect to hippocampus-dependent memories, the concept of an active redistribution of memory representations from networks serving as temporary store into long-term stores might hold also for non-hippocampus-dependent memory, and even for nonneuronal, i.e., immunological memories, giving rise to the idea that the offline consolidation of memory during sleep represents a principle of long-term memory formation established in quite different physiological systems." }, { "pmid": "24813468", "title": "The reorganisation of memory during sleep.", "abstract": "Sleep after learning promotes the quantitative strengthening of new memories. Less is known about the impact of sleep on the qualitative reorganisation of memory, which is the focus of this review. 
Studies have shown that, in the declarative system, sleep facilitates the abstraction of rules (schema formation), the integration of knowledge into existing schemas (schema integration) and creativity that requires the disbandment of existing patterns (schema disintegration). Schema formation and integration might primarily benefit from slow wave sleep, whereas the disintegration of a schema might be facilitated by rapid eye movement sleep. In the procedural system, sleep fosters the reorganisation of motor memory. The neural mechanisms of these processes remain to be determined. Notably, emotions have been shown to modulate the sleep-related reorganisation of memories. In the final section of this review, we propose that the sleep-related reorganisation of memories might be particularly relevant for mental disorders. Thus, sleep disruptions might contribute to disturbed memory reorganisation and to the development of mental disorders. Therefore, sleep-related interventions might modulate the reorganisation of memories and provide new inroads into treatment." }, { "pmid": "17500644", "title": "Sleep's function in the spontaneous recovery and consolidation of memories.", "abstract": "Building on 2 previous studies (B. R. Ekstrand, 1967; B. R. Ekstrand, M. J. Sullivan, D. F. Parker, & J. N. West, 1971), the authors present 2 experiments that were aimed at characterizing the role of retroactive interference in sleep-associated declarative memory consolidation. Using an A-B, A-C paradigm with lists of word pairs in Experiment 1, the authors showed that sleep provides recovery from retroactive interference induced at encoding, whereas no such recovery was seen in several wake control conditions. Noninterfering word-pair lists were used in Experiment 2 (A-B, C-D). Sleeping after learning, in comparison with waking after learning, enhanced retention of both lists to a similar extent when encoding was less intense because of less list repetition and briefer word-pair presentations. With intense encoding, sleep-associated improvements were not seen for either list. In combination, the results indicate that the benefit of sleep for declarative memory consolidation is greater for weaker associations, regardless of whether weak associations result from retroactive interference or poor encoding." }, { "pmid": "17488206", "title": "Changes in sleep architecture following motor learning depend on initial skill level.", "abstract": "Previous research has linked both rapid eye movement (REM) sleep and Stage 2 sleep to procedural memory consolidation. The present study sought to clarify the relationship between sleep stages and procedural memory consolidation by examining the effect of initial skill level in this relationship in young adults. In-home sleep recordings were performed on participants before and after learning the pursuit rotor task. We divided the participants into low- and high-skill groups based on their initial performance of the pursuit rotor task. In high-skill participants, there was a significant increase in Stage 2 spindle density after learning, and there was a significant correlation between the spindle density that occurred after learning and pursuit rotor performance at retest 1 week later. In contrast, there was a significant correlation between changes in REM density and performance on the pursuit rotor task during retest 1 week later in low-skill participants, although the actual increase in REM density failed to reach significance in this group. 
The results of the present study suggest the presence of a double dissociation in the sleep-related processes that are involved in procedural memory consolidation in low- and high-skill individuals. These results indicate that the changes in sleep microarchitecture that take place after learning depend on the initial skill level of the individual and therefore provide validation for the model proposed by Smith et al. [Smith, C. T., Aubrey, J. B., & Peters, K. R. Different roles for REM and Stage 2 sleep in motor learning. Psychologica Belgica, 44, 79-102, 2004]. Accordingly, skill level is an important variable that needs to be considered in future research on sleep and memory consolidation." }, { "pmid": "20035789", "title": "Sleep enhances false memories depending on general memory performance.", "abstract": "Memory is subject to dynamic changes, sometimes giving rise to the formation of false memories due to biased processes of consolidation or retrieval. Sleep is known to benefit memory consolidation through an active reorganization of representations whereas acute sleep deprivation impairs retrieval functions. Here, we investigated whether sleep after learning and sleep deprivation at retrieval enhance the generation of false memories in a free recall test. According to the Deese, Roediger, McDermott (DRM) false memory paradigm, subjects learned lists of semantically associated words (e.g., \"night\", \"dark\", \"coal\", etc.), lacking the strongest common associate or theme word (here: \"black\"). Free recall was tested after 9h following a night of sleep, a night of wakefulness (sleep deprivation) or daytime wakefulness. Compared with memory performance after a retention period of daytime wakefulness, both post-learning nocturnal sleep as well as acute sleep deprivation at retrieval significantly enhanced false recall of theme words. However, these effects were only observed in subjects with low general memory performance. These data point to two different ways in which sleep affects false memory generation through semantic generalization: one acts during consolidation on the memory trace per se, presumably by active reorganization of the trace in the post-learning sleep period. The other is related to the recovery function of sleep and affects cognitive control processes of retrieval. Both effects are unmasked when the material is relatively weakly encoded." }, { "pmid": "26856905", "title": "The Benefits of Targeted Memory Reactivation for Consolidation in Sleep are Contingent on Memory Accuracy and Direct Cue-Memory Associations.", "abstract": "STUDY OBJECTIVES\nTo investigate how the effects of targeted memory reactivation (TMR) are influenced by memory accuracy prior to sleep and the presence or absence of direct cue-memory associations.\n\n\nMETHODS\n30 participants associated each of 50 pictures with an unrelated word and then with a screen location in two separate tasks. During picture-location training, each picture was also presented with a semantically related sound. The sounds were therefore directly associated with the picture locations but indirectly associated with the words. During a subsequent nap, half of the sounds were replayed in slow wave sleep (SWS). The effect of TMR on memory for the picture locations (direct cue-memory associations) and picture-word pairs (indirect cue-memory associations) was then examined.\n\n\nRESULTS\nTMR reduced overall memory decay for recall of picture locations. 
Further analyses revealed a benefit of TMR for picture locations recalled with a low degree of accuracy prior to sleep, but not those recalled with a high degree of accuracy. The benefit of TMR for low accuracy memories was predicted by time spent in SWS. There was no benefit of TMR for memory of the picture-word pairs, irrespective of memory accuracy prior to sleep.\n\n\nCONCLUSIONS\nTMR provides the greatest benefit to memories recalled with a low degree of accuracy prior to sleep. The memory benefits of TMR may also be contingent on direct cue-memory associations." }, { "pmid": "25498222", "title": "REM sleep rescues learning from interference.", "abstract": "Classical human memory studies investigating the acquisition of temporally-linked events have found that the memories for two events will interfere with each other and cause forgetting (i.e., interference; Wixted, 2004). Importantly, sleep helps consolidate memories and protect them from subsequent interference (Ellenbogen, Hulbert, Stickgold, Dinges, & Thompson-Schill, 2006). We asked whether sleep can also repair memories that have already been damaged by interference. Using a perceptual learning paradigm, we induced interference either before or after a consolidation period. We varied brain states during consolidation by comparing active wake, quiet wake, and naps with either non-rapid eye movement sleep (NREM), or both NREM and REM sleep. When interference occurred after consolidation, sleep and wake both produced learning. However, interference prior to consolidation impaired memory, with retroactive interference showing more disruption than proactive interference. Sleep rescued learning damaged by interference. Critically, only naps that contained REM sleep were able to rescue learning that was highly disrupted by retroactive interference. Furthermore, the magnitude of rescued learning was correlated with the amount of REM sleep. We demonstrate the first evidence of a process by which the brain can rescue and consolidate memories damaged by interference, and that this process requires REM sleep. We explain these results within a theoretical model that considers how interference during encoding interacts with consolidation processes to predict which memories are retained or lost." }, { "pmid": "15576888", "title": "Sleep-dependent learning and motor-skill complexity.", "abstract": "Learning of a procedural motor-skill task is known to progress through a series of unique memory stages. Performance initially improves during training, and continues to improve, without further rehearsal, across subsequent periods of sleep. Here, we investigate how this delayed sleep-dependent learning is affected when the task characteristics are varied across several degrees of difficulty, and whether this improvement differentially enhances individual transitions of the motor-sequence pattern being learned. We report that subjects show similar overnight improvements in speed whether learning a five-element unimanual sequence (17.7% improvement), a nine-element unimanual sequence (20.2%), or a five-element bimanual sequence (17.5%), but show markedly increased overnight improvement (28.9%) with a nine-element bimanual sequence. In addition, individual transitions within the motor-sequence pattern that appeared most difficult at the end of training showed a significant 17.8% increase in speed overnight, whereas those transitions that were performed most rapidly at the end of training showed only a non-significant 1.4% improvement. 
Together, these findings suggest that the sleep-dependent learning process selectively provides maximum benefit to motor-skill procedures that proved to be most difficult prior to sleep." }, { "pmid": "19926780", "title": "Sleep enhances category learning.", "abstract": "The ability to categorize objects and events in the world around us is a fundamental and critical aspect of human learning. We trained healthy adults on a probabilistic category-learning task in two different training modes. The aim of this study was to see whether either form of probabilistic category learning (feedback or observational) undergoes subsequent enhancement during sleep. Our results suggest that after training, a good night of sleep can lead to improved performance the following day on such tasks." }, { "pmid": "23055117", "title": "Sleep on it, but only if it is difficult: effects of sleep on problem solving.", "abstract": "Previous research has shown that performance on problem solving improves over a period of sleep, as compared with wakefulness. However, these studies have not determined whether sleep is beneficial for problem solving or whether sleep merely mitigates against interference due to an interruption to solution attempts. Sleep-dependent improvements have been described in terms of spreading activation, which raises the prediction that an effect of sleep should be greater for problems requiring a broader solution search. We presented participants with a set of remote-associate tasks that varied in difficulty as a function of the strength of the stimuli-answer associations. After a period of sleep, wake, or no delay, participants reattempted previously unsolved problems. The sleep group solved a greater number of difficult problems than did the other groups, but no difference was found for easy problems. We conclude that sleep facilitates problem solving, most likely via spreading activation, but this has its primary effect for harder problems." }, { "pmid": "18274266", "title": "Enhancement of declarative memory performance following a daytime nap is contingent on strength of initial task acquisition.", "abstract": "STUDY OBJECTIVES\nIn this study we examined the benefit of a daytime nap containing only NREM sleep on the performance of three declarative memory tasks: unrelated paired associates, maze learning, and the Rey-Osterrieth complex figure. Additionally, we explored the impact of factors related to task acquisition on sleep-related memory processing. To this end, we examined whether testing of paired associates during training leads to sleep-related enhancement of memory compared to simply learning the word pairs without test. We also examined whether strength of task acquisition modulates sleep-related processing for each of the three tasks. SUBJECTS AND PROCEDURE: Subjects (11 male, 22 female) arrived at 11:30, were trained on each of the declarative memory tasks at 12:15, and at 13:00 either took a nap or remained awake in the sleep lab. After the nap period, all subjects remained in the lab until retest at 16:00.\n\n\nRESULTS\nCompared to subjects who stayed awake during the training-retest interval, subjects who took a NREM nap demonstrated enhanced performance for word pairs that were tested during training, but not for untested word pairs. 
For each of the three declarative memory tasks, we observed a sleep-dependent performance benefit only for subjects that most strongly acquired the tasks during the training session.\n\n\nCONCLUSIONS\nNREM sleep obtained during a daytime nap benefits declarative memory performance, with these benefits being intimately tied to how well subjects acquire the tasks and the way in which the information is acquired." }, { "pmid": "18391183", "title": "Sleep directly following learning benefits consolidation of spatial associative memory.", "abstract": "The last decade has brought forth convincing evidence for a role of sleep in non-declarative memory. A similar function of sleep in episodic memory is supported by various correlational studies, but direct evidence is limited. Here we show that cued recall of face-location associations is significantly higher following a 12-h retention interval containing sleep than following an equally long period of waking. Furthermore, retention is significantly higher over a 24-h sleep-wake interval than over an equally long wake-sleep interval. This difference occurs because retention during sleep was significantly better when sleep followed learning directly, rather than after a day of waking. These data demonstrate a beneficial effect of sleep on memory that cannot be explained solely as a consequence of reduced interference. Rather, our findings suggest a competitive consolidation process, in which the fate of a memory depends, at least in part, on its relative stability at sleep onset: Strong memories tend to be preserved, while weaker memories erode still further. An important aspect of memory consolidation may thus result from the removal of irrelevant memory \"debris.\"" }, { "pmid": "12061756", "title": "The effect of category learning on sensitivity to within-category correlations.", "abstract": "A salient property of many categories is that they are not just sets of independent features but consist of clusters of correlated features. Although there is much evidence that people are sensitive to between-categories correlations, the evidence about within-category correlations is mixed. Two experiments tested whether the disparities might be due to different learning and test tasks. Subjects learned about categories either by classifying items or by inferring missing features of items. Their knowledge of the correlations was measured with classification, prediction, typicality, and production tests. The inference learners, but not the classification learners, showed sensitivity to the correlations, although different tests were differentially sensitive. These results reconcile some earlier disparities and provide a more complete understanding of people's sensitivities to within-category correlations." }, { "pmid": "1519015", "title": "Reliability and factor analysis of the Epworth Sleepiness Scale.", "abstract": "The Epworth Sleepiness Scale (ESS) is a self-administered eight-item questionnaire that has been proposed as a simple method for measuring daytime sleepiness in adults. This investigation was concerned with the reliability and internal consistency of the ESS. When 87 healthy medical students were tested and retested 5 months later, their paired ESS scores did not change significantly and were highly correlated (r = 0.82). 
By contrast, ESS scores that were initially high in 54 patients suffering from obstructive sleep apnea syndrome returned to more normal levels, as expected, after 3-9 months' treatment with nasal continuous positive airway pressure. The questionnaire had a high level of internal consistency as measured by Cronbach's alpha (0.88). Factor analysis of item scores showed that the ESS had only one factor for 104 medical students and for 150 patients with various sleep disorders. The ESS is a simple and reliable method for measuring persistent daytime sleepiness in adults." }, { "pmid": "2265922", "title": "Subjective and objective sleepiness in the active individual.", "abstract": "Eight subjects were kept awake and active overnight in a sleep lab isolated from environmental time cues. Ambulatory EEG and EOG were continuously recorded and sleepiness ratings carried out every two hours as was a short EEG test session with eyes open for 5 min and closed for 2 min. The EEG was subjected to spectral analysis and the EOG was visually scored for slow rolling eye movements (SEM). Intrusions of SEM and of alpha and theta power density during waking, open-eyed activity strongly differentiated between high and low subjective sleepiness (the differentiation was poorer for closed eyes) and the mean intraindividual correlations between subjective and objective sleepiness were very high. Still, the covariation was curvilinear; physiological indices of sleepiness did not occur reliably until subjective perceptions fell between \"sleepy\" and \"extremely sleepy-fighting sleep\"; i.e. physiological changes due to sleepiness are not likely to occur until extreme sleepiness is encountered. The results support the notion that ambulatory EEG/EOG changes may be used to quantify sleepiness." }, { "pmid": "21967958", "title": "Reduced sleep spindles and spindle coherence in schizophrenia: mechanisms of impaired memory consolidation?", "abstract": "BACKGROUND\nSleep spindles are thought to induce synaptic changes and thereby contribute to memory consolidation during sleep. Patients with schizophrenia show dramatic reductions of both spindles and sleep-dependent memory consolidation, which may be causally related.\n\n\nMETHODS\nTo examine the relations of sleep spindle activity to sleep-dependent consolidation of motor procedural memory, 21 chronic, medicated schizophrenia outpatients and 17 healthy volunteers underwent polysomnography on two consecutive nights. On the second night, participants were trained on the finger-tapping motor sequence task (MST) at bedtime and tested the following morning. The number, density, frequency, duration, amplitude, spectral content, and coherence of stage 2 sleep spindles were compared between groups and examined in relation to overnight changes in MST performance.\n\n\nRESULTS\nPatients failed to show overnight improvement on the MST and differed significantly from control participants who did improve. Patients also exhibited marked reductions in the density (reduced 38% relative to control participants), number (reduced 36%), and coherence (reduced 19%) of sleep spindles but showed no abnormalities in the morphology of individual spindles or of sleep architecture. In patients, reduced spindle number and density predicted less overnight improvement on the MST. 
In addition, reduced amplitude and sigma power of individual spindles correlated with greater severity of positive symptoms.\n\n\nCONCLUSIONS\nThe observed sleep spindle abnormalities implicate thalamocortical network dysfunction in schizophrenia. In addition, the findings suggest that abnormal spindle generation impairs sleep-dependent memory consolidation in schizophrenia, contributes to positive symptoms, and is a promising novel target for the treatment of cognitive deficits in schizophrenia." }, { "pmid": "21742389", "title": "An opportunistic theory of cellular and systems consolidation.", "abstract": "Memories are often classified as hippocampus dependent or independent, and sleep has been found to facilitate both, but in different ways. In this Opinion, we explore the optimal neural state for cellular and systems consolidation of hippocampus-dependent memories that benefit from sleep. We suggest that these two kinds of consolidation, which are ordinarily treated separately, overlap in time and jointly benefit from a period of reduced interference (during which no new memories are formed). Conditions that result in reduced interference include slow wave sleep (SWS), NMDA receptor antagonists, benzodiazepines, alcohol and acetylcholine antagonists. We hypothesize that the consolidation of hippocampal-dependent memories might not depend on SWS per se. Instead, the brain opportunistically consolidates previously encoded memories whenever the hippocampus is not otherwise occupied by the task of encoding new memories." }, { "pmid": "26227582", "title": "Sleep not just protects memories against forgetting, it also makes them more accessible.", "abstract": "Two published datasets (Dumay & Gaskell, 2007, Psychological Science; Tamminen, Payne, Stickgold, Wamsley, & Gaskell, 2010, Journal of Neuroscience) showing a positive influence of sleep on declarative memory were re-analyzed, focusing on the \"fate\" of each item at the 0-h test and 12-h retest. In particular, I looked at which items were retrieved at test and \"maintained\" (i.e., not forgotten) at retest, and which items were not retrieved at test, but eventually \"gained\" at retest. This gave me separate estimates of protection against loss and memory enhancement, which the classic approach relying on net recall/recognition levels has remained blind to. In both free recall and recognition, the likelihood of maintaining an item between test and retest, like that of gaining one at retest, was higher when the retention interval was filled with nocturnal sleep, as opposed to day-time (active) wakefulness. And, in both cases, the effect of sleep was stronger on gained than maintained items. Thus, if sleep indeed protects against retroactive, unspecific interference, it also clearly promotes access to those memories initially too weak to be retrieved. These findings call for an integrated approach including both passive (cell-level) and active (systems-level) consolidation, possibly unfolding in an opportunistic fashion." }, { "pmid": "17085038", "title": "The role of sleep in declarative memory consolidation: passive, permissive, active or none?", "abstract": "Those inclined to relish in scientific controversy will not be disappointed by the literature on the effects of sleep on memory. Opinions abound. Yet refinements in the experimental study of these complex processes of sleep and memory are bringing this fascinating relationship into sharper focus. 
A longstanding position contends that sleep passively protects memories by temporarily sheltering them from interference, thus providing precious little benefit for memory. But recent evidence is unmasking a more substantial and long-lasting benefit of sleep for declarative memories. Although the precise causal mechanisms within sleep that result in memory consolidation remain elusive, recent evidence leads us to conclude that unique neurobiological processes within sleep actively enhance declarative memories." }, { "pmid": "21335017", "title": "Sleep-dependent consolidation of statistical learning.", "abstract": "The importance of sleep for memory consolidation has been firmly established over the past decade. Recent work has extended this by suggesting that sleep is also critical for the integration of disparate fragments of information into a unified schema, and for the abstraction of underlying rules. The question of which aspects of sleep play a significant role in integration and abstraction is, however, currently unresolved. Here, we examined the role of sleep in abstraction of the implicit probabilistic structure in sequential stimuli using a statistical learning paradigm, and tested for its role in such abstraction by searching for a predictive relationship between the type of sleep obtained and subsequent performance improvements using polysomnography. In our experiments, participants were exposed to a series of tones in a probabilistically determined sequential structure, and subsequently tested for recognition of novel short sequences adhering to this same statistical pattern in both immediate- and delayed-recall sessions. Participants who consolidated over a night of sleep improved significantly more than those who consolidated over an equivalent period of daytime wakefulness. Similarly, participants who consolidated across a 4-h afternoon delay containing a nap improved significantly more than those who consolidated across an equivalent period without a nap. Importantly, polysomnography revealed a significant correlation between the level of improvement and the amount of slow-wave sleep obtained. We also found evidence of a time-based consolidation process which operates alongside sleep-specific consolidation. These results demonstrate that abstraction of statistical patterns benefits from sleep, and provide the first clear support for the role of slow-wave sleep in this consolidation." }, { "pmid": "17449637", "title": "Human relational memory requires time and sleep.", "abstract": "Relational memory, the flexible ability to generalize across existing stores of information, is a fundamental property of human cognition. Little is known, however, about how and when this inferential knowledge emerges. Here, we test the hypothesis that human relational memory develops during offline time periods. Fifty-six participants initially learned five \"premise pairs\" (A>B, B>C, C>D, D>E, and E>F). Unknown to subjects, the pairs contained an embedded hierarchy (A>B>C>D>E>F). Following an offline delay of either 20 min, 12 hr (wake or sleep), or 24 hr, knowledge of the hierarchy was tested by examining inferential judgments for novel \"inference pairs\" (B>D, C>E, and B>E). 
Despite all groups achieving near-identical premise pair retention after the offline delay (all groups, >85%; the building blocks of the hierarchy), a striking dissociation was evident in the ability to make relational inference judgments: the 20-min group showed no evidence of inferential ability (52%), whereas the 12- and 24-hr groups displayed highly significant relational memory developments (inference ability of both groups, >75%; P < 0.001). Moreover, if the 12-hr period contained sleep, an additional boost to relational memory was seen for the most distant inferential judgment (the B>E pair; sleep = 93%, wake = 69%, P = 0.03). Interestingly, despite this increase in performance, the sleep benefit was not associated with an increase in subjective confidence for these judgments. Together, these findings demonstrate that human relational memory develops during offline time delays. Furthermore, sleep appears to preferentially facilitate this process by enhancing hierarchical memory binding, thereby allowing superior performance for the more distant inferential judgments, a benefit that may operate below the level of conscious awareness." }, { "pmid": "28056104", "title": "Sleep-Driven Computations in Speech Processing.", "abstract": "Acquiring language requires segmenting speech into individual words, and abstracting over those words to discover grammatical structure. However, these tasks can be conflicting-on the one hand requiring memorisation of precise sequences that occur in speech, and on the other requiring a flexible reconstruction of these sequences to determine the grammar. Here, we examine whether speech segmentation and generalisation of grammar can occur simultaneously-with the conflicting requirements for these tasks being over-come by sleep-related consolidation. After exposure to an artificial language comprising words containing non-adjacent dependencies, participants underwent periods of consolidation involving either sleep or wake. Participants who slept before testing demonstrated a sustained boost to word learning and a short-term improvement to grammatical generalisation of the non-adjacencies, with improvements after sleep outweighing gains seen after an equal period of wake. Thus, we propose that sleep may facilitate processing for these conflicting tasks in language acquisition, but with enhanced benefits for speech segmentation." }, { "pmid": "14534586", "title": "Consolidation during sleep of perceptual learning of spoken language.", "abstract": "Memory consolidation resulting from sleep has been seen broadly: in verbal list learning, spatial learning, and skill acquisition in visual and motor tasks. These tasks do not generalize across spatial locations or motor sequences, or to different stimuli in the same location. Although episodic rote learning constitutes a large part of any organism's learning, generalization is a hallmark of adaptive behaviour. In speech, the same phoneme often has different acoustic patterns depending on context. Training on a small set of words improves performance on novel words using the same phonemes but with different acoustic patterns, demonstrating perceptual generalization. Here we show a role of sleep in the consolidation of a naturalistic spoken-language learning task that produces generalization of phonological categories across different acoustic patterns. 
Recognition performance immediately after training showed a significant improvement that subsequently degraded over the span of a day's retention interval, but completely recovered following sleep. Thus, sleep facilitates the recovery and subsequent retention of material learned opportunistically at any time throughout the day. Performance recovery indicates that representations and mappings associated with generalization are refined and stabilized during sleep." }, { "pmid": "12819785", "title": "Sleep-dependent learning: a nap is as good as a night.", "abstract": "The learning of perceptual skills has been shown in some cases to depend on the plasticity of the visual cortex and to require post-training nocturnal sleep. We now report that sleep-dependent learning of a texture discrimination task can be accomplished in humans by brief (60- 90 min) naps containing both slow-wave sleep (SWS) and rapid eye movement (REM) sleep. This nap-dependent learning closely resembled that previously reported for an 8-h night of sleep in terms of magnitude, sleep-stage dependency and retinotopic specificity, and it was additive to subsequent sleep-dependent improvement, such that performance over 24 h showed as much learning as is normally seen after twice that length of time. Thus, from the perspective of behavioral improvement, a nap is as good as a night of sleep for learning on this perceptual task." }, { "pmid": "20176120", "title": "Daytime napping: Effects on human direct associative and relational memory.", "abstract": "Sleep facilitates declarative memory processing. However, we know little about whether sleep plays a role in the processing of a fundamental feature of declarative memory, relational memory - the flexible representation of items not directly learned prior to sleep. Thirty-one healthy participants first learned at 12 pm two sets of face-object photograph pairs (direct associative memory), in which the objects in each pair were common to both lists, but paired with two different faces. Participants either were given approximately 90 min to have a NREM-only daytime nap (n=14) or an equivalent waking period (n=17). At 4:30 pm, participants who napped demonstrated significantly better retention of direct associative memory, as well as better performance on a surprise task assessing their relational memory, in which participants had to associate the two faces previously paired with the same object during learning. Particularly noteworthy, relational memory performance was correlated with the amount of NREM sleep during the nap, with only slow-wave sleep predicting relational memory performance. Sleep stage data did not correlate with direct associative memory retention. These results suggest an active role for sleep in facilitating multiple processes that are not limited to the mere strengthening of rote memories, but also the binding of items that were not directly learned together, reorganizing them for flexible use at a later time." }, { "pmid": "25447376", "title": "Sleep facilitates learning a new linguistic rule.", "abstract": "Natural languages contain countless regularities. Extraction of these patterns is an essential component of language acquisition. Here we examined the hypothesis that memory processing during sleep contributes to this learning. We exposed participants to a hidden linguistic rule by presenting a large number of two-word phrases, each including a noun preceded by one of four novel words that functioned as an article (e.g., gi rhino). 
These novel words (ul, gi, ro and ne) were presented as obeying an explicit rule: two words signified that the noun referent was relatively near, and two that it was relatively far. Undisclosed to participants was the fact that the novel articles also predicted noun animacy, with two of the articles preceding animate referents and the other two preceding inanimate referents. Rule acquisition was tested implicitly using a task in which participants responded to each phrase according to whether the noun was animate or inanimate. Learning of the hidden rule was evident in slower responses to phrases that violated the rule. Responses were delayed regardless of whether rule-knowledge was consciously accessible. Brain potentials provided additional confirmation of implicit and explicit rule-knowledge. An afternoon nap was interposed between two 20-min learning sessions. Participants who obtained greater amounts of both slow-wave and rapid-eye-movement sleep showed increased sensitivity to the hidden linguistic rule in the second session. We conclude that during sleep, reactivation of linguistic information linked with the rule was instrumental for stabilizing learning. The combination of slow-wave and rapid-eye-movement sleep may synergistically facilitate the abstraction of complex patterns in linguistic input." }, { "pmid": "27481220", "title": "The influence of sleep on emotional and cognitive processing is primarily trait- (but not state-) dependent.", "abstract": "Human studies of sleep and cognition have established thatdifferent sleep stages contribute to distinct aspects of cognitive and emotional processing. However, since the majority of these findings are based on single-night studies, it is difficult to determine whether such effects arise due to individual, between-subject differences in sleep patterns, or from within-subject variations in sleep over time. In the current study, weinvestigated the longitudinal relationship between sleep patterns and cognitive performance by monitoring both in parallel, daily, for a week. Using two cognitive tasks - one assessing emotional reactivity to facial expressions and the other evaluating learning abilities in a probabilistic categorization task - we found that between-subjectdifferences in the average time spent in particular sleep stages predicted performance in these tasks far more than within-subject daily variations. Specifically, the typical time individualsspent in Rapid-Eye Movement (REM) sleep and Slow-Wave Sleep (SWS) was correlated to their characteristic measures of emotional reactivity, whereas the typical time spent in SWS and non-REM stages 1 and 2 was correlated to their success in category learning. These effects were maintained even when sleep properties werebased onbaseline measures taken prior to the experimental week. In contrast, within-subject daily variations in sleep patterns only contributed to overnight difference in one particular measure of emotional reactivity. Thus, we conclude that the effects of natural sleep onemotional cognition and categorylearning are more trait-dependent than state-dependent, and suggest ways to reconcile these results with previous findings in the literature." }, { "pmid": "20046194", "title": "The memory function of sleep.", "abstract": "Sleep has been identified as a state that optimizes the consolidation of newly acquired information in memory, depending on the specific conditions of learning and the timing of sleep. 
Consolidation during sleep promotes both quantitative and qualitative changes of memory representations. Through specific patterns of neuromodulatory activity and electric field potential oscillations, slow-wave sleep (SWS) and rapid eye movement (REM) sleep support system consolidation and synaptic consolidation, respectively. During SWS, slow oscillations, spindles and ripples - at minimum cholinergic activity - coordinate the re-activation and redistribution of hippocampus-dependent memories to neocortical sites, whereas during REM sleep, local increases in plasticity-related immediate-early gene activity - at high cholinergic and theta activity - might favour the subsequent synaptic consolidation of memories in the cortex." }, { "pmid": "23755173", "title": "Sleep promotes the extraction of grammatical rules.", "abstract": "Grammar acquisition is a high level cognitive function that requires the extraction of complex rules. While it has been proposed that offline time might benefit this type of rule extraction, this remains to be tested. Here, we addressed this question using an artificial grammar learning paradigm. During a short-term memory cover task, eighty-one human participants were exposed to letter sequences generated according to an unknown artificial grammar. Following a time delay of 15 min, 12 h (wake or sleep) or 24 h, participants classified novel test sequences as Grammatical or Non-Grammatical. Previous behavioral and functional neuroimaging work has shown that classification can be guided by two distinct underlying processes: (1) the holistic abstraction of the underlying grammar rules and (2) the detection of sequence chunks that appear at varying frequencies during exposure. Here, we show that classification performance improved after sleep. Moreover, this improvement was due to an enhancement of rule abstraction, while the effect of chunk frequency was unaltered by sleep. These findings suggest that sleep plays a critical role in extracting complex structure from separate but related items during integrative memory processing. Our findings stress the importance of alternating periods of learning with sleep in settings in which complex information must be acquired." }, { "pmid": "16913948", "title": "Naps promote abstraction in language-learning infants.", "abstract": "Infants engage in an extraordinary amount of learning during their waking hours even though much of their day is consumed by sleep. What role does sleep play in infant learning? Fifteen-month-olds were familiarized with an artificial language 4 hr prior to a lab visit. Learning the language involved relating initial and final words in auditory strings by remembering the exact word dependencies or by remembering an abstract relation between initial and final words. One group napped during the interval between familiarization and test. Another group did not nap. Infants who napped appeared to remember a more abstract relation, one they could apply to stimuli that were similar but not identical to those from familiarization. Infants who did not nap showed a memory effect. Naps appear to promote a qualitative change in memory, one involving greater flexibility in learning." }, { "pmid": "22110606", "title": "Relational memory: a daytime nap facilitates the abstraction of general concepts.", "abstract": "It is increasingly evident that sleep strengthens memory. 
However, it is not clear whether sleep promotes relational memory, resultant of the integration of disparate memory traces into memory networks linked by commonalities. The present study investigates the effect of a daytime nap, immediately after learning or after a delay, on a relational memory task that requires abstraction of general concept from separately learned items. Specifically, participants learned English meanings of Chinese characters with overlapping semantic components called radicals. They were later tested on new characters sharing the same radicals and on explicitly stating the general concepts represented by the radicals. Regardless of whether the nap occurred immediately after learning or after a delay, the nap participants performed better on both tasks. The results suggest that sleep--even as brief as a nap--facilitates the reorganization of discrete memory traces into flexible relational memory networks." }, { "pmid": "19189775", "title": "Sleep promotes generalization of extinction of conditioned fear.", "abstract": "STUDY OBJECTIVE\nTo examine the effects of sleep on fear conditioning, extinction, extinction recall, and generalization of extinction recall in healthy humans.\n\n\nDESIGN\nDuring the Conditioning phase, a mild, 0.5-sec shock followed conditioned stimuli (CS+s), which consisted of 2 differently colored lamps. A third lamp color was interspersed but never reinforced (CS-). Immediately after Conditioning, one CS+ was extinguished (CS+E) by presentation without shocks (Extinction phase). The other CS+ went unextinguished (CS+U). Twelve hours later, following continuous normal daytime waking (Wake group, N=27) or an equal interval containing a normal night's sleep (Sleep group, N=26), conditioned responses (CRs) to all CSs were measured (Extinction Recall phase). It was hypothesized that the Sleep versus Wake group would show greater extinction recall and/or generalization of extinction recall from the CS+E to the CS+U.\n\n\nSETTING\nAcademic medical center.\n\n\nSUBJECTS\nPaid normal volunteers.\n\n\nMEASUREMENTS AND RESULTS\nSquare-root transformed skin conductance response (SCR) measured conditioned responding. During Extinction Recall, the Group (Wake or Sleep) x CS+ Type (CS+E or CS+U) interaction was significant (P = 0.04). SCRs to the CS+E did not differ between groups, whereas SCRs to the CS+U were significantly smaller in the Sleep group. Additionally, SCRs were significantly larger to the CS+U than CS+E in the Wake but not the Sleep group.\n\n\nCONCLUSIONS\nAfter sleep, extinction memory generalized from an extinguished conditioned stimulus to a similarly conditioned but unextinguished stimulus. Clinically, adequate sleep may promote generalization of extinction memory from specific stimuli treated during exposure therapy to similar stimuli later encountered in vivo." }, { "pmid": "25174663", "title": "Time- but not sleep-dependent consolidation promotes the emergence of cross-modal conceptual representations.", "abstract": "Conceptual knowledge about objects comprises a diverse set of multi-modal and generalisable information, which allows us to bring meaning to the stimuli in our environment. The formation of conceptual representations requires two key computational challenges: integrating information from different sensory modalities and abstracting statistical regularities across exemplars. 
Although these processes are thought to be facilitated by offline memory consolidation, investigations into how cross-modal concepts evolve offline, over time, rather than with continuous category exposure are still missing. Here, we aimed to mimic the formation of new conceptual representations by reducing this process to its two key computational challenges and exploring its evolution over an offline retention period. Participants learned to distinguish between members of two abstract categories based on a simple one-dimensional visual rule. Underlying the task was a more complex hidden indicator of category structure, which required the integration of information across two sensory modalities. In two experiments we investigated the impact of time- and sleep-dependent consolidation on category learning. Our results show that offline memory consolidation facilitated cross-modal category learning. Surprisingly, consolidation across wake, but not across sleep showed this beneficial effect. By demonstrating the importance of offline consolidation the current study provided further insights into the processes that underlie the formation of conceptual representations." }, { "pmid": "23962141", "title": "Wakefulness (not sleep) promotes generalization of word learning in 2.5-year-old children.", "abstract": "Sleep enhances generalization in adults, but this has not been examined in toddlers. This study examined the impact of napping versus wakefulness on the generalization of word learning in toddlers when the contextual background changes during learning. Thirty 2.5-year-old children (M = 32.94, SE = 0.46) learned labels for novel categories of objects, presented on different contextual backgrounds, and were tested on their ability to generalize the labels to new exemplars after a 4-hr delay with or without a nap. The results demonstrated that only children who did not nap were able to generalize learning. These findings have critical implications for the functions of sleep versus wakefulness in generalization, implicating a role for forgetting during wakefulness in generalization." }, { "pmid": "28211489", "title": "Sleep Supports the Slow Abstraction of Gist from Visual Perceptual Memories.", "abstract": "Sleep benefits the consolidation of individual episodic memories. In the long run, however, it may be more efficient to retain the abstract gist of single, related memories, which can be generalized to similar instances in the future. While episodic memory is enhanced after one night of sleep, effective gist abstraction is thought to require multiple nights. We tested this hypothesis using a visual Deese-Roediger-McDermott paradigm, examining gist abstraction and episodic-like memory consolidation after 20 min, after 10 hours, as well as after one year of retention. While after 10 hours, sleep enhanced episodic-like memory for single items, it did not affect gist abstraction. One year later, however, we found significant gist knowledge only if subjects had slept immediately after encoding, while there was no residual memory for individual items. These findings indicate that sleep after learning strengthens episodic-like memories in the short term and facilitates long-term gist abstraction." }, { "pmid": "27046022", "title": "Does Sleep Improve Your Grammar? Preferential Consolidation of Arbitrary Components of New Linguistic Knowledge.", "abstract": "We examined the role of sleep-related memory consolidation processes in learning new form-meaning mappings. 
Specifically, we examined a Complementary Learning Systems account, which implies that sleep-related consolidation should be more beneficial for new hippocampally dependent arbitrary mappings (e.g. new vocabulary items) relative to new systematic mappings (e.g. grammatical regularities), which can be better encoded neocortically. The hypothesis was tested using a novel language with an artificial grammatical gender system. Stem-referent mappings implemented arbitrary aspects of the new language, and determiner/suffix+natural gender mappings implemented systematic aspects (e.g. tib scoiffesh + ballerina, tib mofeem + bride; ked jorool + cowboy, ked heefaff + priest). Importantly, the determiner-gender and the suffix-gender mappings varied in complexity and salience, thus providing a range of opportunities to detect beneficial effects of sleep for this type of mapping. Participants were trained on the new language using a word-picture matching task, and were tested after a 2-hour delay which included sleep or wakefulness. Participants in the sleep group outperformed participants in the wake group on tests assessing memory for the arbitrary aspects of the new mappings (individual vocabulary items), whereas we saw no evidence of a sleep benefit in any of the tests assessing memory for the systematic aspects of the new mappings: Participants in both groups extracted the salient determiner-natural gender mapping, but not the more complex suffix-natural gender mapping. The data support the predictions of the complementary systems account and highlight the importance of the arbitrariness/systematicity dimension in the consolidation process for declarative memories." }, { "pmid": "24068804", "title": "The role of sleep spindles and slow-wave activity in integrating new information in semantic memory.", "abstract": "Assimilating new information into existing knowledge is a fundamental part of consolidating new memories and allowing them to guide behavior optimally and is vital for conceptual knowledge (semantic memory), which is accrued over many years. Sleep is important for memory consolidation, but its impact upon assimilation of new information into existing semantic knowledge has received minimal examination. Here, we examined the integration process by training human participants on novel words with meanings that fell into densely or sparsely populated areas of semantic memory in two separate sessions. Overnight sleep was polysomnographically monitored after each training session and recall was tested immediately after training, after a night of sleep, and 1 week later. Results showed that participants learned equal numbers of both word types, thus equating amount and difficulty of learning across the conditions. Measures of word recognition speed showed a disadvantage for novel words in dense semantic neighborhoods, presumably due to interference from many semantically related concepts, suggesting that the novel words had been successfully integrated into semantic memory. Most critically, semantic neighborhood density influenced sleep architecture, with participants exhibiting more sleep spindles and slow-wave activity after learning the sparse compared with the dense neighborhood words. These findings provide the first evidence that spindles and slow-wave activity mediate integration of new information into existing semantic networks." 
}, { "pmid": "28288833", "title": "The impact of sleep on novel concept learning.", "abstract": "Prior research demonstrates that sleep benefits memory consolidation. But beyond its role in memory retention, sleep may also facilitate the reorganization and flexible use of new information. In the present study, we investigated the effect of sleep on conceptual knowledge. Participants classified abstract dot patterns into novel categories, and were later tested on both previously seen dot patterns as well as on new patterns. A Wake group (n=17) trained at 9AM, continued with their daily activities, and then tested at 9PM that evening. A Sleep group (n=20) trained at 9PM, went home to sleep, and was tested the following morning at 9AM. Two Immediate Test control groups completed testing immediately following training in either the morning (n=18) or evening (n=18). Post-training sleep led to superior classification of all stimulus types, including the specific exemplars learned during training, novel patterns that had not previously been seen, and \"prototype\" patterns from which the exemplars were derived. However, performance did not improve significantly above baseline after a night of sleep. Instead, sleep appeared to maintain performance, relative to a performance decline across a day of wakefulness. There was additionally evidence of a time of day effect on performance. Together with prior observations, these data support the notion that sleep may be involved in an important process whereby we extract commonalities from our experiences to construct useful mental models of the world around us." }, { "pmid": "25129237", "title": "Generalization from episodic memories across time: a route for semantic knowledge acquisition.", "abstract": "The storage of input regularities, at all levels of processing complexity, is a fundamental property of the nervous system. At high levels of complexity, this may involve the extraction of associative regularities between higher order entities such as objects, concepts and environments across events that are separated in space and time. We propose that such a mechanism provides an important route towards the formation of higher order semantic knowledge. The present study assessed whether subjects were able to extract complex regularities from multiple associative memories and whether they could generalize this regularity knowledge to new items. We used a memory task in which subjects were required to learn face-location associations, but in which certain facial features were predictive of locations. We assessed generalization, as well as memory for arbitrary stimulus components, over a 4-h post-encoding consolidation period containing wakefulness or sleep. We also assessed the stability of regularity knowledge across a period of several weeks thereafter. We found that subjects were able to detect the regularity structure and use it in a generalization task. Interestingly, the performance on this task increased across the 4hr post-learning period. However, no differential effects of cerebral sleep and wake states during this interval were observed. Furthermore, it was found that regularity extraction hampered the storage of arbitrary facial features, resulting in an impoverished memory trace. Finally, across a period of several weeks, memory for the regularity structure appeared very robust whereas memory for arbitrary associations showed steep forgetting. 
The current findings improve our understanding of how regularities across memories impact memory (trans)formation." }, { "pmid": "27881854", "title": "The neural and computational bases of semantic cognition.", "abstract": "Semantic cognition refers to our ability to use, manipulate and generalize knowledge that is acquired over the lifespan to support innumerable verbal and non-verbal behaviours. This Review summarizes key findings and issues arising from a decade of research into the neurocognitive and neurocomputational underpinnings of this ability, leading to a new framework that we term controlled semantic cognition (CSC). CSC offers solutions to long-standing queries in philosophy and cognitive science, and yields a convergent framework for understanding the neural and computational bases of healthy semantic cognition and its dysfunction in brain disorders." }, { "pmid": "22998869", "title": "REM sleep reorganizes hippocampal excitability.", "abstract": "Sleep is composed of an alternating sequence of REM and non-REM episodes, but their respective roles are not known. We found that the overall firing rates of hippocampal CA1 neurons decreased across sleep concurrent with an increase in the recruitment of neuronal spiking to brief \"ripple\" episodes, resulting in a net increase in neural synchrony. Unexpectedly, within non-REM episodes, overall firing rates gradually increased together with a decrease in the recruitment of spiking to ripples. The rate increase within non-REM episodes was counteracted by a larger and more rapid decrease of discharge frequency within the interleaved REM episodes. Both the decrease in firing rates and the increase in synchrony during the course of sleep were correlated with the power of theta activity during REM episodes. These findings assign a prominent role of REM sleep in sleep-related neuronal plasticity." }, { "pmid": "15470431", "title": "Role for a cortical input to hippocampal area CA1 in the consolidation of a long-term memory.", "abstract": "A dialogue between the hippocampus and the neocortex is thought to underlie the formation, consolidation and retrieval of episodic memories, although the nature of this cortico-hippocampal communication is poorly understood. Using selective electrolytic lesions in rats, here we examined the role of the direct entorhinal projection (temporoammonic, TA) to the hippocampal area CA1 in short-term (24 hours) and long-term (four weeks) spatial memory in the Morris water maze. When short-term memory was examined, both sham- and TA-lesioned animals showed a significant preference for the target quadrant. When re-tested four weeks later, sham-lesioned animals exhibited long-term memory; in contrast, the TA-lesioned animals no longer showed target quadrant preference. Many long-lasting memories require a process called consolidation, which involves the exchange of information between the cortex and hippocampus. The disruption of long-term memory by the TA lesion could reflect a requirement for TA input during either the acquisition or consolidation of long-term memory. To distinguish between these possibilities, we trained animals, verified their spatial memory 24 hours later, and then subjected trained animals to TA lesions. TA-lesioned animals still exhibited a deficit in long-term memory, indicating a disruption of consolidation. 
Animals in which the TA lesion was delayed by three weeks, however, showed a significant preference for the target quadrant, indicating that the memory had already been adequately consolidated at the time of the delayed lesion. These results indicate that, after learning, ongoing cortical input conveyed by the TA path is required to consolidate long-term spatial memory." } ]
Scientific Reports
29097661
PMC5668259
10.1038/s41598-017-14411-y
GRAFENE: Graphlet-based alignment-free network approach integrates 3D structural and sequence (residue order) data to improve protein structural comparison
Initial protein structural comparisons were sequence-based. Since amino acids that are distant in the sequence can be close in the 3-dimensional (3D) structure, 3D contact approaches can complement sequence approaches. Traditional 3D contact approaches study 3D structures directly and are alignment-based. Instead, 3D structures can be modeled as protein structure networks (PSNs). Then, network approaches can compare proteins by comparing their PSNs. These can be alignment-based or alignment-free. We focus on the latter. Existing network alignment-free approaches have drawbacks: 1) They rely on naive measures of network topology. 2) They are not robust to PSN size. They cannot integrate 3) multiple PSN measures or 4) PSN data with sequence data, although this could improve comparison because the different data types capture complementary aspects of the protein structure. We address this by: 1) exploiting well-established graphlet measures via a new network alignment-free approach, 2) introducing normalized graphlet measures to remove the bias of PSN size, 3) allowing for integrating multiple PSN measures, and 4) using ordered graphlets to combine the complementary PSN data and sequence (specifically, residue order) data. We compare synthetic networks and real-world PSNs more accurately and faster than existing network (alignment-free and alignment-based), 3D contact, or sequence approaches.
Motivation and related work

Proteins perform important cellular functions. While understanding protein function is clearly important, doing so experimentally is expensive and time-consuming1,2. Because of this, the functions of many proteins remain unknown2,3. Consequently, computational prediction of protein function has received attention. In this context, protein structural comparison (PC) aims to quantify similarity between proteins with respect to their sequence or 3-dimensional (3D) structural patterns. Then, functions of unannotated proteins can be predicted based on functions of similar, annotated proteins. By “function”, we mean traditional notions of protein function, such as its biological process, molecular function, or cellular localization4, or any protein characteristic (e.g., length, hydrophobicity/hydrophilicity, or folding rate), as long as the given characteristic is expected to correlate well with the protein structure. In this study, we propose a new PC approach, which we evaluate in an established way: by measuring how accurately it captures expected (dis)similarities between known groups of structurally (dis)similar proteins5, such as protein structural classes from Class, Architecture, Topology, Homology (CATH)6,7, or Structural Classification of Proteins (SCOP)8. Application of our proposed PC approach to protein function prediction is out of the scope of the current study and is the subject of future work.

Early PC has relied on sequence analyses9–11. Due to advancements of high-throughput sequencing technologies, rich sequence data are available for many species, and thus, comprehensive sequence pattern searches are possible. However, amino acids that are distant in the linear sequence can be close in 3D structure. Thus, 3D structural analyses can reveal patterns that might not be apparent from the sequence alone12. For example, while high sequence similarity between proteins typically indicates their high structural and functional similarity3, proteins with low sequence similarity can still be structurally similar and perform similar function13,14. In this case, 3D structural approaches, unlike sequence approaches, can correctly identify structurally and thus functionally similar proteins. On the other extreme, proteins with high sequence similarity can be structurally dissimilar and perform different functions15–19. In this case, 3D structural approaches, unlike sequence approaches, can correctly identify structurally and thus functionally different proteins.

3D structural approaches can be categorized into traditional 3D contact approaches, which are alignment-based, and network approaches, which can be alignment-based or alignment-free. By alignment-based (3D contact or network) approaches, we mean approaches whose main goal is to map amino acid residues between the compared proteins in a way that conserves the maximum amount of common substructure. In the process, alignment-based approaches can and typically do quantify similarity between the compared protein structures, and they do so under their resulting residue mappings. Given this, alignment-based approaches can and have been used in the task of PC as we define it5,20, even though they are not necessarily directly designed for this task.
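To make the evaluation protocol described above concrete, the following is a minimal, illustrative sketch (not taken from the paper) of one common way to score a PC approach against known structural classes such as CATH or SCOP: given precomputed pairwise similarity scores and a class label per protein, pairs from the same class should score higher than pairs from different classes, and AUROC summarizes how well the two groups are separated. The function name class_separation_auroc, the toy protein identifiers and scores, and the use of scikit-learn are assumptions made purely for illustration.

```python
# Illustrative evaluation sketch, assuming pairwise similarities and class labels are given.
from itertools import combinations
from sklearn.metrics import roc_auc_score

def class_separation_auroc(similarity, labels):
    """similarity: dict mapping frozenset({p, q}) -> score; labels: dict protein -> class."""
    y_true, y_score = [], []
    for p, q in combinations(labels, 2):
        y_true.append(1 if labels[p] == labels[q] else 0)   # same structural class?
        y_score.append(similarity[frozenset((p, q))])        # similarity assigned by the PC approach
    return roc_auc_score(y_true, y_score)

# Toy usage with made-up proteins and scores.
labels = {"d1abcA": "alpha", "d2defB": "alpha", "d3ghiC": "beta"}
similarity = {
    frozenset(("d1abcA", "d2defB")): 0.9,  # same class, high similarity
    frozenset(("d1abcA", "d3ghiC")): 0.2,
    frozenset(("d2defB", "d3ghiC")): 0.3,
}
print(class_separation_auroc(similarity, labels))  # 1.0 for this toy example
```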
On the other hand, by alignment-free (network) approaches, we mean approaches whose main goal is to quantify similarity between the compared protein structures independent of any residue mapping, typically by extracting from each structure some network patterns (also called network properties, network features, network fingerprints, or measures of network topology) and comparing the patterns between the structures. Alignment-free approaches are directly designed for the task of PC as we define it. We note that there exist approaches that are alignment-free but not network-based21, which are out of the scope of our study. Below, we discuss 3D contact alignment-based PC approaches, followed by network alignment-based PC approaches, followed by network alignment-free PC approaches.

3D contact alignment-based PC approaches are typically rigid-body approaches22,23, meaning that they treat proteins as rigid objects. Such approaches aim to identify alignments that satisfy two objectives: 1) they maximize the number of mapped residues and 2) they minimize deviations between the mapped structures (with respect to, e.g., Root Mean Square Deviation). Different rigid-body approaches mainly differ in how they combine these two objectives. There exist many approaches of this type5,24–26. Prominent ones are DaliLite27 and TM-align28. These two approaches have explicitly been used and evaluated in the task of PC as we define it5,20, and are thus directly relevant for our study.

Network alignment-based PC approaches are typically flexible alignment methods, meaning that they treat proteins as flexible (rather than rigid) objects, because proteins can undergo large conformational changes. These approaches align local protein regions in a rigid-body manner but account for flexibility by allowing for twists between the locally aligned regions5,29. Also, these approaches are typically of the Contact Map Overlap (CMO) type. That is, first, they represent a 3D protein structure consisting of n residues as a contact map, i.e., an n × n matrix C, where position Cij has a value of 1 if residues i and j are close enough and are thus in contact, and it has a value of 0 otherwise. Note that contact maps are equivalent to protein structure networks (PSNs), in which nodes are residues and edges link spatially close amino acids30. Second, CMO approaches aim to compare the contact maps of two proteins, in order to align a subset of residues in one protein to a subset of residues in another protein in a way that maximizes the number of common contacts and also conserves the order of the aligned residues31. Prominent CMO approaches are Apurva, MSVNS, AlEigen7, and GR-Align5. When evaluated in the task of PC as we define it, i.e., when used to compare proteins labeled with structural classes of CATH or SCOP, GR-Align outperformed Apurva, MSVNS, and AlEigen7 in terms of both accuracy and running time5. So, we consider GR-Align to be the state-of-the-art CMO (i.e., network alignment-based) approach. In addition to these network alignment-based approaches, GR-Align was evaluated in the same manner against the existing 3D contact alignment-based approaches (DaliLite and TM-Align mentioned above, as well as three additional approaches: MATT, Yakusa, and FAST)5. In terms of running time, GR-Align was the fastest. In terms of accuracy, GR-Align was superior to MATT, Yakusa, and FAST, but it was inferior or comparable to DaliLite and TM-Align.
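To make the CMO objective described above concrete, the following toy Python sketch evaluates one candidate residue mapping between two made-up contact sets: it checks that the mapping conserves residue order and counts how many contacts of one protein are mapped onto contacts of the other. The data and names are hypothetical, and actual CMO solvers such as those cited above search over the space of such mappings rather than scoring a single given one.

def is_order_preserving(mapping):
    # The CMO formulation requires that the residue mapping conserves the
    # order of the aligned residues along the two chains.
    pairs = sorted(mapping.items())
    mapped = [b for _, b in pairs]
    return all(x < y for x, y in zip(mapped, mapped[1:]))

def cmo_score(contacts_a, contacts_b, mapping):
    # Count the common contacts under a given mapping: a contact (i, j) of
    # protein A is "common" if both of its residues are aligned and their
    # images form a contact in protein B.
    score = 0
    for i, j in contacts_a:
        if i in mapping and j in mapping:
            a, b = mapping[i], mapping[j]
            if (min(a, b), max(a, b)) in contacts_b:
                score += 1
    return score

# Two tiny, made-up contact sets (pairs (i, j) with i < j) and one alignment
# that leaves residue 2 of protein A unaligned.
contacts_a = {(0, 1), (1, 2), (0, 3)}
contacts_b = {(0, 1), (0, 3), (2, 3)}
alignment = {0: 0, 1: 1, 3: 3}
print(is_order_preserving(alignment))                # True
print(cmo_score(contacts_a, contacts_b, alignment))  # 2 common contacts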
So, while GR-Align remains the state-of-the-art network alignment-based PC approach, DaliLite and TM-Align remain the state-of-the-art 3D contact alignment-based PC approaches, and we continue to consider all three in our study.

Network alignment-free approaches also deal with PSN representations of the compared proteins, but they aim to quantify protein similarity without accounting for any residue mapping. We propose a novel network alignment-free PC approach (see below). We first compare our approach to its most direct competitors, i.e., existing alignment-free approaches. Then, we compare our approach to existing alignment-based approaches. We recognize that evaluation of alignment-free against alignment-based approaches should be taken with caution32,33 (because the two comparison types quantify protein similarity differently; see above). Yet, as we show later in our evaluation, existing alignment-based PC approaches are superior to existing alignment-free PC approaches and are thus our strongest (though not necessarily fairest) competitors.

Next, we discuss existing network alignment-free PC approaches. Such approaches have already been developed14,34 to compare network topological patterns within a protein or across proteins, for example to study differences in network properties between transmembrane and globular proteins, to analyze the packing topology of structurally important residues in membrane proteins, or to refine homology models of transmembrane proteins35–37. Existing network alignment-free PC approaches, however, have the following limitations:

1) They rely on naive measures of network topology, such as the average degree, the average or maximum distance (diameter), or the average clustering coefficient of a network, which capture the global view of a network but ignore the complex local interconnectivities that exist in real-world networks, including PSNs38–40.

2) They can bias PC by PSN size: networks of similar topology but different sizes can be mistakenly identified as dissimilar by the existing approaches simply because of their size differences.

3) Because different network measures quantify the same PSN topology from different perspectives41, and because each existing approach uses a single measure, PC could be biased towards the perspective captured by the given measure.

4) They ignore valuable sequence information (conversely, the existing sequence approaches ignore valuable PSN information).
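To ground the contact-map/PSN representation and the naive global measures named in limitation 1), here is a minimal Python sketch that builds a PSN from C-alpha coordinates and computes those statistics with networkx. The 7.5 Å C-alpha cutoff and the random toy coordinates are illustrative assumptions rather than the exact protocol of any cited approach.

import numpy as np
import networkx as nx

def build_psn(ca_coords, cutoff=7.5):
    # Build a protein structure network (equivalently, a contact map) from
    # C-alpha coordinates: nodes are residues, and residues i and j are linked
    # (Cij = 1) if their C-alpha atoms lie within `cutoff` angstroms. The
    # cutoff value and the use of C-alpha atoms are illustrative assumptions.
    coords = np.asarray(ca_coords, dtype=float)
    n = len(coords)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if dists[i, j] <= cutoff:
                G.add_edge(i, j)
    return G

def naive_global_measures(G):
    # The coarse global statistics named in limitation 1): average degree,
    # diameter (maximum shortest-path distance), and average clustering coefficient.
    degrees = [d for _, d in G.degree()]
    return {
        "avg_degree": sum(degrees) / G.number_of_nodes(),
        "diameter": nx.diameter(G) if nx.is_connected(G) else float("inf"),
        "avg_clustering": nx.average_clustering(G),
    }

# Toy usage on random coordinates standing in for a small protein chain.
rng = np.random.default_rng(0)
toy_coords = rng.uniform(0.0, 30.0, size=(60, 3))
psn = build_psn(toy_coords)
print(naive_global_measures(psn))

Summarizing a protein by only the three numbers returned by naive_global_measures is exactly the kind of coarse, size-sensitive description that the limitations above criticize.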
[ "10802651", "18037900", "25428369", "24443377", "25348408", "9847200", "7723011", "18811946", "15987894", "18506576", "18004789", "22817892", "18923675", "24392935", "8377180", "8506262", "24629187", "19481444", "8819165", "20457744", "15849316", "14534198", "19557139", "21210730", "25810431", "28073757", "18178566", "19054790", "24554629", "15284103", "17237089", "24594900", "26072480", "24686408", "15288760", "17953741", "19664241", "10592235", "23140471", "19259413", "21887225" ]
[ { "pmid": "18037900", "title": "Predicting protein function from sequence and structure.", "abstract": "While the number of sequenced genomes continues to grow, experimentally verified functional annotation of whole genomes remains patchy. Structural genomics projects are yielding many protein structures that have unknown function. Nevertheless, subsequent experimental investigation is costly and time-consuming, which makes computational methods for predicting protein function very attractive. There is an increasing number of noteworthy methods for predicting protein function from sequence and structural data alone, many of which are readily available to cell biologists who are aware of the strengths and pitfalls of each available technique." }, { "pmid": "25428369", "title": "Gene Ontology Consortium: going forward.", "abstract": "The Gene Ontology (GO; http://www.geneontology.org) is a community-based bioinformatics resource that supplies information about gene product function using ontologies to represent biological knowledge. Here we describe improvements and expansions to several branches of the ontology, as well as updates that have allowed us to more efficiently disseminate the GO and capture feedback from the research community. The Gene Ontology Consortium (GOC) has expanded areas of the ontology such as cilia-related terms, cell-cycle terms and multicellular organism processes. We have also implemented new tools for generating ontology terms based on a set of logical rules making use of templates, and we have made efforts to increase our use of logical definitions. The GOC has a new and improved web site summarizing new developments and documentation, serving as a portal to GO data. Users can perform GO enrichment analysis, and search the GO for terms, annotations to gene products, and associated metadata across multiple species using the all-new AmiGO 2 browser. We encourage and welcome the input of the research community in all biological areas in our continued effort to improve the Gene Ontology." }, { "pmid": "24443377", "title": "GR-Align: fast and flexible alignment of protein 3D structures using graphlet degree similarity.", "abstract": "MOTIVATION\nProtein structure alignment is key for transferring information from well-studied proteins to less studied ones. Structural alignment identifies the most precise mapping of equivalent residues, as structures are more conserved during evolution than sequences. Among the methods for aligning protein structures, maximum Contact Map Overlap (CMO) has received sustained attention during the past decade. Yet, known algorithms exhibit modest performance and are not applicable for large-scale comparison.\n\n\nRESULTS\nGraphlets are small induced subgraphs that are used to design sensitive topological similarity measures between nodes and networks. By generalizing graphlets to ordered graphs, we introduce GR-Align, a CMO heuristic that is suited for database searches. On the Proteus_300 set (44 850 protein domain pairs), GR-Align is several orders of magnitude faster than the state-of-the-art CMO solvers Apurva, MSVNS and AlEigen7, and its similarity score is in better agreement with the structural classification of proteins. On a large-scale experiment on the Gold-standard benchmark dataset (3 207 270 protein domain pairs), GR-Align is several orders of magnitude faster than the state-of-the-art protein structure comparison tools TM-Align, DaliLite, MATT and Yakusa, while achieving similar classification performances. 
Finally, we illustrate the difference between GR-Align's flexible alignments and the traditional ones by querying a flexible protein in the Astral-40 database (11 154 protein domains). In this experiment, GR-Align's top scoring alignments are not only in better agreement with structural classification of proteins, but also that they allow transferring more information across proteins." }, { "pmid": "25348408", "title": "CATH: comprehensive structural and functional annotations for genome sequences.", "abstract": "The latest version of the CATH-Gene3D protein structure classification database (4.0, http://www.cathdb.info) provides annotations for over 235,000 protein domain structures and includes 25 million domain predictions. This article provides an update on the major developments in the 2 years since the last publication in this journal including: significant improvements to the predictive power of our functional families (FunFams); the release of our 'current' putative domain assignments (CATH-B); a new, strictly non-redundant data set of CATH domains suitable for homology benchmarking experiments (CATH-40) and a number of improvements to the web pages." }, { "pmid": "9847200", "title": "The CATH Database provides insights into protein structure/function relationships.", "abstract": "We report the latest release (version 1.4) of the CATH protein domains database (http://www.biochem.ucl.ac.uk/bsm/cath). This is a hierarchical classification of 13 359 protein domain structures into evolutionary families and structural groupings. We currently identify 827 homologous families in which the proteins have both structual similarity and sequence and/or functional similarity. These can be further clustered into 593 fold groups and 32 distinct architectures. Using our structural classification and associated data on protein functions, stored in the database (EC identifiers, SWISS-PROT keywords and information from the Enzyme database and literature) we have been able to analyse the correlation between the 3D structure and function. More than 96% of folds in the PDB are associated with a single homologous family. However, within the superfolds, three or more different functions are observed. Considering enzyme functions, more than 95% of clearly homologous families exhibit either single or closely related functions, as demonstrated by the EC identifiers of their relatives. Our analysis supports the view that determining structures, for example as part of a 'structural genomics' initiative, will make a major contribution to interpreting genome data." }, { "pmid": "7723011", "title": "SCOP: a structural classification of proteins database for the investigation of sequences and structures.", "abstract": "To facilitate understanding of, and access to, the information available for protein structures, we have constructed the Structural Classification of Proteins (scop) database. This database provides a detailed and comprehensive description of the structural and evolutionary relationships of the proteins of known structure. It also provides for each entry links to co-ordinates, images of the structure, interactive viewers, sequence data and literature references. Two search facilities are available. The homology search permits users to enter a sequence and obtain a list of any structures to which it has significant levels of sequence similarity. 
The key word search finds, for a word entered by the user, matches from both the text of the scop database and the headers of Brookhaven Protein Databank structure files. The database is freely accessible on World Wide Web (WWW) with an entry point to URL http://scop.mrc-lmb.cam.ac.uk/scop/." }, { "pmid": "18811946", "title": "Comparison study on k-word statistical measures for protein: from sequence to 'sequence space'.", "abstract": "BACKGROUND\nMany proposed statistical measures can efficiently compare protein sequence to further infer protein structure, function and evolutionary information. They share the same idea of using k-word frequencies of protein sequences. Given a protein sequence, the information on its related protein sequences hasn't been used for protein sequence comparison until now. This paper proposed a scheme to construct protein 'sequence space' which was associated with protein sequences related to the given protein, and the performances of statistical measures were compared when they explored the information on protein 'sequence space' or not. This paper also presented two statistical measures for protein: gre.k (generalized relative entropy) and gsm.k (gapped similarity measure).\n\n\nRESULTS\nWe tested statistical measures based on protein 'sequence space' or not with three data sets. This not only offers the systematic and quantitative experimental assessment of these statistical measures, but also naturally complements the available comparison of statistical measures based on protein sequence. Moreover, we compared our statistical measures with alignment-based measures and the existing statistical measures. The experiments were grouped into two sets. The first one, performed via ROC (Receiver Operating Curve) analysis, aims at assessing the intrinsic ability of the statistical measures to discriminate and classify protein sequences. The second set of the experiments aims at assessing how well our measure does in phylogenetic analysis. Based on the experiments, several conclusions can be drawn and, from them, novel valuable guidelines for the use of protein 'sequence space' and statistical measures were obtained.\n\n\nCONCLUSION\nAlignment-based measures have a clear advantage when the data is high redundant. The more efficient statistical measure is the novel gsm.k introduced by this article, the cos.k followed. When the data becomes less redundant, gre.k proposed by us achieves a better performance, but all the other measures perform poorly on classification tasks. Almost all the statistical measures achieve improvement by exploring the information on 'sequence space' as word's length increases, especially for less redundant data. The reasonable results of phylogenetic analysis confirm that Gdis.k based on 'sequence space' is a reliable measure for phylogenetic analysis. In summary, our quantitative analysis verifies that exploring the information on 'sequence space' is a promising way to improve the abilities of statistical measures for protein comparison." }, { "pmid": "15987894", "title": "The effect of long-range interactions on the secondary structure formation of proteins.", "abstract": "The influence of long-range residue interactions on defining secondary structure in a protein has long been discussed and is often cited as the current limitation to accurate secondary structure prediction. 
There are several experimental examples where a local sequence alone is not sufficient to determine its secondary structure, but a comprehensive survey on a large data set has not yet been done. Interestingly, some earlier studies denied the negative effect of long-range interactions on secondary structure prediction accuracy. Here, we have introduced the residue contact order (RCO), which directly indicates the separation of contacting residues in terms of the position in the sequence, and examined the relationship between the RCO and the prediction accuracy. A large data set of 2777 nonhomologous proteins was used in our analysis. Unlike previous studies, we do find that prediction accuracy drops as residues have contacts with more distant residues. Moreover, this negative correlation between the RCO and the prediction accuracy was found not only for beta-strands, but also for alpha-helices. The prediction accuracy of beta-strands is lower if residues have a high RCO or a low RCO, which corresponds to the situation that a beta-sheet is formed by beta-strands from different chains in a protein complex. The reason why the current study draws the opposite conclusion from the previous studies is examined. The implication for protein folding is also discussed." }, { "pmid": "18506576", "title": "Conserved network properties of helical membrane protein structures and its implication for improving membrane protein homology modeling at the twilight zone.", "abstract": "Homology modeling techniques remain an important tool for membrane protein studies and membrane protein-targeted drug development. Due to the paucity of available structure data, an imminent challenge in this field is to develop novel computational methods to help improve the quality of the homology models constructed using template proteins with low sequence identity. In this work, we attempted to address this challenge using the network approach developed in our group. First, a structure pair dataset of 27 high-resolution and low sequence identity (7-36%) comparative TM proteins was compiled by analyzing available X-ray structures of helical membrane proteins. Structure deviation between these pairs was subsequently confirmed by calculating their backbone RMSD and comparing their potential energy per residue. Next, this dataset was further studied using the network approach. Results of these analyses indicated that the network measure applied represents a conserved feature of TM domains of similar folds with various sequence identities. Further comparison of this salient feature between high-resolution template structures and their homology models at the twilight zone suggested a useful method to utilize this property for homology model refinement. These findings should be of help for improving the quality of homology models based on templates with low sequence identity, thus broadening the application of homology modeling techniques in TM protein studies." }, { "pmid": "18004789", "title": "Sequence-similar, structure-dissimilar protein pairs in the PDB.", "abstract": "It is often assumed that in the Protein Data Bank (PDB), two proteins with similar sequences will also have similar structures. Accordingly, it has proved useful to develop subsets of the PDB from which \"redundant\" structures have been removed, based on a sequence-based criterion for similarity. 
Similarly, when predicting protein structure using homology modeling, if a template structure for modeling a target sequence is selected by sequence alone, this implicitly assumes that all sequence-similar templates are equivalent. Here, we show that this assumption is often not correct and that standard approaches to create subsets of the PDB can lead to the loss of structurally and functionally important information. We have carried out sequence-based structural superpositions and geometry-based structural alignments of a large number of protein pairs to determine the extent to which sequence similarity ensures structural similarity. We find many examples where two proteins that are similar in sequence have structures that differ significantly from one another. The source of the structural differences usually has a functional basis. The number of such proteins pairs that are identified and the magnitude of the dissimilarity depend on the approach that is used to calculate the differences; in particular sequence-based structure superpositioning will identify a larger number of structurally dissimilar pairs than geometry-based structural alignments. When two sequences can be aligned in a statistically meaningful way, sequence-based structural superpositioning provides a meaningful measure of structural differences. This approach and geometry-based structure alignments reveal somewhat different information and one or the other might be preferable in a given application. Our results suggest that in some cases, notably homology modeling, the common use of nonredundant datasets, culled from the PDB based on sequence, may mask important structural and functional information. We have established a data base of sequence-similar, structurally dissimilar protein pairs that will help address this problem (http://luna.bioc.columbia.edu/rachel/seqsimstrdiff.htm)." }, { "pmid": "22817892", "title": "An α helix to β barrel domain switch transforms the transcription factor RfaH into a translation factor.", "abstract": "NusG homologs regulate transcription and coupled processes in all living organisms. The Escherichia coli (E. coli) two-domain paralogs NusG and RfaH have conformationally identical N-terminal domains (NTDs) but dramatically different carboxy-terminal domains (CTDs), a β barrel in NusG and an α hairpin in RfaH. Both NTDs interact with elongating RNA polymerase (RNAP) to reduce pausing. In NusG, NTD and CTD are completely independent, and NusG-CTD interacts with termination factor Rho or ribosomal protein S10. In contrast, RfaH-CTD makes extensive contacts with RfaH-NTD to mask an RNAP-binding site therein. Upon RfaH interaction with its DNA target, the operon polarity suppressor (ops) DNA, RfaH-CTD is released, allowing RfaH-NTD to bind to RNAP. Here, we show that the released RfaH-CTD completely refolds from an all-α to an all-β conformation identical to that of NusG-CTD. As a consequence, RfaH-CTD binding to S10 is enabled and translation of RfaH-controlled operons is strongly potentiated. PAPERFLICK:" }, { "pmid": "18923675", "title": "Rare codons cluster.", "abstract": "Most amino acids are encoded by more than one codon. These synonymous codons are not used with equal frequency: in every organism, some codons are used more commonly, while others are more rare. Though the encoded protein sequence is identical, selective pressures favor more common codons for enhanced translation speed and fidelity. However, rare codons persist, presumably due to neutral drift. 
Here, we determine whether other, unknown factors, beyond neutral drift, affect the selection and/or distribution of rare codons. We have developed a novel algorithm that evaluates the relative rareness of a nucleotide sequence used to produce a given protein sequence. We show that rare codons, rather than being randomly scattered across genes, often occur in large clusters. These clusters occur in numerous eukaryotic and prokaryotic genomes, and are not confined to unusual or rarely expressed genes: many highly expressed genes, including genes for ribosomal proteins, contain rare codon clusters. A rare codon cluster can impede ribosome translation of the rare codon sequence. These results indicate additional selective pressures govern the use of synonymous codons, and specifically that local pauses in translation can be beneficial for protein biogenesis." }, { "pmid": "24392935", "title": "Expanding Anfinsen's principle: contributions of synonymous codon selection to rational protein design.", "abstract": "Anfinsen's principle asserts that all information required to specify the structure of a protein is encoded in its amino acid sequence. However, during protein synthesis by the ribosome, the N-terminus of the nascent chain can begin to fold before the C-terminus is available. We tested whether this cotranslational folding can alter the folded structure of an encoded protein in vivo, versus the structure formed when refolded in vitro. We designed a fluorescent protein consisting of three half-domains, where the N- and C-terminal half-domains compete with each other to interact with the central half-domain. The outcome of this competition determines the fluorescence properties of the resulting folded structure. Upon refolding after chemical denaturation, this protein produced equimolar amounts of the N- and C-terminal folded structures, respectively. In contrast, translation in Escherichia coli resulted in a 2-fold enhancement in the formation of the N-terminal folded structure. Rare synonymous codon substitutions at the 5' end of the C-terminal half-domain further increased selection for the N-terminal folded structure. These results demonstrate that the rate at which a nascent protein emerges from the ribosome can specify the folded structure of a protein." }, { "pmid": "8377180", "title": "Protein structure comparison by alignment of distance matrices.", "abstract": "With a rapidly growing pool of known tertiary structures, the importance of protein structure comparison parallels that of sequence alignment. We have developed a novel algorithm (DALI) for optimal pairwise alignment of protein structures. The three-dimensional co-ordinates of each protein are used to calculate residue-residue (C alpha-C alpha) distance matrices. The distance matrices are first decomposed into elementary contact patterns, e.g. hexapeptide-hexapeptide submatrices. Then, similar contact patterns in the two matrices are paired and combined into larger consistent sets of pairs. A Monte Carlo procedure is used to optimize a similarity score defined in terms of equivalent intramolecular distances. Several alignments are optimized in parallel, leading to simultaneous detection of the best, second-best and so on solutions. The method allows sequence gaps of any length, reversal of chain direction and free topological connectivity of aligned segments. Sequential connectivity can be imposed as an option. 
The method is fully automatic and identifies structural resemblances and common structural cores accurately and sensitively, even in the presence of geometrical distortions. An all-against-all alignment of over 200 representative protein structures results in an objective classification of known three-dimensional folds in agreement with visual classifications. Unexpected topological similarities of biological interest have been detected, e.g. between the bacterial toxin colicin A and globins, and between the eukaryotic POU-specific DNA-binding domain and the bacterial lambda repressor." }, { "pmid": "8506262", "title": "A computer vision based technique for 3-D sequence-independent structural comparison of proteins.", "abstract": "A detailed description of an efficient approach to comparison of protein structures is presented. Given the 3-D coordinate data of the structures to be compared, the system automatically identifies every region of structural similarity between the structures without prior knowledge of an initial alignment. The method uses the geometric hashing technique which was originally developed for model-based object recognition problems in the area of computer vision. It exploits a rotationally and translationally invariant representation of rigid objects, resulting in a highly efficient, fully automated tool. The method is independent of the amino acid sequence and, thus, insensitive to insertions, deletions and displacements of equivalent substructures between the molecules being compared. The method described here is general, identifies 'real' 3-D substructures and is not constrained by the order imposed by the primary chain of the amino acids. Typical structure comparison problems are examined and the results of the new method are compared with the published results from previous methods. These results, obtained without using the sequence order of the chains, confirm published structural analogies that use sequence-dependent techniques. Our results also extend previous analogies by detecting geometrically equivalent out-of-sequential-order structural elements which cannot be obtained by current techniques." }, { "pmid": "24629187", "title": "Algorithms, applications, and challenges of protein structure alignment.", "abstract": "As a fundamental problem in computational structure biology, protein structure alignment has attracted the focus of the community for more than 20 years. While the pairwise structure alignment could be applied to measure the similarity between two proteins, which is a first step for homology search and fold space construction, the multiple structure alignment could be used to understand evolutionary conservation and divergence from a family of protein structures. Structure alignment is an NP-hard problem, which is only computationally tractable by using heuristics. Three levels of heuristics for pairwise structure alignment have been proposed, from the representations of protein structure, the perspectives of viewing protein as a rigid-body or flexible, to the scoring functions as well as the search algorithms for the alignment. For multiple structure alignment, the fourth level of heuristics is applied on how to merge all input structures to a multiple structure alignment. In this review, we first present a small survey of current methods for protein pairwise and multiple alignment, focusing on those that are publicly available as web servers. 
In more detail, we also discuss the advancements on the development of the new approaches to increase the pairwise alignment accuracy, to efficiently and reliably merge input structures to the multiple structure alignment. Finally, besides broadening the spectrum of the applications of structure alignment for protein template-based prediction, we also list several open problems that need to be solved in the future, such as the large complex alignment and the fast database search." }, { "pmid": "19481444", "title": "Advances and pitfalls of protein structural alignment.", "abstract": "Structure comparison opens a window into the distant past of protein evolution, which has been unreachable by sequence comparison alone. With 55,000 entries in the Protein Data Bank and about 500 new structures added each week, automated processing, comparison, and classification are necessary. A variety of methods use different representations, scoring functions, and optimization algorithms, and they generate contradictory results even for moderately distant structures. Sequence mutations, insertions, and deletions are accommodated by plastic deformations of the common core, retaining the precise geometry of the active site, and peripheral regions may refold completely. Therefore structure comparison methods that allow for flexibility and plasticity generate the most biologically meaningful alignments. Active research directions include both the search for fold invariant features and the modeling of structural transitions in evolution. Advances have been made in algorithmic robustness, multiple alignment, and speeding up database searches." }, { "pmid": "8819165", "title": "The structural alignment between two proteins: is there a unique answer?", "abstract": "Structurally similar but sequentially unrelated proteins have been discovered and rediscovered by many researchers, using a variety of structure comparison tools. For several pairs of such proteins, existing structural alignments obtained from the literature, as well as alignments prepared using several different similarity criteria, are compared with each other. It is shown that, in general, they differ from each other, with differences increasing with diminishing sequence similarity. Differences are particularly strong between alignments optimizing global similarity measures, such as RMS deviation between C alpha atoms, and alignments focusing on more local features, such as packing or interaction pattern similarity. Simply speaking, by putting emphasis on different aspects of structure, different structural alignments show the unquestionable similarity in a different way. With differences between various alignments extending to a point where they can differ at all positions, analysis of structural similarities leads to contradictory results reported by groups using different alignment techniques. The problem of uniqueness and stability of structural alignments is further studied with the help of visualization of the suboptimal alignments. It is shown that alignments are often degenerate and whole families of alignments can be generated with almost the same score as the \"optimal alignment.\" However, for some similarity criteria, specially those based on side-chain positions, rather than C alpha positions, alignments in some areas of the protein are unique. This opens the question of how and if the structural alignments can be used as \"standards of truth\" for protein comparison." 
}, { "pmid": "20457744", "title": "Dali server: conservation mapping in 3D.", "abstract": "Our web site (http://ekhidna.biocenter.helsinki.fi/dali_server) runs the Dali program for protein structure comparison. The web site consists of three parts: (i) the Dali server compares newly solved structures against structures in the Protein Data Bank (PDB), (ii) the Dali database allows browsing precomputed structural neighbourhoods and (iii) the pairwise comparison generates suboptimal alignments for a pair of structures. Each part has its own query form and a common format for the results page. The inputs are either PDB identifiers or novel structures uploaded by the user. The results pages are hyperlinked to aid interactive analysis. The web interface is simple and easy to use. The key purpose of interactive analysis is to check whether conserved residues line up in multiple structural alignments and how conserved residues and ligands cluster together in multiple structure superimpositions. In favourable cases, protein structure comparison can lead to evolutionary discoveries not detected by sequence analysis." }, { "pmid": "15849316", "title": "TM-align: a protein structure alignment algorithm based on the TM-score.", "abstract": "We have developed TM-align, a new algorithm to identify the best structural alignment between protein pairs that combines the TM-score rotation matrix and Dynamic Programming (DP). The algorithm is approximately 4 times faster than CE and 20 times faster than DALI and SAL. On average, the resulting structure alignments have higher accuracy and coverage than those provided by these most often-used methods. TM-align is applied to an all-against-all structure comparison of 10 515 representative protein chains from the Protein Data Bank (PDB) with a sequence identity cutoff <95%: 1996 distinct folds are found when a TM-score threshold of 0.5 is used. We also use TM-align to match the models predicted by TASSER for solved non-homologous proteins in PDB. For both folded and misfolded models, TM-align can almost always find close structural analogs, with an average root mean square deviation, RMSD, of 3 A and 87% alignment coverage. Nevertheless, there exists a significant correlation between the correctness of the predicted structure and the structural similarity of the model to the other proteins in the PDB. This correlation could be used to assist in model selection in blind protein structure predictions. The TM-align program is freely downloadable at http://bioinformatics.buffalo.edu/TM-align." }, { "pmid": "14534198", "title": "Flexible structure alignment by chaining aligned fragment pairs allowing twists.", "abstract": "MOTIVATION\nProtein structures are flexible and undergo structural rearrangements as part of their function, and yet most existing protein structure comparison methods treat them as rigid bodies, which may lead to incorrect alignment.\n\n\nRESULTS\nWe have developed the Flexible structure AlignmenT by Chaining AFPs (Aligned Fragment Pairs) with Twists (FATCAT), a new method for structural alignment of proteins. The FATCAT approach simultaneously addresses the two major goals of flexible structure alignment; optimizing the alignment and minimizing the number of rigid-body movements (twists) around pivot points (hinges) introduced in the reference protein. In contrast, currently existing flexible structure alignment programs treat the hinge detection as a post-process of a standard rigid body alignment. 
We illustrate the advantages of the FATCAT approach by several examples of comparison between proteins known to adopt different conformations, where the FATCAT algorithm achieves more accurate structure alignments than current methods, while at the same time introducing fewer hinges." }, { "pmid": "19557139", "title": "Optimized null model for protein structure networks.", "abstract": "Much attention has recently been given to the statistical significance of topological features observed in biological networks. Here, we consider residue interaction graphs (RIGs) as network representations of protein structures with residues as nodes and inter-residue interactions as edges. Degree-preserving randomized models have been widely used for this purpose in biomolecular networks. However, such a single summary statistic of a network may not be detailed enough to capture the complex topological characteristics of protein structures and their network counterparts. Here, we investigate a variety of topological properties of RIGs to find a well fitting network null model for them. The RIGs are derived from a structurally diverse protein data set at various distance cut-offs and for different groups of interacting atoms. We compare the network structure of RIGs to several random graph models. We show that 3-dimensional geometric random graphs, that model spatial relationships between objects, provide the best fit to RIGs. We investigate the relationship between the strength of the fit and various protein structural features. We show that the fit depends on protein size, structural class, and thermostability, but not on quaternary structure. We apply our model to the identification of significantly over-represented structural building blocks, i.e., network motifs, in protein structure networks. As expected, choosing geometric graphs as a null model results in the most specific identification of motifs. Our geometric random graph model may facilitate further graph-based studies of protein conformation space and have important implications for protein structure comparison and prediction. The choice of a well-fitting null model is crucial for finding structural motifs that play an important role in protein folding, stability and function. To our knowledge, this is the first study that addresses the challenge of finding an optimized null model for RIGs, by comparing various RIG definitions against a series of network models." }, { "pmid": "21210730", "title": "Maximum contact map overlap revisited.", "abstract": "Among the measures for quantifying the similarity between three-dimensional (3D) protein structures, maximum contact map overlap (CMO) received sustained attention during the past decade. Despite this, the known algorithms exhibit modest performance and are not applicable for large-scale comparison. This article offers a clear advance in this respect. We present a new integer programming model for CMO and propose an exact branch-and-bound algorithm with bounds obtained by a novel Lagrangian relaxation. The efficiency of the approach is demonstrated on a popular small benchmark (Skolnick set, 40 domains). On this set, our algorithm significantly outperforms the best existing exact algorithms. Many hard CMO instances have been solved for the first time. To further assess our approach, we constructed a large-scale set of 300 protein domains. Computing the similarity measure for any of the 44850 pairs, we obtained a classification in excellent agreement with SCOP. 
Supplementary Material is available at www.liebertonline.com/cmb." }, { "pmid": "25810431", "title": "Proper evaluation of alignment-free network comparison methods.", "abstract": "MOTIVATION\nNetwork comparison is a computationally intractable problem with important applications in systems biology and other domains. A key challenge is to properly quantify similarity between wiring patterns of two networks in an alignment-free fashion. Also, alignment-based methods exist that aim to identify an actual node mapping between networks and as such serve a different purpose. Various alignment-free methods that use different global network properties (e.g. degree distribution) have been proposed. Methods based on small local subgraphs called graphlets perform the best in the alignment-free network comparison task, due to high level of topological detail that graphlets can capture. Among different graphlet-based methods, Graphlet Correlation Distance (GCD) was shown to be the most accurate for comparing networks. Recently, a new graphlet-based method called NetDis was proposed, which was claimed to be superior. We argue against this, as the performance of NetDis was not properly evaluated to position it correctly among the other alignment-free methods.\n\n\nRESULTS\nWe evaluate the performance of available alignment-free network comparison methods, including GCD and NetDis. We do this by measuring accuracy of each method (in a systematic precision-recall framework) in terms of how well the method can group (cluster) topologically similar networks. By testing this on both synthetic and real-world networks from different domains, we show that GCD remains the most accurate, noise-tolerant and computationally efficient alignment-free method. That is, we show that NetDis does not outperform the other methods, as originally claimed, while it is also computationally more expensive. Furthermore, since NetDis is dependent on the choice of a network null model (unlike the other graphlet-based methods), we show that its performance is highly sensitive to the choice of this parameter. Finally, we find that its performance is not independent on network sizes and densities, as originally claimed.\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." }, { "pmid": "18178566", "title": "Network pattern of residue packing in helical membrane proteins and its application in membrane protein structure prediction.", "abstract": "De novo protein structure prediction plays an important role in studies of helical membrane proteins as well as structure-based drug design efforts. Developing an accurate scoring function for protein structure discrimination and validation remains a current challenge. Network approaches based on overall network patterns of residue packing have proven useful in soluble protein structure discrimination. It is thus of interest to apply similar approaches to the studies of residue packing in membrane proteins. In this work, we first carried out such analysis on a set of diverse, non-redundant and high-resolution membrane protein structures. Next, we applied the same approach to three test sets. The first set includes nine structures of membrane proteins with the resolution worse than 2.5 A; the other two sets include a total of 101 G-protein coupled receptor models, constructed using either de novo or homology modeling techniques. 
Results of analyses indicate the two criteria derived from studying high-resolution membrane protein structures are good indicators of a high-quality native fold and the approach is very effective for discriminating native membrane protein folds from less-native ones. These findings should be of help for the investigation of the fundamental problem of membrane protein structure prediction." }, { "pmid": "19054790", "title": "Comparative analysis of the packing topology of structurally important residues in helical membrane and soluble proteins.", "abstract": "Elucidating the distinct topology of residue packing in transmembrane proteins is essential for developing high-quality computational tools for their structure prediction. Network approaches transforming a protein's three-dimensional structure into a network have proven useful in analyzing various aspects of protein structures. Residues with high degree of connectivity as identified through network analysis are considered to be important for the stability of a protein's folded structure. It is thus of interest to study the packing topology of these structurally important residues in membrane proteins. In this work, we systematically characterized the importance and the spatial topology of these highly connected residues in helical membrane and helical soluble proteins from several aspects. A representative helical membrane protein and two helical soluble protein structure data sets were compiled and analyzed. Results of analyses indicate that the highly connected amino acid residues in membrane proteins are more scattered peripherally and more exposed to the membrane than in soluble proteins. Accordingly, they are less densely connected with each other in membrane proteins than in soluble proteins. Together with the knowledge of a centralized function site for many membrane proteins, these findings suggest a structure-function model that is distinguishable from soluble proteins." }, { "pmid": "24554629", "title": "Dynamic networks reveal key players in aging.", "abstract": "MOTIVATION\nBecause susceptibility to diseases increases with age, studying aging gains importance. Analyses of gene expression or sequence data, which have been indispensable for investigating aging, have been limited to studying genes and their protein products in isolation, ignoring their connectivities. However, proteins function by interacting with other proteins, and this is exactly what biological networks (BNs) model. Thus, analyzing the proteins' BN topologies could contribute to the understanding of aging. Current methods for analyzing systems-level BNs deal with their static representations, even though cells are dynamic. For this reason, and because different data types can give complementary biological insights, we integrate current static BNs with aging-related gene expression data to construct dynamic age-specific BNs. Then, we apply sensitive measures of topology to the dynamic BNs to study cellular changes with age.\n\n\nRESULTS\nWhile global BN topologies do not significantly change with age, local topologies of a number of genes do. We predict such genes to be aging-related. 
We demonstrate credibility of our predictions by (i) observing significant overlap between our predicted aging-related genes and 'ground truth' aging-related genes; (ii) observing significant overlap between functions and diseases that are enriched in our aging-related predictions and those that are enriched in 'ground truth' aging-related data; (iii) providing evidence that diseases which are enriched in our aging-related predictions are linked to human aging; and (iv) validating our high-scoring novel predictions in the literature.\n\n\nAVAILABILITY AND IMPLEMENTATION\nSoftware executables are available upon request." }, { "pmid": "15284103", "title": "Modeling interactome: scale-free or geometric?", "abstract": "MOTIVATION\nNetworks have been used to model many real-world phenomena to better understand the phenomena and to guide experiments in order to predict their behavior. Since incorrect models lead to incorrect predictions, it is vital to have as accurate a model as possible. As a result, new techniques and models for analyzing and modeling real-world networks have recently been introduced.\n\n\nRESULTS\nOne example of large and complex networks involves protein-protein interaction (PPI) networks. We analyze PPI networks of yeast Saccharomyces cerevisiae and fruitfly Drosophila melanogaster using a newly introduced measure of local network structure as well as the standardly used measures of global network structure. We examine the fit of four different network models, including Erdos-Renyi, scale-free and geometric random network models, to these PPI networks with respect to the measures of local and global network structure. We demonstrate that the currently accepted scale-free model of PPI networks fails to fit the data in several respects and show that a random geometric model provides a much more accurate model of the PPI data. We hypothesize that only the noise in these networks is scale-free.\n\n\nCONCLUSIONS\nWe systematically evaluate how well-different network models fit the PPI networks. We show that the structure of PPI networks is better modeled by a geometric random graph than by a scale-free model.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary information is available at http://www.cs.utoronto.ca/~juris/data/data/ppiGRG04/" }, { "pmid": "17237089", "title": "Biological network comparison using graphlet degree distribution.", "abstract": "MOTIVATION\nAnalogous to biological sequence comparison, comparing cellular networks is an important problem that could provide insight into biological understanding and therapeutics. For technical reasons, comparing large networks is computationally infeasible, and thus heuristics, such as the degree distribution, clustering coefficient, diameter, and relative graphlet frequency distribution have been sought. It is easy to demonstrate that two networks are different by simply showing a short list of properties in which they differ. It is much harder to show that two networks are similar, as it requires demonstrating their similarity in all of their exponentially many properties. Clearly, it is computationally prohibitive to analyze all network properties, but the larger the number of constraints we impose in determining network similarity, the more likely it is that the networks will truly be similar.\n\n\nRESULTS\nWe introduce a new systematic measure of a network's local structure that imposes a large number of similarity constraints on networks being compared. 
In particular, we generalize the degree distribution, which measures the number of nodes 'touching' k edges, into distributions measuring the number of nodes 'touching' k graphlets, where graphlets are small connected non-isomorphic subgraphs of a large network. Our new measure of network local structure consists of 73 graphlet degree distributions of graphlets with 2-5 nodes, but it is easily extendible to a greater number of constraints (i.e. graphlets), if necessary, and the extensions are limited only by the available CPU. Furthermore, we show a way to combine the 73 graphlet degree distributions into a network 'agreement' measure which is a number between 0 and 1, where 1 means that networks have identical distributions and 0 means that they are far apart. Based on this new network agreement measure, we show that almost all of the 14 eukaryotic PPI networks, including human, resulting from various high-throughput experimental techniques, as well as from curated databases, are better modeled by geometric random graphs than by Erdös-Rény, random scale-free, or Barabási-Albert scale-free networks.\n\n\nAVAILABILITY\nSoftware executables are available upon request." }, { "pmid": "24594900", "title": "Revealing missing parts of the interactome via link prediction.", "abstract": "Protein interaction networks (PINs) are often used to \"learn\" new biological function from their topology. Since current PINs are noisy, their computational de-noising via link prediction (LP) could improve the learning accuracy. LP uses the existing PIN topology to predict missing and spurious links. Many of existing LP methods rely on shared immediate neighborhoods of the nodes to be linked. As such, they have limitations. Thus, in order to comprehensively study what are the topological properties of nodes in PINs that dictate whether the nodes should be linked, we introduce novel sensitive LP measures that are expected to overcome the limitations of the existing methods. We systematically evaluate the new and existing LP measures by introducing \"synthetic\" noise into PINs and measuring how accurate the measures are in reconstructing the original PINs. Also, we use the LP measures to de-noise the original PINs, and we measure biological correctness of the de-noised PINs with respect to functional enrichment of the predicted interactions. Our main findings are: 1) LP measures that favor nodes which are both \"topologically similar\" and have large shared extended neighborhoods are superior; 2) using more network topology often though not always improves LP accuracy; and 3) LP improves biological correctness of the PINs, plus we validate a significant portion of the predicted interactions in independent, external PIN data sources. Ultimately, we are less focused on identifying a superior method but more on showing that LP improves biological correctness of PINs, which is its ultimate goal in computational biology. But we note that our new methods outperform each of the existing ones with respect to at least one evaluation criterion. Alarmingly, we find that the different criteria often disagree in identifying the best method(s), which has important implications for LP communities in any domain, including social networks." }, { "pmid": "26072480", "title": "Exploring the structure and function of temporal networks with dynamic graphlets.", "abstract": "MOTIVATION\nWith increasing availability of temporal real-world networks, how to efficiently study these data? 
One can model a temporal network as a single aggregate static network, or as a series of time-specific snapshots, each being an aggregate static network over the corresponding time window. Then, one can use established methods for static analysis on the resulting aggregate network(s), but losing in the process valuable temporal information either completely, or at the interface between different snapshots, respectively. Here, we develop a novel approach for studying a temporal network more explicitly, by capturing inter-snapshot relationships.\n\n\nRESULTS\nWe base our methodology on well-established graphlets (subgraphs), which have been proven in numerous contexts in static network research. We develop new theory to allow for graphlet-based analyses of temporal networks. Our new notion of dynamic graphlets is different from existing dynamic network approaches that are based on temporal motifs (statistically significant subgraphs). The latter have limitations: their results depend on the choice of a null network model that is required to evaluate the significance of a subgraph, and choosing a good null model is non-trivial. Our dynamic graphlets overcome the limitations of the temporal motifs. Also, when we aim to characterize the structure and function of an entire temporal network or of individual nodes, our dynamic graphlets outperform the static graphlets. Clearly, accounting for temporal information helps. We apply dynamic graphlets to temporal age-specific molecular network data to deepen our limited knowledge about human aging.\n\n\nAVAILABILITY AND IMPLEMENTATION\nhttp://www.nd.edu/∼cone/DG." }, { "pmid": "24686408", "title": "Revealing the hidden language of complex networks.", "abstract": "Sophisticated methods for analysing complex networks promise to be of great benefit to almost all scientific disciplines, yet they elude us. In this work, we make fundamental methodological advances to rectify this. We discover that the interaction between a small number of roles, played by nodes in a network, can characterize a network's structure and also provide a clear real-world interpretation. Given this insight, we develop a framework for analysing and comparing networks, which outperforms all existing ones. We demonstrate its strength by uncovering novel relationships between seemingly unrelated networks, such as Facebook, metabolic, and protein structure networks. We also use it to track the dynamics of the world trade network, showing that a country's role of a broker between non-trading countries indicates economic prosperity, whereas peripheral roles are associated with poverty. This result, though intuitive, has escaped all existing frameworks. Finally, our approach translates network topology into everyday language, bringing network analysis closer to domain scientists." }, { "pmid": "15288760", "title": "Inter-residue interactions in protein folding and stability.", "abstract": "During the process of protein folding, the amino acid residues along the polypeptide chain interact with each other in a cooperative manner to form the stable native structure. The knowledge about inter-residue interactions in protein structures is very helpful to understand the mechanism of protein folding and stability. In this review, we introduce the classification of inter-residue interactions into short, medium and long range based on a simple geometric approach. 
The features of these interactions in different structural classes of globular and membrane proteins, and in various folds have been delineated. The development of contact potentials and the application of inter-residue contacts for predicting the structural class and secondary structures of globular proteins, solvent accessibility, fold recognition and ab initio tertiary structure prediction have been evaluated. Further, the relationship between inter-residue contacts and protein-folding rates has been highlighted. Moreover, the importance of inter-residue interactions in protein-folding kinetics and for understanding the stability of proteins has been discussed. In essence, the information gained from the studies on inter-residue interactions provides valuable insights for understanding protein folding and de novo protein design." }, { "pmid": "17953741", "title": "Application of amino acid occurrence for discriminating different folding types of globular proteins.", "abstract": "BACKGROUND\nPredicting the three-dimensional structure of a protein from its amino acid sequence is a long-standing goal in computational/molecular biology. The discrimination of different structural classes and folding types are intermediate steps in protein structure prediction.\n\n\nRESULTS\nIn this work, we have proposed a method based on linear discriminant analysis (LDA) for discriminating 30 different folding types of globular proteins using amino acid occurrence. Our method was tested with a non-redundant set of 1612 proteins and it discriminated them with the accuracy of 38%, which is comparable to or better than other methods in the literature. A web server has been developed for discriminating the folding type of a query protein from its amino acid sequence and it is available at http://granular.com/PROLDA/.\n\n\nCONCLUSION\nAmino acid occurrence has been successfully used to discriminate different folding types of globular proteins. The discrimination accuracy obtained with amino acid occurrence is better than that obtained with amino acid composition and/or amino acid properties. In addition, the method is very fast to obtain the results." }, { "pmid": "19664241", "title": "Identification of protein functions using a machine-learning approach based on sequence-derived properties.", "abstract": "BACKGROUND\nPredicting the function of an unknown protein is an essential goal in bioinformatics. Sequence similarity-based approaches are widely used for function prediction; however, they are often inadequate in the absence of similar sequences or when the sequence similarity among known protein sequences is statistically weak. This study aimed to develop an accurate prediction method for identifying protein function, irrespective of sequence and structural similarities.\n\n\nRESULTS\nA highly accurate prediction method capable of identifying protein function, based solely on protein sequence properties, is described. This method analyses and identifies specific features of the protein sequence that are highly correlated with certain protein functions and determines the combination of protein sequence features that best characterises protein function. Thirty-three features that represent subtle differences in local regions and full regions of the protein sequences were introduced. On the basis of 484 features extracted solely from the protein sequence, models were built to predict the functions of 11 different proteins from a broad range of cellular components, molecular functions, and biological processes. 
The accuracy of protein function prediction using random forests with feature selection ranged from 94.23% to 100%. The local sequence information was found to have a broad range of applicability in predicting protein function.\n\n\nCONCLUSION\nWe present an accurate prediction method using a machine-learning approach based solely on protein sequence properties. The primary contribution of this paper is to propose new PNPRD features representing global and/or local differences in sequences, based on positively and/or negatively charged residues, to assist in predicting protein function. In addition, we identified a compact and useful feature subset for predicting the function of various proteins. Our results indicate that sequence-based classifiers can provide good results among a broad range of proteins, that the proposed features are useful in predicting several functions, and that the combination of our and traditional features may support the creation of a discriminative feature set for specific protein functions." }, { "pmid": "10592235", "title": "The Protein Data Bank.", "abstract": "The Protein Data Bank (PDB; http://www.rcsb.org/pdb/ ) is the single worldwide archive of structural data of biological macromolecules. This paper describes the goals of the PDB, the systems in place for data deposition and access, how to obtain further information, and near-term plans for the future development of the resource." }, { "pmid": "23140471", "title": "Effective inter-residue contact definitions for accurate protein fold recognition.", "abstract": "BACKGROUND\nEffective encoding of residue contact information is crucial for protein structure prediction since it has a unique role to capture long-range residue interactions compared to other commonly used scoring terms. The residue contact information can be incorporated in structure prediction in several different ways: It can be incorporated as statistical potentials or it can be also used as constraints in ab initio structure prediction. To seek the most effective definition of residue contacts for template-based protein structure prediction, we evaluated 45 different contact definitions, varying bases of contacts and distance cutoffs, in terms of their ability to identify proteins of the same fold.\n\n\nRESULTS\nWe found that overall the residue contact pattern can distinguish protein folds best when contacts are defined for residue pairs whose Cβ atoms are at 7.0 Å or closer to each other. Lower fold recognition accuracy was observed when inaccurate threading alignments were used to identify common residue contacts between protein pairs. In the case of threading, alignment accuracy strongly influences the fraction of common contacts identified among proteins of the same fold, which eventually affects the fold recognition accuracy. The largest deterioration of the fold recognition was observed for β-class proteins when the threading methods were used because the average alignment accuracy was worst for this fold class. When results of fold recognition were examined for individual proteins, we found that the effective contact definition depends on the fold of the proteins. A larger distance cutoff is often advantageous for capturing spatial arrangement of the secondary structures which are not physically in contact. 
For capturing contacts between neighboring β strands, considering the distance between Cα atoms is better than the Cβ-based distance because the side-chain of interacting residues on β strands sometimes point to opposite directions.\n\n\nCONCLUSION\nResidue contacts defined by Cβ-Cβ distance of 7.0 Å work best overall among tested to identify proteins of the same fold. We also found that effective contact definitions differ from fold to fold, suggesting that using different residue contact definition specific for each template will lead to improvement of the performance of threading." }, { "pmid": "19259413", "title": "Uncovering biological network function via graphlet degree signatures.", "abstract": "MOTIVATION\nProteins are essential macromolecules of life and thus understanding their function is of great importance. The number of functionally unclassified proteins is large even for simple and well studied organisms such as baker's yeast. Methods for determining protein function have shifted their focus from targeting specific proteins based solely on sequence homology to analyses of the entire proteome based on protein-protein interaction (PPI) networks. Since proteins interact to perform a certain function, analyzing structural properties of PPI networks may provide useful clues about the biological function of individual proteins, protein complexes they participate in, and even larger subcellular machines.\n\n\nRESULTS\nWe design a sensitive graph theoretic method for comparing local structures of node neighborhoods that demonstrates that in PPI networks, biological function of a node and its local network structure are closely related. The method summarizes a protein's local topology in a PPI network into the vector of graphlet degrees called the signature of the protein and computes the signature similarities between all protein pairs. We group topologically similar proteins under this measure in a PPI network and show that these protein groups belong to the same protein complexes, perform the same biological functions, are localized in the same subcellular compartments, and have the same tissue expressions. Moreover, we apply our technique on a proteome-scale network data and infer biological function of yet unclassified proteins demonstrating that our method can provide valuable guidelines for future experimental research such as disease protein prediction.\n\n\nAVAILABILITY\nData is available upon request." }, { "pmid": "21887225", "title": "Dominating biological networks.", "abstract": "Proteins are essential macromolecules of life that carry out most cellular processes. Since proteins aggregate to perform function, and since protein-protein interaction (PPI) networks model these aggregations, one would expect to uncover new biology from PPI network topology. Hence, using PPI networks to predict protein function and role of protein pathways in disease has received attention. A debate remains open about whether network properties of \"biologically central (BC)\" genes (i.e., their protein products), such as those involved in aging, cancer, infectious diseases, or signaling and drug-targeted pathways, exhibit some topological centrality compared to the rest of the proteins in the human PPI network.To help resolve this debate, we design new network-based approaches and apply them to get new insight into biological function and disease. We hypothesize that BC genes have a topologically central (TC) role in the human PPI network. 
We propose two different concepts of topological centrality. We design a new centrality measure to capture complex wirings of proteins in the network that identifies as TC those proteins that reside in dense extended network neighborhoods. Also, we use the notion of domination and find dominating sets (DSs) in the PPI network, i.e., sets of proteins such that every protein is either in the DS or is a neighbor of the DS. Clearly, a DS has a TC role, as it enables efficient communication between different network parts. We find statistically significant enrichment in BC genes of TC nodes and outperform the existing methods indicating that genes involved in key biological processes occupy topologically complex and dense regions of the network and correspond to its \"spine\" that connects all other network parts and can thus pass cellular signals efficiently throughout the network. To our knowledge, this is the first study that explores domination in the context of PPI networks." } ]
JMIR Medical Informatics
29089288
PMC5686421
10.2196/medinform.8531
Ranking Medical Terms to Support Expansion of Lay Language Resources for Patient Comprehension of Electronic Health Record Notes: Adapted Distant Supervision Approach
Background: Medical terms are a major obstacle for patients to comprehend their electronic health record (EHR) notes. Clinical natural language processing (NLP) systems that link EHR terms to lay terms or definitions allow patients to easily access helpful information when reading through their EHR notes, and have been shown to improve patient EHR comprehension. However, high-quality lay language resources for EHR terms are very limited in the public domain. Because expanding and curating such a resource is a costly process, it is beneficial and even necessary to first identify the terms that are important for patient EHR comprehension.
Objective: We aimed to develop an NLP system, called adapted distant supervision (ADS), to rank candidate terms mined from EHR corpora. EHR terms ranked highly by ADS will be given higher priority for lay language annotation, that is, for the creation of lay definitions.
Methods: Adapted distant supervision uses distant supervision from the consumer health vocabulary (CHV) and transfer learning to adapt itself to the problem of ranking EHR terms in the target domain. We investigated 2 state-of-the-art transfer learning algorithms (ie, feature space augmentation and supervised distant supervision) and designed 5 types of learning features for ADS, including distributed word representations learned from large EHR data. To evaluate ADS, we asked domain experts to annotate 6038 candidate terms as important or nonimportant for EHR comprehension. We then randomly divided these data into target-domain training data (1000 examples) and evaluation data (5038 examples). We compared ADS with 2 strong baselines, including standard supervised learning, on the evaluation data.
Results: The ADS system using feature space augmentation achieved the best average precision, 0.850, on the evaluation set when using 1000 target-domain training examples. The ADS system using supervised distant supervision achieved the best average precision, 0.819, on the evaluation set when using only 100 target-domain training examples. Both ADS systems performed significantly better than the baseline systems (P<.001 for all measures and all conditions). Using a rich set of learning features contributed substantially to ADS's performance.
Conclusions: ADS can effectively rank terms mined from EHRs. Transfer learning improved ADS's performance even with a small number of target-domain training examples. EHR terms prioritized by ADS were used to expand a lay language resource that supports patient EHR comprehension. The top 10,000 EHR terms ranked by ADS are available upon request.
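The ranking quality reported above is measured with average precision. As a point of reference, here is a minimal Python sketch of how average precision over a ranked candidate term list can be computed; this is an illustration only, not code from the ADS system, and the example terms and labels are hypothetical.

# Minimal sketch: average precision for a ranked list of candidate EHR terms,
# where "relevant" means annotated by experts as important for comprehension.
def average_precision(ranked_terms, important_terms):
    hits, precision_sum = 0, 0.0
    for rank, term in enumerate(ranked_terms, start=1):
        if term in important_terms:
            hits += 1
            precision_sum += hits / rank        # precision at this rank
    return precision_sum / max(len(important_terms), 1)

# Toy usage with hypothetical terms (all candidates are ranked, as in the evaluation setup):
ranked = ["hyperlipidemia", "blood", "dyspnea", "chair", "stenosis"]
gold = {"hyperlipidemia", "dyspnea", "stenosis"}
print(round(average_precision(ranked, gold), 3))  # 0.756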
Related Work

Natural Language Processing to Facilitate Creation of Lexical Entries
Previous studies have used both unsupervised and supervised learning methods to prioritize terms for inclusion in biomedical and health knowledge resources [32-35]. Term recognition methods, which are widely used unsupervised methods for term extraction, use rules and statistics (eg, corpus-level word and term frequencies) to prioritize technical terms from domain-specific text corpora. Since these methods do not use manually annotated training data, they have better domain portability, but they are less accurate than supervised learning [32]. The contribution of this study is to propose a new learning-based method for EHR term prioritization that is more accurate than supervised learning while retaining good domain portability.
Our work is also related to previous studies that have used distributional semantics for lexicon expansion [35-37]. In this work, we used word embedding, one technique for distributional semantics, to generate one type of learning features for the ADS system to rank EHR terms.

Ranking Terms in Electronic Health Records
We previously developed NLP systems to rank and identify important terms from each EHR note of individual patients [38,39]. This study is different in that it aimed to rank terms at the EHR corpus level for the purpose of expanding a lay language resource that improves health literacy and EHR comprehension for the general patient population. Note that both types of work are important for building NLP-enabled interventions to support patient EHR comprehension. For example, a real-world application can link all medical jargon terms in a patient's EHR note to lay terms or definitions, and then highlight the terms most important for this patient and provide detailed information for them.

Distant Supervision
Our ADS system uses distant supervision from the CHV. Distant supervision refers to a learning framework that uses information from knowledge bases to create labeled data for training machine learning models [40-42]. Previous work often used this technique to address context-based classification problems such as named entity detection and relation detection. In contrast, we used it to rank terms without considering context. However, our work is similar in that it uses heuristic rules and knowledge bases to create training data. Although training data created this way often contain noise, distant supervision has been successfully applied to several biomedical NLP tasks to reduce human annotation effort, including the extraction of entities [40,41,43], relations [44-46], and important sentences [47] from the biomedical literature. In this study, we made novel use of the non-EHR-centric lexical resource CHV to create training data for ranking terms from EHRs. This approach has greater domain portability than conventional distant supervision methods because it places fewer demands on the similarity between the knowledge base and the target-domain learning task. On the other hand, learning from distantly labeled data that do not fully match the target task is more challenging. We address this challenge by using transfer learning.
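To make the distant supervision step concrete, the sketch below labels candidate EHR terms by a simple membership rule against the CHV: terms found in the CHV become positive source-domain examples and the rest become negatives. This is a simplified illustration under our own assumptions (the actual heuristic rules in ADS are richer), and the file name and terms are hypothetical.

# Illustrative distant supervision: derive noisy training labels for candidate
# terms from the CHV knowledge base, with no manual annotation.
def load_chv_terms(path="chv_terms.txt"):
    # Hypothetical file: one CHV term per line.
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def distant_labels(candidate_terms, chv_terms):
    # Membership heuristic: in CHV -> positive (1), otherwise negative (0).
    return [(t, int(t.lower() in chv_terms)) for t in candidate_terms]

candidates = ["myocardial infarction", "the patient", "dyspnea", "follow up"]
chv = {"myocardial infarction", "dyspnea"}   # stand-in for load_chv_terms()
print(distant_labels(candidates, chv))
# [('myocardial infarction', 1), ('the patient', 0), ('dyspnea', 1), ('follow up', 0)]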
Transfer Learning
Transfer learning is a learning framework that transfers knowledge from a source domain D_S (in our case, the training data derived from the CHV) to a target domain D_T in order to improve the learning of the target-domain task T_T [48]. We followed Pan and Yang [48] in distinguishing between inductive transfer learning, where the source- and target-domain tasks are different, and domain adaptation, where the source- and target-domain tasks are the same but the source and target domains (ie, data distributions) are different. Our approach belongs to the first category because our source-domain and target-domain tasks define positive and negative examples in different ways. Transfer learning has been applied to important bioinformatics tasks such as DNA sequence analysis and gene interaction network analysis [49]. It has also been applied to several clinical and biomedical NLP tasks, including part-of-speech tagging [50] and key concept identification for clinical text [51], semantic role labeling for biomedical articles [52] and clinical text [53], and key sentence extraction from the biomedical literature [47]. In this work, we investigated 2 state-of-the-art transfer learning algorithms that have shown superior performance in recent studies [47,53]. We aimed to show empirically that, in combination with distant supervision, they are effective in ranking EHR terms.
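To illustrate one of the two algorithms, the sketch below shows the core idea of feature space augmentation in the spirit of Daumé's frustratingly easy domain adaptation, which we take to be the family this variant belongs to (an assumption on our part; the exact formulation used in ADS may differ). Each feature vector is expanded into a shared copy plus a domain-specific copy, so a single classifier trained on the pooled CHV-derived (source) and expert-annotated (target) examples can learn which feature weights transfer across domains. Feature values and dimensions below are hypothetical.

import numpy as np

def augment(X, domain):
    # EasyAdapt-style mapping: x -> [x_shared, x_source_only, x_target_only].
    # Source rows become [x, x, 0]; target rows become [x, 0, x].
    zeros = np.zeros_like(X)
    return np.hstack([X, X, zeros]) if domain == "source" else np.hstack([X, zeros, X])

# Hypothetical 3-dimensional term features (eg, term frequency, length, embedding score).
X_src = np.array([[0.2, 5.0, 0.1], [0.7, 12.0, 0.9]])   # distantly labeled (CHV) examples
X_tgt = np.array([[0.4, 8.0, 0.3]])                      # expert-annotated examples

X_train = np.vstack([augment(X_src, "source"), augment(X_tgt, "target")])
print(X_train.shape)  # (3, 9): each 3-dimensional vector becomes 9-dimensional
# X_train, together with the corresponding labels, can be fed to any standard
# classifier whose scores are then used to rank the candidate terms.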
[ "19224738", "24359554", "26104044", "23027317", "23407012", "23535584", "14965405", "18693866", "12923796", "11103725", "1517087", "20845203", "23978618", "26681155", "27702738", "20442139", "15561782", "18436895", "18693956", "21347002", "23920650", "22357448", "25953147", "28630033", "9934385", "11368735", "22473833", "16221948", "17478413", "21586386", "27671202", "24551362", "27903489", "28267590", "10786289", "15542014", "22174293", "23486109", "20179074", "26063745", "21925286", "10566330", "11604772", "11720966", "12425240", "16779162", "18436906", "14728258", "21163965", "17238339" ]
[ { "pmid": "24359554", "title": "The Medicare Electronic Health Record Incentive Program: provider performance on core and menu measures.", "abstract": "OBJECTIVE\nTo measure performance by eligible health care providers on CMS's meaningful use measures.\n\n\nDATA SOURCE\nMedicare Electronic Health Record Incentive Program Eligible Professionals Public Use File (PUF), which contains data on meaningful use attestations by 237,267 eligible providers through May 31, 2013.\n\n\nSTUDY DESIGN\nCross-sectional analysis of the 15 core and 10 menu measures pertaining to use of EHR functions reported in the PUF.\n\n\nPRINCIPAL FINDINGS\nProviders in the dataset performed strongly on all core measures, with the most frequent response for each of the 15 measures being 90-100 percent compliance, even when the threshold for a particular measure was lower (e.g., 30 percent). PCPs had higher scores than specialists for computerized order entry, maintaining an active medication list, and documenting vital signs, while specialists had higher scores for maintaining a problem list, recording patient demographics and smoking status, and for providing patients with an after-visit summary. In fact, 90.2 percent of eligible providers claimed at least one exclusion, and half claimed two or more.\n\n\nCONCLUSIONS\nProviders are successfully attesting to CMS's requirements, and often exceeding the thresholds required by CMS; however, some troubling patterns in exclusions are present. CMS should raise program requirements in future years." }, { "pmid": "26104044", "title": "Patient Portals and Patient Engagement: A State of the Science Review.", "abstract": "BACKGROUND\nPatient portals (ie, electronic personal health records tethered to institutional electronic health records) are recognized as a promising mechanism to support greater patient engagement, yet questions remain about how health care leaders, policy makers, and designers can encourage adoption of patient portals and what factors might contribute to sustained utilization.\n\n\nOBJECTIVE\nThe purposes of this state of the science review are to (1) present the definition, background, and how current literature addresses the encouragement and support of patient engagement through the patient portal, and (2) provide a summary of future directions for patient portal research and development to meaningfully impact patient engagement.\n\n\nMETHODS\nWe reviewed literature from 2006 through 2014 in PubMed, Ovid Medline, and PsycInfo using the search terms \"patient portal\" OR \"personal health record\" OR \"electronic personal health record\". Final inclusion criterion dictated that studies report on the patient experience and/or ways that patients may be supported to make competent health care decisions and act on those decisions using patient portal functionality.\n\n\nRESULTS\nWe found 120 studies that met the inclusion criteria. Based on the research questions, explicit and implicit aims of the studies, and related measures addressed, the studies were grouped into five major topics (patient adoption, provider endorsement, health literacy, usability, and utility). We discuss the findings and conclusions of studies that address the five topical areas.\n\n\nCONCLUSIONS\nCurrent research has demonstrated that patients' interest and ability to use patient portals is strongly influenced by personal factors such age, ethnicity, education level, health literacy, health status, and role as a caregiver. 
Health care delivery factors, mainly provider endorsement and patient portal usability also contribute to patient's ability to engage through and with the patient portal. Future directions of research should focus on identifying specific populations and contextual considerations that would benefit most from a greater degree of patient engagement through a patient portal. Ultimately, adoption by patients and endorsement by providers will come when existing patient portal features align with patients' and providers' information needs and functionality." }, { "pmid": "23027317", "title": "Inviting patients to read their doctors' notes: a quasi-experimental study and a look ahead.", "abstract": "BACKGROUND\nLittle information exists about what primary care physicians (PCPs) and patients experience if patients are invited to read their doctors' office notes.\n\n\nOBJECTIVE\nTo evaluate the effect on doctors and patients of facilitating patient access to visit notes over secure Internet portals.\n\n\nDESIGN\nQuasi-experimental trial of PCPs and patient volunteers in a year-long program that provided patients with electronic links to their doctors' notes.\n\n\nSETTING\nPrimary care practices at Beth Israel Deaconess Medical Center (BIDMC) in Massachusetts, Geisinger Health System (GHS) in Pennsylvania, and Harborview Medical Center (HMC) in Washington.\n\n\nPARTICIPANTS\n105 PCPs and 13,564 of their patients who had at least 1 completed note available during the intervention period.\n\n\nMEASUREMENTS\nPortal use and electronic messaging by patients and surveys focusing on participants' perceptions of behaviors, benefits, and negative consequences.\n\n\nRESULTS\n11,155 [corrected] of 13,564 patients with visit notes available opened at least 1 note (84% at BIDMC, 82% [corrected] at GHS, and 47% at HMC). Of 5219 [corrected] patients who opened at least 1 note and completed a postintervention survey, 77% to 59% [corrected] across the 3 sites reported that open notes helped them feel more in control of their care; 60% to 78% of those taking medications reported increased medication adherence; 26% to 36% had privacy concerns; 1% to 8% reported that the notes caused confusion, worry, or offense; and 20% to 42% reported sharing notes with others. The volume of electronic messages from patients did not change. After the intervention, few doctors reported longer visits (0% to 5%) or more time addressing patients' questions outside of visits (0% to 8%), with practice size having little effect; 3% to 36% of doctors reported changing documentation content; and 0% to 21% reported taking more time writing notes. Looking ahead, 59% to 62% of patients believed that they should be able to add comments to a doctor's note. One out of 3 patients believed that they should be able to approve the notes' contents, but 85% to 96% of doctors did not agree. At the end of the experimental period, 99% of patients wanted open notes to continue and no doctor elected to stop.\n\n\nLIMITATIONS\nOnly 3 geographic areas were represented, and most participants were experienced in using portals. Doctors volunteering to participate and patients using portals and completing surveys may tend to offer favorable feedback, and the response rate of the patient surveys (41%) may further limit generalizability.\n\n\nCONCLUSION\nPatients accessed visit notes frequently, a large majority reported clinically relevant benefits and minimal concerns, and virtually all patients wanted the practice to continue. 
With doctors experiencing no more than a modest effect on their work lives, open notes seem worthy of widespread adoption.\n\n\nPRIMARY FUNDING SOURCE\nThe Robert Wood Johnson Foundation, the Drane Family Fund, the Richard and Florence Koplow Charitable Foundation, and the National Cancer Institute." }, { "pmid": "23407012", "title": "Evaluating patient access to Electronic Health Records: results from a survey of veterans.", "abstract": "OBJECTIVE\nPersonal Health Records (PHRs) tethered to an Electronic Health Record (EHR) offer patients unprecedented access to their personal health information. At the Department of Veteran Affairs (VA), the My HealtheVet Pilot Program was an early PHR prototype enabling patients to import 18 types of information, including clinical notes and laboratory test results, from the VA EHR into a secure PHR portal. The goal of this study was to explore Veteran perceptions about this access to their medical records, including perceived value and effect on satisfaction, self-care, and communication.\n\n\nMETHODS\nPatients enrolled in the pilot program were invited to participate in a web-based survey.\n\n\nRESULTS\nAmong 688 Veteran respondents, there was a high degree of satisfaction with the pilot program, with 84% agreeing that the information and services were helpful. The most highly ranked feature was access to personal health information from the VA EHR. The majority of respondents (72%) indicated that the pilot Web site made it easy for them to locate relevant information. Most participants (66%) agreed that the pilot program helped improve their care, with 90% indicating that they would recommend it to another Veteran.\n\n\nCONCLUSIONS\nVeterans' primary motivation for use of the pilot Web site was the ability to access their own personal health information from the EHR. With patients viewing such access as beneficial to their health and care, PHRs with access to EHR data are positioned to improve health care quality. Making additional information accessible to patients is crucial to meet their needs and preferences." }, { "pmid": "23535584", "title": "Patient experiences with full electronic access to health records and clinical notes through the My HealtheVet Personal Health Record Pilot: qualitative study.", "abstract": "BACKGROUND\nFull sharing of the electronic health record with patients has been identified as an important opportunity to engage patients in their health and health care. The My HealtheVet Pilot, the initial personal health record of the US Department of Veterans Affairs, allowed patients and their delegates to view and download content in their electronic health record, including clinical notes, laboratory tests, and imaging reports.\n\n\nOBJECTIVE\nA qualitative study with purposeful sampling sought to examine patients' views and experiences with reading their health records, including their clinical notes, online.\n\n\nMETHODS\nFive focus group sessions were conducted with patients and family members who enrolled in the My HealtheVet Pilot at the Portland Veterans Administration Medical Center, Oregon. A total of 30 patients enrolled in the My HealtheVet Pilot, and 6 family members who had accessed and viewed their electronic health records participated in the sessions.\n\n\nRESULTS\nFour themes characterized patient experiences with reading the full complement of their health information. 
Patients felt that seeing their records positively affected communication with providers and the health system, enhanced knowledge of their health and improved self-care, and allowed for greater participation in the quality of their care such as follow-up of abnormal test results or decision-making on when to seek care. While some patients felt that seeing previously undisclosed information, derogatory language, or inconsistencies in their notes caused challenges, they overwhelmingly felt that having more, rather than less, of their health record information provided benefits.\n\n\nCONCLUSIONS\nPatients and their delegates had predominantly positive experiences with health record transparency and the open sharing of notes and test results. Viewing their records appears to empower patients and enhance their contributions to care, calling into question common provider concerns about the effect of full record access on patient well-being. While shared records may or may not impact overall clinic workload, it is likely to change providers' work, necessitating new types of skills to communicate and partner with patients." }, { "pmid": "14965405", "title": "Patients' experiences when accessing their on-line electronic patient records in primary care.", "abstract": "BACKGROUND\nPatient access to on-line primary care electronic patient records is being developed nationally. Knowledge of what happens when patients access their electronic records is poor.\n\n\nAIM\nTo enable 100 patients to access their electronic records for the first time to elicit patients' views and to understand their requirements.\n\n\nDESIGN OF STUDY\nIn-depth interviews using semi-structured questionnaires as patients accessed their electronic records, plus a series of focus groups.\n\n\nSETTING\nSecure facilities for patients to view their primary care records privately.\n\n\nMETHOD\nOne hundred patients from a randomised group viewed their on-line electronic records for the first time. The questionnaire and focus groups addressed patients' views on the following topics: ease of use; confidentiality and security; consent to access; accuracy; printing records; expectations regarding content; exploitation of electronic records; receiving new information and bad news.\n\n\nRESULTS\nMost patients found the computer technology used acceptable. The majority found viewing their record useful and understood most of the content, although medical terms and abbreviations required explanation. Patients were concerned about security and confidentiality, including potential exploitation of records. They wanted the facility to give informed consent regarding access and use of data. Many found errors, although most were not medically significant. Many expected more detail and more information. Patients wanted to add personal information.\n\n\nCONCLUSION\nPatients have strong views on what they find acceptable regarding access to electronic records. Working in partnership with patients to develop systems is essential to their success. Further work is required to address legal and ethical issues of electronic records and to evaluate their impact on patients, health professionals and service provision." }, { "pmid": "18693866", "title": "Towards consumer-friendly PHRs: patients' experience with reviewing their health records.", "abstract": "Consumer-friendly Personal Health Records (PHRs) have the potential of providing patients with the basis for taking an active role in their healthcare. 
However, few studies focused on the features that make health records comprehensible for lay audiences. This paper presents a survey of patients' experience with reviewing their health records, in order to identify barriers to optimal record use. The data are analyzed via descriptive statistical and thematic analysis. The results point to providers' notes, laboratory test results and radiology reports as the most difficult records sections for lay reviewers. Professional medical terminology, lack of explanations of complex concepts (e.g., lab test ranges) and suboptimal data ordering emerge as the most common comprehension barriers. While most patients today access their records in paper format, electronic PHRs present much more opportunities for providing comprehension support." }, { "pmid": "12923796", "title": "Lay understanding of terms used in cancer consultations.", "abstract": "The study assessed lay understanding of terms used by doctors during cancer consultations. Terms and phrases were selected from 50-videotaped consultations and included in a survey of 105 randomly selected people in a seaside resort. The questionnaire included scenarios containing potentially ambiguous diagnostic/prognostic terms, multiple-choice, comprehension questions and figures on which to locate body organs that could be affected by cancer. Respondents also rated how confident they were about their answers. About half the sample understood euphemisms for the metastatic spread of cancer e.g. 'seedlings' and 'spots in the liver' (44 and 55% respectively). Sixty-three per cent were aware that the term 'metastasis' meant that the cancer was spreading but only 52% understood that the phrase 'the tumour is progressing' was not good news. Yet respondents were fairly confident that they understood these terms. Knowledge of organ location varied. For example, 94% correctly identified the lungs but only 46% located the liver. The findings suggest that a substantial proportion of the lay public do not understand phrases often used in cancer consultations and that knowledge of basic anatomy cannot be assumed. Yet high confidence ratings indicate that asking if patients understand is likely to overestimate comprehension. Awareness of the unfamiliarity of the lay population with cancer-related terms could prompt further explanation in cancer-related consultations." }, { "pmid": "11103725", "title": "Medical communication: do our patients understand?", "abstract": "The objective of this study was to determine emergency department (ED) patient's understanding of common medical terms used by health care providers (HCP). Consecutive patients over 18 years of age having nonurgent conditions were recruited from the EDs of an urban and a suburban hospital between the hours of 7 a.m. and 11 p.m. Patients were asked whether six pairs of terms had the same or different meaning and scored on the number of correct answers (maximum score 6). Multiple linear regression analysis was used to assess possible relationships between test scores and age, sex, hospital site, highest education level, and predicted household income (determined from zip code). Two hundred forty-nine patients (130 men/119 women) ranging in age from 18 to 87 years old (mean = 39.4, SD = 14.9) were enrolled on the study. The mean number of correct responses was 2.8 (SD = 1.2). 
The percentage of patients that did not recognize analogous terms was 79% for bleeding versus hemorrhage, 78% for broken versus fractured bone, 74% for heart attack versus myocardial infarction, and 38% for stitches versus sutures. The percentage that did not recognize nonanalogous terms was 37% for diarrhea versus loose stools, and 10% for cast versus splint. Regression analysis (R2 = .13) revealed a significant positive independent relationship between test score and age (P < .024), education (P < .001), and suburban hospital site (P < .004). Predicted income had a significant relationship with test score (P < .001); however, this was no longer significant when controlled for the confounding influence of age, education and hospital site. Medical terminology is often poorly understood, especially by young, urban, poorly educated patients. Emergency health care providers should remember that even commonly used medical terminology should be carefully explained to their patients." }, { "pmid": "1517087", "title": "Patient on-line access to medical records in general practice.", "abstract": "Many patients want more information about health and the computer offers tremendous potential for interactive patient education. However, patient education and the provision of information to patients will be most effective if it can be tailored to the individual patient by linkage to the medical record. Furthermore the Data Protection Act requires that patients can have access to explained versions of their computer-held medical record. We have examined the practicality and possible benefits of giving patients on-line access to their medical records in general practice. Seventy patients (20 males; 50 females) took part in the study. Sixty five of these used the computer to obtain information. The section on medical history was most popular, with 52 people accessing it. More than one in four of the problems were not understood until the further explanation screen had been seen. One in four also queried items or thought that something was incorrect. Most patients obviously enjoyed the opportunity to use the computer to see their own medical record and talk to the researcher. Many patients commented that because the General Practitioner (GP) didn't have enough time, the computer would be useful. Sixty one (87%) (95% CI: 79-95%) thought the computer easy to use and 59 (84%) would use it again. This is despite the fact that 43 (61%) thought they obtained enough information from their GP. This small study has shown that patients find this computer interface easy to use, and would use the computer to look at explanations of their medical record if it was routinely available.(ABSTRACT TRUNCATED AT 250 WORDS)" }, { "pmid": "20845203", "title": "The literacy divide: health literacy and the use of an internet-based patient portal in an integrated health system-results from the diabetes study of northern California (DISTANCE).", "abstract": "Internet-based patient portals are intended to improve access and quality, and will play an increasingly important role in health care, especially for diabetes and other chronic diseases. Diabetes patients with limited health literacy have worse health outcomes, and limited health literacy may be a barrier to effectively utilizing internet-based health access services. We investigated use of an internet-based patient portal among a well characterized population of adults with diabetes. We estimated health literacy using three validated self-report items. 
We explored the independent association between health literacy and use of the internet-based patient portal, adjusted for age, gender, race/ethnicity, educational attainment, and income. Among 14,102 participants (28% non-Hispanic White, 14% Latino, 21% African-American, 9% Asian, 12% Filipino, and 17% multiracial or other ethnicity), 6099 (62%) reported some limitation in health literacy, and 5671 (40%) respondents completed registration for the patient portal registration. In adjusted analyses, those with limited health literacy had higher odds of never signing on to the patient portal (OR 1.7, 1.4 to 1.9) compared with those who did not report any health literacy limitation. Even among those with internet access, the relationship between health literacy and patient portal use persisted (OR 1.4, 95% CI 1.2 to 1.8). Diabetes patients reporting limited health literacy were less likely to both access and navigate an internet-based patient portal than those with adequate health literacy. Although the internet has potential to greatly expand the capacity and reach of health care systems, current use patterns suggest that, in the absence of participatory design efforts involving those with limited health literacy, those most at risk for poor diabetes health outcomes will fall further behind if health systems increasingly rely on internet-based services." }, { "pmid": "23978618", "title": "Consumers' perceptions of patient-accessible electronic medical records.", "abstract": "BACKGROUND\nElectronic health information (eHealth) tools for patients, including patient-accessible electronic medical records (patient portals), are proliferating in health care delivery systems nationally. However, there has been very limited study of the perceived utility and functionality of portals, as well as limited assessment of these systems by vulnerable (low education level, racial/ethnic minority) consumers.\n\n\nOBJECTIVE\nThe objective of the study was to identify vulnerable consumers' response to patient portals, their perceived utility and value, as well as their reactions to specific portal functions.\n\n\nMETHODS\nThis qualitative study used 4 focus groups with 28 low education level, English-speaking consumers in June and July 2010, in New York City.\n\n\nRESULTS\nParticipants included 10 males and 18 females, ranging in age from 21-63 years; 19 non-Hispanic black, 7 Hispanic, 1 non-Hispanic White and 1 Other. None of the participants had higher than a high school level education, and 13 had less than a high school education. All participants had experience with computers and 26 used the Internet. Major themes were enhanced consumer engagement/patient empowerment, extending the doctor's visit/enhancing communication with health care providers, literacy and health literacy factors, improved prevention and health maintenance, and privacy and security concerns. Consumers were also asked to comment on a number of key portal features. Consumers were most positive about features that increased convenience, such as making appointments and refilling prescriptions. Consumers raised concerns about a number of potential barriers to usage, such as complex language, complex visual layouts, and poor usability features.\n\n\nCONCLUSIONS\nMost consumers were enthusiastic about patient portals and perceived that they had great utility and value. 
Study findings suggest that for patient portals to be effective for all consumers, portals must be designed to be easy to read, visually engaging, and have user-friendly navigation." }, { "pmid": "26681155", "title": "Barriers and Facilitators to Online Portal Use Among Patients and Caregivers in a Safety Net Health Care System: A Qualitative Study.", "abstract": "BACKGROUND\nPatient portals have the potential to support self-management for chronic diseases and improve health outcomes. With the rapid rise in adoption of patient portals spurred by meaningful use incentives among safety net health systems (a health system or hospital providing a significant level of care to low-income, uninsured, and vulnerable populations), it is important to understand the readiness and willingness of patients and caregivers in safety net settings to access their personal health records online.\n\n\nOBJECTIVE\nTo explore patient and caregiver perspectives on online patient portal use before its implementation at San Francisco General Hospital, a safety net hospital.\n\n\nMETHODS\nWe conducted 16 in-depth interviews with chronic disease patients and caregivers who expressed interest in using the Internet to manage their health. Discussions focused on health care experiences, technology use, and interest in using an online portal to manage health tasks. We used open coding to categorize all the barriers and facilitators to portal use, followed by a second round of coding that compared the categories to previously published findings. In secondary analyses, we also examined specific barriers among 2 subgroups: those with limited health literacy and caregivers.\n\n\nRESULTS\nWe interviewed 11 patients and 5 caregivers. Patients were predominantly male (82%, 9/11) and African American (45%, 5/11). All patients had been diagnosed with diabetes and the majority had limited health literacy (73%, 8/11). The majority of caregivers were female (80%, 4/5), African American (60%, 3/5), caregivers of individuals with diabetes (60%, 3/5), and had adequate health literacy (60%, 3/5). A total of 88% (14/16) of participants reported interest in using the portal after viewing a prototype. Major perceived barriers included security concerns, lack of technical skills/interest, and preference for in-person communication. Facilitators to portal use included convenience, health monitoring, and improvements in patient-provider communication. Participants with limited health literacy discussed more fundamental barriers to portal use, including challenges with reading and typing, personal experience with online security breaches/viruses, and distrust of potential security measures. Caregivers expressed high interest in portal use to support their roles in interpreting health information, advocating for quality care, and managing health behaviors and medical care.\n\n\nCONCLUSIONS\nDespite concerns about security, difficulty understanding medical information, and satisfaction with current communication processes, respondents generally expressed enthusiasm about portal use. Our findings suggest a strong need for training and support to assist vulnerable patients with portal registration and use, particularly those with limited health literacy. Efforts to encourage portal use among vulnerable patients should directly address health literacy and security/privacy issues and support access for caregivers." 
}, { "pmid": "27702738", "title": "Health Literacy and Health Information Technology Adoption: The Potential for a New Digital Divide.", "abstract": "BACKGROUND\nApproximately one-half of American adults exhibit low health literacy and thus struggle to find and use health information. Low health literacy is associated with negative outcomes including overall poorer health. Health information technology (HIT) makes health information available directly to patients through electronic tools including patient portals, wearable technology, and mobile apps. The direct availability of this information to patients, however, may be complicated by misunderstanding of HIT privacy and information sharing.\n\n\nOBJECTIVE\nThe purpose of this study was to determine whether health literacy is associated with patients' use of four types of HIT tools: fitness and nutrition apps, activity trackers, and patient portals. Additionally, we sought to explore whether health literacy is associated with patients' perceived ease of use and usefulness of these HIT tools, as well as patients' perceptions of privacy offered by HIT tools and trust in government, media, technology companies, and health care. This study is the first wide-scale investigation of these interrelated concepts.\n\n\nMETHODS\nParticipants were 4974 American adults (n=2102, 42.26% male, n=3146, 63.25% white, average age 43.5, SD 16.7 years). Participants completed the Newest Vital Sign measure of health literacy and indicated their actual use of HIT tools, as well as the perceived ease of use and usefulness of these applications. Participants also answered questions regarding information privacy and institutional trust, as well as demographic items.\n\n\nRESULTS\nCross-tabulation analysis indicated that adequate versus less than adequate health literacy was significantly associated with use of fitness apps (P=.02), nutrition apps (P<.001), activity trackers (P<.001), and patient portals (P<.001). Additionally, greater health literacy was significantly associated with greater perceived ease of use and perceived usefulness across all HIT tools after controlling for demographics. Regarding privacy perceptions of HIT and institutional trust, patients with greater health literacy often demonstrated decreased privacy perceptions for HIT tools including fitness apps (P<.001) and nutrition apps (P<.001). Health literacy was negatively associated with trust in government (P<.001), media (P<.001), and technology companies (P<.001). Interestingly, health literacy score was positively associated with trust in health care (P=.03).\n\n\nCONCLUSIONS\nPatients with low health literacy were less likely to use HIT tools or perceive them as easy or useful, but they perceived information on HIT as private. Given the fast-paced evolution of technology, there is a pressing need to further the understanding of how health literacy is related to HIT app adoption and usage. This will ensure that all users receive the full health benefits from these technological advances, in a manner that protects health information privacy, and that users engage with organizations and providers they trust." }, { "pmid": "20442139", "title": "An overview of MetaMap: historical perspective and recent advances.", "abstract": "MetaMap is a widely available program providing access to the concepts in the unified medical language system (UMLS) Metathesaurus from biomedical text. 
This study reports on MetaMap's evolution over more than a decade, concentrating on those features arising out of the research needs of the biomedical informatics community both within and outside of the National Library of Medicine. Such features include the detection of author-defined acronyms/abbreviations, the ability to browse the Metathesaurus for concepts even tenuously related to input text, the detection of negation in situations in which the polarity of predications is important, word sense disambiguation (WSD), and various technical and algorithmic features. Near-term plans for MetaMap development include the incorporation of chemical name recognition and enhanced WSD." }, { "pmid": "15561782", "title": "Promoting health literacy.", "abstract": "This report reviews some of the extensive literature in health literacy, much of it focused on the intersection of low literacy and the understanding of basic health care information. Several articles describe methods for assessing health literacy as well as methods for assessing the readability of texts, although generally these latter have not been developed with health materials in mind. Other studies have looked more closely at the mismatch between patients' literacy levels and the readability of materials intended for use by those patients. A number of studies have investigated the phenomenon of literacy from the perspective of patients' interactions in the health care setting, the disenfranchisement of some patients because of their low literacy skills, the difficulty some patients have in navigating the health care system, the quality of the communication between doctors and their patients including the cultural overlay of such exchanges, and ultimately the effect of low literacy on health outcomes. Finally, the impact of new information technologies has been studied by a number of investigators. There remain many opportunities for conducting further research to gain a better understanding of the complex interactions between general literacy, health literacy, information technologies, and the existing health care infrastructure." }, { "pmid": "18436895", "title": "Developing informatics tools and strategies for consumer-centered health communication.", "abstract": "As the emphasis on individuals' active partnership in health care grows, so does the public's need for effective, comprehensible consumer health resources. Consumer health informatics has the potential to provide frameworks and strategies for designing effective health communication tools that empower users and improve their health decisions. This article presents an overview of the consumer health informatics field, discusses promising approaches to supporting health communication, and identifies challenges plus direction for future research and development. The authors' recommendations emphasize the need for drawing upon communication and social science theories of information behavior, reaching out to consumers via a range of traditional and novel formats, gaining better understanding of the public's health information needs, and developing informatics solutions for tailoring resources to users' needs and competencies. This article was written as a scholarly outreach and leadership project by members of the American Medical Informatics Association's Consumer Health Informatics Working Group." 
}, { "pmid": "18693956", "title": "Making texts in electronic health records comprehensible to consumers: a prototype translator.", "abstract": "Narrative reports from electronic health records are a major source of content for personal health records. We designed and implemented a prototype text translator to make these reports more comprehensible to consumers. The translator identifies difficult terms, replaces them with easier synonyms, and generates and inserts explanatory texts for them. In feasibility testing, the application was used to translate 9 clinical reports. Majority (68.8%) of text replacements and insertions were deemed correct and helpful by expert review. User evaluation demonstrated a non-statistically significant trend toward better comprehension when translation is provided (p=0.15)." }, { "pmid": "21347002", "title": "A semantic and syntactic text simplification tool for health content.", "abstract": "Text simplification is a challenging NLP task and it is particularly important in the health domain as most health information requires higher reading skills than an average consumer has. This low readability of health content is largely due to the presence of unfamiliar medical terms/concepts and certain syntactic characteristics, such as excessively complex sentences. In this paper, we discuss a simplification tool that was developed to simplify health information. The tool addresses semantic difficulty by substituting difficult terms with easier synonyms or through the use of hierarchically and/or semantically related terms. The tool also simplifies long sentences by splitting them into shorter grammatical sentences. We used the tool to simplify electronic medical records and journal articles and results show that the tool simplifies both document types though by different degrees. A cloze test on the electronic medical records showed a statistically significant improvement in the cloze score from 35.8% to 43.6%." }, { "pmid": "23920650", "title": "Improving patients' electronic health record comprehension with NoteAid.", "abstract": "Allowing patients direct access to their electronic health record (EHR) notes has been shown to enhance medical understanding and may improve healthcare management and outcome. However, EHR notes contain medical terms, shortened forms, complex disease and medication names, and other domain specific jargon that make them difficult for patients to fathom. In this paper, we present a BioNLP system, NoteAid, that automatically recognizes medical concepts and links these concepts with consumer oriented, simplified definitions from external resources. We conducted a pilot evaluation for linking EHR notes through NoteAid to three external knowledge resources: MedlinePlus, the Unified Medical Language System (UMLS), and Wikipedia. Our results show that Wikipedia significantly improves EHR note readability. Preliminary analyses show that MedlinePlus and the UMLS need to improve both content readability and content coverage for consumer health information. A demonstration version of fully functional NoteAid is available at http://clinicalnotesaid.org." 
}, { "pmid": "22357448", "title": "eHealth literacy: extending the digital divide to the realm of health information.", "abstract": "BACKGROUND\neHealth literacy is defined as the ability of people to use emerging information and communications technologies to improve or enable health and health care.\n\n\nOBJECTIVE\nThe goal of this study was to explore whether literacy disparities are diminished or enhanced in the search for health information on the Internet. The study focused on (1) traditional digital divide variables, such as sociodemographic characteristics, digital access, and digital literacy, (2) information search processes, and (3) the outcomes of Internet use for health information purposes.\n\n\nMETHODS\nWe used a countrywide representative random-digital-dial telephone household survey of the Israeli adult population (18 years and older, N = 4286). We measured eHealth literacy; Internet access; digital literacy; sociodemographic factors; perceived health; presence of chronic diseases; as well as health information sources, content, search strategies, and evaluation criteria used by consumers.\n\n\nRESULTS\nRespondents who were highly eHealth literate tended to be younger and more educated than their less eHealth-literate counterparts. They were also more active consumers of all types of information on the Internet, used more search strategies, and scrutinized information more carefully than did the less eHealth-literate respondents. Finally, respondents who were highly eHealth literate gained more positive outcomes from the information search in terms of cognitive, instrumental (self-management of health care needs, health behaviors, and better use of health insurance), and interpersonal (interacting with their physician) gains.\n\n\nCONCLUSIONS\nThe present study documented differences between respondents high and low in eHealth literacy in terms of background attributes, information consumption, and outcomes of the information search. The association of eHealth literacy with background attributes indicates that the Internet reinforces existing social differences. The more comprehensive and sophisticated use of the Internet and the subsequent increased gains among the high eHealth literate create new inequalities in the domain of digital health information. There is a need to educate at-risk and needy groups (eg, chronically ill) and to design technology in a mode befitting more consumers." }, { "pmid": "25953147", "title": "Low health literacy and evaluation of online health information: a systematic review of the literature.", "abstract": "BACKGROUND\nRecent years have witnessed a dramatic increase in consumer online health information seeking. The quality of online health information, however, remains questionable. The issue of information evaluation has become a hot topic, leading to the development of guidelines and checklists to design high-quality online health information. 
However, little attention has been devoted to how consumers, in particular people with low health literacy, evaluate online health information.\n\n\nOBJECTIVE\nThe main aim of this study was to review existing evidence on the association between low health literacy and (1) people's ability to evaluate online health information, (2) perceived quality of online health information, (3) trust in online health information, and (4) use of evaluation criteria for online health information.\n\n\nMETHODS\nFive academic databases (MEDLINE, PsycINFO, Web of Science, CINAHL, and Communication and Mass-media Complete) were systematically searched. We included peer-reviewed publications investigating differences in the evaluation of online information between people with different health literacy levels.\n\n\nRESULTS\nAfter abstract and full-text screening, 38 articles were included in the review. Only four studies investigated the specific role of low health literacy in the evaluation of online health information. The other studies examined the association between educational level or other skills-based proxies for health literacy, such as general literacy, and outcomes. Results indicate that low health literacy (and related skills) are negatively related to the ability to evaluate online health information and trust in online health information. Evidence on the association with perceived quality of online health information and use of evaluation criteria is inconclusive.\n\n\nCONCLUSIONS\nThe findings indicate that low health literacy (and related skills) play a role in the evaluation of online health information. This topic is therefore worth more scholarly attention. Based on the results of this review, future research in this field should (1) specifically focus on health literacy, (2) devote more attention to the identification of the different criteria people use to evaluate online health information, (3) develop shared definitions and measures for the most commonly used outcomes in the field of evaluation of online health information, and (4) assess the relationship between the different evaluative dimensions and the role played by health literacy in shaping their interplay." }, { "pmid": "28630033", "title": "Trust and Credibility in Web-Based Health Information: A Review and Agenda for Future Research.", "abstract": "BACKGROUND\nInternet sources are becoming increasingly important in seeking health information, such that they may have a significant effect on health care decisions and outcomes. Hence, given the wide range of different sources of Web-based health information (WHI) from different organizations and individuals, it is important to understand how information seekers evaluate and select the sources that they use, and more specifically, how they assess their credibility and trustworthiness.\n\n\nOBJECTIVE\nThe aim of this study was to review empirical studies on trust and credibility in the use of WHI. The article seeks to present a profile of the research conducted on trust and credibility in WHI seeking, to identify the factors that impact judgments of trustworthiness and credibility, and to explore the role of demographic factors affecting trust formation. On this basis, it aimed to identify the gaps in current knowledge and to propose an agenda for future research.\n\n\nMETHODS\nA systematic literature review was conducted. 
Searches were conducted using a variety of combinations of the terms WHI, trust, credibility, and their variants in four multi-disciplinary and four health-oriented databases. Articles selected were published in English from 2000 onwards; this process generated 3827 unique records. After the application of the exclusion criteria, 73 were analyzed fully.\n\n\nRESULTS\nInterest in this topic has persisted over the last 15 years, with articles being published in medicine, social science, and computer science and originating mostly from the United States and the United Kingdom. Documents in the final dataset fell into 3 categories: (1) those using trust or credibility as a dependent variable, (2) those using trust or credibility as an independent variable, and (3) studies of the demographic factors that influence the role of trust or credibility in WHI seeking. There is a consensus that website design, clear layout, interactive features, and the authority of the owner have a positive effect on trust or credibility, whereas advertising has a negative effect. With regard to content features, authority of the author, ease of use, and content have a positive effect on trust or credibility formation. Demographic factors influencing trust formation are age, gender, and perceived health status.\n\n\nCONCLUSIONS\nThere is considerable scope for further research. This includes increased clarity of the interaction between the variables associated with health information seeking, increased consistency on the measurement of trust and credibility, a greater focus on specific WHI sources, and enhanced understanding of the impact of demographic variables on trust and credibility judgments." }, { "pmid": "9934385", "title": "Readability levels of patient education material on the World Wide Web.", "abstract": "BACKGROUND\nPatient education is an important component of family practice. Pamphlets, verbal instructions, and physicians' self-created materials have been the most common resources for patient education. Today, however, the popularity of the World Wide Web (Web) as a ready source of educational materials is increasing. The reading skills required by a patient to understand that information has not been determined. The objective of our study was to assess the readability of medical information on the Web that is specifically intended for patients.\n\n\nMETHODS\nAn investigator downloaded 50 sequential samples of patient education material from the Web. This information was then evaluated for readability using the Flesch reading score and Flesch-Kinkaid reading level.\n\n\nRESULTS\nOn average, the patient information from the Web in our sample is written at a 10th grade, 2nd month reading level. Previous studies have shown that this readability level is not comprehensible to the majority of patients.\n\n\nCONCLUSIONS\nMuch of the medical information targeted for the general public on the Web is written at a reading level higher than is easily understood by much of the patient population." 
}, { "pmid": "11368735", "title": "Health information on the Internet: accessibility, quality, and readability in English and Spanish.", "abstract": "CONTEXT\nDespite the substantial amount of health-related information available on the Internet, little is known about the accessibility, quality, and reading grade level of that health information.\n\n\nOBJECTIVE\nTo evaluate health information on breast cancer, depression, obesity, and childhood asthma available through English- and Spanish-language search engines and Web sites.\n\n\nDESIGN AND SETTING\nThree unique studies were performed from July 2000 through December 2000. Accessibility of 14 search engines was assessed using a structured search experiment. Quality of 25 health Web sites and content provided by 1 search engine was evaluated by 34 physicians using structured implicit review (interrater reliability >0.90). The reading grade level of text selected for structured implicit review was established using the Fry Readability Graph method.\n\n\nMAIN OUTCOME MEASURES\nFor the accessibility study, proportion of links leading to relevant content; for quality, coverage and accuracy of key clinical elements; and grade level reading formulas.\n\n\nRESULTS\nLess than one quarter of the search engine's first pages of links led to relevant content (20% of English and 12% of Spanish). On average, 45% of the clinical elements on English- and 22% on Spanish-language Web sites were more than minimally covered and completely accurate and 24% of the clinical elements on English- and 53% on Spanish-language Web sites were not covered at all. All English and 86% of Spanish Web sites required high school level or greater reading ability.\n\n\nCONCLUSION\nAccessing health information using search engines and simple search terms is not efficient. Coverage of key information on English- and Spanish-language Web sites is poor and inconsistent, although the accuracy of the information provided is generally good. High reading levels are required to comprehend Web-based health information." }, { "pmid": "22473833", "title": "Readability assessment of patient education materials from the American Academy of Otolaryngology--Head and Neck Surgery Foundation.", "abstract": "OBJECTIVE\nAmericans are increasingly turning to the Internet as a source of health care information. These online resources should be written at a level readily understood by the average American. This study evaluates the readability of online patient education information available from the American Academy of Otolaryngology--Head and Neck Surgery Foundation (AAO-HNSF) professional Web site using 7 different assessment tools that analyze the materials for reading ease and grade level of its target audience.\n\n\nSTUDY DESIGN AND SETTING\nAnalysis of Internet-based patient education material from the AAO-HNSF Web site.\n\n\nMETHODS\nOnline patient education material from the AAO-HNSF was downloaded in January 2012 and assessed for level of readability using the Flesch Reading Ease, Flesch-Kincaid Grade Level, SMOG grading, Coleman-Liau Index, Gunning-Fog Index, Raygor Readability Estimate graph, and Fry Readability graph. 
The text from each subsection was pasted as plain text into Microsoft Word document, and each subsection was subjected to readability analysis using the software package Readability Studio Professional Edition Version 2012.1.\n\n\nRESULTS\nAll health care education material assessed is written between an 11th grade and graduate reading level and is considered \"difficult to read\" by the assessment scales.\n\n\nCONCLUSIONS\nOnline patient education materials on the AAO-HNSF Web site are written above the recommended 6th grade level and may need to be revised to make them more easily understood by a broader audience." }, { "pmid": "16221948", "title": "Exploring and developing consumer health vocabularies.", "abstract": "Laypersons (\"consumers\") often have difficulty finding, understanding, and acting on health information due to gaps in their domain knowledge. Ideally, consumer health vocabularies (CHVs) would reflect the different ways consumers express and think about health topics, helping to bridge this vocabulary gap. However, despite the recent research on mismatches between consumer and professional language (e.g., lexical, semantic, and explanatory), there have been few systematic efforts to develop and evaluate CHVs. This paper presents the point of view that CHV development is practical and necessary for extending research on informatics-based tools to facilitate consumer health information seeking, retrieval, and understanding. In support of the view, we briefly describe a distributed, bottom-up approach for (1) exploring the relationship between common consumer health expressions and professional concepts and (2) developing an open-access, preliminary (draft) \"first-generation\" CHV. While recognizing the limitations of the approach (e.g., not addressing psychosocial and cultural factors), we suggest that such exploratory research and development will yield insights into the nature of consumer health expressions and assist developers in creating tools and applications to support consumer health information seeking." }, { "pmid": "17478413", "title": "Term identification methods for consumer health vocabulary development.", "abstract": "BACKGROUND\nThe development of consumer health information applications such as health education websites has motivated the research on consumer health vocabulary (CHV). Term identification is a critical task in vocabulary development. Because of the heterogeneity and ambiguity of consumer expressions, term identification for CHV is more challenging than for professional health vocabularies.\n\n\nOBJECTIVE\nFor the development of a CHV, we explored several term identification methods, including collaborative human review and automated term recognition methods.\n\n\nMETHODS\nA set of criteria was established to ensure consistency in the collaborative review, which analyzed 1893 strings. Using the results from the human review, we tested two automated methods-C-value formula and a logistic regression model.\n\n\nRESULTS\nThe study identified 753 consumer terms and found the logistic regression model to be highly effective for CHV term identification (area under the receiver operating characteristic curve = 95.5%).\n\n\nCONCLUSIONS\nThe collaborative human review and logistic regression methods were effective for identifying terms for CHV development." 
}, { "pmid": "21586386", "title": "Computer-assisted update of a consumer health vocabulary through mining of social network data.", "abstract": "BACKGROUND\nConsumer health vocabularies (CHVs) have been developed to aid consumer health informatics applications. This purpose is best served if the vocabulary evolves with consumers' language.\n\n\nOBJECTIVE\nOur objective was to create a computer assisted update (CAU) system that works with live corpora to identify new candidate terms for inclusion in the open access and collaborative (OAC) CHV.\n\n\nMETHODS\nThe CAU system consisted of three main parts: a Web crawler and an HTML parser, a candidate term filter that utilizes natural language processing tools including term recognition methods, and a human review interface. In evaluation, the CAU system was applied to the health-related social network website PatientsLikeMe.com. The system's utility was assessed by comparing the candidate term list it generated to a list of valid terms hand extracted from the text of the crawled webpages.\n\n\nRESULTS\nThe CAU system identified 88,994 unique terms 1- to 7-grams (\"n-grams\" are n consecutive words within a sentence) in 300 crawled PatientsLikeMe.com webpages. The manual review of the crawled webpages identified 651 valid terms not yet included in the OAC CHV or the Unified Medical Language System (UMLS) Metathesaurus, a collection of vocabularies amalgamated to form an ontology of medical terms, (ie, 1 valid term per 136.7 candidate n-grams). The term filter selected 774 candidate terms, of which 237 were valid terms, that is, 1 valid term among every 3 or 4 candidates reviewed.\n\n\nCONCLUSION\nThe CAU system is effective for generating a list of candidate terms for human review during CHV development." }, { "pmid": "27671202", "title": "Expansion of medical vocabularies using distributional semantics on Japanese patient blogs.", "abstract": "BACKGROUND\nResearch on medical vocabulary expansion from large corpora has primarily been conducted using text written in English or similar languages, due to a limited availability of large biomedical corpora in most languages. Medical vocabularies are, however, essential also for text mining from corpora written in other languages than English and belonging to a variety of medical genres. The aim of this study was therefore to evaluate medical vocabulary expansion using a corpus very different from those previously used, in terms of grammar and orthographics, as well as in terms of text genre. This was carried out by applying a method based on distributional semantics to the task of extracting medical vocabulary terms from a large corpus of Japanese patient blogs.\n\n\nMETHODS\nDistributional properties of terms were modelled with random indexing, followed by agglomerative hierarchical clustering of 3 ×100 seed terms from existing vocabularies, belonging to three semantic categories: Medical Finding, Pharmaceutical Drug and Body Part. By automatically extracting unknown terms close to the centroids of the created clusters, candidates for new terms to include in the vocabulary were suggested. The method was evaluated for its ability to retrieve the remaining n terms in existing medical vocabularies.\n\n\nRESULTS\nRemoving case particles and using a context window size of 1+1 was a successful strategy for Medical Finding and Pharmaceutical Drug, while retaining case particles and using a window size of 8+8 was better for Body Part. 
For a 10n long candidate list, the use of different cluster sizes affected the result for Pharmaceutical Drug, while the effect was only marginal for the other two categories. For a list of top n candidates for Body Part, however, clusters with a size of up to two terms were slightly more useful than larger clusters. For Pharmaceutical Drug, the best settings resulted in a recall of 25 % for a candidate list of top n terms and a recall of 68 % for top 10n. For a candidate list of top 10n candidates, the second best results were obtained for Medical Finding: a recall of 58 %, compared to 46 % for Body Part. Only taking the top n candidates into account, however, resulted in a recall of 23 % for Body Part, compared to 16 % for Medical Finding.\n\n\nCONCLUSIONS\nDifferent settings for corpus pre-processing, window sizes and cluster sizes were suitable for different semantic categories and for different lengths of candidate lists, showing the need to adapt parameters, not only to the language and text genre used, but also to the semantic category for which the vocabulary is to be expanded. The results show, however, that the investigated choices for pre-processing and parameter settings were successful, and that a Japanese blog corpus, which in many ways differs from those used in previous studies, can be a useful resource for medical vocabulary expansion." }, { "pmid": "24551362", "title": "Identifying synonymy between SNOMED clinical terms of varying length using distributional analysis of electronic health records.", "abstract": "Medical terminologies and ontologies are important tools for natural language processing of health record narratives. To account for the variability of language use, synonyms need to be stored in a semantic resource as textual instantiations of a concept. Developing such resources manually is, however, prohibitively expensive and likely to result in low coverage. To facilitate and expedite the process of lexical resource development, distributional analysis of large corpora provides a powerful data-driven means of (semi-)automatically identifying semantic relations, including synonymy, between terms. In this paper, we demonstrate how distributional analysis of a large corpus of electronic health records - the MIMIC-II database - can be employed to extract synonyms of SNOMED CT preferred terms. A distinctive feature of our method is its ability to identify synonymous relations between terms of varying length." }, { "pmid": "27903489", "title": "Finding Important Terms for Patients in Their Electronic Health Records: A Learning-to-Rank Approach Using Expert Annotations.", "abstract": "BACKGROUND\nMany health organizations allow patients to access their own electronic health record (EHR) notes through online patient portals as a way to enhance patient-centered care. However, EHR notes are typically long and contain abundant medical jargon that can be difficult for patients to understand. In addition, many medical terms in patients' notes are not directly related to their health care needs. One way to help patients better comprehend their own notes is to reduce information overload and help them focus on medical terms that matter most to them. 
Interventions can then be developed by giving them targeted education to improve their EHR comprehension and the quality of care.\n\n\nOBJECTIVE\nWe aimed to develop a supervised natural language processing (NLP) system called Finding impOrtant medical Concepts most Useful to patientS (FOCUS) that automatically identifies and ranks medical terms in EHR notes based on their importance to the patients.\n\n\nMETHODS\nFirst, we built an expert-annotated corpus. For each EHR note, 2 physicians independently identified medical terms important to the patient. Using the physicians' agreement as the gold standard, we developed and evaluated FOCUS. FOCUS first identifies candidate terms from each EHR note using MetaMap and then ranks the terms using a support vector machine-based learn-to-rank algorithm. We explored rich learning features, including distributed word representation, Unified Medical Language System semantic type, topic features, and features derived from consumer health vocabulary. We compared FOCUS with 2 strong baseline NLP systems.\n\n\nRESULTS\nPhysicians annotated 90 EHR notes and identified a mean of 9 (SD 5) important terms per note. The Cohen's kappa annotation agreement was .51. The 10-fold cross-validation results show that FOCUS achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.940 for ranking candidate terms from EHR notes to identify important terms. When including term identification, the performance of FOCUS for identifying important terms from EHR notes was 0.866 AUC-ROC. Both performance scores significantly exceeded the corresponding baseline system scores (P<.001). Rich learning features contributed to FOCUS's performance substantially.\n\n\nCONCLUSIONS\nFOCUS can automatically rank terms from EHR notes based on their importance to patients. It may help develop future interventions that improve quality of care." }, { "pmid": "28267590", "title": "Unsupervised ensemble ranking of terms in electronic health record notes based on their importance to patients.", "abstract": "BACKGROUND\nAllowing patients to access their own electronic health record (EHR) notes through online patient portals has the potential to improve patient-centered care. However, EHR notes contain abundant medical jargon that can be difficult for patients to comprehend. One way to help patients is to reduce information overload and help them focus on medical terms that matter most to them. Targeted education can then be developed to improve patient EHR comprehension and the quality of care.\n\n\nOBJECTIVE\nThe aim of this work was to develop FIT (Finding Important Terms for patients), an unsupervised natural language processing (NLP) system that ranks medical terms in EHR notes based on their importance to patients.\n\n\nMETHODS\nWe built FIT on a new unsupervised ensemble ranking model derived from the biased random walk algorithm to combine heterogeneous information resources for ranking candidate terms from each EHR note. Specifically, FIT integrates four single views (rankers) for term importance: patient use of medical concepts, document-level term salience, word co-occurrence based term relatedness, and topic coherence. It also incorporates partial information of term importance as conveyed by terms' unfamiliarity levels and semantic types. We evaluated FIT on 90 expert-annotated EHR notes and used the four single-view rankers as baselines. 
In addition, we implemented three benchmark unsupervised ensemble ranking methods as strong baselines.\n\n\nRESULTS\nFIT achieved 0.885 AUC-ROC for ranking candidate terms from EHR notes to identify important terms. When including term identification, the performance of FIT for identifying important terms from EHR notes was 0.813 AUC-ROC. Both performance scores significantly exceeded the corresponding scores from the four single rankers (P<0.001). FIT also outperformed the three ensemble rankers for most metrics. Its performance is relatively insensitive to its parameter.\n\n\nCONCLUSIONS\nFIT can automatically identify EHR terms important to patients. It may help develop future interventions to improve quality of care. By using unsupervised learning as well as a robust and flexible framework for information fusion, FIT can be readily applied to other domains and applications." }, { "pmid": "10786289", "title": "Constructing biological knowledge bases by extracting information from text sources.", "abstract": "Recently, there has been much effort in making databases for molecular biology more accessible and interoperable. However, information in text form, such as MEDLINE records, remains a greatly underutilized source of biological information. We have begun a research effort aimed at automatically mapping information from text sources into structured representations, such as knowledge bases. Our approach to this task is to use machine-learning methods to induce routines for extracting facts from text. We describe two learning methods that we have applied to this task--a statistical text classification method, and a relational learning method--and our initial experiments in learning such information-extraction routines. We also present an approach to decreasing the cost of learning information-extraction routines by learning from \"weakly\" labeled training data." }, { "pmid": "15542014", "title": "Gene name identification and normalization using a model organism database.", "abstract": "Biology has now become an information science, and researchers are increasingly dependent on expert-curated biological databases to organize the findings from the published literature. We report here on a series of experiments related to the application of natural language processing to aid in the curation process for FlyBase. We focused on listing the normalized form of genes and gene products discussed in an article. We broke this into two steps: gene mention tagging in text, followed by normalization of gene names. For gene mention tagging, we adopted a statistical approach. To provide training data, we were able to reverse engineer the gene lists from the associated articles and abstracts, to generate text labeled (imperfectly) with gene mentions. We then evaluated the quality of the noisy training data (precision of 78%, recall 88%) and the quality of the HMM tagger output trained on this noisy data (precision 78%, recall 71%). In order to generate normalized gene lists, we explored two approaches. First, we explored simple pattern matching based on synonym lists to obtain a high recall/low precision system (recall 95%, precision 2%). Using a series of filters, we were able to improve precision to 50% with a recall of 72% (balanced F-measure of 0.59). Our second approach combined the HMM gene mention tagger with various filters to remove ambiguous mentions; this approach achieved an F-measure of 0.72 (precision 88%, recall 61%). 
These experiments indicate that the lexical resources provided by FlyBase are complete enough to achieve high recall on the gene list task, and that normalization requires accurate disambiguation; different strategies for tagging and normalization trade off recall for precision." }, { "pmid": "22174293", "title": "The extraction of pharmacogenetic and pharmacogenomic relations--a case study using PharmGKB.", "abstract": "In this paper, we report on adapting the JREX relation extraction engine, originally developed For the elicitation of protein-protein interaction relations, to the domains of pharmacogenetics and pharmacogenomics. We propose an intrinsic and an extrinsic evaluation scenario which is based on knowledge contained in the PharmGKB knowledge base. Porting JREX yields favorable results in the range of 80% F-score for Gene-Disease, Gene-Drug, and Drug-Disease relations." }, { "pmid": "23486109", "title": "Improving performance of natural language processing part-of-speech tagging on clinical narratives through domain adaptation.", "abstract": "OBJECTIVE\nNatural language processing (NLP) tasks are commonly decomposed into subtasks, chained together to form processing pipelines. The residual error produced in these subtasks propagates, adversely affecting the end objectives. Limited availability of annotated clinical data remains a barrier to reaching state-of-the-art operating characteristics using statistically based NLP tools in the clinical domain. Here we explore the unique linguistic constructions of clinical texts and demonstrate the loss in operating characteristics when out-of-the-box part-of-speech (POS) tagging tools are applied to the clinical domain. We test a domain adaptation approach integrating a novel lexical-generation probability rule used in a transformation-based learner to boost POS performance on clinical narratives.\n\n\nMETHODS\nTwo target corpora from independent healthcare institutions were constructed from high frequency clinical narratives. Four leading POS taggers with their out-of-the-box models trained from general English and biomedical abstracts were evaluated against these clinical corpora. A high performing domain adaptation method, Easy Adapt, was compared to our newly proposed method ClinAdapt.\n\n\nRESULTS\nThe evaluated POS taggers drop in accuracy by 8.5-15% when tested on clinical narratives. The highest performing tagger reports an accuracy of 88.6%. Domain adaptation with Easy Adapt reports accuracies of 88.3-91.0% on clinical texts. ClinAdapt reports 93.2-93.9%.\n\n\nCONCLUSIONS\nClinAdapt successfully boosts POS tagging performance through domain adaptation requiring a modest amount of annotated clinical data. Improving the performance of critical NLP subtasks is expected to reduce pipeline error propagation leading to better overall results on complex processing tasks." }, { "pmid": "20179074", "title": "Domain adaptation for semantic role labeling in the biomedical domain.", "abstract": "MOTIVATION\nSemantic role labeling (SRL) is a natural language processing (NLP) task that extracts a shallow meaning representation from free text sentences. Several efforts to create SRL systems for the biomedical domain have been made during the last few years. However, state-of-the-art SRL relies on manually annotated training instances, which are rare and expensive to prepare. 
In this article, we address SRL for the biomedical domain as a domain adaptation problem to leverage existing SRL resources from the newswire domain.\n\n\nRESULTS\nWe evaluate the performance of three recently proposed domain adaptation algorithms for SRL. Our results show that by using domain adaptation, the cost of developing an SRL system for the biomedical domain can be reduced significantly. Using domain adaptation, our system can achieve 97% of the performance with as little as 60 annotated target domain abstracts.\n\n\nAVAILABILITY\nOur BioKIT system that performs SRL in the biomedical domain as described in this article is implemented in Python and C and operates under the Linux operating system. BioKIT can be downloaded at http://nlp.comp.nus.edu.sg/software. The domain adaptation software is available for download at http://www.mysmu.edu/faculty/jingjiang/software/DALR.html. The BioProp corpus is available from the Linguistic Data Consortium http://www.ldc.upenn.edu." }, { "pmid": "26063745", "title": "Domain adaptation for semantic role labeling of clinical text.", "abstract": "OBJECTIVE\nSemantic role labeling (SRL), which extracts a shallow semantic relation representation from different surface textual forms of free text sentences, is important for understanding natural language. Few studies in SRL have been conducted in the medical domain, primarily due to lack of annotated clinical SRL corpora, which are time-consuming and costly to build. The goal of this study is to investigate domain adaptation techniques for clinical SRL leveraging resources built from newswire and biomedical literature to improve performance and save annotation costs.\n\n\nMATERIALS AND METHODS\nMultisource Integrated Platform for Answering Clinical Questions (MiPACQ), a manually annotated SRL clinical corpus, was used as the target domain dataset. PropBank and NomBank from newswire and BioProp from biomedical literature were used as source domain datasets. Three state-of-the-art domain adaptation algorithms were employed: instance pruning, transfer self-training, and feature augmentation. The SRL performance using different domain adaptation algorithms was evaluated by using 10-fold cross-validation on the MiPACQ corpus. Learning curves for the different methods were generated to assess the effect of sample size.\n\n\nRESULTS AND CONCLUSION\nWhen all three source domain corpora were used, the feature augmentation algorithm achieved statistically significant higher F-measure (83.18%), compared to the baseline with MiPACQ dataset alone (F-measure, 81.53%), indicating that domain adaptation algorithms may improve SRL performance on clinical text. To achieve a comparable performance to the baseline method that used 90% of MiPACQ training samples, the feature augmentation algorithm required <50% of training samples in MiPACQ, demonstrating that annotation costs of clinical SRL can be reduced significantly by leveraging existing SRL resources from other domains." }, { "pmid": "21925286", "title": "Building an automated SOAP classifier for emergency department reports.", "abstract": "Information extraction applications that extract structured event and entity information from unstructured text can leverage knowledge of clinical report structure to improve performance. The Subjective, Objective, Assessment, Plan (SOAP) framework, used to structure progress notes to facilitate problem-specific, clinical decision making by physicians, is one example of a well-known, canonical structure in the medical domain. 
Although its applicability to structuring data is understood, its contribution to information extraction tasks has not yet been determined. The first step to evaluating the SOAP framework's usefulness for clinical information extraction is to apply the model to clinical narratives and develop an automated SOAP classifier that classifies sentences from clinical reports. In this quantitative study, we applied the SOAP framework to sentences from emergency department reports, and trained and evaluated SOAP classifiers built with various linguistic features. We found the SOAP framework can be applied manually to emergency department reports with high agreement (Cohen's kappa coefficients over 0.70). Using a variety of features, we found classifiers for each SOAP class can be created with moderate to outstanding performance with F(1) scores of 93.9 (subjective), 94.5 (objective), 75.7 (assessment), and 77.0 (plan). We look forward to expanding the framework and applying the SOAP classification to clinical information extraction tasks." }, { "pmid": "10566330", "title": "Terminology issues in user access to Web-based medical information.", "abstract": "We conducted a study of user queries to the National Library of Medicine Web site over a three month period. Our purpose was to study the nature and scope of these queries in order to understand how to improve users' access to the information they are seeking on our site. The results show that the queries are primarily medical in content (94%), with only a small percentage (5.5%) relating to library services, and with a very small percentage (.5%) not being medically relevant at all. We characterize the data set, and conclude with a discussion of our plans to develop a UMLS-based terminology server to assist NLM Web users." }, { "pmid": "11604772", "title": "Patient and clinician vocabulary: how different are they?", "abstract": "Consumers and patients are confronted with a plethora of health care information, especially through the proliferation of web content resources. Democratization of the web is an important milestone for patients and consumers since it helps to empower them, make them better advocates on their own behalf and foster better, more-informed decisions about their health. Yet lack of familiarity with medical vocabulary is a major problem for patients in accessing the available information. As a first step to providing better vocabulary support for patients, this study collected and analyzed patient and clinician terms to confirm and quantitatively assess their differences. We also analyzed the information retrieval (IR) performance resulting from these terms. The results showed that patient terminology does differ from clinician terminology in many respects including misspelling rate, mapping rate and semantic type distribution, and patient terms lead to poorer results in information retrieval." }, { "pmid": "11720966", "title": "Evaluation of controlled vocabulary resources for development of a consumer entry vocabulary for diabetes.", "abstract": "BACKGROUND\nDigital information technology can facilitate informed decision making by individuals regarding their personal health care. The digital divide separates those who do and those who do not have access to or otherwise make use of digital information. 
To close the digital divide, health care communications research must address a fundamental issue, the consumer vocabulary problem: consumers of health care, at least those who are laypersons, are not always familiar with the professional vocabulary and concepts used by providers of health care and by providers of health care information, and, conversely, health care and health care information providers are not always familiar with the vocabulary and concepts used by consumers. One way to address this problem is to develop a consumer entry vocabulary for health care communications.\n\n\nOBJECTIVES\nTo evaluate the potential of controlled vocabulary resources for supporting the development of consumer entry vocabulary for diabetes.\n\n\nMETHODS\nWe used folk medical terms from the Dictionary of American Regional English project to create extended versions of 3 controlled vocabulary resources: the Unified Medical Language System Metathesaurus, the Eurodicautom of the European Commission's Translation Service, and the European Commission Glossary of popular and technical medical terms. We extracted consumer terms from consumer-authored materials, and physician terms from physician-authored materials. We used our extended versions of the vocabulary resources to link diabetes-related terms used by health care consumers to synonymous, nearly-synonymous, or closely-related terms used by family physicians. We also examined whether retrieval of diabetes-related World Wide Web information sites maintained by nonprofit health care professional organizations, academic organizations, or governmental organizations can be improved by substituting a physician term for its related consumer term in the query.\n\n\nRESULTS\nThe Dictionary of American Regional English extension of the Metathesaurus provided coverage, either direct or indirect, of approximately 23% of the natural language consumer-term-physician-term pairs. The Dictionary of American Regional English extension of the Eurodicautom provided coverage for 16% of the term pairs. Both the Metathesaurus and the Eurodicautom indirectly related more terms than they directly related. A high percentage of covered term pairs, with more indirectly covered pairs than directly covered pairs, might be one way to make the most out of expensive controlled vocabulary resources. We compared retrieval of diabetes-related Web information sites using the physician terms to retrieval using related consumer terms We based the comparison on retrieval of sites maintained by non-profit healthcare professional organizations, academic organizations, or governmental organizations. The number of such sites in the first 20 Results from a search was increased by substituting a physician term for its related consumer term in the query. This suggests that the Dictionary of American Regional English extensions of the Metathesaurus and Eurodicautom may be used to provide useful links from natural language consumer terms to natural language physician terms.\n\n\nCONCLUSIONS\nThe Dictionary of American Regional English extensions of the Metathesaurus and Eurodicautom should be investigated further for support of consumer entry vocabulary for diabetes." 
}, { "pmid": "12425240", "title": "Characteristics of consumer terminology for health information retrieval.", "abstract": "OBJECTIVES\nAs millions of consumers perform health information retrieval online, the mismatch between their terminology and the terminologies of the information sources could become a major barrier to successful retrievals. To address this problem, we studied the characteristics of consumer terminology for health information retrieval.\n\n\nMETHODS\nOur study focused on consumer queries that were used on a consumer health service Web site and a consumer health information Web site. We analyzed data from the site-usage logs and conducted interviews with patients.\n\n\nRESULTS\nOur findings show that consumers' information retrieval performance is very poor. There are significant mismatches at all levels (lexical, semantic and mental models) between the consumer terminology and both the information source terminology and standard medical vocabularies.\n\n\nCONCLUSIONS\nComprehensive terminology support on all levels is needed for consumer health information retrieval." }, { "pmid": "16779162", "title": "Identifying consumer-friendly display (CFD) names for health concepts.", "abstract": "We have developed a systematic methodology using corpus-based text analysis followed by human review to assign \"consumer-friendly display (CFD) names\" to medical concepts from the National Library of Medicine (NLM) Unified Medical Language System (UMLS) Metathesaurus. Using NLM MedlinePlus queries as a corpus of consumer expressions and a collaborative Web-based tool to facilitate review, we analyzed 425 frequently occurring concepts. As a preliminary test of our method, we evaluated 34 ana-lyzed concepts and their CFD names, using a questionnaire modeled on standard reading assessments. The initial results that consumers (n=10) are more likely to understand and recognize CFD names than alternate labels suggest that the approach is useful in the development of consumer health vocabularies for displaying understandable health information." }, { "pmid": "18436906", "title": "Consumer health concepts that do not map to the UMLS: where do they fit?", "abstract": "OBJECTIVE\nThis study has two objectives: first, to identify and characterize consumer health terms not found in the Unified Medical Language System (UMLS) Metathesaurus (2007 AB); second, to describe the procedure for creating new concepts in the process of building a consumer health vocabulary. How do the unmapped consumer health concepts relate to the existing UMLS concepts? What is the place of these new concepts in professional medical discourse?\n\n\nDESIGN\nThe consumer health terms were extracted from two large corpora derived in the process of Open Access Collaboratory Consumer Health Vocabulary (OAC CHV) building. Terms that could not be mapped to existing UMLS concepts via machine and manual methods prompted creation of new concepts, which were then ascribed semantic types, related to existing UMLS concepts, and coded according to specified criteria.\n\n\nRESULTS\nThis approach identified 64 unmapped concepts, 17 of which were labeled as uniquely \"lay\" and not feasible for inclusion in professional health terminologies. The remaining terms constituted potential candidates for inclusion in professional vocabularies, or could be constructed by post-coordinating existing UMLS terms. 
The relationship between new and existing concepts differed depending on the corpora from which they were extracted.\n\n\nCONCLUSION\nNon-mapping concepts constitute a small proportion of consumer health terms, but a proportion that is likely to affect the process of consumer health vocabulary building. We have identified a novel approach for identifying such concepts." }, { "pmid": "14728258", "title": "Exploring medical expressions used by consumers and the media: an emerging view of consumer health vocabularies.", "abstract": "Healthcare consumers often have difficulty expressing and understanding medical concepts. The goal of this study is to identify and characterize medical expressions or \"terms\" (linguistic forms and associated concepts) used by consumers and health mediators. In particular, these terms were characterized according to the degree to which they mapped to professional medical vocabularies. Lay participants identified approximately 100,000 term tokens from online discussion forum postings and print media articles. Of the over 81,000 extracted term tokens reviewed, more than 75% were mapped as synonyms or quasi-synonyms to the Unified Medical Language System (UMLS) Metathesaurus. While 80% conceptual overlap was found between closely mapped lay (consumer and mediator) and technical (professional) medical terms, about half of these overlapping concepts contained lay forms different from technical forms. This study raises questions about the nature of consumer health vocabularies that we believe have theoretical and practical implications for bridging the medical vocabulary gap between consumers and professionals." }, { "pmid": "21163965", "title": "Quantitative analysis of culture using millions of digitized books.", "abstract": "We constructed a corpus of digitized texts containing about 4% of all books ever printed. Analysis of this corpus enables us to investigate cultural trends quantitatively. We survey the vast terrain of 'culturomics,' focusing on linguistic and cultural phenomena that were reflected in the English language between 1800 and 2000. We show how this approach can provide insights about fields as diverse as lexicography, the evolution of grammar, collective memory, the adoption of technology, the pursuit of fame, censorship, and historical epidemiology. Culturomics extends the boundaries of rigorous quantitative inquiry to a wide array of new phenomena spanning the social sciences and the humanities." }, { "pmid": "17238339", "title": "Comprehending technical texts: predicting and defining unfamiliar terms.", "abstract": "We investigate how to improve access to medical literature for health consumers. Our focus is on medical terminology. We present a method to predict automatically in a given text which medical terms are unlikely to be understood by a lay reader. Our method, which is linguistically motivated and fully unsupervised, relies on how common a specific term is in texts that we already know are familiar to a lay reader. Once a term is identified as unfamiliar, an appropriate definition is mined from the Web to be provided to the reader. Our experiments show that the prediction and the addition of definitions significantly improve lay readers' comprehension of sentences containing technical medical terms." } ]
GigaScience
29048555
PMC5691353
10.1093/gigascience/gix099
An architecture for genomics analysis in a clinical setting using Galaxy and Docker
Abstract
Next-generation sequencing is used on a daily basis to perform molecular analyses that determine disease subtypes (e.g., in cancer) and assist in selecting the optimal treatment. Clinical bioinformatics handles the manipulation of the data produced by the sequencer, from generation through analysis and interpretation. Reproducibility and traceability are crucial issues in a clinical setting. We have designed an approach based on Docker container technology and Galaxy, the popular open-source bioinformatics analysis platform. Our solution simplifies the deployment of a small-scale analytical platform and streamlines the process for the clinician. From a technical point of view, the tools embedded in the platform are isolated and versioned through Docker images. Alongside the Galaxy platform, we also introduce the AnalysisManager, a component that enables single-click analyses for biologists and leverages standardized bioinformatics application programming interfaces. We added a Shiny/R interactive environment to ease the visualization of the outputs. The platform relies on containers and ensures data traceability by recording analytical actions and by associating the inputs and outputs of the tools with the EDAM ontology through ReGaTe. The source code is freely available on GitHub at https://github.com/CARPEM/GalaxyDocker.
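As a concrete illustration of how such single-click analyses can be driven through the standardized Galaxy API, the sketch below uses the BioBlend Python client to upload a sample file, run a tool in a fresh history, and download the result. This is a minimal sketch only: the Galaxy URL, API key, tool id, and parameter names are hypothetical placeholders, and the paper's actual AnalysisManager may orchestrate these calls differently.

```python
from bioblend.galaxy import GalaxyInstance

# Connect to a (hypothetical) Galaxy instance with a user API key.
gi = GalaxyInstance(url="https://galaxy.example.org", key="YOUR_API_KEY")

# Each analysis gets its own history, which keeps inputs and outputs traceable.
history = gi.histories.create_history(name="patient-123-targeted-panel")

# Upload the input file produced by the sequencing pipeline.
upload = gi.tools.upload_file("sample.vcf", history["id"])
input_id = upload["outputs"][0]["id"]

# Launch a (hypothetical) containerized annotation tool with fixed parameters,
# mimicking what a single click in the AnalysisManager could trigger.
run = gi.tools.run_tool(
    history_id=history["id"],
    tool_id="variant_annotator",                      # hypothetical tool id
    tool_inputs={"input_vcf": {"src": "hda", "id": input_id},
                 "min_depth": "30"},                  # hypothetical parameter
)

# Wait for the output dataset, then download it for the biologist or clinician.
output_id = run["outputs"][0]["id"]
gi.datasets.wait_for_dataset(output_id)
gi.datasets.download_dataset(output_id, file_path="annotated.vcf",
                             use_default_filename=False)
```

Because every action goes through the Galaxy API, each run is recorded in a dedicated history, which is one way the inputs, outputs, and tool versions remain traceable.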
Related works
Workflow management systems
The development of high-throughput methods in molecular biology has considerably increased the volume of molecular data produced daily by biologists. Many analytical scripts and software packages have been developed to assist biologists and clinicians in their tasks. Commercial and open-source solutions have emerged that allow the user to combine analytical tools and build pipelines through graphical interfaces. In addition, workflow management systems (such as Taverna [12], Galaxy [11], SnakeMake [13], NextFlow [14]) also ensure the traceability and reproducibility of the analytical process. However, efficient use of a workflow management system remains limited to trained bioinformaticians.
Docker and Galaxy
Docker provides a standard way to supply ready-to-use applications, and it is becoming a common way to share work [15–22]. Aranguren and Wilkinson [23] argue that reproducibility can be implemented at two levels: (1) at the Docker container level, where encapsulating a tool with all of its dependencies ensures the sustainability, traceability, and reproducibility of the tool (see the sketch following this section); and (2) at the workflow level, where reproducibility is ensured by the Galaxy workflow definition. They developed a containerized Galaxy Docker platform in the context of the OpenLifeData2SADI research project. Kuenzi et al. [16] distribute a Galaxy Docker container hosting APOSTL, a tool suite dedicated to the proteomics analysis of mass spectrometry data, and they implemented R/Shiny [24] environments inside Galaxy. Grüning et al. [17] provide a standard Dockerized Galaxy application that can be extended into many flavors [18, 25, 26]; several such Dockerized Galaxy applications already exist, e.g., deepTools2 [18]. The integration of new tools into Galaxy can be simplified by applications that generate configuration files that are near-ready for integration [27, 28]. We propose a similar tool (the DockerTools2Galaxy script) dedicated to Dockerized tools.
In this article, we present our architecture for deploying a bioinformatics platform in a clinical setting, leveraging the widely used bioinformatics workflow management system Galaxy, Docker container technology, standardized bioinformatics application programming interfaces (APIs), and graphical interfaces developed in R Shiny.
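To make the container-level reproducibility idea discussed above concrete, the following sketch uses the Docker SDK for Python to run a version-pinned tool image against a read-only data mount, so that every execution uses exactly the same binaries and dependencies. The image name, tag, command, and paths are illustrative assumptions, not the platform's actual configuration.

```python
import docker

# Minimal sketch of container-level reproducibility: always run the tool from a
# version-pinned image (never "latest"), with the input data mounted read-only.
client = docker.from_env()

logs = client.containers.run(
    image="my-registry.example.org/samtools:1.9",  # illustrative; pin a tag or digest in practice
    command="samtools flagstat /data/sample.bam",
    volumes={"/srv/runs/patient-123": {"bind": "/data", "mode": "ro"}},
    remove=True,   # discard the container afterwards; the image stays cached
)
print(logs.decode())
```

Pinning the image tag (or, more strictly, its digest) is what allows the same analysis to be re-run later with identical tool versions, which is the property a clinical platform needs for traceability.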
PLoS Computational Biology
29131816
PMC5703574
10.1371/journal.pcbi.1005857
Automated visualization of rule-based models
Frameworks such as BioNetGen, Kappa and Simmune use “reaction rules” to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models.
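The abstract describes the atom-rule (AR) graph as a bipartite network of structural features ("atoms") and reaction rules. The following is a minimal, illustrative Python sketch of that data structure, assuming the networkx library and a toy version of an enzyme-substrate phosphorylation example; the atom names, rule names, edge set, and edge semantics are simplifications invented for illustration, not the BioNetGen implementation.

```python
# Minimal sketch (assumption: networkx is available); not the paper's implementation.
# Atoms are structural features (bonds, internal states); rules are reaction rules.
# Edges record which atoms a rule reads as context or writes as a product.
import networkx as nx

atoms = ["Enz.Sub bond", "Sub(s~U)", "Sub(s~P)"]   # toy atom names
rules = ["bind", "unbind", "phosphorylate"]        # toy rule names

ar = nx.DiGraph()
ar.add_nodes_from(atoms, kind="atom")
ar.add_nodes_from(rules, kind="rule")

# Hypothetical edges: rule -> atom means the rule produces that atom;
# atom -> rule means the atom is required context for the rule.
ar.add_edges_from([
    ("bind", "Enz.Sub bond"),           # binding creates the enzyme-substrate bond
    ("Enz.Sub bond", "unbind"),         # unbinding requires the bond
    ("Enz.Sub bond", "phosphorylate"),  # phosphorylation requires the bond...
    ("Sub(s~U)", "phosphorylate"),      # ...and an unphosphorylated site
    ("phosphorylate", "Sub(s~P)"),      # ...and produces the phosphorylated site
])

# Standard graph metrics of the kind used to compare diagram readability.
print("nodes:", ar.number_of_nodes(),
      "edges:", ar.number_of_edges(),
      "density:", round(nx.density(ar), 3))
```

In an actual AR graph the atoms are derived automatically from the rules and the edges carry types (context, produce, delete); the sketch only shows the bipartite structure that the visualization builds on.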
Related work

In addition to the approaches discussed in Introduction (Fig 1A–1D) and Methods (Fig 2C), we show examples of other currently available tools (Fig 11) and how they compare with compact rule visualizations and atom-rule graphs.

Fig 11. Other visualization approaches applied to the enzyme-substrate phosphorylation model of Fig 2. (A) The binding rule drawn using SBGN Process Description conventions, which require visual graph comparison. (B) Kappa story, showing the causal order of rules that produces sub_pp, which refers to doubly phosphorylated substrate. (C) Simmune Network Viewer diagram, which merges patterns across rules and hides certain causal dependencies (details in S4 Fig). Here, the Enz.Sub node merges all enzyme-substrate patterns shown in Fig 2. (D) SBGN Entity Relationship. (E) Molecular Interaction Map. Panels D-E require manual interpretation of the model, like the extended contact map. (F) rxncon regulatory graph visualization of the rxncon model format, which can only depict a limited subset of reaction rules (details in S5 Fig).

The SBGN Process Description (Fig 11A) [24] is a visualization standard for reacting entities. It has the same limitation as conventional rule visualization, namely the need for visual graph comparison.

The Kappa story (Fig 11B) [22] shows the causal order in which rules can be applied to generate specific outputs, and these are derived by analysis of model simulation trajectories. It is complementary to the statically derived AR graph for showing interactions between rules, but it does not show the structures that mediate these interactions nor does it provide a mechanism for grouping rules. Integrating Kappa stories with AR graphs is an interesting area for future work.

The Simmune Network Viewer (Fig 11C) [26] compresses the representation of rules differently from the AR graph: it merges patterns that have the same molecules and bonds, but differ in internal states. Like the AR graph, it shows both structures and rules, and it produces diagrams with much lower density (‘sim’ in Fig 10), but it obscures causal dependencies on internal states (S4 Fig).

The SBGN Entity Relationship diagram (Fig 11D) [24] and the Molecular Interaction Map (Fig 11E) [25], like the Extended Contact Map [23], are diagrams of model architecture that rely on manual analysis.

The rxncon regulatory graph (Fig 11F) visualizes the rxncon model format [27], which uses atoms (called elemental states in rxncon) to specify contextual influences on processes. This approach, which is also followed in Process Interaction Model [49], is less expressive than the graph transformation approach used in BioNetGen, Kappa and Simmune (S5 Fig). The AR graph we have developed generalizes the regulatory graph visualization so it can be derived from arbitrary types of rules found in BioNetGen, Kappa and Simmune models.
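The comparison above repeatedly touches on grouping of rules and on diagram density. The sketch below illustrates, under stated assumptions, how AR-graph nodes could be collapsed into coarser nodes in the spirit of a tunable compression step: the `compress` helper, the toy AR graph, and the "Sub site" grouping are all hypothetical, standing in for the expert-knowledge inputs the paper describes rather than reproducing its pipeline.

```python
# Minimal sketch of a grouping/compression step (assumptions: networkx is
# available; the toy AR graph and the "Sub site" grouping are hypothetical).
import networkx as nx

def compress(ar_graph, groups):
    """Collapse nodes into named groups; ungrouped nodes keep their own name.
    Edges internal to a group are dropped, parallel edges are merged."""
    member_of = {n: g for g, members in groups.items() for n in members}
    compressed = nx.DiGraph()
    for u, v in ar_graph.edges():
        gu, gv = member_of.get(u, u), member_of.get(v, v)
        if gu != gv:
            compressed.add_edge(gu, gv)
    return compressed

# Toy AR graph (same shape as in the earlier sketch).
toy = nx.DiGraph([
    ("bind", "Enz.Sub bond"),
    ("Enz.Sub bond", "unbind"),
    ("Enz.Sub bond", "phosphorylate"),
    ("Sub(s~U)", "phosphorylate"),
    ("phosphorylate", "Sub(s~P)"),
])

# "Expert knowledge": lump the two substrate-site atoms into a single node.
small = compress(toy, {"Sub site": {"Sub(s~U)", "Sub(s~P)"}})
print(sorted(small.edges()))
```

A real compression pipeline would also carry node and edge attributes through the collapse and decide which within-group edges to keep; here internal edges are simply dropped and parallel edges merged for brevity.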
[ "15217809", "19399430", "27402907", "16854213", "23508970", "12646643", "24782869", "19348745", "25147952", "16233948", "22913808", "26928575", "22114196", "22817898", "19045830", "27497444", "22607382", "21647530", "19668183", "12779444", "24934175", "22531118", "22412851", "22711887", "23423320", "24699269", "21288338", "26178138", "23361986", "24130475", "24288371", "10802651", "25378310", "26527732", "23020215", "23361986", "22081592", "23835289", "18755034", "18082598", "10647936", "23143271", "25231498", "23964998", "23716640", "25086704", "20829833" ]
[ { "pmid": "15217809", "title": "BioNetGen: software for rule-based modeling of signal transduction based on the interactions of molecular domains.", "abstract": "BioNetGen allows a user to create a computational model that characterizes the dynamics of a signal transduction system, and that accounts comprehensively and precisely for specified enzymatic activities, potential post-translational modifications and interactions of the domains of signaling molecules. The output defines and parameterizes the network of molecular species that can arise during signaling and provides functions that relate model variables to experimental readouts of interest. Models that can be generated are relevant for rational drug discovery, analysis of proteomic data and mechanistic studies of signal transduction." }, { "pmid": "19399430", "title": "Rule-based modeling of biochemical systems with BioNetGen.", "abstract": "Rule-based modeling involves the representation of molecules as structured objects and molecular interactions as rules for transforming the attributes of these objects. The approach is notable in that it allows one to systematically incorporate site-specific details about protein-protein interactions into a model for the dynamics of a signal-transduction system, but the method has other applications as well, such as following the fates of individual carbon atoms in metabolic reactions. The consequences of protein-protein interactions are difficult to specify and track with a conventional modeling approach because of the large number of protein phosphoforms and protein complexes that these interactions potentially generate. Here, we focus on how a rule-based model is specified in the BioNetGen language (BNGL) and how a model specification is analyzed using the BioNetGen software tool. We also discuss new developments in rule-based modeling that should enable the construction and analyses of comprehensive models for signal transduction pathways and similarly large-scale models for other biochemical systems." }, { "pmid": "27402907", "title": "BioNetGen 2.2: advances in rule-based modeling.", "abstract": ": BioNetGen is an open-source software package for rule-based modeling of complex biochemical systems. Version 2.2 of the software introduces numerous new features for both model specification and simulation. Here, we report on these additions, discussing how they facilitate the construction, simulation and analysis of larger and more complex models than previously possible.\n\n\nAVAILABILITY AND IMPLEMENTATION\nStable BioNetGen releases (Linux, Mac OS/X and Windows), with documentation, are available at http://bionetgen.org Source code is available at http://github.com/RuleWorld/bionetgen CONTACT: [email protected] information: Supplementary data are available at Bioinformatics online." }, { "pmid": "16854213", "title": "Key role of local regulation in chemosensing revealed by a new molecular interaction-based modeling method.", "abstract": "The signaling network underlying eukaryotic chemosensing is a complex combination of receptor-mediated transmembrane signals, lipid modifications, protein translocations, and differential activation/deactivation of membrane-bound and cytosolic components. As such, it provides particularly interesting challenges for a combined computational and experimental analysis. 
We developed a novel detailed molecular signaling model that, when used to simulate the response to the attractant cyclic adenosine monophosphate (cAMP), made nontrivial predictions about Dictyostelium chemosensing. These predictions, including the unexpected existence of spatially asymmetrical, multiphasic, cyclic adenosine monophosphate-induced PTEN translocation and phosphatidylinositol-(3,4,5)P3 generation, were experimentally verified by quantitative single-cell microscopy leading us to propose significant modifications to the current standard model for chemoattractant-induced biochemical polarization in this organism. Key to this successful modeling effort was the use of \"Simmune,\" a new software package that supports the facile development and testing of detailed computational representations of cellular behavior. An intuitive interface allows user definition of complex signaling networks based on the definition of specific molecular binding site interactions and the subcellular localization of molecules. It automatically translates such inputs into spatially resolved simulations and dynamic graphical representations of the resulting signaling network that can be explored in a manner that closely parallels wet lab experimental procedures. These features of Simmune were critical to the model development and analysis presented here and are likely to be useful in the computational investigation of many aspects of cell biology." }, { "pmid": "23508970", "title": "The Simmune Modeler visual interface for creating signaling networks based on bi-molecular interactions.", "abstract": "MOTIVATION\nBiochemical modeling efforts now frequently take advantage of the possibility to automatically create reaction networks based on the specification of pairwise molecular interactions. Even though a variety of tools exist to visualize the resulting networks, defining the rules for the molecular interactions typically requires writing scripts, which impacts the non-specialist accessibility of those approaches. We introduce the Simmune Modeler that allows users to specify molecular complexes and their interactions as well as the reaction-induced modifications of the molecules through a flexible visual interface. It can take into account the positions of the components of trans-membrane complexes relative to the embedding membranes as well as symmetry aspects affecting the reactions of multimeric molecular structures. Models created with this tool can be simulated using the Simmune Simulator or be exported as SBML code or as files describing the reaction networks as systems of ODEs for import into Matlab.\n\n\nAVAILABILITY\nThe Simmune Modeler and the associated simulators as well as extensive additional documentation and tutorials are freely available for Linux, Mac and Windows: http://go.usa.gov/QeH (Note shortened case-sensitive URL!)." }, { "pmid": "12646643", "title": "Investigation of early events in Fc epsilon RI-mediated signaling using a detailed mathematical model.", "abstract": "Aggregation of Fc epsilon RI on mast cells and basophils leads to autophosphorylation and increased activity of the cytosolic protein tyrosine kinase Syk. We investigated the roles of the Src kinase Lyn, the immunoreceptor tyrosine-based activation motifs (ITAMs) on the beta and gamma subunits of Fc epsilon RI, and Syk itself in the activation of Syk. Our approach was to build a detailed mathematical model of reactions involving Fc epsilon RI, Lyn, Syk, and a bivalent ligand that aggregates Fc(epsilon)RI. 
We applied the model to experiments in which covalently cross-linked IgE dimers stimulate rat basophilic leukemia cells. The model makes it possible to test the consistency of mechanistic assumptions with data that alone provide limited mechanistic insight. For example, the model helps sort out mechanisms that jointly control dephosphorylation of receptor subunits. In addition, interpreted in the context of the model, experimentally observed differences between the beta- and gamma-chains with respect to levels of phosphorylation and rates of dephosphorylation indicate that most cellular Syk, but only a small fraction of Lyn, is available to interact with receptors. We also show that although the beta ITAM acts to amplify signaling in experimental systems where its role has been investigated, there are conditions under which the beta ITAM will act as an inhibitor." }, { "pmid": "24782869", "title": "An Interaction Library for the FcεRI Signaling Network.", "abstract": "Antigen receptors play a central role in adaptive immune responses. Although the molecular networks associated with these receptors have been extensively studied, we currently lack a systems-level understanding of how combinations of non-covalent interactions and post-translational modifications are regulated during signaling to impact cellular decision-making. To fill this knowledge gap, it will be necessary to formalize and piece together information about individual molecular mechanisms to form large-scale computational models of signaling networks. To this end, we have developed an interaction library for signaling by the high-affinity IgE receptor, FcεRI. The library consists of executable rules for protein-protein and protein-lipid interactions. This library extends earlier models for FcεRI signaling and introduces new interactions that have not previously been considered in a model. Thus, this interaction library is a toolkit with which existing models can be expanded and from which new models can be built. As an example, we present models of branching pathways from the adaptor protein Lat, which influence production of the phospholipid PIP3 at the plasma membrane and the soluble second messenger IP3. We find that inclusion of a positive feedback loop gives rise to a bistable switch, which may ensure robust responses to stimulation above a threshold level. In addition, the library is visualized to facilitate understanding of network circuitry and identification of network motifs." }, { "pmid": "19348745", "title": "Aggregation of membrane proteins by cytosolic cross-linkers: theory and simulation of the LAT-Grb2-SOS1 system.", "abstract": "Ligand-induced receptor aggregation is a well-known mechanism for initiating intracellular signals but oligomerization of distal signaling molecules may also be required for signal propagation. Formation of complexes containing oligomers of the transmembrane adaptor protein, linker for the activation of T cells (LAT), has been identified as critical in mast cell and T cell activation mediated by immune response receptors. Cross-linking of LAT arises from the formation of a 2:1 complex between the adaptor Grb2 and the nucleotide exchange factor SOS1, which bridges two LAT molecules through the interaction of the Grb2 SH2 domain with a phosphotyrosine on LAT. We model this oligomerization and find that the valence of LAT for Grb2, which ranges from zero to three, is critical in determining the nature and extent of aggregation. 
A dramatic rise in oligomerization can occur when the valence switches from two to three. For valence three, an equilibrium theory predicts the possibility of forming a gel-like phase. This prediction is confirmed by stochastic simulations, which make additional predictions about the size of the gel and the kinetics of LAT oligomerization. We discuss the model predictions in light of recent experiments on RBL-2H3 and Jurkat E6.1 cells and suggest that the gel phase has been observed in activated mast cells." }, { "pmid": "25147952", "title": "Phosphorylation site dynamics of early T-cell receptor signaling.", "abstract": "In adaptive immune responses, T-cell receptor (TCR) signaling impacts multiple cellular processes and results in T-cell differentiation, proliferation, and cytokine production. Although individual protein-protein interactions and phosphorylation events have been studied extensively, we lack a systems-level understanding of how these components cooperate to control signaling dynamics, especially during the crucial first seconds of stimulation. Here, we used quantitative proteomics to characterize reshaping of the T-cell phosphoproteome in response to TCR/CD28 co-stimulation, and found that diverse dynamic patterns emerge within seconds. We detected phosphorylation dynamics as early as 5 s and observed widespread regulation of key TCR signaling proteins by 30 s. Development of a computational model pointed to the presence of novel regulatory mechanisms controlling phosphorylation of sites with central roles in TCR signaling. The model was used to generate predictions suggesting unexpected roles for the phosphatase PTPN6 (SHP-1) and shortcut recruitment of the actin regulator WAS. Predictions were validated experimentally. This integration of proteomics and modeling illustrates a novel, generalizable framework for solidifying quantitative understanding of a signaling network and for elucidating missing links." }, { "pmid": "16233948", "title": "A network model of early events in epidermal growth factor receptor signaling that accounts for combinatorial complexity.", "abstract": "We consider a model of early events in signaling by the epidermal growth factor (EGF) receptor (EGFR). The model includes EGF, EGFR, the adapter proteins Grb2 and Shc, and the guanine nucleotide exchange factor Sos, which is activated through EGF-induced formation of EGFR-Grb2-Sos and EGFR-Shc-Grb2-Sos assemblies at the plasma membrane. The protein interactions involved in signaling can potentially generate a diversity of protein complexes and phosphoforms; however, this diversity has been largely ignored in models of EGFR signaling. Here, we develop a model that accounts more fully for potential molecular diversity by specifying rules for protein interactions and then using these rules to generate a reaction network that includes all chemical species and reactions implied by the protein interactions. We obtain a model that predicts the dynamics of 356 molecular species, which are connected through 3749 unidirectional reactions. This network model is compared with a previously developed model that includes only 18 chemical species but incorporates the same scope of protein interactions. The predictions of this model are reproduced by the network model, which also yields new predictions. For example, the network model predicts distinct temporal patterns of autophosphorylation for different tyrosine residues of EGFR. 
A comparison of the two models suggests experiments that could lead to mechanistic insights about competition among adapter proteins for EGFR binding sites and the role of EGFR monomers in signal transduction." }, { "pmid": "22913808", "title": "Specification, annotation, visualization and simulation of a large rule-based model for ERBB receptor signaling.", "abstract": "BACKGROUND\nMathematical/computational models are needed to understand cell signaling networks, which are complex. Signaling proteins contain multiple functional components and multiple sites of post-translational modification. The multiplicity of components and sites of modification ensures that interactions among signaling proteins have the potential to generate myriad protein complexes and post-translational modification states. As a result, the number of chemical species that can be populated in a cell signaling network, and hence the number of equations in an ordinary differential equation model required to capture the dynamics of these species, is prohibitively large. To overcome this problem, the rule-based modeling approach has been developed for representing interactions within signaling networks efficiently and compactly through coarse-graining of the chemical kinetics of molecular interactions.\n\n\nRESULTS\nHere, we provide a demonstration that the rule-based modeling approach can be used to specify and simulate a large model for ERBB receptor signaling that accounts for site-specific details of protein-protein interactions. The model is considered large because it corresponds to a reaction network containing more reactions than can be practically enumerated. The model encompasses activation of ERK and Akt, and it can be simulated using a network-free simulator, such as NFsim, to generate time courses of phosphorylation for 55 individual serine, threonine, and tyrosine residues. The model is annotated and visualized in the form of an extended contact map.\n\n\nCONCLUSIONS\nWith the development of software that implements novel computational methods for calculating the dynamics of large-scale rule-based representations of cellular signaling networks, it is now possible to build and analyze models that include a significant fraction of the protein interactions that comprise a signaling network, with incorporation of the site-specific details of the interactions. Modeling at this level of detail is important for understanding cellular signaling." }, { "pmid": "26928575", "title": "Feedbacks, Bifurcations, and Cell Fate Decision-Making in the p53 System.", "abstract": "The p53 transcription factor is a regulator of key cellular processes including DNA repair, cell cycle arrest, and apoptosis. In this theoretical study, we investigate how the complex circuitry of the p53 network allows for stochastic yet unambiguous cell fate decision-making. The proposed Markov chain model consists of the regulatory core and two subordinated bistable modules responsible for cell cycle arrest and apoptosis. The regulatory core is controlled by two negative feedback loops (regulated by Mdm2 and Wip1) responsible for oscillations, and two antagonistic positive feedback loops (regulated by phosphatases Wip1 and PTEN) responsible for bistability. By means of bifurcation analysis of the deterministic approximation we capture the recurrent solutions (i.e., steady states and limit cycles) that delineate temporal responses of the stochastic system. 
Direct switching from the limit-cycle oscillations to the \"apoptotic\" steady state is enabled by the existence of a subcritical Neimark-Sacker bifurcation in which the limit cycle loses its stability by merging with an unstable invariant torus. Our analysis provides an explanation why cancer cell lines known to have vastly diverse expression levels of Wip1 and PTEN exhibit a broad spectrum of responses to DNA damage: from a fast transition to a high level of p53 killer (a p53 phosphoform which promotes commitment to apoptosis) in cells characterized by high PTEN and low Wip1 levels to long-lasting p53 level oscillations in cells having PTEN promoter methylated (as in, e.g., MCF-7 cell line)." }, { "pmid": "22114196", "title": "Scaffold number in yeast signaling system sets tradeoff between system output and dynamic range.", "abstract": "Although the proteins comprising many signaling systems are known, less is known about their numbers per cell. Existing measurements often vary by more than 10-fold. Here, we devised improved quantification methods to measure protein abundances in the Saccharomyces cerevisiae pheromone response pathway, an archetypical signaling system. These methods limited variation between independent measurements of protein abundance to a factor of two. We used these measurements together with quantitative models to identify and investigate behaviors of the pheromone response system sensitive to precise abundances. The difference between the maximum and basal signaling output (dynamic range) of the pheromone response MAPK cascade was strongly sensitive to the abundance of Ste5, the MAPK scaffold protein, and absolute system output depended on the amount of Fus3, the MAPK. Additional analysis and experiment suggest that scaffold abundance sets a tradeoff between maximum system output and system dynamic range, a prediction supported by recent experiments." }, { "pmid": "22817898", "title": "A whole-cell computational model predicts phenotype from genotype.", "abstract": "Understanding how complex phenotypes arise from individual molecules and their interactions is a primary challenge in biology that computational approaches are poised to tackle. We report a whole-cell computational model of the life cycle of the human pathogen Mycoplasma genitalium that includes all of its molecular components and their interactions. An integrative approach to modeling that combines diverse mathematics enabled the simultaneous inclusion of fundamentally different cellular processes and experimental measurements. Our whole-cell model accounts for all annotated gene functions and was validated against a broad range of data. The model provides insights into many previously unobserved cellular behaviors, including in vivo rates of protein-DNA association and an inverse relationship between the durations of DNA replication initiation and replication. In addition, experimental analysis directed by model predictions identified previously undetected kinetic parameters and biological functions. We conclude that comprehensive whole-cell models can be used to facilitate biological discovery." }, { "pmid": "19045830", "title": "Virtual Cell modelling and simulation software environment.", "abstract": "The Virtual Cell (VCell; http://vcell.org/) is a problem solving environment, built on a central database, for analysis, modelling and simulation of cell biological processes. 
VCell integrates a growing range of molecular mechanisms, including reaction kinetics, diffusion, flow, membrane transport, lateral membrane diffusion and electrophysiology, and can associate these with geometries derived from experimental microscope images. It has been developed and deployed as a web-based, distributed, client-server system, with more than a thousand world-wide users. VCell provides a separation of layers (core technologies and abstractions) representing biological models, physical mechanisms, geometry, mathematical models and numerical methods. This separation clarifies the impact of modelling decisions, assumptions and approximations. The result is a physically consistent, mathematically rigorous, spatial modelling and simulation framework. Users create biological models and VCell will automatically (i) generate the appropriate mathematical encoding for running a simulation and (ii) generate and compile the appropriate computer code. Both deterministic and stochastic algorithms are supported for describing and running non-spatial simulations; a full partial differential equation solver using the finite volume numerical algorithm is available for reaction-diffusion-advection simulations in complex cell geometries including 3D geometries derived from microscope images. Using the VCell database, models and model components can be reused and updated, as well as privately shared among collaborating groups, or published. Exchange of models with other tools is possible via import/export of SBML, CellML and MatLab formats. Furthermore, curation of models is facilitated by external database binding mechanisms for unique identification of components and by standardised annotations compliant with the MIRIAM standard. VCell is now open source, with its native model encoding language (VCML) being a public specification, which stands as the basis for a new generation of more customised, experiment-centric modelling tools using a new plug-in based platform." }, { "pmid": "27497444", "title": "Rule-based modeling with Virtual Cell.", "abstract": "UNLABELLED\nRule-based modeling is invaluable when the number of possible species and reactions in a model become too large to allow convenient manual specification. The popular rule-based software tools BioNetGen and NFSim provide powerful modeling and simulation capabilities at the cost of learning a complex scripting language which is used to specify these models. Here, we introduce a modeling tool that combines new graphical rule-based model specification with existing simulation engines in a seamless way within the familiar Virtual Cell (VCell) modeling environment. A mathematical model can be built integrating explicit reaction networks with reaction rules. In addition to offering a large choice of ODE and stochastic solvers, a model can be simulated using a network free approach through the NFSim simulation engine.\n\n\nAVAILABILITY AND IMPLEMENTATION\nAvailable as VCell (versions 6.0 and later) at the Virtual Cell web site (http://vcell.org/). The application installs and runs on all major platforms and does not require registration for use on the user's computer. Tutorials are available at the Virtual Cell website and Help is provided within the software. Source code is available at Sourceforge.\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online." 
}, { "pmid": "22607382", "title": "RuleBender: integrated modeling, simulation and visualization for rule-based intracellular biochemistry.", "abstract": "BACKGROUND\nRule-based modeling (RBM) is a powerful and increasingly popular approach to modeling cell signaling networks. However, novel visual tools are needed in order to make RBM accessible to a broad range of users, to make specification of models less error prone, and to improve workflows.\n\n\nRESULTS\nWe introduce RuleBender, a novel visualization system for the integrated visualization, modeling and simulation of rule-based intracellular biochemistry. We present the user requirements, visual paradigms, algorithms and design decisions behind RuleBender, with emphasis on visual global/local model exploration and integrated execution of simulations. The support of RBM creation, debugging, and interactive visualization expedites the RBM learning process and reduces model construction time; while built-in model simulation and results with multiple linked views streamline the execution and analysis of newly created models and generated networks.\n\n\nCONCLUSION\nRuleBender has been adopted as both an educational and a research tool and is available as a free open source tool at http://www.rulebender.org. A development cycle that includes close interaction with expert users allows RuleBender to better serve the needs of the systems biology community." }, { "pmid": "21647530", "title": "Guidelines for visualizing and annotating rule-based models.", "abstract": "Rule-based modeling provides a means to represent cell signaling systems in a way that captures site-specific details of molecular interactions. For rule-based models to be more widely understood and (re)used, conventions for model visualization and annotation are needed. We have developed the concepts of an extended contact map and a model guide for illustrating and annotating rule-based models. An extended contact map represents the scope of a model by providing an illustration of each molecule, molecular component, direct physical interaction, post-translational modification, and enzyme-substrate relationship considered in a model. A map can also illustrate allosteric effects, structural relationships among molecular components, and compartmental locations of molecules. A model guide associates elements of a contact map with annotation and elements of an underlying model, which may be fully or partially specified. A guide can also serve to document the biological knowledge upon which a model is based. We provide examples of a map and guide for a published rule-based model that characterizes early events in IgE receptor (FcεRI) signaling. We also provide examples of how to visualize a variety of processes that are common in cell signaling systems but not considered in the example model, such as ubiquitination. An extended contact map and an associated guide can document knowledge of a cell signaling system in a form that is visual as well as executable. As a tool for model annotation, a map and guide can communicate the content of a model clearly and with precision, even for large models." }, { "pmid": "19668183", "title": "The Systems Biology Graphical Notation.", "abstract": "Circuit diagrams and Unified Modeling Language diagrams are just two examples of standard visual languages that help accelerate work by promoting regularity, removing ambiguity and enabling software tool support for communication of complex information. 
Ironically, despite having one of the highest ratios of graphical to textual information, biology still lacks standard graphical notations. The recent deluge of biological knowledge makes addressing this deficit a pressing concern. Toward this goal, we present the Systems Biology Graphical Notation (SBGN), a visual language developed by a community of biochemists, modelers and computer scientists. SBGN consists of three complementary languages: process diagram, entity relationship diagram and activity flow diagram. Together they enable scientists to represent networks of biochemical interactions in a standard, unambiguous way. We believe that SBGN will foster efficient and accurate representation, visualization, storage, exchange and reuse of information on all kinds of biological knowledge, from gene regulation, to metabolism, to cellular signaling." }, { "pmid": "12779444", "title": "Molecular interaction maps as information organizers and simulation guides.", "abstract": "A graphical method for mapping bioregulatory networks is presented that is suited for the representation of multimolecular complexes, protein modifications, as well as actions at cell membranes and between protein domains. The symbol conventions defined for these molecular interaction maps are designed to accommodate multiprotein assemblies and protein modifications that can generate combinatorially large numbers of molecular species. Diagrams can either be \"heuristic,\" meaning that detailed knowledge of all possible reaction paths is not required, or \"explicit,\" meaning that the diagrams are totally unambiguous and suitable for simulation. Interaction maps are linked to annotation lists and indexes that provide ready access to pertinent data and references, and that allow any molecular species to be easily located. Illustrative interaction maps are included on the domain interactions of Src, transcription control of E2F-regulated genes, and signaling from receptor tyrosine kinase through phosphoinositides to Akt/PKB. A simple method of going from an explicit interaction diagram to an input file for a simulation program is outlined, in which the differential equations need not be written out. The role of interaction maps in selecting and defining systems for modeling is discussed. (c) 2001 American Institute of Physics." }, { "pmid": "24934175", "title": "NetworkViewer: visualizing biochemical reaction networks with embedded rendering of molecular interaction rules.", "abstract": "BACKGROUND\nNetwork representations of cell-biological signaling processes frequently contain large numbers of interacting molecular and multi-molecular components that can exist in, and switch between, multiple biochemical and/or structural states. In addition, the interaction categories (associations, dissociations and transformations) in such networks cannot satisfactorily be mapped onto simple arrows connecting pairs of components since their specifications involve information such as reaction rates and conditions with regard to the states of the interacting components. This leads to the challenge of having to reconcile competing objectives: providing a high-level overview without omitting relevant information, and showing interaction specifics while not overwhelming users with too much detail displayed simultaneously. 
This problem is typically addressed by splitting the information required to understand a reaction network model into several categories that are rendered separately through combinations of visualizations and/or textual and tabular elements, requiring modelers to consult several sources to obtain comprehensive insights into the underlying assumptions of the model.\n\n\nRESULTS\nWe report the development of an application, the Simmune NetworkViewer, that visualizes biochemical reaction networks using iconographic representations of protein interactions and the conditions under which the interactions take place using the same symbols that were used to specify the underlying model with the Simmune Modeler. This approach not only provides a coherent model representation but, moreover, following the principle of \"overview first, zoom and filter, then details-on-demand,\" can generate an overview visualization of the global network and, upon user request, presents more detailed views of local sub-networks and the underlying reaction rules for selected interactions. This visual integration of information would be difficult to achieve with static network representations or approaches that use scripted model specifications without offering simple but detailed symbolic representations of molecular interactions, their conditions and consequences in terms of biochemical modifications.\n\n\nCONCLUSIONS\nThe Simmune NetworkViewer provides concise, yet comprehensive visualizations of reaction networks created in the Simmune framework. In the near future, by adopting the upcoming SBML standard for encoding multi-component, multi-state molecular complexes and their interactions as input, the NetworkViewer will, moreover, be able to offer such visualization for any rule-based model that can be exported to that standard." }, { "pmid": "22531118", "title": "A framework for mapping, visualisation and automatic model creation of signal-transduction networks.", "abstract": "Intracellular signalling systems are highly complex. This complexity makes handling, analysis and visualisation of available knowledge a major challenge in current signalling research. Here, we present a novel framework for mapping signal-transduction networks that avoids the combinatorial explosion by breaking down the network in reaction and contingency information. It provides two new visualisation methods and automatic export to mathematical models. We use this framework to compile the presently most comprehensive map of the yeast MAP kinase network. Our method improves previous strategies by combining (I) more concise mapping adapted to empirical data, (II) individual referencing for each piece of information, (III) visualisation without simplifications or added uncertainty, (IV) automatic visualisation in multiple formats, (V) automatic export to mathematical models and (VI) compatibility with established formats. The framework is supported by an open source software tool that facilitates integration of the three levels of network analysis: definition, visualisation and mathematical modelling. The framework is species independent and we expect that it will have wider impact in signalling research on any system." }, { "pmid": "22412851", "title": "Combinatorial complexity and compositional drift in protein interaction networks.", "abstract": "The assembly of molecular machines and transient signaling complexes does not typically occur under circumstances in which the appropriate proteins are isolated from all others present in the cell. 
Rather, assembly must proceed in the context of large-scale protein-protein interaction (PPI) networks that are characterized both by conflict and combinatorial complexity. Conflict refers to the fact that protein interfaces can often bind many different partners in a mutually exclusive way, while combinatorial complexity refers to the explosion in the number of distinct complexes that can be formed by a network of binding possibilities. Using computational models, we explore the consequences of these characteristics for the global dynamics of a PPI network based on highly curated yeast two-hybrid data. The limited molecular context represented in this data-type translates formally into an assumption of independent binding sites for each protein. The challenge of avoiding the explicit enumeration of the astronomically many possibilities for complex formation is met by a rule-based approach to kinetic modeling. Despite imposing global biophysical constraints, we find that initially identical simulations rapidly diverge in the space of molecular possibilities, eventually sampling disjoint sets of large complexes. We refer to this phenomenon as \"compositional drift\". Since interaction data in PPI networks lack detailed information about geometric and biological constraints, our study does not represent a quantitative description of cellular dynamics. Rather, our work brings to light a fundamental problem (the control of compositional drift) that must be solved by mechanisms of assembly in the context of large networks. In cases where drift is not (or cannot be) completely controlled by the cell, this phenomenon could constitute a novel source of phenotypic heterogeneity in cell populations." }, { "pmid": "22711887", "title": "A computational model for early events in B cell antigen receptor signaling: analysis of the roles of Lyn and Fyn.", "abstract": "BCR signaling regulates the activities and fates of B cells. BCR signaling encompasses two feedback loops emanating from Lyn and Fyn, which are Src family protein tyrosine kinases (SFKs). Positive feedback arises from SFK-mediated trans phosphorylation of BCR and receptor-bound Lyn and Fyn, which increases the kinase activities of Lyn and Fyn. Negative feedback arises from SFK-mediated cis phosphorylation of the transmembrane adapter protein PAG1, which recruits the cytosolic protein tyrosine kinase Csk to the plasma membrane, where it acts to decrease the kinase activities of Lyn and Fyn. To study the effects of the positive and negative feedback loops on the dynamical stability of BCR signaling and the relative contributions of Lyn and Fyn to BCR signaling, we consider in this study a rule-based model for early events in BCR signaling that encompasses membrane-proximal interactions of six proteins, as follows: BCR, Lyn, Fyn, Csk, PAG1, and Syk, a cytosolic protein tyrosine kinase that is activated as a result of SFK-mediated phosphorylation of BCR. The model is consistent with known effects of Lyn and Fyn deletions. We find that BCR signaling can generate a single pulse or oscillations of Syk activation depending on the strength of Ag signal and the relative levels of Lyn and Fyn. We also show that bistability can arise in Lyn- or Csk-deficient cells." 
}, { "pmid": "23423320", "title": "Programming biological models in Python using PySB.", "abstract": "Mathematical equations are fundamental to modeling biological networks, but as networks get large and revisions frequent, it becomes difficult to manage equations directly or to combine previously developed models. Multiple simultaneous efforts to create graphical standards, rule-based languages, and integrated software workbenches aim to simplify biological modeling but none fully meets the need for transparent, extensible, and reusable models. In this paper we describe PySB, an approach in which models are not only created using programs, they are programs. PySB draws on programmatic modeling concepts from little b and ProMot, the rule-based languages BioNetGen and Kappa and the growing library of Python numerical tools. Central to PySB is a library of macros encoding familiar biochemical actions such as binding, catalysis, and polymerization, making it possible to use a high-level, action-oriented vocabulary to construct detailed models. As Python programs, PySB models leverage tools and practices from the open-source software community, substantially advancing our ability to distribute and manage the work of testing biochemical hypotheses. We illustrate these ideas using new and previously published models of apoptosis." }, { "pmid": "24699269", "title": "Exact hybrid particle/population simulation of rule-based models of biochemical systems.", "abstract": "Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This \"network-free\" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of \"partial network expansion\" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. 
Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility." }, { "pmid": "21288338", "title": "Hierarchical graphs for rule-based modeling of biochemical systems.", "abstract": "BACKGROUND\nIn rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system.\n\n\nRESULTS\nFor purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or more generally to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm.\n\n\nCONCLUSIONS\nHierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for specifying rule-based models, such as the BioNetGen language (BNGL). Thus, the proposed use of hierarchical graphs should promote clarity and better understanding of rule-based models." }, { "pmid": "26178138", "title": "Modeling for (physical) biologists: an introduction to the rule-based approach.", "abstract": "Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions." 
}, { "pmid": "23361986", "title": "Rule-based modeling of signal transduction: a primer.", "abstract": "Biological cells accomplish their physiological functions using interconnected networks of genes, proteins, and other biomolecules. Most interactions in biological signaling networks, such as bimolecular association or covalent modification, can be modeled in a physically realistic manner using elementary reaction kinetics. However, the size and combinatorial complexity of such reaction networks have hindered such a mechanistic approach, leading many to conclude that it is premature and to adopt alternative statistical or phenomenological approaches. The recent development of rule-based modeling languages, such as BioNetGen (BNG) and Kappa, enables the precise and succinct encoding of large reaction networks. Coupled with complementary advances in simulation methods, these languages circumvent the combinatorial barrier and allow mechanistic modeling on a much larger scale than previously possible. These languages are also intuitive to the biologist and accessible to the novice modeler. In this chapter, we provide a self-contained tutorial on modeling signal transduction networks using the BNG Language and related software tools. We review the basic syntax of the language and show how biochemical knowledge can be articulated using reaction rules, which can be used to capture a broad range of biochemical and biophysical phenomena in a concise and modular way. A model of ligand-activated receptor dimerization is examined, with a detailed treatment of each step of the modeling process. Sections discussing modeling theory, implicit and explicit model assumptions, and model parameterization are included, with special focus on retaining biophysical realism and avoiding common pitfalls. We also discuss the more advanced case of compartmental modeling using the compartmental extension to BioNetGen. In addition, we provide a comprehensive set of example reaction rules that cover the various aspects of signal transduction, from signaling at the membrane to gene regulation. The reader can modify these reaction rules to model their own systems of interest." }, { "pmid": "24130475", "title": "Machines vs. ensembles: effective MAPK signaling through heterogeneous sets of protein complexes.", "abstract": "Despite the importance of intracellular signaling networks, there is currently no consensus regarding the fundamental nature of the protein complexes such networks employ. One prominent view involves stable signaling machines with well-defined quaternary structures. The combinatorial complexity of signaling networks has led to an opposing perspective, namely that signaling proceeds via heterogeneous pleiomorphic ensembles of transient complexes. Since many hypotheses regarding network function rely on how we conceptualize signaling complexes, resolving this issue is a central problem in systems biology. Unfortunately, direct experimental characterization of these complexes has proven technologically difficult, while combinatorial complexity has prevented traditional modeling methods from approaching this question. Here we employ rule-based modeling, a technique that overcomes these limitations, to construct a model of the yeast pheromone signaling network. We found that this model exhibits significant ensemble character while generating reliable responses that match experimental observations. 
To contrast the ensemble behavior, we constructed a model that employs hierarchical assembly pathways to produce scaffold-based signaling machines. We found that this machine model could not replicate the experimentally observed combinatorial inhibition that arises when the scaffold is overexpressed. This finding provides evidence against the hierarchical assembly of machines in the pheromone signaling network and suggests that machines and ensembles may serve distinct purposes in vivo. In some cases, e.g. core enzymatic activities like protein synthesis and degradation, machines assembled via hierarchical energy landscapes may provide functional stability for the cell. In other cases, such as signaling, ensembles may represent a form of weak linkage, facilitating variation and plasticity in network evolution. The capacity of ensembles to signal effectively will ultimately shape how we conceptualize the function, evolution and engineering of signaling networks." }, { "pmid": "24288371", "title": "Pfam: the protein families database.", "abstract": "Pfam, available via servers in the UK (http://pfam.sanger.ac.uk/) and the USA (http://pfam.janelia.org/), is a widely used database of protein families, containing 14 831 manually curated entries in the current release, version 27.0. Since the last update article 2 years ago, we have generated 1182 new families and maintained sequence coverage of the UniProt Knowledgebase (UniProtKB) at nearly 80%, despite a 50% increase in the size of the underlying sequence database. Since our 2012 article describing Pfam, we have also undertaken a comprehensive review of the features that are provided by Pfam over and above the basic family data. For each feature, we determined the relevance, computational burden, usage statistics and the functionality of the feature in a website context. As a consequence of this review, we have removed some features, enhanced others and developed new ones to meet the changing demands of computational biology. Here, we describe the changes to Pfam content. Notably, we now provide family alignments based on four different representative proteome sequence data sets and a new interactive DNA search interface. We also discuss the mapping between Pfam and known 3D structures." }, { "pmid": "25378310", "title": "BRENDA in 2015: exciting developments in its 25th year of existence.", "abstract": "The BRENDA enzyme information system (http://www.brenda-enzymes.org/) has developed into an elaborate system of enzyme and enzyme-ligand information obtained from different sources, combined with flexible query systems and evaluation tools. The information is obtained by manual extraction from primary literature, text and data mining, data integration, and prediction algorithms. Approximately 300 million data include enzyme function and molecular data from more than 30,000 organisms. The manually derived core contains 3 million data from 77,000 enzymes annotated from 135,000 literature references. Each entry is connected to the literature reference and the source organism. They are complemented by information on occurrence, enzyme/disease relationships from text mining, sequences and 3D structures from other databases, and predicted enzyme location and genome annotation. Functional and structural data of more than 190,000 enzyme ligands are stored in BRENDA. New features improving the functionality and analysis tools were implemented. 
The human anatomy atlas CAVEman is linked to the BRENDA Tissue Ontology terms providing a connection between anatomical and functional enzyme data. Word Maps for enzymes obtained from PubMed abstracts highlight application and scientific relevance of enzymes. The EnzymeDetector genome annotation tool and the reaction database BKM-react including reactions from BRENDA, KEGG and MetaCyc were improved. The website was redesigned providing new query options." }, { "pmid": "26527732", "title": "The MetaCyc database of metabolic pathways and enzymes and the BioCyc collection of pathway/genome databases.", "abstract": "The MetaCyc database (MetaCyc.org) is a freely accessible comprehensive database describing metabolic pathways and enzymes from all domains of life. The majority of MetaCyc pathways are small-molecule metabolic pathways that have been experimentally determined. MetaCyc contains more than 2400 pathways derived from >46,000 publications, and is the largest curated collection of metabolic pathways. BioCyc (BioCyc.org) is a collection of 5700 organism-specific Pathway/Genome Databases (PGDBs), each containing the full genome and predicted metabolic network of one organism, including metabolites, enzymes, reactions, metabolic pathways, predicted operons, transport systems, and pathway-hole fillers. The BioCyc website offers a variety of tools for querying and analyzing PGDBs, including Omics Viewers and tools for comparative analysis. This article provides an update of new developments in MetaCyc and BioCyc during the last two years, including addition of Gibbs free energy values for compounds and reactions; redesign of the primary gene/protein page; addition of a tool for creating diagrams containing multiple linked pathways; several new search capabilities, including searching for genes based on sequence patterns, searching for databases based on an organism's phenotypes, and a cross-organism search; and a metabolite identifier translation service." }, { "pmid": "23020215", "title": "The Process-Interaction-Model: a common representation of rule-based and logical models allows studying signal transduction on different levels of detail.", "abstract": "BACKGROUND\nSignaling systems typically involve large, structured molecules each consisting of a large number of subunits called molecule domains. In modeling such systems these domains can be considered as the main players. In order to handle the resulting combinatorial complexity, rule-based modeling has been established as the tool of choice. In contrast to the detailed quantitative rule-based modeling, qualitative modeling approaches like logical modeling rely solely on the network structure and are particularly useful for analyzing structural and functional properties of signaling systems.\n\n\nRESULTS\nWe introduce the Process-Interaction-Model (PIM) concept. It defines a common representation (or basis) of rule-based models and site-specific logical models, and, furthermore, includes methods to derive models of both types from a given PIM. A PIM is based on directed graphs with nodes representing processes like post-translational modifications or binding processes and edges representing the interactions among processes. The applicability of the concept has been demonstrated by applying it to a model describing EGF insulin crosstalk. 
A prototypic implementation of the PIM concept has been integrated in the modeling software ProMoT.\n\n\nCONCLUSIONS\nThe PIM concept provides a common basis for two modeling formalisms tailored to the study of signaling systems: a quantitative (rule-based) and a qualitative (logical) modeling formalism. Every PIM is a compact specification of a rule-based model and facilitates the systematic set-up of a rule-based model, while at the same time facilitating the automatic generation of a site-specific logical model. Consequently, modifications can be made on the underlying basis and then be propagated into the different model specifications - ensuring consistency of all models, regardless of the modeling formalism. This facilitates the analysis of a system on different levels of detail as it guarantees the application of established simulation and analysis methods to consistent descriptions (rule-based and logical) of a particular signaling system." }, { "pmid": "23361986", "title": "Rule-based modeling of signal transduction: a primer.", "abstract": "Biological cells accomplish their physiological functions using interconnected networks of genes, proteins, and other biomolecules. Most interactions in biological signaling networks, such as bimolecular association or covalent modification, can be modeled in a physically realistic manner using elementary reaction kinetics. However, the size and combinatorial complexity of such reaction networks have hindered such a mechanistic approach, leading many to conclude that it is premature and to adopt alternative statistical or phenomenological approaches. The recent development of rule-based modeling languages, such as BioNetGen (BNG) and Kappa, enables the precise and succinct encoding of large reaction networks. Coupled with complementary advances in simulation methods, these languages circumvent the combinatorial barrier and allow mechanistic modeling on a much larger scale than previously possible. These languages are also intuitive to the biologist and accessible to the novice modeler. In this chapter, we provide a self-contained tutorial on modeling signal transduction networks using the BNG Language and related software tools. We review the basic syntax of the language and show how biochemical knowledge can be articulated using reaction rules, which can be used to capture a broad range of biochemical and biophysical phenomena in a concise and modular way. A model of ligand-activated receptor dimerization is examined, with a detailed treatment of each step of the modeling process. Sections discussing modeling theory, implicit and explicit model assumptions, and model parameterization are included, with special focus on retaining biophysical realism and avoiding common pitfalls. We also discuss the more advanced case of compartmental modeling using the compartmental extension to BioNetGen. In addition, we provide a comprehensive set of example reaction rules that cover the various aspects of signal transduction, from signaling at the membrane to gene regulation. The reader can modify these reaction rules to model their own systems of interest." }, { "pmid": "22081592", "title": "Fluxviz - Cytoscape plug-in for visualization of flux distributions in networks.", "abstract": "MOTIVATION\nMethods like FBA and kinetic modeling are widely used to calculate fluxes in metabolic networks. 
For the analysis and understanding of simulation results and experimentally measured fluxes visualization software within the network context is indispensable.\n\n\nRESULTS\nWe present Flux Viz, an open-source Cytoscape plug-in for the visualization of flux distributions in molecular interaction networks. FluxViz supports (i) import of networks in a variety of formats (SBML, GML, XGMML, SIF, BioPAX, PSI-MI) (ii) import of flux distributions as CSV, Cytoscape attributes or VAL files (iii) limitation of views to flux carrying reactions (flux subnetwork) or network attributes like localization (iv) export of generated views (SVG, EPS, PDF, BMP, PNG). Though FluxViz was primarily developed as tool for the visualization of fluxes in metabolic networks and the analysis of simulation results from FASIMU, a flexible software for batch flux-balance computation in large metabolic networks, it is not limited to biochemical reaction networks and FBA but can be applied to the visualization of arbitrary fluxes in arbitrary graphs.\n\n\nAVAILABILITY\nThe platform-independent program is an open-source project, freely available at http://sourceforge.net/projects/fluxvizplugin/ under GNU public license, including manual, tutorial and examples." }, { "pmid": "23835289", "title": "Reaction-contingency based bipartite Boolean modelling.", "abstract": "BACKGROUND\nIntracellular signalling systems are highly complex, rendering mathematical modelling of large signalling networks infeasible or impractical. Boolean modelling provides one feasible approach to whole-network modelling, but at the cost of dequantification and decontextualisation of activation. That is, these models cannot distinguish between different downstream roles played by the same component activated in different contexts.\n\n\nRESULTS\nHere, we address this with a bipartite Boolean modelling approach. Briefly, we use a state oriented approach with separate update rules based on reactions and contingencies. This approach retains contextual activation information and distinguishes distinct signals passing through a single component. Furthermore, we integrate this approach in the rxncon framework to support automatic model generation and iterative model definition and validation. We benchmark this method with the previously mapped MAP kinase network in yeast, showing that minor adjustments suffice to produce a functional network description.\n\n\nCONCLUSIONS\nTaken together, we (i) present a bipartite Boolean modelling approach that retains contextual activation information, (ii) provide software support for automatic model generation, visualisation and simulation, and (iii) demonstrate its use for iterative model generation and validation." }, { "pmid": "18755034", "title": "Exact model reduction of combinatorial reaction networks.", "abstract": "BACKGROUND\nReceptors and scaffold proteins usually possess a high number of distinct binding domains inducing the formation of large multiprotein signaling complexes. Due to combinatorial reasons the number of distinguishable species grows exponentially with the number of binding domains and can easily reach several millions. Even by including only a limited number of components and binding domains the resulting models are very large and hardly manageable. A novel model reduction technique allows the significant reduction and modularization of these models.\n\n\nRESULTS\nWe introduce methods that extend and complete the already introduced approach. 
For instance, we provide techniques to handle the formation of multi-scaffold complexes as well as receptor dimerization. Furthermore, we discuss a new modeling approach that allows the direct generation of exactly reduced model structures. The developed methods are used to reduce a model of EGF and insulin receptor crosstalk comprising 5,182 ordinary differential equations (ODEs) to a model with 87 ODEs.\n\n\nCONCLUSION\nThe methods, presented in this contribution, significantly enhance the available methods to exactly reduce models of combinatorial reaction networks." }, { "pmid": "18082598", "title": "The age of crosstalk: phosphorylation, ubiquitination, and beyond.", "abstract": "Crosstalk between different types of posttranslational modification is an emerging theme in eukaryotic biology. Particularly prominent are the multiple connections between phosphorylation and ubiquitination, which act either positively or negatively in both directions to regulate these processes." }, { "pmid": "23143271", "title": "A dynamic and intricate regulatory network determines Pseudomonas aeruginosa virulence.", "abstract": "Pseudomonas aeruginosa is a metabolically versatile bacterium that is found in a wide range of biotic and abiotic habitats. It is a major human opportunistic pathogen causing numerous acute and chronic infections. The critical traits contributing to the pathogenic potential of P. aeruginosa are the production of a myriad of virulence factors, formation of biofilms and antibiotic resistance. Expression of these traits is under stringent regulation, and it responds to largely unidentified environmental signals. This review is focused on providing a global picture of virulence gene regulation in P. aeruginosa. In addition to key regulatory pathways that control the transition from acute to chronic infection phenotypes, some regulators have been identified that modulate multiple virulence mechanisms. Despite of a propensity for chaotic behaviour, no chaotic motifs were readily observed in the P. aeruginosa virulence regulatory network. Having a 'birds-eye' view of the regulatory cascades provides the forum opportunities to pose questions, formulate hypotheses and evaluate theories in elucidating P. aeruginosa pathogenesis. Understanding the mechanisms involved in making P. aeruginosa a successful pathogen is essential in helping devise control strategies." }, { "pmid": "25231498", "title": "WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions.", "abstract": "Mechanistic 'whole-cell' models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. 
WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering.\n\n\nDATABASE URL\nhttp://www.wholecellsimdb.org SOURCE CODE REPOSITORY: URL: http://github.com/CovertLab/WholeCellSimDB." }, { "pmid": "23964998", "title": "WholeCellViz: data visualization for whole-cell models.", "abstract": "BACKGROUND\nWhole-cell models promise to accelerate biomedical science and engineering. However, discovering new biology from whole-cell models and other high-throughput technologies requires novel tools for exploring and analyzing complex, high-dimensional data.\n\n\nRESULTS\nWe developed WholeCellViz, a web-based software program for visually exploring and analyzing whole-cell simulations. WholeCellViz provides 14 animated visualizations, including metabolic and chromosome maps. These visualizations help researchers analyze model predictions by displaying predictions in their biological context. Furthermore, WholeCellViz enables researchers to compare predictions within and across simulations by allowing users to simultaneously display multiple visualizations.\n\n\nCONCLUSION\nWholeCellViz was designed to facilitate exploration, analysis, and communication of whole-cell model data. Taken together, WholeCellViz helps researchers use whole-cell model simulations to drive advances in biology and bioengineering." }, { "pmid": "23716640", "title": "VisANT 4.0: Integrative network platform to connect genes, drugs, diseases and therapies.", "abstract": "With the rapid accumulation of our knowledge on diseases, disease-related genes and drug targets, network-based analysis plays an increasingly important role in systems biology, systems pharmacology and translational science. The new release of VisANT aims to provide new functions to facilitate the convenient network analysis of diseases, therapies, genes and drugs. With improved understanding of the mechanisms of complex diseases and drug actions through network analysis, novel drug methods (e.g., drug repositioning, multi-target drug and combination therapy) can be designed. More specifically, the new update includes (i) integrated search and navigation of disease and drug hierarchies; (ii) integrated disease-gene, therapy-drug and drug-target association to aid the network construction and filtering; (iii) annotation of genes/drugs using disease/therapy information; (iv) prediction of associated diseases/therapies for a given set of genes/drugs using enrichment analysis; (v) network transformation to support construction of versatile network of drugs, genes, diseases and therapies; (vi) enhanced user interface using docking windows to allow easy customization of node and edge properties with build-in legend node to distinguish different node type. VisANT is freely available at: http://visant.bu.edu." 
}, { "pmid": "25086704", "title": "Integrating biological pathways and genomic profiles with ChiBE 2.", "abstract": "BACKGROUND\nDynamic visual exploration of detailed pathway information can help researchers digest and interpret complex mechanisms and genomic datasets.\n\n\nRESULTS\nChiBE is a free, open-source software tool for visualizing, querying, and analyzing human biological pathways in BioPAX format. The recently released version 2 can search for neighborhoods, paths between molecules, and common regulators/targets of molecules, on large integrated cellular networks in the Pathway Commons database as well as in local BioPAX models. Resulting networks can be automatically laid out for visualization using a graphically rich, process-centric notation. Profiling data from the cBioPortal for Cancer Genomics and expression data from the Gene Expression Omnibus can be overlaid on these networks.\n\n\nCONCLUSIONS\nChiBE's new capabilities are organized around a genomics-oriented workflow and offer a unique comprehensive pathway analysis solution for genomics researchers. The software is freely available at http://code.google.com/p/chibe." }, { "pmid": "20829833", "title": "The BioPAX community standard for pathway data sharing.", "abstract": "Biological Pathway Exchange (BioPAX) is a standard language to represent biological pathways at the molecular and cellular level and to facilitate the exchange of pathway data. The rapid growth of the volume of pathway data has spurred the development of databases and computational tools to aid interpretation; however, use of these data is hampered by the current fragmentation of pathway information across many databases with incompatible formats. BioPAX, which was created through a community process, solves this problem by making pathway data substantially easier to collect, index, interpret and share. BioPAX can represent metabolic and signaling pathways, molecular and genetic interactions and gene regulation networks. Using BioPAX, millions of interactions, organized into thousands of pathways, from many organisms are available from a growing number of databases. This large amount of pathway data in a computable form will support visualization, analysis and biological discovery." } ]
International Journal for Equity in Health
29183335
PMC5706427
10.1186/s12939-017-0702-z
Evaluating medical convenience in ethnic minority areas of Southwest China via road network vulnerability: a case study for Dehong autonomous prefecture
Background: Southwest China is home to more than 30 ethnic minority groups. Since most of these populations reside in mountainous areas, convenient access to medical services is an important metric of how well their livelihoods are being protected. Methods: This paper proposes a medical convenience index (MCI) and a computation model for mountain residents, taking into account conditions including topography, geology, and climate. Road network data were used for a comprehensive evaluation from three perspectives: vulnerability, complexity, and accessibility. The model is innovative in that it considers road network vulnerability in mountainous areas and proposes a method of evaluating that vulnerability by measuring the impacts of debris flows based only on links. The model was used to compute and rank the MCIs of the settlements of each ethnic population in the Dehong Dai and Jingpo Autonomous Prefecture of Yunnan Province in 2009 and 2015. Settlement data for the two periods were also used to analyze the spatial differentiation of medical convenience levels within the study area. Results: The medical convenience levels of many settlements improved significantly: 80 settlements improved greatly, while another 103 improved slightly. Areas with obvious improvement were distributed in clusters, mainly in southwestern Yingjiang County, northern Longchuan County, eastern Lianghe County, and the region where Lianghe and Longchuan counties and Mang City intersect. Conclusions: Development of the road network was found to be a major contributor to the improvement in MCI for mountain residents over the six-year period.
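As a rough illustration of how an index built from several sub-scores might be combined, the sketch below folds three normalized indicators (vulnerability, complexity, accessibility) into a single value. The equal weights and the convention that each sub-score is scaled to [0, 1] with higher meaning more convenient (so a vulnerability measure would first be inverted) are assumptions made only for this sketch; the paper's actual MCI computation is not reproduced here.

```python
# Purely illustrative: the weighting scheme below is an assumption, not the study's actual MCI model.
def medical_convenience_index(vulnerability_score, complexity_score, accessibility_score,
                              weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine three sub-scores, each pre-normalized to [0, 1] with higher = more convenient."""
    w_v, w_c, w_a = weights
    return w_v * vulnerability_score + w_c * complexity_score + w_a * accessibility_score

# Hypothetical settlement: robust network (low raw vulnerability -> high score),
# moderate network complexity and accessibility.
print(round(medical_convenience_index(0.8, 0.5, 0.6), 3))
```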
Related work
Current studies by Chinese and international scholars on medical and healthcare services for residents mostly focus on accessibility to hospitals or on the uniform distribution of medical and healthcare facilities [5]. When analyzing medical accessibility, the minimum travel time or distance is commonly used because the requisite data are easily available, the calculation method is simple, and the results are readily understood [6–11]. In recent years, the most widely used methods for studying medical accessibility and the balance between demand for and supply of medical and healthcare facilities are the floating catchment area (FCA) method and its enhancements, such as the two-step floating catchment area (2SFCA) method, the enhanced two-step floating catchment area (E2SFCA) method, and the three-step floating catchment area (3SFCA) method. FCA originated from spatial decomposition [12] and is a special case of the gravity model [13]. The application and improvement of this method has made calculations simpler and the results more rational [5, 6, 14–20]. Methods used in other studies include the gravity model [6, 16, 21] and kernel density estimation (KDE) [20, 22, 23]. Neutens (2015) [24] further analyzed the advantages and disadvantages of these methods for studying medical accessibility. However, there remains a lack of research on road network vulnerability and its impact on residents' medical convenience levels.
Despite numerous studies on road network vulnerability over the past two decades, the concept of vulnerability has yet to be clearly defined. It is often explained jointly with related terms such as risk, reliability, flexibility, robustness, and resilience, and many scholars have attempted to explore the inter-relationships between these terms [25–28]. A review of the literature indicates that research on road network vulnerability generally adopts one of the following perspectives:
i. The connectivity of the road network, taking into account its topological structure. For example, Kurauchi et al. (2009) [29] determined the critical index of each road segment by calculating the number of connecting links between journey origin and destination, thereby identifying critical segments in the road network. Rupi et al. (2015) [30] evaluated the vulnerability of mountain road networks by examining the connectivity between start and end points before grading them.
ii. The disruption or reduced traffic capacity of the network after a segment has deteriorated, which lowers regional accessibility and leads to socioeconomic losses; these losses are used to identify and grade critical segments. For example, Jenelius et al. (2006) [27] ranked the importance of different roads based on their daily traffic volumes and then simulated the impact of each road grading on travel options and durations under various scenarios. Chen et al. (2007) [31] determined the vulnerability level of a road segment by the impact of its failure on regional accessibility, while Qiang and Nagurney (2008) [32] identified the relative importance and ranking of nodes and links within a road network by documenting the traffic volumes and behaviors of the network. Similarly, Jenelius and Mattsson (2012) [33] used traffic volumes to calculate the importance of the road network within each grid cell.
iii. Assessment of the impact of road network deterioration or obstruction on regional accessibility by simulating a particular scenario, for example a natural disaster or a deliberate attack; the results provide decision support for the transportation and delivery of relief provisions and for disaster recovery efforts. Bono and Gutiérrez (2011) [34] analyzed the impacts of the road network disruption caused by the Haiti earthquake on the accessibility of the urban area of Port-au-Prince.
iv. Optimization of the computational models used to evaluate road network vulnerability. Some scholars have focused on model optimization because grading the vulnerability of every segment of an entire road network imposes a heavy computational burden. On the basis of the Hansen integral index, Luathep (2011) [35] used a relative accessibility index (AI) to analyze the socioeconomic impacts of road network deterioration that disrupts the network or reduces its traffic capacity; the AIs of all critical road segments before and after deterioration were then compared for categorization and ranking. This method reduces both the computational burden and the memory requirements.
In the present study, road network vulnerability is determined by combining data on the paths of debris flow hazards with only the links in the topological structure of the road network.
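To make the link-removal idea in perspectives (ii) and (iv) concrete, the sketch below scores each road segment by how much settlement-to-nearest-hospital travel time increases when that segment is removed (for example, because a debris flow blocks it). This is only an illustration of the generic approach surveyed above, not the MCI model used in this study; the toy network, the travel times, and the use of the networkx library are all assumptions.

```python
import networkx as nx

# Toy weighted road graph: nodes are junctions/settlements/hospitals,
# edge weights are travel times in minutes. All names and values are hypothetical.
G = nx.Graph()
G.add_weighted_edges_from(
    [("A", "B", 10), ("B", "C", 15), ("A", "D", 30), ("D", "C", 20), ("C", "H", 5)],
    weight="time",
)
settlements = ["A", "B", "D"]
hospitals = ["H"]

def nearest_hospital_time(graph, origin):
    """Shortest travel time from a settlement to any reachable hospital (inf if cut off)."""
    times = []
    for h in hospitals:
        try:
            times.append(nx.shortest_path_length(graph, origin, h, weight="time"))
        except nx.NetworkXNoPath:
            pass
    return min(times) if times else float("inf")

baseline = {s: nearest_hospital_time(G, s) for s in settlements}

# Link-based vulnerability: remove one edge at a time (a blocked segment) and
# accumulate the resulting increase in settlement-to-hospital travel time.
# A disconnected settlement contributes an infinite penalty; in practice this
# would be capped or replaced by a long detour time.
vulnerability = {}
for u, v in list(G.edges()):
    degraded_graph = G.copy()
    degraded_graph.remove_edge(u, v)
    degraded = {s: nearest_hospital_time(degraded_graph, s) for s in settlements}
    vulnerability[(u, v)] = sum(degraded[s] - baseline[s] for s in settlements)

for link, extra_time in sorted(vulnerability.items(), key=lambda kv: -kv[1]):
    print(link, extra_time)
```

The same loop could instead feed the baseline and degraded travel times into a catchment-based accessibility measure (for instance a 2SFCA-style supply-to-demand ratio) rather than the simple nearest-hospital time used in this sketch.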
[ "26286033", "18190678", "22587023", "23964751", "17018146", "16574290", "7100960", "22469488", "24077335", "16548411", "15446621" ]
[ { "pmid": "26286033", "title": "Spatial inequity in access to healthcare facilities at a county level in a developing country: a case study of Deqing County, Zhejiang, China.", "abstract": "BACKGROUND\nThe inequities in healthcare services between regions, urban and rural, age groups and diverse income groups have been growing rapidly in China. Equal access to basic medical and healthcare services has been recognized as \"a basic right of the people\" by Chinese government. Spatial accessibility to healthcare facilities has received huge attention in Chinese case studies but been less studied particularly at a county level due to limited availability of high-resolution spatial data. This study is focused on measuring spatial accessibility to healthcare facilities in Deqing County. The spatial inequity between the urban (town) and rural is assessed and three scenarios are designed and built to examine which scenario is instrumental for better reducing the spatial inequity.\n\n\nMETHODS\nThis study utilizes highway network data, Digital Elevation Model (DEM), location of hospitals and clinics, 2010 census data at the finest level - village committee, residential building footprint and building height. Areal weighting method is used to disaggregate population data from village committee level to residential building cell level. Least cost path analysis is applied to calculate the travel time from each building cell to its closest healthcare facility. Then an integral accessibility will be calculated through weighting the travel time to the closest facility between three levels. The spatial inequity in healthcare accessibility between the town and rural areas is examined based on the coverages of areas and populations. The same method is used to compare three scenarios aimed at reducing such spatial inequity - relocation of hospitals, updates of weighting values, and the combination of both.\n\n\nRESULTS\n50.03% of residents can reach a county hospital within 15 min by driving, 95.77% and 100% within 30 and 60 min respectively. 55.14% of residents can reach a town hospital within 5 min, 98.04% and 100% within 15 and 30 min respectively. 57.86% of residential building areas can reach a village clinic within 5 min, 92.65% and 99.22% within 10 and 15 min. After weighting the travel time between the three-level facilities, 30.87% of residents can reach a facility within 5 min, 80.46%% and 99.88% within 15 and 30 min respectively.\n\n\nCONCLUSIONS\nThe healthcare accessibility pattern of Deqing County has exhibited spatial inequity between the town and rural areas, with the best accessibility in the capital of the county and poorest in the West of the county. There is a high negative correlation between population ageing and healthcare accessibility. Allocation of more advanced medical and healthcare equipment and highly skillful doctors and nurses to village clinics will be an efficient means of reducing the spatial inequity and further consolidating the national medical security system. GIS (Geographical Information Systems) methods have proven successful method of providing quantitative evidence for policy analysis although the data sets and methods could be further improved." }, { "pmid": "18190678", "title": "Validation of the geographic position of EPER-Spain industries.", "abstract": "BACKGROUND\nThe European Pollutant Emission Register in Spain (EPER-Spain) is a public inventory of pollutant industries created by decision of the European Union. 
The location of these industries is geocoded and the first published data correspond to 2001. Publication of these data will allow for quantification of the effect of proximity to one or more such plant on cancer and all-cause mortality observed in nearby towns. However, as errors have been detected in the geocoding of many of the pollutant foci shown in the EPER, it was decided that a validation study should be conducted into the accuracy of these co-ordinates. EPER-Spain geographic co-ordinates were drawn from the European Environment Agency (EEA) server and the Spanish Ministry of the Environment (MOE). The Farm Plot Geographic Information System (Sistema de Información Geográfica de Parcelas Agrícolas) (SIGPAC) enables orthophotos (digitalized aerial images) of any territorial point across Spain to be obtained. Through a search of co-ordinates in the SIGPAC, all the industrial foci (except farms) were located. The quality criteria used to ascertain possible errors in industrial location were high, medium and low quality, where industries were situated at a distance of less than 500 metres, more than 500 metres but less than 1 kilometre, and more than 1 kilometre from their real locations, respectively.\n\n\nRESULTS\nInsofar as initial registry quality was concerned, 84% of industrial complexes were inaccurately positioned (low quality) according to EEA data versus 60% for Spanish MOE data. The distribution of the distances between the original and corrected co-ordinates for each of the industries on the registry revealed that the median error was 2.55 kilometres for Spain overall (according to EEA data). The Autonomous Regions that displayed most errors in industrial geocoding were Murcia, Canary Islands, Andalusia and Madrid. Correct co-ordinates were successfully allocated to 100% of EPER-Spain industries.\n\n\nCONCLUSION\nKnowing the exact location of pollutant foci is vital to obtain reliable and valid conclusions in any study where distance to the focus is a decisive factor, as in the case of the consequences of industrial pollution on the health of neighbouring populations." }, { "pmid": "22587023", "title": "Measuring geographic access to health care: raster and network-based methods.", "abstract": "BACKGROUND\nInequalities in geographic access to health care result from the configuration of facilities, population distribution, and the transportation infrastructure. In recent accessibility studies, the traditional distance measure (Euclidean) has been replaced with more plausible measures such as travel distance or time. Both network and raster-based methods are often utilized for estimating travel time in a Geographic Information System. Therefore, exploring the differences in the underlying data models and associated methods and their impact on geographic accessibility estimates is warranted.\n\n\nMETHODS\nWe examine the assumptions present in population-based travel time models. Conceptual and practical differences between raster and network data models are reviewed, along with methodological implications for service area estimates. Our case study investigates Limited Access Areas defined by Michigan's Certificate of Need (CON) Program. Geographic accessibility is calculated by identifying the number of people residing more than 30 minutes from an acute care hospital. Both network and raster-based methods are implemented and their results are compared. 
We also examine sensitivity to changes in travel speed settings and population assignment.\n\n\nRESULTS\nIn both methods, the areas identified as having limited accessibility were similar in their location, configuration, and shape. However, the number of people identified as having limited accessibility varied substantially between methods. Over all permutations, the raster-based method identified more area and people with limited accessibility. The raster-based method was more sensitive to travel speed settings, while the network-based method was more sensitive to the specific population assignment method employed in Michigan.\n\n\nCONCLUSIONS\nDifferences between the underlying data models help to explain the variation in results between raster and network-based methods. Considering that the choice of data model/method may substantially alter the outcomes of a geographic accessibility analysis, we advise researchers to use caution in model selection. For policy, we recommend that Michigan adopt the network-based method or reevaluate the travel speed assignment rule in the raster-based method. Additionally, we recommend that the state revisit the population assignment method." }, { "pmid": "23964751", "title": "Accessibility to primary health care in Belgium: an evaluation of policies awarding financial assistance in shortage areas.", "abstract": "BACKGROUND\nIn many countries, financial assistance is awarded to physicians who settle in an area that is designated as a shortage area to prevent unequal accessibility to primary health care. Today, however, policy makers use fairly simple methods to define health care accessibility, with physician-to-population ratios (PPRs) within predefined administrative boundaries being overwhelmingly favoured. Our purpose is to verify whether these simple methods are accurate enough for adequately designating medical shortage areas and explore how these perform relative to more advanced GIS-based methods.\n\n\nMETHODS\nUsing a geographical information system (GIS), we conduct a nation-wide study of accessibility to primary care physicians in Belgium using four different methods: PPR, distance to closest physician, cumulative opportunity, and floating catchment area (FCA) methods.\n\n\nRESULTS\nThe official method used by policy makers in Belgium (calculating PPR per physician zone) offers only a crude representation of health care accessibility, especially because large contiguous areas (physician zones) are considered. We found substantial differences in the number and spatial distribution of medical shortage areas when applying different methods.\n\n\nCONCLUSIONS\nThe assessment of spatial health care accessibility and concomitant policy initiatives are affected by and dependent on the methodology used. The major disadvantage of PPR methods is its aggregated approach, masking subtle local variations. Some simple GIS methods overcome this issue, but have limitations in terms of conceptualisation of physician interaction and distance decay. Conceptually, the enhanced 2-step floating catchment area (E2SFCA) method, an advanced FCA method, was found to be most appropriate for supporting areal health care policies, since this method is able to calculate accessibility at a small scale (e.g., census tracts), takes interaction between physicians into account, and considers distance decay. 
While at present in health care research methodological differences and modifiable areal unit problems have remained largely overlooked, this manuscript shows that these aspects have a significant influence on the insights obtained. Hence, it is important for policy makers to ascertain to what extent their policy evaluations hold under different scales of analysis and when different methods are used." }, { "pmid": "17018146", "title": "Defining rational hospital catchments for non-urban areas based on travel-time.", "abstract": "BACKGROUND\nCost containment typically involves rationalizing healthcare service delivery through centralization of services to achieve economies of scale. Hospitals are frequently the chosen site of cost containment and rationalization especially in rural areas. Socio-demographic and geographic characteristics make hospital service allocation more difficult in rural and remote regions. This research presents a methodology to model rational catchments or service areas around rural hospitals--based on travel time.\n\n\nRESULTS\nThis research employs a vector-based GIS network analysis to model catchments that better represent access to hospital-based healthcare services in British Columbia's rural and remote areas. The tool permits modelling of alternate scenarios in which access to different baskets of services (e.g. rural maternity care or ICU) are assessed. In addition, estimates of the percentage of population that is served--or not served--within specified travel times are calculated.\n\n\nCONCLUSION\nThe modelling tool described is useful for defining true geographical catchments around rural hospitals as well as modelling the percentage of the population served within certain time guidelines (e.g. one hour) for specific health services. It is potentially valuable to policy makers and health services allocation specialists." }, { "pmid": "16574290", "title": "Modelling and understanding primary health care accessibility and utilization in rural South Africa: an exploration using a geographical information system.", "abstract": "Physical access to health care affects a large array of health outcomes, yet meaningfully estimating physical access remains elusive in many developing country contexts where conventional geographical techniques are often not appropriate. We interviewed (and geographically positioned) 23,000 homesteads regarding clinic usage in the Hlabisa health sub-district, KwaZulu-Natal, South Africa. We used a cost analysis within a geographical information system to estimate mean travel time (at any given location) to clinic and to derive the clinic catchments. The model takes into account the proportion of people likely to be using public transport (as a function of estimated walking time to clinic), the quality and distribution of the road network and natural barriers, and was calibrated using reported travel times. We used the model to investigate differences in rural, urban and peri-urban usage of clinics by homesteads in the study area and to quantify the effect of physical access to clinic on usage. We were able to predict the reported clinic used with an accuracy of 91%. The median travel time to nearest clinic is 81 min and 65% of homesteads travel 1h or more to attend the nearest clinic. There was a significant logistic decline in usage with increasing travel time (p < 0.0001). 
The adjusted odds of a homestead within 30 min of a clinic making use of the clinics were 10 times (adjusted OR = 10; 95 CI 6.9-14.4) those of a homestead in the 90-120 min zone. The adjusted odds of usage of the clinics by urban homesteads were approximately 20/30 times smaller than those of their rural/peri-urban counterparts, respectively, after controlling for systematic differences in travel time to clinic. The estimated median travel time to the district hospital is 170 min. The methodology constitutes a framework for modelling physical access to clinics in many developing country settings." }, { "pmid": "22469488", "title": "Investigating impacts of positional error on potential health care accessibility.", "abstract": "Accessibility to health services at the local or community level is an effective approach to measuring health care delivery in various constituencies in Canada and the United States. GIS and spatial methods play an important role in measuring potential access to health services. The Three-Step Floating Catchment Area (3SFCA) method is a GIS based procedure developed to calculate potential (spatial) accessibility as a ratio of primary health care (PHC) providers to the surrounding population in urban settings. This method uses PHC provider locations in textual/address format supplied by local, regional, or national health authorities. An automated geocoding procedure is normally used to convert such addresses to a pair of geographic coordinates. The accuracy of geocoding depends on the type of reference data and the amount of value-added effort applied. This research investigates the success and accuracy of six geocoding methods as well as how geocoding error affects the 3SFCA method. ArcGIS software is used for geocoding and spatial accessibility estimation. Results will focus on two implications of geocoding: (1) the success and accuracy of different automated and value-added geocoding; and (2) the implications of these geocoding methods for GIS-based methods that generalise results based on location data." }, { "pmid": "24077335", "title": "Measuring spatial accessibility to healthcare for populations with multiple transportation modes.", "abstract": "Few measures of healthcare accessibility have considered multiple transportation modes when people seek healthcare. Based on the framework of the 2 Step Floating Catchment Area Method (2SFCAM), we proposed an innovative method to incorporate transportation modes into the accessibility estimation. Taking Florida, USA, as a study area, we illustrated the implementation of the multi-mode 2SFCAM, and compared the accessibility estimates with those from the traditional single-mode 2SFCAM. The results suggest that the multi-modal method, by accounting for heterogeneity in populations, provides more realistic accessibility estimations, and thus offers a better guidance for policy makers to mitigate health inequity issues." }, { "pmid": "16548411", "title": "Comparing GIS-based methods of measuring spatial accessibility to health services.", "abstract": "The inequitable geographic distribution of health care resources has long been recognized as a problem in the United States. Traditional measures, such as a simple ratio of supply to demand in an area or distance to the closest provider, are easy measures for spatial accessibility. 
However the former one does not consider interactions between patients and providers across administrative borders and the latter does not account for the demand side, that is, the competition for the supply. With advancements in GIS, however, better measures of geographic accessibility, variants of a gravity model, have been applied. Among them are (1) a two-step floating catchment area (2SFCA) method and (2) a kernel density (KD) method. This microscopic study compared these two GIS-based measures of accessibility in our case study of dialysis service centers in Chicago. Our comparison study found a significant mismatch of the accessibility ratios between the two methods. Overall, the 2SFCA method produced better accessibility ratios. There is room for further improvement of the 2SFCA method-varying the radius of service area according to the type of provider or the type of neighborhood and determining the appropriate weight equation form-still warrant further study." }, { "pmid": "15446621", "title": "Prenatal care need and access: a GIS analysis.", "abstract": "Many municipalities provide special prenatal care services targeted to low-income women whose access to prenatal care is constrained. For such services to be successful and effective, they must be geographically targeted to the places where low-income, high-need mothers live. This paper presents a GIS analysis of prenatal care need and clinic services for low-income mothers in Brooklyn, NY. We analyze fine-grained geographic variation in need using data on the residential locations of recent mothers who lack health insurance or are covered by Medicaid. Spatial statistical methods are used to create spatially smoothed maps of the density of mothers and corresponding maps of the density of prenatal clinics. For these mothers, clinic density is positively associated with early initiation of prenatal care. Although clinic locations conform relatively well to the residential concentrations of high-need women, we identify several underserved areas with large numbers of needy women and few clinics available." } ]
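The abstracts above repeatedly describe floating catchment area measures (2SFCA, E2SFCA, 3SFCA) for spatial accessibility. As a hedged illustration of the basic two-step computation they refer to, and not a reimplementation of any of the cited studies, the following Python sketch computes a simple 2SFCA score using straight-line distance and a binary catchment; the function name, toy coordinates, capacities, and the 10 km threshold are all hypothetical.

import numpy as np

def two_step_fca(pop_xy, pop_counts, prov_xy, prov_capacity, d0):
    """Basic 2SFCA: accessibility as provider-to-population ratios accumulated
    over catchments of radius d0 (here straight-line distance, for illustration)."""
    # pairwise distances between population points and providers
    d = np.linalg.norm(pop_xy[:, None, :] - prov_xy[None, :, :], axis=2)
    within = d <= d0                                   # binary catchment membership

    # Step 1: for each provider j, supply-to-demand ratio R_j over its catchment
    demand = (within * pop_counts[:, None]).sum(axis=0)
    demand_safe = np.where(demand > 0, demand, 1.0)    # avoid division by zero
    R = np.where(demand > 0, prov_capacity / demand_safe, 0.0)

    # Step 2: for each population point i, sum the ratios of reachable providers
    return (within * R[None, :]).sum(axis=1)

# hypothetical toy data: coordinates in km, capacities as physician counts
pop_xy = np.array([[0.0, 0.0], [5.0, 0.0], [20.0, 5.0]])
pop_counts = np.array([1000.0, 500.0, 800.0])
prov_xy = np.array([[1.0, 1.0], [18.0, 4.0]])
prov_capacity = np.array([10.0, 5.0])
print(two_step_fca(pop_xy, pop_counts, prov_xy, prov_capacity, d0=10.0))

A real analysis would replace the Euclidean distance with network or raster travel times and could add a distance-decay weight within the catchment, as in the enhanced E2SFCA discussed above.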
Frontiers in Neurorobotics
29311888
PMC5742219
10.3389/fnbot.2017.00066
Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots
In this paper, we propose a Bayesian generative model that can form multiple categories based on each sensory-channel and can associate words with any of the four sensory-channels (action, position, object, and color). This paper focuses on cross-situational learning using the co-occurrence between words and information of sensory-channels in complex situations rather than conventional situations of cross-situational learning. We conducted a learning scenario using a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided a sentence that describes an object of visual attention and an accompanying action to the robot. The scenario was set as follows: the number of words per sensory-channel was three or four, and the number of trials for learning was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method was able to estimate the multiple categorizations and to learn the relationships between multiple sensory-channels and words accurately. In addition, we conducted an action generation task and an action description task based on word meanings learned in the cross-situational learning scenario. The experimental results showed that the robot could successfully use the word meanings learned by using the proposed method.
2. Related Work. 2.1. Lexical Acquisition by a Robot. Studies of language acquisition also constitute a constructive approach to the human developmental process (Cangelosi and Schlesinger, 2015), language grounding (Steels and Hild, 2012), and symbol emergence (Taniguchi et al., 2016c). One approach to studying language acquisition focuses on the estimation of phonemes and words from speech signals (Goldwater et al., 2009; Heymann et al., 2014; Taniguchi et al., 2016d). However, these studies used only continuous speech signals and did not exploit co-occurrence with other sensory information, e.g., visual, tactile, and proprioceptive information; the robot was therefore not required to understand the meaning of words. Yet, understanding word meanings, i.e., grounding meanings to words, is important for human–robot interaction (HRI). Roy and Pentland (2002) proposed a computational model by which a robot could learn the names of objects from images of the objects and natural infant-directed speech. Their model could perform speech segmentation, lexical acquisition, and visual categorization. Hörnstein et al. (2010) proposed a method based on pattern recognition and hierarchical clustering that mimics a human infant to enable a humanoid robot to acquire language; it allowed the robot to acquire phonemes and words from visual and auditory information through interaction with a human. Nakamura et al. (2011a,b) proposed multimodal latent Dirichlet allocation (MLDA) and a multimodal hierarchical Dirichlet process (MHDP), which enable the categorization of objects from multimodal information, i.e., visual, auditory, haptic, and word information; using multimodal information made object categorization more accurate. Taniguchi et al. (2016a) proposed a method for simultaneously estimating self-positions and words from noisy sensory information and an uttered word; it integrated ambiguous speech recognition results with a self-localization method to learn spatial concepts. However, Taniguchi et al. (2016a) assumed that the name of a place would be learned from a single uttered word. Taniguchi et al. (2016b) proposed a nonparametric Bayesian spatial concept acquisition method (SpCoA) based on place categorization and unsupervised word segmentation; SpCoA could acquire the names of places from spoken sentences containing multiple words. In the above studies, a tutor directed the robot to focus on one target, e.g., an object or a place, using one word or one sentence. In a more realistic setting, however, the robot needs to determine which event in a complicated situation is associated with which word in a sentence. CSL, which extends the aforementioned lexical-acquisition settings, is therefore a more difficult and more important problem in robotics. Our research addresses the CSL problem because of its importance for lexical acquisition by a robot. 2.2. Cross-Situational Learning. 2.2.1. Conventional Cross-Situational Learning Studies. Frank et al. (2007, 2009) proposed a Bayesian model that unifies statistical and intentional approaches to cross-situational word learning. They conducted basic CSL experiments aimed at teaching object names and discussed the effectiveness of mutual exclusivity for CSL in probabilistic models. Fontanari et al. (2009) performed object-word mapping from the co-occurrence between objects and words using a method based on neural modeling fields (NMF).
In “modi” experiments using iCub, their findings were similar to those reported by Smith and Samuelson (2010). The above CSL studies were inspired by experiments with human infants and assumed a simple situation, such as learning the relationship between objects and words, as the early stage of CSL. However, real environments are varied and more complex. In this study, we focus on the problem of CSL with utterances that include multiple words and observations from multiple sensory-channels. 2.2.2. Probabilistic Models. Qu and Chai (2008, 2010) proposed a learning method that automatically acquires novel words for an interactive system. For lexical acquisition, they focused on the co-occurrence between word sequences and entity sequences tracked by eye-gaze. Their method, which is based on the IBM translation model (Brown et al., 1993), estimates the word-entity association probability. However, their studies did not achieve fully unsupervised lexical acquisition because they used domain knowledge based on WordNet. Matuszek et al. (2012) presented a joint model of language and perception for grounded attribute learning. This model can identify which novel words correspond to color, shape, or no attribute at all. Celikkanat et al. (2014) proposed an unsupervised learning method based on latent Dirichlet allocation (LDA) that allows many-to-many relationships between objects and contexts; their method was able to predict the context from observed information and to plan actions using the learned contexts. Chen et al. (2016) proposed an active learning method for cross-situational learning of object-word associations. In experiments, they showed that LDA was more effective than non-negative matrix factorization (NMF). However, they did not perform any HRI experiment using the learned language. In our study, we perform experiments that use the word meanings learned through CSL to generate actions and to describe the current situation. 2.2.3. Neural Network Models. Yamada et al. (2015, 2016) proposed learning methods based on a stochastic continuous-time recurrent neural network (CTRNN) and a multiple time-scales recurrent neural network (MTRNN). They showed that the learned network formed an attractor structure representing both the relationships between words and actions and the temporal pattern of the task. Stramandinoli et al. (2017) proposed partially recurrent neural networks (P-RNNs) for learning the relationships between motor primitives and objects. Zhong et al. (2017) proposed multiple time-scales gated recurrent units (MTGRU), inspired by the MTRNN and long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997), and showed, through multimodal interaction experiments using iCub, that the MTGRU could learn long-term dependencies in high-dimensional multimodal datasets. The learning results of such neural network (NN) approaches are difficult to interpret because the time-series data are mapped to a continuous latent space, and words are associated with objects and actions only implicitly. NN methods also generally require large amounts of training data. In contrast, the learning results of Bayesian methods are easier to interpret, and Bayesian methods can learn efficiently from less data.
We propose a Bayesian generative model that can perform CSL, including action learning. 2.2.4. Robot-to-Robot Interaction. Spranger (2015) and Spranger and Steels (2015) proposed a method for the co-acquisition of semantics and syntax in spatial language; their experimental results showed that the robot could acquire spatial grammar and categories related to spatial direction. Heath et al. (2016) implemented mobile robots (Lingodroids) capable of learning a lexicon through robot-to-robot interaction, using two robots equipped with different sensors and simultaneous localization and mapping (SLAM) algorithms. These studies reported that the robots created lexicons for places and for distances expressed in terms of travel time. However, these studies did not consider lexical acquisition through HRI, which we consider necessary for a robot to learn human language. 2.2.5. Multimodal Categorization and Word Learning. Attamimi et al. (2016) proposed multilayered MLDA (mMLDA), which hierarchically integrates multiple MLDAs as an extension of Nakamura et al. (2011a). They estimated the relationships among words and multiple concepts by weighting the learned words according to their mutual information in a post-processing step. In their model, the same uttered words are generated from three kinds of concepts, i.e., the model holds three separate variables for the same word information under different concepts. We consider this an unnatural assumption for a generative model of words; in our proposed model, in contrast, uttered words are generated from a single variable, which we consider a more natural assumption than that of Attamimi's model. In addition, their study did not use data obtained autonomously by the robot: because the action data were human motions obtained with a Kinect-based motion capture system and a wearable sensor device attached to the human, the robot in Attamimi et al. (2016) could not learn the relationships between its own actions and words. In our study, the robot learns action categories from its own subjective actions and can therefore perform a learned action in response to a spoken human sentence. In this paper, we focus on complicated CSL problems arising in situations with multiple objects and sentences containing words related to various sensory-channels, such as the name, position, and color of objects and the action carried out on an object.
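The full model proposed in the paper is a Bayesian generative model over four sensory-channels; as a much smaller, hedged illustration of the cross-situational idea discussed above (inferring which channel a word refers to purely from co-occurrence across ambiguous situations), the Python sketch below accumulates word-category co-occurrence counts per channel and scores each channel by how concentrated the word's distribution is. The trial data, names, and scoring rule are illustrative assumptions, not taken from the paper.

import numpy as np
from collections import defaultdict

# Minimal cross-situational co-occurrence learner (illustrative only):
# each trial pairs a bag of words with one observed category per sensory-channel.
channels = ["action", "position", "object", "color"]

trials = [
    (["grasp", "box", "left"], {"action": 0, "position": 0, "object": 1, "color": 2}),
    (["push", "box", "red"],   {"action": 1, "position": 1, "object": 1, "color": 0}),
    (["grasp", "ball", "red"], {"action": 0, "position": 2, "object": 0, "color": 0}),
]

# counts[channel][word][category] accumulates word/category co-occurrences
counts = {c: defaultdict(lambda: defaultdict(float)) for c in channels}
for words, obs in trials:
    for w in words:
        for c in channels:
            counts[c][w][obs[c]] += 1.0

def word_channel_association(word, alpha=0.1, n_cats=3):
    """Score each channel by how concentrated the word's smoothed co-occurrence
    distribution is (log n_cats minus its entropy); higher means stronger association."""
    scores = {}
    for c in channels:
        counts_c = np.array([counts[c][word][k] + alpha for k in range(n_cats)])
        p = counts_c / counts_c.sum()
        scores[c] = float(np.log(n_cats) + (p * np.log(p)).sum())
    return scores

print(word_channel_association("grasp"))   # highest score on the "action" channel

Running this, "grasp" scores highest on the action channel because it co-occurs with the same action category in every trial while the categories in the other channels vary.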
[ "19596549", "19389131", "19409539", "24478693", "9377276", "21635302", "3365937", "3802971", "27471463" ]
[ { "pmid": "19596549", "title": "Cross-situational learning of object-word mapping using Neural Modeling Fields.", "abstract": "The issue of how children learn the meaning of words is fundamental to developmental psychology. The recent attempts to develop or evolve efficient communication protocols among interacting robots or virtual agents have brought that issue to a central place in more applied research fields, such as computational linguistics and neural networks, as well. An attractive approach to learning an object-word mapping is the so-called cross-situational learning. This learning scenario is based on the intuitive notion that a learner can determine the meaning of a word by finding something in common across all observed uses of that word. Here we show how the deterministic Neural Modeling Fields (NMF) categorization mechanism can be used by the learner as an efficient algorithm to infer the correct object-word mapping. To achieve that we first reduce the original on-line learning problem to a batch learning problem where the inputs to the NMF mechanism are all possible object-word associations that could be inferred from the cross-situational learning scenario. Since many of those associations are incorrect, they are considered as clutter or noise and discarded automatically by a clutter detector model included in our NMF implementation. With these two key ingredients--batch learning and clutter detection--the NMF mechanism was capable to infer perfectly the correct object-word mapping." }, { "pmid": "19389131", "title": "Using speakers' referential intentions to model early cross-situational word learning.", "abstract": "Word learning is a \"chicken and egg\" problem. If a child could understand speakers' utterances, it would be easy to learn the meanings of individual words, and once a child knows what many words mean, it is easy to infer speakers' intended meanings. To the beginning learner, however, both individual word meanings and speakers' intentions are unknown. We describe a computational model of word learning that solves these two inference problems in parallel, rather than relying exclusively on either the inferred meanings of utterances or cross-situational word-meaning associations. We tested our model using annotated corpus data and found that it inferred pairings between words and object concepts with higher precision than comparison models. Moreover, as the result of making probabilistic inferences about speakers' intentions, our model explains a variety of behavioral phenomena described in the word-learning literature. These phenomena include mutual exclusivity, one-trial learning, cross-situational learning, the role of words in object individuation, and the use of inferred intentions to disambiguate reference." }, { "pmid": "19409539", "title": "A Bayesian framework for word segmentation: exploring the effects of context.", "abstract": "Since the experiments of Saffran et al. [Saffran, J., Aslin, R., & Newport, E. (1996). Statistical learning in 8-month-old infants. Science, 274, 1926-1928], there has been a great deal of interest in the question of how statistical regularities in the speech stream might be used by infants to begin to identify individual words. In this work, we use computational modeling to explore the effects of different assumptions the learner might make regarding the nature of words--in particular, how these assumptions affect the kinds of words that are segmented from a corpus of transcribed child-directed speech. 
We develop several models within a Bayesian ideal observer framework, and use them to examine the consequences of assuming either that words are independent units, or units that help to predict other units. We show through empirical and theoretical results that the assumption of independence causes the learner to undersegment the corpus, with many two- and three-word sequences (e.g. what's that, do you, in the house) misidentified as individual words. In contrast, when the learner assumes that words are predictive, the resulting segmentation is far more accurate. These results indicate that taking context into account is important for a statistical word segmentation strategy to be successful, and raise the possibility that even young infants may be able to exploit more subtle statistical patterns than have usually been considered." }, { "pmid": "24478693", "title": "A psychology based approach for longitudinal development in cognitive robotics.", "abstract": "A major challenge in robotics is the ability to learn, from novel experiences, new behavior that is useful for achieving new goals and skills. Autonomous systems must be able to learn solely through the environment, thus ruling out a priori task knowledge, tuning, extensive training, or other forms of pre-programming. Learning must also be cumulative and incremental, as complex skills are built on top of primitive skills. Additionally, it must be driven by intrinsic motivation because formative experience is gained through autonomous activity, even in the absence of extrinsic goals or tasks. This paper presents an approach to these issues through robotic implementations inspired by the learning behavior of human infants. We describe an approach to developmental learning and present results from a demonstration of longitudinal development on an iCub humanoid robot. The results cover the rapid emergence of staged behavior, the role of constraints in development, the effect of bootstrapping between stages, and the use of a schema memory of experiential fragments in learning new skills. The context is a longitudinal experiment in which the robot advanced from uncontrolled motor babbling to skilled hand/eye integrated reaching and basic manipulation of objects. This approach offers promise for further fast and effective sensory-motor learning techniques for robotic learning." }, { "pmid": "9377276", "title": "Long short-term memory.", "abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. 
LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms." }, { "pmid": "21635302", "title": "Language-relative construal of individuation constrained by universal ontology: revisiting language universals and linguistic relativity.", "abstract": "Objects and substances bear fundamentally different ontologies. In this article, we examine the relations between language, the ontological distinction with respect to individuation, and the world. Specifically, in cross-linguistic developmental studies that follow Imai and Gentner (1997), we examine the question of whether language influences our thought in different forms, like (1) whether the language-specific construal of entities found in a word extension context (Imai & Gentner, 1997) is also found in a nonlinguistic classification context; (2) whether the presence of labels per se, independent of the count-mass syntax, fosters ontology-based classification; (3) in what way, if at all, the count-mass syntax that accompanies a label changes English speakers' default construal of a given entity? On the basis of the results, we argue that the ontological distinction concerning individuation is universally shared and functions as a constraint on early learning of words. At the same time, language influences one's construal of entities cross-lingistically and developmentally, and causes a temporary change of construal within a single language. We provide a detailed discussion of how each of these three ways language may affect the construal of entities, and discuss how our universally possessed knowledge interacts with language both within a single language and in cross-linguistic context." }, { "pmid": "3802971", "title": "Joint attention and early language.", "abstract": "This paper reports 2 studies that explore the role of joint attentional processes in the child's acquisition of language. In the first study, 24 children were videotaped at 15 and 21 months of age in naturalistic interaction with their mothers. Episodes of joint attentional focus between mother and child--for example, joint play with an object--were identified. Inside, as opposed to outside, these episodes both mothers and children produced more utterances, mothers used shorter sentences and more comments, and dyads engaged in longer conversations. Inside joint episodes maternal references to objects that were already the child's focus of attention were positively correlated with the child's vocabulary at 21 months, while object references that attempted to redirect the child's attention were negatively correlated. No measures from outside these episodes related to child language. In an experimental study, an adult attempted to teach novel words to 10 17-month-old children. Words referring to objects on which the child's attention was already focused were learned better than words presented in an attempt to redirect the child's attentional focus." }, { "pmid": "27471463", "title": "Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction.", "abstract": "To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. 
In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, \"internal dynamics\" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by the branching structure. Repetition of human's instruction and robot's behavioral response was represented as the cyclic structure, and besides, waiting to a subsequent instruction was represented as the fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases." } ]
Frontiers in Neurorobotics
29311889
PMC5742615
10.3389/fnbot.2017.00067
Segmenting Continuous Motions with Hidden Semi-Markov Models and Gaussian Processes
Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods.
2. Related work. Various studies have focused on learning motion primitives from manually segmented motions (Gräve and Behnke, 2012; Manschitz et al., 2015). Manschitz et al. proposed a method to generate sequential skills by using motion primitives that are learned in a supervised manner, and Gräve et al. proposed segmenting motions using motion primitives learned by a supervised hidden Markov model. In these studies, the motions are segmented and labeled in advance; however, we consider it difficult to segment and label all possible motion primitives. Some studies have proposed unsupervised motion segmentation, but these rely on heuristics. For instance, Wächter et al. proposed a method to segment human manipulation motions based on contact relations between the end-effectors and objects in a scene (Wachter and Asfour, 2015); in their method, the points at which the end-effectors make contact with an object are treated as motion boundaries. We believe this method works well in limited scenes; however, many motions, such as gestures and dances, do not involve object manipulation. Lioutikov et al. proposed unsupervised segmentation; however, to reduce computational costs, their technique requires candidate boundaries between motion primitives to be specified in advance (Lioutikov et al., 2015). The segmentation therefore depends on those candidates, and motions cannot be segmented correctly if the correct candidates are not provided. In contrast, our proposed method does not require such candidates; all possible cutting points are considered by forward filtering-backward sampling, which is based on the principles of dynamic programming. In some methods (Fod et al., 2002; Shiratori et al., 2004; Lin and Kulić, 2012), motion features (such as zero joint-angle velocity) are used for motion segmentation; however, such features cannot be applied to all motions. Takano et al. use the error between actual and predicted movements as the criterion for specifying boundaries (Takano and Nakamura, 2016); however, the threshold must be manually tuned according to the motions to be segmented. Moreover, although they used an HMM, which is a stochastic model, the boundaries were determined by this heuristic threshold; we consider this unnatural from the viewpoint of stochastic modeling and argue that boundaries should be determined by the stochastic model itself. In our proposed method, we do not use such heuristics and assumptions, and instead formulate the segmentation based on a stochastic model. Fox et al. proposed unsupervised segmentation for discovering a set of latent dynamical behaviors shared across multiple time series (Fox et al., 2011). They introduce a beta process, which represents the sharing of motion primitives across multiple motions, into an autoregressive HMM. They formulate the segmentation using a stochastic model with no heuristics. However, in their method, consecutive data points that are classified into the same state are extracted as segments, and segment lengths are not estimated explicitly. Because the state can change over short intervals, overly short segments tend to be estimated; they reported that in their experiment some true segments were split into two or more categories and that those shorter segments had to be bridged. In contrast, our proposed method classifies data points into states and uses an HSMM to estimate segment lengths.
Hence, our proposed method can prevent states from switching over short intervals. Matsubara et al. proposed an unsupervised segmentation method called AutoPlait (Matsubara et al., 2014). This method uses multiple HMMs, each of which represents a fixed pattern, and transitions between the HMMs are allowed; time series data is therefore segmented at the points where the state switches to a state of another HMM. However, we believe that HMMs are too simple to represent complicated sequences such as motions. Figure 2 illustrates how an HMM represents time series data: the graph on the right shows the mean and standard deviation learned by an HMM from the data points shown in the graph on the left. Because an HMM represents time series data using only a mean and a standard deviation, details of the time series can be lost. We therefore use Gaussian processes, which are non-parametric models that can represent complex time series data. Figure 2. Example of the representation of time series data by an HMM. Left: data points used for learning the HMM. Right: mean and standard deviation learned by the HMM. The field of natural language processing has also produced work on segmenting sequence data. For example, unsupervised morphological analysis has been proposed for segmenting sequences (Goldwater, 2006; Mochihashi et al., 2009; Uchiumi et al., 2015). Goldwater et al. proposed a method to divide sentences into words by estimating the parameters of a bigram language model based on a hierarchical Dirichlet process; the parameters are estimated in an unsupervised manner by Gibbs sampling (Goldwater, 2006). Mochihashi et al. proposed a nested Pitman-Yor language model (NPYLM) (Mochihashi et al., 2009), in which the parameters of an n-gram language model based on the hierarchical Pitman-Yor process are estimated via the forward filtering-backward sampling algorithm; NPYLM can thus divide sentences into words more quickly and accurately than the method proposed by Goldwater (2006). Uchiumi et al. extended the NPYLM to a Pitman-Yor hidden semi-Markov model (PY-HSMM) (Uchiumi et al., 2015), which can divide sentences into words and estimate the parts of speech (POS) of the words by sampling not only the words but also the POS in the sampling phase of the forward filtering-backward sampling algorithm. However, these studies aimed to divide discrete symbol sequences (such as sentences) into segments, and did not consider analogous divisions of continuous sequence data, such as that obtained from human motion. Taniguchi et al. proposed a method to divide continuous sequences into segments by utilizing the NPYLM (Taniguchi and Nagasaka, 2011). In their method, continuous sequences are discretized into discrete-valued sequences using the infinite hidden Markov model (Fox et al., 2007), and the discrete-valued sequences are then divided into segments by the NPYLM. With this approach, motions can be recognized by the learned model, but they cannot be generated directly because the representation is discretized. Moreover, segmentation based on the NPYLM does not work well if errors occur in the discretization step. Therefore, we propose a method to divide a continuous sequence into segments without discretization. This method divides continuous motions into unit actions. Our proposed method is based on an HSMM whose emission distributions are GPs, which represent continuous unit actions.
To learn the model parameters, we use forward filtering-backward sampling, in which segment boundaries and classes are sampled simultaneously. However, our proposed method also has limitations. One limitation is that the number of motion classes must be specified in advance, whereas it is estimated automatically in methods such as those of Fox et al. (2011) and Matsubara et al. (2014). Another limitation is that the computational cost is very high, owing to the numerous recursive calculations. We discuss these limitations in the experiments.
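To make concrete the point illustrated by Figure 2, namely that a GP emission model can retain temporal detail that a single mean and standard deviation per HMM state cannot, the following minimal Python sketch fits standard Gaussian process regression to one toy "unit action". The kernel, hyperparameters, and data are assumptions for illustration; this is not the authors' GP-HSMM implementation.

import numpy as np

# A unit action is treated as a function from normalized time to a joint angle; a
# Gaussian process with an RBF kernel can capture its detailed shape, unlike a
# single mean/standard-deviation summary per state.
def rbf(a, b, length=0.1, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(t_train, y_train, t_test, noise=1e-2):
    """Standard GP regression posterior mean and pointwise standard deviation."""
    K = rbf(t_train, t_train) + noise * np.eye(len(t_train))
    Ks = rbf(t_test, t_train)
    Kss = rbf(t_test, t_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# toy "unit action": one joint angle observed over normalized time in [0, 1]
t = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(30)
t_new = np.linspace(0.0, 1.0, 100)
mean, std = gp_predict(t, y, t_new)
# In a GP-HSMM-style model, the likelihood of a candidate segment under such a GP
# would act as the emission probability of one hidden state during
# forward filtering-backward sampling.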
[]
[]
PLoS Computational Biology
29240763
PMC5746283
10.1371/journal.pcbi.1005904
Costs of task allocation with local feedback: Effects of colony size and extra workers in social insects and other multi-agent systems
Adaptive collective systems are common in biology and beyond. Typically, such systems require a task allocation algorithm: a mechanism or rule-set by which individuals select particular roles. Here we study the performance of such task allocation mechanisms measured in terms of the time for individuals to allocate to tasks. We ask: (1) Is task allocation fundamentally difficult, and thus costly? (2) Does the performance of task allocation mechanisms depend on the number of individuals? And (3) what other parameters may affect their efficiency? We use techniques from distributed computing theory to develop a model of a social insect colony, where workers have to be allocated to a set of tasks; however, our model is generalizable to other systems. We show, first, that the ability of workers to quickly assess demand for work in tasks they are not currently engaged in crucially affects whether task allocation is quickly achieved or not. This indicates that in social insect tasks such as thermoregulation, where temperature may provide a global and near instantaneous stimulus to measure the need for cooling, for example, it should be easy to match the number of workers to the need for work. In other tasks, such as nest repair, it may be impossible for workers not directly at the work site to know that this task needs more workers. We argue that this affects whether task allocation mechanisms are under strong selection. Second, we show that colony size does not affect task allocation performance under our assumptions. This implies that when effects of colony size are found, they are not inherent in the process of task allocation itself, but due to processes not modeled here, such as higher variation in task demand for smaller colonies, benefits of specialized workers, or constant overhead costs. Third, we show that the ratio of the number of available workers to the workload crucially affects performance. Thus, workers in excess of those needed to complete all tasks improve task allocation performance. This provides a potential explanation for the phenomenon that social insect colonies commonly contain inactive workers: these may be a ‘surplus’ set of workers that improves colony function by speeding up optimal allocation of workers to tasks. Overall our study shows how limitations at the individual level can affect group level outcomes, and suggests new hypotheses that can be explored empirically.
Related work. The process of task allocation and its typical outcome, division of labor, have received a lot of attention in the social insect literature. Empirical studies typically focus on determining the individual traits or experiences that shape, or at least correlate with, individual task specialization: e.g. when larger or older individuals are more likely to forage (e.g. [53]) or when interaction rates or positive experience in performing a task affect task choices [32, 64]. Generally, the re-allocation of workers to tasks after changes in the demand for work needs to happen on a time scale shorter than the production of new workers (which, in bees or ants, takes weeks or months [65]), and indeed empirical studies have found that the traits of new workers do not seem to be modulated by colonies to match the need for work in particular tasks [66]. Therefore, more recent empirical and most modeling studies focus on finding simple, local behavior rules that generate individual task specialization (i.e. result in division of labor at the colony level), while simultaneously also enabling group-level responsiveness to the changing needs for work in different tasks [35, 67, 68]. For example, in classic papers, Bonabeau et al. [69] showed theoretically that differing task stimulus response thresholds among workers enable both task specialization and a flexible group-level response to changing task needs; and Tofts and others [70, 71] showed that if workers inhabit mutually-avoiding spatial fidelity zones, and tasks are spread over a work surface, this also enables both task specialization and a flexible response to changing needs for work. In this paper we examined how well we should expect task allocation to be able to match actual demands for work, and how this will depend on group size and the number of ‘extra’, thus inactive, workers. Neither of the modeling studies cited above explicitly considered whether task allocation is improved or hindered by colony size and inactive workers. In addition, while several studies find increasing levels of individual specialization in larger groups, the empirical literature overall does not show a consensus on how task allocation or the proportion of inactive workers is or should be affected by group size (reviewed in [14, 22]). In general, few studies have considered the efficiency of the task allocation process itself and how it relates to the algorithm employed [72], often in the context of comparing bio-(ant-)inspired algorithms to algorithms of an entirely different nature [73, 74]. For example, Pereira and Gordon, assuming task allocation by social interactions, demonstrate that speed and accuracy of task allocation may trade off against each other, mediated by group size, and thus ‘optimal’ allocation of workers to tasks is not achieved [72]. Duarte et al. likewise find that task allocation by response thresholds does not achieve optimal allocation, and they also find no effect of colony size on task allocation performance [75]. Some papers on task allocation in social insects do not examine how group size per se influences task allocation, but look at factors such as the potential for selfish worker motives [76], which may be affected by group size, and which imply that the task allocation algorithm is not shaped by what maximizes collective outcomes.
When interpreting modeling studies on task allocation, it is also important to consider whether the number of inactive workers is an outcome emerging from the particular task allocation mechanisms studied, or whether it is an assumption put into the model to study its effect on the efficiency of task allocation. In our study, we examined how an assumed level of ‘superfluous’, thus by definition ‘inactive’, workers would affect the efficiency of re-allocating workers to tasks after demands had changed. While the above models concern the general situation of several tasks, such as building, guarding, and brood care, being performed in parallel but independently of one another, several published models of task allocation specifically consider the case of task partitioning [77], defined in the social insect literature as a situation where, in an assembly-line fashion, products of one task have to be passed directly to workers in the next task, such that a tight integration of the activity in different tasks is required. This is, for example, the case in wasp nest building, where water and pulp are collected by different foragers; these materials then have to be handed to a construction worker (who mixes them and applies them to the nest). Very limited buffering is possible because the materials are not stored externally to the workers, and a construction worker cannot proceed with its task until it receives a packet of water and pulp. One would expect different, better-coordinated mechanisms of task allocation to be at work in this case. In task partitioning situations, a higher level of noise (variation in the availability of materials, or in worker success at procuring them) increases the optimal task switching rate as well as the number of inactive workers, although this might reverse at very high noise levels [78]. Generally, larger groups are expected to experience relatively lower levels of noise [79]. In this line of reasoning, inactive workers are seen as serving as a ‘buffer’ (or ‘common stomach’, as they can hold materials awaiting work) [79, 80]; this implies that as noise or the task switching rate increases, so does the benefit (and optimal number) of inactive workers.
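As a hedged illustration of the fixed-response-threshold mechanism referenced above (in the style of Bonabeau et al. [69]), rather than a reimplementation of any model cited here, the following Python sketch simulates workers whose probability of engaging in a task increases with that task's stimulus, while stimulus grows with unmet demand and falls with the amount of work performed. All parameter values and update rules are simplified assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_workers, n_tasks, steps = 50, 2, 200
theta = rng.uniform(1.0, 10.0, size=(n_workers, n_tasks))   # per-worker response thresholds
stimulus = np.array([5.0, 5.0])                              # task stimuli s_j
demand_rate = np.array([0.6, 0.3])                           # stimulus added per step (unmet demand)
work_per_worker = 0.05                                       # stimulus removed per active worker

for _ in range(steps):
    # threshold response: P(engage in task j) = s_j^2 / (s_j^2 + theta_ij^2)
    p_engage = stimulus**2 / (stimulus**2 + theta**2)
    engaged = rng.random((n_workers, n_tasks)) < p_engage
    # each worker performs at most one task; take the first task it engaged in
    first = np.argmax(engaged, axis=1)
    active = engaged.any(axis=1)
    workers_per_task = np.bincount(first[active], minlength=n_tasks)
    # stimulus dynamics: demand accumulates, performed work reduces it
    stimulus = np.maximum(stimulus + demand_rate - work_per_worker * workers_per_task, 0.0)

print("workers per task:", workers_per_task, "inactive workers:", n_workers - active.sum())

With heterogeneous thresholds, low-threshold workers engage first and some workers remain inactive, which gives a simple handle on how ‘extra’ workers and changing demands interact in such models.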
[ "10221902", "18330538", "18031303", "21888521", "11112175", "28127225", "28877229", "21233379", "26213412", "19018663", "24417977", "25489940", "17629482", "1539941", "11162062", "22661824", "23666484", "22079942", "15024125", "26218613" ]
[ { "pmid": "10221902", "title": "Notch signaling: cell fate control and signal integration in development.", "abstract": "Notch signaling defines an evolutionarily ancient cell interaction mechanism, which plays a fundamental role in metazoan development. Signals exchanged between neighboring cells through the Notch receptor can amplify and consolidate molecular differences, which eventually dictate cell fates. Thus, Notch signals control how cells respond to intrinsic or extrinsic developmental cues that are necessary to unfold specific developmental programs. Notch activity affects the implementation of differentiation, proliferation, and apoptotic programs, providing a general developmental tool to influence organ formation and morphogenesis." }, { "pmid": "18330538", "title": "Global information sampling in the honey bee.", "abstract": "Central to the question of task allocation in social insects is how workers acquire information. Patrolling is a curious behavior in which bees meander over the face of the comb inspecting cells. Several authors have suggested it allows bees to collect global information, but this has never been formally evaluated. This study explores this hypothesis by answering three questions. First, do bees gather information in a consistent manner as they patrol? Second, do they move far enough to get a sense of task demand in distant areas of the nest? And third, is patrolling a commonly performed task? Focal animal observations were used to address the first two predictions, while a scan sampling study was used to address the third. The results were affirmative for each question. While patrolling, workers collected information by performing periodic clusters of cell inspections. Patrolling bees not only traveled far enough to frequently change work zone; they often visited every part of the nest. Finally, the majority of the bees in the middle-age caste were shown to move throughout the nest over the course of a few hours in a manner suggestive of patrolling. Global information collection is contrary to much current theory, which assumes that workers respond to local information only. This study thus highlights the nonmutually exclusive nature of various information collection regimes in social insects." }, { "pmid": "18031303", "title": "Evolution of complexity in the volvocine algae: transitions in individuality through Darwin's eye.", "abstract": "The transition from unicellular to differentiated multicellular organisms constitutes an increase in the level complexity, because previously existing individuals are combined to form a new, higher-level individual. The volvocine algae represent a unique opportunity to study this transition because they diverged relatively recently from unicellular relatives and because extant species display a range of intermediate grades between unicellular and multicellular, with functional specialization of cells. Following the approach Darwin used to understand \"organs of extreme perfection\" such as the vertebrate eye, this jump in complexity can be reduced to a series of small steps that cumulatively describe a gradual transition between the two levels. We use phylogenetic reconstructions of ancestral character states to trace the evolution of steps involved in this transition in volvocine algae. The history of these characters includes several well-supported instances of multiple origins and reversals. 
The inferred changes can be understood as components of cooperation-conflict-conflict mediation cycles as predicted by multilevel selection theory. One such cycle may have taken place early in volvocine evolution, leading to the highly integrated colonies seen in extant volvocine algae. A second cycle, in which the defection of somatic cells must be prevented, may still be in progress." }, { "pmid": "21888521", "title": "Group size and its effects on collective organization.", "abstract": "Many insects and arthropods live in colonies or aggregations of varying size. Group size may affect collective organization either because the same individual behavior has different consequences when displayed in a larger group or because larger groups are subject to different constraints and selection pressures than smaller groups. In eusocial colonies, group size may have similar effects on colony traits as body size has on organismal traits. Social insects may, therefore, be useful to test theories about general principles of scaling, as they constitute a distinct level of organization. However, there is a surprising lack of data on group sizes in social insects and other group-living arthropods, and multiple confounding factors have to be controlled to detect effects of group size. If such rigorous studies are performed, group size may become as important to understanding collective organization as is body size in explaining behavior and life history of individual organisms." }, { "pmid": "11112175", "title": "Models of division of labor in social insects.", "abstract": "Division of labor is one of the most basic and widely studied aspects of colony behavior in social insects. Studies of division of labor are concerned with the integration of individual worker behavior into colony level task organization and with the question of how regulation of division of labor may contribute to colony efficiency. Here we describe and critique the current models concerned with the proximate causes of division of labor in social insects. The models have identified various proximate mechanisms to explain division of labor, based on both internal and external factors. On the basis of these factors, we suggest a classification of the models. We first describe the different types of models and then review the empirical evidence supporting them. The models to date may be considered preliminary and exploratory; they have advanced our understanding by suggesting possible mechanisms for division of labor and by revealing how individual and colony-level behavior may be related. They suggest specific hypotheses that can be tested by experiment and so may lead to the development of more powerful and integrative explanatory models." }, { "pmid": "28127225", "title": "Task switching is associated with temporal delays in Temnothorax rugatulus ants.", "abstract": "The major evolutionary transitions often result in reorganization of biological systems, and a component of such reorganization is that individuals within the system specialize on performing certain tasks, resulting in a division of labor. Although the traditional benefit of division of labor is thought to be a gain in work efficiency, one alternative benefit of specialization is avoiding temporal delays associated with switching tasks. While models have demonstrated that costs of task switching can drive the evolution of division of labor, little empirical support exists for this hypothesis. We tested whether there were task-switching costs in Temnothorax rugatulus. 
We recorded the behavior of every individual in 44 colonies and used this dataset to identify each instance where an individual performed a task, spent time in the interval (i.e., inactive, wandering inside, and self-grooming), and then performed a task again. We compared the interval time where an individual switched task type between that first and second bout of work to instances where an individual performed the same type of work in both bouts. In certain cases, we find that the interval time was significantly shorter if individuals repeated the same task. We find this time cost for switching to a new behavior in all active worker groups, that is, independently of worker specialization. These results suggest that task-switching costs may select for behavioral specialization." }, { "pmid": "28877229", "title": "Who needs 'lazy' workers? Inactive workers act as a 'reserve' labor force replacing active workers, but inactive workers are not replaced when they are removed.", "abstract": "Social insect colonies are highly successful, self-organized complex systems. Surprisingly however, most social insect colonies contain large numbers of highly inactive workers. Although this may seem inefficient, it may be that inactive workers actually contribute to colony function. Indeed, the most commonly proposed explanation for inactive workers is that they form a 'reserve' labor force that becomes active when needed, thus helping mitigate the effects of colony workload fluctuations or worker loss. Thus, it may be that inactive workers facilitate colony flexibility and resilience. However, this idea has not been empirically confirmed. Here we test whether colonies of Temnothorax rugatulus ants replace highly active (spending large proportions of time on specific tasks) or highly inactive (spending large proportions of time completely immobile) workers when they are experimentally removed. We show that colonies maintained pre-removal activity levels even after active workers were removed, and that previously inactive workers became active subsequent to the removal of active workers. Conversely, when inactive workers were removed, inactivity levels decreased and remained lower post-removal. Thus, colonies seem to have mechanisms for maintaining a certain number of active workers, but not a set number of inactive workers. The rapid replacement (within 1 week) of active workers suggests that the tasks they perform, mainly foraging and brood care, are necessary for colony function on short timescales. Conversely, the lack of replacement of inactive workers even 2 weeks after their removal suggests that any potential functions they have, including being a 'reserve', are less important, or auxiliary, and do not need immediate recovery. Thus, inactive workers act as a reserve labor force and may still play a role as food stores for the colony, but a role in facilitating colony-wide communication is unlikely. Our results are consistent with the often cited, but never yet empirically supported hypothesis that inactive workers act as a pool of 'reserve' labor that may allow colonies to quickly take advantage of novel resources and to mitigate worker loss." }, { "pmid": "21233379", "title": "A biological solution to a fundamental distributed computing problem.", "abstract": "Computational and biological systems are often distributed so that processors (cells) jointly solve a task, without any of them receiving all inputs or observing all outputs. 
Maximal independent set (MIS) selection is a fundamental distributed computing procedure that seeks to elect a set of local leaders in a network. A variant of this problem is solved during the development of the fly's nervous system, when sensory organ precursor (SOP) cells are chosen. By studying SOP selection, we derived a fast algorithm for MIS selection that combines two attractive features. First, processors do not need to know their degree; second, it has an optimal message complexity while only using one-bit messages. Our findings suggest that simple and efficient algorithms can be developed on the basis of biologically derived insights." }, { "pmid": "26213412", "title": "Bigger is better: honeybee colonies as distributed information-gathering systems.", "abstract": "In collectively foraging groups, communication about food resources can play an important role in the organization of the group's activity. For example, the honeybee dance communication system allows colonies to selectively allocate foragers among different floral resources according to their quality. Because larger groups can potentially collect more information than smaller groups, they might benefit more from communication because it allows them to integrate and use that information to coordinate forager activity. Larger groups might also benefit more from communication because it allows them to dominate high-value resources by recruiting large numbers of foragers. By manipulating both colony size and the ability to communicate location information in the dance, we show that larger colonies of honeybees benefit more from communication than do smaller colonies. In fact, colony size and dance communication worked together to improve foraging performance; the estimated net gain per foraging trip was highest in larger colonies with unimpaired communication. These colonies also had the earliest peaks in foraging activity, but not the highest ones. This suggests they may find and recruit to resources more quickly, but not more heavily. The benefits of communication we observed in larger colonies are thus likely a result of more effective informationgathering due to massive parallel search rather than increased competitive ability due to heavy recruitment." }, { "pmid": "19018663", "title": "Specialization does not predict individual efficiency in an ant.", "abstract": "The ecological success of social insects is often attributed to an increase in efficiency achieved through division of labor between workers in a colony. Much research has therefore focused on the mechanism by which a division of labor is implemented, i.e., on how tasks are allocated to workers. However, the important assumption that specialists are indeed more efficient at their work than generalist individuals--the \"Jack-of-all-trades is master of none\" hypothesis--has rarely been tested. Here, I quantify worker efficiency, measured as work completed per time, in four different tasks in the ant Temnothorax albipennis: honey and protein foraging, collection of nest-building material, and brood transports in a colony emigration. I show that individual efficiency is not predicted by how specialized workers were on the respective task. Worker efficiency is also not consistently predicted by that worker's overall activity or delay to begin the task. 
Even when only the worker's rank relative to nestmates in the same colony was used, specialization did not predict efficiency in three out of the four tasks, and more specialized workers actually performed worse than others in the fourth task (collection of sand grains). I also show that the above relationships, as well as median individual efficiency, do not change with colony size. My results demonstrate that in an ant species without morphologically differentiated worker castes, workers may nevertheless differ in their ability to perform different tasks. Surprisingly, this variation is not utilized by the colony--worker allocation to tasks is unrelated to their ability to perform them. What, then, are the adaptive benefits of behavioral specialization, and why do workers choose tasks without regard for whether they can perform them well? We are still far from an understanding of the adaptive benefits of division of labor in social insects." }, { "pmid": "24417977", "title": "Populations of a cyprinid fish are self-sustaining despite widespread feminization of males.", "abstract": "BACKGROUND\nTreated effluents from wastewater treatment works can comprise a large proportion of the flow of rivers in the developed world. Exposure to these effluents, or the steroidal estrogens they contain, feminizes wild male fish and can reduce their reproductive fitness. Long-term experimental exposures have resulted in skewed sex ratios, reproductive failures in breeding colonies, and population collapse. This suggests that environmental estrogens could threaten the sustainability of wild fish populations.\n\n\nRESULTS\nHere we tested this hypothesis by examining population genetic structures and effective population sizes (N(e)) of wild roach (Rutilus rutilus L.) living in English rivers contaminated with estrogenic effluents. N(e) was estimated from DNA microsatellite genotypes using approximate Bayesian computation and sibling assignment methods. We found no significant negative correlation between N(e) and the predicted estrogen exposure at 28 sample sites. Furthermore, examination of the population genetic structure of roach in the region showed that some populations have been confined to stretches of river with a high proportion of estrogenic effluent for multiple generations and have survived, apparently without reliance on immigration of fish from less polluted sites.\n\n\nCONCLUSIONS\nThese results demonstrate that roach populations living in some effluent-contaminated river stretches, where feminization is widespread, are self-sustaining. Although we found no evidence to suggest that exposure to estrogenic effluents is a significant driving factor in determining the size of roach breeding populations, a reduction in N(e) of up to 65% is still possible for the most contaminated sites because of the wide confidence intervals associated with the statistical model." }, { "pmid": "25489940", "title": "Not just a theory--the utility of mathematical models in evolutionary biology.", "abstract": "Progress in science often begins with verbal hypotheses meant to explain why certain biological phenomena exist. An important purpose of mathematical models in evolutionary research, as in many other fields, is to act as “proof-of-concept” tests of the logic in verbal explanations, paralleling the way in which empirical data are used to test hypotheses. Because not all subfields of biology use mathematics for this purpose, misunderstandings of the function of proof-of-concept modeling are common. 
In the hope of facilitating communication, we discuss the role of proof-of-concept modeling in evolutionary biology." }, { "pmid": "17629482", "title": "Individual experience alone can generate lasting division of labor in ants.", "abstract": "Division of labor, the specialization of workers on different tasks, largely contributes to the ecological success of social insects [1, 2]. Morphological, genotypic, and age variations among workers, as well as their social interactions, all shape division of labor [1-12]. In addition, individual experience has been suggested to influence workers in their decision to execute a task [13-18], but its potential impact on the organization of insect societies has yet to be demonstrated [19, 20]. Here we show that, all else being equal, ant workers engaged in distinct functions in accordance with their previous experience. When individuals were experimentally led to discover prey at each of their foraging attempts, they showed a high propensity for food exploration. Conversely, foraging activity progressively decreased for individuals who always failed in the same situation. One month later, workers that previously found prey kept on exploring for food, whereas those who always failed specialized in brood care. It thus appears that individual experience can strongly channel the behavioral ontogeny of ants to generate a lasting division of labor. This self-organized task-attribution system, based on an individual learning process, is particularly robust and might play an important role in colony efficiency." }, { "pmid": "11162062", "title": "A trade-off in task allocation between sensitivity to the environment and response time.", "abstract": "Task allocation is the process that adjusts the number of workers in each colony task in response to the environment. There is no central coordination of task allocation; instead workers use local cues from the environment and from other workers to decide which task to perform. We examine two aspects of task allocation: the sensitivity to the environment of task distribution, and the rate of response to environmental changes. We investigate how these two aspects are influenced by: (1) colony size, and (2) behavioral rules used by workers, i.e. how a worker uses cues from the environment and from social interactions with other workers in deciding which task to perform. We show that if workers use social cues in their choice of task, response time decreases with increasing colony size. Sensitivity of task distribution to the environment may decrease or not with colony size, depending on the behavioral rules used by workers. This produces a trade-off in task allocation: short response times can be achieved by increasing colony size, but at the cost of decreased sensitivity to the environment. We show that when a worker's response to social interactions depends on the local environment, sensitivity of task distribution to the environment is not affected by colony size and the trade-off is avoided." }, { "pmid": "22661824", "title": "Evolution of self-organized division of labor in a response threshold model.", "abstract": "Division of labor in social insects is determinant to their ecological success. Recent models emphasize that division of labor is an emergent property of the interactions among nestmates obeying to simple behavioral rules. However, the role of evolution in shaping these rules has been largely neglected. Here, we investigate a model that integrates the perspectives of self-organization and evolution. 
Our point of departure is the response threshold model, where we allow thresholds to evolve. We ask whether the thresholds will evolve to a state where division of labor emerges in a form that fits the needs of the colony. We find that division of labor can indeed evolve through the evolutionary branching of thresholds, leading to workers that differ in their tendency to take on a given task. However, the conditions under which division of labor evolves depend on the strength of selection on the two fitness components considered: amount of work performed and on worker distribution over tasks. When selection is strongest on the amount of work performed, division of labor evolves if switching tasks is costly. When selection is strongest on worker distribution, division of labor is less likely to evolve. Furthermore, we show that a biased distribution (like 3:1) of workers over tasks is not easily achievable by a threshold mechanism, even under strong selection. Contrary to expectation, multiple matings of colony foundresses impede the evolution of specialization. Overall, our model sheds light on the importance of considering the interaction between specific mechanisms and ecological requirements to better understand the evolutionary scenarios that lead to division of labor in complex systems. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s00265-012-1343-2) contains supplementary material, which is available to authorized users." }, { "pmid": "23666484", "title": "Time delay implies cost on task switching: a model to investigate the efficiency of task partitioning.", "abstract": "Task allocation, and task switching have an important effect on the efficiency of distributed, locally controlled systems such as social insect colonies. Both efficiency and workload distribution are global features of the system which are not directly accessible to workers and can only be sampled locally by an individual in a distributed system. To investigate how the cost of task switching affects global performance we use social wasp societies as a metaphor to construct a simple model system with four interconnected tasks. Our goal is not the accurate description of the behavior of a given species, but to seek general conclusions on the effect of noise and time delay on a behavior that is partitioned into subtasks. In our model a nest structure needs to be constructed by the cooperation of individuals that carry out different tasks: builders, pulp and water foragers, and individuals storing water. We report a simulation study based on a model using delay-differential equations to analyze the trade-off between task switching costs and keeping a high degree of adaptivity in a dynamic, noisy environment. Combining the methods of time-delayed equations and stochastic processes we are able to represent the influence of swarm size and task switching sensitivity. We find that the system is stable for reasonable choices of parameters but shows oscillations for extreme choices of parameters and we find that the system is resilient to perturbations. We identify a trade-off between reaching equilibria of high performance and having short transients." }, { "pmid": "22079942", "title": "Regulation of task differentiation in wasp societies: a bottom-up model of the \"common stomach\".", "abstract": "Metapolybia wasps live in small societies (around one hundred adults) and rear their young in nests they construct on flat surfaces from plant materials. 
For processing nest paper, they must gather plant materials and process it into pulp with water. The water is collected by water foragers and is transferred to pulp foragers indirectly via a \"common stomach.\" The common stomach, or social crop, is formed by generalist wasps called laborers. These wasps can engage in water exchange, store water in their crops, and may become specialist foragers or builders. We provide an alternative model for regulating task partitioning in construction behavior by using an agent based modeling framework parameterized by our field observations. Our model predicts that assessing colony needs via individual interactions with the common stomach leads to a robust regulation of task partitioning in construction behavior. By using perturbation experiments in our simulations, we show that this emergent task allocation is able to dynamically adapt to perturbations of the environment and to changes in colony-level demands or population structure. The robustness of our model stems from the fact that the common stomach is both a strong buffer and a source of several feedback mechanisms that affect the individual wasps. We show that both the efficiency and the task fidelity of these colonies are dependent upon colony size. We also demonstrate that the emergence of specialist wasps (individuals with high task fidelity) does not require any special initial conditions or reinforcement at the individual level, but it is rather a consequence of colony-level workflow stability. Our model closely mimics the behavior of Metapolybia wasps, demonstrating that a regulation mechanism based on simple pair-wise interactions through a common stomach is a plausible hypothesis for the organization of collective behavior." }, { "pmid": "15024125", "title": "Synaptic organization in the adult honey bee brain is influenced by brood-temperature control during pupal development.", "abstract": "Recent studies have shown that the behavioral performance of adult honey bees is influenced by the temperature experienced during pupal development. Here we explore whether there are temperature-mediated effects on the brain. We raised pupae at different constant temperatures between 29 and 37 degrees C and performed neuroanatomical analyses of the adult brains. Analyses focused on sensory-input regions in the mushroom bodies, brain areas associated with higher-order processing such as learning and memory. Distinct synaptic complexes [microglomeruli (MG)] within the mushroom body calyces were visualized by using fluorophore-conjugated phalloidin and an antibody to synapsin. The numbers of MG were different in bees that had been raised at different temperatures, and these differences persisted after the first week of adult life. In the olfactory-input region (lip), MG numbers were highest in bees raised at the temperature normally maintained in brood cells (34.5 degrees C) and significantly decreased in bees raised at 1 degrees C below and above this norm. Interestingly, in the neighboring visual-input region (collar), MG numbers were less affected by temperature. We conclude that thermoregulatory control of brood rearing can generate area- and modality-specific effects on synaptic neuropils in the adult brain. We propose that resulting differences in the synaptic circuitry may affect neuronal plasticity and may underlie temperature-mediated effects on multimodal communication and learning." 
}, { "pmid": "26218613", "title": "Ant groups optimally amplify the effect of transiently informed individuals.", "abstract": "To cooperatively transport a large load, it is important that carriers conform in their efforts and align their forces. A downside of behavioural conformism is that it may decrease the group's responsiveness to external information. Combining experiment and theory, we show how ants optimize collective transport. On the single-ant scale, optimization stems from decision rules that balance individuality and compliance. Macroscopically, these rules poise the system at the transition between random walk and ballistic motion where the collective response to the steering of a single informed ant is maximized. We relate this peak in response to the divergence of susceptibility at a phase transition. Our theoretical models predict that the ant-load system can be transitioned through the critical point of this mesoscopic system by varying its size; we present experiments supporting these predictions. Our findings show that efficient group-level processes can arise from transient amplification of individual-based knowledge." } ]
Royal Society Open Science
29308229
PMC5748960
10.1098/rsos.170853
Quantifying team cooperation through intrinsic multi-scale measures: respiratory and cardiac synchronization in choir singers and surgical teams
A highly localized data-association measure, termed intrinsic synchrosqueezing transform (ISC), is proposed for the analysis of coupled nonlinear and non-stationary multivariate signals. This is achieved based on a combination of noise-assisted multivariate empirical mode decomposition and short-time Fourier transform-based univariate and multivariate synchrosqueezing transforms. It is shown that the ISC outperforms six other combinations of algorithms in estimating degrees of synchrony in synthetic linear and nonlinear bivariate signals. Its advantage is further illustrated in the precise identification of the synchronized respiratory and heart rate variability frequencies among a subset of bass singers of a professional choir, where it distinctly exhibits better performance than the continuous wavelet transform-based ISC. We also introduce an extension to the intrinsic phase synchrony (IPS) measure, referred to as nested intrinsic phase synchrony (N-IPS), for the empirical quantification of physically meaningful and straightforward-to-interpret trends in phase synchrony. The N-IPS is employed to reveal physically meaningful variations in the levels of cooperation in choir singing and performing a surgical procedure. Both the proposed techniques successfully reveal degrees of synchronization of the physiological signals in two different aspects: (i) precise localization of synchrony in time and frequency (ISC), and (ii) large-scale analysis for the empirical quantification of physically meaningful trends in synchrony (N-IPS).
2. Related work
In addition to our recently proposed data-association measures, IPS and ICoh, several other synchrony measures exist. Cross-correlation is a simple measure of linear synchronization between two signals; it cannot effectively capture nonlinear coupling behaviour and therefore yields an undesirably low correlation coefficient in such cases. The phase synchronization index (PSI) proposed in [32] is obtained from the time-averaged phase difference between two signals, rather than from the distribution of phase differences employed in IPS and the proposed N-IPS (see §3.2 for more details). This technique can underestimate synchrony when the distribution of phase differences between two signals has more than one peak: averaging over time can cancel out the phase differences and yield an undesirably low PSI. Note that in IPS, N-IPS and [32], the PSI is estimated from the instantaneous phase of the analytic signal generated using the Hilbert transform. A wavelet-based PSI was introduced in [33], whereby the instantaneous phase is calculated by convolving each signal with a complex wavelet function and the PSI is then obtained in the same manner as in IPS and [32]. Because a central frequency and a width of the wavelet function must be specified, this approach is sensitive to phase synchrony only within a certain frequency band.

Synchrony can also be measured by means of information-theoretic concepts [34], whereby the mutual information between two signals quantifies the amount of information about one signal that can be obtained by knowing the other, and vice versa. Synchrony quantified in this way, however, has no direct physical meaning or interpretation.

General synchronization, that is, the existence of a functional relationship between the systems generating the signals of interest, can be characterized by the conditional stability of the driven chaotic oscillator if the equations of the systems are known [35]. For real-world data, however, the model equations are typically unavailable. The non-parametric method of mutual false nearest neighbours [36], based on delay embedding and on conditional neighbours, has therefore been proposed to characterize general synchronization, yet it may produce errors if the signals of interest have more than one predominant time scale [37]. Phase and general synchronization can also be quantified using recurrence quantification analysis, whereby two signals are deemed: (i) phase synchronized if the distances between the diagonal lines in their respective recurrence plots coincide, and (ii) generally synchronized if their recurrence plots are very similar or approximately the same.

All of the described measures quantify synchrony between the signals as a whole and cannot yield time-frequency (TF) representations of synchrony. Although such representations can be generated from the IPS algorithm through the Hilbert transform, we have empirically found that effective estimation of time-varying synchrony with IPS requires relatively long sliding windows, so its time localization is poor. Furthermore, a number of realizations of IPS must be performed for statistical relevance, which inevitably blurs the TF representations of synchrony. The ISC proposed here, in contrast, generates highly localized TF representations of synchrony and is suitable for the analysis of synchrony in nonlinear and non-stationary multivariate signals.
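To make the contrast between a time-averaged PSI and a distribution-based index concrete, the sketch below computes both from Hilbert-transform instantaneous phases. This is a minimal illustration, not the IPS/N-IPS implementation: the noise-assisted multivariate EMD, sliding windows and surrogate testing used in this paper are omitted, and the entropy-based index shown is only one common way of summarizing a phase-difference distribution. All signal names and parameter values are illustrative assumptions.

```python
# Minimal sketch (not the authors' IPS/N-IPS code): a time-averaged phase
# synchronization index versus a distribution-based index, both built from
# Hilbert-transform instantaneous phases.
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(x):
    """Unwrapped instantaneous phase of the analytic signal."""
    return np.unwrap(np.angle(hilbert(x)))

def psi_time_averaged(x, y):
    """Mean-phase-coherence style PSI: |time average of exp(i*dphi)|, in [0, 1]."""
    dphi = instantaneous_phase(x) - instantaneous_phase(y)
    return np.abs(np.mean(np.exp(1j * dphi)))

def psi_distribution_based(x, y, n_bins=32):
    """Example distribution-based index: 1 minus the normalized Shannon entropy
    of the wrapped phase-difference histogram (higher = more synchronized)."""
    dphi = np.mod(instantaneous_phase(x) - instantaneous_phase(y), 2 * np.pi)
    counts, _ = np.histogram(dphi, bins=n_bins, range=(0.0, 2 * np.pi))
    p = counts / counts.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p))
    return 1.0 - entropy / np.log(n_bins)

if __name__ == "__main__":
    t = np.arange(0, 60, 1 / 250)                       # 60 s at 250 Hz (illustrative)
    resp = np.sin(2 * np.pi * 0.1 * t)                   # slow "respiration-like" rhythm
    hrv = np.sin(2 * np.pi * 0.1 * t + 0.5) + 0.3 * np.random.randn(t.size)
    print(psi_time_averaged(resp, hrv), psi_distribution_based(resp, hrv))
```

If the phase-difference distribution were bimodal with peaks roughly π apart, the complex exponentials would largely cancel and psi_time_averaged would drop towards zero, whereas the distribution-based index would still register the non-uniform structure; this is precisely the underestimation discussed above.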
[ "23847555", "23431279", "28154536", "16235655", "23204288", "10619414", "10060528", "12796727", "15465935", "22200377", "23083794", "10783543" ]
[ { "pmid": "23847555", "title": "Music structure determines heart rate variability of singers.", "abstract": "Choir singing is known to promote wellbeing. One reason for this may be that singing demands a slower than normal respiration, which may in turn affect heart activity. Coupling of heart rate variability (HRV) to respiration is called Respiratory sinus arrhythmia (RSA). This coupling has a subjective as well as a biologically soothing effect, and it is beneficial for cardiovascular function. RSA is seen to be more marked during slow-paced breathing and at lower respiration rates (0.1 Hz and below). In this study, we investigate how singing, which is a form of guided breathing, affects HRV and RSA. The study comprises a group of healthy 18 year olds of mixed gender. The subjects are asked to; (1) hum a single tone and breathe whenever they need to; (2) sing a hymn with free, unguided breathing; and (3) sing a slow mantra and breathe solely between phrases. Heart rate (HR) is measured continuously during the study. The study design makes it possible to compare above three levels of song structure. In a separate case study, we examine five individuals performing singing tasks (1-3). We collect data with more advanced equipment, simultaneously recording HR, respiration, skin conductance and finger temperature. We show how song structure, respiration and HR are connected. Unison singing of regular song structures makes the hearts of the singers accelerate and decelerate simultaneously. Implications concerning the effect on wellbeing and health are discussed as well as the question how this inner entrainment may affect perception and behavior." }, { "pmid": "16235655", "title": "Application of the empirical mode decomposition to the analysis of esophageal manometric data in gastroesophageal reflux disease.", "abstract": "The Empirical Mode Decomposition (EMD) is a general signal processing method for analyzing nonlinear and nonstationary time series. The central idea of EMD is to decompose a time series into a finite and often small number of intrinsic mode functions (IMFs). An IMF is defined as any function having the number of extrema and the number of zero-crossings equal (or differing at most by one), and also having symmetric envelopes defined by the local minima, and maxima respectively. The decomposition procedure is adaptive, data-driven, therefore, highly efficient. In this contribution, we applied the idea of EMD to develop strategies to automatically identify the relevant IMFs that contribute to the slow-varying trend in the data, and presented its application on the analysis of esophageal manometric time series in gastroesophageal reflux disease. The results from both extensive simulations and real data show that the EMD may prove to be a vital technique for the analysis of esophageal manometric data." }, { "pmid": "23204288", "title": "Classification of motor imagery BCI using multivariate empirical mode decomposition.", "abstract": "Brain electrical activity recorded via electroencephalogram (EEG) is the most convenient means for brain-computer interface (BCI), and is notoriously noisy. The information of interest is located in well defined frequency bands, and a number of standard frequency estimation algorithms have been used for feature extraction. 
To deal with data nonstationarity, low signal-to-noise ratio, and closely spaced frequency bands of interest, we investigate the effectiveness of recently introduced multivariate extensions of empirical mode decomposition (MEMD) in motor imagery BCI. We show that direct multichannel processing via MEMD allows for enhanced localization of the frequency information in EEG, and, in particular, its noise-assisted mode of operation (NA-MEMD) provides a highly localized time-frequency representation. Comparative analysis with other state of the art methods on both synthetic benchmark examples and a well established BCI motor imagery dataset support the analysis." }, { "pmid": "10619414", "title": "Measuring phase synchrony in brain signals.", "abstract": "This article presents, for the first time, a practical method for the direct quantification of frequency-specific synchronization (i.e., transient phase-locking) between two neuroelectric signals. The motivation for its development is to be able to examine the role of neural synchronies as a putative mechanism for long-range neural integration during cognitive tasks. The method, called phase-locking statistics (PLS), measures the significance of the phase covariance between two signals with a reasonable time-resolution (<100 ms). Unlike the more traditional method of spectral coherence, PLS separates the phase and amplitude components and can be directly interpreted in the framework of neural integration. To validate synchrony values against background fluctuations, PLS uses surrogate data and thus makes no a priori assumptions on the nature of the experimental data. We also apply PLS to investigate intracortical recordings from an epileptic patient performing a visual discrimination task. We find large-scale synchronies in the gamma band (45 Hz), e.g., between hippocampus and frontal gyrus, and local synchronies, within a limbic region, a few cm apart. We argue that whereas long-scale effects do reflect cognitive processing, short-scale synchronies are likely to be due to volume conduction. We discuss ways to separate such conduction effects from true signal synchrony." }, { "pmid": "12796727", "title": "Analysis of errors reported by surgeons at three teaching hospitals.", "abstract": "BACKGROUND\nLittle is known of the factors that underlie surgical errors. Incident reporting has been proposed as a method of obtaining information about medical errors to help identify such factors.\n\n\nMETHODS\nBetween November 1, 2000, and March 15, 2001, we conducted confidential interviews with randomly selected surgeons from three Massachusetts teaching hospitals to elicit detailed reports on surgical adverse events resulting from errors in management (\"incidents\"). Data on the characteristics of the incidents and the factors that surgeons reported to have contributed to the errors were recorded and analyzed.\n\n\nRESULTS\nAmong 45 surgeons approached for interviews, 38 (84%) agreed to participate and provided reports on 146 incidents. Thirty-three percent of incidents resulted in permanent disability and 13% in patient death. Seventy-seven percent involved injuries related to an operation or other invasive intervention (visceral injuries, bleeding, and wound infection/dehiscence were the most common subtypes), 13% involved unnecessary or inappropriate procedures, and 10% involved unnecessary advancement of disease. 
Two thirds of the incidents involved errors during the intraoperative phase of surgical care, 27% during preoperative management, and 22% during postoperative management. Two or more clinicians were cited as substantially contributing to errors in 70% of the incidents. The most commonly cited systems factors contributing to errors were inexperience/lack of competence in a surgical task (53% of incidents), communication breakdowns among personnel (43%), and fatigue or excessive workload (33%). Surgeons reported significantly more systems failures in incidents involving emergency surgical care than those involving nonemergency care (P <.001).\n\n\nCONCLUSIONS\nSubjective incident reports gathered through interviews allow identification of characteristics of surgical errors and their leading contributing factors, which may help target research and interventions to reduce such errors." }, { "pmid": "15465935", "title": "Communication failures in the operating room: an observational classification of recurrent types and effects.", "abstract": "BACKGROUND\nIneffective team communication is frequently at the root of medical error. The objective of this study was to describe the characteristics of communication failures in the operating room (OR) and to classify their effects. This study was part of a larger project to develop a team checklist to improve communication in the OR.\n\n\nMETHODS\nTrained observers recorded 90 hours of observation during 48 surgical procedures. Ninety four team members participated from anesthesia (16 staff, 6 fellows, 3 residents), surgery (14 staff, 8 fellows, 13 residents, 3 clerks), and nursing (31 staff). Field notes recording procedurally relevant communication events were analysed using a framework which considered the content, audience, purpose, and occasion of a communication exchange. A communication failure was defined as an event that was flawed in one or more of these dimensions.\n\n\nRESULTS\n421 communication events were noted, of which 129 were categorized as communication failures. Failure types included \"occasion\" (45.7% of instances) where timing was poor; \"content\" (35.7%) where information was missing or inaccurate, \"purpose\" (24.0%) where issues were not resolved, and \"audience\" (20.9%) where key individuals were excluded. 36.4% of failures resulted in visible effects on system processes including inefficiency, team tension, resource waste, workaround, delay, patient inconvenience and procedural error.\n\n\nCONCLUSION\nCommunication failures in the OR exhibited a common set of problems. They occurred in approximately 30% of team exchanges and a third of these resulted in effects which jeopardized patient safety by increasing cognitive load, interrupting routine, and increasing tension in the OR." }, { "pmid": "22200377", "title": "The impact of nontechnical skills on technical performance in surgery: a systematic review.", "abstract": "BACKGROUND\nFailures in nontechnical and teamwork skills frequently lie at the heart of harm and near-misses in the operating room (OR). The purpose of this systematic review was to assess the impact of nontechnical skills on technical performance in surgery.\n\n\nSTUDY DESIGN\nMEDLINE, EMBASE, PsycINFO databases were searched, and 2,041 articles were identified. After limits were applied, 341 articles were retrieved for evaluation. Of these, 28 articles were accepted for this review. 
Data were extracted from the articles regarding sample population, study design and setting, measures of nontechnical skills and technical performance, study findings, and limitations.\n\n\nRESULTS\nOf the 28 articles that met inclusion criteria, 21 articles assessed the impact of surgeons' nontechnical skills on their technical performance. The evidence suggests that receiving feedback and effectively coping with stressful events in the OR has a beneficial impact on certain aspects of technical performance. Conversely, increased levels of fatigue are associated with detriments to surgical skill. One article assessed the impact of anesthesiologists' nontechnical skills on anesthetic technical performance, finding a strong positive correlation between the 2 skill sets. Finally, 6 articles assessed the impact of multiple nontechnical skills of the entire OR team on surgical performance. A strong relationship between teamwork failure and technical error was empirically demonstrated in these studies.\n\n\nCONCLUSIONS\nEvidence suggests that certain nontechnical aspects of performance can enhance or, if lacking, contribute to deterioration of surgeons' technical performance. The precise extent of this effect remains to be elucidated." }, { "pmid": "23083794", "title": "Cumulative team experience matters more than individual surgeon experience in cardiac surgery.", "abstract": "OBJECTIVES\nIndividual surgeon experience and the cumulative experience of the surgical team have both been implicated as factors that influence surgical efficiency. We sought to quantitatively evaluate the effects of both individual surgeon experience and the cumulative experience of attending surgeon-cardiothoracic fellow collaborations in isolated coronary artery bypass graft (CABG) procedures.\n\n\nMETHODS\nUsing a prospectively collected retrospective database, we analyzed all medical records of patients undergoing isolated CABG procedure at our institution. We used multivariate generalized estimating equation regression models to adjust for patient mix and subsequently evaluated the effect of both attending cardiac surgeon experience (since fellowship graduation) and the number of previous collaborations between attending cardiac surgeons and cardiothoracic fellow pairs on cardiopulmonary bypass and crossclamp times.\n\n\nRESULTS\nFrom 2001 to 2010, 4068 consecutive patients underwent isolated CABG procedure at our institution performed by 11 attending cardiac surgeons and 73 cardiothoracic fellows. Mean attending experience after fellowship graduation was 10.9 ± 8.0 years and mean number of cases between unique pairs of attending cardiac surgeons and cardiothoracic fellows was 10.0 ± 10.0 cases. After patient risk adjustment, both attending surgical experience since fellowship graduation and the number of previous collaborations between attending surgeons and cardiothoracic fellows were significantly associated with a reduction in cardiopulmonary bypass and crossclamp times (P < .001). The influence of attending-fellow pair experience far exceeded the influence of surgical experience with beta estimates for attending-fellow pair experience nearly three times that of attending surgeon experience.\n\n\nCONCLUSIONS\nCumulative experience of attending cardiac surgeons and cardiothoracic fellows has a dramatic effect on both cardiopulmonary bypass and crossclamp times, whereas attending cardiac surgeon learning curves following fellowship graduation are clinically insignificant. 
Taken together, these findings suggest that the primary driver of operative efficiency in CABG procedure is the collaborative experience of the attending surgeon-cardiothoracic fellow operative team, rather than the individual experience of the attending surgeon." }, { "pmid": "10783543", "title": "The influence of shared mental models on team process and performance.", "abstract": "The influence of teammates' shared mental models on team processes and performance was tested using 56 undergraduate dyads who \"flew\" a series of missions on a personal-computer-based flight-combat simulation. The authors both conceptually and empirically distinguished between teammates' task- and team-based mental models and indexed their convergence or \"sharedness\" using individually completed paired-comparisons matrices analyzed using a network-based algorithm. The results illustrated that both shared-team- and task-based mental models related positively to subsequent team process and performance. Furthermore, team processes fully mediated the relationship between mental model convergence and team effectiveness. Results are discussed in terms of the role of shared cognitions in team effectiveness and the applicability of different interventions designed to achieve such convergence." } ]
Royal Society Open Science
29308250
PMC5750017
10.1098/rsos.171200
A brittle star-like robot capable of immediately adapting to unexpected physical damage
A major challenge in robotic design is enabling robots to immediately adapt to unexpected physical damage. However, conventional robots require considerable time (more than several tens of seconds) for adaptation because the process entails high computational costs. To overcome this problem, we focus on a brittle star—a primitive creature with expendable body parts. Brittle stars, most of which have five flexible arms, occasionally lose some of them and promptly coordinate the remaining arms to escape from predators. We adopted a synthetic approach to elucidate the essential mechanism underlying this resilient locomotion. Specifically, based on behavioural experiments involving brittle stars whose arms were amputated in various ways, we inferred the decentralized control mechanism that self-coordinates the arm motions by constructing a simple mathematical model. We implemented this mechanism in a brittle star-like robot and demonstrated that it adapts to unexpected physical damage within a few seconds by automatically coordinating its undamaged arms similar to brittle stars. Through the above-mentioned process, we found that physical interaction between arms plays an essential role for the resilient inter-arm coordination of brittle stars. This finding will help develop resilient robots that can work in inhospitable environments. Further, it provides insights into the essential mechanism of resilient coordinated motions characteristic of animal locomotion.
2. Related works on brittle stars

2.1. Anatomical studies
Brittle stars have a circular body disc and typically five radiating arms (figure 1a). Each arm consists of a series of segments, each containing a roughly discoidal vertebral ossicle. Adjacent ossicles are linked by four muscle blocks, which enables the arm to bend both in the horizontal and vertical directions [28]. The arm movements are innervated by a simple distributed nervous system. Specifically, brittle stars have a ‘circumoral nerve ring’ that surrounds the disc and connects to ‘radial nerves’ running along the arms (figure 1b; electronic supplementary material, video S1, 1 : 13–1 : 31) [27]. Each arm communicates with its two adjacent arms via the nerve ring [22].

2.2. Behavioural studies
Brittle stars have a locomotion strategy distinguished from that of any other metazoan: arms with highly muscularized internal skeletons coordinate powerful strides for rapid movement across the ocean floor [23]. Despite the lack of a sophisticated centralized controller, brittle stars assign distinct roles to individual arms and coordinate their movements to propel the body [21–26]. When a stimulus is encountered and locomotion becomes necessary, each arm is assigned one of three roles in the gait corresponding to its position relative to the requisite direction of movement [21,22,25,26]. One arm is designated as the centre limb, two as the forelimbs and two as the hindlimbs. The centre limb is the arm parallel to the direction of movement. The forelimbs are the primary structures that work in coordination to move the organism forward, and the hindlimbs take a minimal role in propulsion. When the centre limb is anterior to the direction of desired disc movement (the locomotor mode referred to as ‘rowing’), the left and right forelimbs are adjacent to the centre limb, and the remaining two take the role of the hindlimbs. When the centre limb is posterior to the direction of movement (referred to as ‘reverse rowing’), the forelimbs are the most anterior limbs, while the hindlimbs flank the centre limb [21,23]. Each arm is capable of assuming any of the three roles. Therefore, to change direction, the animal simply reassigns the roles of the arms [23]. This system allows these organisms to move in every direction equally well without rotating the body to turn, as would be needed in a bilateral organism.

Further, brittle stars can seamlessly modify their locomotion strategy to accommodate a lost or inoperative arm [19,20,22,31]. For example, a brittle star can autotomize some of its arms and coordinate the remaining arms to evade predators or harmful stimuli (figure 1c; electronic supplementary material, video S1, 0 : 55–1 : 13) [19,20]. Brittle stars whose arms are amputated surgically in various ways can also maintain locomotion by coordinating the remaining arms [22].

2.3. Mathematical and robotic studies
Brittle star locomotion has also attracted attention in the fields of mathematics and robotics. For example, Lal et al. [32] developed a brittle star-like modular robot and let it learn its movements with a genetic algorithm, so that the modules coordinate with one another to generate locomotion. However, because the ‘performance phase’ of the robot is completely separated from the time-consuming ‘learning phase’, the robot cannot behave adaptively in real time.

In contrast, we have proposed a decentralized control mechanism for the locomotion of brittle stars with five arms, based on a synthetic approach [25,26]. Spontaneous role assignment of rhythmic and non-rhythmic arm movements was modelled by using an active rotator model that can describe both oscillatory and excitatory properties. The proposed mechanism was validated via simulations [25] and robot experiments [26].
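For reference, the standard single-unit active rotator takes the form dθ/dt = ω - a·sin θ (plus coupling and noise terms): it rotates continuously when |ω| > a and is excitable, resting at a fixed point until perturbed, when |ω| < a. The sketch below integrates only this single-unit form to illustrate the two regimes; it is a toy under stated assumptions, not the controller of [25,26], whose inter-arm coupling and sensory feedback terms are omitted, and the parameter names omega and a and their values here are generic choices rather than those of the paper.

```python
# Illustrative sketch of the standard active rotator d(theta)/dt = omega - a*sin(theta),
# oscillatory for |omega| > a and excitable for |omega| < a. This is NOT the
# controller of refs [25,26]: inter-arm coupling and sensory feedback are omitted.
import numpy as np

def simulate_active_rotator(omega, a, theta0=0.1, dt=1e-3, steps=20000, noise=0.0):
    """Euler integration of a single active rotator; returns the phase trajectory."""
    rng = np.random.default_rng(0)
    theta = np.empty(steps)
    theta[0] = theta0
    for k in range(1, steps):
        drift = omega - a * np.sin(theta[k - 1])
        theta[k] = theta[k - 1] + drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return theta

if __name__ == "__main__":
    oscillatory = simulate_active_rotator(omega=1.5, a=1.0)  # |omega| > a: keeps rotating
    excitable = simulate_active_rotator(omega=0.8, a=1.0)    # |omega| < a: settles near a fixed point
    print("oscillatory net rotation:", oscillatory[-1] - oscillatory[0])
    print("excitable net rotation:  ", excitable[-1] - excitable[0])
```

In the excitable regime, a sufficiently strong kick pushes the phase past the unstable fixed point and produces a single full rotation before the unit returns to rest, which is consistent with a single model describing both rhythmic and non-rhythmic arm movements.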
[ "20130624", "17110570", "26017452", "18929578", "25845627", "22573771", "22617431", "7285093", "18006736", "28412715", "28325917", "19545439" ]
[ { "pmid": "17110570", "title": "Resilient machines through continuous self-modeling.", "abstract": "Animals sustain the ability to operate after injury by creating qualitatively different compensatory behaviors. Although such robustness would be desirable in engineered systems, most machines fail in the face of unexpected damage. We describe a robot that can recover from such change autonomously, through continuous self-modeling. A four-legged machine uses actuation-sensation relationships to indirectly infer its own structure, and it then uses this self-model to generate forward locomotion. When a leg part is removed, it adapts the self-models, leading to the generation of alternative gaits. This concept may help develop more robust machines and shed light on self-modeling in animals." }, { "pmid": "26017452", "title": "Robots that can adapt like animals.", "abstract": "Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot 'think outside the box' to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage, but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot's prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury." }, { "pmid": "18929578", "title": "Environment-dependent morphology in plasmodium of true slime mold Physarum polycephalum and a network growth model.", "abstract": "Branching network growth patterns, depending on environmental conditions, in plasmodium of true slime mold Physarum polycephalum were investigated. Surprisingly, the patterns resemble those in bacterial colonies even though the biological mechanisms differ greatly. Bacterial colonies are collectives of microorganisms in which individual organisms have motility and interact through nutritious and chemical fields. 
In contrast, the plasmodium is a giant amoeba-like multinucleated unicellular organism that forms a network of tubular structures through which protoplasm streams. The cell motility of the plasmodium is generated by oscillation phenomena observed in the partial bodies, which interact through the tubular structures. First, we analyze characteristics of the morphology quantitatively, then we abstract local rules governing the growing process to construct a simple network growth model. This model is independent of specific systems, in which only two rules are applied. Finally, we discuss the mechanism of commonly observed biological pattern formations through comparison with the system of bacterial colonies." }, { "pmid": "25845627", "title": "C. elegans locomotion: small circuits, complex functions.", "abstract": "With 302 neurons in the adult Caenorhabditis elegans nervous system, it should be possible to build models of complex behaviors spanning sensory input to motor output. The logic of the motor circuit is an essential component of such models. Advances in physiological, anatomical, and neurogenetic analysis are revealing a surprisingly complex signaling network in the worm's small motor circuit. We are progressing towards a systems level dissection of the network of premotor interneurons, motor neurons, and muscle cells that move the animal forward and backward in its environment." }, { "pmid": "22573771", "title": "Getting around when you're round: quantitative analysis of the locomotion of the blunt-spined brittle star, Ophiocoma echinata.", "abstract": "Brittle stars (Ophiuroidea, Echinodermata) are pentaradially symmetrical echinoderms that use five multi-jointed limbs to locomote along the seafloor. Prior qualitative descriptions have claimed coordinated movements of the limbs in a manner similar to tetrapod vertebrates, but this has not been evaluated quantitatively. It is uncertain whether the ring-shaped nervous system, which lacks an anatomically defined anterior, is capable of generating rhythmic coordinated movements of multiple limbs. This study tested whether brittle stars possess distinct locomotor modes with strong inter-limb coordination as seen in limbed animals in other phyla (e.g. tetrapods and arthropods), or instead move each limb independently according to local sensory feedback. Limb tips and the body disk were digitized for 56 cycles from 13 individuals moving across sand. Despite their pentaradial anatomy, all individuals were functionally bilateral, moving along the axis of a central limb via synchronous motions of contralateral limbs (±~13% phase lag). Two locomotor modes were observed, distinguishable mainly by whether the central limb was directed forwards or backwards. Turning was accomplished without rotation of the body disk by defining a different limb as the center limb and shifting other limb identities correspondingly, and then continuing locomotion in the direction of the newly defined anterior. These observations support the hypothesis that, in spite of their radial body plan, brittle stars employ coordinated, bilaterally symmetrical locomotion." }, { "pmid": "22617431", "title": "Ophiuroid robot that self-organizes periodic and non-periodic arm movements.", "abstract": "Autonomous decentralized control is a key concept for understanding the mechanism underlying adaptive and versatile locomotion of animals. 
Although the design of an autonomous decentralized control system that ensures adaptability by using coupled oscillators has been proposed previously, it cannot comprehensively reproduce the versatility of animal behaviour. To tackle this problem, we focus on using ophiuroids as a simple model that exhibits versatile locomotion including periodic and non-periodic arm movements. Our existing model for ophiuroid locomotion uses an active rotator model that describes both oscillatory and excitatory properties. In this communication, we develop an ophiuroid robot to confirm the validity of this proposed model in the real world. We show that the robot travels by successfully coordinating periodic and non-periodic arm movements in response to external stimuli." }, { "pmid": "7285093", "title": "The giant neurone system in Ophiuroids. I. The general morphology of the radial nerve cords and circumoral nerve ring.", "abstract": "The nervous system of Ophiura texturata contains nerve fibres and cell bodies that are an order of magnitude larger than anything previously described in the Asteroidea and Echinoidea. These large nerve cells are designated giant fibres. Giant nerve cells are present in both the ectoneural and hyponeural nervous system. The layout of these nerve cells is described and it is shown that the organization is repeated in each segmental ganglion that makes up the radial nerve cord. The circumoral nerve ring is composed, in the main, of tracts of nerve fibres joining the radial nerves, and it contains only limited areas of neuropil associated with the alimentary canal and muscle of the disc and jaws. Degeneration studies have shown that each segmental ganglion of the radial nerve cords contains a discrete population of neurones separate from adjacent ganglion and that there are not anatomically continuous giant fibres along the whole length of the nerve cord." }, { "pmid": "18006736", "title": "Self-organization, embodiment, and biologically inspired robotics.", "abstract": "Robotics researchers increasingly agree that ideas from biology and self-organization can strongly benefit the design of autonomous robots. Biological organisms have evolved to perform and survive in a world characterized by rapid changes, high uncertainty, indefinite richness, and limited availability of information. Industrial robots, in contrast, operate in highly controlled environments with no or very little uncertainty. Although many challenges remain, concepts from biologically inspired (bio-inspired) robotics will eventually enable researchers to engineer machines for the real world that possess at least some of the desirable properties of biological organisms, such as adaptivity, robustness, versatility, and agility." }, { "pmid": "28412715", "title": "Non-centralized and functionally localized nervous system of ophiuroids: evidence from topical anesthetic experiments.", "abstract": "Ophiuroids locomote along the seafloor by coordinated rhythmic movements of multi-segmented arms. The mechanisms by which such coordinated movements are achieved are a focus of interest from the standpoints of neurobiology and robotics, because ophiuroids appear to lack a central nervous system that could exert centralized control over five arms. To explore the underlying mechanism of arm coordination, we examined the effects of selective anesthesia to various parts of the body of ophiuroids on locomotion. 
We observed the following: (1) anesthesia of the circumoral nerve ring completely blocked the initiation of locomotion; however, initiation of single arm movement, such as occurs during the retrieval of food, was unaffected, indicating that the inability to initiate locomotion was not due to the spread of the anesthetic agent. (2) During locomotion, the midsegments of the arms periodically made contact with the floor to elevate the disc. In contrast, the distal segments of the arms were pointed aborally and did not make contact with the floor. (3) When the midsegments of all arms were anesthetized, arm movements were rendered completely uncoordinated. In contrast, even when only one arm was left intact, inter-arm coordination was preserved. (4) Locomotion was unaffected by anesthesia of the distal arms. (5) A radial nerve block to the proximal region of an arm abolished coordination among the segments of that arm, rendering it motionless. These findings indicate that the circumoral nerve ring and radial nerves play different roles in intra- and inter-arm coordination in ophiuroids." }, { "pmid": "28325917", "title": "A Quadruped Robot Exhibiting Spontaneous Gait Transitions from Walking to Trotting to Galloping.", "abstract": "The manner in which quadrupeds change their locomotive patterns-walking, trotting, and galloping-with changing speed is poorly understood. In this paper, we provide evidence for interlimb coordination during gait transitions using a quadruped robot for which coordination between the legs can be self-organized through a simple \"central pattern generator\" (CPG) model. We demonstrate spontaneous gait transitions between energy-efficient patterns by changing only the parameter related to speed. Interlimb coordination was achieved with the use of local load sensing only without any preprogrammed patterns. Our model exploits physical communication through the body, suggesting that knowledge of physical communication is required to understand the leg coordination mechanism in legged animals and to establish design principles for legged robots that can reproduce flexible and efficient locomotion." }, { "pmid": "19545439", "title": "MicroCT for comparative morphology: simple staining methods allow high-contrast 3D imaging of diverse non-mineralized animal tissues.", "abstract": "BACKGROUND\nComparative, functional, and developmental studies of animal morphology require accurate visualization of three-dimensional structures, but few widely applicable methods exist for non-destructive whole-volume imaging of animal tissues. Quantitative studies in particular require accurately aligned and calibrated volume images of animal structures. X-ray microtomography (microCT) has the potential to produce quantitative 3D images of small biological samples, but its widespread use for non-mineralized tissues has been limited by the low x-ray contrast of soft tissues. Although osmium staining and a few other techniques have been used for contrast enhancement, generally useful methods for microCT imaging for comparative morphology are still lacking.\n\n\nRESULTS\nSeveral very simple and versatile staining methods are presented for microCT imaging of animal soft tissues, along with advice on tissue fixation and sample preparation. The stains, based on inorganic iodine and phosphotungstic acid, are easier to handle and much less toxic than osmium, and they produce high-contrast x-ray images of a wide variety of soft tissues. 
The breadth of possible applications is illustrated with a few microCT images of model and non-model animals, including volume and section images of vertebrates, embryos, insects, and other invertebrates. Each image dataset contains x-ray absorbance values for every point in the imaged volume, and objects as small as individual muscle fibers and single blood cells can be resolved in their original locations and orientations within the sample.\n\n\nCONCLUSION\nWith very simple contrast staining, microCT imaging can produce quantitative, high-resolution, high-contrast volume images of animal soft tissues, without destroying the specimens and with possibilities of combining with other preparation and imaging methods. Such images are expected to be useful in comparative, developmental, functional, and quantitative studies of morphology." } ]
Scientific Reports
29343692
PMC5772550
10.1038/s41598-018-19194-4
Symmetric Decomposition of Asymmetric Games
We introduce new theoretical insights into two-population asymmetric games allowing for an elegant symmetric decomposition into two single population symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B) can be decomposed into its symmetric counterparts by envisioning and investigating the payoff tables (A and B) that constitute the asymmetric game, as two independent, single population, symmetric games. We reveal several surprising formal relationships between an asymmetric two-population game and its symmetric single population counterparts, which facilitate a convenient analysis of the original asymmetric game due to the dimensionality reduction of the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the symmetric counterpart game determined by payoff table A, and x is a Nash equilibrium of the symmetric counterpart game determined by payoff table B. Also the reverse holds and combinations of Nash equilibria of the counterpart games form Nash equilibria of the asymmetric game. We illustrate how these formal relationships aid in identifying and analysing the Nash structure of asymmetric games, by examining the evolutionary dynamics of the simpler counterpart games in several canonical examples.
Related Work

The most straightforward and classical approach to asymmetric games is to treat agents as evolving separately: one population per player, where each agent in a population interacts by playing against agent(s) from the other population(s), i.e. co-evolution21. This assumes that players of these games are always fundamentally attached to one role and never need to know/understand how to play as the other player. In many cases, though, a player may want to know how to play as either player. For example, a good chess player should know how to play as white or black. This reasoning inspired the role-based symmetrization of asymmetric games22.

The role-based symmetrization of an arbitrary bimatrix game defines a new (extensive-form) game where, before choosing actions, the roles of the two players are decided by uniform random chance. If two roles are available, an agent is assigned one specific role with probability 1/2. Then, the agent plays the game under that role and collects the role-specific payoff appropriately. A new strategy space is defined, which is the product of both players' strategy spaces, and a new payoff matrix is defined that computes (expected) payoffs for each combination of pure strategies that could arise under the different roles. There are known relationships between the sets of evolutionarily stable strategies and the rest points of the replicator dynamics of the original and the symmetrized game19,23.

This single-population model forces the players to be general: able to devise a strategy for each role, which may unnecessarily complicate algorithms that compute strategies for such players. In general, the payoff matrix in the resulting role-based symmetrization is n! (n being the number of agents) times larger due to the number of permutations of player role assignments. There are two-population variants that formulate the problem slightly differently: a new matrix that encapsulates both players' utilities assigns 0 utility to combinations of roles that are not in one-to-one correspondence with players24. This too, however, results in an unnecessarily larger (albeit sparse) matrix.

Lastly, there are approaches with structured asymmetry that arises from ecological constraints, such as locality in a network or genotype/genetic relationships between population members25. Here too, replicator dynamics and their properties are derived by transforming the payoff matrix into a larger symmetric matrix.

Our primary motivation is to enable analysis techniques for asymmetric games. However, we do this by introducing new symmetric counterpart dynamics rather than using standard dynamics on a symmetrized game. Therefore, the traditional role interpretation, as well as any method that enlarges the game for the purpose of obtaining symmetry, is unnecessarily complex for our purposes. Consequently, we consider the original co-evolutionary interpretation and derive new (lower-dimensional) strategy space mappings.
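As a point of reference for the co-evolutionary (two-population) interpretation discussed above, the standard two-population replicator dynamics for a bimatrix game (A, B) let each population's strategy mixture evolve against the other's. The short Python/NumPy sketch below illustrates these textbook dynamics only; it is not the symmetric counterpart dynamics introduced in this article, and the payoff tables and starting mixtures are made-up values for illustration.

import numpy as np

def replicator_step(x, y, A, B, dt=0.01):
    # Classical two-population replicator update: the row population x evolves
    # against the column population y, and vice versa.
    fx = A @ y                      # payoff of each row strategy against mixture y
    fy = B.T @ x                    # payoff of each column strategy against mixture x
    x = x + dt * x * (fx - x @ fx)  # grow strategies that beat the population average
    y = y + dt * y * (fy - y @ fy)
    return x / x.sum(), y / y.sum() # renormalize to guard against numerical drift

# Illustrative asymmetric coordination game (payoff values are assumptions).
A = np.array([[3.0, 0.0], [0.0, 2.0]])   # row player's payoff table
B = np.array([[2.0, 0.0], [0.0, 3.0]])   # column player's payoff table
x = np.array([0.6, 0.4])                 # initial row-population mixture
y = np.array([0.3, 0.7])                 # initial column-population mixture
for _ in range(5000):
    x, y = replicator_step(x, y, A, B)
print("row population:", x, "column population:", y)

Depending on the starting mixtures, trajectories of this example game typically settle on one of its pure coordination equilibria.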
[ "23519283", "27892509", "16843499", "7412323", "26308326", "27161141", "28355181" ]
[ { "pmid": "23519283", "title": "Evolution of collective action in adaptive social structures.", "abstract": "Many problems in nature can be conveniently framed as a problem of evolution of collective cooperative behaviour, often modelled resorting to the tools of evolutionary game theory in well-mixed populations, combined with an appropriate N-person dilemma. Yet, the well-mixed assumption fails to describe the population dynamics whenever individuals have a say in deciding which groups they will participate. Here we propose a simple model in which dynamical group formation is described as a result of a topological evolution of a social network of interactions. We show analytically how evolutionary dynamics under public goods games in finite adaptive networks can be effectively transformed into a N-Person dilemma involving both coordination and co-existence. Such dynamics would be impossible to foresee from more conventional 2-person interactions as well as from descriptions based on infinite, well-mixed populations. Finally, we show how stochastic effects help rendering cooperation viable, promoting polymorphic configurations in which cooperators prevail." }, { "pmid": "27892509", "title": "Evolution of cooperation under indirect reciprocity and arbitrary exploration rates.", "abstract": "Cooperation has been recognized as an evolutionary puzzle since Darwin, and remains identified as one of the biggest challenges of the XXIst century. Indirect Reciprocity (IR), a key mechanism that humans employ to cooperate with each other, establishes that individual behaviour depends on reputations, which in turn evolve depending on social norms that classify behaviours as good or bad. While it is well known that different social norms give rise to distinct cooperation levels, it remains unclear how the performance of each norm is influenced by the random exploration of new behaviours, often a key component of social dynamics where a plethora of stimuli may compel individuals to deviate from pre-defined behaviours. Here we study, for the first time, the impact of varying degrees of exploration rates - the likelihood of spontaneously adopting another strategy, akin to a mutation probability in evolutionary dynamics - in the emergence of cooperation under IR. We show that high exploration rates may either improve or harm cooperation, depending on the underlying social norm at work. Regarding some of the most popular social norms studied to date, we find that cooperation under Simple-standing and Image-score is enhanced by high exploration rates, whereas the opposite occurs for Stern-judging and Shunning." }, { "pmid": "16843499", "title": "How to reach linguistic consensus: a proof of convergence for the naming game.", "abstract": "In this paper we introduce a mathematical model of naming games. Naming games have been widely used within research on the origins and evolution of language. Despite the many interesting empirical results these studies have produced, most of this research lacks a formal elucidating theory. In this paper we show how a population of agents can reach linguistic consensus, i.e. learn to use one common language to communicate with one another. Our approach differs from existing formal work in two important ways: one, we relax the too strong assumption that an agent samples infinitely often during each time interval. This assumption is usually made to guarantee convergence of an empirical learning process to a deterministic dynamical system. 
Two, we provide a proof that under these new realistic conditions, our model converges to a common language for the entire population of agents. Finally the model is experimentally validated." }, { "pmid": "26308326", "title": "Asymmetric Evolutionary Games.", "abstract": "Evolutionary game theory is a powerful framework for studying evolution in populations of interacting individuals. A common assumption in evolutionary game theory is that interactions are symmetric, which means that the players are distinguished by only their strategies. In nature, however, the microscopic interactions between players are nearly always asymmetric due to environmental effects, differing baseline characteristics, and other possible sources of heterogeneity. To model these phenomena, we introduce into evolutionary game theory two broad classes of asymmetric interactions: ecological and genotypic. Ecological asymmetry results from variation in the environments of the players, while genotypic asymmetry is a consequence of the players having differing baseline genotypes. We develop a theory of these forms of asymmetry for games in structured populations and use the classical social dilemmas, the Prisoner's Dilemma and the Snowdrift Game, for illustrations. Interestingly, asymmetric games reveal essential differences between models of genetic evolution based on reproduction and models of cultural evolution based on imitation that are not apparent in symmetric games." }, { "pmid": "27161141", "title": "Comparing reactive and memory-one strategies of direct reciprocity.", "abstract": "Direct reciprocity is a mechanism for the evolution of cooperation based on repeated interactions. When individuals meet repeatedly, they can use conditional strategies to enforce cooperative outcomes that would not be feasible in one-shot social dilemmas. Direct reciprocity requires that individuals keep track of their past interactions and find the right response. However, there are natural bounds on strategic complexity: Humans find it difficult to remember past interactions accurately, especially over long timespans. Given these limitations, it is natural to ask how complex strategies need to be for cooperation to evolve. Here, we study stochastic evolutionary game dynamics in finite populations to systematically compare the evolutionary performance of reactive strategies, which only respond to the co-player's previous move, and memory-one strategies, which take into account the own and the co-player's previous move. In both cases, we compare deterministic strategy and stochastic strategy spaces. For reactive strategies and small costs, we find that stochasticity benefits cooperation, because it allows for generous-tit-for-tat. For memory one strategies and small costs, we find that stochasticity does not increase the propensity for cooperation, because the deterministic rule of win-stay, lose-shift works best. For memory one strategies and large costs, however, stochasticity can augment cooperation." }, { "pmid": "28355181", "title": "Evolutionary dynamics on any population structure.", "abstract": "Evolution occurs in populations of reproducing individuals. The structure of a population can affect which traits evolve. Understanding evolutionary game dynamics in structured populations remains difficult. Mathematical results are known for special structures in which all individuals have the same number of neighbours. The general case, in which the number of neighbours can vary, has remained open. 
For arbitrary selection intensity, the problem is in a computational complexity class that suggests there is no efficient algorithm. Whether a simple solution for weak selection exists has remained unanswered. Here we provide a solution for weak selection that applies to any graph or network. Our method relies on calculating the coalescence times of random walks. We evaluate large numbers of diverse population structures for their propensity to favour cooperation. We study how small changes in population structure-graph surgery-affect evolutionary outcomes. We find that cooperation flourishes most in societies that are based on strong pairwise ties." } ]
Plant Methods
29375647
PMC5773030
10.1186/s13007-018-0273-z
The use of plant models in deep learning: an application to leaf counting in rosette plants
Deep learning presents many opportunities for image-based plant phenotyping. Here we consider the capability of deep convolutional neural networks to perform the leaf counting task. Deep learning techniques typically require large and diverse datasets to learn generalizable models without providing a priori an engineered algorithm for performing the task. This requirement is challenging, however, for applications in the plant phenotyping field, where available datasets are often small and the costs associated with generating new data are high. In this work we propose a new method for augmenting plant phenotyping datasets using rendered images of synthetic plants. We demonstrate that the use of high-quality 3D synthetic plants to augment a dataset can improve performance on the leaf counting task. We also show that the ability of the model to generate an arbitrary distribution of phenotypes mitigates the problem of dataset shift when training and testing on different datasets. Finally, we show that real and synthetic plants are significantly interchangeable when training a neural network on the leaf counting task.

Electronic supplementary material: The online version of this article (10.1186/s13007-018-0273-z) contains supplementary material, which is available to authorized users.
Related work

The use of synthetic or simulation data has been explored in several visual learning contexts, including pose estimation [29] as well as viewpoint estimation [30]. In the plant phenotyping literature, models have been used as testing data to validate image-based root system descriptions [23], as well as to train machine learning models for root description tasks [31]. However, when using synthetic images, the model was both trained and tested on synthetic data, leaving it unclear whether the use of synthetic roots could offer advantages to the analysis of real root systems, or how a similar technique would perform on shoots.

The specialized root system models used by Benoit et al. [23] and Lobet et al. [31] are not applicable to tasks involving the aerial parts of a plant—the models have not been generalized to produce structures other than roots. Nonetheless, for image-based tasks Benoit et al. [23] were the first to employ a model [24] based on the L-system formalism. Because of its effectiveness in modelling the structure and development of plants, we chose the same formalism for creating our Arabidopsis rosette model.
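For readers unfamiliar with the formalism, an L-system generates plant-like branching structures by repeatedly rewriting a string of symbols according to production rules; geometry is then obtained by interpreting the final string (e.g., with turtle graphics). The minimal Python sketch below shows only the rewriting step, with an axiom and rules chosen purely for illustration; it is not the Arabidopsis rosette model used in this work.

def lsystem(axiom, rules, iterations):
    # Rewrite every symbol in parallel at each iteration; symbols without a
    # production rule are copied unchanged.
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A simple bracketed L-system for a small branching plant:
# F = draw forward, + / - = turn left/right, [ ] = push/pop the turtle state.
rules = {"X": "F[+X][-X]FX", "F": "FF"}
print(lsystem("X", rules, 3))

Rendering such strings with different rule sets, branching angles and iteration counts is one simple way to obtain a wide distribution of synthetic plant shapes.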
[ "22074787", "28066963", "28736569", "23286457", "14732445", "22235985", "25469374", "5659071", "22670147", "28220127", "22942389" ]
[ { "pmid": "22074787", "title": "Phenomics--technologies to relieve the phenotyping bottleneck.", "abstract": "Global agriculture is facing major challenges to ensure global food security, such as the need to breed high-yielding crops adapted to future climates and the identification of dedicated feedstock crops for biofuel production (biofuel feedstocks). Plant phenomics offers a suite of new technologies to accelerate progress in understanding gene function and environmental responses. This will enable breeders to develop new agricultural germplasm to support future agricultural production. In this review we present plant physiology in an 'omics' perspective, review some of the new high-throughput and high-resolution phenotyping tools and discuss their application to plant biology, functional genomics and crop breeding." }, { "pmid": "28066963", "title": "Phenotiki: an open software and hardware platform for affordable and easy image-based phenotyping of rosette-shaped plants.", "abstract": "Phenotyping is important to understand plant biology, but current solutions are costly, not versatile or are difficult to deploy. To solve this problem, we present Phenotiki, an affordable system for plant phenotyping that, relying on off-the-shelf parts, provides an easy to install and maintain platform, offering an out-of-box experience for a well-established phenotyping need: imaging rosette-shaped plants. The accompanying software (with available source code) processes data originating from our device seamlessly and automatically. Our software relies on machine learning to devise robust algorithms, and includes an automated leaf count obtained from 2D images without the need of depth (3D). Our affordable device (~€200) can be deployed in growth chambers or greenhouse to acquire optical 2D images of approximately up to 60 adult Arabidopsis rosettes concurrently. Data from the device are processed remotely on a workstation or via a cloud application (based on CyVerse). In this paper, we present a proof-of-concept validation experiment on top-view images of 24 Arabidopsis plants in a combination of genotypes that has not been compared previously. Phenotypic analysis with respect to morphology, growth, color and leaf count has not been performed comprehensively before now. We confirm the findings of others on some of the extracted traits, showing that we can phenotype at reduced cost. We also perform extensive validations with external measurements and with higher fidelity equipment, and find no loss in statistical accuracy when we use the affordable setting that we propose. Device set-up instructions and analysis software are publicly available ( http://phenotiki.com)." }, { "pmid": "28736569", "title": "Deep Plant Phenomics: A Deep Learning Platform for Complex Plant Phenotyping Tasks.", "abstract": "Plant phenomics has received increasing interest in recent years in an attempt to bridge the genotype-to-phenotype knowledge gap. There is a need for expanded high-throughput phenotyping capabilities to keep up with an increasing amount of data from high-dimensional imaging sensors and the desire to measure more complex phenotypic traits (Knecht et al., 2016). In this paper, we introduce an open-source deep learning tool called Deep Plant Phenomics. This tool provides pre-trained neural networks for several common plant phenotyping tasks, as well as an easy platform that can be used by plant scientists to train models for their own phenotyping applications. 
We report performance results on three plant phenotyping benchmarks from the literature, including state of the art performance on leaf counting, as well as the first published results for the mutant classification and age regression tasks for Arabidopsis thaliana." }, { "pmid": "23286457", "title": "Novel scanning procedure enabling the vectorization of entire rhizotron-grown root systems.", "abstract": ": This paper presents an original spit-and-combine imaging procedure that enables the complete vectorization of complex root systems grown in rhizotrons. The general principle of the method is to (1) separate the root system into a small number of large pieces to reduce root overlap, (2) scan these pieces one by one, (3) analyze separate images with a root tracing software and (4) combine all tracings into a single vectorized root system. This method generates a rich dataset containing morphological, topological and geometrical information of entire root systems grown in rhizotrons. The utility of the method is illustrated with a detailed architectural analysis of a 20-day old maize root system, coupled with a spatial analysis of water uptake patterns." }, { "pmid": "14732445", "title": "Modeling plant growth and development.", "abstract": "Computational plant models or 'virtual plants' are increasingly seen as a useful tool for comprehending complex relationships between gene function, plant physiology, plant development, and the resulting plant form. The theory of L-systems, which was introduced by Lindemayer in 1968, has led to a well-established methodology for simulating the branching architecture of plants. Many current architectural models provide insights into the mechanisms of plant development by incorporating physiological processes, such as the transport and allocation of carbon. Other models aim at elucidating the geometry of plant organs, including flower petals and apical meristems, and are beginning to address the relationship between patterns of gene expression and the resulting plant form." }, { "pmid": "22235985", "title": "Computational models of plant development and form.", "abstract": "The use of computational techniques increasingly permeates developmental biology, from the acquisition, processing and analysis of experimental data to the construction of models of organisms. Specifically, models help to untangle the non-intuitive relations between local morphogenetic processes and global patterns and forms. We survey the modeling techniques and selected models that are designed to elucidate plant development in mechanistic terms, with an emphasis on: the history of mathematical and computational approaches to developmental plant biology; the key objectives and methodological aspects of model construction; the diverse mathematical and computational methods related to plant modeling; and the essence of two classes of models, which approach plant morphogenesis from the geometric and molecular perspectives. In the geometric domain, we review models of cell division patterns, phyllotaxis, the form and vascular patterns of leaves, and branching patterns. In the molecular-level domain, we focus on the currently most extensively developed theme: the role of auxin in plant morphogenesis. The review is addressed to both biologists and computational modelers." 
}, { "pmid": "25469374", "title": "Functional-structural plant models: a growing paradigm for plant studies.", "abstract": "A number of research groups in various areas of plant biology as well as computer science and applied mathematics have addressed modelling the spatiotemporal dynamics of growth and development of plants. This has resulted in development of functional-structural plant models (FSPMs). In FSPMs, the plant structure is always explicitly represented in terms of a network of elementary units. In this respect, FSPMs are different from more abstract models in which a simplified representation of the plant structure is frequently used (e.g. spatial density of leaves, total biomass, etc.). This key feature makes it possible to build modular models and creates avenues for efficient exchange of model components and experimental data. They are being used to deal with the complex 3-D structure of plants and to simulate growth and development occurring at spatial scales from cells to forest areas, and temporal scales from seconds to decades and many plant generations. The plant types studied also cover a broad spectrum, from algae to trees. This special issue of Annals of Botany features selected papers on FSPM topics such as models of morphological development, models of physical and biological processes, integrated models predicting dynamics of plants and plant communities, modelling platforms, methods for acquiring the 3-D structures of plants using automated measurements, and practical applications for agronomic purposes." }, { "pmid": "22670147", "title": "L-py: an L-system simulation framework for modeling plant architecture development based on a dynamic language.", "abstract": "The study of plant development requires increasingly powerful modeling tools to help understand and simulate the growth and functioning of plants. In the last decade, the formalism of L-systems has emerged as a major paradigm for modeling plant development. Previous implementations of this formalism were made based on static languages, i.e., languages that require explicit definition of variable types before using them. These languages are often efficient but involve quite a lot of syntactic overhead, thus restricting the flexibility of use for modelers. In this work, we present an adaptation of L-systems to the Python language, a popular and powerful open-license dynamic language. We show that the use of dynamic language properties makes it possible to enhance the development of plant growth models: (i) by keeping a simple syntax while allowing for high-level programming constructs, (ii) by making code execution easy and avoiding compilation overhead, (iii) by allowing a high-level of model reusability and the building of complex modular models, and (iv) by providing powerful solutions to integrate MTG data-structures (that are a common way to represent plants at several scales) into L-systems and thus enabling to use a wide spectrum of computer tools based on MTGs developed for plant architecture. We then illustrate the use of L-Py in real applications to build complex models or to teach plant modeling in the classroom." }, { "pmid": "28220127", "title": "Nitric Oxide Ameliorates Zinc Oxide Nanoparticles Phytotoxicity in Wheat Seedlings: Implication of the Ascorbate-Glutathione Cycle.", "abstract": "The present study investigates ameliorative effects of nitric oxide (NO) against zinc oxide nanoparticles (ZnONPs) phytotoxicity in wheat seedlings. 
ZnONPs exposure hampered growth of wheat seedlings, which coincided with reduced photosynthetic efficiency (Fv/Fm and qP), due to increased accumulation of zinc (Zn) in xylem and phloem saps. However, SNP supplementation partially mitigated the ZnONPs-mediated toxicity through the modulation of photosynthetic activity and Zn accumulation in xylem and phloem saps. Further, the results reveal that ZnONPs treatments enhanced levels of hydrogen peroxide and lipid peroxidation (as malondialdehyde; MDA) due to severely inhibited activities of the following ascorbate-glutatione cycle (AsA-GSH) enzymes: ascorbate peroxidase, glutathione reductase, monodehydroascorbate reductase and dehydroascorbate reductase, and its associated metabolites ascorbate and glutathione. In contrast to this, the addition of SNP together with ZnONPs maintained the cellular functioning of the AsA-GSH cycle properly, hence lesser damage was noticed in comparison to ZnONPs treatments alone. The protective effect of SNP against ZnONPs toxicity on fresh weight (growth) can be reversed by 2-(4carboxy-2-phenyl)-4,4,5,5-tetramethyl- imidazoline-1-oxyl-3-oxide, a NO scavenger, and thus suggesting that NO released from SNP ameliorates ZnONPs toxicity. Overall, the results of the present study have shown the role of NO in the reducing of ZnONPs toxicity through the regulation of accumulation of Zn as well as the functioning of the AsA-GSH cycle." }, { "pmid": "22942389", "title": "Rosette tracker: an open source image analysis tool for automatic quantification of genotype effects.", "abstract": "Image analysis of Arabidopsis (Arabidopsis thaliana) rosettes is an important nondestructive method for studying plant growth. Some work on automatic rosette measurement using image analysis has been proposed in the past but is generally restricted to be used only in combination with specific high-throughput monitoring systems. We introduce Rosette Tracker, a new open source image analysis tool for evaluation of plant-shoot phenotypes. This tool is not constrained by one specific monitoring system, can be adapted to different low-budget imaging setups, and requires minimal user input. In contrast with previously described monitoring tools, Rosette Tracker allows us to simultaneously quantify plant growth, photosynthesis, and leaf temperature-related parameters through the analysis of visual, chlorophyll fluorescence, and/or thermal infrared time-lapse sequences. Freely available, Rosette Tracker facilitates the rapid understanding of Arabidopsis genotype effects." } ]
Scientific Reports
29374175
PMC5786031
10.1038/s41598-018-19440-9
Three-dimensional reconstruction and NURBS-based structured meshing of coronary arteries from the conventional X-ray angiography projection images
Despite its two-dimensional nature, X-ray angiography (XRA) has served as the gold standard imaging technique in interventional cardiology for over five decades. Accordingly, demands for tools that could increase efficiency of the XRA procedure for the quantitative analysis of coronary arteries (CA) are constantly increasing. The aim of this study was to propose a novel procedure for three-dimensional modeling of CA from uncalibrated XRA projections. A comprehensive mathematical model of the image formation was developed and used with a robust genetic algorithm optimizer to determine the calibration parameters across XRA views. The frame correspondences between XRA acquisitions were found using a partial-matching approach. Using the same matching method, an efficient procedure for vessel centerline reconstruction was developed. Finally, the problem of meshing complex CA trees was simplified to independent reconstruction and meshing of connected branches using the proposed nonuniform rational B-spline (NURBS)-based method. Because it enables structured quadrilateral and hexahedral meshing, our method is suitable for the subsequent computational modeling of CA physiology (i.e. coronary blood flow, fractional flow reserve, virtual stenting and plaque progression). Extensive validations using digital, physical, and clinical datasets showed competitive performance and potential for further application on a wider scale.
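Since the abstract describes fitting calibration parameters by minimizing a reprojection criterion with an evolutionary optimizer, the Python sketch below shows the general idea on a toy cone-beam geometry: project synthetic 3D points with known gantry angles, then try to recover the angles by minimizing the mean reprojection error. The simplified geometry, the parameter names and the use of SciPy's differential evolution (as a stand-in for the article's genetic algorithm) are all assumptions for illustration, not the article's actual image-formation model.

import numpy as np
from scipy.optimize import differential_evolution

def project(points, alpha, beta, sid=1000.0, sod=750.0, pixel=0.2):
    # Toy cone-beam projection: rotate the 3D points by two gantry angles,
    # then apply a perspective divide to get 2D detector coordinates (pixels).
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cb, -sb], [0.0, sb, cb]])
    p = points @ (Rx @ Rz).T
    depth = sod + p[:, 2]
    return sid * p[:, :2] / depth[:, None] / pixel

def reprojection_error(params, points3d, observed2d):
    alpha, beta = params
    return np.mean(np.linalg.norm(project(points3d, alpha, beta) - observed2d, axis=1))

# Synthetic experiment: generate noisy observations with known angles, then fit the angles.
rng = np.random.default_rng(0)
pts = rng.normal(scale=30.0, size=(25, 3))
obs = project(pts, alpha=0.4, beta=-0.2) + rng.normal(scale=0.1, size=(25, 2))
result = differential_evolution(reprojection_error, bounds=[(-np.pi, np.pi)] * 2,
                                args=(pts, obs), seed=0)
print("recovered angles:", result.x)

In the article itself the optimization covers the full set of intrinsic and extrinsic acquisition parameters (see the Proposed row in Table 1 below), rather than just two angles as in this sketch.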
Related work

The majority of the available methods for reconstructing CA from XRA are semi-automatic and consist of these five steps: (1) pairing of frames acquired from different views; (2) vessel segmentation, decomposition and tracking in the XRA dynamic runs; (3) calibration of the parameters defining the device orientations; (4) modeling of the CA centerline from its synchronized segmentations; and (5) reconstruction of the CA tree surface. In this part of the section, we briefly review state-of-the-art approaches for solving these reconstruction tasks (see also Table 1).

Table 1. Comparative overview of the features provided by studies.

Method | Calibration | Centerline | Lumen approximation: cross-section (# views) | 4D | Delivered mesh format | Surface validation: approach (referent)
Proposed | Opt. I & E | PM (B-Spline) | C (2), E (2–4), P (4+) | + | PS (NURBS, TRI, TET, HEX, QUAD) | DP QA (GT), PP QA (CT), RP QA (CT)
Chen & Carroll7 | Opt. E | EM (poly) | C (2) | — | PCT | RP VA
Cañero et al.10 | + | AC (poly) | — | — | N/A | —
Chen & Carroll16 | Opt. E | EM (poly) | C (2) | + | PCT | RP VA
Andriotis et al.18 | Opt. E | EM (poly) | C (2) | + | PCT | PP QA (CT), RP QA (CT)
Yang et al.9 | Opt. I & E | EM (poly) | C (2) | — | PCT | PP VA (GT), RP VA
Zheng et al.14 | Opt. E | AC (poly) | — | + | N/A | —
Yang et al.22 | Opt. I & E | AC (poly) | N/A | + | PCT | DP QA (GT), RP VA
Cong et al.12 | + | AC (poly) | E (2–5) | — | PCT | DP QA (GT), RP VA

Abbreviations: I = intrinsic, E = extrinsic, Opt. = optimization, + = precalibrated, AC = active contours, PM = partial matching, EM = epipolar matching, C = circle, E = ellipse, P = polyline, PS = parametric surface, PCT = point cloud triangulation, DP = digital phantom, PP = physical phantom, RP = real patient, GT = ground truth, CT = computed tomography, VA = visual assessment, QA = quantitative assessment.
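Because the delivered meshes are built from NURBS entities (Proposed row of Table 1), a brief reminder of how a NURBS curve is evaluated may help: a point on the curve is a rational, weight-blended combination of control points, with B-spline basis functions obtained from the Cox-de Boor recursion. The Python/NumPy sketch below evaluates a single NURBS curve with made-up control points, weights and knot vector; it illustrates the general NURBS machinery only and is not the article's lumen-meshing procedure.

import numpy as np

def bspline_basis(i, k, u, knots):
    # Cox-de Boor recursion for the i-th B-spline basis function of degree k.
    if k == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] != knots[i]:
        left = (u - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, u, knots)
    if knots[i + k + 1] != knots[i + 1]:
        right = (knots[i + k + 1] - u) / (knots[i + k + 1] - knots[i + 1]) * bspline_basis(i + 1, k - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, knots, degree):
    # C(u) = sum_i N_i(u) * w_i * P_i / sum_i N_i(u) * w_i
    num = np.zeros(ctrl.shape[1])
    den = 0.0
    for i in range(len(ctrl)):
        n = bspline_basis(i, degree, u, knots) * weights[i]
        num += n * ctrl[i]
        den += n
    return num / den

# Illustrative quadratic NURBS curve in 3D (control points and weights are assumptions).
ctrl = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.5], [2.0, 0.0, 1.0], [3.0, 1.0, 1.5]])
weights = np.array([1.0, 0.8, 1.2, 1.0])
degree = 2
knots = np.array([0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0])  # clamped knot vector
for u in np.linspace(0.0, 0.999, 5):   # stay below 1.0: the basis uses half-open intervals
    print(round(u, 3), nurbs_point(u, ctrl, weights, knots, degree))

A lumen surface can then be represented by the tensor-product extension of the same idea (a NURBS surface in two parameters), on which a structured quadrilateral or hexahedral mesh can be laid out.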
[ "19573711", "2210785", "10909927", "15822802", "19414289", "12564886", "20053531", "12774895", "21227652", "12872946", "15575409", "19060360", "12200927", "18583730", "24503518", "12194670", "11607632", "8006269", "24771202", "15191145", "21327913", "20300489", "20964209", "16243736", "22917461", "17388180", "26117471", "27187726", "23428836" ]
[ { "pmid": "19573711", "title": "Coronary angiography: the need for improvement and the barriers to adoption of new technology.", "abstract": "Traditional coronary angiography presents a variety of limitations related to image acquisition, content, interpretation, and patient safety. These limitations were first apparent with coronary angiography used as a diagnostic tool and have been further magnified in today's world of percutaneous coronary intervention (PCI), with the frequent use of implantable coronary stents. Improvements are needed to overcome the limitations in using current two-dimensional radiographic imaging for optimizing patient selection, quantifying vessel features, guiding PCI, and assessing PCI results. Barriers to such improvements include the paucity of clinical outcomes studies related to new imaging technology, the resistance to changing long-standing practices, the need for physician and staff member training, and the costs associated with acquiring and effectively using these advances in coronary angiography." }, { "pmid": "2210785", "title": "Three-dimensional quantitative coronary angiography.", "abstract": "A method for reconstructing the three-dimensional coronary arterial tree structure from biplane two-dimensional angiographic images is presented. This method exploits the geometrical mathematics of X-ray imaging and the tracking of leading edges of injected contrast material into each vessel for identification of corresponding points on two images taken from orthogonal views. Accurate spatial position and dimension of each vessel in three-dimensional space can be obtained by this reconstruction procedure. The reconstructed arterial configuration is displayed as a shaded surface model, which can be viewed from various angles. Such three-dimensional vascular information provides accurate and reproducible measurements of vascular morphology and function. Flow measurements are obtained by tracking the leading edge of contrast material down the three-dimensional arterial tree. A quantitative analysis of coronary stenosis based on transverse area narrowing and regional blood flow, including the effect of vasoactive drugs, is described. Reconstruction experiments on actual angiographic images of the human coronary artery yield encouraging results toward a realization of computer-assisted three-dimensional quantitative angiography." }, { "pmid": "10909927", "title": "3-D reconstruction of coronary arterial tree to optimize angiographic visualization.", "abstract": "Due to vessel overlap and foreshortening, multiple projections are necessary to adequately evaluate the coronary tree with arteriography. Catheter-based interventions can only be optimally performed when these visualization problems are successfully solved. The traditional method provides multiple selected views in which overlap and foreshortening are subjectively minimized based on two dimensional (2-D) projections. A pair of images acquired from routine angiographic study at arbitrary orientation using a single-plane imaging system were chosen for three-dimensional (3-D) reconstruction. After the arterial segment of interest (e.g., a single coronary stenosis or bifurcation lesion) was selected, a set of gantry angulations minimizing segment foreshortening was calculated. Multiple computer-generated projection images with minimized segment foreshortening were then used to choose views with minimal overlapped vessels relative to the segment of interest. 
The optimized views could then be utilized to guide subsequent angiographic acquisition and interpretation. Over 800 cases of coronary arterial trees have been reconstructed, in which more than 40 cases were performed in room during cardiac catheterization. The accuracy of 3-D length measurement was confirmed to be within an average root-mean-square (rms) 3.5% error using eight different pairs of angiograms of an intracoronary guidewire of 105-mm length with eight radiopaque markers of 15-mm interdistance. The accuracy of similarity between the additional computer-generated projections versus the actual acquired views was demonstrated with the average rms errors of 3.09 mm and 3.13 mm in 20 LCA and 20 RCA cases, respectively. The projections of the reconstructed patient-specific 3-D coronary tree model can be utilized for planning optimal clinical views: minimal overlap and foreshortening. The assessment of lesion length and diameter narrowing can be optimized in both interventional cases and studies of disease progression and regression." }, { "pmid": "15822802", "title": "Prospective motion correction of X-ray images for coronary interventions.", "abstract": "A method for prospective motion correction of X-ray imaging of the heart is presented. A 3D + t coronary model is reconstructed from a biplane coronary angiogram obtained during free breathing. The deformation field is parameterized by cardiac and respiratory phase, which enables the estimation of the state of the arteries at any phase of the cardiac-respiratory cycle. The motion of the three-dimensional (3-D) coronary model is projected onto the image planes and used to compute a dewarping function for motion correcting the images. The use of a 3-D coronary model facilitates motion correction of images acquired with the X-ray system at arbitrary orientations. The performance of the algorithm was measured by tracking the motion of selected left coronary landmarks using a template matching cross-correlation. In three patients, we motion corrected the same images used to construct their 3D + t coronary model. In this best case scenario, the algorithm reduced the motion of the landmarks by 84%-85%, from mean RMS displacements of 12.8-14.6 pixels to 2.1-2.2 pixels. Prospective motion correction was tested in five patients by building the coronary model from one dataset, and correcting a second dataset. The patient's cardiac and respiratory phase are monitored and used to calculate the appropriate correction parameters. The results showed a 48%-63% reduction in the motion of the landmarks, from a mean RMS displacement of 11.5-13.6 pixels to 4.4-7.1 pixels." }, { "pmid": "19414289", "title": "Novel approach for 3-d reconstruction of coronary arteries from two uncalibrated angiographic images.", "abstract": "Three-dimensional reconstruction of vessels from digital X-ray angiographic images is a powerful technique that compensates for limitations in angiography. It can provide physicians with the ability to accurately inspect the complex arterial network and to quantitatively assess disease induced vascular alterations in three dimensions. In this paper, both the projection principle of single view angiography and mathematical modeling of two view angiographies are studied in detail. The movement of the table, which commonly occurs during clinical practice, complicates the reconstruction process. 
On the basis of the pinhole camera model and existing optimization methods, an algorithm is developed for 3-D reconstruction of coronary arteries from two uncalibrated monoplane angiographic images. A simple and effective perspective projection model is proposed for the 3-D reconstruction of coronary arteries. A nonlinear optimization method is employed for refinement of the 3-D structure of the vessel skeletons, which takes the influence of table movement into consideration. An accurate model is suggested for the calculation of contour points of the vascular surface, which fully utilizes the information in the two projections. In our experiments with phantom and patient angiograms, the vessel centerlines are reconstructed in 3-D space with a mean positional accuracy of 0.665 mm and with a mean back projection error of 0.259 mm. This shows that the algorithm put forward in this paper is very effective and robust." }, { "pmid": "12564886", "title": "Predictive (un)distortion model and 3-D reconstruction by biplane snakes.", "abstract": "This paper is concerned with the three-dimensional (3-D) reconstruction of coronary vessel centerlines and with how distortion of X-ray angiographic images affects it. Angiographies suffer from pincushion and other geometrical distortions, caused by the peripheral concavity of the image intensifier (II) and the nonlinearity of electronic acquisition devices. In routine clinical practice, where a field-of-view (FOV) of 17-23 cm is commonly used for the acquisition of coronary vessels, this distortion introduces a positional error of up to 7 pixels for an image matrix size of 512 x 512 and an FOV of 17 cm. This error increases with the size of the FOV. Geometrical distortions have a significant effect on the validity of the 3-D reconstruction of vessels from these images. We show how this effect can be reduced by integrating a predictive model of (un)distortion into the biplane snakes formulation for 3-D reconstruction. First, we prove that the distortion can be accurately modeled using a polynomial for each view. Also, we show that the estimated polynomial is independent of focal length, but not of changes in anatomical angles, as the II is influenced by the earth's magnetic field. Thus, we decompose the polynomial into two components: the steady and the orientation-dependent component. We determine the optimal polynomial degree for each component, which is empirically determined to be five for the steady component and three for the orientation-dependent component. This fact simplifies the prediction of the orientation-dependent polynomial, since the number of polynomial coefficients to be predicted is lower. The integration of this model into the biplane snakes formulation enables us to avoid image unwarping, which deteriorates image quality and therefore complicates vessel centerline feature extraction. Moreover, we improve the biplane snake behavior when dealing with wavy vessels, by means of using generalized gradient vector flow. Our experiments show that the proposed methods in this paper decrease up to 88% the reconstruction error obtained when geometrical distortion effects are ignored. Tests on imaged phantoms and real cardiac images are presented as well." }, { "pmid": "20053531", "title": "Sequential reconstruction of vessel skeletons from X-ray coronary angiographic sequences.", "abstract": "X-ray coronary angiography (CAG) is one of widely used imaging modalities for diagnosis and interventional treatment of cardiovascular diseases. 
Dynamic CAG sequences acquired from several viewpoints record coronary arterial morphological information as well as dynamic performances. The aim of this work is to propose a semi-automatic method for sequentially reconstructing coronary arterial skeletons from a pair of CAG sequences covering one or several cardiac cycles acquired from different views based on snake model. The snake curve deforms directly in 3D through minimizing a predefined energy function and ultimately stops at the global optimum with the minimal energy, which is the desired 3D vessel skeleton. The energy function combines intrinsic properties of the curve and acquired image data with a priori knowledge of coronary arterial morphology and dynamics. Consequently, 2D extraction, 3D sequential reconstruction and tracking of coronary arterial skeletons are synchronously implemented. The main advantage of this method is that matching between a pair of angiographic projections in point-by-point manner is avoided and the reproducibility and accuracy are improved. Results are given for clinical image data of patients in order to validate the proposed method." }, { "pmid": "12774895", "title": "Three-dimensional motion tracking of coronary arteries in biplane cineangiograms.", "abstract": "A three-dimensional (3-D) method for tracking the coronary arteries through a temporal sequence of biplane X-ray angiography images is presented. A 3-D centerline model of the coronary vasculature is reconstructed from a biplane image pair at one time frame, and its motion is tracked using a coarse-to-fine hierarchy of motion models. Three-dimensional constraints on the length of the arteries and on the spatial regularity of the motion field are used to overcome limitations of classical two-dimensional vessel tracking methods, such as tracking vessels through projective occlusions. This algorithm was clinically validated in five patients by tracking the motion of the left coronary tree over one cardiac cycle. The root mean square reprojection errors were found to be submillimeter in 93% (54/58) of the image pairs. The performance of the tracking algorithm was quantified in three dimensions using a deforming vascular phantom. RMS 3-D distance errors were computed between centerline models tracked in the X-ray images and gold-standard centerline models of the phantom generated from a gated 3-D magnetic resonance image acquisition. The mean error was 0.69 (+/- 0.06) mm over eight temporal phases and four different biplane orientations." }, { "pmid": "21227652", "title": "Motion estimation of 3D coronary vessel skeletons from X-ray angiographic sequences.", "abstract": "A method for quantitatively estimating global displacement fields of coronary arterial vessel skeletons during cardiac cycles from X-ray coronary angiographic (CAG) image sequences is proposed. First, dynamic sequence of arterial lumen skeletons is semi-automatically reconstructed from a pair of angiographic image sequences acquired from two nearly orthogonal view angles covering one or several cardiac cycles. Then, displacement fields of 3D vessel skeletons at different cardiac phases are quantitatively estimated through searching optimal correspondences between skeletons of a same vessel branch at different time-points of image sequences with dynamic programming algorithm. The main advantage of this method is that possible errors introduced by calibration parameters of the imaging system are avoided and application of dynamic programming ensures low computation cost. 
Also, any a priori knowledge and model about cardiac and arterial dynamics is not needed and the same matching error function and similarity measurement can be used to estimate global displacement fields of vessel skeletons performing different kinds of motion. Validation experiments with computer-simulated data and clinically acquired image data are designed and results are given to demonstrate the accuracy and validity of the proposed method." }, { "pmid": "12872946", "title": "Kinematic and deformation analysis of 4-D coronary arterial trees reconstructed from cine angiograms.", "abstract": "In the cardiovascular arena, percutaneous catheter-based interventional (i.e., therapeutic) procedures include a variety of coronary and other vascular system interventions. These procedures use two-dimensional (2-D) X-ray-based imaging as the sole or the major imaging modality for procedure guidance and quantification of key parameters. Coronary vascular curvilinearity is one key parameter that requires a four-dimensional (4-D) format, i.e., three-dimensional (3-D) anatomical representation that changes during the cardiac cycle. A new method has been developed for reconstruction and analysis of these patient-specific 4-D datasets utilizing routine cine angiograms. The proposed method consists of three major processes: 1) reconstruction of moving coronary arterial tree throughout the cardiac cycle; 2) establishment of temporal correspondence with smoothness constraints; and 3) kinematic and deformation analysis of the reconstructed 3-D moving coronary arterial trees throughout the cardiac cycle." }, { "pmid": "15575409", "title": "A quantitative analysis of 3-D coronary modeling from two or more projection images.", "abstract": "A method is introduced to examine the geometrical accuracy of the three-dimensional (3-D) representation of coronary arteries from multiple (two and more) calibrated two-dimensional (2-D) angiographic projections. When involving more then two projections, (multiprojection modeling) a novel procedure is presented that consists of fully automated centerline and width determination in all available projections based on the information provided by the semi-automated centerline detection in two initial calibrated projections. The accuracy of the 3-D coronary modeling approach is determined by a quantitative examination of the 3-D centerline point position and the 3-D cross sectional area of the reconstructed objects. The measurements are based on the analysis of calibrated phantom and calibrated coronary 2-D projection data. From this analysis a confidence region (alpha degrees approximately equal to [35 degrees - 145 degrees]) for the angular distance of two initial projection images is determined for which the modeling procedure is sufficiently accurate for the applied system. Within this angular border range the centerline position error is less then 0.8 mm, in terms of the Euclidean distance to a predefined ground truth. When involving more projections using our new procedure, experiments show that when the initial pair of projection images has an angular distance in the range alpha degrees approximately equal to [35 degrees - 145 degrees], the centerlines in all other projections (gamma = 0 degrees - 180 degrees) were indicated very precisely without any additional centering procedure. When involving additional projection images in the modeling procedure a more realistic shape of the structure can be provided. 
In case of the concave segment, however, the involvement of multiple projections does not necessarily provide a more realistic shape of the reconstructed structure." }, { "pmid": "19060360", "title": "Automatic generation of time resolved motion vector fields of coronary arteries and 4D surface extraction using rotational x-ray angiography.", "abstract": "Rotational coronary angiography provides a multitude of x-ray projections of the contrast agent enhanced coronary arteries along a given trajectory with parallel ECG recording. These data can be used to derive motion information of the coronary arteries including vessel displacement and pulsation. In this paper, a fully automated algorithm to generate 4D motion vector fields for coronary arteries from multi-phase 3D centerline data is presented. The algorithm computes similarity measures of centerline segments at different cardiac phases and defines corresponding centerline segments as those with highest similarity. In order to achieve an excellent matching accuracy, an increasing number of bifurcations is included as reference points in an iterative manner. Based on the motion data, time-dependent vessel surface extraction is performed on the projections without the need of prior reconstruction. The algorithm accuracy is evaluated quantitatively on phantom data. The magnitude of longitudinal errors (parallel to the centerline) reaches approx. 0.50 mm and is thus more than twice as large as the transversal 3D extraction errors of the underlying multi-phase 3D centerline data. It is shown that the algorithm can extract asymmetric stenoses accurately. The feasibility on clinical data is demonstrated on five different cases. The ability of the algorithm to extract time-dependent surface data, e.g. for quantification of pulsating stenosis is demonstrated." }, { "pmid": "12200927", "title": "An accurate iterative reconstruction algorithm for sparse objects: application to 3D blood vessel reconstruction from a limited number of projections.", "abstract": "Based on the duality of nonlinear programming, this paper proposes an accurate row-action type iterative algorithm which is appropriate to reconstruct sparse objects from a limited number of projections. The cost function we use is the Lp norm with p approximately 1.1. This norm allows us to pick up a sparse solution from a set of feasible solutions to the measurement equation. Furthermore, since it is both strictly convex and differentiable, we can use the duality of nonlinear programming to construct a row-action type iterative algorithm to find a solution. We also impose the bound constraint on pixel values to pick up a better solution. We demonstrate that this method works well in three-dimensional blood vessel reconstruction from a limited number of cone beam projections." }, { "pmid": "18583730", "title": "Projection-based motion compensation for gated coronary artery reconstruction from rotational x-ray angiograms.", "abstract": "Three-dimensional reconstruction of coronary arteries can be performed during x-ray-guided interventions by gated reconstruction from a rotational coronary angiography sequence. Due to imperfect gating and cardiac or breathing motion, the heart's motion state might not be the same in all projections used for the reconstruction of one cardiac phase. The motion state inconsistency causes motion artefacts and degrades the reconstruction quality. These effects can be reduced by a projection-based 2D motion compensation method. 
Using maximum-intensity forward projections of an initial uncompensated reconstruction as reference, the projection data are transformed elastically to improve the consistency with respect to the heart's motion state. A fast iterative closest-point algorithm working on vessel centrelines is employed for estimating the optimum transformation. Motion compensation is carried out prior to and independently from a final reconstruction. The motion compensation improves the accuracy of reconstructed vessel radii and the image contrast in a software phantom study. Reconstructions of human clinical cases are presented, in which the motion compensation substantially reduces motion blur and improves contrast and visibility of the coronary arteries." }, { "pmid": "24503518", "title": "External force back-projective composition and globally deformable optimization for 3-D coronary artery reconstruction.", "abstract": "The clinical value of the 3D reconstruction of a coronary artery is important for the diagnosis and intervention of cardiovascular diseases. This work proposes a method based on a deformable model for reconstructing coronary arteries from two monoplane angiographic images acquired from different angles. First, an external force back-projective composition model is developed to determine the external force, for which the force distributions in different views are back-projected to the 3D space and composited in the same coordinate system based on the perspective projection principle of x-ray imaging. The elasticity and bending forces are composited as an internal force to maintain the smoothness of the deformable curve. Second, the deformable curve evolves rapidly toward the true vascular centerlines in 3D space and angiographic images under the combination of internal and external forces. Third, densely matched correspondence among vessel centerlines is constructed using a curve alignment method. The bundle adjustment method is then utilized for the global optimization of the projection parameters and the 3D structures. The proposed method is validated on phantom data and routine angiographic images with consideration for space and re-projection image errors. Experimental results demonstrate the effectiveness and robustness of the proposed method for the reconstruction of coronary arteries from two monoplane angiographic images. The proposed method can achieve a mean space error of 0.564 mm and a mean re-projection error of 0.349 mm." }, { "pmid": "12194670", "title": "A novel approach for the detection of pathlines in X-ray angiograms: the wavefront propagation algorithm.", "abstract": "This article presents a new pathline approach, based on the wavefront propagation principle, and developed in order to reduce the variability in the outcomes of the quantitative coronary artery analysis. This novel approach, called wavepath, reduces the influence of the user-defined start- and endpoints of the vessel segment and is therefore more robust and improves the reproducibility of the lesion quantification substantially. The validation study shows that the wavepath method is totally constant in the middle part of the pathline, even when using the method for constructing a bifurcation or sidebranch pathline. Furthermore, the number of corrections needed to guide the wavepath through the correct vessel is decreased from an average of 0.44 corrections per pathline to an average of 0.12 per pathline. 
Therefore, it can be concluded that the wavepath algorithm improves the overall analysis substantially." }, { "pmid": "11607632", "title": "A fast marching level set method for monotonically advancing fronts.", "abstract": "A fast marching level set method is presented for monotonically advancing fronts, which leads to an extremely fast scheme for solving the Eikonal equation. Level set methods are numerical techniques for computing the position of propagating fronts. They rely on an initial value partial differential equation for a propagating level set function and use techniques borrowed from hyperbolic conservation laws. Topological changes, corner and cusp development, and accurate determination of geometric properties such as curvature and normal direction are naturally obtained in this setting. This paper describes a particular case of such methods for interfaces whose speed depends only on local position. The technique works by coupling work on entropy conditions for interface motion, the theory of viscosity solutions for Hamilton-Jacobi equations, and fast adaptive narrow band level set methods. The technique is applicable to a variety of problems, including shape-from-shading problems, lithographic development calculations in microchip manufacturing, and arrival time problems in control theory." }, { "pmid": "8006269", "title": "A new approach for the quantification of complex lesion morphology: the gradient field transform; basic principles and validation results.", "abstract": "OBJECTIVES\nThis report describes the basic principles and the results from clinical evaluation studies of a new algorithm that has been designed specifically for the quantification of complex coronary lesions.\n\n\nBACKGROUND\nCurrently used edge detection algorithms in quantitative coronary arteriography, such as the minimum cost algorithm, are limited in the precise quantification of complex coronary lesions characterized by abruptly changing shapes of the obstruction.\n\n\nMETHODS\nThe new algorithm, the gradient field transform, is not limited in its search directions and incorporates the directional information of the arterial boundaries. To evaluate its accuracy and precision, 11 tubular phantoms (sizes 0.6 to 5.0 mm), were analyzed. Second, angiographic images of 12 copper phantoms with U-shaped obstructions were analyzed by both the gradient field transform and the minimum cost algorithm. Third, 25 coronary artery segments with irregularly shaped obstructions were selected from 19 routinely acquired angiograms.\n\n\nRESULTS\nThe plexiglass phantom study demonstrated an accuracy and precision of -0.004 and 0.114 mm, respectively. The U-shaped copper phantoms showed that the gradient field transform performed very well for short, severe obstructions, whereas the minimum cost algorithm severely overestimated the minimal lumen diameter. From the coronary angiograms, the intraobserver variability in the minimal lumen diameter was found to be 0.14 mm for the gradient field transform and 0.20 mm for the minimum cost algorithm.\n\n\nCONCLUSIONS\nThe new gradient field transform eliminates the limitations of the currently used edge detection algorithms in quantitative coronary arteriography and is therefore particularly suitable for the quantification of complex coronary artery lesions." 
}, { "pmid": "24771202", "title": "Computer methods for follow-up study of hemodynamic and disease progression in the stented coronary artery by fusing IVUS and X-ray angiography.", "abstract": "Despite a lot of progress in the fields of medical imaging and modeling, problem of estimating the risk of in-stent restenosis and monitoring the progress of the therapy following stenting still remains. The principal aim of this paper was to propose architecture and implementation details of state of the art of computer methods for a follow-up study of disease progression in coronary arteries stented with bare-metal stents. The 3D reconstruction of coronary arteries was performed by fusing X-ray angiography and intravascular ultrasound (IVUS) as the most dominant modalities in interventional cardiology. The finite element simulation of plaque progression was performed by coupling the flow equations with the reaction-diffusion equation applying realistic boundary conditions at the wall. The alignment of baseline and follow-up data was performed automatically by temporal alignment of IVUS electrocardiogram-gated frames. The assessment was performed using three six-month follow-ups of right coronary artery. Simulation results were compared with the ground truth data measured by clinicians. In all three data sets, simulation results indicated the right places as critical. With the obtained difference of 5.89 ± ~4.5% between the clinical measurements and the results of computer simulations, we showed that presented framework is suitable for tracking the progress of coronary disease, especially for comparing face-to-face results and data of the same artery from distinct time periods." }, { "pmid": "15191145", "title": "Robust and objective decomposition and mapping of bifurcating vessels.", "abstract": "Computational modeling of human arteries has been broadly employed to investigate the relationships between geometry, hemodynamics and vascular disease. Recent developments in modeling techniques have made it possible to perform such analyses on realistic geometries acquired noninvasively and, thus, have opened up the possibility to extend the investigation to populations of subjects. However, for this to be feasible, novel methods for the comparison of the data obtained from large numbers of realistic models in the presence of anatomic variability must be developed. In this paper, we present an automatic technique for the objective comparison of distributions of geometric and hemodynamic quantities over the surface of bifurcating vessels. The method is based on centerlines and consists of robustly decomposing the surface into its constituent branches and mapping each branch onto a template parametric plane. The application of the technique to realistic data demonstrates how similar results are obtained over similar geometries, allowing for proper model-to-model comparison. Thanks to the computational and differential geometry criteria adopted, the method does not depend on user-defined parameters or user interaction, it is flexible with respect to the bifurcation geometry and it is readily extendible to more complex configurations of interconnecting vessels." }, { "pmid": "21327913", "title": "Dedicated bifurcation analysis: basic principles.", "abstract": "Over the last several years significant interest has arisen in bifurcation stenting, in particular stimulated by the European Bifurcation Club. 
Traditional straight vessel analysis by QCA does not satisfy the requirements for such complex morphologies anymore. To come up with practical solutions, we have developed two models, a Y-shape and a T-shape model, suitable for bifurcation QCA analysis depending on the specific anatomy of the coronary bifurcation. The principles of these models are described in this paper, as well as the results of validation studies carried out on clinical materials. It can be concluded that the accuracy, precision and applicability of these new bifurcation analyses are conform the general guidelines that have been set many years ago for conventional QCA-analyses." }, { "pmid": "20300489", "title": "Patient-Specific Vascular NURBS Modeling for Isogeometric Analysis of Blood Flow.", "abstract": "We describe an approach to construct hexahedral solid NURBS (Non-Uniform Rational B-Splines) meshes for patient-specific vascular geometric models from imaging data for use in isogeometric analysis. First, image processing techniques, such as contrast enhancement, filtering, classification, and segmentation, are used to improve the quality of the input imaging data. Then, lumenal surfaces are extracted by isocontouring the preprocessed data, followed by the extraction of vascular skeleton via Voronoi and Delaunay diagrams. Next, the skeleton-based sweeping method is used to construct hexahedral control meshes. Templates are designed for various branching configurations to decompose the geometry into mapped meshable patches. Each patch is then meshed using one-to-one sweeping techniques, and boundary vertices are projected to the lumenal surface. Finally, hexahedral solid NURBS are constructed and used in isogeometric analysis of blood flow. Piecewise linear hexahedral meshes can also be obtained using this approach. Examples of patient-specific arterial models are presented." }, { "pmid": "20964209", "title": "4D XCAT phantom for multimodality imaging research.", "abstract": "PURPOSE\nThe authors develop the 4D extended cardiac-torso (XCAT) phantom for multimodality imaging research.\n\n\nMETHODS\nHighly detailed whole-body anatomies for the adult male and female were defined in the XCAT using nonuniform rational B-spline (NURBS) and subdivision surfaces based on segmentation of the Visible Male and Female anatomical datasets from the National Library of Medicine as well as patient datasets. Using the flexibility of these surfaces, the Visible Human anatomies were transformed to match body measurements and organ volumes for a 50th percentile (height and weight) male and female. The desired body measurements for the models were obtained using the PEOPLESIZE program that contains anthropometric dimensions categorized from 1st to the 99th percentile for US adults. The desired organ volumes were determined from ICRP Publication 89 [ICRP, \"Basic anatomical and physiological data for use in radiological protection: reference values,\" ICRP Publication 89 (International Commission on Radiological Protection, New York, NY, 2002)]. The male and female anatomies serve as standard templates upon which anatomical variations may be modeled in the XCAT through user-defined parameters. Parametrized models for the cardiac and respiratory motions were also incorporated into the XCAT based on high-resolution cardiac- and respiratory-gated multislice CT data. 
To demonstrate the usefulness of the phantom, the authors show example simulation studies in PET, SPECT, and CT using publicly available simulation packages.\n\n\nRESULTS\nAs demonstrated in the pilot studies, the 4D XCAT (which includes thousands of anatomical structures) can produce realistic imaging data when combined with accurate models of the imaging process. With the flexibility of the NURBS surface primitives, any number of different anatomies, cardiac or respiratory motions or patterns, and spatial resolutions can be simulated to perform imaging research.\n\n\nCONCLUSIONS\nWith the ability to produce realistic, predictive 3D and 4D imaging data from populations of normal and abnormal patients under various imaging parameters, the authors conclude that the XCAT provides an important tool in imaging research to evaluate and improve imaging devices and techniques. In the field of x-ray CT, the phantom may also provide the necessary foundation with which to optimize clinical CT applications in terms of image quality versus radiation dose, an area of research that is becoming more significant with the growing use of CT." }, { "pmid": "16243736", "title": "Three-dimensional coronary reconstruction from routine single-plane coronary angiograms: in vivo quantitative validation.", "abstract": "BACKGROUND\nCurrent X-ray technology displays the complex 3-dimensional (3-D) geometry of the coronary arterial tree as 2-dimensional (2-D) images. To overcome this limitation, an algorithm was developed for the reconstruction of the 3-D pathway of the coronary arterial tree using routine single-plane 2-D angiographic imaging. This method provides information in real-time and is suitable for routine use in the cardiovascular catheterization laboratory.\n\n\nOBJECTIVES\nThe purpose of this study was to evaluate the precision of this algorithm and to compare it with 2-D quantitative coronary angiography (QCA) system.\n\n\nMETHODS\nThirty-eight angiographic images were acquired from 11 randomly selected patients with coronary artery disease undergoing diagnostic cardiac catheterization. The 2-D images were analyzed using QCA software. For the 3-D reconstruction, an algorithm integrating information from at least two single-plane angiographic images taken from different angles was formulated.\n\n\nRESULTS\n3-D acquisition was feasible in all patients and in all selected angiographic frames. Comparison between pairs of values yielded greater precision of the 3-D than the 2-D measurements of the minimal lesion diameter (P<0.005), minimal lesion area (P<0.05) and lesion length (P<0.01).\n\n\nCONCLUSIONS\nThe study validates the 3-D reconstruction algorithm, which may provide new insights into vessel morphology in 3-D space. This method is a promising clinical tool, making it possible for cardiologists to appreciate the complex curvilinear structure of the coronary arterial tree and to quantify atherosclerotic lesions more precisely." }, { "pmid": "17388180", "title": "Subvoxel precise skeletons of volumetric data based on fast marching methods.", "abstract": "The accurate calculation of the skeleton of an object is a problem not satisfactorily solved by existing approaches. Most algorithms require a significant amount of user interaction and use a voxel grid to compute discrete and often coarse approximations of this representation of the data. We present a novel, automatic algorithm for computing subvoxel precise skeletons of volumetric data based on subvoxel precise distance fields. 
Most voxel based centerline and skeleton algorithms start with a binary mask and end with a list of voxels that define the centerline or skeleton. Even though subsequent smoothing may be applied, the results are inherently discrete. Our skeletonization algorithm uses as input a subvoxel precise distance field and employs a number of fast marching method propagations to extract the skeleton at subvoxel precision. We present the skeletons of various three-dimensional (3D) data sets and digital phantom models as validations of our algorithm." }, { "pmid": "26117471", "title": "\"Virtual\" (Computed) Fractional Flow Reserve: Current Challenges and Limitations.", "abstract": "Fractional flow reserve (FFR) is the \"gold standard\" for assessing the physiological significance of coronary artery disease during invasive coronary angiography. FFR-guided percutaneous coronary intervention improves patient outcomes and reduces stent insertion and cost; yet, due to several practical and operator related factors, it is used in <10% of percutaneous coronary intervention procedures. Virtual fractional flow reserve (vFFR) is computed using coronary imaging and computational fluid dynamics modeling. vFFR has emerged as an attractive alternative to invasive FFR by delivering physiological assessment without the factors that limit the invasive technique. vFFR may offer further diagnostic and planning benefits, including virtual pullback and virtual stenting facilities. However, there are key challenges that need to be overcome before vFFR can be translated into routine clinical practice. These span a spectrum of scientific, logistic, commercial, and political areas. The method used to generate 3-dimensional geometric arterial models (segmentation) and selection of appropriate, patient-specific boundary conditions represent the primary scientific limitations. Many conflicting priorities and design features must be carefully considered for vFFR models to be sufficiently accurate, fast, and intuitive for physicians to use. Consistency is needed in how accuracy is defined and reported. Furthermore, appropriate regulatory and industry standards need to be in place, and cohesive approaches to intellectual property management, reimbursement, and clinician training are required. Assuming successful development continues in these key areas, vFFR is likely to become a desirable tool in the functional assessment of coronary artery disease." }, { "pmid": "27187726", "title": "Simplified Models of Non-Invasive Fractional Flow Reserve Based on CT Images.", "abstract": "Invasive fractional flow reserve (FFR) is the gold standard to assess the functional coronary stenosis. The non-invasive assessment of diameter stenosis (DS) using coronary computed tomography angiography (CTA) has high false positive rate in contrast to FFR. Combining CTA with computational fluid dynamics (CFD), recent studies have shown promising predictions of FFRCT for superior assessment of lesion severity over CTA alone. The CFD models tend to be computationally expensive, however, and require several hours for completing analysis. Here, we introduce simplified models to predict noninvasive FFR at substantially less computational time. In this retrospective pilot study, 21 patients received coronary CTA. Subsequently a total of 32 vessels underwent invasive FFR measurement. For each vessel, FFR based on steady-state and analytical models (FFRSS and FFRAM, respectively) were calculated non-invasively based on CTA and compared with FFR. 
The accuracy, sensitivity, specificity, positive predictive value and negative predictive value were 90.6% (87.5%), 80.0% (80.0%), 95.5% (90.9%), 88.9% (80.0%) and 91.3% (90.9%) respectively for FFRSS (and FFRAM) on a per-vessel basis, and were 75.0%, 50.0%, 86.4%, 62.5% and 79.2% respectively for DS. The area under the receiver operating characteristic curve (AUC) was 0.963, 0.954 and 0.741 for FFRSS, FFRAM and DS respectively, on a per-patient level. The results suggest that the CTA-derived FFRSS performed well in contrast to invasive FFR and they had better diagnostic performance than DS from CTA in the identification of functionally significant lesions. In contrast to FFRCT, FFRSS requires much less computational time." }, { "pmid": "23428836", "title": "Patient-specific simulations of stenting procedures in coronary bifurcations: two clinical cases.", "abstract": "Computational simulations of stenting procedures in idealized geometries can only provide general guidelines and their use in the patient-specific planning of percutaneous treatments is inadequate. Conversely, image-based patient-specific tools that are able to realistically simulate different interventional options might facilitate clinical decision-making and provide useful insights on the treatment for each individual patient. The aim of this work is the implementation of a patient-specific model that uses image-based reconstructions of coronary bifurcations and is able to replicate real stenting procedures following clinical indications. Two clinical cases are investigated focusing the attention on the open problems of coronary bifurcations and their main treatment, the provisional side branch approach. Image-based reconstructions are created combining the information from conventional coronary angiography and computed tomography angiography while structural finite element models are implemented to replicate the real procedure performed in the patients. First, numerical results show the biomechanical influence of stents deployment in the coronary bifurcations during and after the procedures. In particular, the straightening of the arterial wall and the influence of two overlapping stents on stress fields are investigated here. Results show that a sensible decrease of the vessel tortuosity occurs after stent implantation and that overlapping devices result in an increased stress state of both the artery and the stents. Lastly, the comparison between numerical and image-based post-stenting configurations proved the reliability of such models while replicating stent deployment in coronary arteries." } ]
Scientific Reports
29391406
PMC5794926
10.1038/s41598-018-20037-5
Advanced Steel Microstructural Classification by Deep Learning Methods
The inner structure of a material is called its microstructure. It stores the genesis of a material and determines all its physical and chemical properties. While microstructural characterization is widespread and well known, microstructural classification is mostly done manually by human experts, which gives rise to uncertainties due to subjectivity. Since a microstructure can be a combination of different phases or constituents with complex substructures, its automatic classification is very challenging, and only a few prior studies exist. Prior works focused on features designed and engineered by experts and classified microstructures separately from the feature extraction step. Recently, Deep Learning methods have shown strong performance in vision applications by learning the features from data together with the classification step. In this work, we propose a Deep Learning method for microstructural classification, demonstrated on selected microstructural constituents of low-carbon steel. This novel method employs pixel-wise segmentation via a Fully Convolutional Neural Network (FCNN) accompanied by a max-voting scheme. Our system achieves 93.94% classification accuracy, drastically outperforming the state-of-the-art method, which reaches 48.89% accuracy. Beyond the strong performance of our method, this line of research offers a more robust and, above all, objective approach to the difficult task of steel quality appraisal.
Related Works

Based on the instrument used for imaging, we can categorize the related works into Light Optical Microscopy (LOM) and Scanning Electron Microscopy (SEM) imaging. High-resolution SEM imaging is very expensive compared with LOM imaging in terms of time and operating costs. However, low-resolution LOM imaging makes distinguishing microstructures based on their substructures even more difficult. Nowadays, the task of microstructural classification is performed by an expert who observes a sample image and assigns one of the microstructure classes to it. As experts differ in their level of expertise, their assessments of the same sample can also differ. Nevertheless, thanks to highly trained human experts, this task has so far been accomplished with low error. Regarding automatic microstructural classification, microstructures are typically defined by means of standard procedures in metallography. Vander Voort11 used Light Optical Microscopy (LOM) without any learning of microstructural features, which is still the state of the art in materials science for the classification of microstructures in most institutes as well as in industry. His method only defines procedures with which an expert can decide on the class of a microstructure. Moreover, additional chemical etching12 made it possible to distinguish second phases using different contrasts; however, etching is constrained to empirical methods and cannot be used to distinguish the various phases in steels with more than two phases. Nowadays, different techniques and approaches have made morphological or crystallographic properties accessible4,13–18. Approaches for the identification of phases in multiphase steel rely on these methods and aim at developing advanced metallographic procedures for morphological analysis using common characterization techniques, accompanied by pixel- and context-based image analysis steps.

Previously, Velichko et al.19 proposed a data mining approach for cast iron that extracts morphological features and classifies them with Support Vector Machines (SVMs) - a well-established method in the field of machine learning20. More recently, Pauly et al.21 followed the same approach, applying it to a contrasted and etched steel dataset acquired by SEM and LOM imaging, which is also used in this work. However, it reached only 48.89% accuracy in microstructural classification on the given dataset with four classes, owing to the high complexity of the substructures and insufficiently discriminative features.

Deep Learning methods have been applied to object classification and image semantic segmentation in many applications. AlexNet, a CNN with 7 layers proposed by Alex Krizhevsky et al.22, was the winner of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC)23 in 2012, one of the best-known object classification challenges in the computer vision community. AlexNet improved the ILSVRC2012 accuracy by 10 percentage points, a huge increase in this challenge, and this result is a main reason that Deep Learning methods drew so much attention. VGGNet, a CNN architecture proposed by Simonyan et al.8, has even more layers than AlexNet and achieves better accuracy.
The Fully Convolutional Neural Network (FCNN) architecture proposed by Long et al.24 is one of the first and best-known works to adapt object classification CNNs to semantic segmentation tasks. FCNNs and their extensions are currently the state of the art in semantic segmentation on a range of benchmarks, including the Pascal VOC image segmentation challenge25 and Cityscapes26.

Our method transfers the success of Deep Learning for segmentation tasks to the challenging problem of microstructural classification in the context of steel quality appraisal. It is the first demonstration of a Deep Learning technique in this context, and in particular it shows substantial gains over the previous state of the art.
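To make the segmentation-then-voting idea concrete, the following sketch shows how a single microstructure label per micrograph (or per segmented object) could be obtained from the pixel-wise output of a segmentation network by a max-voting step. This is a minimal illustration only, not the authors' implementation: the per-pixel score map is a random placeholder standing in for real FCNN output, and all names and shapes are assumptions.

```python
import numpy as np

def max_vote_label(score_map, foreground_mask=None):
    """Assign one class to a micrograph (or object) by majority vote.

    score_map       : (H, W, C) array of per-pixel class scores,
                      e.g. the softmax output of a segmentation network.
    foreground_mask : optional (H, W) boolean array restricting the
                      vote to the pixels of the object of interest.
    """
    per_pixel_labels = score_map.argmax(axis=-1)        # (H, W) hard labels
    if foreground_mask is not None:
        per_pixel_labels = per_pixel_labels[foreground_mask]
    votes = np.bincount(per_pixel_labels.ravel(),
                        minlength=score_map.shape[-1])  # votes per class
    return int(votes.argmax())                          # winning class index

# Toy usage: four microstructure classes, random scores standing in
# for real network output on a 128 x 128 crop.
rng = np.random.default_rng(0)
dummy_scores = rng.random((128, 128, 4))
print("voted class:", max_vote_label(dummy_scores))
```

In a real pipeline the vote would typically be restricted, via the mask, to the pixels of each segmented constituent rather than taken over the whole image.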
[]
[]
Frontiers in Neuroscience
29467600
PMC5808221
10.3389/fnins.2017.00754
White Matter Tract Segmentation as Multiple Linear Assignment Problems
Diffusion magnetic resonance imaging (dMRI) makes it possible to reconstruct the main pathways of axons within the white matter of the brain as a set of polylines, called streamlines. The set of streamlines of the whole brain is called the tractogram. Organizing tractograms into anatomically meaningful structures, called tracts, is known as the tract segmentation problem, with important applications to neurosurgical planning and tractometry. Automatic tract segmentation techniques can be unsupervised or supervised. A common criticism of unsupervised methods, like clustering, is that there is no guarantee of obtaining anatomically meaningful tracts. In this work, we focus on supervised tract segmentation, which is driven by prior knowledge from anatomical atlases or from examples, i.e., segmented tracts from different subjects. We present a supervised tract segmentation method that segments a given tract of interest in the tractogram of a new subject using multiple examples as prior information. Our proposed tract segmentation method is based on the idea of streamline correspondence, i.e., on finding corresponding streamlines across different tractograms. In the literature, streamline correspondence has been addressed with the nearest neighbor (NN) strategy. In contrast, here we formulate the problem of streamline correspondence as a linear assignment problem (LAP), which is a cornerstone of combinatorial optimization. With respect to the NN, the LAP introduces a one-to-one correspondence constraint between streamlines, which forces the correspondences to follow the local anatomical differences between the example and the target tract, a property neglected by the NN. In the proposed solution, we combine the Jonker-Volgenant algorithm (LAPJV) for solving the LAP with an efficient way of computing the nearest neighbors of a streamline, which massively reduces the total amount of computation needed to segment a tract. Moreover, we propose a ranking strategy to merge correspondences coming from different examples. We validate the proposed method on tractograms generated from the Human Connectome Project (HCP) dataset and compare the segmentations with the NN method and the ROI-based method. The results show that LAP-based segmentation is vastly more accurate than ROI-based segmentation and substantially more accurate than the NN strategy. We provide a Free/OpenSource implementation of the proposed method.
2. Related works

2.1. Supervised tract segmentation

Here we review the literature on supervised tractogram segmentation and on the linear assignment problem. In order to organize the body of work in this field, we articulate the discussion on supervised tract segmentation along these five topics: alignment, embedding space, similarity/distance, correspondence techniques, and refinement step.

2.1.1. Alignment

In supervised tract segmentation, tractograms are initially aligned to an atlas. Both voxel-based and streamline-based atlases have been used in the literature, e.g., white matter ROI-based anatomical atlases (Maddah et al., 2005), high dimensional atlases (O'Donnell and Westin, 2007; Vercruysse et al., 2014), example-based single atlases (Guevara et al., 2012; Labra et al., 2017), and example-based multi-atlases (Yoo et al., 2015). To the best of our knowledge, the specific step of alignment has been conducted with standard methods: in most cases with voxel-based linear registration (O'Donnell and Westin, 2007; Guevara et al., 2012; Yoo et al., 2015) and in the others with nonlinear voxel-based registration (Vercruysse et al., 2014).

2.1.2. Embedding space

Streamlines are complex geometrical objects, each with a different number of points. They cannot be given directly as input to many efficient data analysis algorithms, which instead require vectors with the same number of dimensions. Tractograms are large collections of streamlines, from hundreds of thousands to millions of streamlines, and their analysis is often limited by the required computational cost. A common preprocessing step before using algorithms like clustering or nearest neighbor is to transform streamlines into vectors, a process called Euclidean embedding. Different authors have opted for different embedding approaches, such as spectral embedding (O'Donnell and Westin, 2007), re-sampling of all streamlines to the same number of points (Guevara et al., 2012; Yoo et al., 2015; Labra et al., 2017), the use of B-splines with re-sampling (Maddah et al., 2005), and the dissimilarity representation (Olivetti and Avesani, 2011). Re-sampling all streamlines to a fixed number of points is the most common approach to obtain the embedding. In principle, up-sampling/down-sampling to a particular number of points may cause a loss of information. On the other hand, spectral embedding has a high computational cost. The dissimilarity representation has shown remarkable results in terms of machine learning applications (Olivetti et al., 2013) and exploration of tractograms (Porro-Muñoz et al., 2015), at a moderate computational cost.

2.1.3. Streamline distance

In order to find corresponding streamlines from one tractogram to another, the definition of the streamline distance plays a crucial role. Most commonly, the corresponding streamline in the new tractogram is defined as the closest one according to the given streamline distance function. Similarly, when clustering for tract segmentation, the streamline distance function is a fundamental building block. Different streamline distance functions have been used in the supervised tract segmentation literature, e.g., the minimum closest point (MCP) distance (O'Donnell and Westin, 2007), the symmetric minimum average distance (MAM) (Olivetti and Avesani, 2011), the minimum average direct-flip distance (MDF) (Yoo et al., 2015; Garyfallidis et al., in press), the Hausdorff distance (Maddah et al., 2005), normalized Euclidean distances (Labra et al., 2017), and the Mahalanobis distance (Yoo et al., 2015).
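As a concrete illustration of the two most common building blocks above (re-sampling every streamline to a fixed number of points and comparing streamlines with the minimum average direct-flip distance), the following numpy sketch shows one possible implementation. It is a didactic approximation under those assumptions, not the code used in this work; libraries such as DiPy provide optimized equivalents, and the point count and function names here are illustrative.

```python
import numpy as np

def resample(streamline, n_points=20):
    """Linearly resample a (P, 3) polyline to n_points equally spaced
    along its arc length, so that all streamlines share one dimension."""
    seg = np.linalg.norm(np.diff(streamline, axis=0), axis=1)
    arc = np.concatenate(([0.0], np.cumsum(seg)))
    targets = np.linspace(0.0, arc[-1], n_points)
    return np.column_stack([np.interp(targets, arc, streamline[:, d])
                            for d in range(streamline.shape[1])])

def mdf(sa, sb):
    """Minimum average direct-flip (MDF) distance between two
    streamlines that were resampled to the same number of points."""
    direct = np.mean(np.linalg.norm(sa - sb, axis=1))
    flipped = np.mean(np.linalg.norm(sa - sb[::-1], axis=1))
    return min(direct, flipped)

# Toy usage with two synthetic 3D streamlines of different lengths.
rng = np.random.default_rng(1)
s1 = np.cumsum(rng.normal(size=(40, 3)), axis=0)
s2 = np.cumsum(rng.normal(size=(55, 3)), axis=0)
print("MDF distance:", mdf(resample(s1), resample(s2)))
```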
2.1.4. Correspondence technique

One crucial aspect of supervised tract segmentation is the mechanism for finding the corresponding streamline between the tractograms of different subjects, in order to transfer anatomical knowledge. A common approach to this problem is the nearest neighbor strategy, i.e., finding the nearest streamline or centroid in the atlas and labeling the streamlines of the new subject accordingly. In O'Donnell and Westin (2007) and O'Donnell et al. (2017), a high dimensional atlas was reconstructed from multiple tractograms. The new subject was then aligned with the atlas, and the closest cluster centroids from the atlas were computed to assign the anatomical label. In Guevara et al. (2012), the nearest centroids of the new subject were computed from a single atlas built from multiple subjects, with the normalized Euclidean distance. Recently, a faster implementation was proposed in Labra et al. (2017). There, the authors proposed to label each single streamline instead of cluster centroids and to accelerate the computation by filtering the streamlines in advance, using properties of the normalized Euclidean distance. A limitation is that an appropriate threshold has to be defined for each tract to be segmented. Similarly, in Yoo et al. (2015), the nearest neighbor strategy is used to find corresponding streamlines between those of the tractogram of a new subject and those of multiple example subjects (12, in their experiments). Again, two thresholds, i.e., a distance threshold and a voting threshold, have to be set in order to obtain the segmentation. The proposed implementation requires a GPU.

A different approach, based on graph matching instead of the nearest neighbor, was proposed by us for tractogram alignment, see Olivetti et al. (2016). Such an idea could be extended to the tract segmentation problem.

2.1.5. Refinement

After segmentation, in order to improve the accuracy of the segmented tract, some authors propose a refinement step, for example to identify and remove outliers. In Mayer et al. (2011), a tree-based refinement step was introduced. Initially, they padded the segmented tract with its nearest neighbors and then used a probabilistic boosting tree classifier to identify the outliers. Another approach to increase the accuracy of the segmented tract is majority voting (Rohlfing et al., 2004; Jin et al., 2012; Vercruysse et al., 2014; Yoo et al., 2015). The main concept of majority voting is to reach agreement on the segmented streamlines (or voxels) coming from different examples, usually removing the infrequent ones.

The accuracy of the outcome after the refinement step is closely related to the number of examples. This relation has been investigated in the vast literature on multi-atlas segmentation (MAS). The intuitive idea is that the behavior of the segmentation error is connected to the size of the atlas dataset. A first attempt to characterize such a relationship with a first-principles approach was proposed by Awate and Whitaker (2014). In their proposal, the expected segmentation error is characterized as a function of the size of the atlas database by formulating multi-atlas segmentation as a nonparametric regression problem. More recently, Zhuang and Shen (2016) combined the idea of multi-atlas with multi-modality and multi-scale patches for heart segmentation. For a comprehensive survey of multi-atlas segmentation in the broader field of medical imaging, see Iglesias and Sabuncu (2015).
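The majority-voting refinement mentioned above can be sketched in a few lines. The snippet below assumes that each example has already produced a set of candidate streamline indices in the target tractogram (toy sets here) and keeps only the streamlines selected by at least a chosen fraction of the examples; the threshold and the variable names are illustrative assumptions, not the settings used in the cited works.

```python
from collections import Counter

def majority_vote(candidate_sets, vote_fraction=0.5):
    """Keep streamline indices selected by at least `vote_fraction`
    of the example-based segmentations in `candidate_sets`."""
    counts = Counter(idx for s in candidate_sets for idx in set(s))
    needed = vote_fraction * len(candidate_sets)
    return sorted(idx for idx, c in counts.items() if c >= needed)

# Toy usage: segmentations of the same tract obtained from three examples.
seg_from_examples = [{3, 7, 12, 18}, {3, 7, 18, 25}, {3, 12, 18}]
print(majority_vote(seg_from_examples))   # -> [3, 7, 12, 18]
```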
2.2. Linear assignment problem solutions

The linear assignment problem (LAP) computes the optimal one-to-one assignment between the N elements of two sets of objects, minimizing the total cost. The LAP takes as input the cost matrix that describes the cost of assigning each object of the first set to each object of the second set. Various algorithms for solving the LAP in polynomial time have been proposed in the literature. A comprehensive review of the proposed algorithms can be found in Burkard et al. (2009) and Burkard and Cela (1999). An extensive computational comparison among eight well-known algorithms is given in Dell'Amico and Toth (2000). The algorithms include: Hungarian, signature, auction, pseudoflow, interior point methods, and Jonker-Volgenant (LAPJV). According to that survey and to Serratosa (2015), the Hungarian algorithm (Kuhn, 1955) and the Jonker-Volgenant algorithm (LAPJV; Jonker and Volgenant, 1987) are the most efficient ones, with time complexity O(N³). Nevertheless, in practice, LAPJV is much faster than the Hungarian algorithm, as reported in Serratosa (2015) and Dell'Amico and Toth (2000). This occurs because, despite the two algorithms belonging to the same time complexity class, i.e., O(N³), the respective constants of the third-order polynomials describing their exact running times are very different, giving a large advantage to LAPJV. We have directly observed this behavior in our experiments with LAPJV, compared to those with the Hungarian algorithm that we published in Sharmin et al. (2016).

According to Dell'Amico and Toth (2000) and Burkard et al. (2009), LAPJV is faster than other algorithms in multiple applications, see also Bijsterbosch and Volgenant (2010). Moreover, in many practical applications, the two sets of objects on which to compute the LAP have different sizes, i.e., the related cost matrix is rectangular. In Bijsterbosch and Volgenant (2010), a rectangular version of LAPJV was proposed, with a more efficient and robust solution than the original one in Jonker and Volgenant (1987). In this work, we adopted the rectangular version of LAPJV because of its efficiency and because we need to compute the correspondence between an example tract and the target tractogram, which clearly have different numbers of streamlines.
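To make the difference between nearest-neighbor matching and the one-to-one constraint of a rectangular LAP concrete, the following sketch builds a small rectangular cost matrix of pairwise distances and solves the assignment with scipy.optimize.linear_sum_assignment, which accepts rectangular matrices. This is only an illustration of the problem formulation under toy assumptions (streamlines summarized as single feature vectors), not the LAPJV-based implementation described in this work.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Toy setup: an "example tract" with 4 streamlines and a "target
# tractogram" with 10 streamlines, each summarized here by a single
# 3-D feature vector for simplicity (a real pipeline would use a
# streamline distance such as MDF on resampled streamlines).
rng = np.random.default_rng(2)
example_tract = rng.random((4, 3))
target_tractogram = rng.random((10, 3))

cost = cdist(example_tract, target_tractogram)   # rectangular cost matrix

# Nearest neighbor: each example streamline independently picks its
# closest target streamline; several rows may pick the same column.
nn_match = cost.argmin(axis=1)

# LAP: globally optimal one-to-one assignment on the rectangular matrix.
rows, cols = linear_sum_assignment(cost)

print("NN matches (duplicates allowed):", nn_match)
print("LAP matches (one-to-one)      :", dict(zip(rows, cols)))
```

With the NN strategy several example streamlines may be matched to the same target streamline, whereas the LAP solution assigns each example streamline to a distinct target streamline, which is exactly the one-to-one constraint discussed above.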
[ "24157921", "24802528", "8130344", "12023417", "12482069", "18041270", "18357821", "24600385", "23248578", "28712994", "23668970", "28034765", "22414992", "26201875", "24297904", "24821529", "27722821", "20716499", "23631987", "27981029", "18041271", "27994537", "15325368", "15050568", "15917106", "23702418", "17379540", "25134977", "23684880", "17481925", "20678578", "20079439", "24505722", "26754839", "26225419", "20570617", "26999615" ]
[ { "pmid": "24157921", "title": "Automated longitudinal intra-subject analysis (ALISA) for diffusion MRI tractography.", "abstract": "Fiber tractography (FT), which aims to reconstruct the three-dimensional trajectories of white matter (WM) fibers non-invasively, is one of the most popular approaches for analyzing diffusion tensor imaging (DTI) data given its high inter- and intra-rater reliability and scan-rescan reproducibility. The major disadvantage of manual FT segmentations, unfortunately, is that placing regions-of-interest for tract selection can be very labor-intensive and time-consuming. Although there are several methods that can identify specific WM fiber bundles in an automated way, manual FT segmentations across multiple subjects performed by a trained rater with neuroanatomical expertise are generally assumed to be more accurate. However, for longitudinal DTI analyses it may still be beneficial to automate the FT segmentation across multiple time points, but then for each individual subject separately. Both the inter-subject and intra-subject automation in this situation are intended for subjects without gross pathology. In this work, we propose such an automated longitudinal intra-subject analysis (dubbed ALISA) approach, and assessed whether ALISA could preserve the same level of reliability as obtained with manual FT segmentations. In addition, we compared ALISA with an automated inter-subject analysis. Based on DTI data sets from (i) ten healthy subjects that were scanned five times (six-month intervals, aged 7.6-8.6years at the first scan) and (ii) one control subject that was scanned ten times (weekly intervals, 12.2years at the first scan), we demonstrate that the increased efficiency provided by ALISA does not compromise the high degrees of precision and accuracy that can be achieved with manual FT segmentations. Further automation for inter-subject analyses, however, did not provide similarly accurate FT segmentations." }, { "pmid": "24802528", "title": "Multiatlas segmentation as nonparametric regression.", "abstract": "This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems." 
}, { "pmid": "8130344", "title": "MR diffusion tensor spectroscopy and imaging.", "abstract": "This paper describes a new NMR imaging modality--MR diffusion tensor imaging. It consists of estimating an effective diffusion tensor, Deff, within a voxel, and then displaying useful quantities derived from it. We show how the phenomenon of anisotropic diffusion of water (or metabolites) in anisotropic tissues, measured noninvasively by these NMR methods, is exploited to determine fiber tract orientation and mean particle displacements. Once Deff is estimated from a series of NMR pulsed-gradient, spin-echo experiments, a tissue's three orthotropic axes can be determined. They coincide with the eigenvectors of Deff, while the effective diffusivities along these orthotropic directions are the eigenvalues of Deff. Diffusion ellipsoids, constructed in each voxel from Deff, depict both these orthotropic axes and the mean diffusion distances in these directions. Moreover, the three scalar invariants of Deff, which are independent of the tissue's orientation in the laboratory frame of reference, reveal useful information about molecular mobility reflective of local microstructure and anatomy. Inherently tensors (like Deff) describing transport processes in anisotropic media contain new information within a macroscopic voxel that scalars (such as the apparent diffusivity, proton density, T1, and T2) do not." }, { "pmid": "12023417", "title": "White matter damage in Alzheimer's disease assessed in vivo using diffusion tensor magnetic resonance imaging.", "abstract": "OBJECTIVE\nTo investigate the extent and the nature of white matter tissue damage of patients with Alzheimer's disease using diffusion tensor magnetic resonance imaging (DT-MRI).\n\n\nBACKGROUND\nAlthough Alzheimer's disease pathology mainly affects cortical grey matter, previous pathological and MRI studies showed that also the brain white matter of patients is damaged. However, the nature of Alzheimer's disease associated white matter damage is still unclear.\n\n\nMETHODS\nConventional and DT-MRI scans were obtained from 16 patients with Alzheimer's disease and 10 sex and age matched healthy volunteers. The mean diffusivity (D), fractional anisotropy (FA), and inter-voxel coherence (C) of several white matter regions were measured.\n\n\nRESULTS\nD was higher and FA lower in the corpus callosum, as well as in the white matter of the frontal, temporal, and parietal lobes from patients with Alzheimer's disease than in the corresponding regions from healthy controls. D and FA of the white matter of the occipital lobe and internal capsule were not different between patients and controls. C values were also not different between patients and controls for any of the regions studied. Strong correlations were found between the mini mental state examination score and the average overall white matter D (r=0.92, p<0.001) and FA (r=0.78; p<0.001).\n\n\nCONCLUSIONS\nWhite matter changes in patients with Alzheimer's disease are likely to be secondary to wallerian degeneration of fibre tracts due to neuronal loss in cortical associative areas." }, { "pmid": "12482069", "title": "Virtual in vivo interactive dissection of white matter fasciculi in the human brain.", "abstract": "This work reports the use of diffusion tensor magnetic resonance tractography to visualize the three-dimensional (3D) structure of the major white matter fasciculi within living human brain. 
Specifically, we applied this technique to visualize in vivo (i) the superior longitudinal (arcuate) fasciculus, (ii) the inferior longitudinal fasciculus, (iii) the superior fronto-occipital (subcallosal) fasciculus, (iv) the inferior frontooccipital fasciculus, (v) the uncinate fasciculus, (vi) the cingulum, (vii) the anterior commissure, (viii) the corpus callosum, (ix) the internal capsule, and (x) the fornix. These fasciculi were first isolated and were then interactively displayed as a 3D-rendered object. The virtual tract maps obtained in vivo using this approach were faithful to the classical descriptions of white matter anatomy that have previously been documented in postmortem studies. Since we have been able to interactively delineate and visualize white matter fasciculi over their entire length in vivo, in a manner that has only previously been possible by histological means, \"virtual in vivo interactive dissection\" (VIVID) adds a new dimension to anatomical descriptions of the living human brain." }, { "pmid": "18041270", "title": "A probabilistic model-based approach to consistent white matter tract segmentation.", "abstract": "Since the invention of diffusion magnetic resonance imaging (dMRI), currently the only established method for studying white matter connectivity in a clinical environment, there has been a great deal of interest in the effects of various pathologies on the connectivity of the brain. As methods for in vivo tractography have been developed, it has become possible to track and segment specific white matter structures of interest for particular study. However, the consistency and reproducibility of tractography-based segmentation remain limited, and attempts to improve them have thus far typically involved the imposition of strong constraints on the tract reconstruction process itself. In this work we take a different approach, developing a formal probabilistic model for the relationships between comparable tracts in different scans, and then using it to choose a tract, a posteriori, which best matches a predefined reference tract for the structure of interest. We demonstrate that this method is able to significantly improve segmentation consistency without directly constraining the tractography algorithm." }, { "pmid": "24600385", "title": "Dipy, a library for the analysis of diffusion MRI data.", "abstract": "Diffusion Imaging in Python (Dipy) is a free and open source software project for the analysis of data from diffusion magnetic resonance imaging (dMRI) experiments. dMRI is an application of MRI that can be used to measure structural features of brain white matter. Many methods have been developed to use dMRI data to model the local configuration of white matter nerve fiber bundles and infer the trajectory of bundles connecting different parts of the brain. Dipy gathers implementations of many different methods in dMRI, including: diffusion signal pre-processing; reconstruction of diffusion distributions in individual voxels; fiber tractography and fiber track post-processing, analysis and visualization. Dipy aims to provide transparent implementations for all the different steps of dMRI analysis with a uniform programming interface. We have implemented classical signal reconstruction techniques, such as the diffusion tensor model and deterministic fiber tractography. 
In addition, cutting edge novel reconstruction techniques are implemented, such as constrained spherical deconvolution and diffusion spectrum imaging (DSI) with deconvolution, as well as methods for probabilistic tracking and original methods for tractography clustering. Many additional utility functions are provided to calculate various statistics, informative visualizations, as well as file-handling routines to assist in the development and use of novel techniques. In contrast to many other scientific software projects, Dipy is not being developed by a single research group. Rather, it is an open project that encourages contributions from any scientist/developer through GitHub and open discussions on the project mailing list. Consequently, Dipy today has an international team of contributors, spanning seven different academic institutions in five countries and three continents, which is still growing." }, { "pmid": "23248578", "title": "QuickBundles, a Method for Tractography Simplification.", "abstract": "Diffusion MR data sets produce large numbers of streamlines which are hard to visualize, interact with, and interpret in a clinically acceptable time scale, despite numerous proposed approaches. As a solution we present a simple, compact, tailor-made clustering algorithm, QuickBundles (QB), that overcomes the complexity of these large data sets and provides informative clusters in seconds. Each QB cluster can be represented by a single centroid streamline; collectively these centroid streamlines can be taken as an effective representation of the tractography. We provide a number of tests to show how the QB reduction has good consistency and robustness. We show how the QB reduction can help in the search for similarities across several subjects." }, { "pmid": "28712994", "title": "Recognition of white matter bundles using local and global streamline-based registration and clustering.", "abstract": "Virtual dissection of diffusion MRI tractograms is cumbersome and needs extensive knowledge of white matter anatomy. This virtual dissection often requires several inclusion and exclusion regions-of-interest that make it a process that is very hard to reproduce across experts. Having automated tools that can extract white matter bundles for tract-based studies of large numbers of people is of great interest for neuroscience and neurosurgical planning. The purpose of our proposed method, named RecoBundles, is to segment white matter bundles and make virtual dissection easier to perform. This can help explore large tractograms from multiple persons directly in their native space. RecoBundles leverages latest state-of-the-art streamline-based registration and clustering to recognize and extract bundles using prior bundle models. RecoBundles uses bundle models as shape priors for detecting similar streamlines and bundles in tractograms. RecoBundles is 100% streamline-based, is efficient to work with millions of streamlines and, most importantly, is robust and adaptive to incomplete data and bundles with missing components. It is also robust to pathological brains with tumors and deformations. We evaluated our results using multiple bundles and showed that RecoBundles is in good agreement with the neuroanatomical experts and generally produced more dense bundles. Across all the different experiments reported in this paper, RecoBundles was able to identify the core parts of the bundles, independently from tractography type (deterministic or probabilistic) or size. 
Thus, RecoBundles can be a valuable method for exploring tractograms and facilitating tractometry studies." }, { "pmid": "23668970", "title": "The minimal preprocessing pipelines for the Human Connectome Project.", "abstract": "The Human Connectome Project (HCP) faces the challenging task of bringing multiple magnetic resonance imaging (MRI) modalities together in a common automated preprocessing framework across a large cohort of subjects. The MRI data acquired by the HCP differ in many ways from data acquired on conventional 3 Tesla scanners and often require newly developed preprocessing methods. We describe the minimal preprocessing pipelines for structural, functional, and diffusion MRI that were developed by the HCP to accomplish many low level tasks, including spatial artifact/distortion removal, surface generation, cross-modal registration, and alignment to standard space. These pipelines are specially designed to capitalize on the high quality data offered by the HCP. The final standard space makes use of a recently introduced CIFTI file format and the associated grayordinate spatial coordinate system. This allows for combined cortical surface and subcortical volume analyses while reducing the storage and processing requirements for high spatial and temporal resolution data. Here, we provide the minimum image acquisition requirements for the HCP minimal preprocessing pipelines and additional advice for investigators interested in replicating the HCP's acquisition protocols or using these pipelines. Finally, we discuss some potential future improvements to the pipelines." }, { "pmid": "28034765", "title": "Reproducibility of superficial white matter tracts using diffusion-weighted imaging tractography.", "abstract": "Human brain connection map is far from being complete. In particular the study of the superficial white matter (SWM) is an unachieved task. Its description is essential for the understanding of human brain function and the study of pathogenesis triggered by abnormal connectivity. In this work we automatically created a multi-subject atlas of SWM diffusion-based bundles of the whole brain. For each subject, the complete cortico-cortical tractogram is first split into sub-tractograms connecting pairs of gyri. Then intra-subject shape-based fiber clustering performs compression of each sub-tractogram into a set of bundles. Proceeding further with shape-based clustering provides a match of the bundles across subjects. Bundles found in most of the subjects are instantiated in the atlas. To increase robustness, this procedure was performed with two independent groups of subjects, in order to discard bundles without match across the two independent atlases. Finally, the resulting intersection atlas was projected on a third independent group of subjects in order to filter out bundles without reproducible and reliable projection. The final multi-subject diffusion-based U-fiber atlas is composed of 100 bundles in total, 50 per hemisphere, from which 35 are common to both hemispheres." }, { "pmid": "22414992", "title": "Automatic fiber bundle segmentation in massive tractography datasets using a multi-subject bundle atlas.", "abstract": "This paper presents a method for automatic segmentation of white matter fiber bundles from massive dMRI tractography datasets. The method is based on a multi-subject bundle atlas derived from a two-level intra-subject and inter-subject clustering strategy. 
This atlas is a model of the brain white matter organization, computed for a group of subjects, made up of a set of generic fiber bundles that can be detected in most of the population. Each atlas bundle corresponds to several inter-subject clusters manually labeled to account for subdivisions of the underlying pathways often presenting large variability across subjects. An atlas bundle is represented by the multi-subject list of the centroids of all intra-subject clusters in order to get a good sampling of the shape and localization variability. The atlas, composed of 36 known deep white matter bundles and 47 superficial white matter bundles in each hemisphere, was inferred from a first database of 12 brains. It was successfully used to segment the deep white matter bundles in a second database of 20 brains and most of the superficial white matter bundles in 10 subjects of the same database." }, { "pmid": "26201875", "title": "Multi-atlas segmentation of biomedical images: A survey.", "abstract": "Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing, et al. (2004), Klein, et al. (2005), and Heckemann, et al. (2006), is becoming one of the most widely-used and successful image segmentation techniques in biomedical applications. By manipulating and utilizing the entire dataset of \"atlases\" (training images that have been previously labeled, e.g., manually by an expert), rather than some model-based average representation, MAS has the flexibility to better capture anatomical variation, thus offering superior segmentation accuracy. This benefit, however, typically comes at a high computational cost. Recent advancements in computer hardware and image processing software have been instrumental in addressing this challenge and facilitated the wide adoption of MAS. Today, MAS has come a long way and the approach includes a wide array of sophisticated algorithms that employ ideas from machine learning, probabilistic modeling, optimization, and computer vision, among other fields. This paper presents a survey of published MAS algorithms and studies that have applied these methods to various biomedical problems. In writing this survey, we have three distinct aims. Our primary goal is to document how MAS was originally conceived, later evolved, and now relates to alternative methods. Second, this paper is intended to be a detailed reference of past research activity in MAS, which now spans over a decade (2003-2014) and entails novel methodological developments and application-specific solutions. Finally, our goal is to also present a perspective on the future of MAS, which, we believe, will be one of the dominant approaches in biomedical image segmentation." }, { "pmid": "24297904", "title": "Sex differences in the structural connectome of the human brain.", "abstract": "Sex differences in human behavior show adaptive complementarity: Males have better motor and spatial abilities, whereas females have superior memory and social cognition skills. Studies also show sex differences in human brains but do not explain this complementarity. In this work, we modeled the structural connectome using diffusion tensor imaging in a sample of 949 youths (aged 8-22 y, 428 males and 521 females) and discovered unique sex differences in brain connectivity during the course of development. Connection-wise statistical analysis, as well as analysis of regional and global network measures, presented a comprehensive description of network characteristics. 
In all supratentorial regions, males had greater within-hemispheric connectivity, as well as enhanced modularity and transitivity, whereas between-hemispheric connectivity and cross-module participation predominated in females. However, this effect was reversed in the cerebellar connections. Analysis of these changes developmentally demonstrated differences in trajectory between males and females mainly in adolescence and in adulthood. Overall, the results suggest that male brains are structured to facilitate connectivity between perception and coordinated action, whereas female brains are designed to facilitate communication between analytical and intuitive processing modes." }, { "pmid": "24821529", "title": "Automatic clustering of white matter fibers in brain diffusion MRI with an application to genetics.", "abstract": "To understand factors that affect brain connectivity and integrity, it is beneficial to automatically cluster white matter (WM) fibers into anatomically recognizable tracts. Whole brain tractography, based on diffusion-weighted MRI, generates vast sets of fibers throughout the brain; clustering them into consistent and recognizable bundles can be difficult as there are wide individual variations in the trajectory and shape of WM pathways. Here we introduce a novel automated tract clustering algorithm based on label fusion--a concept from traditional intensity-based segmentation. Streamline tractography generates many incorrect fibers, so our top-down approach extracts tracts consistent with known anatomy, by mapping multiple hand-labeled atlases into a new dataset. We fuse clustering results from different atlases, using a mean distance fusion scheme. We reliably extracted the major tracts from 105-gradient high angular resolution diffusion images (HARDI) of 198 young normal twins. To compute population statistics, we use a pointwise correspondence method to match, compare, and average WM tracts across subjects. We illustrate our method in a genetic study of white matter tract heritability in twins." }, { "pmid": "27722821", "title": "Fast Automatic Segmentation of White Matter Streamlines Based on a Multi-Subject Bundle Atlas.", "abstract": "This paper presents an algorithm for fast segmentation of white matter bundles from massive dMRI tractography datasets using a multisubject atlas. We use a distance metric to compare streamlines in a subject dataset to labeled centroids in the atlas, and label them using a per-bundle configurable threshold. In order to reduce segmentation time, the algorithm first preprocesses the data using a simplified distance metric to rapidly discard candidate streamlines in multiple stages, while guaranteeing that no false negatives are produced. The smaller set of remaining streamlines is then segmented using the original metric, thus eliminating any false positives from the preprocessing stage. As a result, a single-thread implementation of the algorithm can segment a dataset of almost 9 million streamlines in less than 6 minutes. Moreover, parallel versions of our algorithm for multicore processors and graphics processing units further reduce the segmentation time to less than 22 seconds and to 5 seconds, respectively. This performance enables the use of the algorithm in truly interactive applications for visualization, analysis, and segmentation of large white matter tractography datasets." 
}, { "pmid": "20716499", "title": "A supervised framework for the registration and segmentation of white matter fiber tracts.", "abstract": "A supervised framework is presented for the automatic registration and segmentation of white matter (WM) tractographies extracted from brain DT-MRI. The framework relies on the direct registration between the fibers, without requiring any intensity-based registration as preprocessing. An affine transform is recovered together with a set of segmented fibers. A recently introduced probabilistic boosting tree classifier is used in a segmentation refinement step to improve the precision of the target tract segmentation. The proposed method compares favorably with a state-of-the-art intensity-based algorithm for affine registration of DTI tractographies. Segmentation results for 12 major WM tracts are demonstrated. Quantitative results are also provided for the segmentation of a particularly difficult case, the optic radiation tract. An average precision of 80% and recall of 55% were obtained for the optimal configuration of the presented method." }, { "pmid": "23631987", "title": "Fiber clustering versus the parcellation-based connectome.", "abstract": "We compare two strategies for modeling the connections of the brain's white matter: fiber clustering and the parcellation-based connectome. Both methods analyze diffusion magnetic resonance imaging fiber tractography to produce a quantitative description of the brain's connections. Fiber clustering is designed to reconstruct anatomically-defined white matter tracts, while the parcellation-based white matter segmentation enables the study of the brain as a network. From the perspective of white matter segmentation, we compare and contrast the goals and methods of the parcellation-based and clustering approaches, with special focus on reviewing the field of fiber clustering. We also propose a third category of new hybrid methods that combine the aspects of parcellation and clustering, for joint analysis of connection structure and anatomy or function. We conclude that these different approaches for segmentation and modeling of the white matter can advance the neuroscientific study of the brain's connectivity in complementary ways." }, { "pmid": "27981029", "title": "Automated white matter fiber tract identification in patients with brain tumors.", "abstract": "We propose a method for the automated identification of key white matter fiber tracts for neurosurgical planning, and we apply the method in a retrospective study of 18 consecutive neurosurgical patients with brain tumors. Our method is designed to be relatively robust to challenges in neurosurgical tractography, which include peritumoral edema, displacement, and mass effect caused by mass lesions. The proposed method has two parts. First, we learn a data-driven white matter parcellation or fiber cluster atlas using groupwise registration and spectral clustering of multi-fiber tractography from healthy controls. Key fiber tract clusters are identified in the atlas. Next, patient-specific fiber tracts are automatically identified using tractography-based registration to the atlas and spectral embedding of patient tractography. Results indicate good generalization of the data-driven atlas to patients: 80% of the 800 fiber clusters were identified in all 18 patients, and 94% of the 800 fiber clusters were found in 16 or more of the 18 patients. 
Automated subject-specific tract identification was evaluated by quantitative comparison to subject-specific motor and language functional MRI, focusing on the arcuate fasciculus (language) and corticospinal tracts (motor), which were identified in all patients. Results indicate good colocalization: 89 of 95, or 94%, of patient-specific language and motor activations were intersected by the corresponding identified tract. All patient-specific activations were within 3mm of the corresponding language or motor tract. Overall, our results indicate the potential of an automated method for identifying fiber tracts of interest for neurosurgical planning, even in patients with mass lesions." }, { "pmid": "18041271", "title": "Automatic tractography segmentation using a high-dimensional white matter atlas.", "abstract": "We propose a new white matter atlas creation method that learns a model of the common white matter structures present in a group of subjects. We demonstrate that our atlas creation method, which is based on group spectral clustering of tractography, discovers structures corresponding to expected white matter anatomy such as the corpus callosum, uncinate fasciculus, cingulum bundles, arcuate fasciculus, and corona radiata. The white matter clusters are augmented with expert anatomical labels and stored in a new type of atlas that we call a high-dimensional white matter atlas. We then show how to perform automatic segmentation of tractography from novel subjects by extending the spectral clustering solution, stored in the atlas, using the Nystrom method. We present results regarding the stability of our method and parameter choices. Finally we give results from an atlas creation and automatic segmentation experiment. We demonstrate that our automatic tractography segmentation identifies corresponding white matter regions across hemispheres and across subjects, enabling group comparison of white matter anatomy." }, { "pmid": "27994537", "title": "Alignment of Tractograms As Graph Matching.", "abstract": "The white matter pathways of the brain can be reconstructed as 3D polylines, called streamlines, through the analysis of diffusion magnetic resonance imaging (dMRI) data. The whole set of streamlines is called tractogram and represents the structural connectome of the brain. In multiple applications, like group-analysis, segmentation, or atlasing, tractograms of different subjects need to be aligned. Typically, this is done with registration methods, that transform the tractograms in order to increase their similarity. In contrast with transformation-based registration methods, in this work we propose the concept of tractogram correspondence, whose aim is to find which streamline of one tractogram corresponds to which streamline in another tractogram, i.e., a map from one tractogram to another. As a further contribution, we propose to use the relational information of each streamline, i.e., its distances from the other streamlines in its own tractogram, as the building block to define the optimal correspondence. We provide an operational procedure to find the optimal correspondence through a combinatorial optimization problem and we discuss its similarity to the graph matching problem. In this work, we propose to represent tractograms as graphs and we adopt a recent inexact sub-graph matching algorithm to approximate the solution of the tractogram correspondence problem. 
On tractograms generated from the Human Connectome Project dataset, we report experimental evidence that tractogram correspondence, implemented as graph matching, provides much better alignment than affine registration and comparable if not better results than non-linear registration of volumes." }, { "pmid": "15325368", "title": "White matter hemisphere asymmetries in healthy subjects and in schizophrenia: a diffusion tensor MRI study.", "abstract": "Hemisphere asymmetry was explored in normal healthy subjects and in patients with schizophrenia using a novel voxel-based tensor analysis applied to fractional anisotropy (FA) of the diffusion tensor. Our voxel-based approach, which requires precise spatial normalization to remove the misalignment of fiber tracts, includes generating a symmetrical group average template of the diffusion tensor by applying nonlinear elastic warping of the demons algorithm. We then normalized all 32 diffusion tensor MRIs from healthy subjects and 23 from schizophrenic subjects to the symmetrical average template. For each brain, six channels of tensor component images and one T2-weighted image were used for registration to match tensor orientation and shape between images. A statistical evaluation of white matter asymmetry was then conducted on the normalized FA images and their flipped images. In controls, we found left-higher-than-right anisotropic asymmetry in the anterior part of the corpus callosum, cingulum bundle, the optic radiation, and the superior cerebellar peduncle, and right-higher-than-left anisotropic asymmetry in the anterior limb of the internal capsule and the anterior limb's prefrontal regions, in the uncinate fasciculus, and in the superior longitudinal fasciculus. In patients, the asymmetry was lower, although still present, in the cingulum bundle and the anterior corpus callosum, and not found in the anterior limb of the internal capsule, the uncinate fasciculus, and the superior cerebellar peduncle compared to healthy subjects. These findings of anisotropic asymmetry pattern differences between healthy controls and patients with schizophrenia are likely related to neurodevelopmental abnormalities in schizophrenia." }, { "pmid": "15050568", "title": "Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains.", "abstract": "This paper evaluates strategies for atlas selection in atlas-based segmentation of three-dimensional biomedical images. Segmentation by intensity-based nonrigid registration to atlas images is applied to confocal microscopy images acquired from the brains of 20 bees. This paper evaluates and compares four different approaches for atlas image selection: registration to an individual atlas image (IND), registration to an average-shape atlas image (AVG), registration to the most similar image from a database of individual atlas images (SIM), and registration to all images from a database of individual atlas images with subsequent multi-classifier decision fusion (MUL). The MUL strategy is a novel application of multi-classifier techniques, which are common in pattern recognition, to atlas-based segmentation. For each atlas selection strategy, the segmentation performance of the algorithm was quantified by the similarity index (SI) between the automatic segmentation result and a manually generated gold standard. 
The best segmentation accuracy was achieved using the MUL paradigm, which resulted in a mean similarity index value between manual and automatic segmentation of 0.86 (AVG, 0.84; SIM, 0.82; IND, 0.81). The superiority of the MUL strategy over the other three methods is statistically significant (two-sided paired t test, P < 0.001). Both the MUL and AVG strategies performed better than the best possible SIM and IND strategies with optimal a posteriori atlas selection (mean similarity index for optimal SIM, 0.83; for optimal IND, 0.81). Our findings show that atlas selection is an important issue in atlas-based segmentation and that, in particular, multi-classifier techniques can substantially increase the segmentation accuracy." }, { "pmid": "15917106", "title": "Age-related alterations in white matter microstructure measured by diffusion tensor imaging.", "abstract": "Cerebral white matter (WM) undergoes various degenerative changes with normal aging, including decreases in myelin density and alterations in myelin structure. We acquired whole-head, high-resolution diffusion tensor images (DTI) in 38 participants across the adult age span. Maps of fractional anisotropy (FA), a measure of WM microstructure, were calculated for each participant to determine whether particular fiber systems of the brain are preferentially vulnerable to WM degeneration. Regional FA measures were estimated from nine regions of interest in each hemisphere and from the genu and splenium of the corpus callosum (CC). The results showed significant age-related decline in FA in frontal WM, the posterior limb of the internal capsule (PLIC), and the genu of the CC. In contrast, temporal and posterior WM was relatively preserved. These findings suggest that WM alterations are variable throughout the brain and that particular fiber populations within prefrontal region and PLIC are most vulnerable to age-related degeneration." }, { "pmid": "23702418", "title": "Advances in diffusion MRI acquisition and processing in the Human Connectome Project.", "abstract": "The Human Connectome Project (HCP) is a collaborative 5-year effort to map human brain connections and their variability in healthy adults. A consortium of HCP investigators will study a population of 1200 healthy adults using multiple imaging modalities, along with extensive behavioral and genetic data. In this overview, we focus on diffusion MRI (dMRI) and the structural connectivity aspect of the project. We present recent advances in acquisition and processing that allow us to obtain very high-quality in-vivo MRI data, whilst enabling scanning of a very large number of subjects. These advances result from 2 years of intensive efforts in optimising many aspects of data acquisition and processing during the piloting phase of the project. The data quality and methods described here are representative of the datasets and processing pipelines that will be made freely available to the community at quarterly intervals, beginning in 2013." }, { "pmid": "17379540", "title": "Robust determination of the fibre orientation distribution in diffusion MRI: non-negativity constrained super-resolved spherical deconvolution.", "abstract": "Diffusion-weighted (DW) MR images contain information about the orientation of brain white matter fibres that potentially can be used to study human brain connectivity in vivo using tractography techniques. 
Currently, the diffusion tensor model is widely used to extract fibre directions from DW-MRI data, but fails in regions containing multiple fibre orientations. The spherical deconvolution technique has recently been proposed to address this limitation. It provides an estimate of the fibre orientation distribution (FOD) by assuming the DW signal measured from any fibre bundle is adequately described by a single response function. However, the deconvolution is ill-conditioned and susceptible to noise contamination. This tends to introduce artefactual negative regions in the FOD, which are clearly physically impossible. In this study, the introduction of a constraint on such negative regions is proposed to improve the conditioning of the spherical deconvolution. This approach is shown to provide FOD estimates that are robust to noise whilst preserving angular resolution. The approach also permits the use of super-resolution, whereby more FOD parameters are estimated than were actually measured, improving the angular resolution of the results. The method provides much better defined fibre orientation estimates, and allows orientations to be resolved that are separated by smaller angles than previously possible. This should allow tractography algorithms to be designed that are able to track reliably through crossing fibre regions." }, { "pmid": "25134977", "title": "Automated tract extraction via atlas based Adaptive Clustering.", "abstract": "Advancements in imaging protocols such as the high angular resolution diffusion-weighted imaging (HARDI) and in tractography techniques are expected to cause an increase in the tract-based analyses. Statistical analyses over white matter tracts can contribute greatly towards understanding structural mechanisms of the brain since tracts are representative of connectivity pathways. The main challenge with tract-based studies is the extraction of the tracts of interest in a consistent and comparable manner over a large group of individuals without drawing the inclusion and exclusion regions of interest. In this work, we design a framework for automated extraction of white matter tracts. The framework introduces three main components, namely a connectivity based fiber representation, a fiber bundle atlas, and a clustering approach called Adaptive Clustering. The fiber representation relies on the connectivity signatures of fibers to establish an easy correspondence between different subjects. A group-wise clustering of these fibers that are represented by the connectivity signatures is then used to generate a fiber bundle atlas. Finally, Adaptive Clustering incorporates the previously generated clustering atlas as a prior, to cluster the fibers of a new subject automatically. Experiments on the HARDI scans of healthy individuals acquired repeatedly, demonstrate the applicability, reliability and the repeatability of our approach in extracting white matter tracts. By alleviating the seed region selection and the inclusion/exclusion ROI drawing requirements that are usually handled by trained radiologists, the proposed framework expands the range of possible clinical applications and establishes the ability to perform tract-based analyses with large samples." 
}, { "pmid": "23684880", "title": "The WU-Minn Human Connectome Project: an overview.", "abstract": "The Human Connectome Project consortium led by Washington University, University of Minnesota, and Oxford University is undertaking a systematic effort to map macroscopic human brain circuits and their relationship to behavior in a large population of healthy adults. This overview article focuses on progress made during the first half of the 5-year project in refining the methods for data acquisition and analysis. Preliminary analyses based on a finalized set of acquisition and preprocessing protocols demonstrate the exceptionally high quality of the data from each modality. The first quarterly release of imaging and behavioral data via the ConnectomeDB database demonstrates the commitment to making HCP datasets freely accessible. Altogether, the progress to date provides grounds for optimism that the HCP datasets and associated methods and software will become increasingly valuable resources for characterizing human brain connectivity and function, their relationship to behavior, and their heritability and genetic underpinnings." }, { "pmid": "17481925", "title": "Reproducibility of quantitative tractography methods applied to cerebral white matter.", "abstract": "Tractography based on diffusion tensor imaging (DTI) allows visualization of white matter tracts. In this study, protocols to reconstruct eleven major white matter tracts are described. The protocols were refined by several iterations of intra- and inter-rater measurements and identification of sources of variability. Reproducibility of the established protocols was then tested by raters who did not have previous experience in tractography. The protocols were applied to a DTI database of adult normal subjects to study size, fractional anisotropy (FA), and T2 of individual white matter tracts. Distinctive features in FA and T2 were found for the corticospinal tract and callosal fibers. Hemispheric asymmetry was observed for the size of white matter tracts projecting to the temporal lobe. This protocol provides guidelines for reproducible DTI-based tract-specific quantification." }, { "pmid": "20678578", "title": "Tractography segmentation using a hierarchical Dirichlet processes mixture model.", "abstract": "In this paper, we propose a new nonparametric Bayesian framework to cluster white matter fiber tracts into bundles using a hierarchical Dirichlet processes mixture (HDPM) model. The number of clusters is automatically learned driven by data with a Dirichlet process (DP) prior instead of being manually specified. After the models of bundles have been learned from training data without supervision, they can be used as priors to cluster/classify fibers of new subjects for comparison across subjects. When clustering fibers of new subjects, new clusters can be created for structures not observed in the training data. Our approach does not require computing pairwise distances between fibers and can cluster a huge set of fibers across multiple subjects. We present results on several data sets, the largest of which has more than 120,000 fibers." 
}, { "pmid": "20079439", "title": "Unsupervised white matter fiber clustering and tract probability map generation: applications of a Gaussian process framework for white matter fibers.", "abstract": "With the increasing importance of fiber tracking in diffusion tensor images for clinical needs, there has been a growing demand for an objective mathematical framework to perform quantitative analysis of white matter fiber bundles incorporating their underlying physical significance. This article presents such a novel mathematical framework that facilitates mathematical operations between tracts using an inner product between fibres. Such inner product operation, based on Gaussian processes, spans a metric space. This metric facilitates combination of fiber tracts, rendering operations like tract membership to a bundle or bundle similarity simple. Based on this framework, we have designed an automated unsupervised atlas-based clustering method that does not require manual initialization nor an a priori knowledge of the number of clusters. Quantitative analysis can now be performed on the clustered tract volumes across subjects, thereby avoiding the need for point parameterization of these fibers, or the use of medial or envelope representations as in previous work. Experiments on synthetic data demonstrate the mathematical operations. Subsequently, the applicability of the unsupervised clustering framework has been demonstrated on a 21-subject dataset." }, { "pmid": "24505722", "title": "On describing human white matter anatomy: the white matter query language.", "abstract": "The main contribution of this work is the careful syntactical definition of major white matter tracts in the human brain based on a neuroanatomist's expert knowledge. We present a technique to formally describe white matter tracts and to automatically extract them from diffusion MRI data. The framework is based on a novel query language with a near-to-English textual syntax. This query language allows us to construct a dictionary of anatomical definitions describing white matter tracts. The definitions include adjacent gray and white matter regions, and rules for spatial relations. This enables automated coherent labeling of white matter anatomy across subjects. We use our method to encode anatomical knowledge in human white matter describing 10 association and 8 projection tracts per hemisphere and 7 commissural tracts. The technique is shown to be comparable in accuracy to manual labeling. We present results applying this framework to create a white matter atlas from 77 healthy subjects, and we use this atlas in a proof-of-concept study to detect tract changes specific to schizophrenia." }, { "pmid": "26754839", "title": "The white matter query language: a novel approach for describing human white matter anatomy.", "abstract": "We have developed a novel method to describe human white matter anatomy using an approach that is both intuitive and simple to use, and which automatically extracts white matter tracts from diffusion MRI volumes. Further, our method simplifies the quantification and statistical analysis of white matter tracts on large diffusion MRI databases. This work reflects the careful syntactical definition of major white matter fiber tracts in the human brain based on a neuroanatomist's expert knowledge. The framework is based on a novel query language with a near-to-English textual syntax. 
This query language makes it possible to construct a dictionary of anatomical definitions that describe white matter tracts. The definitions include adjacent gray and white matter regions, and rules for spatial relations. This novel method makes it possible to automatically label white matter anatomy across subjects. After describing this method, we provide an example of its implementation where we encode anatomical knowledge in human white matter for ten association and 15 projection tracts per hemisphere, along with seven commissural tracts. Importantly, this novel method is comparable in accuracy to manual labeling. Finally, we present results applying this method to create a white matter atlas from 77 healthy subjects, and we use this atlas in a small proof-of-concept study to detect changes in association tracts that characterize schizophrenia." }, { "pmid": "26225419", "title": "An Example-Based Multi-Atlas Approach to Automatic Labeling of White Matter Tracts.", "abstract": "We present an example-based multi-atlas approach for classifying white matter (WM) tracts into anatomic bundles. Our approach exploits expert-provided example data to automatically classify the WM tracts of a subject. Multiple atlases are constructed to model the example data from multiple subjects in order to reflect the individual variability of bundle shapes and trajectories over subjects. For each example subject, an atlas is maintained to allow the example data of a subject to be added or deleted flexibly. A voting scheme is proposed to facilitate the multi-atlas exploitation of example data. For conceptual simplicity, we adopt the same metrics in both example data construction and WM tract labeling. Due to the huge number of WM tracts in a subject, it is time-consuming to label each WM tract individually. Thus, the WM tracts are grouped according to their shape similarity, and WM tracts within each group are labeled simultaneously. To further enhance the computational efficiency, we implemented our approach on the graphics processing unit (GPU). Through nested cross-validation we demonstrated that our approach yielded high classification performance. The average sensitivities for bundles in the left and right hemispheres were 89.5% and 91.0%, respectively, and their average false discovery rates were 14.9% and 14.2%, respectively." }, { "pmid": "20570617", "title": "Atlas-guided tract reconstruction for automated and comprehensive examination of the white matter anatomy.", "abstract": "Tractography based on diffusion tensor imaging (DTI) is widely used to quantitatively analyze the status of the white matter anatomy in a tract-specific manner in many types of diseases. This approach, however, involves subjective judgment in the tract-editing process to extract only the tracts of interest. This process, usually performed by manual delineation of regions of interest, is also time-consuming, and certain tracts, especially the short cortico-cortical association fibers, are difficult to reconstruct. In this paper, we propose an automated approach for reconstruction of a large number of white matter tracts. In this approach, existing anatomical knowledge about tract trajectories (called the Template ROI Set or TRS) were stored in our DTI-based brain atlas with 130 three-dimensional anatomical segmentations, which were warped non-linearly to individual DTI data. We examined the degree of matching with manual results for selected fibers. 
We established 30 TRSs to reconstruct 30 prominent and previously well-described fibers. In addition, TRSs were developed to delineate 29 short association fibers that were found in all normal subjects examined in this paper (N=20). Probabilistic maps of the 59 tract trajectories were created from the normal subjects and were incorporated into our image analysis tool for automated tract-specific quantification." }, { "pmid": "26999615", "title": "Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI.", "abstract": "A whole heart segmentation (WHS) method is presented for cardiac MRI. This segmentation method employs multi-modality atlases from MRI and CT and adopts a new label fusion algorithm which is based on the proposed multi-scale patch (MSP) strategy and a new global atlas ranking scheme. MSP, developed from the scale-space theory, uses the information of multi-scale images and provides different levels of the structural information of images for multi-level local atlas ranking. Both the local and global atlas ranking steps use the information theoretic measures to compute the similarity between the target image and the atlases from multiple modalities. The proposed segmentation scheme was evaluated on a set of data involving 20 cardiac MRI and 20 CT images. Our proposed algorithm demonstrated a promising performance, yielding a mean WHS Dice score of 0.899 ± 0.0340, Jaccard index of 0.818 ± 0.0549, and surface distance error of 1.09 ± 1.11 mm for the 20 MRI data. The average runtime for the proposed label fusion was 12.58 min." } ]
Scientific Reports
29467401
PMC5821733
10.1038/s41598-018-21715-0
dynGENIE3: dynamical GENIE3 for the inference of gene networks from time series expression data
The elucidation of gene regulatory networks is one of the major challenges of systems biology. The gene expression measurements exploited by network inference methods are typically available either as steady-state expression vectors or as time series expression data. In our previous work, we proposed the GENIE3 method, which exploits variable importance scores derived from Random forests to identify the regulators of each target gene. This method provided state-of-the-art performance on several benchmark datasets, but it could not, however, be applied specifically to time series expression data. We propose here an adaptation of the GENIE3 method, called dynamical GENIE3 (dynGENIE3), for handling both time series and steady-state expression data. The proposed method is evaluated extensively on the artificial DREAM4 benchmarks and on three real time series expression datasets. Although dynGENIE3 does not systematically yield the best performance on every network, it is competitive with diverse methods from the literature, while preserving the main advantages of GENIE3 in terms of scalability.
Related works
Like dynGENIE3, many network inference approaches for time series data are based on an ODE model of the type of Eq. (7)8,21. These methods mainly differ in the terms present in the right-hand side of the ODE (such as decay rates or the influence of external perturbations), in the mathematical form of the models fj, in the algorithm used to train these models, and in the way a network is inferred from the resulting models. dynGENIE3 adopts the same ODE formulation as the Inferelator approach16: each ODE includes a term representing the decay of the target gene, and the functions fj take as input the expression of all the genes at some time point t. In the specific case of dynGENIE3, the functions fj are represented by ensembles of regression trees, which are trained to minimize the least-square error using the Random forest algorithm, and a network is inferred by thresholding the variable importance scores derived from the Random forest models. Like the standard GENIE3, dynGENIE3 has a reasonable computational complexity, which is at worst O(prN log N), where p is the total number of genes, r is the number of candidate regulators and N is the number of observations.
In comparison, most methods in the literature (including Inferelator) assume that the models fj are linear and train them by jointly maximizing the quality of the fit and minimizing some sparsity-inducing penalty (e.g. using an L1 penalty term or appropriate Bayesian priors). After training the linear models, a network can be obtained by analysing the weights within the models, several of which have been forced to zero during training. In contrast to these methods, dynGENIE3 does not make any prior hypothesis about the form of the fj models. This is an advantage in terms of representational power, but it could also result in higher variance, and therefore in worse performance due to overfitting, especially when data are scarce. A few methods also exploit non-linear/non-parametric models within a similar framework, among which are Jump320, OKVAR-Boost22 and CSI13. Like dynGENIE3, Jump3 incorporates a (different) dynamical model within a non-parametric, tree-based approach. In the model used by Jump3, the functions fj represent latent variables, which necessitated the development of a new type of decision tree, whereas Random forests can be applied as such in dynGENIE3. One drawback of Jump3 is its high computational complexity with respect to the number N of observations, which is O(N^4) in the worst-case scenario. Moreover, Jump3 cannot be used for the joint analysis of time series and steady-state data. OKVAR-Boost jointly represents the models fj for all genes using an ensemble of operator-valued kernel regression models trained with a randomized boosting algorithm. The network structure is then estimated from the resulting model by computing its Jacobian matrix. One drawback of this method with respect to dynGENIE3 is that it requires the tuning of several meta-parameters; the authors have nevertheless proposed an original approach, based on a stability criterion, to tune them. Finally, CSI is a Bayesian inference method that learns the fj models in the form of Gaussian processes. Since learning Gaussian processes does not embed any feature selection mechanism, CSI performs network inference through a combinatorial search over all the potential sets of regulators of each gene in turn, constructing a posterior probability distribution over these potential sets of regulators.
As a consequence, the complexity of the method is O(pN^3 r^d/(d − 1)!), where d is a parameter defining the maximum number of regulators per gene8. This high complexity makes CSI unsuitable when the number of candidate regulators (r) or the number of observations (N) is too large. Supplementary Table S1 compares the running times of dynGENIE3 and CSI on different datasets. The most striking difference is observed when inferring the DREAM4 100-gene networks: while dynGENIE3 takes only a few minutes to infer one network, CSI can take more than 48 hours per target gene. The CSI algorithm can be parallelised over the different target genes (like dynGENIE3), but even in that case the computational burden remains an issue when inferring large networks containing thousands of genes and hundreds of transcription factors (such as the E. coli network).
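The per-gene regression scheme described above (approximate dx_j/dt + alpha_j x_j from the time series, regress it on the expression of all genes with a Random forest, then rank regulators by importance scores) can be sketched in a few lines. The snippet below is a simplified illustration rather than the reference dynGENIE3 implementation: it assumes scikit-learn's RandomForestRegressor as the tree ensemble, finite differences between consecutive time points to approximate the derivative, and user-supplied decay rates; handling of multiple time series, steady-state data and candidate-regulator restriction is omitted.

```python
# Minimal sketch of dynGENIE3-style network inference from one time series.
# Assumptions (not from the reference implementation): scikit-learn's
# RandomForestRegressor as the ensemble learner, finite-difference derivative
# approximation, and known per-gene decay rates alpha_j.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def dyngenie3_sketch(ts, times, decay_rates, n_trees=100, seed=0):
    """Rank candidate regulators for each target gene.

    ts          : array (T, p) -- expression of p genes at T time points
    times       : array (T,)   -- sampling times
    decay_rates : array (p,)   -- assumed mRNA decay rate alpha_j per gene
    Returns an array (p, p) of scores w[i, j] for the putative link gene i -> gene j.
    """
    T, p = ts.shape
    scores = np.zeros((p, p))
    dt = np.diff(times)                        # intervals between consecutive samples

    for j in range(p):
        # Response: finite-difference approximation of dx_j/dt + alpha_j * x_j,
        # evaluated between consecutive time points.
        y = (ts[1:, j] - ts[:-1, j]) / dt + decay_rates[j] * ts[:-1, j]
        # Inputs: expression of all genes at the earlier time point.
        X = ts[:-1, :]

        rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
        rf.fit(X, y)
        scores[:, j] = rf.feature_importances_  # importance of each putative regulator

    np.fill_diagonal(scores, 0.0)              # ignore self-links when ranking edges
    return scores

# Toy usage: 5 genes observed at 10 time points.
rng = np.random.default_rng(0)
expr = rng.random((10, 5))
t = np.linspace(0.0, 90.0, 10)
alphas = np.full(5, 0.02)
w = dyngenie3_sketch(expr, t, alphas)
# Edges can then be ranked by w and thresholded to obtain a network.
```

Restricting the columns of X to known transcription factors recovers the r candidate regulators appearing in the O(prN log N) complexity quoted above, and additional time series or steady-state observations only change how X and y are stacked before fitting.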
[ "22805708", "17214507", "20927193", "22796662", "23226586", "24176667", "24786523", "16686963", "21049040", "20186320", "24400020", "19961876", "20949005", "24529382", "17224916", "21036869", "25896902", "20461071", "23203884", "24243845", "27682842", "11911796" ]
[ { "pmid": "22805708", "title": "Studying and modelling dynamic biological processes using time-series gene expression data.", "abstract": "Biological processes are often dynamic, thus researchers must monitor their activity at multiple time points. The most abundant source of information regarding such dynamic activity is time-series gene expression data. These data are used to identify the complete set of activated genes in a biological process, to infer their rates of change, their order and their causal effects and to model dynamic systems in the cell. In this Review we discuss the basic patterns that have been observed in time-series experiments, how these patterns are combined to form expression programs, and the computational analysis, visualization and integration of these data to infer models of dynamic biological systems." }, { "pmid": "17214507", "title": "Large-scale mapping and validation of Escherichia coli transcriptional regulation from a compendium of expression profiles.", "abstract": "Machine learning approaches offer the potential to systematically identify transcriptional regulatory interactions from a compendium of microarray expression profiles. However, experimental validation of the performance of these methods at the genome scale has remained elusive. Here we assess the global performance of four existing classes of inference algorithms using 445 Escherichia coli Affymetrix arrays and 3,216 known E. coli regulatory interactions from RegulonDB. We also developed and applied the context likelihood of relatedness (CLR) algorithm, a novel extension of the relevance networks class of algorithms. CLR demonstrates an average precision gain of 36% relative to the next-best performing algorithm. At a 60% true positive rate, CLR identifies 1,079 regulatory interactions, of which 338 were in the previously known network and 741 were novel predictions. We tested the predicted interactions for three transcription factors with chromatin immunoprecipitation, confirming 21 novel interactions and verifying our RegulonDB-based performance estimates. CLR also identified a regulatory link providing central metabolic control of iron transport, which we confirmed with real-time quantitative PCR. The compendium of expression data compiled in this study, coupled with RegulonDB, provides a valuable model system for further improvement of network inference algorithms using experimental data." }, { "pmid": "20927193", "title": "Inferring regulatory networks from expression data using tree-based methods.", "abstract": "One of the pressing open problems of computational systems biology is the elucidation of the topology of genetic regulatory networks (GRNs) using high throughput genomic data, in particular microarray gene expression data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) challenge aims to evaluate the success of GRN inference algorithms on benchmarks of simulated data. In this article, we present GENIE3, a new algorithm for the inference of GRNs that was best performer in the DREAM4 In Silico Multifactorial challenge. GENIE3 decomposes the prediction of a regulatory network between p genes into p different regression problems. In each of the regression problems, the expression pattern of one of the genes (target gene) is predicted from the expression patterns of all the other genes (input genes), using tree-based ensemble methods Random Forests or Extra-Trees. 
The importance of an input gene in the prediction of the target gene expression pattern is taken as an indication of a putative regulatory link. Putative regulatory links are then aggregated over all genes to provide a ranking of interactions from which the whole network is reconstructed. In addition to performing well on the DREAM4 In Silico Multifactorial challenge simulated data, we show that GENIE3 compares favorably with existing algorithms to decipher the genetic regulatory network of Escherichia coli. It doesn't make any assumption about the nature of gene regulation, can deal with combinatorial and non-linear interactions, produces directed GRNs, and is fast and scalable. In conclusion, we propose a new algorithm for GRN inference that performs well on both synthetic and real gene expression data. The algorithm, based on feature selection with tree-based ensemble methods, is simple and generic, making it adaptable to other types of genomic data and interactions." }, { "pmid": "22796662", "title": "Wisdom of crowds for robust gene network inference.", "abstract": "Reconstructing gene regulatory networks from high-throughput data is a long-standing challenge. Through the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we performed a comprehensive blind assessment of over 30 network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae and in silico microarray data. We characterize the performance, data requirements and inherent biases of different inference approaches, and we provide guidelines for algorithm application and development. We observed that no single inference method performs optimally across all data sets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse data sets. We thereby constructed high-confidence networks for E. coli and S. aureus, each comprising ~1,700 transcriptional interactions at a precision of ~50%. We experimentally tested 53 previously unobserved regulatory interactions in E. coli, of which 23 (43%) were supported. Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks." }, { "pmid": "23226586", "title": "How to infer gene networks from expression profiles, revisited.", "abstract": "Inferring the topology of a gene-regulatory network (GRN) from genome-scale time-series measurements of transcriptional change has proved useful for disentangling complex biological processes. To address the challenges associated with this inference, a number of competing approaches have previously been used, including examples from information theory, Bayesian and dynamic Bayesian networks (DBNs), and ordinary differential equation (ODE) or stochastic differential equation. The performance of these competing approaches have previously been assessed using a variety of in silico and in vivo datasets. Here, we revisit this work by assessing the performance of more recent network inference algorithms, including a novel non-parametric learning approach based upon nonlinear dynamical systems. For larger GRNs, containing hundreds of genes, these non-parametric approaches more accurately infer network structures than do traditional approaches, but at significant computational cost. 
For smaller systems, DBNs are competitive with the non-parametric approaches with respect to computational time and accuracy, and both of these approaches appear to be more accurate than Granger causality-based methods and those using simple ODEs models." }, { "pmid": "24176667", "title": "Autoregressive models for gene regulatory network inference: sparsity, stability and causality issues.", "abstract": "Reconstructing gene regulatory networks from high-throughput measurements represents a key problem in functional genomics. It also represents a canonical learning problem and thus has attracted a lot of attention in both the informatics and the statistical learning literature. Numerous approaches have been proposed, ranging from simple clustering to rather involved dynamic Bayesian network modeling, as well as hybrid ones that combine a number of modeling steps, such as employing ordinary differential equations coupled with genome annotation. These approaches are tailored to the type of data being employed. Available data sources include static steady state data and time course data obtained either for wild type phenotypes or from perturbation experiments. This review focuses on the class of autoregressive models using time course data for inferring gene regulatory networks. The central themes of sparsity, stability and causality are discussed as well as the ability to integrate prior knowledge for successful use of these models for the learning task at hand." }, { "pmid": "24786523", "title": "Bridging physiological and evolutionary time-scales in a gene regulatory network.", "abstract": "Gene regulatory networks (GRNs) govern phenotypic adaptations and reflect the trade-offs between physiological responses and evolutionary adaptation that act at different time-scales. To identify patterns of molecular function and genetic diversity in GRNs, we studied the drought response of the common sunflower, Helianthus annuus, and how the underlying GRN is related to its evolution. We examined the responses of 32,423 expressed sequences to drought and to abscisic acid (ABA) and selected 145 co-expressed transcripts. We characterized their regulatory relationships in nine kinetic studies based on different hormones. From this, we inferred a GRN by meta-analyses of a Gaussian graphical model and a random forest algorithm and studied the genetic differentiation among populations (FST ) at nodes. We identified two main hubs in the network that transport nitrate in guard cells. This suggests that nitrate transport is a critical aspect of the sunflower physiological response to drought. We observed that differentiation of the network genes in elite sunflower cultivars is correlated with their position and connectivity. This systems biology approach combined molecular data at different time-scales and identified important physiological processes. At the evolutionary level, we propose that network topology could influence responses to human selection and possibly adaptation to dry environments." }, { "pmid": "16686963", "title": "The Inferelator: an algorithm for learning parsimonious regulatory networks from systems-biology data sets de novo.", "abstract": "We present a method (the Inferelator) for deriving genome-wide transcriptional regulatory interactions, and apply the method to predict a large portion of the regulatory network of the archaeon Halobacterium NRC-1. 
The Inferelator uses regression and variable selection to identify transcriptional influences on genes based on the integration of genome annotation and expression data. The learned network successfully predicted Halobacterium's global expression under novel perturbations with predictive power similar to that seen over training data. Several specific regulatory predictions were experimentally tested and verified." }, { "pmid": "21049040", "title": "DREAM4: Combining genetic and dynamic information to identify biological networks and dynamical models.", "abstract": "BACKGROUND\nCurrent technologies have lead to the availability of multiple genomic data types in sufficient quantity and quality to serve as a basis for automatic global network inference. Accordingly, there are currently a large variety of network inference methods that learn regulatory networks to varying degrees of detail. These methods have different strengths and weaknesses and thus can be complementary. However, combining different methods in a mutually reinforcing manner remains a challenge.\n\n\nMETHODOLOGY\nWe investigate how three scalable methods can be combined into a useful network inference pipeline. The first is a novel t-test-based method that relies on a comprehensive steady-state knock-out dataset to rank regulatory interactions. The remaining two are previously published mutual information and ordinary differential equation based methods (tlCLR and Inferelator 1.0, respectively) that use both time-series and steady-state data to rank regulatory interactions; the latter has the added advantage of also inferring dynamic models of gene regulation which can be used to predict the system's response to new perturbations.\n\n\nCONCLUSION/SIGNIFICANCE\nOur t-test based method proved powerful at ranking regulatory interactions, tying for first out of methods in the DREAM4 100-gene in-silico network inference challenge. We demonstrate complementarity between this method and the two methods that take advantage of time-series data by combining the three into a pipeline whose ability to rank regulatory interactions is markedly improved compared to either method alone. Moreover, the pipeline is able to accurately predict the response of the system to new conditions (in this case new double knock-out genetic perturbations). Our evaluation of the performance of multiple methods for network inference suggests avenues for future methods development and provides simple considerations for genomic experimental design. Our code is publicly available at http://err.bio.nyu.edu/inferelator/." }, { "pmid": "20186320", "title": "Towards a rigorous assessment of systems biology models: the DREAM3 challenges.", "abstract": "BACKGROUND\nSystems biology has embraced computational modeling in response to the quantitative nature and increasing scale of contemporary data sets. The onslaught of data is accelerating as molecular profiling technology evolves. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) is a community effort to catalyze discussion about the design, application, and assessment of systems biology models through annual reverse-engineering challenges.\n\n\nMETHODOLOGY AND PRINCIPAL FINDINGS\nWe describe our assessments of the four challenges associated with the third DREAM conference which came to be known as the DREAM3 challenges: signaling cascade identification, signaling response prediction, gene expression prediction, and the DREAM3 in silico network challenge. 
The challenges, based on anonymized data sets, tested participants in network inference and prediction of measurements. Forty teams submitted 413 predicted networks and measurement test sets. Overall, a handful of best-performer teams were identified, while a majority of teams made predictions that were equivalent to random. Counterintuitively, combining the predictions of multiple teams (including the weaker teams) can in some cases improve predictive power beyond that of any single method.\n\n\nCONCLUSIONS\nDREAM provides valuable feedback to practitioners of systems biology modeling. Lessons learned from the predictions of the community provide much-needed context for interpreting claims of efficacy of algorithms described in the scientific literature." }, { "pmid": "24400020", "title": "Experimental assessment of static and dynamic algorithms for gene regulation inference from time series expression data.", "abstract": "Accurate inference of causal gene regulatory networks from gene expression data is an open bioinformatics challenge. Gene interactions are dynamical processes and consequently we can expect that the effect of any regulation action occurs after a certain temporal lag. However such lag is unknown a priori and temporal aspects require specific inference algorithms. In this paper we aim to assess the impact of taking into consideration temporal aspects on the final accuracy of the inference procedure. In particular we will compare the accuracy of static algorithms, where no dynamic aspect is considered, to that of fixed lag and adaptive lag algorithms in three inference tasks from microarray expression data. Experimental results show that network inference algorithms that take dynamics into account perform consistently better than static ones, once the considered lags are properly chosen. However, no individual algorithm stands out in all three inference tasks, and the challenging nature of network inference tasks is evidenced, as a large number of the assessed algorithms does not perform better than random." }, { "pmid": "19961876", "title": "A MATLAB toolbox for Granger causal connectivity analysis.", "abstract": "Assessing directed functional connectivity from time series data is a key challenge in neuroscience. One approach to this problem leverages a combination of Granger causality analysis and network theory. This article describes a freely available MATLAB toolbox--'Granger causal connectivity analysis' (GCCA)--which provides a core set of methods for performing this analysis on a variety of neuroscience data types including neuroelectric, neuromagnetic, functional MRI, and other neural signals. The toolbox includes core functions for Granger causality analysis of multivariate steady-state and event-related data, functions to preprocess data, assess statistical significance and validate results, and to compute and display network-level indices of causal connectivity including 'causal density' and 'causal flow'. The toolbox is deliberately small, enabling its easy assimilation into the repertoire of researchers. It is however readily extensible given proficiency with the MATLAB language." }, { "pmid": "20949005", "title": "From knockouts to networks: establishing direct cause-effect relationships through graph analysis.", "abstract": "BACKGROUND\nReverse-engineering gene networks from expression profiles is a difficult problem for which a multitude of techniques have been developed over the last decade. 
The yearly organized DREAM challenges allow for a fair evaluation and unbiased comparison of these methods.\n\n\nRESULTS\nWe propose an inference algorithm that combines confidence matrices, computed as the standard scores from single-gene knockout data, with the down-ranking of feed-forward edges. Substantial improvements on the predictions can be obtained after the execution of this second step.\n\n\nCONCLUSIONS\nOur algorithm was awarded the best overall performance at the DREAM4 In Silico 100-gene network sub-challenge, proving to be effective in inferring medium-size gene regulatory networks. This success demonstrates once again the decisive importance of gene expression data obtained after systematic gene perturbations and highlights the usefulness of graph analysis to increase the reliability of inference." }, { "pmid": "24529382", "title": "Global analysis of mRNA isoform half-lives reveals stabilizing and destabilizing elements in yeast.", "abstract": "We measured half-lives of 21,248 mRNA 3' isoforms in yeast by rapidly depleting RNA polymerase II from the nucleus and performing direct RNA sequencing throughout the decay process. Interestingly, half-lives of mRNA isoforms from the same gene, including nearly identical isoforms, often vary widely. Based on clusters of isoforms with different half-lives, we identify hundreds of sequences conferring stabilization or destabilization upon mRNAs terminating downstream. One class of stabilizing element is a polyU sequence that can interact with poly(A) tails, inhibit the association of poly(A)-binding protein, and confer increased stability upon introduction into ectopic transcripts. More generally, destabilizing and stabilizing elements are linked to the propensity of the poly(A) tail to engage in double-stranded structures. Isoforms engineered to fold into 3' stem-loop structures not involving the poly(A) tail exhibit even longer half-lives. We suggest that double-stranded structures at 3' ends are a major determinant of mRNA stability." }, { "pmid": "17224916", "title": "Identification of tightly regulated groups of genes during Drosophila melanogaster embryogenesis.", "abstract": "Time-series analysis of whole-genome expression data during Drosophila melanogaster development indicates that up to 86% of its genes change their relative transcript level during embryogenesis. By applying conservative filtering criteria and requiring 'sharp' transcript changes, we identified 1534 maternal genes, 792 transient zygotic genes, and 1053 genes whose transcript levels increase during embryogenesis. Each of these three categories is dominated by groups of genes where all transcript levels increase and/or decrease at similar times, suggesting a common mode of regulation. For example, 34% of the transiently expressed genes fall into three groups, with increased transcript levels between 2.5-12, 11-20, and 15-20 h of development, respectively. We highlight common and distinctive functional features of these expression groups and identify a coupling between downregulation of transcript levels and targeted protein degradation. By mapping the groups to the protein network, we also predict and experimentally confirm new functional associations." 
}, { "pmid": "21036869", "title": "DroID 2011: a comprehensive, integrated resource for protein, transcription factor, RNA and gene interactions for Drosophila.", "abstract": "DroID (http://droidb.org/), the Drosophila Interactions Database, is a comprehensive public resource for Drosophila gene and protein interactions. DroID contains genetic interactions and experimentally detected protein-protein interactions curated from the literature and from external databases, and predicted protein interactions based on experiments in other species. Protein interactions are annotated with experimental details and periodically updated confidence scores. Data in DroID is accessible through user-friendly, intuitive interfaces that allow simple or advanced searches and graphical visualization of interaction networks. DroID has been expanded to include interaction types that enable more complete analyses of the genetic networks that underlie biological processes. In addition to protein-protein and genetic interactions, the database now includes transcription factor-gene and regulatory RNA-gene interactions. In addition, DroID now has more gene expression data that can be used to search and filter interaction networks. Orthologous gene mappings of Drosophila genes to other organisms are also available to facilitate finding interactions based on gene names and identifiers for a number of common model organisms and humans. Improvements have been made to the web and graphical interfaces to help biologists gain a comprehensive view of the interaction networks relevant to the genes and systems that they study." }, { "pmid": "25896902", "title": "Dynamic regulation of mRNA decay during neural development.", "abstract": "BACKGROUND\nGene expression patterns are determined by rates of mRNA transcription and decay. While transcription is known to regulate many developmental processes, the role of mRNA decay is less extensively defined. A critical step toward defining the role of mRNA decay in neural development is to measure genome-wide mRNA decay rates in neural tissue. Such information should reveal the degree to which mRNA decay contributes to differential gene expression and provide a foundation for identifying regulatory mechanisms that affect neural mRNA decay.\n\n\nRESULTS\nWe developed a technique that allows genome-wide mRNA decay measurements in intact Drosophila embryos, across all tissues and specifically in the nervous system. Our approach revealed neural-specific decay kinetics, including stabilization of transcripts encoding regulators of axonogenesis and destabilization of transcripts encoding ribosomal proteins and histones. We also identified correlations between mRNA stability and physiologic properties of mRNAs; mRNAs that are predicted to be translated within axon growth cones or dendrites have long half-lives while mRNAs encoding transcription factors that regulate neurogenesis have short half-lives. A search for candidate cis-regulatory elements identified enrichment of the Pumilio recognition element (PRE) in mRNAs encoding regulators of neurogenesis. We found that decreased expression of the RNA-binding protein Pumilio stabilized predicted neural mRNA targets and that a PRE is necessary to trigger reporter-transcript decay in the nervous system.\n\n\nCONCLUSIONS\nWe found that differential mRNA decay contributes to the relative abundance of transcripts involved in cell-fate decisions, axonogenesis, and other critical events during Drosophila neural development. 
Neural-specific decay kinetics and the functional specificity of mRNA decay suggest the existence of a dynamic neurodevelopmental mRNA decay network. We found that Pumilio is one component of this network, revealing a novel function for this RNA-binding protein." }, { "pmid": "20461071", "title": "Metabolomic and transcriptomic stress response of Escherichia coli.", "abstract": "Environmental fluctuations lead to a rapid adjustment of the physiology of Escherichia coli, necessitating changes on every level of the underlying cellular and molecular network. Thus far, the majority of global analyses of E. coli stress responses have been limited to just one level, gene expression. Here, we incorporate the metabolite composition together with gene expression data to provide a more comprehensive insight on system level stress adjustments by describing detailed time-resolved E. coli response to five different perturbations (cold, heat, oxidative stress, lactose diauxie, and stationary phase). The metabolite response is more specific as compared with the general response observed on the transcript level and is reflected by much higher specificity during the early stress adaptation phase and when comparing the stationary phase response to other perturbations. Despite these differences, the response on both levels still follows the same dynamics and general strategy of energy conservation as reflected by rapid decrease of central carbon metabolism intermediates coinciding with downregulation of genes related to cell growth. Application of co-clustering and canonical correlation analysis on combined metabolite and transcript data identified a number of significant condition-dependent associations between metabolites and transcripts. The results confirm and extend existing models about co-regulation between gene expression and metabolites demonstrating the power of integrated systems oriented analysis." }, { "pmid": "23203884", "title": "RegulonDB v8.0: omics data sets, evolutionary conservation, regulatory phrases, cross-validated gold standards and more.", "abstract": "This article summarizes our progress with RegulonDB (http://regulondb.ccg.unam.mx/) during the past 2 years. We have kept up-to-date the knowledge from the published literature regarding transcriptional regulation in Escherichia coli K-12. We have maintained and expanded our curation efforts to improve the breadth and quality of the encoded experimental knowledge, and we have implemented criteria for the quality of our computational predictions. Regulatory phrases now provide high-level descriptions of regulatory regions. We expanded the assignment of quality to various sources of evidence, particularly for knowledge generated through high-throughput (HT) technology. Based on our analysis of most relevant methods, we defined rules for determining the quality of evidence when multiple independent sources support an entry. With this latest release of RegulonDB, we present a new highly reliable larger collection of transcription start sites, a result of our experimental HT genome-wide efforts. These improvements, together with several novel enhancements (the tracks display, uploading format and curational guidelines), address the challenges of incorporating HT-generated knowledge into RegulonDB. Information on the evolutionary conservation of regulatory elements is also available now. Altogether, RegulonDB version 8.0 is a much better home for integrating knowledge on gene regulation from the sources of information currently available." 
}, { "pmid": "24243845", "title": "Dual role of transcription and transcript stability in the regulation of gene expression in Escherichia coli cells cultured on glucose at different growth rates.", "abstract": "Microorganisms extensively reorganize gene expression to adjust growth rate to changes in growth conditions. At the genomic scale, we measured the contribution of both transcription and transcript stability to regulating messenger RNA (mRNA) concentration in Escherichia coli. Transcriptional control was the dominant regulatory process. Between growth rates of 0.10 and 0.63 h(-1), there was a generic increase in the bulk mRNA transcription. However, many transcripts became less stable and the median mRNA half-life decreased from 4.2 to 2.8 min. This is the first evidence that mRNA turnover is slower at extremely low-growth rates. The destabilization of many, but not all, transcripts at high-growth rate correlated with transcriptional upregulation of genes encoding the mRNA degradation machinery. We identified five classes of growth-rate regulation ranging from mainly transcriptional to mainly degradational. In general, differential stability within polycistronic messages encoded by operons does not appear to be affected by growth rate. We show here that the substantial reorganization of gene expression involving downregulation of tricarboxylic acid cycle genes and acetyl-CoA synthetase at high-growth rates is controlled mainly by transcript stability. Overall, our results demonstrate that the control of transcript stability has an important role in fine-tuning mRNA concentration during changes in growth rate." }, { "pmid": "27682842", "title": "Computational methods for trajectory inference from single-cell transcriptomics.", "abstract": "Recent developments in single-cell transcriptomics have opened new opportunities for studying dynamic processes in immunology in a high throughput and unbiased manner. Starting from a mixture of cells in different stages of a developmental process, unsupervised trajectory inference algorithms aim to automatically reconstruct the underlying developmental path that cells are following. In this review, we break down the strategies used by this novel class of methods, and organize their components into a common framework, highlighting several practical advantages and disadvantages of the individual methods. We also give an overview of new insights these methods have already provided regarding the wiring and gene regulation of cell differentiation. As the trajectory inference field is still in its infancy, we propose several future developments that will ultimately lead to a global and data-driven way of studying immune cell differentiation." }, { "pmid": "11911796", "title": "Modeling and simulation of genetic regulatory systems: a literature review.", "abstract": "In order to understand the functioning of organisms on the molecular level, we need to know which genes are expressed, when and where in the organism, and to which extent. The regulation of gene expression is achieved through genetic regulatory systems structured by networks of interactions between DNA, RNA, proteins, and small molecules. As most genetic regulatory networks of interest involve many components connected through interlocking positive and negative feedback loops, an intuitive understanding of their dynamics is hard to obtain. As a consequence, formal methods and computer tools for the modeling and simulation of genetic regulatory networks will be indispensable. 
This paper reviews formalisms that have been employed in mathematical biology and bioinformatics to describe genetic regulatory systems, in particular directed graphs, Bayesian networks, Boolean networks and their generalizations, ordinary and partial differential equations, qualitative differential equations, stochastic equations, and rule-based formalisms. In addition, the paper discusses how these formalisms have been used in the simulation of the behavior of actual regulatory systems." } ]
Frontiers in Neurorobotics
29515386
PMC5825909
10.3389/fnbot.2018.00004
Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors
In order to safely navigate and orient in their local surroundings autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling those tasks for traditional frame-based cameras, these have to deal with the fact that conventional cameras sample their environment with a fixed frequency. Most prominently, the same features have to be found in consecutive frames and corresponding features then need to be matched using elaborate techniques as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasicontinuous stream of vision information analogous to the information stream created by the ganglion cells in mammal retinae. All pixels of DVS operate asynchronously without a periodic sampling rate and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment which is observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges which in man-made environments often form lines they were chosen as feature to track. Our method is based on detecting planes of DVS address events in x-y-t-space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer, hence it is suitable for low latency robotics. The efficacy and performance are evaluated on real-world data sets which show artificial structures in an office-building using event data for tracking and frame data for ground-truth estimation from a DAVIS240C sensor.
1.3 Related Work

There is a variety of algorithms to extract lines from frames, most notably the Hough transform (Duda and Hart, 1972; Matas et al., 2000). In Grompone von Gioi et al. (2012), a line segment detector (called LSD) is proposed that works stably without parameter tuning (see also Section 3 for comparisons). In Section 3, we compare the results of these algorithms with our method. Different methods that use line segments for interframe tracking are described in Neubert et al. (2008), Hirose and Saito (2012), and Zhang and Koch (2013).

In recent years, several trackers for different shapes have been developed for event-based systems. An early example of this can be found in Litzenberger et al. (2006). Based on this, Delbruck and Lang (2013) show how to construct a robotic goalie with a fast reaction time of only 3 ms. Conradt et al. (2009) focus explicitly on detecting lines from events and describe a pencil balancer; estimates of the pencil position are performed in Hough space.

In a more recent work, Brändli et al. (2016) describe a line segment detector that detects multiple lines in arbitrary scenes. They use Sobel operators to find the local orientation of events and cluster events with similar angles to form line segments. Events are stored in a circular buffer of fixed size, so that old events are overwritten when new ones arrive, and the position and orientation of lines are updated through this process; however, their focus is not on tracking (see Section 3 for a comparison with the method proposed here).

There are also increasing efforts to track other basic geometric shapes in event-based systems: corners have been a focus of multiple works, as they generate distinct features that do not suffer from the aperture problem, can be tracked quickly, and are useful in robotic navigation. Clady et al. (2015) use a corner matching algorithm based on a combination of geometric constraints to detect events caused by corners and reduce the event stream to a corner event stream. Vasco et al. (2016) transfer the well-known Harris corner detector (Harris and Stephens, 1988) to the event domain, while Mueggler et al. (2017) present a rapid corner detection method inspired by FAST (Rosten and Drummond, 2006), which is capable of processing more than one million events per second.

Lagorce et al. (2015) introduce a method to track visual features using different kernels such as Gaussians, Gabors, or other hand-designed kernels. Tedaldi et al. (2016) use a hybrid approach combining frames and the event stream. It does not require features to be specified beforehand but extracts them using the grayscale frames. The extracted features are subsequently tracked asynchronously using the stream of events. This permits smooth tracking through time between two frames.
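Since the approach summarized in the abstract represents a moving line as a plane of DVS events in x-y-t space, a small illustration of that core idea is sketched below. This is only a didactic least-squares plane fit on synthetic events using NumPy; the event layout (an N × 3 array of x, y, t) and all numbers are assumptions, and the code is not the authors' implementation, which additionally detects and traces such planes over time.

```python
import numpy as np

def fit_event_plane(events):
    """Least-squares fit of a plane t = a*x + b*y + c to DVS events.

    `events` is an (N, 3) array of (x, y, t) tuples. Events generated by a
    moving straight edge lie approximately on a plane in x-y-t space; the
    coefficients (a, b) encode the edge's apparent motion, and intersecting
    the plane with a constant-t slice recovers the line's current position.
    """
    events = np.asarray(events, dtype=float)
    A = np.column_stack([events[:, 0], events[:, 1], np.ones(len(events))])
    coeffs, *_ = np.linalg.lstsq(A, events[:, 2], rcond=None)
    rms = np.sqrt(np.mean((events[:, 2] - A @ coeffs) ** 2))
    return coeffs, rms  # (a, b, c) and a crude goodness-of-fit cue

# Synthetic events from an edge sweeping over a 240x180 sensor.
rng = np.random.default_rng(0)
xy = np.column_stack([rng.uniform(0, 240, 500), rng.uniform(0, 180, 500)])
t = 0.02 * xy[:, 0] + 0.01 * xy[:, 1] + 5.0 + rng.normal(0, 1e-3, 500)
coeffs, rms = fit_event_plane(np.column_stack([xy, t]))
print(coeffs, rms)  # close to (0.02, 0.01, 5.0) with a small residual
```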
[ "28704206", "26120965", "21869365", "25828960", "24311999", "25248193" ]
[ { "pmid": "28704206", "title": "An autonomous robot inspired by insect neurophysiology pursues moving features in natural environments.", "abstract": "OBJECTIVE\nMany computer vision and robotic applications require the implementation of robust and efficient target-tracking algorithms on a moving platform. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. Lightweight and low-powered flying insects, such as dragonflies, track prey or conspecifics within cluttered natural environments, illustrating an efficient biological solution to the target-tracking problem.\n\n\nAPPROACH\nWe used our recent recordings from 'small target motion detector' neurons in the dragonfly brain to inspire the development of a closed-loop target detection and tracking algorithm. This model exploits facilitation, a slow build-up of response to targets which move along long, continuous trajectories, as seen in our electrophysiological data. To test performance in real-world conditions, we implemented this model on a robotic platform that uses active pursuit strategies based on insect behaviour.\n\n\nMAIN RESULTS\nOur robot performs robustly in closed-loop pursuit of targets, despite a range of challenging conditions used in our experiments; low contrast targets, heavily cluttered environments and the presence of distracters. We show that the facilitation stage boosts responses to targets moving along continuous trajectories, improving contrast sensitivity and detection of small moving targets against textured backgrounds. Moreover, the temporal properties of facilitation play a useful role in handling vibration of the robotic platform. We also show that the adoption of feed-forward models which predict the sensory consequences of self-movement can significantly improve target detection during saccadic movements.\n\n\nSIGNIFICANCE\nOur results provide insight into the neuronal mechanisms that underlie biological target detection and selection (from a moving platform), as well as highlight the effectiveness of our bio-inspired algorithm in an artificial visual system." }, { "pmid": "26120965", "title": "Common circuit design in fly and mammalian motion vision.", "abstract": "Motion-sensitive neurons have long been studied in both the mammalian retina and the insect optic lobe, yet striking similarities have become obvious only recently. Detailed studies at the circuit level revealed that, in both systems, (i) motion information is extracted from primary visual information in parallel ON and OFF pathways; (ii) in each pathway, the process of elementary motion detection involves the correlation of signals with different temporal dynamics; and (iii) primary motion information from both pathways converges at the next synapse, resulting in four groups of ON-OFF neurons, selective for the four cardinal directions. Given that the last common ancestor of insects and mammals lived about 550 million years ago, this general strategy seems to be a robust solution for how to compute the direction of visual motion with neural hardware." }, { "pmid": "21869365", "title": "A computational approach to edge detection.", "abstract": "This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. 
We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge." }, { "pmid": "25828960", "title": "Asynchronous event-based corner detection and matching.", "abstract": "This paper introduces an event-based luminance-free method to detect and match corner events from the output of asynchronous event-based neuromorphic retinas. The method relies on the use of space-time properties of moving edges. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each of them asynchronously generating \"spiking\" events that encode relative changes in pixels' illumination at high temporal resolutions. Corner events are defined as the spatiotemporal locations where the aperture problem can be solved using the intersection of several geometric constraints in events' spatiotemporal spaces. A regularization process provides the required constraints, i.e. the motion attributes of the edges with respect to their spatiotemporal locations using local geometric properties of visual events. Experimental results are presented on several real scenes showing the stability and robustness of the detection and matching." }, { "pmid": "24311999", "title": "Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor.", "abstract": "Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bound by the frame period, e.g., 20 ms for 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per pixel illumination change events from the DVS signify moving objects and are used in software to track multiple balls. Motor actions to block the most \"threatening\" ball are based on measured ball positions and velocities. The goalie also sees its single-axis goalie arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations. Blocking capability is about 80% for balls shot from 1 m from the goal even with the fastest-shots, and approaches 100% accuracy when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time. 
Running with standard USB buses under a standard preemptive multitasking operating system (Windows), the goalie robot achieves median update rates of 550 Hz, with latencies of 2.2 ± 2 ms from ball movement to motor command at a peak CPU load of less than 4%. Practical observations and measurements of USB device latency are provided." }, { "pmid": "25248193", "title": "Asynchronous Event-Based Multikernel Algorithm for High-Speed Visual Features Tracking.", "abstract": "This paper presents a number of new methods for visual tracking using the output of an event-based asynchronous neuromorphic dynamic vision sensor. It allows the tracking of multiple visual features in real time, achieving an update rate of several hundred kilohertz on a standard desktop PC. The approach has been specially adapted to take advantage of the event-driven properties of these sensors by combining both spatial and temporal correlations of events in an asynchronous iterative framework. Various kernels, such as Gaussian, Gabor, combinations of Gabor functions, and arbitrary user-defined kernels, are used to track features from incoming events. The trackers described in this paper are capable of handling variations in position, scale, and orientation through the use of multiple pools of trackers. This approach avoids the N(2) operations per event associated with conventional kernel-based convolution operations with N × N kernels. The tracking performance was evaluated experimentally for each type of kernel in order to demonstrate the robustness of the proposed solution." } ]
PLoS Computational Biology
29447153
PMC5831643
10.1371/journal.pcbi.1005935
A model of risk and mental state shifts during social interaction
Cooperation and competition between human players in repeated microeconomic games offer a window onto social phenomena such as the establishment, breakdown and repair of trust. However, although a suitable starting point for the quantitative analysis of such games exists, namely the Interactive Partially Observable Markov Decision Process (I-POMDP), computational considerations and structural limitations have limited its application, and left unmodelled critical features of behavior in a canonical trust task. Here, we provide the first analysis of two central phenomena: a form of social risk-aversion exhibited by the player who is in control of the interaction in the game; and irritation or anger, potentially exhibited by both players. Irritation arises when partners apparently defect, and it potentially causes a precipitate breakdown in cooperation. Failing to model one’s partner’s propensity for it leads to substantial economic inefficiency. We illustrate these behaviours using evidence drawn from the play of large cohorts of healthy volunteers and patients. We show that for both cohorts, a particular subtype of player is largely responsible for the breakdown of trust, a finding which sheds new light on borderline personality disorder.
Earlier and related work

Trust games of various kinds have been used in behavioural economics and psychology research (see [34]). In particular, the MRT we used was based on variants in several earlier studies (see examples in [17, 35, 36]).

The current MRT was first modeled using regression models (see [16]) of various depths: one-step models for the increase/decrease of the amount sent to the partner, and models which track the effects of more distant investments/repayments. These models generated signals of increases and decreases in investments and returns that were correlated with fMRI data. One seminal study on the effect of BPD in the trust game by King-Casas et al. (see [6]) included the concept of “coaxing” the partner back into cooperation/trust (repaying substantially more than the fair split) whenever trust was running low, as signified by small investments.

Furthermore, an earlier study (see [37]) used clustering to associate trust-game investment and repayment levels with various clinical populations.

An I-POMDP generative model for the trust task which included inequality aversion, inference, and theory-of-mind level was previously proposed [8]. This model was later refined rather substantially to include faster calculation and planning as a parameter [10].

The I-POMDP framework itself has been used in a considerable number of studies. Notable among these are investigations of the depth of tactical reasoning directly in competitive games (see [38–40]). It has also been used for deriving optimal strategies in repeated games (see [41]). The benefits of a variant of the framework for fitting human behavioural data were recently exhibited in [42].
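To make the one-step regression idea above concrete, the following sketch regresses the round-to-round change in investment on the partner's previous repayment fraction. The data are entirely synthetic and the generative coefficient is made up; only the modelling step itself is meant to illustrate the kind of regression model cited in [16], not the authors' actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dyads, n_rounds = 40, 10
repay = rng.uniform(0.0, 1.0, size=(n_dyads, n_rounds))    # repayment fraction per round
invest = np.empty((n_dyads, n_rounds))
invest[:, 0] = 10.0
for t in range(1, n_rounds):
    # Synthetic generative rule: generous repayment tends to raise the next investment.
    invest[:, t] = np.clip(invest[:, t - 1] + 6.0 * (repay[:, t - 1] - 0.5)
                           + rng.normal(0.0, 1.0, n_dyads), 0.0, 20.0)

delta = (invest[:, 1:] - invest[:, :-1]).ravel()            # change in amount sent
prev_repay = repay[:, :-1].ravel()                          # previous repayment fraction
X = np.column_stack([prev_repay, np.ones_like(prev_repay)])
slope, intercept = np.linalg.lstsq(X, delta, rcond=None)[0]
print(f"effect of previous repayment on investment change: {slope:.2f}")  # positive here
```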
[ "18255038", "23300423", "26053429", "16470710", "15802598", "11562505", "16588946", "23325247", "20510862", "21556131", "15631562", "20975934" ]
[ { "pmid": "18255038", "title": "Self responses along cingulate cortex reveal quantitative neural phenotype for high-functioning autism.", "abstract": "Attributing behavioral outcomes correctly to oneself or to other agents is essential for all productive social exchange. We approach this issue in high-functioning males with autism spectrum disorder (ASD) using two separate fMRI paradigms. First, using a visual imagery task, we extract a basis set for responses along the cingulate cortex of control subjects that reveals an agent-specific eigenvector (self eigenmode) associated with imagining oneself executing a specific motor act. Second, we show that the same self eigenmode arises during one's own decision (the self phase) in an interpersonal exchange game (iterated trust game). Third, using this exchange game, we show that ASD males exhibit a severely diminished cingulate self response when playing the game with a human partner. This diminishment covaries parametrically with their behaviorally assessed symptom severity, suggesting its value as an objective endophenotype. These findings may provide a quantitative assessment tool for high-functioning ASD." }, { "pmid": "23300423", "title": "Computational phenotyping of two-person interactions reveals differential neural response to depth-of-thought.", "abstract": "Reciprocating exchange with other humans requires individuals to infer the intentions of their partners. Despite the importance of this ability in healthy cognition and its impact in disease, the dimensions employed and computations involved in such inferences are not clear. We used a computational theory-of-mind model to classify styles of interaction in 195 pairs of subjects playing a multi-round economic exchange game. This classification produces an estimate of a subject's depth-of-thought in the game (low, medium, high), a parameter that governs the richness of the models they build of their partner. Subjects in each category showed distinct neural correlates of learning signals associated with different depths-of-thought. The model also detected differences in depth-of-thought between two groups of healthy subjects: one playing patients with psychiatric disease and the other playing healthy controls. The neural response categories identified by this computational characterization of theory-of-mind may yield objective biomarkers useful in the identification and characterization of pathologies that perturb the capacity to model and interact with other humans." }, { "pmid": "26053429", "title": "Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange.", "abstract": "Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent's preference for equity with their partner, beliefs about the partner's appetite for equity, beliefs about the partner's model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. 
Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference." }, { "pmid": "16470710", "title": "Mechanisms of change in mentalization-based treatment of BPD.", "abstract": "There are very few less contentious issues than the role of attachment in psychotherapy. Concepts such as the therapeutic alliance speak directly to the importance of activating the attachment system, normally in relation to the therapist in individual therapy and in relation to other family members in family-based intervention, if therapeutic progress is to be made. In group therapy the attachment process may be activated by group membership. The past decade of neuroscientific research has helped us to understand some key processes that attachment entails at brain level. The article outlines this progress and links it to recent findings on the relationship between the neural systems underpinning attachment and other processes such as making of social judgments, theory of mind, and access to long-term memory. These findings allow intriguing speculations, which are currently undergoing empirical tests on the neural basis of individual differences in attachment as well as the nature of psychological disturbances associated with profound disturbances of the attachment system. In this article, we explore the crucial paradoxical brain state created by psychotherapy with powerful clinical implications for the maximization of therapeutic benefit from the talking cure." }, { "pmid": "15802598", "title": "Getting to know you: reputation and trust in a two-person economic exchange.", "abstract": "Using a multiround version of an economic exchange (trust game), we report that reciprocity expressed by one player strongly predicts future trust expressed by their partner-a behavioral finding mirrored by neural responses in the dorsal striatum. Here, analyses within and between brains revealed two signals-one encoded by response magnitude, and the other by response timing. Response magnitude correlated with the \"intention to trust\" on the next play of the game, and the peak of these \"intention to trust\" responses shifted its time of occurrence by 14 seconds as player reputations developed. This temporal transfer resembles a similar shift of reward prediction errors common to reinforcement learning models, but in the context of a social exchange. These data extend previous model-based functional magnetic resonance imaging studies into the social domain and broaden our view of the spectrum of functions implemented by the dorsal striatum." }, { "pmid": "11562505", "title": "A functional imaging study of cooperation in two-person reciprocal exchange.", "abstract": "Cooperation between individuals requires the ability to infer each other's mental states to form shared expectations over mutual gains and make cooperative choices that realize these gains. From evidence that the ability for mental state attribution involves the use of prefrontal cortex, we hypothesize that this area is involved in integrating theory-of-mind processing with cooperative actions. We report data from a functional MRI experiment designed to test this hypothesis. 
Subjects in a scanner played standard two-person \"trust and reciprocity\" games with both human and computer counterparts for cash rewards. Behavioral data shows that seven subjects consistently attempted cooperation with their human counterpart. Within this group prefrontal regions are more active when subjects are playing a human than when they are playing a computer following a fixed (and known) probabilistic strategy. Within the group of five noncooperators, there are no significant differences in prefrontal activation between computer and human conditions." }, { "pmid": "23325247", "title": "Computational substrates of norms and their violations during social exchange.", "abstract": "Social norms in humans constrain individual behaviors to establish shared expectations within a social group. Previous work has probed social norm violations and the feelings that such violations engender; however, a computational rendering of the underlying neural and emotional responses has been lacking. We probed norm violations using a two-party, repeated fairness game (ultimatum game) where proposers offer a split of a monetary resource to a responder who either accepts or rejects the offer. Using a norm-training paradigm where subject groups are preadapted to either high or low offers, we demonstrate that unpredictable shifts in expected offers creates a difference in rejection rates exhibited by the two responder groups for otherwise identical offers. We constructed an ideal observer model that identified neural correlates of norm prediction errors in the ventral striatum and anterior insula, regions that also showed strong responses to variance-prediction errors generated by the same model. Subjective feelings about offers correlated with these norm prediction errors, and the two signals displayed overlapping, but not identical, neural correlates in striatum, insula, and medial orbitofrontal cortex. These results provide evidence for the hypothesis that responses in anterior insula can encode information about social norm violations that correlate with changes in overt behavior (changes in rejection rates). Together, these results demonstrate that the brain regions involved in reward prediction and risk prediction are also recruited in signaling social norm violations." }, { "pmid": "20510862", "title": "States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning.", "abstract": "Reinforcement learning (RL) uses sequential experience with situations (\"states\") and outcomes to assess actions. Whereas model-free RL uses this experience directly, in the form of a reward prediction error (RPE), model-based RL uses it indirectly, building a model of the state transition and outcome structure of the environment, and evaluating actions by searching this model. A state prediction error (SPE) plays a central role, reporting discrepancies between the current model and the observed state transitions. Using functional magnetic resonance imaging in humans solving a probabilistic Markov decision task, we found the neural signature of an SPE in the intraparietal sulcus and lateral prefrontal cortex, in addition to the previously well-characterized RPE in the ventral striatum. This finding supports the existence of two unique forms of learning signal in humans, which may form the basis of distinct computational strategies for guiding behavior." 
}, { "pmid": "21556131", "title": "Disentangling the roles of approach, activation and valence in instrumental and pavlovian responding.", "abstract": "Hard-wired, Pavlovian, responses elicited by predictions of rewards and punishments exert significant benevolent and malevolent influences over instrumentally-appropriate actions. These influences come in two main groups, defined along anatomical, pharmacological, behavioural and functional lines. Investigations of the influences have so far concentrated on the groups as a whole; here we take the critical step of looking inside each group, using a detailed reinforcement learning model to distinguish effects to do with value, specific actions, and general activation or inhibition. We show a high degree of sophistication in Pavlovian influences, with appetitive Pavlovian stimuli specifically promoting approach and inhibiting withdrawal, and aversive Pavlovian stimuli promoting withdrawal and inhibiting approach. These influences account for differences in the instrumental performance of approach and withdrawal behaviours. Finally, although losses are as informative as gains, we find that subjects neglect losses in their instrumental learning. Our findings argue for a view of the Pavlovian system as a constraint or prior, facilitating learning by alleviating computational costs that come with increased flexibility." }, { "pmid": "15631562", "title": "Interpersonal and print nutrition communication for a Spanish-dominant Latino population: Secretos de la Buena Vida.", "abstract": "Participants (N=357) were randomly assigned to 1 of 3 conditions: lay health advisor (promotora) plus tailored print materials, tailored print materials only (tailored), or off-the-shelf print materials (control). The primary outcomes were calories from fat and daily grams of fiber. Secondary outcomes included total energy intake, total and saturated fat intake, and total carbohydrates. Adjusted for baseline values, calories from fat were 29%, 30%, and 30% for the promotora, tailored, and control conditions, respectively, and grams of fiber consumed were 16 g, 17 g, and 16 g. Significant Condition X Time interactions were not observed between baseline and 12-weeks postintervention. The LHA condition achieved significantly lower levels of energy intake, total fat and saturated fat, and total carbohydrates. The relative superiority of the promotora condition may derive from the personal touch achieved in the face-to-face interactions or from the women's use of print materials under the promotora's guidance." }, { "pmid": "20975934", "title": "Biosensor approach to psychopathology classification.", "abstract": "We used a multi-round, two-party exchange game in which a healthy subject played a subject diagnosed with a DSM-IV (Diagnostic and Statistics Manual-IV) disorder, and applied a Bayesian clustering approach to the behavior exhibited by the healthy subject. The goal was to characterize quantitatively the style of play elicited in the healthy subject (the proposer) by their DSM-diagnosed partner (the responder). The approach exploits the dynamics of the behavior elicited in the healthy proposer as a biosensor for cognitive features that characterize the psychopathology group at the other side of the interaction. 
Using a large cohort of subjects (n = 574), we found statistically significant clustering of proposers' behavior overlapping with a range of DSM-IV disorders including autism spectrum disorder, borderline personality disorder, attention deficit hyperactivity disorder, and major depressive disorder. To further validate these results, we developed a computer agent to replace the human subject in the proposer role (the biosensor) and show that it can also detect these same four DSM-defined disorders. These results suggest that the highly developed social sensitivities that humans bring to a two-party social exchange can be exploited and automated to detect important psychopathologies, using an interpersonal behavioral probe not directly related to the defining diagnostic criteria." } ]
Frontiers in Genetics
29559993
PMC5845696
10.3389/fgene.2018.00039
Griffin: A Tool for Symbolic Inference of Synchronous Boolean Molecular Networks
Boolean networks are important models of biochemical systems, located at the high end of the abstraction spectrum. A number of Boolean gene networks have been inferred following essentially the same method. Such a method first considers experimental data for a typically underdetermined “regulation” graph. Next, Boolean networks are inferred by using biological constraints to narrow the search space, such as a desired set of (fixed-point or cyclic) attractors. We describe Griffin, a computer tool enhancing this method. Griffin incorporates a number of well-established algorithms, such as Dubrova and Teslenko's algorithm for finding attractors in synchronous Boolean networks. In addition, a formal definition of regulation allows Griffin to employ “symbolic” techniques, able to represent both large sets of network states and Boolean constraints. We observe that when the set of attractors is required to be an exact set, prohibiting additional attractors, a naive Boolean coding of this constraint may be unfeasible. Such cases may be intractable even with symbolic methods, as the number of Boolean constraints may be astronomically large. To overcome this problem, we employ an Artificial Intelligence technique known as “clause learning” considerably increasing Griffin's scalability. Without clause learning only toy examples prohibiting additional attractors are solvable: only one out of seven queries reported here is answered. With clause learning, by contrast, all seven queries are answered. We illustrate Griffin with three case studies drawn from the Arabidopsis thaliana literature. Griffin is available at: http://turing.iimas.unam.mx/griffin.
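As a point of reference for the attractor constraints discussed in the abstract above, below is a minimal brute-force enumeration of the attractors of a small synchronous Boolean network. It is a didactic sketch only: the toy network and code are assumptions, and the approach is exhaustive state-space iteration rather than Dubrova and Teslenko's algorithm or Griffin's symbolic machinery.

```python
from itertools import product

def attractors(update_fns):
    """Enumerate attractors of a synchronous Boolean network by brute force.

    `update_fns` is one callable per gene, mapping the current state (a tuple
    of 0/1 values) to that gene's next value. From every state the trajectory
    is iterated until a state repeats; the repeated segment is an attractor
    (a fixed point if its length is 1). Feasible only for small networks.
    """
    n = len(update_fns)
    step = lambda s: tuple(f(s) for f in update_fns)
    found = set()
    for state in product((0, 1), repeat=n):
        seen = {}
        while state not in seen:
            seen[state] = len(seen)
            state = step(state)
        start = seen[state]
        cycle = [s for s, i in sorted(seen.items(), key=lambda kv: kv[1]) if i >= start]
        k = cycle.index(min(cycle))                  # canonical rotation of the cycle
        found.add(tuple(cycle[k:] + cycle[:k]))
    return found

# Toy 3-gene network: x0' = x1 AND x2, x1' = NOT x0, x2' = x1.
fns = [lambda s: s[1] & s[2], lambda s: 1 - s[0], lambda s: s[1]]
print(attractors(fns))  # a single cyclic attractor for this toy network
```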
3.2. Related work

As a first comparison between our work and related articles, it is important to point out a difference in the type of input data. In this work, Griffin input is composed of (partial) information about the network topology (R-graphs) along with other data representing biological constraints. R-graphs contain information on genetic connectivity that was inferred from data obtained by direct measurement of gene expression or protein interaction. By contrast, inference methods proposed by other authors (e.g., Laubenbacher and Stigler, 2004) have an input composed of time series together with other data that capture information on the network dynamics. Time series, unlike R-graphs, represent experimental data obtained by direct measurement of gene expression or protein interaction.

There are a number of tutorials on Boolean-network inference (D'haeseleer et al., 2000; Markowetz and Spang, 2007; Karlebach and Shamir, 2008; Hecker et al., 2009; Hickman and Hodgman, 2009; Berestovsky and Nakhleh, 2013). From these tutorials we can classify algorithms according to (1) the expected input, (2) the kind of model inferred, and (3) the search strategy.

Much of the effort in Boolean-network inference has been aimed at methods taking binarized time-series data as input. Hence, multiple methods have been proposed and some of them have been compared with each other (Berestovsky and Nakhleh, 2013). An influential method in this category is reveal (Liang et al., 1998), which employs Shannon's mutual information between all pairs of molecules to extract an influence graph; the truth tables of the update functions for each molecule are simply taken from the time series. The use of mutual information and time series for the inference of Boolean networks continues to develop (Barman and Kwon, 2017), as do alternative methods based on time series. Examples are Han et al. (2014) (using a Bayesian approximation), Shmulevich et al. (2003), Lähdesmäki et al. (2003), Akutsu et al. (1999), Laubenbacher and Stigler (2004), and Layek et al. (2011) (using a generate-and-test method, generating all possible update functions for one gene and testing them against the input data). Extra information can be included in addition to time-series data. For example, an expected set of stable states (Layek et al., 2011), previously known regulations (Haider and Pal, 2012), and gene expression data (Chueh and Lu, 2012) have been used to curtail the number of possible solutions.

Griffin belongs to a second family of methods taking as input a possibly partial regulation graph (perhaps obtained from the literature). There are also approaches employing both time-series data and a regulation graph, such as Ostrowski et al. (2016).

A third important area of research is the development of algorithms taking as input temporal-logic specifications, based on Model Checking (Clarke et al., 1999). Works following this approach are Calzone et al. (2006b), Mateus et al. (2007), and Streck and Siebert (2015).

As for the kind of model inferred, here we are concerned with Boolean networks and similar formalisms. Among the nonprobabilistic approaches, we find synchronous Boolean networks, asynchronous networks based on Thomas's formalism (Bernot et al., 2004; Khalis et al., 2009; Corblin et al., 2012; Richard et al., 2012), and polynomial dynamical systems (Laubenbacher and Stigler, 2004). Typically, methods based on temporal logic infer Kripke structures, which are closely related to Boolean networks.
Probabilistic models, one the other hand, include Bayesian networks, and have the advantage of being able to deal with noise and uncertainty.From the search-strategy point of view, Boolean-network inference methods may employ a simple random-value assignment (Pal et al., 2005), exhaustive search (Akutsu et al., 1999) or more elaborate algorithms. The work of Chueh and Lu (2012) is based on p-scores and that of Liang et al. (1998) guides search with Shannon's mutual information. Genetic algorithms are used by Saez-Rodriguez et al. (2009) and Ghaffarizadeh et al. (2017). Linear programming is the basis of Tarissan et al. (2008). The methods of Ostrowski et al. (2016) and Corblin et al. (2012), are based on Answer Set Programming (Brewka et al., 2011). Algebraic methods use reductions (of polynomials over a finite field) modulo an ideal of vanishing polynomials.Approaches based on temporal logic (sometimes augmented with constraints), such as Calzone et al. (2006b) and Mateus et al. (2007), normally employ Model Checking (Clarke et al., 1999). Model Checking, in turn, is often based on symbolic approaches: BDDs and SAT solvers. Biocham (Calzone et al., 2006b) and SMBioNet (Bernot et al., 2004; Khalis et al., 2009; Richard et al., 2012) use a model checker as part of a generate-and-test method.Having classified various approaches, we now mention a work similar to ours in spirit: Pal et al. (2005). These authors also propagate fixed-point constraints onto the truth tables of the update functions of each variable (molecule species). There is, however, no search technique, other than randomly giving values to the remaining entries of such truth tables. There is a random assignment of values to such entries, and a check for unwanted attractors. Neither is there a formal definition of regulation.In contrast with the logical approach of our work, we devote our attention now to an algebraic approach to the problem of inference (reverse engineering) of Boolean networks. Instead of using a Boolean network to model the dynamics of a gene network, the algebraic approach (Laubenbacher and Stigler, 2004; Jarrah et al., 2007; Veliz-Cuba, 2012) uses a polynomial dynamical system F: Sn → Sn where S has the structure of a finite field (ℤ/p for an appropriate prime number p). The first benefit of this algebraic approach is that each component of F can be expressed by a polynomial (in n variables with coefficients in S) such that the degree of each variable is at most equal to the cardinality of the field S.Following a computational algebra approach, in a framework of modeling with polynomial dynamical systems, Laubenbacher and Stigler (2004) propose a reverse-engineering algorithm. This algorithm takes as input a time series s1,…,sm∈Sn of network states (where S is a finite field), and produces as output a polynomial dynamical system F=(F1,…,Fn):Sn→Sn such that ∀i ∈ {1, …, n}: Fi ∈ S[x1, …, xn] and Fi(sj) = sj+1,i for j ∈ {1, …, m}. An advantage of the algebraic approach of this algorithm is that there exists a well-developed theory of algorithmic polynomial algebra, with a variety of procedures already implemented, supporting the implementation task. The time complexity of this algorithm is quadratic in the number of variables (n) and exponential in the number of time points (m).Comparing Griffin with the reverse-engineering algebraic algorithms proposed by Laubenbacher and Stigler (2004) and Veliz-Cuba (2012), we found three basic differences. 
(1) Algebraic algorithms can handle discrete multi-valued variables, while Griffin only handles Boolean (two-valued) variables. Multi-valued variables give more flexibility and detail to the modeling process, but Boolean variables (of Boolean networks) lead to simpler models (see section 1).

(2) The input of the algebraic algorithms, typically time series, provides simple information coming directly from experimental measurements, while the input of Griffin, R-graphs and Griffin queries, provides structured information allowing a more precise specification of the required Boolean network.

(3) The algebraic algorithm of Veliz-Cuba (2012) uses a formal definition of regulation, but this definition does not match the one used by Griffin. While Griffin allows for R-regulations based on Boolean combinations of positive and negative regulations, Veliz-Cuba (2012) restricts regulations to unate functions h such that, for every variable x, either h does not depend on x, h depends positively on x, or h depends negatively on x (a minimal check of this property is sketched below).

Finally, we observe that results have sometimes been reported over-optimistically. Doubts have been cast upon the effectiveness of a number of methods for inferring network dynamics (Wimburly et al., 2003), especially those based on more general-purpose learning methods. It is therefore important to establish tests such as the DREAM challenges (Stolovitzky et al., 2007), which emphasize reproducibility.
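To illustrate the unateness restriction in point (3), the following sketch checks whether a Boolean function is unate in every one of its variables; the two example functions are invented for illustration only.

```python
# Minimal check of unateness: a Boolean function h is unate if, for every variable,
# flipping that variable from 0 to 1 (with all others fixed) never moves h in both
# directions.
from itertools import product

def is_unate(h, n):
    """True iff h: {0,1}^n -> {0,1} is unate in every one of its n variables."""
    for i in range(n):
        increases = decreases = False
        for rest in product((0, 1), repeat=n - 1):
            low = rest[:i] + (0,) + rest[i:]
            high = rest[:i] + (1,) + rest[i:]
            if h(low) < h(high):
                increases = True    # h depends positively on x_i somewhere
            elif h(low) > h(high):
                decreases = True    # h depends negatively on x_i somewhere
        if increases and decreases:
            return False            # mixed influence of x_i: not unate
    return True

# x0 AND NOT x1 is unate (positive in x0, negative in x1) ...
print(is_unate(lambda s: int(s[0] and not s[1]), 2))  # True
# ... whereas XOR is unate in neither variable.
print(is_unate(lambda s: s[0] ^ s[1], 2))             # False
```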
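The strategy attributed above to Pal et al. (2005) — propagating fixed-point constraints onto the truth tables, assigning the remaining entries at random, and rejecting networks with unwanted attractors — can be sketched roughly as follows; the desired fixed points are invented, and this is only an illustration of the idea, not the authors' implementation.

```python
# Rough sketch of a fixed-point-constrained generate-and-test strategy:
# truth-table entries forced by the desired fixed points are pinned, the rest
# are filled at random, and a candidate network is kept only if its attractors
# are exactly the desired fixed points. The target fixed points are invented.
import random
from itertools import product

n = 3
states = list(product((0, 1), repeat=n))
desired_fixed_points = [(0, 0, 0), (1, 1, 1)]  # hypothetical target steady states

def random_consistent_network():
    """Truth tables in which every desired fixed point maps to itself;
    all other entries are chosen uniformly at random."""
    tables = [{} for _ in range(n)]
    for fp in desired_fixed_points:            # propagate the fixed-point constraints
        for i in range(n):
            tables[i][fp] = fp[i]
    for s in states:                           # fill the unconstrained entries randomly
        for i in range(n):
            tables[i].setdefault(s, random.randint(0, 1))
    return tables

def attractors(tables):
    """Attractors of the synchronous dynamics defined by the truth tables."""
    step = {s: tuple(tables[i][s] for i in range(n)) for s in states}
    found = set()
    for s in states:
        seen = []
        while s not in seen:
            seen.append(s)
            s = step[s]
        found.add(frozenset(seen[seen.index(s):]))
    return found

wanted = {frozenset([fp]) for fp in desired_fixed_points}
for trial in range(1000):                      # generate and test
    net = random_consistent_network()
    if attractors(net) == wanted:              # reject networks with unwanted attractors
        print(f"accepted a candidate network after {trial + 1} trials")
        break
```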
[ "12782112", "22303253", "22192526", "20056001", "28186191", "23658556", "28178334", "23805196", "15234201", "11688712", "16239464", "18508746", "16672256", "22833747", "22952589", "18301750", "11099257", "21778527", "15486106", "17989686", "28426669", "18614585", "28254369", "23134816", "25551820", "19150482", "20014476", "10475062", "18797474", "5343519", "26442247", "22198150", "15246788", "21161088", "15037758", "17903286", "17636866", "16386358", "9714934", "10487867", "28753377", "27484338", "16150807", "22303435", "20659480", "19953085", "17925349", "25591917" ]
[ { "pmid": "12782112", "title": "The topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in Drosophila melanogaster.", "abstract": "Expression of the Drosophila segment polarity genes is initiated by a pre-pattern of pair-rule gene products and maintained by a network of regulatory interactions throughout several stages of embryonic development. Analysis of a model of gene interactions based on differential equations showed that wild-type expression patterns of these genes can be obtained for a wide range of kinetic parameters, which suggests that the steady states are determined by the topology of the network and the type of regulatory interactions between components, not the detailed form of the rate laws. To investigate this, we propose and analyse a Boolean model of this network which is based on a binary ON/OFF representation of mRNA and protein levels, and in which the interactions are formulated as logical functions. In this model the spatial and temporal patterns of gene expression are determined by the topology of the network and whether components are present or absent, rather than the absolute levels of the mRNAs and proteins and the functional details of their interactions. The model is able to reproduce the wild-type gene expression patterns, as well as the ectopic expression patterns observed in overexpression experiments and various mutants. Furthermore, we compute explicitly all steady states of the network and identify the basin of attraction of each steady state. The model gives important insights into the functioning of the segment polarity gene network, such as the crucial role of the wingless and sloppy paired genes, and the network's ability to correct errors in the pre-pattern." }, { "pmid": "22303253", "title": "Flower development.", "abstract": "Flowers are the most complex structures of plants. Studies of Arabidopsis thaliana, which has typical eudicot flowers, have been fundamental in advancing the structural and molecular understanding of flower development. The main processes and stages of Arabidopsis flower development are summarized to provide a framework in which to interpret the detailed molecular genetic studies of genes assigned functions during flower development and is extended to recent genomics studies uncovering the key regulatory modules involved. Computational models have been used to study the concerted action and dynamics of the gene regulatory module that underlies patterning of the Arabidopsis inflorescence meristem and specification of the primordial cell types during early stages of flower development. This includes the gene combinations that specify sepal, petal, stamen and carpel identity, and genes that interact with them. As a dynamic gene regulatory network this module has been shown to converge to stable multigenic profiles that depend upon the overall network topology and are thus robust, which can explain the canalization of flower organ determination and the overall conservation of the basic flower plan among eudicots. Comparative and evolutionary approaches derived from Arabidopsis studies pave the way to studying the molecular basis of diverse floral morphologies." }, { "pmid": "22192526", "title": "\"Antelope\": a hybrid-logic model checker for branching-time Boolean GRN analysis.", "abstract": "BACKGROUND\nIn Thomas' formalism for modeling gene regulatory networks (GRNs), branching time, where a state can have more than one possible future, plays a prominent role. 
By representing a certain degree of unpredictability, branching time can model several important phenomena, such as (a) asynchrony, (b) incompletely specified behavior, and (c) interaction with the environment. Introducing more than one possible future for a state, however, creates a difficulty for ordinary simulators, because infinitely many paths may appear, limiting ordinary simulators to statistical conclusions. Model checkers for branching time, by contrast, are able to prove properties in the presence of infinitely many paths.\n\n\nRESULTS\nWe have developed Antelope (\"Analysis of Networks through TEmporal-LOgic sPEcifications\", http://turing.iimas.unam.mx:8080/AntelopeWEB/), a model checker for analyzing and constructing Boolean GRNs. Currently, software systems for Boolean GRNs use branching time almost exclusively for asynchrony. Antelope, by contrast, also uses branching time for incompletely specified behavior and environment interaction. We show the usefulness of modeling these two phenomena in the development of a Boolean GRN of the Arabidopsis thaliana root stem cell niche.There are two obstacles to a direct approach when applying model checking to Boolean GRN analysis. First, ordinary model checkers normally only verify whether or not a given set of model states has a given property. In comparison, a model checker for Boolean GRNs is preferable if it reports the set of states having a desired property. Second, for efficiency, the expressiveness of many model checkers is limited, resulting in the inability to express some interesting properties of Boolean GRNs.Antelope tries to overcome these two drawbacks: Apart from reporting the set of all states having a given property, our model checker can express, at the expense of efficiency, some properties that ordinary model checkers (e.g., NuSMV) cannot. This additional expressiveness is achieved by employing a logic extending the standard Computation-Tree Logic (CTL) with hybrid-logic operators.\n\n\nCONCLUSIONS\nWe illustrate the advantages of Antelope when (a) modeling incomplete networks and environment interaction, (b) exhibiting the set of all states having a given property, and (c) representing Boolean GRN properties with hybrid CTL." }, { "pmid": "20056001", "title": "Snazer: the simulations and networks analyzer.", "abstract": "BACKGROUND\nNetworks are widely recognized as key determinants of structure and function in systems that span the biological, physical, and social sciences. They are static pictures of the interactions among the components of complex systems. Often, much effort is required to identify networks as part of particular patterns as well as to visualize and interpret them.From a pure dynamical perspective, simulation represents a relevant way-out. Many simulator tools capitalized on the \"noisy\" behavior of some systems and used formal models to represent cellular activities as temporal trajectories. Statistical methods have been applied to a fairly large number of replicated trajectories in order to infer knowledge.A tool which both graphically manipulates reactive models and deals with sets of simulation time-course data by aggregation, interpretation and statistical analysis is missing and could add value to simulators.\n\n\nRESULTS\nWe designed and implemented Snazer, the simulations and networks analyzer. 
Its goal is to aid the processes of visualizing and manipulating reactive models, as well as to share and interpret time-course data produced by stochastic simulators or by any other means.\n\n\nCONCLUSIONS\nSnazer is a solid prototype that integrates biological network and simulation time-course data analysis techniques." }, { "pmid": "28186191", "title": "The combination of the functionalities of feedback circuits is determinant for the attractors' number and size in pathway-like Boolean networks.", "abstract": "Molecular regulation was initially assumed to follow both a unidirectional and a hierarchical organization forming pathways. Regulatory processes, however, form highly interlinked networks with non-hierarchical and non-unidirectional structures that contain statistically overrepresented circuits or motifs. Here, we analyze the behavior of pathways containing non-unidirectional (i.e. bidirectional) and non-hierarchical interactions that create motifs. In comparison with unidirectional and hierarchical pathways, our pathways have a high diversity of behaviors, characterized by the size and number of attractors. Motifs have been studied individually showing that feedback circuit motifs regulate the number and size of attractors. It is less clear what happens in molecular networks that usually contain multiple feedbacks. Here, we find that the way feedback circuits couple to each other (i.e., the combination of the functionalities of feedback circuits) regulate both the number and size of the attractors. We show that the different expected results of epistasis analysis (a method to infer regulatory interactions) are produced by many non-hierarchical and non-unidirectional structures. Thus, these structures cannot be correctly inferred by epistasis analysis. Finally, we show that the combinations of functionalities, combined with other network properties, allow for a better characterization of regulatory structures." }, { "pmid": "23658556", "title": "Finding Missing Interactions of the Arabidopsis thaliana Root Stem Cell Niche Gene Regulatory Network.", "abstract": "Over the last few decades, the Arabidopsis thaliana root stem cell niche (RSCN) has become a model system for the study of plant development and stem cell niche dynamics. Currently, many of the molecular mechanisms involved in RSCN maintenance and development have been described. A few years ago, we published a gene regulatory network (GRN) model integrating this information. This model suggested that there were missing components or interactions. Upon updating the model, the observed stable gene configurations of the RSCN could not be recovered, indicating that there are additional missing components or interactions in the model. In fact, due to the lack of experimental data, GRNs inferred from published data are usually incomplete. However, predicting the location and nature of the missing data is a not trivial task. Here, we propose a set of procedures for detecting and predicting missing interactions in Boolean networks. We used these procedures to predict putative missing interactions in the A. thaliana RSCN network model. 
Using our approach, we identified three necessary interactions to recover the reported gene activation configurations that have been experimentally uncovered for the different cell types within the RSCN: (1) a regulation of PHABULOSA to restrict its expression domain to the vascular cells, (2) a self-regulation of WOX5, possibly by an indirect mechanism through the auxin signaling pathway, and (3) a positive regulation of JACKDAW by MAGPIE. The procedures proposed here greatly reduce the number of possible Boolean functions that are biologically meaningful and experimentally testable and that do not contradict previous data. We believe that these procedures can be used on any Boolean network. However, because the procedures were designed for the specific case of the RSCN, formal demonstrations of the procedures should be shown in future efforts." }, { "pmid": "28178334", "title": "A novel mutual information-based Boolean network inference method from time-series gene expression data.", "abstract": "BACKGROUND\nInferring a gene regulatory network from time-series gene expression data in systems biology is a challenging problem. Many methods have been suggested, most of which have a scalability limitation due to the combinatorial cost of searching a regulatory set of genes. In addition, they have focused on the accurate inference of a network structure only. Therefore, there is a pressing need to develop a network inference method to search regulatory genes efficiently and to predict the network dynamics accurately.\n\n\nRESULTS\nIn this study, we employed a Boolean network model with a restricted update rule scheme to capture coarse-grained dynamics, and propose a novel mutual information-based Boolean network inference (MIBNI) method. Given time-series gene expression data as an input, the method first identifies a set of initial regulatory genes using mutual information-based feature selection, and then improves the dynamics prediction accuracy by iteratively swapping a pair of genes between sets of the selected regulatory genes and the other genes. Through extensive simulations with artificial datasets, MIBNI showed consistently better performance than six well-known existing methods, REVEAL, Best-Fit, RelNet, CST, CLR, and BIBN in terms of both structural and dynamics prediction accuracy. We further tested the proposed method with two real gene expression datasets for an Escherichia coli gene regulatory network and a fission yeast cell cycle network, and also observed better results using MIBNI compared to the six other methods.\n\n\nCONCLUSIONS\nTaken together, MIBNI is a promising tool for predicting both the structure and the dynamics of a gene regulatory network." }, { "pmid": "23805196", "title": "An Evaluation of Methods for Inferring Boolean Networks from Time-Series Data.", "abstract": "Regulatory networks play a central role in cellular behavior and decision making. Learning these regulatory networks is a major task in biology, and devising computational methods and mathematical models for this task is a major endeavor in bioinformatics. Boolean networks have been used extensively for modeling regulatory networks. In this model, the state of each gene can be either 'on' or 'off' and that next-state of a gene is updated, synchronously or asynchronously, according to a Boolean rule that is applied to the current-state of the entire system. 
Inferring a Boolean network from a set of experimental data entails two main steps: first, the experimental time-series data are discretized into Boolean trajectories, and then, a Boolean network is learned from these Boolean trajectories. In this paper, we consider three methods for data discretization, including a new one we propose, and three methods for learning Boolean networks, and study the performance of all possible nine combinations on four regulatory systems of varying dynamics complexities. We find that employing the right combination of methods for data discretization and network learning results in Boolean networks that capture the dynamics well and provide predictive power. Our findings are in contrast to a recent survey that placed Boolean networks on the low end of the \"faithfulness to biological reality\" and \"ability to model dynamics\" spectra. Further, contrary to the common argument in favor of Boolean networks, we find that a relatively large number of time points in the time-series data is required to learn good Boolean networks for certain data sets. Last but not least, while methods have been proposed for inferring Boolean networks, as discussed above, missing still are publicly available implementations thereof. Here, we make our implementation of the methods available publicly in open source at http://bioinfo.cs.rice.edu/." }, { "pmid": "15234201", "title": "Application of formal methods to biological regulatory networks: extending Thomas' asynchronous logical approach with temporal logic.", "abstract": "Based on the discrete definition of biological regulatory networks developed by René Thomas, we provide a computer science formal approach to treat temporal properties of biological regulatory networks, expressed in computational tree logic. It is then possible to build all the models satisfying a set of given temporal properties. Our approach is illustrated with the mucus production in Pseudomonas aeruginosa. This application of formal methods from computer science to biological regulatory networks should open the way to many other fruitful applications." }, { "pmid": "11688712", "title": "Modeling genetic networks and their evolution: a complex dynamical systems perspective.", "abstract": "After finishing the sequence of the human genome, a functional understanding of genome dynamics is the next major step on the agenda of the biosciences. New approaches, such as microarray techniques, and new methods of bioinformatics provide powerful tools aiming in this direction. In the last few years, important parts of genome organization and dynamics in a number of model organisms have been determined. However, an integrated view of gene regulation on a genomic scale is still lacking. Here, genome function is discussed from a complex dynamical systems perspective: which dynamical properties can a large genomic system exhibit in principle, given the local mechanisms governing the small subsystems that we know today? Models of artificial genetic networks are used to explore dynamical principles and possible emergent dynamical phenomena in networks of genetic switches. One observes evolution of robustness and dynamical self-organization in large networks of artificial regulators that are based on the dynamic mechanism of transcriptional regulators as observed in biological gene regulation. Possible biological observables and ways of experimental testing of global phenomena in genome function and dynamics are discussed. 
Models of artificial genetic networks provide a tool to address questions in genome dynamics and their evolution and allow simulation studies in evolutionary genomics." }, { "pmid": "18508746", "title": "Boolean network models of cellular regulation: prospects and limitations.", "abstract": "Computer models are valuable tools towards an understanding of the cell's biochemical regulatory machinery. Possible levels of description of such models range from modelling the underlying biochemical details to top-down approaches, using tools from the theory of complex networks. The latter, coarse-grained approach is taken where regulatory circuits are classified in graph-theoretical terms, with the elements of the regulatory networks being reduced to simply nodes and links, in order to obtain architectural information about the network. Further, considering dynamics on networks at such an abstract level seems rather unlikely to match dynamical regulatory activity of biological cells. Therefore, it came as a surprise when recently examples of discrete dynamical network models based on very simplistic dynamical elements emerged which in fact do match sequences of regulatory patterns of their biological counterparts. Here I will review such discrete dynamical network models, or Boolean networks, of biological regulatory networks. Further, we will take a look at such models extended with stochastic noise, which allow studying the role of network topology in providing robustness against noise. In the end, we will discuss the interesting question of why at all such simple models can describe aspects of biology despite their simplicity. Finally, prospects of Boolean models in exploratory dynamical models for biological circuits and their mutants will be discussed." }, { "pmid": "16672256", "title": "BIOCHAM: an environment for modeling biological systems and formalizing experimental knowledge.", "abstract": "UNLABELLED\nBIOCHAM (the BIOCHemical Abstract Machine) is a software environment for modeling biochemical systems. It is based on two aspects: (1) the analysis and simulation of boolean, kinetic and stochastic models and (2) the formalization of biological properties in temporal logic. BIOCHAM provides tools and languages for describing protein networks with a simple and straightforward syntax, and for integrating biological properties into the model. It then becomes possible to analyze, query, verify and maintain the model with respect to those properties. For kinetic models, BIOCHAM can search for appropriate parameter values in order to reproduce a specific behavior observed in experiments and formalized in temporal logic. Coupled with other methods such as bifurcation diagrams, this search assists the modeler/biologist in the modeling process.\n\n\nAVAILABILITY\nBIOCHAM (v. 2.5) is a free software available for download, with example models, at http://contraintes.inria.fr/BIOCHAM/." }, { "pmid": "22833747", "title": "An overview of existing modeling tools making use of model checking in the analysis of biochemical networks.", "abstract": "Model checking is a well-established technique for automatically verifying complex systems. Recently, model checkers have appeared in computer tools for the analysis of biochemical (and gene regulatory) networks. We survey several such tools to assess the potential of model checking in computational biology. 
Next, our overview focuses on direct applications of existing model checkers, as well as on algorithms for biochemical network analysis influenced by model checking, such as those using binary decision diagrams (BDDs) or Boolean-satisfiability solvers. We conclude with advantages and drawbacks of model checking for the analysis of biochemical networks." }, { "pmid": "22952589", "title": "Inference of biological pathway from gene expression profiles by time delay boolean networks.", "abstract": "One great challenge of genomic research is to efficiently and accurately identify complex gene regulatory networks. The development of high-throughput technologies provides numerous experimental data such as DNA sequences, protein sequence, and RNA expression profiles makes it possible to study interactions and regulations among genes or other substance in an organism. However, it is crucial to make inference of genetic regulatory networks from gene expression profiles and protein interaction data for systems biology. This study will develop a new approach to reconstruct time delay boolean networks as a tool for exploring biological pathways. In the inference strategy, we will compare all pairs of input genes in those basic relationships by their corresponding p-scores for every output gene. Then, we will combine those consistent relationships to reveal the most probable relationship and reconstruct the genetic network. Specifically, we will prove that O(log n) state transition pairs are sufficient and necessary to reconstruct the time delay boolean network of n nodes with high accuracy if the number of input genes to each gene is bounded. We also have implemented this method on simulated and empirical yeast gene expression data sets. The test results show that this proposed method is extensible for realistic networks." }, { "pmid": "18301750", "title": "Boolean network model predicts cell cycle sequence of fission yeast.", "abstract": "A Boolean network model of the cell-cycle regulatory network of fission yeast (Schizosaccharomyces Pombe) is constructed solely on the basis of the known biochemical interaction topology. Simulating the model in the computer faithfully reproduces the known activity sequence of regulatory proteins along the cell cycle of the living cell. Contrary to existing differential equation models, no parameters enter the model except the structure of the regulatory circuitry. The dynamical properties of the model indicate that the biological dynamical sequence is robustly implemented in the regulatory network, with the biological stationary state G1 corresponding to the dominant attractor in state space, and with the biological regulatory sequence being a strongly attractive trajectory. Comparing the fission yeast cell-cycle model to a similar model of the corresponding network in S. cerevisiae, a remarkable difference in circuitry, as well as dynamics is observed. While the latter operates in a strongly damped mode, driven by external excitation, the S. pombe network represents an auto-excited system with external damping." }, { "pmid": "11099257", "title": "Genetic network inference: from co-expression clustering to reverse engineering.", "abstract": "Advances in molecular biological, analytical and computational technologies are enabling us to systematically investigate the complex molecular processes underlying biological systems. In particular, using high-throughput gene expression assays, we are able to measure the output of the gene regulatory network. 
We aim here to review datamining and modeling approaches for conceptualizing and unraveling the functional relationships implicit in these datasets. Clustering of co-expression profiles allows us to infer shared regulatory inputs and functional pathways. We discuss various aspects of clustering, ranging from distance measures to clustering algorithms and multiple-cluster memberships. More advanced analysis aims to infer causal connections between genes directly, i.e. who is regulating whom and how. We discuss several approaches to the problem of reverse engineering of genetic networks, from discrete Boolean networks, to continuous linear and non-linear models. We conclude that the combination of predictive modeling with systematic experimental verification will be required to gain a deeper insight into living organisms, therapeutic targeting and bioengineering." }, { "pmid": "21778527", "title": "A SAT-based algorithm for finding attractors in synchronous Boolean networks.", "abstract": "This paper addresses the problem of finding attractors in synchronous Boolean networks. The existing Boolean decision diagram-based algorithms have limited capacity due to the excessive memory requirements of decision diagrams. The simulation-based algorithms can be applied to larger networks, however, they are incomplete. We present an algorithm, which uses a SAT-based bounded model checking to find all attractors in a Boolean network. The efficiency of the presented algorithm is evaluated by analyzing seven networks models of real biological processes, as well as 150,000 randomly generated Boolean networks of sizes between 100 and 7,000. The results show that our approach has a potential to handle an order of magnitude larger models than currently possible." }, { "pmid": "15486106", "title": "A gene regulatory network model for cell-fate determination during Arabidopsis thaliana flower development that is robust and recovers experimental gene expression profiles.", "abstract": "Flowers are icons in developmental studies of complex structures. The vast majority of 250,000 angiosperm plant species have flowers with a conserved organ plan bearing sepals, petals, stamens, and carpels in the center. The combinatorial model for the activity of the so-called ABC homeotic floral genes has guided extensive experimental studies in Arabidopsis thaliana and many other plant species. However, a mechanistic and dynamical explanation for the ABC model and prevalence among flowering plants is lacking. Here, we put forward a simple discrete model that postulates logical rules that formally summarize published ABC and non-ABC gene interaction data for Arabidopsis floral organ cell fate determination and integrates this data into a dynamic network model. This model shows that all possible initial conditions converge to few steady gene activity states that match gene expression profiles observed experimentally in primordial floral organ cells of wild-type and mutant plants. Therefore, the network proposed here provides a dynamical explanation for the ABC model and shows that precise signaling pathways are not required to restrain cell types to those found in Arabidopsis, but these are rather determined by the overall gene network dynamics. Furthermore, we performed robustness analyses that clearly show that the cell types recovered depend on the network architecture rather than on specific values of the model's gene interaction parameters. 
These results support the hypothesis that such a network constitutes a developmental module, and hence provide a possible explanation for the overall conservation of the ABC model and overall floral plan among angiosperms. In addition, we have been able to predict the effects of differences in network architecture between Arabidopsis and Petunia hybrida." }, { "pmid": "17989686", "title": "Executable cell biology.", "abstract": "Computational modeling of biological systems is becoming increasingly important in efforts to better understand complex biological behaviors. In this review, we distinguish between two types of biological models--mathematical and computational--which differ in their representations of biological phenomena. We call the approach of constructing computational models of biological systems 'executable biology', as it focuses on the design of executable computer algorithms that mimic biological phenomena. We survey the main modeling efforts in this direction, emphasize the applicability and benefits of executable models in biological research and highlight some of the challenges that executable biology poses for biology and computer science. We claim that for executable biology to reach its full potential as a mainstream biological technique, formal and algorithmic approaches must be integrated into biological research. This will drive biology toward a more precise engineering discipline." }, { "pmid": "28426669", "title": "A dynamic genetic-hormonal regulatory network model explains multiple cellular behaviors of the root apical meristem of Arabidopsis thaliana.", "abstract": "The study of the concerted action of hormones and transcription factors is fundamental to understand cell differentiation and pattern formation during organ development. The root apical meristem of Arabidopsis thaliana is a useful model to address this. It has a stem cell niche near its tip conformed of a quiescent organizer and stem or initial cells around it, then a proliferation domain followed by a transition domain, where cells diminish division rate before transiting to the elongation zone; here, cells grow anisotropically prior to their final differentiation towards the plant base. A minimal model of the gene regulatory network that underlies cell-fate specification and patterning at the root stem cell niche was proposed before. In this study, we update and couple such network with both the auxin and cytokinin hormone signaling pathways to address how they collectively give rise to attractors that correspond to the genetic and hormonal activity profiles that are characteristic of different cell types along A. thaliana root apical meristem. We used a Boolean model of the genetic-hormonal regulatory network to integrate known and predicted regulatory interactions into alternative models. Our analyses show that, after adding some putative missing interactions, the model includes the necessary and sufficient components and regulatory interactions to recover attractors characteristic of the root cell types, including the auxin and cytokinin activity profiles that correlate with different cellular behaviors along the root apical meristem. Furthermore, the model predicts the existence of activity configurations that could correspond to the transition domain. The model also provides a possible explanation for apparently paradoxical cellular behaviors in the root meristem. For example, how auxin may induce and at the same time inhibit WOX5 expression. 
According to the model proposed here the hormonal regulation of WOX5 might depend on the cell type. Our results illustrate how non-linear multi-stable qualitative network models can aid at understanding how transcriptional regulators and hormonal signaling pathways are dynamically coupled and may underlie both the acquisition of cell fate and the emergence of hormonal activity profiles that arise during complex organ development." }, { "pmid": "18614585", "title": "Synchronous versus asynchronous modeling of gene regulatory networks.", "abstract": "MOTIVATION\nIn silico modeling of gene regulatory networks has gained some momentum recently due to increased interest in analyzing the dynamics of biological systems. This has been further facilitated by the increasing availability of experimental data on gene-gene, protein-protein and gene-protein interactions. The two dynamical properties that are often experimentally testable are perturbations and stable steady states. Although a lot of work has been done on the identification of steady states, not much work has been reported on in silico modeling of cellular differentiation processes.\n\n\nRESULTS\nIn this manuscript, we provide algorithms based on reduced ordered binary decision diagrams (ROBDDs) for Boolean modeling of gene regulatory networks. Algorithms for synchronous and asynchronous transition models have been proposed and their corresponding computational properties have been analyzed. These algorithms allow users to compute cyclic attractors of large networks that are currently not feasible using existing software. Hereby we provide a framework to analyze the effect of multiple gene perturbation protocols, and their effect on cell differentiation processes. These algorithms were validated on the T-helper model showing the correct steady state identification and Th1-Th2 cellular differentiation process.\n\n\nAVAILABILITY\nThe software binaries for Windows and Linux platforms can be downloaded from http://si2.epfl.ch/~garg/genysis.html." }, { "pmid": "28254369", "title": "Applying attractor dynamics to infer gene regulatory interactions involved in cellular differentiation.", "abstract": "The dynamics of gene regulatory networks (GRNs) guide cellular differentiation. Determining the ways regulatory genes control expression of their targets is essential to understand and control cellular differentiation. The way a regulatory gene controls its target can be expressed as a gene regulatory function. Manual derivation of these regulatory functions is slow, error-prone and difficult to update as new information arises. Automating this process is a significant challenge and the subject of intensive effort. This work presents a novel approach to discovering biologically plausible gene regulatory interactions that control cellular differentiation. This method integrates known cell type expression data, genetic interactions, and knowledge of the effects of gene knockouts to determine likely GRN regulatory functions. We employ a genetic algorithm to search for candidate GRNs that use a set of transcription factors that control differentiation within a lineage. Nested canalyzing functions are used to constrain the search space to biologically plausible networks. The method identifies an ensemble of GRNs whose dynamics reproduce the gene expression pattern for each cell type within a particular lineage. 
The method's effectiveness was tested by inferring consensus GRNs for myeloid and pancreatic cell differentiation and comparing the predicted gene regulatory interactions to manually derived interactions. We identified many regulatory interactions reported in the literature and also found differences from published reports. These discrepancies suggest areas for biological studies of myeloid and pancreatic differentiation. We also performed a study that used defined synthetic networks to evaluate the accuracy of the automated search method and found that the search algorithm was able to discover the regulatory interactions in these defined networks with high accuracy. We suggest that the GRN functions derived from the methods described here can be used to fill gaps in knowledge about regulatory interactions and to offer hypotheses for experimental testing of GRNs that control differentiation and other biological processes." }, { "pmid": "23134816", "title": "Boolean network inference from time series data incorporating prior biological knowledge.", "abstract": "BACKGROUND\nNumerous approaches exist for modeling of genetic regulatory networks (GRNs) but the low sampling rates often employed in biological studies prevents the inference of detailed models from experimental data. In this paper, we analyze the issues involved in estimating a model of a GRN from single cell line time series data with limited time points.\n\n\nRESULTS\nWe present an inference approach for a Boolean Network (BN) model of a GRN from limited transcriptomic or proteomic time series data based on prior biological knowledge of connectivity, constraints on attractor structure and robust design. We applied our inference approach to 6 time point transcriptomic data on Human Mammary Epithelial Cell line (HMEC) after application of Epidermal Growth Factor (EGF) and generated a BN with a plausible biological structure satisfying the data. We further defined and applied a similarity measure to compare synthetic BNs and BNs generated through the proposed approach constructed from transitions of various paths of the synthetic BNs. We have also compared the performance of our algorithm with two existing BN inference algorithms.\n\n\nCONCLUSIONS\nThrough theoretical analysis and simulations, we showed the rarity of arriving at a BN from limited time series data with plausible biological structure using random connectivity and absence of structure in data. The framework when applied to experimental data and data generated from synthetic BNs were able to estimate BNs with high similarity scores. Comparison with existing BN inference algorithms showed the better performance of our proposed algorithm for limited time series data. The proposed framework can also be applied to optimize the connectivity of a GRN from experimental data when the prior biological knowledge on regulators is limited or not unique." }, { "pmid": "25551820", "title": "A full bayesian approach for boolean genetic network inference.", "abstract": "Boolean networks are a simple but efficient model for describing gene regulatory systems. A number of algorithms have been proposed to infer Boolean networks. However, these methods do not take full consideration of the effects of noise and model uncertainty. In this paper, we propose a full Bayesian approach to infer Boolean genetic networks. Markov chain Monte Carlo algorithms are used to obtain the posterior samples of both the network structure and the related parameters. 
In addition to regular link addition and removal moves, which can guarantee the irreducibility of the Markov chain for traversing the whole network space, carefully constructed mixture proposals are used to improve the Markov chain Monte Carlo convergence. Both simulations and a real application on cell-cycle data show that our method is more powerful than existing methods for the inference of both the topology and logic relations of the Boolean network from observed data." }, { "pmid": "19150482", "title": "Gene regulatory network inference: data integration in dynamic models-a review.", "abstract": "Systems biology aims to develop mathematical models of biological systems by integrating experimental and theoretical techniques. During the last decade, many systems biological approaches that base on genome-wide data have been developed to unravel the complexity of gene regulation. This review deals with the reconstruction of gene regulatory networks (GRNs) from experimental data through computational methods. Standard GRN inference methods primarily use gene expression data derived from microarrays. However, the incorporation of additional information from heterogeneous data sources, e.g. genome sequence and protein-DNA interaction data, clearly supports the network inference process. This review focuses on promising modelling approaches that use such diverse types of molecular biological information. In particular, approaches are discussed that enable the modelling of the dynamics of gene regulatory systems. The review provides an overview of common modelling schemes and learning algorithms and outlines current challenges in GRN modelling." }, { "pmid": "20014476", "title": "Inference of gene regulatory networks using boolean-network inference methods.", "abstract": "The modeling of genetic networks especially from microarray and related data has become an important aspect of the biosciences. This review takes a fresh look at a specific family of models used for constructing genetic networks, the so-called Boolean networks. The review outlines the various different types of Boolean network developed to date, from the original Random Boolean Network to the current Probabilistic Boolean Network. In addition, some of the different inference methods available to infer these genetic networks are also examined. Where possible, particular attention is paid to input requirements as well as the efficiency, advantages and drawbacks of each method. Though the Boolean network model is one of many models available for network inference today, it is well established and remains a topic of considerable interest in the field of genetic network inference. Hybrids of Boolean networks with other approaches may well be the way forward in inferring the most informative networks." }, { "pmid": "10475062", "title": "Gene expression profiling, genetic networks, and cellular states: an integrating concept for tumorigenesis and drug discovery.", "abstract": "Genome-wide expression monitoring, a novel tool of functional genomics, is currently used mainly to identify groups of coregulated genes and to discover genes expressed differentially in distinct situations that could serve as drug targets. This descriptive approach. however, fails to extract \"distributed\" information embedded in the genomic regulatory network and manifested in distinct gene activation profiles. 
A model based on the formalism of boolean genetic networks in which cellular states are represented by attractors in a discrete dynamic system can serve as a conceptual framework for an integrative interpretation of gene expression profiles. Such a global (genome-wide) view of \"gene function\" in the regulation of the dynamic relationship between proliferation, differentiation, and apoptosis can provide new insights into cellular homeostasis and the origins of neoplasia. Implications for a rational approach to the identification of new drug targets for cancer treatment are discussed." }, { "pmid": "18797474", "title": "Modelling and analysis of gene regulatory networks.", "abstract": "Gene regulatory networks have an important role in every process of life, including cell differentiation, metabolism, the cell cycle and signal transduction. By understanding the dynamics of these networks we can shed light on the mechanisms of diseases that occur when these cellular processes are dysregulated. Accurate prediction of the behaviour of regulatory networks will also speed up biotechnological projects, as such predictions are quicker and cheaper than lab experiments. Computational methods, both for supporting the development of network models and for the analysis of their functionality, have already proved to be a valuable research tool." }, { "pmid": "26442247", "title": "Approximating Attractors of Boolean Networks by Iterative CTL Model Checking.", "abstract": "This paper introduces the notion of approximating asynchronous attractors of Boolean networks by minimal trap spaces. We define three criteria for determining the quality of an approximation: \"faithfulness\" which requires that the oscillating variables of all attractors in a trap space correspond to their dimensions, \"univocality\" which requires that there is a unique attractor in each trap space, and \"completeness\" which requires that there are no attractors outside of a given set of trap spaces. Each is a reachability property for which we give equivalent model checking queries. Whereas faithfulness and univocality can be decided by model checking the corresponding subnetworks, the naive query for completeness must be evaluated on the full state space. Our main result is an alternative approach which is based on the iterative refinement of an initially poor approximation. The algorithm detects so-called autonomous sets in the interaction graph, variables that contain all their regulators, and considers their intersection and extension in order to perform model checking on the smallest possible state spaces. A benchmark, in which we apply the algorithm to 18 published Boolean networks, is given. In each case, the minimal trap spaces are faithful, univocal, and complete, which suggests that they are in general good approximations for the asymptotics of Boolean networks." }, { "pmid": "22198150", "title": "A data-driven integrative model of sepal primordium polarity in Arabidopsis.", "abstract": "Flower patterning is determined by a complex molecular network but how this network functions remains to be elucidated. Here, we develop an integrative modeling approach that assembles heterogeneous data into a biologically coherent model to allow predictions to be made and inconsistencies among the data to be found. We use this approach to study the network underlying sepal development in the young flower of Arabidopsis thaliana. 
We constructed a digital atlas of gene expression and used it to build a dynamical molecular regulatory network model of sepal primordium development. This led to the construction of a coherent molecular network model for lateral organ polarity that fully recapitulates expression and interaction data. Our model predicts the existence of three novel pathways involving the HD-ZIP III genes and both cytokinin and ARGONAUTE family members. In addition, our model provides predictions on molecular interactions. In a broader context, this approach allows the extraction of biological knowledge from diverse types of data and can be used to study developmental processes in any multicellular organism." }, { "pmid": "15246788", "title": "A computational algebra approach to the reverse engineering of gene regulatory networks.", "abstract": "This paper proposes a new method to reverse engineer gene regulatory networks from experimental data. The modeling framework used is time-discrete deterministic dynamical systems, with a finite set of states for each of the variables. The simplest examples of such models are Boolean networks, in which variables have only two possible states. The use of a larger number of possible states allows a finer discretization of experimental data and more than one possible mode of action for the variables, depending on threshold values. Furthermore, with a suitable choice of state set, one can employ powerful tools from computational algebra, that underlie the reverse-engineering algorithm, avoiding costly enumeration strategies. To perform well, the algorithm requires wildtype together with perturbation time courses. This makes it suitable for small to meso-scale networks rather than networks on a genome-wide scale. An analysis of the complexity of the algorithm is performed. The algorithm is validated on a recently published Boolean network model of segment polarity development in Drosophila melanogaster." }, { "pmid": "21161088", "title": "From biological pathways to regulatory networks.", "abstract": "This paper presents a general theoretical framework for generating Boolean networks whose state transitions realize a set of given biological pathways or minor variations thereof. This ill-posed inverse problem, which is of crucial importance across practically all areas of biology, is solved by using Karnaugh maps which are classical tools for digital system design. It is shown that the incorporation of prior knowledge, presented in the form of biological pathways, can bring about a dramatic reduction in the cardinality of the network search space. Constraining the connectivity of the network, the number and relative importance of the attractors, and concordance with observed time-course data are additional factors that can be used to further reduce the cardinality of the search space. The networks produced by the approaches developed here should facilitate the understanding of multivariate biological phenomena and the subsequent design of intervention approaches that are more likely to be successful in practice. As an example, the results of this paper are applied to the widely studied p53 pathway and it is shown that the resulting network exhibits dynamic behavior consistent with experimental observations from the published literature." }, { "pmid": "15037758", "title": "The yeast cell-cycle network is robustly designed.", "abstract": "The interactions between proteins, DNA, and RNA in living cells constitute molecular networks that govern various cellular functions. 
To investigate the global dynamical properties and stabilities of such networks, we studied the cell-cycle regulatory network of the budding yeast. With the use of a simple dynamical model, it was demonstrated that the cell-cycle network is extremely stable and robust for its function. The biological stationary state, the G1 state, is a global attractor of the dynamics. The biological pathway, the cell-cycle sequence of protein states, is a globally attracting trajectory of the dynamics. These properties are largely preserved with respect to small perturbations to the network. These results suggest that cellular regulatory networks are robustly designed for their functions." }, { "pmid": "17903286", "title": "Inferring cellular networks--a review.", "abstract": "In this review we give an overview of computational and statistical methods to reconstruct cellular networks. Although this area of research is vast and fast developing, we show that most currently used methods can be organized by a few key concepts. The first part of the review deals with conditional independence models including Gaussian graphical models and Bayesian networks. The second part discusses probabilistic and graph-based methods for data from experimental interventions and perturbations." }, { "pmid": "17636866", "title": "Symbolic modeling of genetic regulatory networks.", "abstract": "Understanding the functioning of genetic regulatory networks supposes a modeling of biological processes in order to simulate behaviors and to reason on the model. Unfortunately, the modeling task is confronted to incomplete knowledge about the system. To deal with this problem we propose a methodology that uses the qualitative approach developed by Thomas. A symbolic transition system can represent the set of all possible models in a concise and symbolic way. We introduce a new method based on model-checking techniques and symbolic execution to extract constraints on parameters leading to dynamics coherent with known behaviors. Our method allows us to efficiently respond to two kinds of questions: is there any model coherent with a certain hypothetic behavior? Are there behaviors common to all selected models? The first question is illustrated with the example of the mucus production in Pseudomonas aeruginosa while the second one is illustrated with the example of immunity control in bacteriophage lambda." }, { "pmid": "16386358", "title": "A network model for the control of the differentiation process in Th cells.", "abstract": "T helper cells differentiate from a precursor type, Th0, to either the Th1 or Th2 phenotypes. While a number of molecules are known to participate in this process, it is not completely understood how they regulate each other to ensure differentiation. This article presents the core regulatory network controlling the differentiation of Th cells, reconstructed from published molecular data. This network encompasses 17 nodes, namely IFN-gamma, IL-4, IL-12, IL-18, IFN-beta, IFN-gammaR, IL-4R, IL-12R, IL-18R, IFN-betaR, STAT-1, STAT-6, STAT-4, IRAK, SOCS-1, GATA-3, and T-bet, as well as their cross-regulatory interactions. The reconstructed network was modeled as a discrete dynamical system, and analyzed in terms of its constituent feedback loops. The stable steady states of the Th network model are consistent with the stable molecular patterns of activation observed in wild type and mutant Th0, Th1 and Th2 cells." 
}, { "pmid": "9714934", "title": "Dynamics of the genetic regulatory network for Arabidopsis thaliana flower morphogenesis.", "abstract": "We present a network model and its dynamic analysis for the regulatory relationships among 11 genes that participate in Arabidopsis thaliana flower morphogenesis. The topology of the network and the relative strengths of interactions among these genes were based from published genetic and molecular data, mainly relying on mRNA expression patterns under wild type and mutant backgrounds. The network model is made of binary elements and we used a particular dynamic implementation for the network that we call semi-synchronic. Using this method the network reaches six attractors; four of them correspond to observed patterns of gene expression found in the floral organs of Arabidopsis (sepals, petals, stamens and carpels) as predicted by the ABC model of flower morphogenesis. The fifth state corresponds to cells that are not competent to flowering, and the sixth attractor predicted by the model is never found in wild-type plants, but it could be induced experimentally. We discuss the biological implications and the potential use of this network modeling approach to integrate functional data of regulatory genes of plant development." }, { "pmid": "10487867", "title": "Genetic control of flower morphogenesis in Arabidopsis thaliana: a logical analysis.", "abstract": "MOTIVATION\nA large number of molecular mechanisms at the basis of gene regulation have been described during the last few decades. It is now becoming possible to address questions dealing with both the structure and the dynamics of genetic regulatory networks, at least in the case of some of the best-characterized organisms. Most recent attempts to address these questions deal with microbial or animal model systems. In contrast, we analyze here a gene network involved in the control of the morphogenesis of flowers in a model plant, Arabidopsis thaliana.\n\n\nRESULTS\nThe genetic control of flower morphogenesis in Arabidopsis involves a large number of genes, of which 10 are considered here. The network topology has been derived from published genetic and molecular data, mainly relying on mRNA expression patterns under wild-type and mutant backgrounds. Using a 'generalized logical formalism', we provide a qualitative model and derive the parameter constraints accounting for the different patterns of gene expression found in the four floral organs of Arabidopsis (sepals, petals, stamens and carpels), plus a 'non-floral' state. This model also allows the simulation or the prediction of various mutant phenotypes. On the basis of our model analysis, we predict the existence of a sixth stable pattern of gene expression, yet to be characterized experimentally. Moreover, our dynamical analysis leads to the prediction of at least one more regulator of the gene LFY, likely to be involved in the transition from the non-flowering state to the flowering pathways. Finally, this work, together with other theoretical and experimental considerations, leads us to propose some general conclusions about the structure of gene networks controlling development." }, { "pmid": "28753377", "title": "Expected Number of Fixed Points in Boolean Networks with Arbitrary Topology.", "abstract": "Boolean network models describe genetic, neural, and social dynamics in complex networks, where the dynamics depend generally on network topology. 
Fixed points in a genetic regulatory network are typically considered to correspond to cell types in an organism. We prove that the expected number of fixed points in a Boolean network, with Boolean functions drawn from probability distributions that are not required to be uniform or identical, is one, and is independent of network topology if only a feedback arc set satisfies a stochastic neutrality condition. We also demonstrate that the expected number is increased by the predominance of positive feedback in a cycle." },
{ "pmid": "27484338",
  "title": "Boolean network identification from perturbation time series data combining dynamics abstraction and logic programming.",
  "abstract": "Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time-points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data and provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7min of computation. We quantified the gain in our method predictions precision compared to learning approaches based on static data. Finally, as an application, our method proposes erroneous time-points in the time series data with respect to the optimal learned logic models." },
{ "pmid": "16150807",
  "title": "Generating Boolean networks with a prescribed attractor structure.",
  "abstract": "MOTIVATION\nDynamical modeling of gene regulation via network models constitutes a key problem for genomics. The long-run characteristics of a dynamical system are critical and their determination is a primary aspect of system analysis. In the other direction, system synthesis involves constructing a network possessing a given set of properties. This constitutes the inverse problem. Generally, the inverse problem is ill-posed, meaning there will be many networks, or perhaps none, possessing the desired properties. Relative to long-run behavior, we may wish to construct networks possessing a desirable steady-state distribution. This paper addresses the long-run inverse problem pertaining to Boolean networks (BNs).\n\n\nRESULTS\nThe long-run behavior of a BN is characterized by its attractors. The rest of the state transition diagram is partitioned into level sets, the j-th level set being composed of all states that transition to one of the attractor states in exactly j transitions. We present two algorithms for the attractor inverse problem. The attractors are specified, and the sizes of the predictor sets and the number of levels are constrained.
Algorithm complexity and performance are analyzed. The algorithmic solutions have immediate application. Under the assumption that sampling is from the steady state, a basic criterion for checking the validity of a designed network is that there should be concordance between the attractor states of the model and the data states. This criterion can be used to test a design algorithm: randomly select a set of states to be used as data states; generate a BN possessing the selected states as attractors, perhaps with some added requirements such as constraints on the number of predictors and the level structure; apply the design algorithm; and check the concordance between the attractor states of the designed network and the data states.\n\n\nAVAILABILITY\nThe software and supplementary material is available at http://gsp.tamu.edu/Publications/BNs/bn.htm" },
{ "pmid": "22303435",
  "title": "Boolean models of biosurfactants production in Pseudomonas fluorescens.",
  "abstract": "Cyclolipopeptides (CLPs) are biosurfactants produced by numerous Pseudomonas fluorescens strains. CLP production is known to be regulated at least by the GacA/GacS two-component pathway, but the full regulatory network is yet largely unknown. In the clinical strain MFN1032, CLP production is abolished by a mutation in the phospholipase C gene (plcC) and not restored by plcC complementation. Their production is also subject to phenotypic variation. We used a modelling approach with Boolean networks, which takes into account all these observations concerning CLP production without any assumption on the topology of the considered network. Intensive computation yielded numerous models that satisfy these properties. All models minimizing the number of components point to a bistability in CLP production, which requires the presence of a yet unknown key self-inducible regulator. Furthermore, all suggest that a set of yet unexplained phenotypic variants might also be due to this epigenetic switch. The simplest of these Boolean networks was used to propose a biological regulatory network for CLP production. This modelling approach has allowed a possible regulation to be unravelled and an unusual behaviour of CLP production in P. fluorescens to be explained." },
{ "pmid": "20659480",
  "title": "Attractor analysis of asynchronous Boolean models of signal transduction networks.",
  "abstract": "Prior work on the dynamics of Boolean networks, including analysis of the state space attractors and the basin of attraction of each attractor, has mainly focused on synchronous update of the nodes' states. Although the simplicity of synchronous updating makes it very attractive, it fails to take into account the variety of time scales associated with different types of biological processes. Several different asynchronous update methods have been proposed to overcome this limitation, but there have not been any systematic comparisons of the dynamic behaviors displayed by the same system under different update methods. Here we fill this gap by combining theoretical analysis such as solution of scalar equations and Markov chain techniques, as well as numerical simulations to carry out a thorough comparative study on the dynamic behavior of a previously proposed Boolean model of a signal transduction network in plants. Prior evidence suggests that this network admits oscillations, but it is not known whether these oscillations are sustained.
We perform an attractor analysis of this system using synchronous and three different asynchronous updating schemes both in the case of the unperturbed (wild-type) and perturbed (node-disrupted) systems. This analysis reveals that while the wild-type system possesses an update-independent fixed point, any oscillations eventually disappear unless strict constraints regarding the timing of certain processes and the initial state of the system are satisfied. Interestingly, in the case of disruption of a particular node all models lead to an extended attractor. Overall, our work provides a roadmap on how Boolean network modeling can be used as a predictive tool to uncover the dynamic patterns of a biological system under various internal and environmental perturbations." },
{ "pmid": "19953085",
  "title": "Discrete logic modelling as a means to link protein signalling networks with functional analysis of mammalian signal transduction.",
  "abstract": "Large-scale protein signalling networks are useful for exploring complex biochemical pathways but do not reveal how pathways respond to specific stimuli. Such specificity is critical for understanding disease and designing drugs. Here we describe a computational approach--implemented in the free CNO software--for turning signalling networks into logical models and calibrating the models against experimental data. When a literature-derived network of 82 proteins covering the immediate-early responses of human cells to seven cytokines was modelled, we found that training against experimental data dramatically increased predictive power, despite the crudeness of Boolean approximations, while significantly reducing the number of interactions. Thus, many interactions in literature-derived networks do not appear to be functional in the liver cells from which we collected our data. At the same time, CNO identified several new interactions that improved the match of model to data. Although missing from the starting network, these interactions have literature support. Our approach, therefore, represents a means to generate predictive, cell-type-specific models of mammalian signalling from generic protein signalling networks." },
{ "pmid": "17925349",
  "title": "Dialogue on reverse-engineering assessment and methods: the DREAM of high-throughput pathway inference.",
  "abstract": "The biotechnological advances of the last decade have confronted us with an explosion of genetics, genomics, transcriptomics, proteomics, and metabolomics data. These data need to be organized and structured before they may provide a coherent biological picture. To accomplish this formidable task, the availability of an accurate map of the physical interactions in the cell that are responsible for cellular behavior and function would be exceedingly helpful, as these data are ultimately the result of such molecular interactions. However, all we have at this time is, at best, a fragmentary and only partially correct representation of the interactions between genes, their byproducts, and other cellular entities. If we want to succeed in our quest for understanding the biological whole as more than the sum of the individual parts, we need to build more comprehensive and cell-context-specific maps of the biological interaction networks.
DREAM, the Dialogue on Reverse Engineering Assessment and Methods, is fostering a concerted effort by computational and experimental biologists to understand the limitations and to enhance the strengths of the efforts to reverse engineer cellular networks from high-throughput data. In this chapter we will discuss the salient arguments of the first DREAM conference. We will highlight both the state of the art in the field of reverse engineering as well as some of its challenges and opportunities." },
{ "pmid": "25591917",
  "title": "Predicting protein functions using incomplete hierarchical labels.",
  "abstract": "BACKGROUND\nProtein function prediction is to assign biological or biochemical functions to proteins, and it is a challenging computational problem characterized by several factors: (1) the number of function labels (annotations) is large; (2) a protein may be associated with multiple labels; (3) the function labels are structured in a hierarchy; and (4) the labels are incomplete. Current predictive models often assume that the labels of the labeled proteins are complete, i.e. no label is missing. But in real scenarios, we may be aware of only some hierarchical labels of a protein, and we may not know whether additional ones are actually present. The scenario of incomplete hierarchical labels, a challenging and practical problem, is seldom studied in protein function prediction.\n\n\nRESULTS\nIn this paper, we propose an algorithm to Predict protein functions using Incomplete hierarchical LabeLs (PILL in short). PILL takes into account the hierarchical and the flat taxonomy similarity between function labels, and defines a Combined Similarity (ComSim) to measure the correlation between labels. PILL estimates the missing labels for a protein based on ComSim and the known labels of the protein, and uses a regularization to exploit the interactions between proteins for function prediction. PILL is shown to outperform other related techniques in replenishing the missing labels and in predicting the functions of completely unlabeled proteins on publicly available PPI datasets annotated with MIPS Functional Catalogue and Gene Ontology labels.\n\n\nCONCLUSION\nThe empirical study shows that it is important to consider the incomplete annotation for protein function prediction. The proposed method (PILL) can serve as a valuable tool for protein function prediction using incomplete labels. The Matlab code of PILL is available upon request." }
]