ATR-FTIR and Flow Microcalorimetry Studies on the Initial Binding Kinetics of Arsenicals at the Organic-Hematite Interface.
The environmental fate of arsenic compounds depends on their surface interactions with geosorbents that include clays, metal oxides, and natural organic matter (NOM). While a number of batch studies have reported that NOM inhibits the uptake of arsenicals, it remains unclear how different classes of organic functional groups affect their binding mechanisms. We report herein the adsorption kinetics of arsenate and dimethylarsinic acid (DMA) on hematite nanoparticles pre-exposed to three types of low-molecular-weight organics: citrate, oxalate, and pyrocatechol, as representatives of the majority of reactive organic functional groups in NOM. These studies were conducted using attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR) and flow microcalorimetry at pH 7, with an emphasis on the role that electrolytes (KCl, NaCl, and KBr) play in the adsorption process. Results show that (1) negatively charged carboxylate versus hydrophobic phenyl groups influence the amounts and initial rates of arsenical adsorption on hematite nanoparticles to varying degrees, depending on the type of complexes they form, (2) the type of electrolyte affects the initial adsorption rate of DMA to a greater extent than that of arsenate when oxalate is present on the surface, and (3) the extent of organics retention by hematite nanoparticles is influenced by the type of desorbing agent.
A digital design flow for secure integrated circuits
Small embedded integrated circuits (ICs) such as smart cards are vulnerable to the so-called side-channel attacks (SCAs). The attacker can gain information by monitoring the power consumption, execution time, electromagnetic radiation, and other information leaked by the switching behavior of digital complementary metal-oxide-semiconductor (CMOS) gates. This paper presents a digital very large scale integrated (VLSI) design flow to create secure power-analysis-attack-resistant ICs. The design flow starts from a normal design in a hardware description language such as very-high-speed integrated circuit (VHSIC) hardware description language (VHDL) or Verilog and provides a direct path to an SCA-resistant layout. Instead of a full custom layout or an iterative design process with extensive simulations, a few key modifications are incorporated in a regular synchronous CMOS standard cell design flow. The basis for power analysis attack resistance is discussed. This paper describes how to adjust the library databases such that the regular single-ended static CMOS standard cells implement a dynamic and differential logic style and such that 20 000+ differential nets can be routed in parallel. This paper also explains how to modify the constraints and rules files for the synthesis, place, and differential route procedures. Measurement-based experimental results have demonstrated that the secure digital design flow is a functional technique to thwart side-channel power analysis. It successfully protects a prototype Advanced Encryption Standard (AES) IC fabricated in a 0.18-μm CMOS process.
Connective tissue graft technique assuring wide root coverage.
Gingival recession related to periodontal disease or developmental problems can result in root sensitivity, root caries, and esthetically unacceptable root exposures. Consequently, root restorations are performed that often complicate, rather than resolve, the problems created by exposed roots. This article presents a predictable procedure for root coverage on areas of wide denudation in the maxilla and the mandible.
Unification of Gravity and Quantum Theory
An overview of the four fundamental forces of physics as described by the Standard Model (SM), and of the prevalent unifying theories beyond it, is provided. Background knowledge of the particles governing the fundamental forces is provided, as it will be useful in understanding how the unification efforts of particle physics have evolved, either from the SM or apart from it. It is shown that efforts to provide a quantum theory of gravity have allowed supersymmetry (SUSY) and M-theory to become two of the prevailing theories for unifying gravity with the remaining non-gravitational forces.
Articulatory-based conversion of foreign accents with deep neural networks
We present an articulatory-based method for real-time accent conversion using deep neural networks (DNNs). The approach consists of two steps. First, we train a DNN articulatory synthesizer for the non-native speaker that estimates acoustics from contextualized articulatory gestures. Then we drive the DNN with articulatory gestures from a reference native speaker, mapped to the non-native articulatory space via a Procrustes transform. We evaluate the accent-conversion performance of the DNN through a series of listening tests of intelligibility, voice identity, and non-native accentedness. Compared to a baseline method based on Gaussian mixture models, the DNN accent conversions were found to be 31% more intelligible and were perceived as more native-like in 68% of the cases. The DNN also succeeded in preserving the voice identity of the non-native speaker.
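The cross-speaker mapping step can be illustrated with a generic Procrustes fit. The sketch below uses synthetic 2-D points and NumPy; the actual articulatory feature dimensions and fitting details of the paper are not given in the abstract, so everything here is an illustrative assumption.

```python
import numpy as np

def procrustes_map(src, dst):
    """Similarity transform (scale, rotation, translation) mapping the
    point set `src` onto `dst` in the least-squares sense, so that
    scale * src @ R + t approximates dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(S.T @ D)
    R = U @ Vt                              # optimal orthogonal map
    scale = sig.sum() / (S ** 2).sum()      # optimal isotropic scale
    t = mu_d - scale * mu_s @ R
    return scale, R, t

# Sanity check on synthetic 2-D "articulatory" frames: recover a known
# rotation + scale + shift (a stand-in for native -> non-native space).
rng = np.random.default_rng(1)
native = rng.normal(size=(50, 2))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
nonnative = 1.7 * native @ R_true + np.array([0.3, -0.2])
scale, R, t = procrustes_map(native, nonnative)
mapped = scale * native @ R + t
```

With noise-free data the fit recovers the transform exactly; with real articulatory data it would be a least-squares approximation.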
Performance of a low-power wide-area network based on LoRa technology: Doppler robustness, scalability, and coverage
The article provides an analysis and reports experimental validation of the various performance metrics of the LoRa low-power wide-area network technology. The LoRa modulation is based on chirp spread spectrum, which enables the use of low-quality oscillators in the end device and makes synchronization faster and more reliable. Moreover, LoRa technology provides over a 150 dB link budget, giving good coverage. Therefore, LoRa seems to be quite a promising option for implementing communication in many diverse Internet of Things applications. In this article, we first briefly overview the specifics of the LoRa technology and analyze the scalability of the LoRa wide-area network. Then, we introduce the setups of the performance measurements. The results show that using a transmit power of 14 dBm and the highest spreading factor of 12, more than 60% of the packets are received from a distance of 30 km on water. With the same configuration, we measured the performance of LoRa communication in mobile scenarios. The presented results reveal that at around 40 km/h, the communication performance degrades, because the duration of the LoRa-modulated symbol exceeds the channel coherence time. However, the communication link is expected to be more reliable when lower spreading factors are used.
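The coherence-time argument can be checked with the standard LoRa symbol-duration relation T_sym = 2^SF / BW; the 125 kHz bandwidth used below is a typical channel width, assumed here rather than stated in the abstract. A minimal sketch:

```python
def lora_symbol_time(sf: int, bw_hz: float) -> float:
    """Duration of one LoRa symbol: 2^SF chips at chip rate BW."""
    return (2 ** sf) / bw_hz

# At SF12 over a 125 kHz channel one symbol lasts ~32.8 ms, so channel
# coherence times of the same order (fast mobility) start to corrupt
# symbols -- consistent with the degradation observed around 40 km/h.
# At SF7 a symbol is ~1 ms, which is why lower spreading factors are
# expected to be more robust in mobile scenarios.
for sf in (7, 12):
    print(f"SF{sf}: {lora_symbol_time(sf, 125e3) * 1e3:.3f} ms")
```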
Aspect Extraction Performance with POS Tag Pattern of Dependency Relation in Aspect-based Sentiment Analysis
The most important task in aspect-based sentiment analysis (ABSA) is aspect and sentiment word extraction. It is challenging to correctly identify and extract each aspect and its associated sentiment word in review sentences that contain multiple aspects with varying polarities. By exploiting the dependency relations between words in a review, the multiple aspects and their corresponding sentiments can be identified. However, not all types of dependency relation patterns are able to extract candidate aspect and sentiment word pairs. In this paper, a preliminary study was performed on the performance of different types of dependency relations with different POS tag patterns in pre-extracting candidate aspects from customer reviews. The results contribute to the identification of the specific types of dependency relations, with their POS tag patterns, that lead to high aspect extraction performance. The combination of these dependency relations offers a solution for both single-aspect, single-sentiment and multi-aspect, multi-sentiment cases.
ICE (ifosfamide, carboplatin, etoposide) as second-line chemotherapy in relapsed or primary progressive aggressive lymphoma--the Nordic Lymphoma Group experience.
OBJECTIVE To evaluate ICE (ifosfamide, carboplatin, etoposide) as second-line chemotherapy in relapsed or primary progressive aggressive lymphoma, in terms of objective response rate (ORR) and peripheral blood stem cell (PBSC) harvest mobilization rate. PATIENT POPULATION A total of 40 patients were included, with a median age of 57 yr. The major histopathological subgroup was diffuse large B-cell lymphoma (n = 27). The indication for ICE was relapse in 23 patients, primary progressive disease in 11, transformation in four and adjuvant primary chemotherapy in one patient. RESULTS After three cycles of ICE, the ORR was 59%. Among patients with primary progressive disease, ORR was 36% (four of 11). A PBSC harvest after ICE could be performed in 11 of 20 patients, and was sufficient for stem cell rescue in 10 of 20. The median number of collected CD34+ cells was 3.6 × 10^6 (range 1.4-12.5). In six of 10 patients, an adequate PBSC harvest could be performed with a second mobilization regimen. CONCLUSION In this patient population, the rate of response to ICE was comparable with other second-line regimens used in aggressive lymphoma. The rate of harvest failure (45%) was disappointingly high, compared with previous reports, possibly because of patient selection or differences in granulocyte colony-stimulating factor (G-CSF) dosage.
Flipped classroom — Students as producers
Flipped classroom is something that more and more teachers add to their teaching plans. Using video recordings of lectures as support for the students, and then focusing more on working with the curriculum in class, has become a method adopted by an increasing number of lecturers. In higher education the students are adults, which implies that the form of lecturing must be adapted to adults. From the area of organizational learning, and from andragogy, the key to learning lies in motivation; motivation is triggered by engagement, which in its turn stems from involvement. However, involving learners in their own learning process is also about "letting go" of the teacher's full control. But is it necessary to maintain control? Is it possible to view the undertaking as a learning experience for the teacher/lecturer as well? What control should be exercised, and what can one let go of? The research done at Hedmark University of Applied Sciences shows some interesting features. The courses have been "Learning Organizations" (autumn) and "Knowledge Management" (spring). The lectures have been in the form of streaming video, and each course is organized as three full-day seminars per semester. Each day has had a similar approach: a browse through the different chapters that are going to be discussed, followed by solving assignments related to the presented topics, first in small groups, then in plenary. Before the lunch break, the students present suggestions for possible new assignments. During the lunch break, the lecturer writes up the assignment using the input from the students, with a quality check that the topics are within the scope of the seminar. After the lunch break, the students solve the assignment, first in small groups, then in plenary. The assignment and solution(s) are discussed using the following standard: 1. What did we learn from the assignment? 2. What did we learn from making the assignment? 3.
Which issues raised in the assignment could be elaborated further, either as a mandatory assignment (fall) or an exam (fall and spring)? It is important to be clear and unambiguous about the learning objective of the course, and to keep the scope within the limitations of the main literature. (This does not, however, exclude added resources such as research papers, external links, etc.) Note also that there is a balance between the literature of the curriculum and the way the courses are taught. The course on "Learning Organizations" includes a section on how adults learn, and thus the students are "convinced" about the method of teaching. The course "Knowledge Management" has "Learning Organizations" as a prerequisite, so it also "inherits" this way of lecturing/teaching/learning. This involvement, the students claim, has contributed to enhancing their learning outcomes. The students seem to grow accustomed to the organization and "expectations" of the program. The average grades from last fall have also improved, from an average of C to an average of B. The activity in the classroom has shifted from the front of the classroom to the whole classroom. A few observations made during this process are: 1. The students are far more strict than the lecturer. 2. The students suggest wider assignments than those suggested by the lecturer. 3. The lecturer receives numerous tips and hints that support creating new assignments. 4. Even if it is not six hours of lecturing, it is a demanding task to ensure that the assignments and solutions are at all times within the framework of the learning objective. 5. For the second and third seminars, it is important to seek to include at least parts of the previous literature. The paper will detail the different issues tied to the process of this "flip" and seek to explain the findings using relevant theory.
Phantom Limb Pain: Mechanisms and Treatment Approaches
The vast amount of research over the past decades has significantly added to our knowledge of phantom limb pain. Multiple factors including site of amputation or presence of preamputation pain have been found to have a positive correlation with the development of phantom limb pain. The paradigms of proposed mechanisms have shifted over the past years from the psychogenic theory to peripheral and central neural changes involving cortical reorganization. More recently, the role of mirror neurons in the brain has been proposed in the generation of phantom pain. A wide variety of treatment approaches have been employed, but mechanism-based specific treatment guidelines are yet to evolve. Phantom limb pain is considered a neuropathic pain, and most treatment recommendations are based on recommendations for neuropathic pain syndromes. Mirror therapy, a relatively recently proposed therapy for phantom limb pain, has mixed results in randomized controlled trials. Most successful treatment outcomes include multidisciplinary measures. This paper attempts to review and summarize recent research relative to the proposed mechanisms of and treatments for phantom limb pain.
Treatment of Chronic Hepatitis C in Naive Patients
Treatment of naive patients with chronic hepatitis C has already been reviewed in international consensus meetings [1, 2]. The first consensus was held in Paris and standard interferon in combination with ribavirin was judged to be the best treatment for naive patients [1]. Later in 2002 the National Institutes of Health (NIH) consensus meeting stated that pegylated interferon in combination with ribavirin was the optimal treatment for chronic hepatitis C [2, 3]. There is no doubt that pegylated interferon is better than standard interferon and results in higher response rates, is easier to administer, and generally allows a better quality of life for patients during treatment, both as monotherapy and in combination with ribavirin [4-9]. Thus, at present the gold standard for treatment of naive patients with chronic hepatitis C is pegylated interferon in combination with ribavirin [2].
STUDENT MOTIVATION IN PHYSICAL EDUCATION - THE EVIDENCE IN A NUTSHELL
Student motivation plays an important role in the teaching and learning process in general educational settings as well as in physical education (PE). This review should provide a brief and concise overview of the empirical evidence that is available regarding student motivation in PE. The review will organize research findings on student motivation in PE according to its relation to physical activity, motivational factors, barriers to motivation, motivational profiles, and interventions.
Association rules mining analysis of app usage based on mobile traffic flow data
With the rapid development of the mobile Internet, more and more Apps appear in people's daily lives. It is important to analyze the relations among Apps, which is helpful for network management and control. In this paper, we utilize network footprint (NFP) data, which consists of DPI data from ISPs and crawler data from the Web, for App usage analysis. Focusing on the most popular Apps in China, we propose a distributed NFP data collection and processing framework. We perform association rule mining on the NFP data using the Apriori and MS-Apriori algorithms. Experimental results validate our proposed method and reveal some interesting association rules among Apps.
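As an illustration of the mining step, here is a minimal Apriori over toy transactions. The App names are hypothetical, and the MS-Apriori variant (per-item minimum supports) and the distributed collection framework are outside this sketch:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: all itemsets with support >= min_support."""
    n = len(transactions)
    tx = [frozenset(t) for t in transactions]
    # Frequent 1-itemsets.
    items = {i for t in tx for i in t}
    freq = {frozenset([i]): sum(i in t for t in tx) / n for i in items}
    freq = {s: v for s, v in freq.items() if v >= min_support}
    result = dict(freq)
    k = 2
    while freq:
        # Candidate generation: unions of frequent (k-1)-itemsets that
        # share k-2 items (i.e. whose union has exactly k items).
        prev = list(freq)
        cands = {a | b for a, b in combinations(prev, 2) if len(a | b) == k}
        freq = {}
        for c in cands:
            sup = sum(c <= t for t in tx) / n   # fraction containing c
            if sup >= min_support:
                freq[c] = sup
        result.update(freq)
        k += 1
    return result

# Toy "App usage sessions" (hypothetical App names).
sessions = [["wechat", "weibo"],
            ["wechat", "taobao"],
            ["wechat", "weibo", "taobao"],
            ["weibo"]]
frequent = apriori(sessions, min_support=0.5)
```

Association rules would then be derived from `frequent` by comparing supports of itemsets and their subsets (confidence), which is omitted here.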
An Achilles Heel in Signature-Based IDS: Squealing False Positives in SNORT
We report a vulnerability of network signature-based IDSs, which we have tested using Snort and which we call "squealing". This vulnerability has significant implications, since it can easily be generalized to any IDS. The vulnerability of signature-based IDSs to high false positive rates has been well documented, but we go further to show (at a high level) how packets can be crafted to match attack signatures such that alarms on a target IDS can be conditioned or disabled and then exploited. This is the first academic treatment of this vulnerability, which has already been reported to the CERT Coordination Center and the National Infrastructure Protection Center. Independently, other tools based on "squealing" are poised to appear that, while validating our ideas, also give cause for concern. Keywords: squealing, false positives, intrusion detection, IDS, signature-based, misuse behavior, network intrusion detection, Snort
Practical Image-Based Relighting and Editing with Spherical-Harmonics and Local Lights
We present a practical technique for image-based relighting under environmental illumination which greatly reduces the number of required photographs compared to traditional techniques, while still achieving high quality editable relighting results. The proposed method employs an optimization procedure to combine spherical harmonics, a global lighting basis, with a set of local lights. Our choice of lighting basis captures both low and high frequency components of typical surface reflectance functions while generating close approximations to the ground truth with an order of magnitude less data. This technique benefits the acquisition process by reducing the number of required photographs, while simplifying the modification of reflectance data and enabling artistic lighting edits for post-production effects. Here, we demonstrate two desirable lighting edits, modifying light intensity and angular width, employing the proposed lighting basis.
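The basis-fitting idea can be sketched as a linear least-squares problem: each pixel's reflectance is expressed in a combined basis of global SH terms and local lights, fit from photographs taken under known lighting. All dimensions below are illustrative assumptions, and the paper's actual optimization (including its choice of local lights) is more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): 9 SH coefficients,
# 5 local lights, 64 pixels, 40 photographs under known lighting.
n_sh, n_local, n_pix, n_photo = 9, 5, 64, 40
n_basis = n_sh + n_local

# Each photograph's lighting expressed in the combined basis
# (global SH terms + local-light intensities).
L = rng.normal(size=(n_photo, n_basis))

# Ground-truth per-pixel reflectance coefficients, and the photographs
# they would produce under each lighting condition.
T_true = rng.normal(size=(n_basis, n_pix))
photos = L @ T_true

# Recover per-pixel coefficients by least squares; relighting under a
# new lighting vector is then a single matrix product, and editing a
# light amounts to changing one entry of that vector.
T, *_ = np.linalg.lstsq(L, photos, rcond=None)
new_light = rng.normal(size=n_basis)
relit = new_light @ T
```

With more photographs than basis terms and generic lighting, the fit is exact here; the paper's point is that this combined basis needs an order of magnitude fewer photographs than a dense reflectance-field capture.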
High-Performance Pattern-Matching for Intrusion Detection
New generations of network intrusion detection systems create the need for advanced pattern-matching engines. This paper presents a novel scheme for pattern matching, called BFPM, that exploits a hardware-based programmable state-machine technology to achieve deterministic processing rates on the order of 10 Gb/s for FPGA and at least 20 Gb/s for ASIC implementations, independent of input and pattern characteristics. BFPM supports dynamic updates and is one of the most storage-efficient schemes in the industry, supporting two thousand patterns extracted from Snort, with a total of 32K characters, in only 128 KB of memory.
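BFPM's hardware state encoding is not described in this abstract, but the underlying idea of a deterministic multi-pattern state machine (constant work per input symbol, regardless of how many patterns are loaded) can be sketched in software with the classic Aho-Corasick construction. This is an illustrative analogue, not the BFPM scheme itself:

```python
from collections import deque

def build_ac(patterns):
    """Build a classic Aho-Corasick automaton (goto/fail/output tables)."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())          # depth-1 states fail to the root
    while q:
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            nxt = goto[f].get(ch, 0)
            fail[t] = nxt if nxt != t else 0
            out[t] |= out[fail[t]]       # inherit matches ending here
    return goto, fail, out

def match(text, patterns):
    """One pass over `text`; returns (start_index, pattern) hits."""
    goto, fail, out = build_ac(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits.extend((i - len(p) + 1, p) for p in out[s])
    return hits
```

A hardware engine like the one described would encode these transitions in programmable memory so that each input byte costs one deterministic table lookup, which is what makes the throughput independent of input and pattern characteristics.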
Attitudinal effects of ad-evoked moods and emotions: The moderating role of motivation
The role of the moods and emotions evoked by advertisements in shaping the amount and valence of cognitive elaboration, as well as consumer brand attitudes, has recently begun to elicit research interest. It is shown that moods and emotions appear to influence brand attitudes more in low personal relevance (“low motivational involvement”) situations than under high-motivation conditions, by suppressing counterargumentation more in such low motivation situations. Implications are discussed for advertising theory and practice © 1994 John Wiley & Sons, Inc.
Ontology Mediation, Merging and Aligning
Ontology mediation is a broad field of research which is concerned with determining and overcoming differences between ontologies in order to allow the reuse of such ontologies, and the data annotated using these ontologies, throughout different heterogeneous applications. Ontology mediation can be subdivided into three areas: ontology mapping, which is mostly concerned with the representation of correspondences between ontologies; ontology alignment, which is concerned with the (semi-)automatic discovery of correspondences between ontologies; and ontology merging, which is concerned with creating a single new ontology, based on a number of source ontologies. This chapter reviews the work which has been done in the three mentioned areas and proposes an integrated approach to ontology mediation in the area of knowledge management. A language is developed for the representation of correspondences between ontologies. An algorithm, which generalizes current state-of-the-art alignment algorithms, is developed for the (semi-)automated discovery of such mappings. A tool is presented for browsing and editing ontology mappings. An ontology mapping can be used for a variety of different tasks, such as transforming data between different representations and querying different heterogeneous knowledge bases.
The Regulative and the Constitutive In Kant’s and Hegel’s Theories of History
I show one reason why Hegel's theory of history is an improvement over Kant's. There is an ambiguity in Kant's theory of history. He wants, on the one hand, to distinguish empirical history (and, by extension, the other empirical sciences which constitute experience) from reason's a priori regulative role in history. On the other hand, his view of the nature of the sciences and the role of reason precludes such a separation. I trace this problem to the different roles assigned to the faculties of understanding and reason in our experience. In Hegel's theory of history, both reason and understanding together constitute the sciences, and thus experience. Hegel argues that history is a unified field employing both understanding and reason. I conclude that the more consistent theory of history for idealists is Hegel's, and that this consistency partially explains the movement in German idealism from Kantian to Hegelian thought.
Osteoligamentous injuries of the medial ankle joint
Injuries of the ankle joint have a high incidence in daily life and sports, thus, playing an important socioeconomic role. Therefore, proper diagnosis and adequate treatment are mandatory. While most of the ligament injuries around the ankle joint are treated conservatively, great controversy exists on how to treat deltoid ligament injuries in ankle fractures. Missed injuries and inadequate treatment of the medial ankle lead to inferior outcome with instability, progressive deformity, and ankle joint osteoarthritis.
A tale of two cannabinoids: the therapeutic rationale for combining tetrahydrocannabinol and cannabidiol.
This study examines the current knowledge of physiological and clinical effects of tetrahydrocannabinol (THC) and cannabidiol (CBD) and presents a rationale for their combination in pharmaceutical preparations. Cannabinoid and vanilloid receptor effects as well as non-receptor mechanisms are explored, such as the capability of THC and CBD to act as anti-inflammatory substances independent of cyclo-oxygenase (COX) inhibition. CBD is demonstrated to antagonise some undesirable effects of THC including intoxication, sedation and tachycardia, while contributing analgesic, anti-emetic, and anti-carcinogenic properties in its own right. In modern clinical trials, this has permitted the administration of higher doses of THC, providing evidence for clinical efficacy and safety for cannabis based extracts in treatment of spasticity, central pain and lower urinary tract symptoms in multiple sclerosis, as well as sleep disturbances, peripheral neuropathic pain, brachial plexus avulsion symptoms, rheumatoid arthritis and intractable cancer pain. Prospects for future application of whole cannabis extracts in neuroprotection, drug dependency, and neoplastic disorders are further examined. The hypothesis that the combination of THC and CBD increases clinical efficacy while reducing adverse events is supported.
IoMT: A Reliable Cross Layer Protocol for Internet of Multimedia Things
The futuristic trend is toward the merging of the cyber world with the physical world, leading to the development of the Internet of Things (IoT) framework. Current research is focused on scalar data-based IoT applications, thus leaving a gap between the services and benefits of IoT objects and multimedia objects. Internet of Multimedia Things (IoMT) applications require new protocols to cope with the heterogeneity among the various communicating objects. In this paper, we present a cross-layer protocol for IoMT. In the proposed methodology, we consider cross communication among the physical, data link, and routing layers for multimedia applications. In multimedia applications, response times must be low and communication among the devices must be energy efficient. The proposed protocol addresses both issues, and comparative simulations in MATLAB show that it outperforms traditional protocols and provides an optimized solution for IoMT.
TMSUI: A Trust Management Scheme of USB Storage Devices for Industrial Control Systems
The security of sensitive data and the safety of control signals are two core issues in industrial control systems (ICSs). However, the prevalence of USB storage devices poses a great challenge to protecting ICSs in those respects. Unfortunately, there is currently no solution designed specifically for ICSs that provides a complete defense against data transmission between untrusted USB storage devices and critical equipment without forbidding normal USB device function. This paper proposes a trust management scheme of USB storage devices for ICSs (TMSUI). By fully considering the background of the application scenarios, TMSUI is designed on the basis of a security chip to authorize a given USB storage device to access only specific protected terminals in the ICS for a particular period of time. The issues of digital forensics and revocation of authorization are discussed. A prototype system is implemented, and its evaluation indicates that TMSUI effectively meets the security goals with high compatibility and good performance.
Implementation of multiport dc-dc converter-based Solid State Transformer in smart grid system
A solid-state transformer (SST) would be at least as efficient as a conventional version but would provide other benefits as well, particularly as renewable power sources become more widely used. Among its more notable strong points are on-demand reactive power support for the grid, better power quality, current limiting, management of distributed storage devices, and a dc bus. Most of the nation's power grid currently operates one way - power flows from the utility to the consumer - and traditional transformers simply change voltage from one level to another. But smart transformers, based on power semiconductor switches, are more versatile. Not only can they change voltage levels, but they can also effectively control the power flow in both directions. The development of an SST that incorporates a DC-DC multiport converter to integrate both photovoltaic (PV) power generation and battery energy storage is presented in this paper. The DC-DC stage is based on a quad active-bridge (QAB) converter, which provides isolation not only for the load but also for the PV and storage. The AC-DC stage is implemented with a pulse-width-modulated (PWM) single-phase rectifier. Cascaded SISO controllers are designed for the AC-DC stage, and a novel technique that complements the SISO controller by taking into account the cross-coupling characteristics of the QAB converter is also presented herein. The QAB demanded power is calculated in the QAB controls and then fed into the rectifier controls in order to minimize the effect of the interaction between the two SST stages. The dynamic performance of the designed control loops based on the proposed control strategies is verified through extensive simulation of the SST average and switching models.
Do Semantic Parts Emerge in Convolutional Neural Networks?
Semantic object parts can be useful for several visual recognition tasks. Lately, these tasks have been addressed using Convolutional Neural Networks (CNN), achieving outstanding results. In this work we study whether CNNs learn semantic parts in their internal representation. We investigate the responses of convolutional filters and try to associate their stimuli with semantic parts. We perform two extensive quantitative analyses. First, we use ground-truth part bounding-boxes from the PASCAL-Part dataset to determine how many of those semantic parts emerge in the CNN. We explore this emergence for different layers, network depths, and supervision levels. Second, we collect human judgements in order to study what fraction of all filters systematically fire on any semantic part, even if not annotated in PASCAL-Part. Moreover, we explore several connections between discriminative power and semantics. We find out which are the most discriminative filters for object recognition, and analyze whether they respond to semantic parts or to other image patches. We also investigate the other direction: we determine which semantic parts are the most discriminative and whether they correspond to those parts emerging in the network. This enables us to gain an even deeper understanding of the role of semantic parts in the network.
A cortical motor nucleus drives the basal ganglia-recipient thalamus in singing birds
The pallido-recipient thalamus transmits information from the basal ganglia to the cortex and is critical for motor initiation and learning. Thalamic activity is strongly inhibited by pallidal inputs from the basal ganglia, but the role of nonpallidal inputs, such as excitatory inputs from cortex, remains unclear. We simultaneously recorded from presynaptic pallidal axon terminals and postsynaptic thalamocortical neurons in a basal ganglia–recipient thalamic nucleus that is necessary for vocal variability and learning in zebra finches. We found that song-locked rate modulations in the thalamus could not be explained by pallidal inputs alone and persisted following pallidal lesion. Instead, thalamic activity was likely driven by inputs from a motor cortical nucleus that is also necessary for singing. These findings suggest a role for cortical inputs to the pallido-recipient thalamus in driving premotor signals that are important for exploratory behavior and learning.
The Effect of Traditional Vocal Warm-up Versus Semi-Occluded Vocal Tract Exercises on the Acoustic Parameters of Voice
Contents: I. Literature Review; II. Justification; III. Method; IV. Results; V. Discussion; References; Appendices.
Early changes in biochemical markers of bone turnover predict bone mineral density response to antiresorptive therapy in Korean postmenopausal women with osteoporosis.
Biochemical markers of bone turnover have been suggested to be useful in monitoring the efficacy of antiresorptive therapy. In this study, we investigated the predictive value of bone turnover markers for determining the short-term response in bone mineral density (BMD) and for identifying nonresponders in 138 postmenopausal women (mean age 58 years) with osteoporosis treated with either hormone therapy (HT) or alendronate. Urinary type I collagen N-telopeptide (NTx) and serum osteocalcin (OC) were measured at baseline and at 3 and 6 months after treatment, and spine and femoral neck BMD at baseline and 12 months. Significant decreases in both NTx and OC were evident in women treated with antiresorptive agents as early as 3 months (p<0.01). The percent change of NTx at 3 months correlated with the percent change of spinal BMD at 12 months of treatment. When bone turnover markers were stratified by tertiles, the average rate of lumbar spine BMD gain increased significantly with increasing tertiles of the baseline value (p<0.05) and the percent change (p<0.05) of urinary NTx at 3 months of treatment. In terms of BMD response, urinary NTx at 3 months decreased significantly more in the BMD responder group than in the nonresponder group. Logistic regression analysis demonstrated that the percent change of NTx at 3 months is an independent predictor for identifying BMD nonresponders, defined as those whose BMD gain remained within the precision error range of dual-energy X-ray absorptiometry (DXA). We conclude that biochemical markers of bone turnover, especially the percent change in urinary NTx levels, can be used to determine the BMD response to antiresorptive therapy in Korean postmenopausal women with osteoporosis.
Is the Most Effective Team Leadership Shared? The Impact of Shared Leadership, Age Diversity, and Coordination on Team Performance
In the present paper we examine the moderating effects of age diversity and team coordination on the relationship between shared leadership and team performance. Using a field sample of 96 individuals in 26 consulting project teams, team members assessed their team's shared leadership and coordination. Six to eight weeks later, supervisors rated their teams' performance. Results indicated that shared leadership predicted team performance, and that both age diversity and coordination moderated the impact of shared leadership on team performance. Shared leadership was positively related to team performance when age diversity and coordination were low, whereas higher levels of age diversity and coordination appeared to compensate for lower levels of shared leadership. In particular, strong effects of shared leadership on team performance were evident when both age diversity and coordination were low, whereas shared leadership was not related to team performance when both were high.
A Fast Continuous Max-Flow Approach to Non-convex Multi-labeling Problems
This work addresses a class of multi-labeling problems over a spatially continuous image domain, where the data fidelity term can be any bounded function, not necessarily convex. Two total-variation-based regularization terms are considered: the first favors a linear relationship between the labels, and the second is independent of the label values (Potts model). In the spatially discrete setting, Ishikawa [33] showed that the first of these labeling problems can be solved exactly by standard max-flow and min-cut algorithms over specially designed graphs. We propose a continuous analogue of Ishikawa's graph construction [33] by formulating continuous max-flow and min-cut models over a specially designed domain. These max-flow and min-cut models are equivalent under a primal-dual perspective. They can be seen as exact convex relaxations of the original problem and can be used to compute global solutions. Fast continuous max-flow-based algorithms are proposed, whose efficiency and reliability can be validated both by standard optimization theory and by experiments. In comparison to previous work [53, 52] on continuous generalizations of Ishikawa's construction, our approach differs in the max-flow dual treatment, which leads to the following main advantages: a new theoretical framework which embeds the label-order constraints implicitly and naturally results in optimal labeling functions taking values in any predefined finite label set; a more general thresholding theorem which, under some conditions, allows producing a larger set of non-unique solutions to the original problem; and numerical experiments showing that the new max-flow algorithms converge faster than the fast primal-dual algorithm of [53, 52], with a speedup factor that is especially significant at high precision. Finally, our dual formulation and algorithms are extended to a recently proposed convex relaxation of the Potts model [50], thereby avoiding expensive iterative computation of projections without closed-form solution.
T-Hoarder: A framework to process Twitter data streams
With the eruption of online social networks, like Twitter and Facebook, a series of new APIs have appeared that allow access to the data these new sources of information accumulate. One of the most popular online social networks is the micro-blogging site Twitter. Its APIs allow many machines to simultaneously access the torrent of Twitter data, listening to tweets and retrieving other useful information such as user profiles. A number of tools have appeared for processing Twitter data with different algorithms and for different purposes. In this paper T-Hoarder is described: a framework that enables tweet crawling and data filtering, and that displays summarized and analytical information about Twitter activity with respect to a certain topic or event on a web page. This information is updated on a daily basis. The tool has been validated with real use cases that allow a series of analyses of the performance one may expect from this type of infrastructure.
Generative and Discriminative Text Classification with Recurrent Neural Networks
We empirically characterize the performance of discriminative and generative LSTM models for text classification. We find that although RNN-based generative models are more powerful than their bag-of-words ancestors (e.g., they account for conditional dependencies across words in a document), they have higher asymptotic error rates than discriminatively trained RNN models. However, we also find that generative models approach their asymptotic error rate more rapidly than their discriminative counterparts, the same pattern that Ng & Jordan (2001) proved holds for linear classification models that make more naïve conditional independence assumptions. Building on this finding, we hypothesize that RNN-based generative classification models will be more robust to shifts in the data distribution. This hypothesis is confirmed in a series of experiments in zero-shot and continual learning settings that show that generative models substantially outperform discriminative models.
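The learning-curve contrast that Ng & Jordan describe can be illustrated with much simpler stand-ins than LSTMs. The sketch below is an illustrative toy, not the paper's setup: a hand-rolled naive Bayes classifier (generative) against logistic regression (discriminative) on synthetic binary features, where the generative model is already accurate from a handful of labeled examples.

```python
import math, random

random.seed(0)
D = 20  # number of binary features

def sample(n, p_active):
    """Synthetic examples: class y alternates; feature j fires w.p. p_active[y][j]."""
    data = []
    for i in range(n):
        y = i % 2
        x = [1 if random.random() < p_active[y][j] else 0 for j in range(D)]
        data.append((x, y))
    return data

p_active = [[0.3] * D, [0.7] * D]  # well-separated classes
train, test = sample(200, p_active), sample(400, p_active)

def nb_fit(data):
    """Generative model: class priors and per-feature Bernoulli parameters,
    estimated in closed form with Laplace smoothing."""
    counts, totals = [[1] * D, [1] * D], [2, 2]
    for x, y in data:
        totals[y] += 1
        for j in range(D):
            counts[y][j] += x[j]
    theta = [[counts[y][j] / totals[y] for j in range(D)] for y in range(2)]
    prior = [totals[y] / sum(totals) for y in range(2)]
    return theta, prior

def nb_predict(model, x):
    theta, prior = model
    scores = []
    for y in range(2):
        s = math.log(prior[y])
        for j in range(D):
            s += math.log(theta[y][j] if x[j] else 1.0 - theta[y][j])
        scores.append(s)
    return scores.index(max(scores))

def lr_fit(data, epochs=200, step=0.5):
    """Discriminative model: logistic regression by batch gradient descent."""
    w, b = [0.0] * D, 0.0
    n = len(data)
    for _ in range(epochs):
        gw, gb = [0.0] * D, 0.0
        for x, y in data:
            z = b + sum(wj * xj for wj, xj in zip(w, x))
            err = 1.0 / (1.0 + math.exp(-z)) - y
            gb += err
            for j in range(D):
                gw[j] += err * x[j]
        b -= step * gb / n
        w = [wj - step * gj / n for wj, gj in zip(w, gw)]
    return w, b

def lr_predict(model, x):
    w, b = model
    return 1 if b + sum(wj * xj for wj, xj in zip(w, x)) > 0 else 0

def accuracy(predict, model, data):
    return sum(predict(model, x) == y for x, y in data) / len(data)

acc_nb = accuracy(nb_predict, nb_fit(train), test)
acc_lr = accuracy(lr_predict, lr_fit(train), test)
acc_nb_small = accuracy(nb_predict, nb_fit(train[:10]), test)  # 5 per class
```

With ample data both classifiers do well; the point of the sketch is that the generative model reaches useful accuracy from only ten labeled examples, mirroring the faster approach to the asymptote.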
A UWB Unidirectional Antenna With Dual-Polarization
A novel ±45° dual-polarized unidirectional antenna element is presented, consisting of two cross center-fed tapered mono-loops and two cross electric dipoles located against a reflector for ultrawideband applications. The operation principle of the antenna, including the use of an elliptically tapered transmission line for transiting the unbalanced energy to the balanced energy, is described. Designs with different reflectors, planar or conical, are investigated. A measured overlapped impedance bandwidth of 126% (SWR < 2) is demonstrated. Due to the complementary nature of the structure, the antenna has a relatively stable broadside radiation pattern with low cross polarization and low back lobe radiation over the operating band. The measured gain of the proposed antenna varies from 4 to 13 dBi and 7 to 14.5 dBi for port 1 and port 2, respectively, over the operating band, when mounted against a conical backed reflector. The measured coupling between the two ports is below -25 dB over the operating band.
Convolutional recurrent neural networks: Learning spatial dependencies for image representation
In existing convolutional neural networks (CNNs), both convolution and pooling are performed locally on image regions, so no contextual dependencies between different image regions are taken into consideration. Such dependencies represent useful spatial structure information in images. Recurrent neural networks (RNNs), in contrast, are designed for learning contextual dependencies among sequential data by using recurrent (feedback) connections. In this work, we propose the convolutional recurrent neural network (C-RNN), which learns the spatial dependencies between image regions to enhance the discriminative power of image representations. The C-RNN is trained end-to-end from raw pixel images: CNN layers first generate middle-level features, and an RNN layer is then learned to encode spatial dependencies among them. The C-RNN can learn better image representations, especially for images with strong spatial contextual dependencies. Our method achieves competitive performance on ILSVRC 2012, SUN 397, and MIT Indoor.
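The core C-RNN idea, running a recurrent layer over a grid of region features so that each region's encoding depends on spatial context, can be sketched as follows. The feature grid and the untrained, randomly initialized RNN weights below are stand-ins for the paper's CNN features and learned parameters, and the raster scan order is one of several possible choices.

```python
import math, random

random.seed(1)
H, W, F, S = 4, 4, 8, 6   # grid height/width, feature dim, RNN state dim

# Stand-in for CNN middle-level features: one F-dim vector per image region.
features = [[[random.gauss(0, 1) for _ in range(F)] for _ in range(W)]
            for _ in range(H)]

# Simple Elman-style RNN parameters (random, untrained: a sketch only).
Wx = [[random.gauss(0, 0.3) for _ in range(F)] for _ in range(S)]
Wh = [[random.gauss(0, 0.3) for _ in range(S)] for _ in range(S)]

def step(h, x):
    """One recurrent update: h' = tanh(Wx @ x + Wh @ h)."""
    return [math.tanh(sum(Wx[i][j] * x[j] for j in range(F)) +
                      sum(Wh[i][j] * h[j] for j in range(S)))
            for i in range(S)]

def encode(grid):
    """Scan regions in raster order so each state depends on all
    previously visited regions, yielding a context-aware representation."""
    h = [0.0] * S
    for row in grid:
        for x in row:
            h = step(h, x)
    return h

rep = encode(features)
```

Because the state is carried across regions, permuting the grid changes the final representation, which is exactly the spatial-context sensitivity that region-wise pooling lacks.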
A Modular Framework for Versatile Conversational Agent Building
This paper illustrates a web-based infrastructure for an architecture for conversational agents equipped with a modular knowledge base. This solution has the advantage of allowing the building of specific modules that deal with particular features of a conversation (ranging from its topic to the chatbot's manner of reasoning), which enhances the agent's interaction capabilities. The approach simplifies the design of the chatbot knowledge base: it can be extended, generalized, or even restricted to suit specific dialogue tasks as closely as possible.
Photonic Demodulator With Sensitivity Control
A current-assisted photonic demodulator for use as a pixel in a 3-D time-of-flight imager shows nearly 100% static demodulation contrast and is operable beyond 30 MHz. An integrated tunable sensitivity control is also presented for increasing the distance measurement range and avoiding unwanted saturation during integration periods. This is achieved by applying a voltage to a dedicated drain tap, which quenches the sensor sensitivity to below 1%.
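The demodulation arithmetic behind such time-of-flight pixels is standard four-phase (four-bucket) continuous-wave demodulation; the sketch below shows this generic formula, not the specific circuit of this paper.

```python
import math

C = 299_792_458.0   # speed of light (m/s)
F_MOD = 30e6        # modulation frequency; 30 MHz as in the abstract

def tof_distance(a0, a1, a2, a3, f_mod=F_MOD):
    """Distance from four correlation samples taken at 0/90/180/270 degrees:
    phase = atan2(a1 - a3, a0 - a2), d = c * phase / (4 * pi * f_mod).
    Offsets and amplitude scaling cancel in the two differences."""
    phase = math.atan2(a1 - a3, a0 - a2) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * f_mod)

def ideal_samples(distance, amplitude=1.0, offset=2.0, f_mod=F_MOD):
    """Noise-free sinusoidal correlation samples for a target at `distance` m."""
    phi = 4.0 * math.pi * f_mod * distance / C
    return [offset + amplitude * math.cos(phi - k * math.pi / 2.0)
            for k in range(4)]
```

At 30 MHz the unambiguous range is c / (2 f_mod), roughly 5 m, which is why a tunable sensitivity that avoids saturation matters for longer integration periods.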
The social role of social media: the case of Chennai rains-2015
Social media has altered the way individuals communicate today. Individuals feel more connected on Facebook and Twitter, with greater freedom to chat and to share pictures and videos. Hence, social media is widely employed by various companies to promote their products and services and to establish better customer relationships. Owing to the increasing popularity of these platforms, their usage is also expanding significantly. Various studies have discussed the importance of social media in the corporate world for effective marketing communication, customer relationships, and firm performance, but none has focused on the social role of social media, i.e., in disaster resilience in India. Academicians and practitioners alike have advocated the use of social media in disaster resilience. This article focuses on the role that social media can play during natural disasters, with the help of the recent case of the Chennai floods in India. The study provides a better understanding of the role social media can play in natural disaster resilience in the Indian context.
Effects of acculturative stress on PTSD, depressive, and anxiety symptoms among refugees resettled in Australia and Austria
BACKGROUND Research indicates that exposure to war-related traumatic events impacts the mental health of refugees and leads to higher rates of posttraumatic stress disorder (PTSD), depression, and anxiety symptoms. Furthermore, stress associated with the migration process has also been shown to impact negatively on refugees' mental health, but the extent of these effects is debated, as the relationships between traumatic events, migration, and mental health outcomes are complex and poorly understood. OBJECTIVE This study aimed to examine the influence of trauma-related and post-migratory factors on symptoms of PTSD, depression, and anxiety in two samples of Bosnian refugees resettled in two different host nations: Austria and Australia. METHOD Using multiple recruitment methods, 138 participants were recruited to complete self-report measures assessing acculturative stress and PTSD, depressive, and anxiety symptoms. RESULTS Hierarchical regressions indicated that, after controlling for age, sex, and exposure to traumatic events, acculturative stress associated with post-migratory experiences predicted the severity of PTSD and anxiety symptoms, while depressive symptoms were predicted only by exposure to traumatic events. This model, however, was significant only for the Bosnian refugees resettled in Austria; for those resettled in Australia, PTSD, depressive, and anxiety symptoms were predicted only by traumatic exposure. CONCLUSION These findings point toward the importance of assessing both psychological and social stressors when assessing the mental health of refugees. Furthermore, the results draw attention to the influence of the host society on post-migratory adaptation and the mental health of refugees. Further research is needed to replicate these findings in other refugee samples in other host nations.
On automatic differentiation
In recent years, environmental problems have acquired a growing place in our society, and it has become necessary to find a lasting solution for nuclear waste storage. The storage strategy depends on the characteristics of the wastes, namely their activity and the lifetime of the radionuclides. After many studies on the topic, led in particular by ANDRA, deep geological storage is considered the reference strategy for long-lived, high- or medium-activity wastes. Simulations of radionuclide transport in the subsurface are needed to determine the impact of a possible propagation of radioelements. Modelling the flow in the porous media around the storage site requires knowledge of the physical parameters of the different geological layers. These parameters (porosity and diffusion) are not directly accessible by measurement, so an inverse problem must be solved to recover them.
How shall I trust the faceless and the intangible? A literature review on the antecedents of online trust
Trust is generally assumed to be an important precondition for people's adoption of electronic services. This paper provides an overview of the available research into the antecedents of trust in both commercial and non-commercial online transactions and services. A literature review was conducted covering empirical studies on people's trust in and adoption of computer-mediated services. Results are described using a framework of three clusters of antecedents: customer/client-based, website-based, and company/organization-based antecedents. Results show that there are many possible antecedents of trust in electronic services. The majority of the research has been conducted in the context of e-commerce; only a few studies are available in the domains of e-government and e-health. For many antecedents, some empirical support can be found, but the results are far from univocal. The research calls for more, and particularly more systematic, research attention for the antecedents of trust in electronic services. The review presented in this paper offers practitioners an overview of possibly relevant variables that may affect people's trust in electronic services. It also gives a state-of-the-art overview of the empirical support for the relevance of these variables.
Approximate String Matching with q-grams and Maximal Matches
Ukkonen, E., Approximate string matching with q-grams and maximal matches, Theoretical Computer Science 92 (1992) 191-211. We study approximate string matching in connection with two string distance functions that are computable in linear time. The first function is based on the so-called q-grams. An algorithm is given for the associated string-matching problem that finds the locally best approximate occurrences of pattern P, |P| = m, in text T, |T| = n, in time O(n log(m-q)). The occurrences with distance < k can be found in time O(n log k). The other distance function is based on finding maximal common substrings and allows a form of approximate string matching in time O(n). Both distances give a lower bound for the edit distance (in the unit cost model), which leads to fast hybrid algorithms for edit distance based string matching.
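The q-gram distance underlying the first function is simple to state: count the overlapping q-grams of each string and take the L1 difference of the two profiles. A minimal sketch of the profile distance only, not Ukkonen's sublinear matching algorithm:

```python
from collections import Counter

def qgram_profile(s, q):
    """Multiset of all overlapping substrings of length q."""
    return Counter(s[i:i + q] for i in range(len(s) - q + 1))

def qgram_distance(x, y, q):
    """L1 distance between q-gram profiles. Since a single edit changes
    at most q q-grams in each string, this distance is at most 2*q times
    the edit distance, giving the lower bound mentioned in the abstract."""
    px, py = qgram_profile(x, q), qgram_profile(y, q)
    return sum(abs(px[g] - py[g]) for g in set(px) | set(py))
```

The profile of each string is built in one linear pass, which is the linear-time computability the abstract refers to.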
Energy-Brushes: Interactive Tools for Illustrating Stylized Elemental Dynamics
Dynamic effects such as waves, splashes, fire, smoke, and explosions are an integral part of stylized animations. However, such dynamics are challenging to produce: manually sketching key-frames requires significant effort and artistic expertise, while physical simulation tools lack sufficient expressiveness and user control. We present an interactive interface for designing these elemental dynamics for animated illustrations. Users draw with coarse-scale energy brushes which serve as control gestures to drive detailed flow particles representing local velocity fields. These fields can convey both realistic and artistic effects based on user specification. This painting metaphor for creating elemental dynamics simplifies the process, provides artistic control, and preserves the fluidity of sketching. Our system is fast, stable, and intuitive. An initial user evaluation shows that even novice users with no prior animation experience can create intriguing dynamics using our system.
A variational characterisation of spherical designs
In this paper we first establish a new variational characterisation of spherical designs: it is shown that a set X_N = {x_1, ..., x_N} ⊂ S^d, where S^d := {x ∈ R^{d+1} : ∑_{j=1}^{d+1} x_j^2 = 1}, is a spherical L-design if and only if a certain non-negative quantity A_{L,N}(X_N) vanishes. By combining this result with a known "sampling theorem" for the sphere, we obtain the main result: if X_N ⊂ S^d is a stationary point set of A_{L,N} whose "mesh norm" satisfies h(X_N) < 1/(L+1), then X_N is a spherical L-design. The latter result seems to open a pathway to the elusive problem of proving (for fixed d) the existence of a spherical L-design with a number of points N of order (L+1)^d. A numerical example with d = 2 and L = 19 suggests that computational minimisation of A_{L,N} can be a valuable tool for the discovery of new spherical designs for moderate and large values of L.
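The characterisation rests on spherical-harmonic moments. The following is the standard formulation of the design condition; the exact normalisation of A_{L,N}(X_N) used in the paper may differ from the residual written here:

```latex
% X_N is a spherical L-design iff it integrates exactly all polynomials
% of degree at most L, i.e. all harmonic moments of degree 1..L vanish:
\frac{1}{N}\sum_{i=1}^{N} p(\mathbf{x}_i)
  = \int_{S^d} p \,\mathrm{d}\sigma
  \quad \text{for all polynomials } p,\ \deg p \le L
\;\Longleftrightarrow\;
\sum_{i=1}^{N} Y_{\ell,k}(\mathbf{x}_i) = 0,
  \quad 1 \le \ell \le L,\; 1 \le k \le Z(d,\ell).
% Hence a natural non-negative residual, vanishing exactly at L-designs, is
A_{L,N}(X_N) \;\propto\;
  \frac{1}{N^{2}} \sum_{\ell=1}^{L} \sum_{k=1}^{Z(d,\ell)}
  \Bigl(\sum_{i=1}^{N} Y_{\ell,k}(\mathbf{x}_i)\Bigr)^{2} \;\ge\; 0.
```

Here Z(d, ℓ) denotes the dimension of the space of spherical harmonics of degree ℓ on S^d; minimising such a residual is what the numerical example with d = 2, L = 19 exploits.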
Automatic retina exudates segmentation without a manually labelled training set
Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy which can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, two new methods for the detection of exudates are presented which do not use a supervised learning step; therefore, they do not require labelled lesion training sets, which are time-consuming to create, difficult to obtain, and prone to human error. We introduce a new dataset of fundus images from various ethnic groups and levels of DME which we have made publicly available. We evaluate our algorithm with this dataset and compare our results with two recent exudate segmentation algorithms. In all of our tests, our algorithms perform better than or comparably to these algorithms, with an order-of-magnitude reduction in computational time.
Effectiveness of screening instruments in detecting substance use disorders among prisoners.
This study examined the effectiveness of several screening instruments in detecting substance use disorders among prison inmates. A sample of 400 male inmates was administered eight different substance abuse screening instruments and the Structured Clinical Interview for DSM-IV (SCID-IV), Version 2.0, Substance Abuse Disorders module. The latter was used as a diagnostic criterion measure to determine the presence of substance use disorders. Based on positive predictive value, sensitivity, and overall accuracy, the Texas Christian University Drug Screen, the Simple Screening Instrument, and a combined instrument (the Alcohol Dependence Scale plus the Drug Use section of the Addiction Severity Index) were found to be the most effective in identifying substance abuse and dependence disorders.
Hierarchical Bayesian Neural Networks for Personalized Classification
Building robust classifiers trained on data susceptible to group or subject-specific variations is a challenging yet common problem in pattern recognition. Hierarchical models allow sharing of statistical strength across groups while preserving group-specific idiosyncrasies, and are commonly used for modeling such grouped data [3]. We develop flexible hierarchical Bayesian models that parameterize group-specific conditional distributions p(y_g | x_g, W_g) via multi-layered Bayesian neural networks. Sharing of statistical strength between groups allows us to learn large networks even when only a handful of labeled examples are available. We leverage recently proposed doubly stochastic variational Bayes algorithms to infer a full posterior distribution over the weights while scaling to large architectures. We find that the inferred posterior leads both to improved classification performance and to more effective active learning for iteratively labeling data. Finally, we demonstrate state-of-the-art performance on the MSRC-12 Kinect Gesture Dataset [2].
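The hierarchical sharing of statistical strength has a minimal Gaussian analogue: each group's estimate is a precision-weighted blend of its own data and a shared prior, so data-poor groups borrow from the rest. This sketch illustrates the partial-pooling principle only; the paper itself uses Bayesian neural networks with variational inference, and the variances below are assumed known for simplicity.

```python
import random

random.seed(2)

# Synthetic grouped data: each group's true mean is drawn from a shared prior.
MU0, TAU, SIGMA = 5.0, 1.0, 2.0   # prior mean, prior sd, noise sd (assumed known)
groups = {}
for g in range(8):
    mu_g = random.gauss(MU0, TAU)
    n = [3, 5, 50][g % 3]         # some groups have very few examples
    groups[g] = [random.gauss(mu_g, SIGMA) for _ in range(n)]

grand_mean = (sum(sum(v) for v in groups.values()) /
              sum(len(v) for v in groups.values()))

def partial_pooled_mean(values, prior_mean, tau2, sigma2):
    """Posterior mean of a group parameter in a Gaussian hierarchy:
    a precision-weighted blend of the group average and the shared prior,
    so small groups are shrunk more strongly toward the shared estimate."""
    n = len(values)
    group_mean = sum(values) / n
    w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)   # data weight in [0, 1)
    return w * group_mean + (1.0 - w) * prior_mean

estimates = {g: partial_pooled_mean(v, grand_mean, TAU**2, SIGMA**2)
             for g, v in groups.items()}
```

Every pooled estimate lies between its group's raw mean and the grand mean, with the blend controlled by the group's sample size, which is exactly the "borrowing strength" that makes learning from a handful of labeled examples feasible.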
Big Data for supply chain management in the service and manufacturing sectors: Challenges, opportunities, and future perspectives
Data from the service and manufacturing sectors is increasing sharply, fueling a growing enthusiasm for the notion of Big Data. This paper investigates representative Big Data applications from typical services like finance and economics, healthcare, Supply Chain Management (SCM), and the manufacturing sector. Current technologies are reviewed from the key aspects of storage technology, data processing technology, data visualization techniques, Big Data analytics, and models and algorithms. The paper then provides a discussion analyzing current movements in Big Data for SCM in the service and manufacturing sectors worldwide, including North America, Europe, and the Asia Pacific region. Current challenges, opportunities, and future perspectives, such as data collection methods, data transmission, data storage, processing technologies for Big Data, Big Data-enabled decision-making models, and Big Data interpretation and application, are highlighted. The observations and insights in this paper can serve academia and practitioners implementing Big Data analytics in the service and manufacturing sectors.
Radiation Effects on the Flow of Powell-Eyring Fluid Past an Unsteady Inclined Stretching Sheet with Non-Uniform Heat Source/Sink
This study investigates the unsteady flow of a Powell-Eyring fluid past an inclined stretching sheet. Unsteadiness in the flow is due to the time-dependence of the stretching velocity and the wall temperature. The mathematical analysis is performed in the presence of thermal radiation and a non-uniform heat source/sink. The relevant boundary layer equations are reduced to self-similar forms by suitable transformations. Analytic solutions are constructed in series form by the homotopy analysis method (HAM). The convergence interval of the auxiliary parameter is obtained. Graphical results displaying the influence of the pertinent parameters are given. Numerical values of the skin friction coefficient and the local Nusselt number are computed and analyzed.
Love, marriage, and divorce: newlyweds' stress hormones foreshadow relationship changes.
Neuroendocrine function, assessed in 90 couples during their first year of marriage (Time 1), was related to marital dissolution and satisfaction 10 years later. Compared to those who remained married, divorced couples' epinephrine levels were 34% higher during a Time 1 conflict discussion and 22% higher throughout the day, and both their epinephrine and norepinephrine levels were 16% higher at night. Among couples who were still married, Time 1 conflict ACTH levels were twice as high among women whose marriages were troubled 10 years later as among women whose marriages were untroubled. Couples whose marriages were troubled at follow-up had produced 34% more norepinephrine during conflict, 24% more during the daytime, and 17% more during nighttime hours at Time 1 than the untroubled couples.
Facebook addiction and loneliness in the post-graduate students of a university in southern India.
BACKGROUND Facebook is a social networking site (SNS) for communication, entertainment, and information exchange. Recent research has shown that excessive use of Facebook can result in addictive behavior in some individuals. AIM To assess the patterns of Facebook use in post-graduate students of Yenepoya University and evaluate its association with loneliness. METHODS A cross-sectional study was done to evaluate 100 post-graduate students of Yenepoya University using the Bergen Facebook Addiction Scale (BFAS) and the University of California, Los Angeles (UCLA) loneliness scale version 3. Descriptive statistics were applied. Pearson's bivariate correlation was used to assess the relationship between the severity of Facebook addiction and the experience of loneliness. RESULTS More than one-fourth (26%) of the study participants had Facebook addiction, and 33% had a possibility of Facebook addiction. There was a significant positive correlation between the severity of Facebook addiction and the extent of loneliness experienced (r = .239, p = .017). CONCLUSION With the rapid growth in the popularity and user base of Facebook, a significant portion of individuals is susceptible to developing addictive behaviors related to Facebook use. Loneliness is a factor which influences addiction to Facebook.
Addictive Personality and Problematic Mobile Phone Use
Mobile phone use is banned or regulated in some circumstances. Despite recognized safety concerns and legal regulations, some people do not refrain from using mobile phones. Such problematic mobile phone use can be considered an addiction-like behavior. To find potential predictors, we examined the correlation between problematic mobile phone use and personality traits reported in the addiction literature. The results indicated that problematic mobile phone use was a function of gender, self-monitoring, and approval motivation, but not of loneliness. These findings suggest that measurements of these addictive personality traits would be helpful in the screening of, and intervention with, potential problematic users of mobile phones.
Improving compliance with breast cancer screening in older women. Results of a randomized controlled trial.
BACKGROUND To compare three approaches for improving compliance with breast cancer screening in older women. METHODS Randomized controlled trial using three parallel group practices at a public hospital. Subjects included women aged 65 years and older (n = 803) who were seen by residents (n = 66) attending the ambulatory clinic from October 1, 1989, through March 31, 1990. All provider groups received intensive education in breast cancer screening. The control group received no further intervention. Staff in the second group offered education to patients at their visit. In addition, flowsheets were used in the "Prevention Team" group and staff had their tasks redefined to facilitate compliance. RESULTS Medical records were reviewed to determine documented offering/receipt of clinical breast examination and mammography. A subgroup of women without previous clinical breast examination (n = 540) and without previous mammography (n = 471) were analyzed to determine the effect of the intervention. During the intervention period, women without a previous clinical breast examination were offered an examination significantly more often in the Prevention Team group than in the control group, adjusting for age, race, and comorbidity and for physicians' gender and training level. The patients in the Prevention Team group were offered clinical breast examination (31.5%) more frequently than those in the patient education or control groups, but this was not significant after adjusting for the above covariates. Likewise, mammography was offered more frequently to patients in the Prevention Team and in the patient education group than to patients in the control group, after adjusting for the factors above using logistic regression. CONCLUSIONS The results provide support for patient education and organizational changes that involve nonphysician personnel to enhance breast cancer screening among older women, particularly those without previous screening.
Verbal forward digit span in Spanish population.
BACKGROUND Older people complain of difficulties recalling telephone numbers and dialing them in the correct order. This study examined the developmental trend of verbal forward digit span across adulthood and aging in a Spanish population, as an index of one component of Baddeley's working memory model, the phonological loop, which underlies these two abilities. METHOD A verbal digit span task was administered to an incidental sample of 987 participants ranging from 35 to 90 years old. Span was defined as the maximum length at which participants could recall at least two out of three series in the presented order with no errors. The demographic variables of gender and educational level were also examined. RESULTS The ANOVA showed that the three main factors (age group, gender, and educational level) were significant, but none of the interactions was. Verbal forward digit span decreases over the lifespan, while gender and educational level affect it only slightly. CONCLUSION The phonological loop is affected by age. The verbal forward digit span found in this study is generally lower than that reported in other studies.
Luhn Revisited: Significant Words Language Models
Users tend to articulate their complex information needs in only a few keywords, making underspecified statements of request the main bottleneck for retrieval effectiveness. Taking advantage of feedback information is one of the best ways to enrich the query representation, but it can also lead to loss of query focus and harm performance, in particular when the initial query retrieves only little relevant information or when the model overfits accidental features of the particular observed feedback documents. Inspired by the early work of Luhn [23], we propose significant words language models of feedback documents that capture all, and only, the significant shared terms from feedback documents. We adjust the weights of common terms that are already well explained by the document collection, as well as the weights of rare terms that are only explained by specific feedback documents, so that eventually only the significant terms are left in the feedback model. Our main contributions are the following. First, we present significant words language models as effective models capturing the essential terms and their probabilities. Second, we apply the resulting models to the relevance feedback task and observe better performance than the state-of-the-art methods. Third, we show that the estimation method is remarkably robust, making the models insensitive to noisy non-relevant terms in feedback documents. Our general observation is that significant words language models more accurately capture relevance by excluding general terms and feedback-document-specific terms.
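The filtering idea, keeping terms that are shared across feedback documents but not already common in the collection, can be sketched with crude document-frequency thresholds. The cutoffs below are illustrative assumptions, not the paper's probabilistic estimation method:

```python
from collections import Counter

def significant_words(feedback_docs, collection_df, n_collection,
                      max_collection_df=0.2, min_feedback_share=0.5):
    """Keep terms that (a) are not too common in the whole collection
    (general terms, already well explained by the collection model) and
    (b) occur in at least half of the feedback documents (filtering out
    document-specific terms). Returns a normalized unigram model."""
    n_fb = len(feedback_docs)
    fb_df, tf = Counter(), Counter()
    for doc in feedback_docs:
        terms = doc.split()
        for t in set(terms):
            fb_df[t] += 1          # in how many feedback docs the term occurs
        tf.update(terms)           # total term frequency in the feedback set
    kept = {t: c for t, c in tf.items()
            if collection_df.get(t, 0) / n_collection <= max_collection_df
            and fb_df[t] / n_fb >= min_feedback_share}
    total = sum(kept.values())
    return {t: c / total for t, c in kept.items()}
```

On a toy feedback set, very frequent collection terms ("the", "of") and terms seen in only one feedback document are both discarded, leaving a small model over the shared significant terms.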
ACCURACY ANALYSIS FOR DEM AND ORTHOIMAGES DERIVED FROM SPOT HRS STEREO DATA WITHOUT USING GCP
ISPRS and CNES announced the HRS (High Resolution Stereo) Scientific Assessment Program during the ISPRS Commission I Symposium in Denver in November 2002. Nine test areas throughout the world were selected for this program. One of the test sites is located in Bavaria, Germany, for which the PI comes from DLR. For a second region, situated in Catalonia (Barcelona and surroundings), DLR has the role of a Co-Investigator. The goal is to derive a DEM from the along-track stereo data of the SPOT HRS sensor and to assess its accuracy by comparison with ground control points and DEM data of superior quality. For the derivation of the DEM, the stereo processing software developed at DLR for the MOMS-2P three-line stereo camera is used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data (DORIS and ULS), are extracted. According to CNES, these data should lead to an absolute orientation accuracy of about 30 m. No bundle block adjustment with ground control is used in the first step of the photogrammetric evaluation. Dense image matching, using closely spaced positions as kernel centers, provides the parallaxes. The quality of the matching is controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection leads to points in object space, which are then interpolated to a DEM of the region in a regular grid. Additionally, orthoimages are generated from the images of the two looking directions. The orthoimage and DEM accuracy is determined using the ground control points and the available DEM data of superior accuracy (DEMs derived from laser data and/or classical airborne photogrammetry). DEM filtering methods are applied and a comparison to SRTM DEMs is performed. It is shown that a fusion of the DEMs derived from optical and radar data leads to higher accuracies.
In the second step, ground control points are used for bundle adjustment to improve the exterior orientation and the absolute accuracy of the SPOT DEM.
A second-order method for convex l1-regularized optimization with active-set prediction
We describe an active-set method for the minimization of an objective function φ that is the sum of a smooth convex function and an l1-regularization term. A distinctive feature of the method is the way in which active-set identification and second-order subspace minimization steps are integrated to combine the predictive power of the two approaches. At every iteration, the algorithm selects a candidate set of free and fixed variables, performs an (inexact) subspace phase, and then assesses the quality of the new active set. If it is not judged to be acceptable, then the set of free variables is restricted and a new active-set prediction is made. We establish global convergence for our approach, and compare the new method against the state-of-the-art code LIBLINEAR.
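To make the interplay of active-set prediction and subspace minimization concrete, here is a small numerical sketch for the quadratic special case min 0.5 x'Ax - b'x + mu*||x||_1. The prediction/subspace split follows the spirit of the abstract, but the specific steps (a proximal-gradient prediction and a sign-restricted Newton solve) are illustrative simplifications, not the paper's algorithm.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator, the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def active_set_l1(A, b, mu, iters=50):
    """Sketch of an active-set scheme for 0.5 x'Ax - b'x + mu*||x||_1,
    with A symmetric positive definite. Each iteration: (1) a
    proximal-gradient step predicts the active (zero) set, (2) a
    Newton-type subspace step minimizes over the free variables with
    their predicted signs fixed."""
    n = len(b)
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2)   # 1 / largest eigenvalue of A
    for _ in range(iters):
        # (1) active-set prediction via one proximal-gradient step
        xp = soft(x - step * (A @ x - b), step * mu)
        free = xp != 0
        if not free.any():
            return xp
        s = np.sign(xp[free])
        # (2) subspace Newton step: solve A_FF x_F = b_F - mu * s
        xf = np.linalg.solve(A[np.ix_(free, free)], b[free] - mu * s)
        # discard components that disagree with the predicted signs
        xf[np.sign(xf) != s] = 0.0
        x = np.zeros(n)
        x[free] = xf
    return x
```

For a diagonal A the minimizer is known in closed form (componentwise soft-thresholding of b), which gives a simple correctness check for the sketch.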
A biomechanical approach to Achilles tendinopathy management
Contents: List of Tables; List of Figures; List of Abbreviations; Definitions. Chapter 1, Introduction: 7.1 The role of inflammatory mediators in tendinopathy; 7.2 Achilles structure (7.2.1 Macro anatomy; 7.2.2 Plantaris; 7.2.3 Micro anatomy (histology)); 7.3 Function of the Achilles tendon (7.3.1 Tendon material properties; 7.3.2 Aetiology); 7.4 Incidence of Achilles tendinopathy; 7.5 Clinical diagnosis (7.5.1 Differential diagnosis; 7.5.2 Subcategorising Achilles tendinopathy); 7.6 Risk factors (7.6.1 Modifiable and non-modifiable factors; 7.6.2 Extrinsic factors); 7.7 Clinical management; 7.8 Background and aims of this thesis. Chapter 2, A Delphi study: Risk Factors for Achilles Tendinopathy: Opinions of World Tendon Experts.
elastix: A Toolbox for Intensity-Based Medical Image Registration
Medical image registration is an important task in medical image processing. It refers to the process of aligning data sets, possibly from different modalities (e.g., magnetic resonance and computed tomography), different time points (e.g., follow-up scans), and/or different subjects (in case of population studies). A large number of methods for image registration are described in the literature. Unfortunately, there is not one method that works for all applications. We have therefore developed elastix, a publicly available computer program for intensity-based medical image registration. The software consists of a collection of algorithms that are commonly used to solve medical image registration problems. The modular design of elastix allows the user to quickly configure, test, and compare different registration methods for a specific application. The command-line interface enables automated processing of large numbers of data sets, by means of scripting. The usage of elastix for comparing different registration methods is illustrated with three example experiments, in which individual components of the registration method are varied.
Passive radar from history to future
The history of passive radar dates back to the early days of radar in 1935 when the Daventry experiment was conducted in the UK. It continues in WW II with the German Klein Heidelberg passive radar and receives new interest today, as passive covert radar (PCR) systems like Silent Sentry and Homeland Alerter 100 are ready for operation. The future of PCR will strongly depend on the availability of transmitters of opportunity such as FM-radio and digital broadcast networks.
A comparison of behavioral parent training programs for fathers of children with attention-deficit/hyperactivity disorder.
Few behavioral parent training (BPT) treatment studies for attention-deficit/hyperactivity disorder (ADHD) have included and measured outcomes with fathers. In this study, fathers were randomly assigned to attend a standard BPT program or the Coaching Our Acting-Out Children: Heightening Essential Skills (COACHES) program. The COACHES program included BPT plus sports skills training for the children and parent-child interactions in the context of a soccer game. Groups did not differ at baseline, and father ratings of treatment outcome indicated improvement at posttreatment for both groups on measures of child behavior. There was no significant difference between groups on ADHD-related measures of child outcome. However, at posttreatment, fathers who participated in the COACHES program rated children as more improved, and they were significantly more engaged in the treatment process (e.g., greater attendance and arrival on time at sessions, more homework completion, greater consumer satisfaction). The implications for these findings and father-related treatment efforts are discussed.
Comparative effectiveness of MRI in breast cancer (COMICE) trial: a randomised controlled trial
BACKGROUND MRI might improve diagnosis of breast cancer, reducing rates of reoperation. We assessed the clinical efficacy of contrast-enhanced MRI in women with primary breast cancer. METHODS We undertook an open, parallel-group trial in 45 UK centres, with 1623 women aged 18 years or older with biopsy-proven primary breast cancer who were scheduled for wide local excision after triple assessment. Patients were randomly assigned to receive either MRI (n=816) or no further imaging (n=807), with use of a minimisation algorithm incorporating a random element. The primary endpoint was the proportion of patients undergoing a repeat operation or further mastectomy within 6 months of random assignment, or a pathologically avoidable mastectomy at initial operation. Analysis was by intention to treat. This study is registered, ISRCTN number 57474502. FINDINGS 816 patients were randomly assigned to MRI and 807 to no MRI. Addition of MRI to conventional triple assessment was not significantly associated with a reduced reoperation rate: 153 (19%) patients needed reoperation in the MRI group versus 156 (19%) in the no-MRI group (odds ratio 0.96, 95% CI 0.75-1.24; p=0.77). INTERPRETATION Our findings are of benefit to the NHS because they show that MRI might be unnecessary in this population of patients to reduce repeat operation rates, and could assist in improved use of NHS services. FUNDING National Institute for Health Research's Health Technology Assessment Programme.
How Data Will Transform Industrial Processes: Crowdsensing, Crowdsourcing and Big Data as Pillars of Industry 4.0
We are living in the era of the fourth industrial revolution, namely Industry 4.0. This paper presents the main aspects related to Industry 4.0, the technologies that will enable this revolution, and the main application domains that will be affected by it. The effects that the introduction of Internet of Things (IoT), Cyber-Physical Systems (CPS), crowdsensing, crowdsourcing, cloud computing and big data will have on industrial processes will be discussed. The main objectives will be represented by improvements in: production efficiency, quality and cost-effectiveness; workplace health and safety, as well as quality of working conditions; products’ quality and availability, according to mass customisation requirements. The paper will further discuss the common denominator of these enhancements, i.e., data collection and analysis. As data and information will be crucial for Industry 4.0, crowdsensing and crowdsourcing will introduce new advantages and challenges, which will make most of the industrial processes easier with respect to traditional technologies.
Partial Scan Selection Based on Dynamic Reachability and Observability Information
A partial scan selection strategy is proposed in which flip-flops are selected via newly proposed dynamic reachability and observability measures such that the remaining hard-to-detect faults are easily detected. This is done by taking advantage of the information available when a target fault is aborted by the test generator. A partial scan selection tool, IDROPS, has been developed which selects the best and smallest set of flip-flops to scan that will result in a high fault coverage. Results indicate that high fault coverages in hard-to-test circuits can be achieved using fewer scan flip-flops than in previous methods.
Assessment of degree of risk from sources of microbial contamination in cleanrooms; 2: Surfaces and liquids. Introduction
The requirements for minimising microbial contamination in pharmaceutical cleanrooms are outlined in regulatory documents published by authorities that include the European Commission1 and the Food and Drug Administration in the USA2. These authorities also suggest the use of risk management and assessment techniques to identify and control sources of microbial contamination3,4. Risk assessment and management methods have been investigated by the authors of this article5–9 and other approaches are discussed by Mollah et al10. Risk assessment methods are used to calculate the degree of risk to the product from microbial sources in a cleanroom. Factors that influence risk are determined and assigned descriptors of risk, which are of the ‘high’, ‘medium’, and ‘low’ type that act as surrogates for actual numerical values. Numerical scores are assigned to these descriptors and the scores combined, usually by multiplication, to obtain a risk assessment for each source of contamination. However, a risk assessment carried out in this manner may not be accurate, for the following reasons.
General autonomic components of motion sickness.
This report refers to a body of investigations performed in support of experiments aboard the Space Shuttle, designed to counteract the symptoms of Space Adaptation Syndrome, which resemble those of motion sickness on Earth. For these supporting studies we examined the autonomic manifestations of Earth-based motion sickness. Heart rate, respiration rate, finger pulse volume and basal skin resistance were measured in 127 men and women before, during and after exposure to nauseogenic rotating-chair tests. Significant changes in all autonomic responses were observed across the tests (p<.05). Significant differences in autonomic responses among groups divided according to motion sickness susceptibility were also observed (p<.05). Results suggest that the examination of autonomic responses as an objective indicator of motion sickness malaise is warranted and may contribute to the overall understanding of the syndrome on Earth and in space. DESCRIPTORS: heart rate, respiration rate, finger pulse volume, skin resistance, biofeedback, motion sickness.
Axiomatic Rewriting Theory I: A Diagrammatic Standardization Theorem
By extending nondeterministic transition systems with concurrency and copy mechanisms, Axiomatic Rewriting Theory provides a uniform framework for a variety of rewriting systems, ranging from higher-order systems to Petri nets and process calculi. Despite its generality, the theory is surprisingly simple, based on a mild extension of transition systems with independence: an axiomatic rewriting system is defined as a 1-dimensional transition graph $\mathcal{G}$ equipped with 2-dimensional transitions describing the redex permutations of the system, and their orientation. In this article, we formulate a series of elementary axioms on axiomatic rewriting systems, and establish a diagrammatic standardization theorem.
Parallelizing Sequential Graph Computations
This paper presents GRAPE, a parallel system for graph computations. GRAPE differs from prior systems in its ability to parallelize existing sequential graph algorithms as a whole. Underlying GRAPE are a simple programming model and a principled approach, based on partial evaluation and incremental computation. We show that sequential graph algorithms can be "plugged into" GRAPE with minor changes, and get parallelized. As long as the sequential algorithms are correct, their GRAPE parallelization guarantees to terminate with correct answers under a monotonic condition. Moreover, we show that algorithms in MapReduce, BSP and PRAM can be optimally simulated on GRAPE. In addition to the ease of programming, we experimentally verify that GRAPE achieves comparable performance to the state-of-the-art graph systems, using real-life and synthetic graphs.
The Sexual Culture of the French Renaissance
Introduction: sexual culture? France? Renaissance? 1. The renaissance of sex: Orpheus, mythography and making sexual meaning 2. Heavens below: astrology, generation and sexual (un)certainty 3. Neoplatonism and the making of heterosexuality 4. Cupid makes you stupid: 'bad' poetry in the French Renaissance 5. Politics, promiscuity and potency: managing the king's sexual reputation Conclusion: dirty thoughts Bibliography.
Using artificial neural networks to predict first-year traditional students' second-year retention rates
This research investigates the use of Artificial Neural Networks (ANNs) to predict first-year student retention rates. Building on a significant body of previous research, this work expands on previous attempts to predict student outcomes using machine-learning techniques. Using a large data set provided by Columbus State University's Information Technology department, ANNs were used to analyze incoming first-year traditional freshman students' data over the period 2005-2011. Using several different network designs, the students' data were analyzed and a basic predictive network was devised. While the overall accuracy was high when including the first and second semesters' worth of data, once the data set was reduced to a single semester, the overall accuracy dropped significantly. Using different network designs, more complex learning algorithms, and better training strategies, the prediction accuracy rate for a student's return to the second year approached 75% overall. Since this rate is still low, there is room for improvement, and several techniques that might increase the reliability of these networks are discussed.
Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions
Like traditional media, social media in China is subject to censorship. However, in limited cases, activists have employed homophones of censored keywords to avoid detection by keyword matching algorithms. In this paper, we show that it is possible to scale this idea up in ways that make it difficult to defend against. Specifically, we present a non-deterministic algorithm for generating homophones that create large numbers of false positives for censors, making it difficult to locate banned conversations. In two experiments, we show that 1) homophone-transformed weibos posted to Sina Weibo remain on-site three times longer than their previously censored counterparts, and 2) native Chinese speakers can recover the original intent behind the homophone-transformed messages, with 99% of our posts understood by the majority of our participants. Finally, we find that coping with homophone transformations is likely to cost the Sina Weibo censorship apparatus an additional 15 hours of human labor per day, per censored keyword. To conclude, we reflect briefly on the opportunities presented by this algorithm to build interactive, client-side tools that promote free speech.
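The core transformation can be sketched in a few lines. The homophone table below is a tiny hypothetical stand-in (a real pipeline would build candidates from a pinyin dictionary so every substitute shares the keyword's sound), and the function is an illustration of the idea rather than the authors' implementation.

```python
import random

# Tiny hypothetical homophone table; a real system would derive candidates
# from a pinyin dictionary. Example entries use the "river crab" (hexie)
# characters often cited in Chinese censorship-evasion wordplay.
HOMOPHONES = {
    "河": ["和", "合", "禾"],  # all pronounced "he"
    "蟹": ["谢", "泻"],        # all pronounced "xie"
}

def transform(message, banned_chars, rng=random):
    """Nondeterministically swap characters of banned keywords for
    homophones, so exact keyword matching no longer finds the post while
    readers can still recover the intended sound (and hence the meaning)."""
    return "".join(
        rng.choice(HOMOPHONES[ch]) if ch in banned_chars and ch in HOMOPHONES
        else ch
        for ch in message
    )

post = transform("河蟹社会", {"河", "蟹"})  # substituted text sounds the same
```

Because each banned character has several candidate replacements chosen at random, the same message yields many distinct surface forms, which is what creates the flood of false positives for a keyword-based censor.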
Show and Recall: Learning What Makes Videos Memorable
With the explosion of video content on the Internet, there is a need for research on methods for video analysis that take human cognition into account. One such cognitive measure is memorability, or the ability to recall visual content after watching it. Prior research has looked into image memorability and shown that it is intrinsic to visual content, but the problem of modeling video memorability has not been addressed sufficiently. In this work, we develop a prediction model for video memorability that accounts for the complexities of video content. Detailed feature analysis reveals that the proposed method correlates well with existing findings on memorability. We also describe a novel experiment on predicting video sub-shot memorability and show that our approach improves over current memorability methods in this task. Experiments on standard datasets demonstrate that the proposed metric can achieve results on par with or better than state-of-the-art methods for video summarization.
The Association Between Arterial Oxygen Tension and Neurological Outcome After Cardiac Arrest.
A number of observational studies have evaluated the association between arterial oxygen tensions and outcome after cardiac arrest with variable results. The objective of this study is to determine the association between arterial oxygen tension and neurological outcome after cardiac arrest. A retrospective cohort analysis was performed using the Penn Alliance for Therapeutic Hypothermia registry. Adult patients who experienced return of spontaneous circulation after in-hospital or out-of-hospital cardiac arrest (OHCA) and had a partial pressure of arterial oxygen (PaO2) recorded within 48 hours were included. Our primary exposure of interest was PaO2. Hyperoxemia was defined as PaO2 > 300 mmHg, hypoxemia as PaO2 < 60 mmHg, and optimal oxygenation as PaO2 60-300 mmHg. The primary outcome was neurological function at hospital discharge among survivors, as described by the cerebral performance category (CPC) score, dichotomized into "favorable" (CPCs 1-2) and "unfavorable" (CPCs 3-5). Secondary outcomes included in-hospital mortality. A total of 544 patients from 13 institutions were included. Average age was 61 years, 56% were male, and 51% were white. A total of 64% experienced OHCA, 81% of arrests were witnessed, and pulseless electrical activity was the most common initial rhythm (40%). More than 72% of the patients had cardiac etiology for their arrests, and 55% underwent targeted temperature management. A total of 38% of patients survived to hospital discharge. There was no significant association between PaO2 at any time interval and neurological outcome at hospital discharge. Hyperoxemia at 12 hours after cardiac arrest was associated with decreased odds of survival (OR 0.17 [0.03-0.89], p = 0.032). There was no significant association between arterial oxygen tension measured within the first 48 hours after cardiac arrest and neurological outcome.
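The study's oxygenation categories reduce to simple thresholds on PaO2. A minimal helper using the cut-offs stated above (the function name is a hypothetical illustration):

```python
def categorize_pao2(pao2_mmhg: float) -> str:
    """Classify arterial oxygen tension with the study's thresholds:
    hypoxemia < 60 mmHg, optimal 60-300 mmHg, hyperoxemia > 300 mmHg."""
    if pao2_mmhg < 60:
        return "hypoxemia"
    if pao2_mmhg > 300:
        return "hyperoxemia"
    return "optimal"

print(categorize_pao2(450))  # -> hyperoxemia
```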
The effect of prior breast biopsy method and concurrent definitive breast procedure on success and accuracy of sentinel lymph node biopsy
It has been suggested that sentinel lymph node (SLN) biopsy for breast cancer may be less accurate after excisional biopsy of the primary tumor compared with core needle biopsy. Furthermore, some have suggested an improved ability to identify the SLN when total mastectomy is performed compared with lumpectomy. This analysis was performed to determine the impact of the type of breast biopsy (needle vs. excisional) or definitive surgical procedure (lumpectomy vs. mastectomy) on the accuracy of SLN biopsy. The University of Louisville Breast Cancer Sentinel Lymph Node Study is a prospective multi-institutional study. Patients with clinical stage T1–2, N0 breast cancer were eligible. All patients underwent SLN biopsy and completion level I/II axillary dissection. Statistical comparison was performed by χ2 analysis. A total of 2206 patients were enrolled in the study. There were no statistically significant differences in SLN identification rate or false-negative rate between patients undergoing excisional versus needle biopsy. The SLN identification and false-negative rates also were not statistically different between patients who had total mastectomy compared with those who had a lumpectomy. Excisional biopsy does not significantly affect the accuracy of SLN biopsy, nor does the type of definitive surgical procedure.
Feasibility study of micro hydro power plant for rural electrification in Thailand by using axial flux permanent magnet
This paper presents a study investigating the possibility of stand-alone micro hydro for low-cost electricity production that can satisfy the energy load requirements of a typical remote and isolated rural area. In this framework, the feasibility in terms of the technical and economic performance of the micro hydro system is determined according to the rural electrification concept. The proposed axial flux permanent magnet (AFPM) generator is designed for micro hydro under sustainable development to optimize the trade-off between cost and efficiency by using local materials and basic engineering knowledge. First, a simple simulation of the micro hydro model for a lighting system is developed by considering the optimal size of the AFPM generator. The simulation results show that the optimal micro hydro power plant with 70 W can supply up to 20 sets of 9 W compact fluorescent lamps for 8 hours, using a water head of 6 meters and a flow rate of 0.141 m3/min. Lastly, the proposed micro hydro power plant can supply a lighting system for rural electrification up to 525.6 kWh/year, or 1,839.60 Baht/year, and reduce CO2 emissions by 0.33 ton/year.
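The quoted figures can be cross-checked with the standard hydropower relation P = rho * g * Q * H. The overall efficiency below is implied by the numbers rather than stated in the abstract.

```python
RHO, G = 1000.0, 9.81              # water density (kg/m^3), gravity (m/s^2)
head_m = 6.0                       # pressure head from the abstract
q = 0.141 / 60.0                   # 0.141 m^3/min converted to m^3/s

hydraulic_w = RHO * G * q * head_m       # available hydraulic power, ~138 W
implied_efficiency = 70.0 / hydraulic_w  # ~0.51 overall for the 70 W plant

# Annual lighting energy for 20 lamps x 9 W at 8 h/day:
annual_kwh = 20 * 9 / 1000 * 8 * 365     # matches the abstract's 525.6 kWh/year
print(round(hydraulic_w, 1))  # -> 138.3
```

So a 70 W electrical output against roughly 138 W of hydraulic power corresponds to an overall water-to-wire efficiency of about 51%, a plausible figure for a small low-head unit.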
State-of-the-art in artificial neural network applications: A survey
This is a survey of neural network applications in real-world scenarios. It provides a taxonomy of artificial neural networks (ANNs) and furnishes the reader with knowledge of current and emerging trends in ANN applications research and areas of focus for researchers. Additionally, the study presents ANN application challenges and contributions, compares performances, and critiques methods. The study covers many applications of ANN techniques in various disciplines, including computing, science, engineering, medicine, environment, agriculture, mining, technology, climate, business, arts, and nanotechnology. The study found that neural network models such as feedforward and feedback propagation artificial neural networks perform better in their application to human problems. Therefore, we propose feedforward and feedback propagation ANN models for research focus based on data analysis factors such as accuracy, processing speed, latency, fault tolerance, volume, scalability, convergence, and performance. Moreover, we recommend that instead of applying a single method, future research focus on combining ANN models into one network-wide application.
The Perpetual Music Track: The Phenomenon of Constant Musical Imagery
The perpetual music track is a new concept that describes a condition of constant or near-constant musical imagery. This condition appears to be very rare even among composers and musicians. I present here a detailed self-analysis of musical imagery for the purpose of defining the psychological features of a perpetual music track. I have music running through my head almost constantly during waking hours, consisting of a combination of recently heard pieces and distant pieces that spontaneously pop into the head. Imagery consists mainly of short musical fragments that get looped repeatedly upon themselves. Corporeal manifestations of imagery occur in the form of unconscious finger movements whose patterns correspond to the melodic contour of the imagined piece. Musical dreams occur every week or two, and contain a combination of familiar and originally composed music. These results are discussed in light of theories of imagery, consciousness, hallucination, obsessive cognition, and most especially the notion that acoustic consciousness can be split into multiple parallel streams.
TA-COS 2018: 2nd Workshop on Text Analytics for Cybersecurity and Online Safety
In this study, we present toxicity annotation for a Thai Twitter corpus as a preliminary exploration of toxicity analysis in the Thai language. We construct a Thai toxic word dictionary and select 3,300 tweets for annotation using the 44 keywords from our dictionary. We obtained 2,027 toxic and 1,273 non-toxic tweets, which were labeled by three annotators. The results of the corpus analysis indicate that tweets that include toxic words are not always toxic; a tweet is more likely to be toxic if it contains toxic words used in their original meaning. Moreover, disagreements in annotation are primarily due to sarcasm, unclear targets, and word-sense ambiguity. Finally, we conducted supervised classification using our corpus as a dataset and obtained an accuracy of 0.80, which is comparable to the inter-annotator agreement of this dataset. Our dataset is available on GitHub.
Augmented Human: Augmented Reality and Beyond
Will Augmented Reality (AR) allow us to access digital information, experience others' stories, and thus explore alternate realities? AR has recently attracted attention again due to rapid advances in related multimedia technologies as well as various glass-type AR display devices. However, in order to widely adopt AR in alternate realities, it is necessary to improve various core technologies and integrate them into an AR platform. In particular, several technical challenges remain, such as 1) real-time recognition and tracking of multiple objects while generating an environment map, 2) organic user interfaces aware of the user's implicit needs, intentions, or emotions as well as explicit requests, 3) immersive multimodal content augmentation and just-in-time information visualization, and 4) multimodal interaction and collaboration during augmented telecommunication. In addition, in order to encourage user engagement and enable an AR ecosystem, AR standards should be established that support creating AR content, capturing user experiences, and sharing the captured experiences.
The status of the P versus NP problem
It's one of the fundamental mathematical problems of our time, and its importance grows with the rise of powerful computers.
What's smart about the smart grid?
The paper explores the meaning of smart grid, concluding that the term is closely linked with enhanced sensing, actuation and control of power systems. It is suggested that such cyber-physical systems would be more meaningfully described as responsive grids. The paper provides a brief historical perspective of the decision-making that underlies existing, seemingly inflexible, grid structures. It emphasizes the future needs for responsive grids, as a consequence of inevitable growth in renewable generation and newer types of loads such as plug-in electric vehicles. The paper considers the cyber-infrastructure requirements for supporting controllability of highly distributed generation and load resources.
Clinicopathological Features and Survival Outcomes of Colorectal Cancer in Young Versus Elderly
The incidence of colorectal cancer (CRC) in young adults is rising. We aimed to analyze the clinicopathological characteristics and survival outcomes of young versus elderly CRC patients. All patients diagnosed with CRC in the Surveillance, Epidemiology, and End Results program data (1988–2011) from the United States were evaluated. They were divided into 3 groups by age at diagnosis: group 1 (20–40 years old), group 2 (41–50 years old), and group 3 (>50 years old). The clinicopathological characteristics and CRC-specific survival (CRC-SS) were evaluated and compared among the 3 groups. A total of 279,623 CRC patients were included: 6700 (2.4%) in group 1, 19,385 (6.9%) in group 2, and 253,538 (90.7%) in group 3. Young CRC patients had more tumors located in the rectum, fewer cases with multiple tumors, later stage, more mucinous carcinoma and signet ring-cell carcinoma, more poorly differentiated tumors, and more lymph nodes (no. 12) examined. The 5-year CRC-SS rates of patients in groups 1, 2, and 3 were 65.1%, 67.1%, and 62.8%, respectively (group 1 vs group 2, P=0.001; group 1 vs group 3, P<0.001; group 2 vs group 3, P<0.001). Multivariate analysis revealed that older (>50 years old) age was an independent predictor of poor prognosis (hazard ratio, 1.545; 95% confidence interval, 1.456–1.639; P<0.001). Young CRC patients had later stage presentation and more aggressive pathological features, but better survival. CRC patients aged 41 to 50 years had the best CRC-SS in contrast to patients in the other 2 age groups. (Medicine 94(35):e1402) Abbreviations: CI = confidence interval, CRC = colorectal cancer. INTRODUCTION Colorectal cancer (CRC) is the third most common cancer in the United States and a major health burden worldwide. The American Cancer Society estimated that 142,820 new CRC cases and 50,830 CRC deaths occurred in 2013.
In spite of these sobering epidemiological data, the 2010 annual report on cancer status highlighted that CRC incidence rates in the United States had been declining. This steady decline has largely been attributed to increased use of CRC screening in the older population, which allows for the detection and removal of colorectal polyps before they progress to cancer. As a disease predominantly affecting older individuals, 90% of all CRC cases have been diagnosed in patients >50 years of age. However, recent evidence suggests a steadily rising incidence of CRC in young individuals, a population not receiving routine screening. The CRC incidence per 100,000 individuals among younger age groups in the United States ranged from 0.85 (ages 20–24 years) to 28.8 (ages 45–49 years). The reported percentage of young patients among all CRC patients ranges from 0.4% to as high as 35.6% across the literature. The limited studies available report a wide range of clinicopathological characteristics and prognoses for young CRC patients. Some studies have demonstrated that young CRC patients present with poor pathological features and advanced stage compared with older patients, whereas others have found no difference in tumor stage or pathological features relative to the older population. The survival of young CRC patients is also controversial. These controversies are partly caused by the lack of an accepted definition of the young CRC patient: although most studies define young CRC patients as those 40 years old or younger, some use a cutoff age of 50 years. Furthermore, biases associated with single-institution experiences or limited sample sizes may make the published data vary markedly.
In this study, we used population-based data from the Surveillance, Epidemiology, and End Results (SEER) program of the National Cancer Institute in the United States to compare clinicopathological characteristics, prognostic factors, and survival among 3 age groups (20–40, 41–50, and >50 years) of CRC patients. MATERIALS AND METHODS
DeepPermNet: Visual Permutation Learning
We present a principled approach to uncover the structure of visual data by solving a novel deep learning task coined visual permutation learning. The goal of this task is to find the permutation that recovers the structure of data from shuffled versions of it. In the case of natural images, this task boils down to recovering the original image from patches shuffled by an unknown permutation matrix. Unfortunately, permutation matrices are discrete, thereby posing difficulties for gradient-based methods. To this end, we resort to a continuous approximation of these matrices using doubly-stochastic matrices which we generate from standard CNN predictions using Sinkhorn iterations. Unrolling these iterations in a Sinkhorn network layer, we propose DeepPermNet, an end-to-end CNN model for this task. The utility of DeepPermNet is demonstrated on two challenging computer vision problems, namely, (i) relative attributes learning and (ii) self-supervised representation learning. Our results show state-of-the-art performance on the Public Figures and OSR benchmarks for (i) and on the classification and segmentation tasks on the PASCAL VOC dataset for (ii).
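The Sinkhorn step at the heart of this approach is easy to sketch. The snippet below is an illustrative sketch rather than the authors' implementation; the score matrix and iteration count are made-up examples. It alternately normalizes the rows and columns of a non-negative score matrix, driving it toward a doubly-stochastic relaxation of a permutation matrix:

```python
import numpy as np

def sinkhorn(scores, n_iters=20):
    """Relax a non-negative score matrix toward a doubly-stochastic matrix.

    Alternating row/column normalization (Sinkhorn iterations) pushes the
    matrix toward the Birkhoff polytope, giving a differentiable stand-in
    for a discrete permutation matrix.
    """
    m = np.asarray(scores, dtype=float)
    for _ in range(n_iters):
        m = m / m.sum(axis=1, keepdims=True)  # normalize rows
        m = m / m.sum(axis=0, keepdims=True)  # normalize columns
    return m

# A noisy (hypothetical) score matrix favoring the permutation (2, 0, 1)
s = np.array([[0.1, 0.2, 0.9],
              [0.8, 0.1, 0.2],
              [0.2, 0.9, 0.1]])
ds = sinkhorn(s)
# Rows and columns now each sum to ~1; a row-wise argmax recovers
# the underlying permutation.
```

In the paper this normalization is unrolled inside the network (the Sinkhorn layer), so gradients flow through the iterations back to the CNN that produced the scores.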
SURVEY ON VANISHING POINT DETECTION METHOD FOR GENERAL ROAD REGION IDENTIFICATION
Automobile manufacturers are introducing ever more new features, and road infrastructure is improving all over the world. Yet the number of road accidents, and the number of people dying in them, continues to rise. Researchers have tried to address this problem with a "virtual driver" feature: a computer-vision-based road detection and navigation system installed in the car that can warn the driver of an impending accident or rough driving. A major obstacle in developing such a system is the identification of the road itself; detecting unstructured roads, or structured roads without distinct boundaries and markings, is a particularly difficult task for a computer. Given an image of a road with no clear edges, texture orientation, or a priori known color, can a computer still find the road? This paper addresses that question. The first step is to find the vanishing point: Gabor filters estimate the texture orientation at each pixel, and a voting scheme then locates the vanishing point. The second step is to identify the road boundaries from this estimated vanishing point. This paper surveys the considerable work that has been done on vanishing point identification methods so that they can be used in real time for further applications.
The role of multichannel integration in customer relationship management
This paper reviews the strategic role of multichannel integration in customer relationship management (CRM) with the objective of proposing a structured approach to the development of an integrated multichannel strategy. Alternative perspectives of CRM are reviewed, and it is concluded that adoption of a strategic perspective is essential for success. Multichannel integration is posited as one of the key cross-functional processes in CRM strategy development. The nature of industry channel structure and channel participants, channel options, and alternative channel strategies are reviewed. The customer experience is explored both within and across channels. Analytical tools, such as market structure maps, the customer relationship life cycle, and demand chain analysis, are described. Key steps in building an integrated multichannel strategy are examined. Major challenges faced by enterprises in their adoption of an integrated multichannel approach and areas for future research are discussed.
Theta*: Any-Angle Path Planning on Grids
Grids with blocked and unblocked cells are often used to represent terrain in robotics and video games. However, paths formed by grid edges can be longer than true shortest paths in the terrain since their headings are artificially constrained. We present two new correct and complete any-angle path-planning algorithms that avoid this shortcoming. Basic Theta* and Angle-Propagation Theta* are both variants of A* that propagate information along grid edges without constraining paths to grid edges. Basic Theta* is simple to understand and implement, fast, and finds short paths. However, it is not guaranteed to find true shortest paths. Angle-Propagation Theta* achieves a better worst-case complexity per vertex expansion than Basic Theta* by propagating angle ranges when it expands vertices, but it is more complex, not as fast, and finds slightly longer paths. We refer to Basic Theta* and Angle-Propagation Theta* collectively as Theta*. Theta* has unique properties, which we analyze in detail. We show experimentally that it finds shorter paths than both A* with post-smoothed paths and Field D* (the only other version of A* we know of that propagates information along grid edges without constraining paths to grid edges) with a runtime comparable to that of A* on grids. Finally, we extend Theta* to grids that contain unblocked cells with non-uniform traversal costs and introduce variants of Theta* which provide different tradeoffs between path length and runtime.
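The distinctive step of Basic Theta* is its "Path 2" update: when the parent of the expanded vertex has line of sight to a neighbor, the neighbor is connected directly to that parent, so paths are not forced onto grid edges. The sketch below is a minimal illustration of that rule, not the paper's implementation; the sampling-based line-of-sight test and the grid encoding (1 = blocked cell, coordinates as (x, y)) are simplifying assumptions:

```python
import heapq
import math

def line_of_sight(grid, a, b):
    """Approximate visibility: sample points along the segment a-b and
    fail if any sampled cell is blocked (grid[y][x] == 1). A production
    implementation would use an exact line-drawing check instead."""
    (x0, y0), (x1, y1) = a, b
    n = 2 * max(abs(x1 - x0), abs(y1 - y0))
    for i in range(n + 1):
        t = i / n if n else 0.0
        x = round(x0 + t * (x1 - x0))
        y = round(y0 + t * (y1 - y0))
        if grid[y][x]:
            return False
    return True

def theta_star(grid, start, goal):
    """Basic Theta* on an 8-connected grid of cells (0 = free, 1 = blocked)."""
    h = lambda p: math.dist(p, goal)      # straight-line heuristic
    g = {start: 0.0}
    parent = {start: start}
    open_heap = [(h(start), start)]
    closed = set()
    while open_heap:
        _, s = heapq.heappop(open_heap)
        if s == goal:
            path = [s]
            while s != start:
                s = parent[s]
                path.append(s)
            return path[::-1]
        if s in closed:
            continue
        closed.add(s)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                n = (s[0] + dx, s[1] + dy)
                if n == s or not (0 <= n[0] < len(grid[0]) and 0 <= n[1] < len(grid)):
                    continue
                if grid[n[1]][n[0]]:
                    continue
                # "Path 2", the Theta* twist: if parent(s) can see n,
                # connect n straight to parent(s); otherwise fall back
                # to the ordinary A* edge s -> n.
                p = parent[s]
                if line_of_sight(grid, p, n):
                    cand_g, cand_par = g[p] + math.dist(p, n), p
                else:
                    cand_g, cand_par = g[s] + math.dist(s, n), s
                if cand_g < g.get(n, float("inf")):
                    g[n] = cand_g
                    parent[n] = cand_par
                    heapq.heappush(open_heap, (cand_g + h(n), n))
    return None

# On an empty 5x5 grid the returned path goes straight corner to corner,
# skipping every intermediate grid vertex.
empty = [[0] * 5 for _ in range(5)]
path = theta_star(empty, (0, 0), (4, 4))
```

Because candidate parents may be arbitrary ancestors rather than adjacent vertices, the resulting headings are not constrained to multiples of 45 degrees, which is exactly the property the abstract describes.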
A class D output stage with zero dead time
An integrated class-D output stage has been realized with zero dead time, thereby removing one of the dominant sources of distortion in class-D amplifiers. Dead time is eliminated through proper dimensioning of the power transistor drivers and accurate matching of switch timing. Open-loop distortion of this output stage stays below 0.1% up to 35 W.
Outcome predictors in guided and unguided self-help for social anxiety disorder.
Internet-based self-help with therapist guidance has shown promise as an effective treatment and may increase access to evidence-based psychological treatment for social anxiety disorder (SAD). Although unguided self-help has been suggested primarily as a population-based preventive intervention, some studies indicate that patients with SAD may profit from unguided self-help. Gaining knowledge about predictors of outcome in guided and unguided self-help for SAD is important to ensure that these interventions can be offered to those who are most likely to respond. Utilizing a sample of 245 patients who received either guided or unguided self-help for SAD, the present study examined pre-treatment symptoms and program factors as predictors of treatment adherence and outcome. The results were in line with previous findings from the face-to-face treatment literature: namely, the intensity of baseline SAD symptoms, but not depressive symptoms, predicted treatment outcomes in both unguided and guided self-help groups. Outcomes were unrelated to whether a participant had generalized versus specific SAD. Furthermore, for the unguided self-help group, higher credibility ratings of the treatment program were associated with increased treatment adherence. The findings suggest that guided and unguided self-help may increase access to SAD treatment in a population that is more heterogeneous than previously assumed.
Wiggling through complex traffic: Planning trajectories constrained by predictions
The vision of autonomous driving is gradually becoming reality. Still, executing the driving task safely and comfortably in all possible environments, for instance highway, city, or rural road scenarios, remains challenging. In this paper we present a novel approach to planning trajectories for autonomous vehicles. We focus on the problem of planning a trajectory for a specific behavior option, e.g., merging into a specific gap at a highway entrance or a roundabout, explicitly taking arbitrary road geometry and prediction information about other traffic participants into account. We extend former contributions in this field by providing a flexible problem description and a trajectory planner that is not specialized to distinct classes of maneuvers beforehand. Using a carefully chosen representation of the dynamic free space, the method is capable of considering multiple lanes, including the predicted dynamics of other traffic participants, while remaining real-time capable. The combination of these properties in one general planning method constitutes the novelty of the proposed approach. We demonstrate the capability of our algorithm to plan safe trajectories in real time, both in simulation and in real traffic.
The Relationship between Income Inequality, Poverty and Globalization
This paper introduces two composite indices of globalization. The first is based on the Kearney/Foreign Policy magazine index and the second is obtained from principal component analysis. They indicate the level of globalization and show how globalization has developed over time for different countries. The indices are composed of four components: economic integration, personal contact, technology, and political engagement, each generated from a number of indicators. A breakdown of the index into major components makes it possible to identify the sources of globalization at the country level and associate them with economic policy measures. The empirical results show that a low rank in the globalization process is due, in addition to involvement in conflicts, to economic and technology factors that developing countries have limited ability to influence. The highly ranked developed countries share similar patterns in the distribution of the various components. The indices were also used in a regression analysis to study the causal relationships between income inequality, poverty, and globalization. The results show evidence of a weak and negative relationship between globalization and both income inequality and poverty.
An End-to-End Trainable Neural Network for Image-Based Sequence Recognition and Its Application to Scene Text Recognition
Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling, and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences of arbitrary length, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performance in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text, and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies its generality.
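The transcription layer of such an architecture typically uses CTC-style decoding. As a hedged illustration (not the paper's code; the toy per-frame scores below are invented), the snippet shows lexicon-free greedy CTC decoding: take the most likely label per frame, collapse repeats, then drop blanks, so no character segmentation is needed:

```python
def ctc_greedy_decode(frame_scores, alphabet, blank=0):
    """Greedy CTC decoding: argmax label per frame, collapse repeated
    labels, then remove blanks. Label 0 is the blank; label k > 0 maps
    to alphabet[k - 1]."""
    best = [max(range(len(f)), key=f.__getitem__) for f in frame_scores]
    out, prev = [], None
    for k in best:
        if k != prev and k != blank:
            out.append(alphabet[k - 1])
        prev = k
    return "".join(out)

# Five frames over the alphabet "ab": blank, a, a, blank, b -> "ab"
frames = [[0.9, 0.05, 0.05],
          [0.1, 0.8, 0.1],
          [0.1, 0.8, 0.1],
          [0.9, 0.05, 0.05],
          [0.1, 0.1, 0.8]]
decoded = ctc_greedy_decode(frames, "ab")  # "ab"
```

The blank symbol is what allows genuinely repeated characters: the frame sequence a, blank, a decodes to "aa", whereas a, a collapses to a single "a".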
Fast and Concurrent RDF Queries with RDMA-Based Distributed Graph Exploration
Many public knowledge bases are represented and stored as RDF graphs, where users can issue structured queries on such graphs using SPARQL. With massive queries over large and constantly growing RDF data, it is imperative that an RDF graph store provide low latency and high throughput for concurrent query processing. However, prior systems still experience high per-query latency over large datasets, and most prior designs have poor resource utilization, processing each query in sequence. We present Wukong, a distributed graph-based RDF store that leverages RDMA-based graph exploration to provide highly concurrent and low-latency queries over large datasets. Wukong is novel in three ways. First, Wukong provides an RDMA-friendly distributed key/value store with differentiated encoding and fine-grained partitioning of graph data to reduce RDMA transfers. Second, Wukong leverages full-history pruning to avoid the cost of expensive final join operations, based on the observation that the cost of one-sided RDMA operations is, to a certain extent, largely oblivious to the payload size. Third, countering the conventional wisdom of preferring migration of execution over data, Wukong seamlessly combines data migration for low latency and execution distribution for high throughput by leveraging the low latency and high throughput of one-sided RDMA operations, and proposes a worker-obliger model for efficient load balancing. Evaluation on a 6-node RDMA-capable cluster shows that Wukong significantly outperforms state-of-the-art systems like TriAD and Trinity.RDF for both latency and throughput, usually by orders of magnitude. (Wukong is short for Sun Wukong, known as the Monkey King, a main character in the Chinese classical novel "Journey to the West". Since Wukong is known for his extremely fast speed (21,675 kilometers in one somersault) and his ability to clone himself for massive multi-tasking, we named our system Wukong.)
The source code and brief instructions on how to install Wukong are available at http://ipads.se.sjtu.edu.cn/projects/wukong.
An efficient method of license plate location
License plate location is an important stage in vehicle license plate recognition for automated transport systems. This paper presents a real-time and robust method of license plate location. A license plate area contains rich edge and texture information. We first extract the vertical edges of the car image using image enhancement and the Sobel operator, then remove most of the background and noise edges with an effective algorithm, and finally search for the plate region with a rectangular window in the residual edge image and segment the plate out of the original car image. Experimental results demonstrate the robustness and efficiency of our method.
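The pipeline described above can be caricatured in a few lines. The sketch below is an illustrative toy, not the authors' implementation; restricting the edge step to a plain vertical Sobel response and using a fixed window size are simplifying assumptions. It computes vertical-edge responses and slides a rectangular window over the edge map, returning the position with the highest edge energy, which on plate-like regions of dense vertical strokes tends to be the plate:

```python
import numpy as np

def locate_plate_band(gray, win=(20, 60)):
    """Toy plate search: vertical Sobel edge map, then exhaustive
    rectangular-window scan for the region of maximum edge energy.
    Returns the (row, col) of the best window's top-left corner."""
    gray = np.asarray(gray, dtype=float)
    # Vertical-edge Sobel kernel (responds to horizontal intensity changes,
    # i.e., the vertical strokes of plate characters)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    h, w = gray.shape
    edges = np.zeros_like(gray)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            edges[y, x] = abs((gray[y - 1:y + 2, x - 1:x + 2] * kx).sum())
    wh, ww = win
    best, best_pos = -1.0, (0, 0)
    for y in range(h - wh + 1):
        for x in range(w - ww + 1):
            s = edges[y:y + wh, x:x + ww].sum()
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos

# Synthetic test image: a 20x60 patch of vertical stripes (a crude stand-in
# for plate characters) embedded at row 20, column 40 of a blank image
img = np.zeros((60, 120))
img[20:40, 40:100][:, ::4] = 255.0
by, bx = locate_plate_band(img)
```

The paper's method adds the steps this sketch omits, such as image enhancement before edge extraction and a background/noise-edge removal pass before the window search, which are what make the approach robust on real car images.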
Comparison of clinical and angiographic prognostic risk scores in patients with acute coronary syndromes: Analysis from the Acute Catheterization and Urgent Intervention Triage StrategY (ACUITY) trial.
BACKGROUND Several prognostic risk scores have been developed for patients with coronary artery disease, but their comparative use in patients with non-ST-segment elevation acute coronary syndromes (NSTEACS) undergoing percutaneous coronary intervention (PCI) has not been examined. We therefore investigated the accuracy of the Synergy Between PCI With Taxus and Cardiac Surgery (SYNTAX) score, the Clinical SYNTAX score (CSS), the New Risk Stratification (NERS) score, the Age, Creatinine, Ejection Fraction (ACEF) score, the Global Registry of Acute Coronary Events (GRACE) score, and the Thrombolysis in Myocardial Infarction (TIMI) score for risk assessment of 1-year mortality, cardiac mortality, myocardial infarction, target vessel revascularization, and stent thrombosis in patients with NSTEACS undergoing PCI. METHODS The 6 scores were determined in 2,094 patients with NSTEACS treated with PCI enrolled in the angiographic substudy of the ACUITY trial. The prognostic accuracy of the 6 scores was assessed using the c statistic for discrimination and the Hosmer-Lemeshow test for calibration. The index of separation and net reclassification improvement (NRI) were also determined. RESULTS Scores incorporating clinical and angiographic variables (CSS and NERS) showed the best tradeoff between discrimination and calibration for most end points, with the best discrimination for all end points and good calibration for most of them. The CSS had the best index of separation for most ischemic end points and displayed an NRI for cardiac death and myocardial infarction (MI) compared with the other scores, whereas the NERS displayed an NRI for all-cause death and target vessel revascularization. The 3 scores (CSS, NERS, and SYNTAX) were the only scores to have both good discrimination and calibration for cardiac mortality.
CONCLUSIONS In patients with NSTEACS undergoing PCI, risk scores incorporating clinical and angiographic variables had the highest predictive accuracy for a broad spectrum of ischemic end points.
UNDERSTANDING NORMAL AND ATYPICAL OPERATIONS THROUGH ANALYSIS OF FLIGHT DATA
Over the past several years, airlines have initiated or participated in a number of safety data programs. Each involves collection of voluntary safety reports or the monitoring of flight data. These initiatives grew from recognition that mitigating safety risks requires monitoring a variety of data streams – reports, observations, and flight data. They have spawned technologies within air carriers, including Airline Safety Action Programs (ASAP), Line Operational Safety Audits (LOSA), improved analysis of training and checking data through the Advanced Qualification Program (AQP), and Flight Operational Quality Assurance (FOQA) programs. NASA has supported development of an Aviation Performance Measuring System (APMS) to facilitate development of advanced concepts and prototype software for the analysis of flight data, in order to further FOQA programs toward the proactive management of safety risk. This paper will discuss the functions that can be served by flight data analysis, describe the development of tools for those functions, and review applications of these tools which advance knowledge gained from flight data.
Be Fast, Cheap and in Control with SwitchKV
SwitchKV is a new key-value store design that combines high-performance cache nodes with resource-constrained backend nodes to provide load balancing in the face of unpredictable workload skew. The cache nodes absorb the hottest queries so that no individual backend node is over-burdened. Compared with previous designs, SwitchKV exploits SDN techniques and deeply optimized switch hardware to enable efficient content-based routing. Programmable network switches keep track of cached keys and route requests to the appropriate nodes at line speed, based on keys encoded in packet headers. A new hybrid caching strategy keeps cache and switch forwarding rules updated with low overhead and ensures that system load is always well balanced under rapidly changing workloads. Our evaluation results demonstrate that SwitchKV can achieve up to 5× throughput and 3× latency improvements over traditional system designs.
Tolkien, the Author and the Critic
Tolkien not only taught medieval literature but also edited medieval texts, such as Sir Gawain and the Green Knight and Ancrene Wisse, translated Pearl (a 14th-century poem), and wrote important articles on Beowulf and Sir Gawain. A common feature of these articles is the claim of originality: Tolkien asserts that he takes the opposite view from his predecessors; besides, his reflection on medieval literature seems, over twenty years, extremely coherent concerning heroism and war. Moreover, what seems important to him in these texts corresponds to important elements in his fiction. When the relation between fiction and non-fiction is examined more globally, Tolkien's works appear to present a peculiar relation between his fiction and his essays and articles (his critical comments on medieval literature).
Fast Cryptographic Computation on Intel ® Architecture Processors Via Function Stitching
Cryptographic applications often run more than one independent algorithm, such as encryption and authentication. This fact provides a high level of parallelism which can be exploited by software and converted into instruction-level parallelism to improve overall performance on modern super-scalar processors. We present fast and efficient methods of computing such pairs of functions on IA processors using a method called "function stitching". Instead of computing pairs of functions sequentially, as is done today in applications and libraries, we replace the function calls with a single call to a composite function that implements both algorithms. The execution time of this composite function can be made significantly shorter than the sum of the execution times of the individual functions and, in many cases, close to the execution time of the slower function. Function stitching is best done at a very fine grain, interleaving the code for the individual algorithms at instruction-level granularity. This results in excellent utilization of the execution resources in the processor core with a single thread. We show how stitching pairs of functions together in a fine-grained manner yields excellent performance on IA processors, demonstrating gains of 1.4x-1.9x with stitching over the best sequential function performance. We show performance results achieved by this method on Intel® processors based on the Westmere architecture.
Medicinal properties of terpenes found in Cannabis sativa and Humulus lupulus.
Cannabaceae plants Cannabis sativa L. and Humulus lupulus L. are rich in terpenes; in both, terpenes typically make up to 3-5% of the dry mass of the female inflorescence. The terpenes of cannabis and hops are mostly simple mono- and sesquiterpenes, derived from two and three isoprene units, respectively. Some terpenes are relatively well known for their potential in biomedicine and have been used in traditional medicine for centuries, while others are yet to be studied in detail. This comprehensive review presents the terpenes found in cannabis and hops. The medicinal properties of these terpenes are supported by numerous in vitro, animal, and clinical trials and include anti-inflammatory, antioxidant, analgesic, anticonvulsive, antidepressant, anxiolytic, anticancer, antitumor, neuroprotective, anti-mutagenic, anti-allergic, antibiotic, and anti-diabetic attributes, among others. Because of their very low toxicity, these terpenes are already widely used as food additives and in cosmetic products; thus, they have been proven safe and well tolerated.
BiNChE: A web tool and library for chemical enrichment analysis based on the ChEBI ontology
Ontology-based enrichment analysis aids in the interpretation and understanding of large-scale biological data. Ontologies are hierarchies of biologically relevant groupings. Using ontology annotations, which link ontology classes to biological entities, enrichment analysis methods assess whether there is a significant over- or under-representation of entities for particular ontology classes. While many tools exist that run enrichment analysis for protein sets annotated with the Gene Ontology, only a few can be used for small-molecule enrichment analysis. We describe BiNChE, an enrichment analysis tool for small molecules based on the ChEBI ontology. BiNChE displays an interactive graph that can be exported as a high-resolution image or in network formats. The tool provides plain, weighted, and fragment analysis based on either the ChEBI Role Ontology or the ChEBI Structural Ontology. BiNChE aids in the exploration of large sets of small molecules produced in metabolomics or other systems biology research contexts. The open-source tool provides easy and highly interactive web access to enrichment analysis with the ChEBI ontology and is additionally available as a standalone library.
IGCV3: Interleaved Low-Rank Group Convolutions for Efficient Deep Neural Networks
In this paper, we are interested in building lightweight and efficient convolutional neural networks. Inspired by the success of two design patterns, composition of structured sparse kernels, e.g., interleaved group convolutions (IGC), and composition of low-rank kernels, e.g., bottleneck modules, we study the combination of these two design patterns, using the composition of structured sparse low-rank kernels to form a convolutional kernel. Rather than introducing a complementary condition over channels, we introduce a loose complementary condition, formulated by imposing the complementary condition over super-channels, to guide the design toward a dense composed convolutional kernel. The resulting network is called IGCV3. We empirically demonstrate that the combination of low-rank and sparse kernels boosts performance, and that our proposed approach is superior to the state-of-the-art IGCV2 and MobileNetV2 on image classification on CIFAR and ImageNet and on object detection on COCO. Code and models are available at https://github.com/homles11/IGCV3.