Deep Learning For Video Saliency Detection
This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: (1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data; and (2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency stimuli, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image datasets, which enables our network to learn diverse saliency stimuli and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency stimuli, thus producing accurate spatiotemporal saliency estimates. We advance the state-of-the-art on the DAVIS dataset (MAE of .06) and the FBMS dataset (MAE of .07), and do so with much improved speed (2 fps with all steps) on one GPU.
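The abstract's data-augmentation idea, simulating video training clips from annotated still images, can be sketched in a few lines. This is an illustrative stand-in rather than the paper's actual pipeline: here a clip is synthesised by applying small random translations to an image and its saliency mask, mimicking object or camera motion (the function name and parameters are hypothetical):

```python
import numpy as np

def simulate_clip(image, mask, n_frames=4, max_shift=3, seed=0):
    """Synthesise a short training clip from one annotated image by
    applying the same small random translation to the image and its
    saliency mask at each step (hypothetical parameters)."""
    rng = np.random.default_rng(seed)
    frames, masks = [image], [mask]
    for _ in range(n_frames - 1):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        # shift image and mask identically so the annotation stays aligned
        frames.append(np.roll(frames[-1], (dy, dx), axis=(0, 1)))
        masks.append(np.roll(masks[-1], (dy, dx), axis=(0, 1)))
    return np.stack(frames), np.stack(masks)

img = np.random.rand(32, 32)
msk = (img > 0.8).astype(np.float32)
clip, labels = simulate_clip(img, msk)
```

Each synthetic frame keeps a pixel-accurate saliency label for free, which is what makes image datasets usable as video training data.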
Effects of Raloxifene on Fracture Severity in Postmenopausal Women with Osteoporosis: Results from the MORE Study
Raloxifene reduces the risk of new vertebral fractures, but its effect on the severity of these new fractures has not been determined. The MORE (Multiple Outcomes of Raloxifene Evaluation) trial studied the effects of placebo, raloxifene 60 or 120 mg/day in 7705 postmenopausal women with osteoporosis. Radiologists assessed new vertebral fractures from radiographs and graded the fracture severity as normal (no fracture) or mild, moderate or severe. New clinical vertebral fractures were defined as new vertebral fractures associated with symptoms, such as back pain, and confirmed in radiographs. In the total study population, the majority (76.4%) of the women who experienced clinical vertebral fractures were diagnosed with new moderate/severe vertebral fractures. In turn, women with moderate/severe vertebral fractures in the overall population were more likely to experience clinical symptoms suggestive of fracture than were women who had new mild-only vertebral fractures. The incidence of new mild-only and moderate/severe fractures was the same in women without prevalent vertebral fractures, but the incidence of new moderate/severe fractures was 2 to 3 times higher than that for new mild-only fractures in women with prevalent vertebral fractures. Raloxifene 60 mg/day decreased the risk of at least 1 new moderate/severe vertebral fracture by 61% in women without prevalent vertebral fractures [RR 0.39 (95% CI 0.17, 0.69)], and by 37% in women with prevalent vertebral fractures [RR 0.63 (95% CI 0.49, 0.83)] at 3 years. The risk reductions for at least 1 new moderate/severe vertebral fracture were not significantly different between the raloxifene doses, in women with and without prevalent vertebral fractures. 
The effect of raloxifene in significantly decreasing the risk of new moderate/severe vertebral fractures may explain the risk reduction for new painful clinical vertebral fractures observed with raloxifene, and is particularly important in postmenopausal women with severe osteoporosis, who are at higher risk for moderate or severe fractures.
Foundations for relativistic quantum theory. I. Feynman’s operator calculus and the Dyson conjectures
In this paper, we provide a representation theory for the Feynman operator calculus. This allows us to solve the general initial-value problem and construct the Dyson series. We show that the series is asymptotic, thus proving Dyson’s second conjecture for quantum electrodynamics. In addition, we show that the expansion may be considered exact to any finite order by producing the remainder term. This implies that every nonperturbative solution has a perturbative expansion. Using a physical analysis of information from experiment versus that implied by our models, we reformulate our theory as a sum over paths. This allows us to relate our theory to Feynman’s path integral, and to prove Dyson’s first conjecture that the divergences are in part due to a violation of Heisenberg’s uncertainty relations.
Association between perioperative dexmedetomidine and arrhythmias after surgery for congenital heart disease.
BACKGROUND Dexmedetomidine is commonly used after congenital heart surgery and may be associated with a decreased incidence of postoperative tachyarrhythmias. Using a large cohort of patients undergoing congenital heart surgery, we examined for an association between dexmedetomidine use in the immediate postoperative period and subsequent arrhythmia development. METHODS AND RESULTS A total of 1593 surgical procedures for congenital heart disease were performed. Dexmedetomidine was administered in the immediate postoperative period after 468 (29%) surgical procedures. When compared with 1125 controls, the group receiving dexmedetomidine demonstrated significantly fewer tachyarrhythmias (29% versus 38%; P<0.001), tachyarrhythmias receiving intervention (14% versus 23%; P<0.001), bradyarrhythmias (18% versus 22%; P=0.03), and bradyarrhythmias receiving intervention (12% versus 16%; P=0.04). After propensity score matching with 468 controls, the arrhythmia incidence between groups became similar: tachyarrhythmias (29% versus 31%; P=0.66), tachyarrhythmias receiving intervention (14% versus 17%; P=0.16), bradyarrhythmias (18% versus 15%; P=0.44), and bradyarrhythmias receiving intervention (12% versus 9%; P=0.17). After excluding controls exposed to dexmedetomidine at a later time in the hospitalization, dexmedetomidine was associated with increased odds of bradyarrhythmias receiving intervention (odds ratio, 2.18; 95% confidence interval, 1.02-4.65). Furthermore, there was a dose-dependent increase in the odds of bradyarrhythmias (odds ratio, 1.04; 95% confidence interval, 1.01-1.07) and bradyarrhythmias receiving intervention (odds ratio, 1.05; 95% confidence interval, 1.01-1.08). CONCLUSIONS Although dexmedetomidine exposure in the immediate postoperative period is not associated with a clinically meaningful difference in the incidence of tachyarrhythmias after congenital heart surgery, it may be associated with increased odds of bradyarrhythmias.
Gingival Health and Plaque Regrowth Response Following a Four-Week Interdental Hygiene Intervention.
OBJECTIVES To compare the efficacy of three adjunct interproximal cleaning methods versus a manual toothbrush alone on gingivitis, and demonstrate that the Philips Sonicare AirflossPro™ interproximal (IP) cleaning device provides a similar reduction in gingivitis and plaque compared to string floss. METHODS A randomized, single-blind, parallel-design study was conducted on generally healthy adults exhibiting mild to moderate gingivitis. Eligible subjects were non-smokers, aged 18-65 years, with a plaque score of ≥ 0.5 per the Rustogi Modified Navy Plaque Index (RMNPI) and a Gingival Bleeding Index (GBI) of ≥ 1 on at least 10 sites. Eligible subjects were randomly assigned to use one of four oral hygiene regimens: manual toothbrush (MTB) alone; MTB plus string floss (SF); MTB plus Philips Sonicare AirflossPro used with Cool Mint Listerine® Antiseptic (AFPL); and MTB plus Philips Sonicare AirflossPro used with BreathRx™ (AFPB). Subjects were followed over a 28-day home-use period, with follow-up visits for efficacy and safety conducted at Days 14 and 28. All subjects were instructed to use the MTB twice daily and perform interproximal cleaning once daily, if assigned. Study efficacy endpoints included the Modified Gingival Index (MGI), Rustogi Modified Navy Plaque Index, and the Gingival Bleeding Index. RESULTS Of 290 randomized subjects, 287 were followed to Day 14 and 286 were followed to Day 28. For the primary endpoint at Day 14, significantly larger reductions in MGI were observed in each of the three IP cleaning groups compared to MTB alone (p < 0.001). The adjusted mean reductions and standard error estimates (SE) for MGI expressed as a percent reduction from Baseline at Day 14 were: 0.22% (0.55%) for MTB; 4.30% (0.44%) for SF; 4.55% (0.45%) for AFPL; and 4.20% (0.44%) for AFPB. A non-inferiority test comparing AirflossPro to SF showed AirflossPro to be non-inferior to SF (p < 0.001). 
CONCLUSIONS The addition of interproximal cleaning to manual tooth brushing statistically significantly reduces gingivitis and plaque compared to manual tooth brushing alone. Among the adjunct interproximal cleaning regimens, AirflossPro provides a similar reduction in gingivitis and plaque to string floss. All study regimens were safe on oral tissues.
RFID-based localization and tracking technologies
Radio frequency identification usually incorporates a tag into an object for the purpose of identification or localization using radio signals. It has gained much attention recently due to its advantages in terms of low cost and ease of deployment. This article presents an overview of RFID-based localization and tracking technologies, including tag-based (e.g., LANDMARC), reader-based (e.g., reverse RFID), transceiver-free, and hybrid approaches. These technologies mainly use the readily available resource of radio signal strength information or RSS change information to localize the target objects. A number of well-known approaches and their limitations are introduced. This article also indicates the challenges and possible solutions in the near and long terms. The challenges include multipath propagation, interference, and localizing multiple objects, among others. Most of these challenges exist not only for RFID-based localization, but also for other RF-based localization technologies.
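For readers unfamiliar with LANDMARC, the tag-based approach named above, its core idea can be sketched concisely: locate a tracked tag by comparing its RSS vector against reference tags at known positions, then take a weighted centroid of the k nearest references using inverse-squared RSS-distance weights. The reader layout, grid, and idealised path-loss model below are made up for illustration:

```python
import numpy as np

def landmarc_locate(target_rss, ref_rss, ref_pos, k=3):
    """LANDMARC-style position estimate.

    target_rss: (n_readers,) RSS vector of the tracked tag
    ref_rss:    (n_refs, n_readers) RSS vectors of the reference tags
    ref_pos:    (n_refs, 2) known positions of the reference tags
    """
    # Euclidean distance in RSS space between the target and each reference tag
    e = np.linalg.norm(ref_rss - target_rss, axis=1)
    nearest = np.argsort(e)[:k]                # k nearest reference tags
    w = 1.0 / (e[nearest] ** 2 + 1e-9)         # inverse-squared weighting
    w /= w.sum()
    return w @ ref_pos[nearest]                # weighted centroid

# Toy setup: three readers, reference tags on a 2 m grid, and an idealised
# path-loss model where "RSS" is simply the negative distance to each reader.
readers = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]])
ref_pos = np.array([[x, y] for x in range(0, 11, 2)
                           for y in range(0, 11, 2)], dtype=float)
rss = lambda p: -np.linalg.norm(readers - p, axis=1)
ref_rss = np.array([rss(p) for p in ref_pos])
est = landmarc_locate(rss(np.array([4.0, 5.0])), ref_rss, ref_pos, k=4)
```

With real hardware the multipath and interference challenges discussed in the article distort the RSS vectors, which is exactly why the reference tags (which suffer the same distortions) help.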
Risk Factors in Oil and Gas Industry Returns: International Evidence
This paper analyzes the exposure of the oil and gas industry of 34 countries to oil prices. Using a multifactor panel model to estimate the oil and gas excess stock returns, our results strongly support the view that oil price is a globally priced factor for the oil industry. In particular, the response of the oil and gas sector to changes in oil prices is positive and larger for developed countries than for emerging markets. The industry response is asymmetric, with positive oil price changes having a greater impact on the oil sector returns than negative changes. Furthermore, local market index returns, currency rates and oil price volatility also have a significant impact on the oil industry's excess returns. Finally, industry local sensitivities seem to vary with stock market activity and with levels of appropriation of industry revenues by governments. Results are robust to a battery of tests.
CLASSIFICATION BASED ON ASSOCIATION-RULE MINING TECHNIQUES: A GENERAL SURVEY AND EMPIRICAL COMPARATIVE EVALUATION
In this paper, classification and association rule mining algorithms are discussed and demonstrated. In particular, we investigate the problem of association rule mining and compare popular association rule algorithms. The classic problem of classification in data mining is also discussed. The paper further considers the use of association rule mining in classification, demonstrating a recently proposed algorithm for this purpose. Finally, a comprehensive experimental study on 13 UCI data sets is presented to evaluate and compare traditional and association-rule-based classification techniques with regard to classification accuracy, number of derived rules, rule features and processing time.
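The support/confidence machinery underlying both classic association rule mining and associative classification can be illustrated in a few lines. The transactions and items below are made up; class labels are modelled as distinguished items, as in CBA-style associative classifiers:

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimated P(consequent | antecedent) over the transactions."""
    return (support(set(antecedent) | set(consequent), transactions)
            / support(antecedent, transactions))

# Toy class-labelled transactions ("buys" plays the role of a class label).
txns = [{"milk", "bread", "buys"}, {"milk", "bread", "buys"},
        {"milk", "buys"}, {"bread"}, {"milk", "bread"}]

s = support({"milk", "bread"}, txns)               # 3 of 5 transactions
c = confidence({"milk", "bread"}, {"buys"}, txns)  # 2 of those 3
```

A rule {milk, bread} → buys is kept for classification only if its support and confidence clear user-chosen thresholds, which is where rule count and accuracy trade off.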
Reproductive and hormonal factors, and ovarian cancer risk for BRCA1 and BRCA2 mutation carriers: results from the International BRCA1/2 Carrier Cohort Study.
BACKGROUND Several reproductive and hormonal factors are known to be associated with ovarian cancer risk in the general population, including parity and oral contraceptive (OC) use. However, their effect on ovarian cancer risk for BRCA1 and BRCA2 mutation carriers has only been investigated in a small number of studies. METHODS We used data on 2,281 BRCA1 carriers and 1,038 BRCA2 carriers from the International BRCA1/2 Carrier Cohort Study to evaluate the effect of reproductive and hormonal factors on ovarian cancer risk for mutation carriers. Data were analyzed within a weighted Cox proportional hazards framework. RESULTS There were no significant differences in the risk of ovarian cancer between parous and nulliparous carriers. For parous BRCA1 mutation carriers, the risk of ovarian cancer was reduced with each additional full-term pregnancy (P trend = 0.002). BRCA1 carriers who had ever used OC were at a significantly reduced risk of developing ovarian cancer (hazard ratio, 0.52; 95% confidence intervals, 0.37-0.73; P = 0.0002) and increasing duration of OC use was associated with a reduced ovarian cancer risk (P trend = 0.0004). The protective effect of OC use for BRCA1 mutation carriers seemed to be greater among more recent users. Tubal ligation was associated with a reduced risk of ovarian cancer for BRCA1 carriers (hazard ratio, 0.42; 95% confidence intervals, 0.22-0.80; P = 0.008). The number of ovarian cancer cases in BRCA2 mutation carriers was too small to draw definitive conclusions. CONCLUSIONS The results provide further confirmation that OC use, number of full-term pregnancies, and tubal ligation are associated with ovarian cancer risk in BRCA1 carriers to a similar relative extent as in the general population.
Building Information Modeling
Building Information Modeling is based on the idea of the continuous use of digital building models throughout the entire lifecycle of a built facility, starting from the early conceptual design and detailed design phases, to the construction phase, and the long phase of operation. BIM significantly improves information flow between stakeholders involved at all stages, resulting in an increase in efficiency by reducing the laborious and error-prone manual re-entering of information that dominates conventional paper-based workflows. Thanks to its many advantages, BIM is already practiced in many construction projects throughout the entire world. However, the fragmented nature of the construction industry still impedes its more widespread use. Government initiatives around the world play an important role in increasing BIM adoption: as the largest client of the construction industry in many countries, the state has the power to significantly change its work practices. This chapter discusses the motivation for applying BIM, offers a detailed definition of BIM along with an overview of typical use cases, describes the common BIM maturity grades and reports on BIM adoption levels in various countries around the globe. (A. Borrmann, Technical University of Munich; M. König, Ruhr University Bochum; C. Koch, Bauhaus-Universität Weimar; J. Beetz, RWTH Aachen University. © Springer International Publishing AG, part of Springer Nature 2018. In: A. Borrmann et al. (eds.), Building Information Modeling, https://doi.org/10.1007/978-3-319-92862-3_1.) 1.1 Building Information Modeling: Why? 
In the last decade, digitalization has transformed a wide range of industrial sectors, resulting in a tremendous increase in productivity, product quality and product variety. In the Architecture, Engineering, Construction (AEC) industry, digital tools are increasingly adopted for designing, constructing and operating buildings and infrastructure assets. However, the continuous use of digital information along the entire process chain falls significantly behind other industry domains. All too often, valuable information is lost because information is still predominantly handed over in the form of drawings, either as physical printed plots on paper or in a digital but limited format. Such disruptions in the information flow occur across the entire lifecycle of a built facility: in its design, construction and operation phases as well as in the very important handovers between these phases. The planning and realization of built facilities is a complex undertaking involving a wide range of stakeholders from different fields of expertise. For a successful construction project, a continuous reconciliation and intense exchange of information among these stakeholders is necessary. Currently, this typically involves the handover of technical drawings of the construction project in graphical manner in the form of horizontal and vertical sections, views and detail drawings. The software used to create these drawings imitate the centuries-old way of working using a drawing board. However, line drawings cannot be comprehensively understood by computers. The information they contain can only be partially interpreted and processed by computational methods. Basing the information flow on drawings alone therefore fails to harness the great potential of information technology for supporting project management and building operation. A key problem is that the consistency of the diverse technical drawings can only be checked manually. 
This is a potentially massive source of errors, particularly if we take into account that the drawings are typically created by experts from different design disciplines and across multiple companies. Design changes are particularly challenging: if they are not continuously tracked and relayed to all related plans, inconsistencies can easily arise and often remain undiscovered until the actual construction – where they then incur significant extra costs for ad-hoc solutions on site. In conventional practice, design changes are marked only by means of revision clouds in the drawings, which can be hard to detect and ambiguous. The limited information depth of technical drawings also has a significant drawback in that information on the building design cannot be directly used by downstream applications for any kind of analysis, calculation and simulation, but must be re-entered manually which again requires unnecessary additional work and is a further source of errors. The same holds true for the information handover to the building owner after the construction is finished. The owner must invest considerable effort into extracting the required information for operating the building from the drawings and documents and entering it into a facility management system. At each of these information exchange points, data that was once available in digital form is lost and has to be laboriously re-created (Fig. 1.1: loss of information caused by disruptions in the digital information flow across the conceptual design, detailed design, construction and operation phases; based on Eastman et al. 2008). This is where Building Information Modeling comes into play. By applying the BIM method, a much more profound use of computer technology in the design, engineering, construction and operation of built facilities is realized. 
Instead of recording information in drawings, BIM stores, maintains and exchanges information using comprehensive digital representations: the building information models. This approach dramatically improves the coordination of the design activities, the integration of simulations, the setup and control of the construction process, as well as the handover of building information to the operator. By reducing the manual re-entering of data to a minimum and enabling the consequent re-use of digital information, laborious and error-prone work is avoided, which in turn results in an increase in productivity and quality in construction projects. Other industry sectors, such as the automotive industry, have already undergone the transition to digitized, model-based product development and manufacturing which allowed them to achieve significant efficiency gains (Kagermann 2015). The Architecture Engineering and Construction (AEC) industry, however, has its own particularly challenging boundary conditions: first and foremost, the process and value creation chain is not controlled by one company, but is dispersed across a large number of enterprises including architectural offices, engineering consultancies, and construction firms. These typically cooperate only for the duration of an individual construction project and not for a longer period of time. Consequently, there are a large number of interfaces in the ad-hoc network of companies where digital information has to be handed over. As these information flows must be supervised and controlled by a central instance, the onus is on the building owner to specify and enforce the use of Building Information Modeling. 1.2 Building Information Modeling: What? A Building Information Model is a comprehensive digital representation of a built facility with great information depth. It typically includes the three-dimensional geometry of the building components at a defined level of detail. 
In addition, it also comprises non-physical objects, such as spaces and zones, a hierarchical project structure, or schedules. Objects are typically associated with a well-defined set of semantic information, such as the component type, materials, technical properties, or costs, as well as the relationships between the components and other physical or logical entities (Fig. 1.2). The term Building Information Modeling (BIM) consequently describes both the process of creating such digital building models as well as the process of maintaining, using and exchanging them throughout the entire lifetime of the built facility (Fig. 1.3). The US National Building Information Modeling Standard defines BIM as follows (NIBS 2012): Building Information Modeling (BIM) is a digital representation of physical and functional characteristics of a facility. A BIM is a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle; defined as existing from earliest conception to demolition. A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of that stakeholder. (Fig. 1.2: a BIM model comprises both the 3D geometry of each building element and a rich set of semantic information provided by attributes and relationships. Fig. 1.3: typical BIM use cases across the lifecycle phases from conceptual design to demolition, including the spatial program, design options, cost estimation, visualization, coordination, simulations and analyses, process simulation, logistics, progress monitoring, and facility management, maintenance and repair.)
Committed subcutaneous preadipocytes are reduced in human obesity
The aim of this study was to test whether the availability of committed preadipocytes in abdominal and femoral subcutaneous adipose tissue varies with obesity and body fat distribution. Body composition, fat cell size, committed preadipocytes and macrophages were measured in subcutaneous abdominal and femoral adipose depots of 17 lean, 16 upper-body-obese (UBO) and 13 lower-body-obese (LBO) women. Preadipocytes and macrophages were identified by simultaneous staining with the respective markers aP2 and CD68. In a subset of samples we measured preadipocyte proliferation, differentiation and susceptibility to apoptosis. Abdominal adipocytes were smaller in lean than in obese women. Committed preadipocytes represented a greater fraction of stromovascular cells in lean than in obese women but were similar between UBO and LBO women (abdomen: ∼30 ± 3 vs ∼17 ± 2%; thigh: ∼30 ± 3 vs ∼17 ± 2%). Preliminary data suggested that preadipocyte kinetics were similar in LBO and lean women, whereas preadipocytes of UBO women differentiated less and were more susceptible to apoptotic stimuli. The fraction of stromovascular cells that were macrophages was greater in both depots in obese women (UBO and LBO) than in normal-weight women, but the difference was not statistically significant. The proportion of subcutaneous adipose tissue stromovascular cells that are committed preadipocytes is reduced with obesity. This could be due to greater recruitment of preadipocytes to adipogenesis or greater preadipocyte apoptosis, depending upon the obesity phenotype. These data are consistent with the concept that body fat distribution may be regulated partly through differences in adipogenesis.
Selected Odes of Pablo Neruda
This is the perfect gift for Valentine's Day. Selected Poems contains Neruda's resonant, exploratory, intensely individualistic verse, rooted in the physical landscape and people of Chile. Here we find sensuous songs of love, tender odes to the sea, melancholy lyrics of heartache, fiery political statements and a frank celebration of sex. This is an enticing, distinctive and celebrated collection of poetry from the greatest twentieth century Latin American poet.
Compact four-port MIMO antenna system at 3.5 GHz
This work proposes a compact multiple-input multiple-output (MIMO) antenna system with four antennas, each fed by a coplanar waveguide (CPW), operating in the 3.5 GHz band (3400–3600 MHz). The proposed MIMO antenna system is realized with two groups of printed inverted-L monopoles of size 5 × 4 mm², placed orthogonally to each other to attain polarization and pattern diversity at the edge of the 15 × 15 mm² non-ground portion of a mobile handset. The system consists of four inverted-L monopole radiators, each with a parasitic shorted strip of the same dimensions extending from the ground plane. The parasitic inverted-L shorted strip serves as a tuning element, reducing the resonant frequency of the monopole radiator and improving the impedance matching. Mutual coupling between the closely packed antennas is addressed by a neutralization line that feeds an anti-phase current to the neighboring antenna. Simulation results show that the impedance bandwidth for S11 ≤ −10 dB is 200 MHz, sufficient for a 5G network communication channel. The diversity parameters of interest, the envelope correlation coefficient and the mean effective gain, are also calculated.
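One of the diversity metrics named above, the envelope correlation coefficient, is commonly approximated from the S-parameters of an antenna pair via the Blanch–Romeu–Corbella closed form, which assumes lossless antennas. A small sketch with made-up S-parameter values (the paper's actual values are not reproduced here):

```python
import numpy as np

def ecc_from_s(s11, s21, s12, s22):
    """Envelope correlation coefficient of a two-port antenna pair from
    S-parameters (closed-form approximation; assumes lossless antennas)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2)
           * (1 - abs(s22) ** 2 - abs(s12) ** 2))
    return num / den

# A well-matched, well-isolated pair should give a very small ECC.
rho = ecc_from_s(0.1, 0.05, 0.05, 0.1)
```

Values well below 0.5 are usually taken as adequate for MIMO diversity, which is why orthogonal placement and neutralization lines (which suppress S21) matter.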
Classifying topics and detecting topic shifts in political manifestos
General political topics, like social security and foreign affairs, recur in electoral manifestos across countries. The Comparative Manifesto Project collects and manually codes manifestos of political parties from all around the world, detecting political topics at sentence level. Since manual coding is time-consuming and allows for annotation inconsistencies, in this work we present an automated approach to topical coding of political manifestos. We first train three independent sentence-level classifiers – one for detecting the topic and two for detecting topic shifts – and then globally optimize their predictions using a Markov Logic network. Experimental results show that the proposed global model achieves high classification performance and significantly outperforms the local sentence-level topic classifier.
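The paper's Markov Logic network jointly optimises the sentence-level predictions; as a rough illustration of why global optimisation helps, here is a much simpler stand-in (not the paper's model): a Viterbi pass that trades each sentence's local topic probability against a fixed, hypothetical penalty for topic shifts:

```python
import numpy as np

def smooth_topics(local_probs, switch_penalty=1.0):
    """Viterbi-style smoothing of per-sentence topic probabilities:
    maximises the summed log-probability minus a fixed penalty per
    topic shift between adjacent sentences."""
    n, k = local_probs.shape
    score = np.log(local_probs[0]).copy()
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        # trans[a, b] = score of being in topic a, then moving to topic b
        trans = score[:, None] - switch_penalty * (1 - np.eye(k))
        back[i] = trans.argmax(axis=0)
        score = trans.max(axis=0) + np.log(local_probs[i])
    path = [int(score.argmax())]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return path[::-1]

# Three sentences over two topics: the middle one is weakly misclassified
# by the local model, and the global pass corrects it.
p = np.array([[0.9, 0.1], [0.45, 0.55], [0.9, 0.1]])
labels = smooth_topics(p)
```

The local classifier alone would label the middle sentence with topic 1; the smoothed sequence keeps all three sentences on topic 0 because the shift penalty outweighs the weak local evidence.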
Studies of WW and WZ production and limits on anomalous WWγ and WWZ couplings
Evidence of anomalous WW and WZ production was sought in pp̄ collisions at a center-of-mass energy of √s = 1.8 TeV. The final states WW(WZ) → μν jet jet + X, WZ → μνee + X and WZ → eνee + X were studied using a data sample corresponding to an integrated luminosity of approximately 90 pb⁻¹. No evidence of anomalous diboson production was found. Limits were set on anomalous WWγ and WWZ couplings and were combined with our previous results. The combined 95% confidence level anomalous coupling limits for Λ = 2 TeV are −0.25 ≤ Δκ ≤ 0.39 (λ = 0) and −0.18 ≤ λ ≤ 0.19 (Δκ = 0), assuming the WWγ couplings are equal to the WWZ couplings.
Asymmetrical interleaving strategy for multi-channel PFC
The continuously increasing efficiency and power density requirements of AC-DC front-end converters pose a major challenge for today's power factor correction (PFC) circuit design. The multi-channel interleaved PFC is a promising candidate to achieve these goals. In this paper, the impact of multi-channel interleaving on the EMI filter design and the output capacitor lifetime is investigated. By properly choosing the interleaving channel number and the switching frequency, the EMI filter size and cost can be effectively reduced. Furthermore, a multi-channel PFC with an asymmetrical interleaving strategy is introduced, and its additional benefit for the EMI filter is identified. At the output side, the impact of different interleaving schemes on the output capacitor ripple cancellation effect is also investigated and compared.
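The ripple-cancellation effect of interleaving can be illustrated numerically. The sketch below is a simplification, not the paper's converter model: each channel's input-current ripple is taken as a unit triangle wave (duty cycle and channel count chosen arbitrarily), and the peak-to-peak ripple of the summed currents is compared with and without symmetrical interleaving:

```python
import numpy as np

def total_ripple_pp(phases, duty=0.45, samples=2000):
    """Peak-to-peak ripple of interleaved channels, each modelled as a
    unit triangular ripple shifted by the given phase (in cycles)."""
    t = np.linspace(0.0, 1.0, samples, endpoint=False)

    def tri(x):  # unit triangle: rises over `duty`, falls over the rest
        x = np.mod(x, 1.0)
        return np.where(x < duty, x / duty, (1 - x) / (1 - duty))

    total = sum(tri(t - p) for p in phases)
    return total.max() - total.min()

sym = total_ripple_pp([0, 1 / 3, 2 / 3])  # symmetrical 3-channel interleave
none = total_ripple_pp([0, 0, 0])         # no interleaving: ripples add up
```

With in-phase channels the unit ripples stack to a peak-to-peak of 3; phase-shifting them by 120° cancels most of the ripple, which is what shrinks the EMI filter. Asymmetrical phase sets can be explored by simply changing the `phases` list.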
A study of deep convolutional auto-encoders for anomaly detection in videos
The detection of anomalous behaviors in automated video surveillance is a recurrent topic in recent computer vision research. Depending on the application field, anomalies can present different characteristics and challenges. Convolutional Neural Networks have achieved the state-of-the-art performance for object recognition in recent years, since they learn features automatically during the training process. From the anomaly detection perspective, the Convolutional Autoencoder (CAE) is an interesting choice, since it captures the 2D structure in image sequences during the learning process. This work uses a CAE in the anomaly detection context, by applying the reconstruction error of each frame as an anomaly score. By exploring the CAE architecture, we also propose a method for aggregating high-level spatial and temporal features with the input frames and investigate how they affect the CAE performance. An easy-to-use measure of video spatial complexity was devised and correlated with the classification performance of the CAE. The proposed methods were evaluated by means of several experiments with public-domain datasets. The promising results support further research in this area. © 2017 Published by Elsevier B.V.
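The scoring rule described above, using each frame's reconstruction error as its anomaly score, is independent of the autoencoder's architecture. The sketch below demonstrates it with a linear autoencoder (one-component PCA) standing in for the CAE, on synthetic 2-D "frames"; everything here is illustrative, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" data lies near a line; the linear autoencoder learns that line.
normal = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:1]                       # keep one principal component

def anomaly_score(x):
    """Per-sample reconstruction error, used as the anomaly score."""
    z = (x - mean) @ basis.T         # encode
    recon = z @ basis + mean         # decode
    return np.linalg.norm(x - recon, axis=1)

normal_scores = anomaly_score(normal)
outlier_score = anomaly_score(np.array([[0.0, 10.0]]))[0]  # off the manifold
```

A sample far from the learned manifold reconstructs poorly and scores far above the normal data, which is exactly how the CAE flags anomalous frames after thresholding.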
Intracoronary and systemic melatonin to patients with acute myocardial infarction: protocol for the IMPACT trial.
INTRODUCTION Ischaemia-reperfusion injury following acute myocardial infarctions (AMI) is an unavoidable consequence of the primary percutaneous coronary intervention (pPCI) procedure. A pivotal mechanism in ischaemia-reperfusion injury is the production of reactive oxygen species following reperfusion. The endogenous hormone, melatonin, works as an antioxidant and could potentially minimise the ischaemia-reperfusion injury. Given intracoronarily, it enables melatonin to work directly at the site of reperfusion. We wish to test if melatonin, as an antioxidant, can minimise the reperfusion injury following pPCI in patients with AMI. MATERIAL AND METHODS The IMPACT trial is a multicentre, randomised, double-blinded, placebo-controlled study. We wish to include 2 × 20 patients with ST-elevation myocardial infarctions undergoing pPCI within six hours from symptom onset. The primary end-point is the Myocardial Salvage Index assessed by cardiovascular magnetic resonance imaging on day 4 (± 1) after pPCI. The secondary end-points are high-sensitivity troponin, creatine kinase myocardial band and clinical events. CONCLUSION The aim of the IMPACT trial is to evaluate the effect of melatonin on reperfusion injuries following pPCI. Owing to its relatively non-toxic profile, melatonin is an easily implementable drug in the clinical setting, and melatonin has the potential to reduce morbidity in patients with AMI. FUNDING This study received no financial support from the industry. TRIAL REGISTRATION www.clinicaltrials.gov, clinical trials identifier: NCT01172171.
Nonlinear Dynamics of Chaotic Attractor of Chua Circuit and Its Application for Secure Communication
The Chua circuit is among the simplest nonlinear circuits that exhibit highly complex dynamical behavior, including chaos with a variety of bifurcation phenomena and attractors. In this paper, the Chua attractor's chaotic oscillator, synchronization and masking communication circuits were designed and simulated. The oscilloscope outputs of the realized electronic Chua circuit are also presented. Simulation and oscilloscope outputs are used to illustrate the accuracy of the designed and realized Chua chaotic oscillator circuits. The Chua system is shown to be suitable for chaotic synchronization circuits and chaotic masking communication circuits using Matlab® and MultiSIM® software. Simulation results are used to visualize and illustrate the effectiveness of the Chua chaotic system in synchronization and in the application of secure communication.
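For readers who want to reproduce the attractor numerically, a minimal simulation sketch of the dimensionless Chua equations follows; the parameter values (alpha = 15.6, beta = 28, piecewise-linear slopes -8/7 and -5/7) are the textbook double-scroll values, assumed here rather than taken from the paper:

```python
import numpy as np

# Dimensionless Chua system with the standard piecewise-linear nonlinearity.
# Parameters below are the classic double-scroll values (an assumption, not
# necessarily the component values used in the paper's realized circuit).
def chua_deriv(state, alpha=15.6, beta=28.0, m0=-8.0/7.0, m1=-5.0/7.0):
    x, y, z = state
    fx = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))  # Chua diode
    return np.array([alpha * (y - x - fx), x - y + z, -beta * y])

def simulate(state, dt=1e-3, steps=50000):
    traj = np.empty((steps, 3))
    for i in range(steps):
        # fixed-step RK4 integration
        k1 = chua_deriv(state)
        k2 = chua_deriv(state + 0.5 * dt * k1)
        k3 = chua_deriv(state + 0.5 * dt * k2)
        k4 = chua_deriv(state + dt * k3)
        state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = state
    return traj
```

Plotting `traj[:, 0]` against `traj[:, 2]` reveals the double-scroll geometry described in the abstract.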
Geometry of the Squared Distance Function to Curves and Surfaces
We investigate the geometry of the function in the plane or in 3-space that associates to each point the square of the shortest distance to a given curve or surface. Particular emphasis is put on second-order Taylor approximants and other local quadratic approximants. Their key role in a variety of geometric optimization algorithms is illustrated with registration in computer vision and surface approximation.
Phase II trial of CPT-11 in myelodysplastic syndromes with excess of marrow blasts
CPT-11 is an antineoplastic agent which acts as a specific inhibitor of DNA topoisomerase I and has a broad spectrum of activity in solid tumors. Very few studies have evaluated the activity of CPT-11 in hematological malignancies. We conducted a phase II trial of CPT-11 in 26 patients with high-risk MDS (RAEB 1: n = 4; RAEB 2: n = 9; MDS having progressed to AML: n = 10; CMML: n = 3) who could not receive anthracycline/cytarabine intensive chemotherapy. Induction therapy consisted of four courses of CPT-11 given intravenously at 200 mg/m2 every 2 weeks. Patient characteristics were: median age, 71 (range 51–77); sex (M/F), 21/5; median % marrow blast cells, 13.5 (range 7–52). Cytogenetics according to the IPSS were: low-risk n = 13, intermediate-risk n = 6, high-risk n = 3, failure or not done n = 4. Six patients stopped treatment after only one or two courses of CPT-11 due to severe infection (n = 2), progressive disease (n = 3), or acute lysis syndrome with renal failure (n = 1). In the 20 patients who received at least three cycles of CPT-11, complete remission was achieved in one case, partial remission in four cases, and hematological improvement in three cases, for an overall response rate of 33% in the 26 patients. Duration of response was short (median 4 months, range 1–6 months) and median survival was 8 months (range 1–23 months). Digestive toxicity (diarrhea) occurred in 26/89 (29%) courses, but was mild (grade 1, 20% of courses; grade 2 or 3, 9% of courses). Hematological toxicity was difficult to assess in non-responders because of initial pancytopenia, but all the patients who responded had grade 3/4 hematological toxicity, associated with grade ⩾2 infection requiring hospitalization in 18% of the courses. No other major toxicity was observed. Thus CPT-11 has interesting activity in MDS with excess of blasts; toxicity is easily managed and most patients can be treated in the outpatient setting.
These results suggest that further evaluation of CPT-11 in MDS is warranted.
A Generalized Dual-Band Wilkinson Power Divider With Parallel $L, C,$ and $R$ Components
A generalized model of a two-way dual-band Wilkinson power divider (WPD) with a parallel LC circuit at the midpoints of the two-segment transformers is proposed and compared with that of a conventional two-way dual-band WPD with a parallel LC circuit at the ends of the two-segment transformers. The sum of the power reflected at an output port and the power transmitted to an isolation port from another isolation port is smaller in the proposed divider than in the conventional divider. Therefore, wide bandwidths for S22, S33, and S32 can be expected for the proposed divider. In the case of equal power division, the frequency characteristics of the return loss at the output ports and of the isolation are wider for the proposed divider than for the conventional one. The resonant frequencies of the LC circuits in the proposed and conventional dividers are equal; however, the inductance L used in the proposed divider is always smaller than that in the conventional divider. Design charts and calculated bandwidths as a function of the frequency ratio from 1 to 7 are presented. In experiments, two symmetrical and two asymmetrical circuits were fabricated. The experimental results showed good agreement with the theoretical results.
Data Storage Security and Privacy in Cloud Computing : A Comprehensive Survey
Cloud computing is a form of distributed computing wherein resources and application platforms are delivered over the Internet on demand and paid for on a utilization basis. Data storage is a main feature that cloud data centres provide to companies and organizations for preserving huge amounts of data. However, some organizations are still not ready to adopt cloud technology owing to a perceived lack of security. This paper describes the different techniques along with a few security challenges, advantages and disadvantages. It also provides an analysis of data security issues and privacy protection matters related to cloud computing: preventing data access by unauthorized users, managing sensitive data, and providing accuracy and consistency of stored data.
A Nonrandomized, Open-Label, Multicenter, Phase 4 Pilot Study on the Effect and Safety of ILUVIEN® in Chronic Diabetic Macular Edema Patients Considered Insufficiently Responsive to Available Therapies (RESPOND).
PURPOSE The aim of this study was to assess the effectiveness and safety of ILUVIEN® in patients with chronic diabetic macular edema (DME) who were insufficiently responsive to prior therapies. METHODS This is a prospective, nonrandomized, multicenter, open-label, phase 4 pilot study assessing the effectiveness and safety of ILUVIEN® involving 12 patients insufficiently responsive to available therapies. Assessments were performed at screening, baseline, week 1, and months 1, 3, 6, 9, and 12. Demographics, medical/ophthalmic history, prior laser, anti-VEGF, and steroid treatments, and lab tests were recorded at screening. A complete ophthalmic examination and SD-OCT were performed at screening and at all follow-up visits. RESULTS The patients showed improvements in best-corrected visual acuity (+3.7 letters), with greater improvement among pseudophakic patients (+6.8 letters) compared with phakic patients (-2.5 letters) 12 months after ILUVIEN®. The mean central subfield thickness decrease from baseline to month 12 was statistically significant, with a rapid reduction in the first week. Regarding safety, only 2 patients showed an intraocular pressure (IOP) increase over 25 mm Hg during the study, and the rise in IOP was well managed with eye drops only. CONCLUSIONS This prospective and pilot study suggests that ILUVIEN® is safe and may be considered effective for chronic DME patients insufficiently responsive to other available therapies as it showed a rapid and sustained improvement of macular edema obtained after treatment with ILUVIEN®.
Adult PTSD and Its Treatment With EMDR : A Review of Controversies , Evidence , and Theoretical Knowledge
“Experiencing trauma is an essential part of being human; history is written in blood” (van der Kolk & McFarlane, 1996, p. 3). As humans, however, we do have an extraordinary ability to adapt to trauma, and resilience is our most common response (Bonanno, 2005). Nonetheless, traumatic experiences can alter one’s social, psychological, and biological equilibrium, and for years memories of the event can taint experiences in the present. Despite advances in our knowledge of posttraumatic stress disorder (PTSD) and the development of psychosocial treatments, almost half of those who engage in treatment for PTSD fail to fully recover (Bradley, Greene, Russ, Dutra, & Westen, 2005). Furthermore, no theory as yet provides an adequate account of all the complex phenomena and processes involved in PTSD, and our understanding of the mechanisms that underlie effective treatments, such as eye movement desensitization and reprocessing (EMDR) and exposure therapy, remains unclear.
Resonant synchronous rectification for high frequency DC/DC converter
Resonant synchronous rectification for a high-frequency, low-voltage, high-current DC/DC converter is proposed in this paper. It features soft switching and no body-diode conduction, which is very promising for high-switching-frequency operation. A family of isolated soft-switching DC/DC converters and two non-isolated soft-switching DC/DC converters implementing the resonant synchronous rectification are proposed. A prototype of a 1 MHz, 48 V-to-1.2 V/28 A, 1/16-brick DC-DC converter demonstrates 84% overall efficiency.
On Kinds of Indiscernibility in Logic and Metaphysics
Using the Hilbert-Bernays account as a spring-board, we first define four ways in which two objects can be discerned from one another, using the non-logical vocabulary of the language concerned. (These definitions are based on definitions made by Quine and Saunders.) Because of our use of the Hilbert-Bernays account, these definitions are in terms of the syntax of the language. But we also relate our definitions to the idea of permutations on the domain of quantification, and their being symmetries. These relations turn out to be subtle---some natural conjectures about them are false. We will see in particular that the idea of symmetry meshes with a species of indiscernibility that we will call `absolute indiscernibility'. We then report all the logical implications between our four kinds of discernibility. We use these four kinds as a resource for stating four metaphysical theses about identity. Three of these theses articulate two traditional philosophical themes: viz. the principle of the identity of indiscernibles (which will come in two versions), and haecceitism. The fourth is recent. Its most notable feature is that it makes diversity (i.e. non-identity) weaker than what we will call individuality (being an individual): two objects can be distinct but not individuals. For this reason, it has been advocated both for quantum particles and for spacetime points. Finally, we locate this fourth metaphysical thesis in a broader position, which we call structuralism. We conclude with a discussion of the semantics suitable for a structuralist, with particular reference to physical theories as well as elementary model theory.
Tilt set-point correction system for balancing robot using PID controller
A balancing robot relies on two wheels for movement. To remain standing and balanced, the controller requires an angle value to be used as the tilt set-point. That angle is the balance point of the robot itself, i.e., the robot's center of gravity. Generally, finding the correct balance point requires manual measurement or trial and error, depending on the robot's mechanical design. However, when the robot is in the balance state and its balance point changes because of moving mechanical parts or a payload, the robot will move towards the heaviest side and then fall. In this research, a cascade PID control system is developed for a balancing robot to keep it balanced without changing the set-point, even if the balance point changes. Two parameters are used as feedback error variables: angle error and distance error. When the robot is about to fall, the distance travelled from the starting position is calculated and used to correct the angle error, so the robot stays balanced without changing the set-point; instead, the controller's error value is manipulated. Based on the experiments performed, the robot can carry a payload of up to 350 grams.
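The cascade idea described above can be sketched as follows; the gains, the simple PID form, and the loop structure details are illustrative assumptions, not the paper's tuned controller:

```python
class PID:
    """Textbook PID with a simple integral accumulator and backward-difference derivative."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Cascade: the outer loop turns distance drift into an angle correction, so the
# fixed tilt set-point need not be re-tuned when the balance point shifts.
def control_step(distance_error, angle, angle_setpoint, outer, inner, dt):
    angle_correction = outer.update(distance_error, dt)
    angle_error = (angle_setpoint + angle_correction) - angle
    return inner.update(angle_error, dt)  # motor command
```

With zero distance error, the cascade reduces to a plain tilt controller; once the robot drifts, the outer loop biases the effective angle error without touching the set-point itself.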
Model-driven engineering of information systems: 10 years and 1000 versions
This paper reports upon ten years of experience in the development and application of model-driven technology. The technology in question was inspired by work on formal methods: in particular, by the B toolkit. It was used in the development of a number of information systems, all of which were successfully deployed in real-world situations. The paper reports upon three systems: one that informed the design of the technology, one that was used by an internal customer, and one that is currently in use outside the development organisation. It records a number of lessons regarding the application of model-driven techniques. Highlights: we report upon ten years of experience in model-driven engineering; we discuss three different, successful applications; we make comparisons with a conventional approach; we draw conclusions as 'lessons' in the application of MDE; we present the Booster technology.
A new interpretation of nonlinear energy operator and its efficacy in spike detection
A nonlinear energy operator (NEO) gives an estimate of the energy content of a linear oscillator. This has been used to quantify the AM-FM modulating signals present in a sinusoid. Here, the authors give a new interpretation of NEO and extend its use in stochastic signals. They show that NEO accentuates the high-frequency content. This instantaneous nature of NEO and its very low computational burden make it an ideal tool for spike detection. The efficacy of the proposed method has been tested with simulated signals as well as with real electroencephalograms (EEGs).
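The discrete-time operator referred to here is the Teager-Kaiser NEO; a minimal sketch follows, in which the mean-based threshold rule and the factor k are illustrative assumptions rather than the authors' exact detector:

```python
import numpy as np

def neo(x):
    """Discrete nonlinear energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, k=8.0):
    """Flag samples whose NEO output exceeds k times its mean (assumed threshold rule)."""
    psi = neo(x)
    return np.flatnonzero(psi > k * psi.mean())
```

On a pure sinusoid A*sin(w*n) the operator returns the constant A^2 * sin^2(w), while an additive spike produces a sharp local peak, which is why NEO accentuates high-frequency transients at negligible computational cost.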
System-Optimal Routing of Traffic Flows with User Constraints in Networks with Congestion
The design of route guidance systems faces a well-known dilemma. The approach that theoretically yields the system-optimal traffic pattern may discriminate against some users in favor of others. Proposed alternate models, however, do not directly address the system perspective and may result in inferior performance. We propose a novel model and corresponding algorithms to resolve this dilemma. We present computational results on real-world instances and compare the new approach with the well-established traffic assignment model. The essence of this study is that system-optimal routing of traffic flow with explicit integration of user constraints leads to a better performance than the user equilibrium, while simultaneously guaranteeing superior fairness compared to the pure system optimum.
Range of Motion Requirements for Upper-Limb Activities of Daily Living.
OBJECTIVE We quantified the range of motion (ROM) required for eight upper-extremity activities of daily living (ADLs) in healthy participants. METHOD Fifteen right-handed participants completed several bimanual and unilateral basic ADLs while joint kinematics were monitored using a motion capture system. Peak motions of the pelvis, trunk, shoulder, elbow, and wrist were quantified for each task. RESULTS To complete all activities tested, participants needed a minimum ROM of -65°/0°/105° for humeral plane angle (horizontal abduction-adduction), 0°-108° for humeral elevation, -55°/0°/79° for humeral rotation, 0°-121° for elbow flexion, -53°/0°/13° for forearm rotation, -40°/0°/38° for wrist flexion-extension, and -28°/0°/38° for wrist ulnar-radial deviation. Peak trunk ROM was 23° lean, 32° axial rotation, and 59° flexion-extension. CONCLUSION Full upper-limb kinematics were calculated for several ADLs. This methodology can be used in future studies as a basis for developing normative databases of upper-extremity motions and evaluating pathology in populations.
Automatic Number Plate Recognition (ANPR) system for Indian conditions
Automatic Number Plate Recognition (ANPR) is a real time embedded system which automatically recognizes the license number of vehicles. In this paper, the task of recognizing number plate for Indian conditions is considered, where number plate standards are rarely followed.
Cognitive Behavior Therapy for Schizophrenia: Effect Sizes, Clinical Models, and Methodological Rigor
BACKGROUND Guidance in the United States and United Kingdom has included cognitive behavior therapy for psychosis (CBTp) as a preferred therapy. But recent advances have widened the CBTp targets to other symptoms and have different methods of provision, eg, in groups. AIM To explore the effect sizes of current CBTp trials including targeted and nontargeted symptoms, modes of action, and effect of methodological rigor. METHOD Thirty-four CBTp trials with data in the public domain were used as source data for a meta-analysis and investigation of the effects of trial methodology using the Clinical Trial Assessment Measure (CTAM). RESULTS There were overall beneficial effects for the target symptom (33 studies; effect size = 0.400 [95% confidence interval [CI] = 0.252, 0.548]) as well as significant effects for positive symptoms (32 studies), negative symptoms (23 studies), functioning (15 studies), mood (13 studies), and social anxiety (2 studies) with effects ranging from 0.35 to 0.44. However, there was no effect on hopelessness. Improvements in one domain were correlated with improvements in others. Trials in which raters were aware of group allocation had an inflated effect size of approximately 50%-100%. But rigorous CBTp studies showed benefit (estimated effect size = 0.223; 95% CI = 0.017, 0.428) although the lower end of the CI should be noted. Secondary outcomes (eg, negative symptoms) were also affected such that in the group of methodologically adequate studies the effect sizes were not significant. CONCLUSIONS As in other meta-analyses, CBTp had beneficial effect on positive symptoms. However, psychological treatment trials that make no attempt to mask the group allocation are likely to have inflated effect sizes. Evidence considered for psychological treatment guidance should take into account specific methodological detail.
Combining bagging, boosting, rotation forest and random subspace methods
Bagging, boosting, rotation forest and random subspace methods are well-known resampling ensemble methods that generate and combine a diversity of learners using the same learning algorithm for the base classifiers. Boosting and rotation forest algorithms are considered stronger than bagging and random subspace methods on noise-free data. However, there are strong empirical indications that bagging and random subspace methods are much more robust than boosting and rotation forest in noisy settings. For this reason, in this work we build an ensemble of bagging, boosting, rotation forest and random subspace ensembles, with six sub-classifiers in each, and use a voting methodology for the final prediction. We compare this with simple bagging, boosting, rotation forest and random subspace ensembles with 25 sub-classifiers, as well as with other well-known combining methods, on standard benchmark datasets; the proposed technique had better accuracy in most cases.
Performance and carcass characteristics of broiler chickens with different growth potential and submitted to heat stress
In order to evaluate the effects of broiler genotype and heat exposure on performance, carcass characteristics, and protein and fat accretion, six hundred one-day-old male broilers were randomly assigned in a 2 x 3 factorial arrangement according to the following factors: genetic group (selected and non-selected broilers) and pair-feeding scheme (Ad32, reared under heat stress and fed ad libitum; Ad23, reared at thermoneutrality and fed ad libitum; Pf23, reared at thermoneutrality and pair-fed with Ad32), giving a total of six treatments with four replicates of 25 birds each. Independent of pair-feeding scheme, selected broilers showed better feed conversion, higher carcass yield, and a lower abdominal fat deposition rate. However, compared with non-selected broilers, they reduced feed intake more markedly when exposed to heat, which promoted a significant decrease in breast yield and more pronounced changes in carcass chemical composition. These findings allow the conclusion that, in both genetic groups, both environmental temperature and feed-intake restriction influence the abdominal fat deposition rate and other carcass characteristics; however, the impact of heat exposure on broiler performance is more noticeable in the selected line.
Back-translation for discovering distant protein homologies in the presence of frameshift mutations
Frameshift mutations in protein-coding DNA sequences produce a drastic change in the resulting protein sequence, which prevents classic protein alignment methods from revealing the proteins' common origin. Moreover, when a large number of substitutions are additionally involved in the divergence, homology detection becomes difficult even at the DNA level. We developed a novel method to infer distant homology relations between two proteins that accounts for frameshift and point mutations that may have affected the coding sequences. We design a dynamic programming alignment algorithm over memory-efficient graph representations of the complete set of putative DNA sequences of each protein, with the goal of determining the two putative DNA sequences which have the best-scoring alignment under a scoring system designed to reflect the most probable evolutionary process. Our implementation is freely available at http://bioinfo.lifl.fr/path/ . Our approach uncovers evolutionary information that is not captured by traditional alignment methods, which is confirmed by biologically significant examples.
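As background for the dynamic programming alignment mentioned above, here is a classic Needleman-Wunsch global alignment sketch over plain strings; it does not reproduce the paper's graph representation or frameshift-aware scoring, and the match/mismatch/gap parameters are illustrative assumptions:

```python
def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    """Global alignment score via the standard O(n*m) dynamic program."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap            # align prefix of a against gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap            # align prefix of b against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,   # substitution
                              score[i - 1][j] + gap,     # gap in b
                              score[i][j - 1] + gap)     # gap in a
    return score[n][m]
```

The paper's algorithm applies the same recurrence shape, but over graphs encoding every putative back-translated DNA sequence of each protein rather than over two fixed strings.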
Mitigating Catastrophic Forgetting in Temporal Difference Learning with Function Approximation
Neural networks have had many great successes in recent years, particularly with the advent of deep learning and many novel training techniques. One issue that has prevented reinforcement learning from taking full advantage of scalable neural networks is that of catastrophic forgetting. The latter affects supervised learning systems when highly correlated input samples are presented, as well as when input patterns are non-stationary. However, most real-world problems are non-stationary in nature, resulting in prolonged periods of time separating inputs drawn from different regions of the input space. Unfortunately, reinforcement learning presents a worst-case scenario when it comes to precipitating catastrophic forgetting in neural networks. Meaningful training examples are acquired as the agent explores different regions of its state/action space. When the agent is in one such region, only highly correlated samples from that region are typically acquired. Moreover, the regions that the agent is likely to visit will depend on its current policy, suggesting that an agent that has a good policy may avoid exploring particular regions. The confluence of these factors means that without some mitigation techniques, supervised neural networks as function approximation in temporal-difference learning will only be applicable to the simplest test cases. In this work, we develop a feed forward neural network architecture that mitigates catastrophic forgetting by partitioning the input space in a manner that selectively activates a different subset of hidden neurons for each region of the input space. We demonstrate the effectiveness of the proposed framework on a cart-pole balancing problem for which other neural network architectures exhibit training instability likely due to catastrophic forgetting. We demonstrate that our technique produces better results, particularly with respect to a performance-stability measure.
Advancing the “ E ” in K-12 STEM Education
Technological fields, like engineering, are in desperate need of more qualified workers, yet not enough students are pursuing studies in science, technology, engineering, or mathematics (STEM) that would prepare them for technical careers. Unfortunately, many students have no interest in STEM careers, particularly engineering, because they are not exposed to topics in these fields during their K-12 studies. Most K-12 teachers have not been trained to integrate relevant STEM topics into their classroom teaching and curriculum materials. This article explores best practices for bringing engineering into the science and mathematics curriculum of secondary school classrooms by describing a project that utilizes concepts representing the merger of medicine, robotics, and information technology. Specific examples demonstrating the integration into the teaching of physics, biology, and chemistry are provided. Also considered are the critical issues of professional development for classroom teachers, improved preparation of future teachers of STEM, and the development of curriculum materials that address state and national content standards. Introduction Not enough students are interested in pursuing careers in science, mathematics, technology and especially engineering, at a time when the United States has a shortage of qualified workers in STEM fields (NSB, 2008). One of the more critical reasons most students are not interested in pursuing careers in these fields is that they are not exposed to relevant topics in STEM, particularly engineering, during their K-12 studies. Quality curricular materials in these areas are scarce, and teachers have not been trained to incorporate these topics into their curriculum and instruction (Kimmel, Carpinelli, Burr-Alexander, & Rockland, 2006). Therefore, students are not adequately prepared to enter STEM programs in college or pursue careers in STEM fields (NSB, 2008).
As a result, there has been a growing interest in higher education in bringing engineering principles and applications to secondary school mathematics and science classrooms (Kimmel & Rockland, 2002; Kimmel, Carpinelli, Burr-Alexander, & Rockland, 2006). The integration of engineering concepts and applications into the different content areas of the curriculum is one approach. The engineering design process can provide a context that supports teachers in teaching about scientific inquiry, since these processes are parallel in nature and have similar problem-solving characteristics. Robotics encompasses the diverse areas of technology, computer science, engineering, and the sciences. Because of its multidisciplinary nature, using robotics in the classroom can be a valuable tool to increase student motivation and learning. The use of practical, hands-on applications of mathematical and scientific concepts across various engineering topics will help students to link scientific concepts with technology, problem solving, and design, and to apply their classroom lessons to real-life problems. Teachers require a certain set of skills and knowledge to begin integrating technology and engineering concepts into their classroom practices (Boettcher, Carlson, Cyr, & Shambhang, 2005; Zarske, Sullivan, Carlson, & Yowell, 2004). For new teachers this can be part of their pre-service training, but for current teachers comprehensive professional development programs are needed. Factors identified as important for successful professional development programs include: long-term effort; technical assistance and support networks; a collegial atmosphere in which teachers share views and experiences; opportunities for reflection on one's own practice; a focus on teaching for understanding through personal learning experiences; and professional development grounded in classroom practice.
This article provides a brief account of efforts to address the aforementioned issues and summarizes work conducted at the New Jersey Institute of Technology to develop K-12 STEM curricular materials and training programs for secondary science and mathematics teachers in order to integrate engineering principles into classroom instruction. (Authors: Ronald Rockland, Diane S. Bloom, John Carpinelli, Levelle Burr-Alexander, Linda S. Hirsch, and Howard Kimmel.)
Nonharmonic multitone of singing bowls and digital signal generators
This paper presents the issue of nonharmonic multitone generation with the use of singing bowls and digital signal processors. The authors show the possibility of therapeutic applications of such multitone signals. Some known methods of digital generation of tone signals with additional modulation are evaluated. Two designs of very precise multitone generators are presented. In the described generators, digital signal processors synthesize the signal, while additional microcontrollers provide the operator's interface. As a final result, the sound of original singing bowls is compared with the sound synthesized by one of the generators.
An Empirical Evaluation of Density-Based Clustering Techniques
Emergence of modern techniques for scientific data collection has resulted in large-scale accumulation of data pertaining to diverse fields. Conventional database querying methods are inadequate to extract useful information from huge data banks. Cluster analysis is one of the major data analysis methods; it is the art of detecting groups of similar objects in large data sets without having to specify the groups by means of explicit features. The problem of detecting clusters is challenging when the clusters differ in size, density and shape. The development of clustering algorithms has received a lot of attention in recent years and many new clustering algorithms have been proposed. This paper gives a survey of density-based clustering algorithms. DBSCAN [15] is the base algorithm for density-based clustering techniques. One of the advantages of these techniques is that they do not require the number of clusters to be given a priori, nor do they make any assumptions concerning the density or the variance within the clusters that may exist in the data set. They can detect clusters of different shapes and sizes from large amounts of data containing noise and outliers. OPTICS [14], on the other hand, does not produce a clustering of a data set explicitly, but instead creates an augmented ordering of the database representing its density-based clustering structure. This paper compares the two density-based clustering methods, DBSCAN [15] and OPTICS [14], on essential parameters such as distance type, noise ratio, run time of the simulations performed, and the number of clusters formed, all of which matter for a good clustering algorithm. We analyze the algorithms in terms of the parameters essential for creating meaningful clusters. Both algorithms are tested using synthetic data sets for low- as well as high-dimensional data.
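To make the DBSCAN baseline discussed in the survey concrete, here is a minimal, self-contained sketch of the algorithm; the eps and min_pts values used below are illustrative assumptions:

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN: grow clusters from core points; -1 marks noise."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue                         # not an unvisited core point
        visited[i] = True
        labels[i] = cluster
        queue = list(neighbors[i])
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster          # density-reachable: join cluster
            if not visited[j]:
                visited[j] = True
                if len(neighbors[j]) >= min_pts:
                    queue.extend(neighbors[j])   # j is core: keep expanding
        cluster += 1
    return labels
```

Note that neither the number of clusters nor any density assumption is supplied in advance, which is exactly the advantage the survey highlights; only the eps radius and min_pts density threshold are needed.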
Detection of Red Tomato on Plants using Image Processing Techniques
Tomatoes are the best-known fruit grown in greenhouses, and automated picking of them has recently been attempted. Tomato is a plant whose fruits do not ripen simultaneously, so it is necessary to develop an algorithm that distinguishes red tomatoes. In the current study, a new segmentation algorithm based on region growing was proposed for guiding a robot to pick red tomatoes. For this purpose, several colour images of tomato plants were acquired in a greenhouse. The colour images were captured under natural light, without any artificial lighting equipment. To recognize red tomatoes from non-red ones, the background of the images was first removed by subtracting the red and green components (R-G). Since tomatoes usually touch each other, separating touching tomatoes was the next step; here, the watershed algorithm was used, followed by an improvement process. Afterwards, red tomatoes were detected by the region-growing approach. Results obtained from testing the developed algorithm showed an encouraging accuracy (82.38%) for developing an expert system for online recognition of red tomatoes.
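The R-G background-subtraction step described above can be sketched as follows; the threshold value is an illustrative assumption, not the paper's calibrated value:

```python
import numpy as np

def remove_background(rgb, threshold=20):
    """Keep pixels where R - G exceeds a threshold (illustrative value).

    Ripe (red) fruit has R >> G, while leaves and stems have G >= R,
    so the R - G difference separates foreground from vegetation.
    """
    diff = rgb[..., 0].astype(int) - rgb[..., 1].astype(int)
    mask = diff > threshold
    out = np.zeros_like(rgb)
    out[mask] = rgb[mask]
    return mask, out
```

The resulting mask would then feed the watershed and region-growing stages that separate touching fruits and delineate each red tomato.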
Applied ontologies and standards for service robots
Service robotics is an emerging application area for human-centered technologies. The rise of household and personal assistance robots forecasts a human–robot collaborative society. One of the robotics community's major tasks is to streamline development trends and work on the harmonization of taxonomies and ontologies, along with the standardization of terms, interfaces and technologies. It is important to keep scientific progress and public understanding synchronous through efficient outreach and education. These efforts support collaboration among research groups and lead to widely accepted standards, beneficial for both manufacturers and users. This article describes the necessity of developing robotics ontologies and standards, focusing on past and current research efforts. In addition, the paper proposes a roadmap for service robotics ontology development. The IEEE Robotics & Automation Society is sponsoring the working group Ontologies for Robotics and Automation; the efforts of the working group are presented here, aiming to connect cutting-edge technology with the users of these services, the general public. © 2013 Elsevier B.V. All rights reserved.
An Exact Double-Oracle Algorithm for Zero-Sum Extensive-Form Games with Imperfect Information
Developing scalable solution algorithms is one of the central problems in computational game theory. We present an iterative algorithm for computing an exact Nash equilibrium for two-player zero-sum extensive-form games with imperfect information. Our approach combines two key elements: (1) the compact sequence-form representation of extensive-form games and (2) the algorithmic framework of double-oracle methods. The main idea of our algorithm is to restrict the game by allowing the players to play only selected sequences of available actions. After solving the restricted game, new sequences are added by finding best responses to the current solution using fast algorithms. We experimentally evaluate our algorithm on a set of games inspired by patrolling scenarios, board games, and card games. The results show significant runtime improvements in games admitting an equilibrium with small support, and substantial improvement in memory use even on games with large support. The improvement in memory use is particularly important because it allows our algorithm to solve much larger game instances than existing linear programming methods. Our main contributions include (1) a generic sequence-form double-oracle algorithm for solving zero-sum extensive-form games; (2) fast methods for maintaining a valid restricted game model when adding new sequences; (3) a search algorithm and pruning methods for computing best-response sequences; (4) theoretical guarantees about the convergence of the algorithm to a Nash equilibrium; and (5) experimental analysis of our algorithm on several games, including an approximate version of the algorithm.
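The double-oracle loop the abstract describes (solve a restricted game, then grow it with best responses) can be sketched for the simpler normal-form case. The sketch below assumes SciPy's linear-programming solver and rock-paper-scissors as a toy game; it omits the sequence-form machinery that is the paper's actual contribution:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Optimal mixed strategy for the row player of payoff matrix A (row maximizes)."""
    m, n = A.shape
    # Variables: x_1..x_m, v.  Maximize v  s.t.  sum_i x_i A[i, j] >= v for every column j.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # -x.A[:, j] + v <= 0
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                             # probabilities sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * m + [(None, None)])
    return res.x[:m], -res.fun

def double_oracle(A):
    rows, cols = [0], [0]                         # start from a tiny restricted game
    while True:
        sub = A[np.ix_(rows, cols)]
        x_sub, _ = solve_zero_sum(sub)
        y_sub, _ = solve_zero_sum(-sub.T)         # column player of A = row player of -A^T
        x = np.zeros(A.shape[0]); x[rows] = x_sub
        y = np.zeros(A.shape[1]); y[cols] = y_sub
        br_row = int(np.argmax(A @ y))            # best response to the column strategy
        br_col = int(np.argmin(x @ A))            # best response to the row strategy
        grew = False
        if br_row not in rows:
            rows.append(br_row); grew = True
        if br_col not in cols:
            cols.append(br_col); grew = True
        if not grew:                              # no improving response: equilibrium found
            return x, y, float(x @ A @ y)

rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])  # rock-paper-scissors
x, y, v = double_oracle(rps)
print(np.round(x, 3), np.round(y, 3), round(v, 3))
```

The restricted game grows only until neither player has an improving best response, which is why the method saves memory when an equilibrium with small support exists.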
Dyadic measures of the parent-child relationship during the transition to adolescence and glycemic control in children with type 1 diabetes.
To identify aspects of family behavior associated with glycemic control in youth with type 1 diabetes mellitus during the transition to adolescence, the authors studied 121 9- to 14-year-olds (M = 12.1 yrs) and their parents, who completed the Diabetes Family Conflict Scale (DFCS) and the Diabetes Family Responsibility Questionnaire (DFRQ). From the DFRQ, the authors derived 2 dyadic variables, frequency of agreement (exact parent and child concurrence about who was responsible for a task) and frequency of discordance (opposite parent and child reports about responsibility). The authors divided the cohort into Younger (n = 57, M = 10.6 yrs) and Older (n = 64, M = 13.5 yrs) groups. Family conflict was significantly related to glycemic control in the entire cohort and in both the Younger and Older groups. However, only in the Younger group was Agreement related to glycemic control, with higher Agreement associated with better glycemic control. Findings suggest that Agreement about sharing of diabetes responsibilities may be an important target for family-based interventions aiming to optimize glycemic control in preteen youth.
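The two dyadic DFRQ-derived variables are simple to compute from paired responses; the task list and response coding below are invented for illustration, not the actual DFRQ items:

```python
# Hypothetical responses: for each diabetes task, who is responsible
# according to the parent and according to the child ("parent"/"child"/"shared").
tasks = [
    ("parent", "parent"),   # exact concurrence -> agreement
    ("child",  "child"),    # agreement
    ("parent", "child"),    # opposite reports -> discordance
    ("shared", "parent"),   # neither agreement nor discordance
]

agree = sum(p == c for p, c in tasks)                       # exact concurrence
discord = sum({p, c} == {"parent", "child"} for p, c in tasks)  # opposite reports
freq_agree = agree / len(tasks)
freq_discord = discord / len(tasks)
print(freq_agree, freq_discord)   # → 0.5 0.25
```

Note that "shared vs. parent" counts toward neither measure: discordance is reserved for directly opposite attributions, matching the study's definition.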
NiftyNet: a deep-learning platform for medical imaging
BACKGROUND AND OBJECTIVES Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. METHODS The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. RESULTS We present three illustrative medical image analysis applications built using the NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses.
CONCLUSIONS The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications.
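The modular-pipeline idea (data loading, augmentation, network, loss as swappable stages) can be sketched abstractly; every component name below is an illustrative stand-in, not NiftyNet's actual API:

```python
from typing import Callable, List

def compose(stages: List[Callable]) -> Callable:
    """Chain pipeline stages left to right, so each stage's output feeds the next."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

# Illustrative stages (hypothetical, operating on a 1-D list standing in for an image):
normalize = lambda img: [(v - min(img)) / ((max(img) - min(img)) or 1) for v in img]
flip_augment = lambda img: img[::-1]                        # a trivial spatial augmentation
segment = lambda img: [1 if v > 0.5 else 0 for v in img]    # stand-in "network"

pipeline = compose([normalize, flip_augment, segment])
print(pipeline([0, 5, 10]))  # → [1, 0, 0]
```

The point of such composition is the one the abstract makes: swapping the final stage (segmentation vs. regression vs. generation) reuses the loading and augmentation stages unchanged.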
Interaction between the native and second language phonetic subsystems
The underlying premise of this study was that the two phonetic subsystems of a bilingual interact. The study tested the hypothesis that the vowels a bilingual produces in a second language (L2) may differ from vowels produced by monolingual native speakers of the L2 as the result of either of two mechanisms: phonetic category assimilation or phonetic category dissimilation. Earlier work revealed that native speakers of Italian identify English /e/ tokens as instances of the Italian /e/ category even though English /e/ is produced with more tongue movement than Italian /e/ is. Acoustic analyses in the present study examined /e/s produced by four groups of Italian–English bilinguals who differed according to their age of arrival in Canada from Italy (early versus late) and frequency of continued Italian use (low-L1-use versus high-L1-use). Early bilinguals who seldom used Italian (Early-low) were found to produce English /e/ with significantly more movement than native English (NE) speakers. However, both groups of late bilinguals (Late-low, Late-high) tended to produce /e/ with less movement than NE speakers. The exaggerated movement in /e/s produced by the Early-low group participants was attributed to the dissimilation of a phonetic category they formed for English /e/ from Italian /e/. The undershoot of movement in /e/s produced by late bilinguals, on the other hand, was attributed to their failure to establish a new category for English /e/, which led to the merger of the phonetic properties of English /e/ and Italian /e/. © 2002 Elsevier Science B.V. All rights reserved.
The role of temporal and dynamic signal components in the perception of syllable-final stop voicing by children and adults.
Adults whose native languages permit syllable-final obstruents, and show a vocalic length distinction based on the voicing of those obstruents, consistently weight vocalic duration strongly in their perceptual decisions about the voicing of final stops, at least in laboratory studies using synthetic speech. Children, on the other hand, generally disregard such signal properties in their speech perception, favoring formant transitions instead. These age-related differences led to the prediction that children learning English as a native language would weight vocalic duration less than adults, but weight syllable-final transitions more, in decisions about final-consonant voicing. This study tested that prediction. In the first experiment, adults and children (eight- and six-year-olds) labeled synthetic and natural CVC words with voiced or voiceless stops in final C position. Predictions were strictly supported for synthetic stimuli only. With natural stimuli it appeared that adults and children alike weighted syllable-offset transitions strongly in their voicing decisions. The predicted age-related difference in the weighting of vocalic duration was seen for these natural stimuli almost exclusively when syllable-final transitions signaled a voiced final stop. A second experiment with adults and children (seven- and five-year-olds) replicated these results with four new sets of natural stimuli. It was concluded that acoustic properties other than vocalic duration might play more important roles in voicing decisions for final stops than commonly asserted, sometimes even taking precedence over vocalic duration.
A Model Building Process for Identifying Actionable Static Analysis Alerts
Automated static analysis can identify potential source code anomalies early in the software process that could lead to field failures. However, only a small portion of static analysis alerts may be important to the developer (actionable). The remainder are false positives (unactionable). We propose a process for building false positive mitigation models to classify static analysis alerts as actionable or unactionable using machine learning techniques. For two open source projects, we identify sets of alert characteristics predictive of actionable and unactionable alerts out of 51 candidate characteristics. From these selected characteristics, we evaluate 15 machine learning algorithms, which build models to classify alerts. We were able to obtain 88-97% average accuracy for both projects in classifying alerts using three to 14 alert characteristics. Additionally, the set of selected alert characteristics and best models differed between the two projects, suggesting that false positive mitigation models should be project-specific.
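The select-characteristics-then-classify process can be sketched on synthetic data. The feature construction below is invented for illustration, and a simple threshold rule stands in for the paper's 15 machine learning algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# 5 candidate alert characteristics; only the first two actually predict actionability.
X = rng.normal(size=(n, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 1 = actionable, 0 = unactionable (false positive)

# Rank characteristics by class-mean separation (a crude stand-in for the
# paper's attribute-selection step over 51 candidates).
sep = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
best = np.argsort(sep)[::-1][:2]          # keep the two most discriminative

# A minimal classifier on the selected characteristics: threshold their sum.
pred = (X[:, best].sum(axis=1) > 0).astype(int)
accuracy = (pred == y).mean()
print(f"selected characteristics: {sorted(best.tolist())}, accuracy: {accuracy:.2f}")
```

The sketch reproduces the qualitative finding that a small subset of characteristics can suffice; on real alert data the selected subset, like the paper's models, would be project-specific.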
Low-Temperature Sintering of Nanoscale Silver Paste for Attaching Large-Area $({>}100~{\rm mm}^{2})$ Chips
A low-temperature sintering technique enabled by a nanoscale silver paste has been developed for attaching large-area (>100 mm2) semiconductor chips. This development addresses the need of power device or module manufacturers who face the challenge of replacing lead-based or lead-free solders for high-temperature applications. The solder-reflow technique for attaching large chips in power electronics poses serious reliability concerns at junction temperatures above 125°C. Unlike the soldering process, which relies on melting and solidification of solder alloys, the low-temperature sintering technique forms the joints by solid-state atomic diffusion at processing temperatures below 275°C, while the sintered joints retain the melting temperature of silver, 961°C. Recently, we showed that a nanoscale silver paste could be used to bond small chips at temperatures similar to soldering temperatures without any externally applied pressure. In this paper, we extend the use of the nanomaterial to attach large chips by introducing a low pressure of up to 5 MPa during the densification stage. Attachment of large chips to substrates with silver, gold, and copper metallization is demonstrated. Analyses of the sintered joints by scanning acoustic imaging and electron microscopy showed that the attachment layer had a uniform microstructure with micrometer-sized porosity, with the potential for high reliability in high-temperature applications.
DO ACTIONS SPEAK LOUDER THAN WORDS? PRESCHOOL CHILDREN'S USE OF THE VERBAL- NONVERBAL CONSISTENCY PRINCIPLE DURING INCONSISTENT COMMUNICATIONS
The present study investigated whether preschool children could use the conventional “actions speak louder than words” principle (or the verbal-nonverbal consistency principle) to process information in situations where verbal cues contradict nonverbal cues. Three-, 4-, and 5-year-olds were shown a video in which an actor drank a beverage and made a verbal statement (e.g., “I like it”) that was inconsistent with her emotional expression (e.g., frowning), and were asked whether the actor liked or disliked the beverage. If children used the verbal-nonverbal consistency principle, they should respond according to the information conveyed by the actor’s emotional expression. Results showed that when the message was more naturalistic, the majority of children tended to respond based on the actor’s verbal message. However, when the inconsistency between the verbal and nonverbal messages was made salient, more children appeared to rely on the nonverbal cue. Younger children’s reliance on verbal cues reported in previous research may be partly explained by the salience of the verbal message.
MicroRNA-based biotechnology for plant improvement.
MicroRNAs (miRNAs) are an extensive class of newly discovered endogenous small RNAs, which negatively regulate gene expression at the post-transcriptional level. With the application of next-generation deep sequencing and advanced bioinformatics, miRNA-related studies have been extended to non-model plant species, and the number of identified miRNAs has dramatically increased in the past years. miRNAs play a critical role in almost all biological and metabolic processes, and provide a unique strategy for plant improvement. Here, we first briefly review the discovery, history, and biogenesis of miRNAs, then focus on the application of miRNAs to plant breeding and on future directions. Increased plant biomass through control of plant development and phase change has been one achievement of miRNA-based biotechnology; plant tolerance to abiotic and biotic stress has also been significantly enhanced by regulating the expression of individual miRNAs. Both endogenous and artificial miRNAs may serve as important tools for plant improvement.
Association between corticosteroids and infection, sepsis, and infectious death in pediatric acute myeloid leukemia (AML): results from the Canadian infections in AML research group.
BACKGROUND Infection continues to be a major problem for children with acute myeloid leukemia (AML). Our objectives were to identify factors associated with infection, sepsis, and infectious death in children with newly diagnosed AML. METHODS We conducted a retrospective, population-based cohort study that included children ≤ 18 years of age with de novo, non-M3 AML diagnosed between January 1995 and December 2004, treated at 15 Canadian centers. Patients were monitored for infection from initiation of AML treatment until recovery from the last cycle of chemotherapy, conditioning for hematopoietic stem cell transplantation, relapse, persistent disease, or death (whichever occurred first). Trained research associates consistently abstracted all information from each site. RESULTS 341 patients were included. Median age was 7.1 years (interquartile range [IQR], 2.0-13.5) and 29 (8.5%) had Down syndrome. In sum, 26 (7.6%) experienced death as a first event. There were 1277 courses of chemotherapy administered, in which sterile-site microbiologically documented infection occurred in 313 courses (24.5%). Sepsis and infectious death occurred in 97 (7.6%) and 16 (1.3%) courses, respectively. The median duration of corticosteroid administration was 2 days per course (IQR, 0-6). In multiple regression analysis, duration of corticosteroid exposure was significantly associated with more microbiologically documented sterile-site infection, bacteremia, fungal infection, and sepsis. The only factor significantly associated with infectious death was days of corticosteroid exposure (odds ratio, 1.05; 95% confidence interval, 1.02-1.08; P = .001). CONCLUSIONS In pediatric AML, infection, sepsis, and infectious death were associated with duration of corticosteroid exposure. Corticosteroids should be avoided when possible for this population.
Cross Validation of Experts Versus Registration Methods for Target Localization in Deep Brain Stimulation
In the last five years, Deep Brain Stimulation (DBS) has become the most popular and effective surgical technique for the treatment of Parkinson's disease (PD). The subthalamic nucleus (STN) is the usual target when applying DBS. Unfortunately, the STN is in general not visible in common medical imaging modalities, so atlas-based segmentation is commonly used to locate it in the images. In this paper, we propose a scheme that allows us both to compare different registration algorithms and to evaluate their ability to locate the STN automatically. Using this scheme we can evaluate the expert variability against the error of the algorithms, and we demonstrate that automatic STN localization is possible and as accurate as the methods currently used.
Towards a Rigorous Definition of Information System Survivability
The computer systems that provide the information underpinnings for critical infrastructure applications, both military and civilian, are essential to the operation of those applications. Failure of the information systems can cause a major loss of service, and so their dependability is a major concern. Current facets of dependability, such as reliability and availability, do not address the needs of critical information systems adequately because they do not include the notion of degraded service as an explicit requirement. What is needed is a precise notion of what forms of degraded service are acceptable to users, under what circumstances each form is most useful, and the fraction of time such degraded service levels are acceptable. This concept is termed survivability. In this paper, we present the basis for a rigorous definition of survivability and an example of its use.
S3 - Guidelines on the treatment of psoriasis vulgaris (English version). Update.
Psoriasis vulgaris is a common and often chronic inflammatory skin disease. The incidence of psoriasis in Western industrialized countries ranges from 1.5% to 2%. Patients afflicted with severe psoriasis vulgaris may experience a significant reduction in quality of life. Despite the large variety of treatment options available, surveys have shown that patients still do not receive optimal treatment. To optimize the treatment of psoriasis in Germany, the Deutsche Dermatologische Gesellschaft (DDG) and the Berufsverband Deutscher Dermatologen (BVDD) have initiated a project to develop evidence-based guidelines for the management of psoriasis. They were first published in 2006 and updated in 2011. The guidelines focus on induction therapy in cases of mild, moderate and severe plaque-type psoriasis in adults, including systemic therapy, UV therapy and topical therapies. The therapeutic recommendations were developed based on the results of a systematic literature search and were finalized during a consensus meeting using structured consensus methods (nominal group process).
The ARTEMIS European driving cycles for measuring car pollutant emissions.
In the past 10 years, various studies have been undertaken to collect data on the actual driving of European cars and to derive representative real-world driving cycles. This paper provides a compilation and synthesis of that work. Within the European research project ARTEMIS, this work was used to derive a set of reference driving cycles. The main objectives were as follows: to derive a common set of reference real-world driving cycles to be used within the ARTEMIS project and in ongoing national campaigns of pollutant emission measurements, ensuring the compatibility and integration of all the resulting emission data in the European emission inventory systems; to ensure and validate the representativeness of the database and driving cycles by comparing and taking into account all the available data regarding driving conditions; and to capture, in three real-world driving cycles (urban, rural road and motorway), the diversity of the observed driving conditions, with sub-cycles allowing a disaggregation of the emissions according to more specific driving conditions (e.g. congested and free-flow urban). Such driving cycles present a real advantage as they are derived from a large database, using a methodology that was widely discussed and approved. In the main, these ARTEMIS driving cycles were designed using the available data, and the method of analysis was based to some extent on previous work, with specific additional steps implemented. The study includes characterisation of driving conditions and vehicle uses; starting conditions and gearbox use are also taken into account.
Voltage doubler application in isolated resonant converters
Two basic conditions for adopting a voltage doubler in an isolated DC/DC converter are analyzed. According to these conditions, the voltage doubler can be adopted in both the LC series resonant converter and the LLC multi-resonant converter. The operating characteristics of a voltage doubler in an isolated DC/DC converter are described in detail based on the analysis of a voltage-doubler-rectified LLC multi-resonant converter. Its advantages are as follows: the transformer structure, with a single winding on the secondary side, is simple; the voltage stress of each output capacitor is half the output voltage; and no extra voltage-balancing circuit is required. Two rectifier diodes are included at the output, whose voltage and current stresses equal the output voltage and current, respectively. The topology is therefore a preferable candidate for applications such as low-to-medium-power DC/DC converters with high output voltage. In addition, the principle of automatic voltage balancing for the output capacitors is presented, and the design of the key parameters is specified as well. The presented characteristics of the topology are verified by a prototype with a 500 V output, whose full-load efficiency reaches 92.3%.
Adult attachment styles, early relationships, antenatal attachment, and perceptions of infant temperament: A study of first-time mothers
We explored the relative contributions of first-time mothers' romantic attachment styles and early relations with their own mothers to the prediction of infant temperamental difficulty. A mediating role for the mother's attachment to the unborn baby was assumed. In a prospective longitudinal study of 115 mothers of healthy babies, a structural model was delineated according to a conception of maternally reported infant temperament as a reflection of basic aspects of the mother's personality and mother-infant relationships. Mothers' experiences with their own mothers as supportive and nonintrusive differentiated between securely and insecurely attached participants. Security of attachment was found to facilitate antenatal attachment and perceptions of the 4-month-old infant as easier. Findings indicate that the effects of the mothers' romantic attachment styles on their perceptions of temperamental difficulty are mediated by their antenatal attachment. Moreover, the pattern of findings obtained suggests a link among mothers' history of relationships, romantic attachment styles, and caregiving characteristics that is congruent with evolutionary theoretical assumptions. The basic tenets of attachment theory (Bowlby, 1969, 1973, 1979) are that humans have a propensity toward making strong and affectionate bonds to particular others, and that the mental representations of early experiences with caregivers guide subsequent significant interpersonal relations. These assumptions have spawned a great deal of research on the attachment of infants.
Accurate estimation of influenza epidemics using Google search data via ARGO.
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
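ARGO's core idea, an autoregression augmented with exogenous search-query regressors, can be illustrated with ordinary least squares on synthetic data. The coefficients below are invented, and the published model additionally uses many query terms and L1 regularization; this is only a structural sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300
# Synthetic flu-activity series with AR(2) structure plus one "search query" regressor.
search = rng.normal(size=T)
ili = np.zeros(T)
for t in range(2, T):
    ili[t] = 0.6 * ili[t - 1] - 0.2 * ili[t - 2] + 0.8 * search[t] + 0.1 * rng.normal()

# ARGO-style design matrix: autoregressive lags + the contemporaneous search term.
X = np.column_stack([ili[1:-1], ili[:-2], search[2:], np.ones(T - 2)])
target = ili[2:]
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
print(np.round(coef, 2))   # ≈ [0.6, -0.2, 0.8, 0.0]
```

The fit recovers both the seasonality-carrying autoregressive terms and the search-behavior coefficient, which is the combination the abstract credits for ARGO's accuracy.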
Do sleep problems or urinary incontinence predict falls in elderly women?
The objectives of this cross-sectional study were: (1) To determine if night-time sleep disturbance, daytime sleepiness, or urinary incontinence were associated with an increased risk of falling in older Australian women and (2) to explore the interrelationships between daytime sleepiness, night-time sleep problems, and urge incontinence. Participants were 782 ambulatory, community-dwelling women aged 75 to 86 recruited from within the existing Calcium Intake Fracture Outcome Study, in which women above 70 years were selected at random from the electoral roll. Daytime sleepiness, night-time sleep problems, urinary incontinence and falls data were collected via self-complete questionnaires. Thirty-five per cent of participants had fallen at least once in the past 12 months and 37.7% reported at least one night-time sleep problem. However, only 8.1% of the study sample experienced abnormal daytime sleepiness (Epworth Sleepiness Scale score > 10). Pure stress, pure urge, and mixed incontinence occurred in 36.8%, 3.7%, and 32.6% of participants respectively. In forward stepwise multiple logistic regression analysis, urge incontinence (OR 1.76; 95% CI 1.29 to 2.41) and abnormal daytime sleepiness (OR 2.05; 95% CI 1.21 to 3.49) were significant independent risk factors for falling after controlling for other falls risk factors (age, central nervous system drugs, cardiovascular drugs). As urge incontinence and abnormal daytime sleepiness were independently associated with an increased falls risk, effective management of these problems may reduce the risk of falling in older women.
Smart vehicle accident detection and alarming system using a smartphone
Vehicle accidents are a paramount threat to people's lives, causing serious injury or even death. Automotive companies have made much progress in alleviating this threat, but the probability of harm from an accident is still not negligible. Speeding is one of the elementary causes of vehicle accidents; external impact forces and changes in the vehicle's tilt angle relative to the road surface are also to blame for such mishaps. The sooner the emergency services learn of an accident, the more its effects can be mitigated. For this purpose, we developed an Android-based application that detects an accident situation and sends an emergency alert message to the nearest police station and health care center. The application is integrated with an external pressure sensor to measure the impact force on the vehicle body, and it measures speed and change of tilt angle with the Android phone's GPS and accelerometer sensors, respectively. By checking these conditions together, the application is also capable of reducing the rate of false alarms.
Epitaxial GaN Layers: Low Temperature Growth Using Laser Molecular Beam Epitaxy Technique and Characterizations
Generally, GaN growth by conventional techniques such as metal organic chemical vapor deposition (MOCVD) and plasma-assisted molecular beam epitaxy (PA-MBE) employs a high growth temperature (900–1000 °C in MOCVD and >720 °C in PA-MBE), at which the probability of forming unwanted alloys or compounds with the substrate at the interface is quite high. To minimize the formation of undesirable interfacial compounds, low-temperature growth is favorable. Here, we have explored the possibility of low-temperature growth of GaN layers on sapphire (0001) substrates using an ultra-high-vacuum laser-assisted molecular beam epitaxy (LMBE) system under different growth conditions. GaN epitaxial layers have been grown by laser-ablating liquid Ga metal and polycrystalline solid GaN targets in the presence of an active nitrogen environment supplied by a radio-frequency nitrogen plasma source. The structural and optical properties of the epitaxial GaN layers were characterized using reflection high-energy electron diffraction, high-resolution x-ray diffraction, atomic force microscopy, Raman spectroscopy, Rutherford backscattering spectrometry, secondary ion mass spectrometry and photoluminescence spectroscopy. The low-temperature LMBE-grown GaN layers showed highly crystalline structures, with a screw dislocation density in the range of 10^7 cm^-2 as calculated from the x-ray rocking measurements along the (0002) plane, which is the lowest value obtained so far using the LMBE technique. Strong near-band-edge photoluminescence emission has been obtained for the grown GaN layers at room temperature, with a relatively weak yellow-band emission. Our results indicate that the LMBE technique is capable of growing high-quality III-nitride crystalline films at a relatively lower growth temperature compared to conventional growth techniques.
Social cognition, artefacts, and stigmergy: A comparative analysis of theoretical frameworks for the understanding of artefact-mediated collaborative activity
Collective behaviour is often characterised by the so-called "coordination paradox": looking at individual ants, for example, they do not seem to cooperate or communicate explicitly, but nevertheless at the social level cooperative behaviour, such as nest building, emerges, apparently without any central coordination. In the case of social insects such emergent coordination has been explained by the theory of stigmergy, which describes how individuals can affect the behaviour of others (and their own) through artefacts, i.e. the products of their own activity (e.g., building material in the ants' case). Artefacts clearly also play a strong role in human collective behaviour, which has been emphasised, for example, by proponents of activity theory and distributed cognition. However, the relation between theories of situated/social cognition and theories of social insect behaviour has so far received relatively little attention in the cognitive science literature. This paper aims to take a step in this direction by comparing three theoretical frameworks for the study of cognition in the context of agent-environment interaction (activity theory, situated action, and distributed cognition) to each other and to the theory of stigmergy as a possible minimal common ground. The comparison focuses on what each of the four theories has to say about the role/nature of (a) the agents involved in collective behaviour, (b) their environment, (c) the collective activities addressed, and (d) the role that artefacts play in the interaction between agents and their environments, and in particular in the coordination
Alvimopan accelerates gastrointestinal recovery after radical cystectomy: a multicenter randomized placebo-controlled trial.
BACKGROUND Radical cystectomy (RC) for bladder cancer is frequently associated with delayed gastrointestinal (GI) recovery that prolongs hospital length of stay (LOS). OBJECTIVE To assess the efficacy of alvimopan to accelerate GI recovery after RC. DESIGN, SETTING, AND PARTICIPANTS We conducted a randomized double-blind placebo-controlled trial in patients undergoing RC and receiving postoperative intravenous patient-controlled opioid analgesics. INTERVENTION Oral alvimopan 12 mg (maximum: 15 inpatient doses) versus placebo. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS The two-component primary end point was time to upper (first tolerance of solid food) and lower (first bowel movement) GI recovery (GI-2). Time to discharge order written, postoperative LOS, postoperative ileus (POI)-related morbidity, opioid consumption, and adverse events (AEs) were evaluated. An independent adjudication of cardiovascular AEs was performed. RESULTS AND LIMITATIONS Patients were randomized to alvimopan (n=143) or placebo (n=137); 277 patients were included in the modified intention-to-treat population. The alvimopan cohort experienced quicker GI-2 recovery (5.5 vs 6.8 d; hazard ratio: 1.8; p<0.0001), shorter mean LOS (7.4 vs 10.1 d; p=0.0051), and fewer episodes of POI-related morbidity (8.4% vs 29.1%; p<0.001). The incidence of opioid consumption and AEs or serious AEs (SAEs) was comparable except for POI, which was lower in the alvimopan group (AEs: 7% vs 26%; SAEs: 5% vs 20%, respectively). Cardiovascular AEs occurred in 8.4% (alvimopan) and 15.3% (placebo) of patients (p=0.09). Generalizability may be limited due to the exclusion of epidural analgesia and the inclusion of mostly high-volume centers utilizing open laparotomy. CONCLUSIONS Alvimopan is a useful addition to a standardized care pathway in patients undergoing RC by accelerating GI recovery and shortening LOS, with a safety profile similar to placebo. 
PATIENT SUMMARY This study examined the effects of alvimopan on bowel recovery in patients undergoing radical cystectomy for bladder cancer. Patients receiving alvimopan experienced quicker bowel recovery and had a shorter hospital stay compared with those who received placebo, with comparable safety. TRIAL REGISTRATION ClinicalTrials.gov identifier NCT00708201.
CubeNet: Equivariance to 3D Rotation and Translation
3D Convolutional Neural Networks are sensitive to transformations applied to their input. This is a problem because a voxelized version of a 3D object and its rotated clone will look unrelated to each other after passing through to the last layer of a network. Instead, an idealized model would preserve a meaningful representation of the voxelized object while explaining the pose difference between the two inputs. An equivariant representation vector has two components: the invariant identity part, and a discernible encoding of the transformation. Models that cannot explain pose differences risk "diluting" the representation in pursuit of optimizing a classification or regression loss function. We introduce a Group Convolutional Neural Network with linear equivariance to translations and right-angle rotations in three dimensions. We call this network CubeNet, reflecting its cube-like symmetry. By construction, this network helps preserve a 3D shape's global and local signature as it is transformed through successive layers. We apply this network to a variety of 3D inference problems, achieving state-of-the-art results on the ModelNet10 classification challenge and comparable performance on the ISBI 2012 Connectome Segmentation Benchmark. To the best of our knowledge, this is the first 3D rotation equivariant CNN for voxel representations.
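As a toy illustration of the symmetry group the abstract refers to (not the CubeNet architecture itself), the following sketch represents a voxelized shape as a set of integer coordinates, applies a right-angle rotation about the z-axis, and checks that a simple rotation-invariant readout is unchanged; all names are illustrative:

```python
def rot90_z(voxels):
    # Rotate an occupancy set of integer (x, y, z) voxels by 90 degrees
    # about the z-axis: (x, y, z) -> (-y, x, z).
    return {(-y, x, z) for (x, y, z) in voxels}

def invariant_readout(voxels):
    # A toy rotation-invariant descriptor: sorted multiset of squared radii.
    # An equivariant network would preserve such identity information while
    # separately encoding the pose.
    return sorted(x * x + y * y + z * z for (x, y, z) in voxels)

shape = {(1, 0, 0), (2, 0, 0), (0, 1, 1)}
assert invariant_readout(shape) == invariant_readout(rot90_z(shape))
```

Applying the rotation four times returns the original shape, which is the cyclic-group structure the "cube-like symmetry" in the abstract generalizes to 3D.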
Gating of somatosensory evoked potentials during voluntary movement of the lower limb in man
Somatosensory evoked potentials (SEPs) evoked by stimulation of the tibial nerve (TN) in the popliteal fossa, the sural nerve (Sur) at the lateral malleolus, and an Achilles tendon (Achilles) tap were recorded before and during voluntary plantarflexion, dorsiflexion, and cocontraction of the ipsi- and contralateral foot in normal subjects. Suppression (gating) of the TN-SEP began around 60 ms before the onset of electromyographic activity (EMG), and became maximal 50–100 ms after the onset of EMG. Similar gating was observed for the SEP evoked by activation of muscle afferents (Achilles) and cutaneous afferents (Sur). The TN-SEP was similarly depressed at the onset of a plantarflexion as at the onset of dorsiflexion. A depression, although much smaller, was also observed at the onset of movement of the contralateral limb. The depression of the TN-SEP after the onset of EMG decreased when fast-conducting afferents were blocked by ischemia below the knee joint. The TN-SEP was equally depressed during tonic dorsiflexion, plantarflexion, and cocontraction of dorsi- and plantarflexors. The TN-SEP was depressed for up to 300 ms when preceded by stimulation of Sur or a biceps femoris tendon tap. Gating of lower limb SEPs thus appears to have both central and peripheral components, neither of which seems to be specific for the muscle being contracted or the sensory afferents being stimulated. We therefore urge caution when drawing functional conclusions regarding movement-specific modulation of afferent inflow to the somatosensory cortex based on observations of gating of lower limb SEPs.
Students’ Needs, Teachers’ Support, and Motivation for Doing Homework: A Cross-Sectional Study
Self-determination theory provided the theoretical framework for a cross-sectional investigation of elementary and junior high school students' autonomous motivation for homework. More specifically, the study focused on the role of teachers' support of students' psychological needs in students' motivation for homework in the two school systems. The study also investigated the contribution of a match between teachers' support and students' expressed level of psychological needs to autonomous motivation for homework. The findings indicated that teacher support partially mediated the difference in autonomous motivation for homework between students in the two school systems. In addition, the findings suggested that although students with different levels of expressed needs may perceive different levels of teacher support, and teacher support might be more important for students who express a higher level of needs, perceived teacher support of psychological needs was important for students' adaptive motivation for homework, irrespective of their expressed level of needs.
An empirical study of low-power wireless
We present empirical measurements of the packet delivery performance of the latest sensor platforms: MicaZ and Telos motes. In this article, we present observations that have implications for a set of common assumptions protocol designers make when designing sensornet protocols, specifically MAC and network-layer protocols. We first distill these common assumptions into a conceptual model and show how our observations support or dispute these assumptions. We also present case studies of protocols that do not make these assumptions. Understanding the implications of these observations for the conceptual model can improve future protocol designs.
How low does ethical leadership flow? Test of a trickle-down model
This research examines the relationships between top management and supervisory ethical leadership and group-level outcomes (e.g., deviance, OCB) and suggests that ethical leadership flows from one organizational level to the next. Drawing on social learning theory [Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice-Hall; Bandura, A. (1986). Social foundations of thought and action. Englewood Cliffs, NJ: Prentice-Hall] and social exchange theory [Blau, P. (1964). Exchange and power in social life. New York: John Wiley], the results support our theoretical model using a sample of 904 employees and 195 managers in 195 departments. We find a direct negative relationship between both top management and supervisory ethical leadership and group-level deviance, and a positive relationship with group-level OCB. Finally, consistent with the proposed trickle-down model, the effects of top management ethical leadership on group-level deviance and OCB are mediated by supervisory ethical leadership.
Learning Global Term Weights for Content-based Recommender Systems
Recommender systems typically leverage two types of signals to effectively recommend items to users: user activities and content matching between user and item profiles, and recommendation models in the literature are usually categorized into collaborative filtering models, content-based models and hybrid models. In practice, when rich profiles about users and items are available but user activities are sparse (cold-start), effective content matching signals become much more important to the relevance of the recommendation. The de facto method to measure similarity between two pieces of text is to compute the cosine similarity of the two bags of words, with each word weighted by TF (term frequency within the document) × IDF (inverted document frequency of the word within the corpus). In a general sense, TF can represent any local weighting scheme of the word within each document, and IDF can represent any global weighting scheme of the word across the corpus. In this paper, we focus on the latter, i.e., optimizing the global term weights for a particular recommendation domain by leveraging supervised approaches. The intuition is that some frequent words (lower IDF, e.g. "database") can be essential and predictive for relevant recommendation, while some rare words (higher IDF, e.g. the name of a small company) could have less predictive power. Given plenty of observed activities between users and items as training data, we should be able to learn better domain-specific global term weights, which can further improve the relevance of recommendation. We propose a unified method that can simultaneously learn the weights of multiple content matching signals as well as global term weights for specific recommendation tasks. Our method is efficient enough to handle large-scale training data generated by production recommender systems, and experiments on LinkedIn job recommendation data justify the effectiveness of our approach.
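As context for the global term weights discussed above, here is a minimal sketch of the classic TF × IDF cosine similarity that the paper takes as its starting point; a learned global weighting would simply replace the IDF dictionary. Function and variable names are ours, not the paper's:

```python
import math
from collections import Counter

def idf_weights(corpus):
    # corpus: list of token lists; returns {term: idf}, the classic
    # unsupervised global weighting scheme.
    n = len(corpus)
    df = Counter(t for doc in corpus for t in set(doc))
    return {t: math.log(n / df[t]) for t in df}

def tfidf_cosine(doc_a, doc_b, gw):
    # Cosine similarity of two bags of words, each term scaled by a
    # global weight gw (classic IDF, or a supervised learned weight).
    va, vb = Counter(doc_a), Counter(doc_b)
    dot = sum(va[t] * vb[t] * gw.get(t, 0.0) ** 2 for t in set(va) | set(vb))
    na = math.sqrt(sum((va[t] * gw.get(t, 0.0)) ** 2 for t in va))
    nb = math.sqrt(sum((vb[t] * gw.get(t, 0.0)) ** 2 for t in vb))
    return dot / (na * nb) if na and nb else 0.0
```

Because the global weight enters only through the `gw` dictionary, swapping in domain-specific learned weights leaves the matching pipeline unchanged, which is the separation the paper exploits.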
Induction coil sensors—a review
Induction coil sensors (also called search coils, pickup coils or magnetic loop sensors) are described. The design methods for air coils and ferromagnetic core coils are compared and summarized. The frequency properties of the coil sensors are analyzed and various methods of output signal processing are presented. Special kinds of induction sensors, such as the Rogowski coil, gradiometer sensors, vibrating coil sensors, tangential field sensors and needle probes, are described. The application of the coil sensor as a magnetic antenna is presented. Index Terms: coil sensor, magnetic field measurement, search coil, Rogowski coil, vibrating coil, gradiometer, integrator circuit, H-coil sensor, needle probe method.
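For readers who want the governing relation behind these sensors, a small sketch of Faraday's law for an air-core search coil in a uniform field follows. This is a textbook simplification, not a formula taken from the review:

```python
import math

def coil_emf(n_turns, area_m2, db_dt):
    # Faraday's law for an air-core search coil in a uniform field
    # normal to the coil plane: e = -N * A * dB/dt.
    return -n_turns * area_m2 * db_dt

def peak_emf_sinusoidal(n_turns, area_m2, b0_tesla, freq_hz):
    # For B(t) = B0 * sin(2*pi*f*t), the peak induced EMF is
    # N * A * B0 * 2*pi*f, which is why sensitivity rises with frequency.
    return n_turns * area_m2 * b0_tesla * 2 * math.pi * freq_hz
```

The linear growth of the output with frequency is what motivates the integrator circuits mentioned in the index terms: integrating the EMF recovers a signal proportional to B itself.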
The immune system's role in sepsis progression, resolution, and long-term outcome.
Sepsis occurs when an infection exceeds local tissue containment and induces a series of dysregulated physiologic responses that result in organ dysfunction. A subset of patients with sepsis progress to septic shock, defined by profound circulatory, cellular, and metabolic abnormalities, and associated with greater mortality. Historically, sepsis-induced organ dysfunction and lethality were attributed to the complex interplay between the initial inflammatory and later anti-inflammatory responses. With advances in intensive care medicine and goal-directed interventions, early 30-day sepsis mortality has diminished, only to steadily escalate long after "recovery" from acute events. As so many sepsis survivors succumb later to persistent, recurrent, nosocomial, and secondary infections, many investigators have turned their attention to the long-term sepsis-induced alterations in cellular immune function. Sepsis clearly alters the innate and adaptive immune responses for sustained periods of time after clinical recovery, with immune suppression, chronic inflammation, and persistence of bacterial infections representing such alterations. Given that sepsis-associated immune cell defects correlate with long-term mortality, more investigations have centered on the potential for immune modulatory therapy to improve long-term patient outcomes. These efforts are focused on more clearly defining and effectively reversing the persistent immune cell dysfunction associated with long-term sepsis mortality.
Deep Reinforcement Learning for Green Security Games with Real-Time Information
Green Security Games (GSGs) have been proposed and applied to optimize patrols conducted by law enforcement agencies in green security domains such as combating poaching, illegal logging and overfishing. However, real-time information such as footprints and agents’ subsequent actions upon receiving the information, e.g., rangers following the footprints to chase the poacher, have been neglected in previous work. To fill the gap, we first propose a new game model GSG-I which augments GSGs with sequential movement and the vital element of real-time information. Second, we design a novel deep reinforcement learning-based algorithm, DeDOL, to compute a patrolling strategy that adapts to the real-time information against a best-responding attacker. DeDOL is built upon the double oracle framework and the policy-space response oracle, solving a restricted game and iteratively adding best response strategies to it through training deep Q-networks. Exploring the game structure, DeDOL uses domain-specific heuristic strategies as initial strategies and constructs several local modes for efficient and parallelized training. To our knowledge, this is the first attempt to use Deep Q-Learning for security games.
Autobank: a semi-automatic annotation tool for developing deep Minimalist Grammar treebanks
This paper presents Autobank, a prototype tool for constructing a wide-coverage Minimalist Grammar (MG) (Stabler, 1997), and semi-automatically converting the Penn Treebank (PTB) into a deep Minimalist treebank. The front end of the tool is a graphical user interface which facilitates the rapid development of a seed set of MG trees via manual reannotation of PTB preterminals with MG lexical categories. The system then extracts various dependency mappings between the source and target trees, and uses these in concert with a non-statistical MG parser to automatically reannotate the rest of the corpus. Autobank thus enables deep treebank conversions (and subsequent modifications) without the need for complex transduction algorithms accompanied by cascades of ad hoc rules; instead, the locus of human effort falls directly on the task of grammar construction itself.
Feature warping for robust speaker verification
We propose a novel feature mapping approach that is robust to channel mismatch, additive noise and, to some extent, nonlinear effects attributed to handset transducers. These adverse effects can distort the short-term distribution of the speech features. Some methods have addressed this issue by conditioning the variance of the distribution, but not to the extent of conforming the speech statistics to a target distribution. The proposed target mapping method warps the distribution of a cepstral feature stream to a standardised distribution over a specified time interval. We evaluate a number of enhancement methods for speaker verification and compare them against a Gaussian target mapping implementation. Results indicate improvements of the warping technique over a number of methods such as Cepstral Mean Subtraction (CMS), modulation spectrum processing, and short-term windowed CMS and variance normalisation. This technique is a suitable feature post-processing method that may be combined with other techniques to enhance speaker recognition robustness under adverse conditions.
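A minimal sketch of the rank-based warping idea described above: within a sliding window, map each cepstral value's empirical rank to the corresponding standard-normal quantile. This is our own simplification of the technique (one coefficient stream, one window), using the stdlib `statistics.NormalDist`:

```python
from statistics import NormalDist

def warp_to_gaussian(window):
    # Warp one cepstral-coefficient stream over a window to a standard
    # normal target: the value with rank r (0-based) among n samples is
    # mapped to the normal quantile at probability (r + 0.5) / n.
    n = len(window)
    nd = NormalDist()  # standard normal, mean 0, sigma 1
    order = sorted(range(n), key=lambda i: window[i])
    warped = [0.0] * n
    for rank, idx in enumerate(order):
        warped[idx] = nd.inv_cdf((rank + 0.5) / n)
    return warped
```

Because only ranks matter, any monotonic distortion of the feature stream (e.g., a channel-induced scaling) produces the same warped output, which is the source of the robustness claimed in the abstract.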
Automated Abdominal Multi-Organ Segmentation With Subject-Specific Atlas Generation
A robust automated segmentation of abdominal organs can be crucial for computer aided diagnosis and laparoscopic surgery assistance. Many existing methods are specialized to the segmentation of individual organs and struggle to deal with the variability of the shape and position of abdominal organs. We present a general, fully-automated method for multi-organ segmentation of abdominal computed tomography (CT) scans. The method is based on a hierarchical atlas registration and weighting scheme that generates target-specific priors from an atlas database by combining aspects of multi-atlas registration and patch-based segmentation, two widely used methods in brain segmentation. The final segmentation is obtained by applying an automatically learned intensity model in a graph-cuts optimization step, incorporating high-level spatial knowledge. The proposed approach can deal with high inter-subject variation while being flexible enough to be applied to different organs. We have evaluated the segmentation on a database of 150 manually segmented CT images. The achieved results compare well to those of state-of-the-art methods, which are usually tailored to more specific questions, with Dice overlap values of 94%, 93%, 70%, and 92% for liver, kidneys, pancreas, and spleen, respectively.
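The Dice overlap used for evaluation above has a simple definition; here is a small sketch over binary masks represented as sets of voxel indices (an illustrative representation, not the paper's implementation):

```python
def dice(mask_a, mask_b):
    # Dice coefficient between two binary masks given as sets of voxel
    # indices: 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (disjoint)
    # to 1 (identical). Two empty masks are treated as a perfect match.
    inter = len(mask_a & mask_b)
    total = len(mask_a) + len(mask_b)
    return 2.0 * inter / total if total else 1.0
```

For example, masks {1, 2, 3, 4} and {3, 4, 5, 6} share two of eight total voxels, giving a Dice score of 0.5.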
Automatic Nuclei Segmentation in H&E Stained Breast Cancer Histopathology Images
The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8.
Results in the operative treatment of elderly patients with intracranial meningioma
With life expectancy in the industrial nations increasing during recent years, the number of patients older than 70 years with intracranial tumours and, especially, meningiomas is rising. To evaluate the indications for operative treatment, we reviewed 66 patients older than 70 years who were operated upon for intracranial meningioma in our department between 1991 and 1997. The mean age was 75 years; the oldest patients were 86 years old. Thirteen patients with recurrent meningiomas were operated upon. The mortality rate was 7.6%. Neurological symptoms improved in 38 patients (57.6%), were unchanged in 11 (16.6%), and deteriorated in 12 (18.2%). Patients with recurrent meningiomas seem to have a higher operative risk and their outcome is worse than after a primary operation. In general, good postoperative results were seen in patients with few concomitant diseases, small meningiomas, little edema, short operation times, and an accessible location (convexity rather than skull base). Age in general is not a contraindication for operation. In cases of incidental findings of small meningiomas, we recommend observation and MRI follow-up. Symptomatic meningiomas should be removed whenever there is an acceptable risk from an internal medicine or anaesthesiological point of view.
The SI! Program for Cardiovascular Health Promotion in Early Childhood: A Cluster-Randomized Trial.
BACKGROUND The preschool years offer a unique window of opportunity to instill healthy life-style behaviors and promote cardiovascular health. OBJECTIVES This study sought to evaluate the effect of a 3-year multidimensional school-based intervention to improve life-style-related behaviors. METHODS We performed a cluster-randomized controlled intervention trial involving 24 public schools in Madrid, Spain, that were assigned to either the SI! Program intervention or the usual curriculum and followed for 3 years. The SI! Program aimed to instill and develop healthy behaviors in relation to diet, physical activity, and understanding how the human body and heart work. The primary outcome was change in the overall knowledge, attitudes, and habits (KAH) score (range 0 to 80). The intervention's effect on adiposity markers was also evaluated. RESULTS A total of 2,062 children from 3 to 5 years of age were randomized. After 3 years of follow-up, the overall KAH score was 4.9% higher in children in the intervention group compared with the control group (21.7 vs. 16.4; p < 0.001). A peak effect was observed at the second year (improvement 7.1% higher than in the control group; p < 0.001). Physical activity was the main driver of the change in KAH at all evaluation times. Children in the intervention group for 2 years and 1 year showed greater improvement than control subjects (5.9%; p < 0.001 and 2.9%; p = 0.002, respectively). After 3 years, the intervention group showed a higher probability than the control group of reducing the triceps skinfold z-score by at least 0.1 (hazard ratio: 1.40, 95% confidence interval: 1.04 to 1.89; p = 0.027). CONCLUSIONS The SI! Program is an effective strategy for instilling healthy habits among preschoolers, translating into a beneficial effect on adiposity, with maximal effect when started at the earliest age and maintained over 3 years. Wider adoption may have a meaningful effect on cardiovascular health promotion. 
(Evaluation of the Program SI! for Preschool Education: A School-Based Randomized Controlled Trial [Preschool_PSI!]; NCT01579708).
Mitigating IoT security threats with a trusted Network element
Securing the growing number of IoT devices is a challenge both for the end-users bringing IoT devices into their homes and for the corporations and industries exposing these devices to the Internet as part of their services or operations. The exposure of these devices, often poorly configured and secured, offers malicious actors easy access to the private information of their users, or the potential to use the devices in further activities, e.g., attacks on other devices via Distributed Denial of Service. This paper discusses the current security challenges of IoT devices and proposes a solution to secure these devices via a trusted Network Edge Device (NED). The NED offloads the security countermeasures of the individual devices into the trusted network elements. The major benefit of this approach is that the system can protect the IoT devices with user-defined policies, which can be applied to all devices regardless of the computing-resource constraints of the IoT tags. An additional benefit is the possibility to manage the countermeasures of multiple IoT devices/gateways at once via a shared interface, thus largely avoiding per-device maintenance operations.
On the Exploitation of User Personality in Recommender Systems
In this paper we revise the state of the art on personality-aware recommender systems, identifying main research trends and achievements up to date, and discussing open issues that may be addressed in the future.
A community-based feasibility study using wheat bran fiber supplementation to lower colon cancer risk.
METHODS In this feasibility study, free-living older adults (n = 180; mean age = 67.5 years) were randomly assigned to one of three levels of a 3-month standardized compliance enhancement program. RESULTS Regarding subject compliance with the 18 g/day wheat bran fiber supplement, the high compliance enhancement group had a superior regimen compliance rate (88%) versus the medium and low groups (66% and 29%, respectively) (P = 0.01), with similar attrition rates. CONCLUSION No significant gastrointestinal side effects or changes in body weight were reported. For similar efficacy, the comprehensive compliance enhancement group had the greatest cost effectiveness.
A novel randomly textured phosphor structure for highly efficient white light-emitting diodes
We have successfully demonstrated enhanced luminous flux and lumen efficiency in white light-emitting diodes with a randomly textured phosphor structure. The textured phosphor structure was fabricated by a simple imprinting technique, which does not need an expensive dry-etching machine or a complex pattern definition. The textured phosphor structure increases luminous flux by 5.4% and 2.5% at a driving current of 120 mA, compared with the flat phosphor and half-spherical lens structures, respectively. The increase is due to scattering by the textured surface and by the phosphor particles, leading to enhanced utilization efficiency of the blue light. Furthermore, the textured phosphor structure has a larger view angle at full width at half maximum (87°) than the reference LEDs.
XGAN: Unsupervised Image-to-Image Translation for many-to-many Mappings
Image translation refers to the task of mapping images from one visual domain to another. Given two unpaired collections of images, we aim to learn a mapping between the corpus-level style of each collection, while preserving semantic content shared across the two domains. We introduce XGAN, a dual adversarial auto-encoder, which captures a shared representation of the common domain semantic content in an unsupervised way, while jointly learning the domain-to-domain image translations in both directions. We exploit ideas from the domain adaptation literature and define a semantic consistency loss which encourages the learned embedding to preserve semantics shared across domains. We report promising qualitative results for the task of face-to-cartoon translation. The cartoon dataset we collected for this purpose, "CartoonSet", is also publicly available as a new benchmark for semantic style transfer at https://google.github.io/cartoonset/index.html.
Cascades of two-pole-two-zero asymmetric resonators are good models of peripheral auditory function.
A cascade of two-pole-two-zero filter stages is a good model of the auditory periphery in two distinct ways. First, in the form of the pole-zero filter cascade, it acts as an auditory filter model that provides an excellent fit to data on human detection of tones in masking noise, with fewer fitting parameters than previously reported filter models such as the roex and gammachirp models. Second, when extended to the form of the cascade of asymmetric resonators with fast-acting compression, it serves as an efficient front-end filterbank for machine-hearing applications, including dynamic nonlinear effects such as fast wide-dynamic-range compression. In their underlying linear approximations, these filters are described by their poles and zeros, that is, by rational transfer functions, which makes them simple to implement in analog or digital domains. Other advantages in these models derive from the close connection of the filter-cascade architecture to wave propagation in the cochlea. These models also reflect the automatic-gain-control function of the auditory system and can maintain approximately constant impulse-response zero-crossing times as the level-dependent parameters change.
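To make the filter-cascade architecture concrete, here is a minimal direct-form two-pole-two-zero (biquad) stage and a cascade of such stages. The coefficients are placeholders, not the fitted auditory-model parameters from the paper:

```python
def biquad(x, b, a):
    # Direct-form I two-pole-two-zero stage with coefficients
    # b = (b0, b1, b2) for the zeros and a = (1, a1, a2) for the poles:
    # y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y

def cascade(x, stages):
    # Feed the output of each stage into the next, mirroring how the
    # filter cascade models wave propagation along the cochlea.
    for b, a in stages:
        x = biquad(x, b, a)
    return x
```

Because each stage is a rational transfer function defined by its poles and zeros, the cascade is equally simple to realize in analog or digital form, which is one of the advantages the abstract notes.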
Distributed Data Streams
DEFINITION A majority of today's data is constantly evolving and fundamentally distributed in nature. Data for almost any large-scale data-management task is continuously collected over a wide area, and at a much greater rate than ever before. Compared to traditional, centralized stream processing, querying such large-scale, evolving data collections poses new challenges, due mainly to the physical distribution of the streaming data and the communication constraints of the underlying network. Distributed stream processing algorithms should guarantee efficiency not only in terms of space and processing time (as conventional streaming techniques do), but also in terms of the communication load imposed on the network infrastructure.
Seeds and oil of the Styrian oil pumpkin: Components and biological activities
Cucurbita pepo subsp. pepo var. Styriaca is a phylogenetically young member of the Cucurbita spp. since the mutation leading to dark green seeds with stunted outer hulls arose only in the 19th century. This mutation defined the so-called Styrian oil pumpkin and facilitated the production of Styrian pumpkin seed oil. It is a regional specialty oil in the south-eastern part of Europe. In this article, we describe the production and economic value of this edible oil as well as its composition on a molecular basis, including fatty acids, vitamins, phytosterols, minerals, polyphenols, and the compounds responsible for its pigments, taste and flavor. We also describe contaminants of Styrian pumpkin seed oil and the most relevant field pests of the Styrian oil pumpkin plant. Finally, we review the putative beneficial health effects of Styrian oil pumpkin seeds and of their products.
Discriminant validity of well-being measures.
The convergent and discriminant validities of well-being concepts were examined using multitrait-multimethod matrix analyses (D. T. Campbell & D. W. Fiske, 1959) on 3 sets of data. In Study 1, participants completed measures of life satisfaction, positive affect, negative affect, self-esteem, and optimism on 2 occasions 4 weeks apart and also obtained 3 informant ratings. In Study 2, participants completed each of the 5 measures on 2 occasions 2 years apart and collected informant reports at Time 2. In Study 3, participants completed 2 different scales for each of the 5 constructs. Analyses showed that (a) life satisfaction is discriminable from positive and negative affect, (b) positive affect is discriminable from negative affect, (c) life satisfaction is discriminable from optimism and self-esteem, and (d) optimism is separable from trait measures of negative affect.
An Integrated Model for Effective Saliency Prediction
In this paper, we propose an integrated model of both semantic-aware and contrast-aware saliency (SCA), combining bottom-up and top-down cues for effective eye fixation prediction. The proposed SCA model contains two pathways. The first pathway is a deep neural network customized for semantic-aware saliency, which aims to capture the semantic information in images, especially the presence of meaningful objects and object parts. The second pathway is based on on-line feature learning and information maximization, which learns an adaptive representation for the input and discovers the high-contrast salient patterns within the image context. The two pathways characterize long-term and short-term attention cues, respectively, and are integrated using maxima normalization. Experimental results on artificial images and several benchmark datasets demonstrate the superior performance and better plausibility of the proposed model over both classic approaches and recent deep models.
Academic and emotional functioning in early adolescence: longitudinal relations, patterns, and prediction by experience in middle school.
Adopting a motivational perspective on adolescent development, these two companion studies examined the longitudinal relations between early adolescents' school motivation (competence beliefs and values), achievement, emotional functioning (depressive symptoms and anger), and middle school perceptions using both variable- and person-centered analytic techniques. Data were collected from 1041 adolescents and their parents at the beginning of seventh grade and the end of eighth grade in middle school. Controlling for demographic factors, regression analyses in Study 1 showed reciprocal relations between school motivation and positive emotional functioning over time. Furthermore, adolescents' perceptions of the middle school learning environment (support for competence and autonomy, quality of relationships with teachers) predicted their eighth grade motivation, achievement, and emotional functioning after accounting for demographic and prior adjustment measures. Cluster analyses in Study 2 revealed several different patterns of school functioning and emotional functioning during seventh grade that were stable over 2 years and that were predictably related to adolescents' reports of their middle school environment. Discussion focuses on the developmental significance of schooling for multiple adjustment outcomes during adolescence.
Streaming submodular maximization: massive data summarization on the fly
How can one summarize a massive data set "on the fly", i.e., without even having seen it in its entirety? In this paper, we address the problem of extracting representative elements from a large stream of data. That is, we would like to select a subset of, say, k data points from the stream that are most representative according to some objective function. Many natural notions of "representativeness" satisfy submodularity, an intuitive notion of diminishing returns. Thus, such problems can be reduced to maximizing a submodular set function subject to a cardinality constraint. Classical approaches to submodular maximization require full access to the data set. We develop the first efficient streaming algorithm with a constant-factor (1/2 - ε) approximation guarantee to the optimum solution, requiring only a single pass through the data and memory independent of the data size. In our experiments, we extensively evaluate the effectiveness of our approach on several applications, including training large-scale kernel methods and exemplar-based clustering, on millions of data points. We observe that our streaming method, while achieving practically the same utility value, runs about 100 times faster than previous work.
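The paper's algorithm lazily maintains many candidate thresholds; as an illustration, a simplified single-threshold variant, assuming an estimate of the optimal value OPT is known in advance (the names below are hypothetical, not the paper's), can be sketched as:

```python
def stream_select(stream, k, f, opt_estimate):
    """One-pass threshold greedy for max f(S) s.t. |S| <= k, f submodular.
    An element is kept only if its marginal gain clears the threshold
    needed to stay on track for opt_estimate / 2. Memory is O(k),
    independent of the stream length."""
    S = []
    for e in stream:
        if len(S) == k:
            break
        gain = f(S + [e]) - f(S)
        if gain >= (opt_estimate / 2 - f(S)) / (k - len(S)):
            S.append(e)
    return S

# Toy submodular objective: coverage (number of distinct items covered).
def coverage(sets):
    return len(set().union(*sets)) if sets else 0

stream = [frozenset({1, 2}), frozenset({2, 3}),
          frozenset({4}), frozenset({1, 5, 6})]
summary = stream_select(stream, k=2, f=coverage, opt_estimate=5)
```

The full method removes the need to know OPT by running this rule in parallel for a geometric grid of guesses and returning the best resulting set.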
Histological study on the effects of microablative fractional CO2 laser on atrophic vaginal tissue: an ex vivo study.
OBJECTIVE Microablative fractional CO2 laser has been proven to determine tissue remodeling with neoformation of collagen and elastic fibers on atrophic skin. The aim of our study is to evaluate the effects of microablative fractional CO2 laser on postmenopausal women with vulvovaginal atrophy using an ex vivo model. METHODS This is a prospective ex vivo cohort trial. Consecutive postmenopausal women with vulvovaginal atrophy undergoing surgery for pelvic organ prolapse were enrolled. After fascial plication, the redundant vaginal edge on one side was treated with CO2 laser (SmartXide2; DEKA Laser, Florence, Italy). Five different CO2 laser setup protocols were tested. The contralateral part of the vaginal wall was always used as control. Excess vaginal tissue was trimmed and sent for histological evaluation to compare treated and nontreated tissues. Microscopic and ultrastructural aspects of the collagenic and elastic components of the matrix were studied, and a specific image analysis with computerized morphometry was performed. We also considered the fine cytological aspects of connective tissue proper cells, particularly fibroblasts. RESULTS During the study period, five women were enrolled, and 10 vaginal specimens were finally retrieved. Four different settings of CO2 laser were compared. Protocols were tested twice each to confirm histological findings. Treatment protocols were compared according to histological findings, particularly in maximal depth and connective changes achieved. All procedures were uneventful for participants. CONCLUSIONS This study shows that microablative fractional CO2 laser can produce a remodeling of vaginal connective tissue without causing damage to surrounding tissue.
Webs of Five-Branes and N = 2 Superconformal Field Theories
We describe configurations of 5-branes and 7-branes which realize, when compactified on a circle, new isolated four-dimensional N = 2 superconformal field theories recently constructed by Gaiotto. Our diagrammatic method allows us to easily count the dimensions of the Coulomb and Higgs branches, with the help of a generalized s-rule. We furthermore show that superconformal field theories with E6, E7 and E8 flavor symmetry can be analyzed in a uniform manner in this framework; in particular, we realize these theories at infinitely strongly coupled limits of quiver theories with SU gauge groups.
Revisiting epithelial‐mesenchymal transition in cancer metastasis: the connection between epithelial plasticity and stemness
Epithelial-mesenchymal transition (EMT) is an important process in embryonic development, fibrosis, and cancer metastasis. During cancer progression, the activation of EMT permits cancer cells to acquire migratory, invasive, and stem-like properties. A growing body of evidence supports the critical link between EMT and cancer stemness. However, contradictory results have indicated that the inhibition of EMT also promotes cancer stemness, and that mesenchymal-epithelial transition, the reverse process of EMT, is associated with the tumor-initiating ability required for metastatic colonization. The concept of 'intermediate-state EMT' provides a possible explanation for this conflicting evidence. In addition, recent studies have indicated that the appearance of 'hybrid' epithelial-mesenchymal cells is favorable for the establishment of metastasis. In summary, dynamic changes or plasticity between the epithelial and the mesenchymal states rather than a fixed phenotype is more likely to occur in tumors in the clinical setting. Further studies aimed at validating and consolidating the concept of intermediate-state EMT and hybrid tumors are needed for the establishment of a comprehensive profile of cancer metastasis.
Maternal Carriage and Antimicrobial Resistance Profile of Group B Streptococcus
Background: The aim of this study was to determine the prevalence of group B Streptococcus (GBS) colonization and to evaluate the antimicrobial resistance profile in women in the third trimester of pregnancy. Materials and Methods: A total of 310 pregnant women, referred in weeks 35 to 37 of gestation, were screened for GBS colonization during a 10-month period. Samples were collected from the vagina and the rectum. Results: The colonization rate was 10.6% and 22 women (66.7%) had both positive vaginal and rectal cultures. Rates of GBS colonization were significantly lower in patients aged 24 years or older and in those with a third or later pregnancy. None of the isolates were resistant to penicillin and ampicillin, whereas 21.2% and 9.1% showed resistance to erythromycin and clindamycin, respectively. Conclusion: Screening and antimicrobial susceptibility testing of GBS during pregnancy are important to guide appropriate therapy.
"How to do things with words" in health professions education.
This paper reports on a qualitative study of journal entries written by students in six health professions participating in the Interprofessional Health Mentors program at the University of British Columbia, Canada. The study examined (1) what health professions students learn about professional language and communication when given the opportunity, in an interprofessional group with a patient or client, to explore the uses, meanings, and effects of common health care terms, and (2) how health professional students write about their experience of discussing common health care terms, and what this reveals about how students see their development of professional discourse and participation in a professional discourse community. Using qualitative thematic analysis to address the first question, the study found that discussion of these health care terms provoked learning and reflection on how words commonly used in one health profession can be understood quite differently in other health professions, as well as on how health professionals' language choices may be perceived by patients and clients. Using discourse analysis to address the second question, the study further found that many of the students emphasized accuracy and certainty in language through clear definitions and intersubjective agreement. However, when prompted by the discussion they were willing to consider other functions and effects of language.
Spotting words in handwritten Arabic documents
The design and performance of a system for spotting handwritten Arabic words in scanned document images is presented. The three main components of the system are a word segmenter, a shape-based matcher for words, and a search interface. The user types a query in English within a search window; the system finds the equivalent Arabic word, e.g., by dictionary look-up, and locates word images in an indexed (segmented) set of documents. A two-step approach is employed in performing the search: (1) prototype selection: the query is used to obtain a set of handwritten samples of that word from a known set of writers (these are the prototypes), and (2) word matching: the prototypes are used to spot each occurrence of those words in the indexed document database. A ranking is performed on the entire set of test word images, where the ranking criterion is a similarity score between each prototype word and the candidate words based on global word-shape features. A database of 20,000 word images contained in 100 scanned handwritten Arabic documents written by 10 different writers was used to study retrieval performance. Using five writers for providing prototypes and the other five for testing, with manually segmented documents, 55% precision is obtained at 50% recall. Performance increases as more writers are used for training.
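The abstract does not specify the word-shape features or the similarity score; as a sketch, assuming each word image has already been reduced to a fixed-length shape feature vector and using cosine similarity (both assumptions, not the paper's choices), the prototype-based ranking step might look like:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_candidates(prototypes, candidates):
    """Score every candidate word image by its best similarity to any
    prototype of the query word, then rank all candidates high-to-low.
    `prototypes` is a list of feature vectors; `candidates` maps a
    (hypothetical) word-image id to its feature vector."""
    scored = [(word_id, max(cosine(feat, p) for p in prototypes))
              for word_id, feat in candidates.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

Spotting then reduces to truncating or thresholding this ranked list, which is where the reported precision-at-recall trade-off is measured.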
Depth CNNs for RGB-D scene recognition: learning from scratch better than transferring from RGB-CNNs
Scene recognition with RGB images has been extensively studied and has reached remarkable recognition levels, thanks to convolutional neural networks (CNNs) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so approaches often leverage large RGB datasets by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching the bottom layers, which are key to learning modality-specific features. In contrast, we focus on the bottom layers and propose an alternative strategy to learn depth features, combining local weakly supervised training from patches followed by global fine-tuning with images. This strategy is capable of learning very discriminative depth-specific features with limited depth images, without resorting to Places-CNN. In addition, we propose a modified CNN architecture to further match the complexity of the model and the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them into a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth-only and combined RGB-D data.
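As a toy illustration of the fusion step only, assuming each modality's CNN has already produced a feature vector and that the projection and classifier weights come from the joint end-to-end training the abstract describes (all names, shapes, and the use of purely linear maps here are hypothetical simplifications):

```python
def project(feat, weights, bias):
    """Linear projection of one modality's feature vector into the
    shared space; `weights` is a list of rows, `bias` a vector."""
    return [sum(w * x for w, x in zip(row, feat)) + b
            for row, b in zip(weights, bias)]

def fuse_and_score(rgb_feat, depth_feat, rgb_proj, depth_proj, clf):
    """Project both modalities into a common space, concatenate, and
    score scene classes with a linear classifier. Each of rgb_proj,
    depth_proj, and clf is a (weights, bias) pair."""
    joint = project(rgb_feat, *rgb_proj) + project(depth_feat, *depth_proj)
    return [sum(w * x for w, x in zip(row, joint)) + b
            for row, b in zip(*clf)]
```

Because the projections and the classifier are differentiable, gradients from the classification loss flow back through both branches, which is what "jointly optimized in an end-to-end network" refers to.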
Genome scan for meat quality traits in Nelore beef cattle.
Meat quality traits are economically important because they affect consumers' acceptance, which, in turn, influences the demand for beef. However, selection to improve meat quality is limited by the small numbers of animals on which meat tenderness can be evaluated due to the cost of performing shear force analysis and the resultant damage to the carcass. Genome-wide association studies for Warner-Bratzler shear force measured at different times of meat aging, backfat thickness, ribeye muscle area, color scanning parameters [lightness (L*), redness (a*), and yellowness (b*)] to ascertain color characteristics of meat and fat, water-holding capacity, cooking loss (CL), and muscle pH were conducted using genotype data from the Illumina BovineHD BeadChip array to identify quantitative trait loci (QTL) in all phenotyped Nelore cattle. Phenotype count for these animals ranged from 430 to 536 across traits. Meat quality traits in Nelore are controlled by numerous QTL of small effect, except for a small number of large-effect QTL identified for a* of fat, CL, and pH. Genomic regions harboring these QTL and the pathways in which the genes from these regions act appear to differ from those identified in taurine cattle for meat quality traits. These results will guide future QTL mapping studies and the development of models for the prediction of genetic merit to implement genomic selection for meat quality in Nelore cattle.