341 |
Detection and Localization of Power and Coherence Dynamics with EEG / Ghahremani, Ayda 04 1900 (has links)
<p>It has been observed that periodic auditory stimuli can cause the activity in different brain areas to become periodically synchronized. Fast auditory stimuli have been shown to cause brain sources to synchronize at the rate of the stimuli. Brain sources respond not only with an increase in local synchronization, but also in the global synchronization of cortical regions, often regarded as functional connectivity. Spectral power and coherence are often used to characterize such neural synchronization. Beta-band oscillations have been reported to underlie the neural mechanism during repetitive auditory stimuli, and the cortical generators of these oscillations were investigated in several studies based on MEG measurements. This research investigates (1) whether EEG can be used to detect and localize neural sources changing in power and coherence, and (2) whether beta oscillations underlie such neural synchronization during fast repetitive auditory stimuli, based on EEG measurements. The procedure of this study consists of several steps. First, the minimum variance (MV) scalar beamformer, an adaptive spatial filter, is used to estimate the temporal signals in the brain source space, given EEG recordings. The analysis of the estimated source signals then consists of two stages: power analysis and coherence analysis. The dynamics of power and coherence are investigated instantaneously over time and in the lower beta frequency band (14–20 Hz). This is done by detecting the most prominent changes in the two spectral parameters through singular value decomposition (SVD). Two coherence measures, the imaginary component (IC) and the magnitude-squared coherence (MSC), are employed and compared in terms of their performance, both mathematically and experimentally. In the simulations, we show the capability of using EEG to detect and localize power co-variations and dynamic functional connectivity in the cortical regions.
We also apply the procedure to real data recorded from subjects passively listening to rhythmic auditory stimuli. Beta oscillations are found to underlie the neural activity involved in perceiving the auditory stimuli. This is shown by localization of the auditory cortices and detection of power co-variation in this frequency band. We demonstrate the feasibility of using EEG to identify coupled and co-activated brain sources similar to those obtained from MEG signals in previous studies. These include auditory and motor regions, which were found to be functionally coherent and to play a functional role in auditory perception. The superiority of the IC measure over MSC is proven mathematically and validated in both simulations and real-data experiments.</p> / Master of Applied Science (MASc)
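The two coherence measures compared in this abstract can be sketched with a minimal numpy/scipy example (this is only an illustration of the measures, not the thesis's beamformer pipeline; the sampling rate and signal parameters are invented):

```python
import numpy as np
from scipy.signal import csd, welch

def coherence_measures(x, y, fs, nperseg=256):
    """Estimate the complex coherency of two signals via Welch spectra,
    then return the magnitude-squared coherence (MSC) and the imaginary
    component (IC) of coherency at each frequency."""
    f, Pxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, Pxx = welch(x, fs=fs, nperseg=nperseg)
    _, Pyy = welch(y, fs=fs, nperseg=nperseg)
    coherency = Pxy / np.sqrt(Pxx * Pyy)
    return f, np.abs(coherency) ** 2, np.imag(coherency)
```

With zero-lag mixing (as produced by volume conduction), the coherency is nearly real, so MSC is high while IC stays near zero; the IC measure's insensitivity to such instantaneous mixing is the kind of property motivating its comparison with MSC.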
|
342 |
Approximation to K-Means-Type Clustering / Wei, Yu 05 1900 (has links)
<p> Clustering involves partitioning a given data set into several groups based on some similarity/dissimilarity measurements. Cluster analysis has been widely used in information retrieval, text and web mining, pattern recognition, image segmentation and software reverse engineering.</p> <p> K-means is the most intuitive and popular clustering algorithm and the workhorse of clustering in practice. However, the classical K-means suffers from several flaws. First, the algorithm is very sensitive to the initialization method and can easily be trapped at a local minimum with respect to the measurement (the sum of squared errors) used in the model. Moreover, it has been proven that finding a global minimum of the sum of squared errors is NP-hard even when k = 2. In the present model for K-means clustering, all the variables are required to be discrete and the objective is nonlinear and nonconvex.</p> <p> In the first part of the thesis, we consider how to derive an optimization model for the minimum sum of squared errors of a given data set based on continuous convex optimization. For this, we first transform K-means clustering into a novel optimization model, 0-1 semidefinite programming, where the eigenvalues of the involved matrix argument must be 0 or 1. This provides a unified framework for many other clustering approaches such as spectral clustering and normalized cut. The new optimization model also allows us to attack the original problem through relaxed linear and semidefinite programming.</p> <p> We then consider how to obtain a feasible solution of the original clustering problem from an approximate solution of the relaxed problem. By using principal component analysis, we construct a rounding procedure to extract a feasible clustering and show that our algorithm provides a 2-approximation to the global solution of the original problem. The complexity of our rounding procedure is O(n^(k^2(k-1)/2)), which improves substantially on a similar rounding procedure in the literature with complexity O(n^(k^3/2)). In particular, when k = 2, our rounding procedure runs in O(n log n) time. To the best of our knowledge, this is the lowest complexity reported in the literature for finding a solution to K-means clustering with guaranteed quality.</p> <p> In the second part of the thesis, we consider approximation methods for the so-called balanced bi-clustering problem. Using a simple heuristic, we prove that we can slightly improve the constrained K-means algorithm for bi-clustering. For the special case where the size of each cluster is fixed, we develop a new algorithm, called Q-means, to find a 2-approximation solution to the balanced bi-clustering problem, and we prove that Q-means has complexity O(n^2).</p> <p> Numerical results based on our approaches are reported in the thesis as well.</p> / Thesis / Master of Science (MSc)
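The k = 2 flavour of a PCA-guided rounding idea can be sketched as follows: project the data onto the first principal direction, sort, and evaluate every split of the sorted order with prefix sums, giving O(n log n) plus a linear scan. This is a simplified illustration, not the thesis's SDP-based procedure:

```python
import numpy as np

def pc_split_2means(X):
    """2-clustering heuristic: project onto the first principal
    component, sort, and pick the split of the sorted order that
    minimizes the total sum of squared errors (SSE)."""
    Xc = X - X.mean(axis=0)
    # First right singular vector = first principal direction.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    order = np.argsort(Xc @ Vt[0])
    Xs = X[order]
    n = len(Xs)
    # Prefix sums let each split's SSE be evaluated in O(d).
    csum = np.cumsum(Xs, axis=0)
    csq = np.cumsum((Xs ** 2).sum(axis=1))
    total_sum, total_sq = csum[-1], csq[-1]
    best_sse, best_i = np.inf, 1
    for i in range(1, n):
        # Cluster SSE identity: sum ||x||^2 - ||sum x||^2 / m.
        left = csq[i - 1] - (csum[i - 1] ** 2).sum() / i
        right = (total_sq - csq[i - 1]) \
                - ((total_sum - csum[i - 1]) ** 2).sum() / (n - i)
        if left + right < best_sse:
            best_sse, best_i = left + right, i
    labels = np.empty(n, dtype=int)
    labels[order[:best_i]] = 0
    labels[order[best_i:]] = 1
    return labels, best_sse
```

On well-separated data the first principal direction aligns with the separation axis, so the scan recovers the two clusters exactly.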
|
343 |
Characterization of the PilS-PilR two component regulatory system of Pseudomonas aeruginosa / Kilmury, Sara LN 11 1900 (has links)
Two-component regulatory systems are an important means for most prokaryotes to adapt quickly to changes in their environment. Canonical systems are composed of a sensor kinase, which detects signals that trigger autophosphorylation, and a response regulator, which imparts changes within the cell, usually through transcriptional regulation. The opportunistic pathogen Pseudomonas aeruginosa expresses a plethora of two-component systems, including the PilS-PilR sensor-regulator pair, which directs transcription of pilA, encoding the major component of the type IV pilus (T4P) system, in response to an unknown signal. T4P are surface appendages required for full virulence, as they perform several important functions including twitching motility, cell surface attachment, surface sensing, and biofilm formation. While loss of pili is known to decrease virulence, the effect of surplus surface pili on pathogenicity was unknown. In other T4P-expressing bacteria, PilR regulates the expression of non-T4P-related genes, but its regulon in P. aeruginosa was undefined. Here, we identify PilA as an intramembrane signal for PilS, regulating its own expression. When PilS-PilR function was altered through activating point mutations that induce hyperpiliation, pathogenicity in C. elegans was significantly impaired compared with both wild-type and non-piliated strains of P. aeruginosa. This phenotype could be recapitulated using other hyperpiliation-inducing mutations, providing evidence that overproduction of surface pili likely prevents productive engagement of contact-dependent virulence factors. Lastly, transcriptomic analyses revealed that expression of over 50 genes – including several involved in flagellar biosynthesis and function – is modulated by PilSR, suggesting coordinate regulation of motility in P. aeruginosa.
Together, this work provides new information on the control of pilA transcription and suggests novel roles for surface pili and the PilSR two-component system in virulence and swimming motility, respectively. The knowledge gained from this work could be applied to the development of a PilS- or PilR-based anti-virulence therapeutic. / Thesis / Doctor of Philosophy (PhD) / Pseudomonas aeruginosa is a Gram-negative bacterium and a common cause of hospital-acquired infections. The World Health Organization recently ranked P. aeruginosa as one of the top “priority pathogens” for which new treatments are desperately needed, in part due to its intrinsic resistance to many antibiotics. Among the key features that contribute to the infectivity of P. aeruginosa are its type IV pili (T4P), which are flexible, retractile surface appendages involved in cell surface attachment, movement across solid surfaces, and other important functions. Production of the major pilin protein, PilA, which forms most of the pilus, is tightly controlled by the two-component regulatory system PilS-PilR, where PilS is a sensor and PilR is a regulator that directly controls pilin expression. The aim of this work was to identify the signal(s) detected by the sensor, as well as additional genes or systems under PilSR control. We showed that the pilin protein interacts directly with the sensor to control its own expression, and that dysregulation of the PilS-PilR two-component system impairs both pathogenicity and other forms of motility. Together, the data presented here provide insight into how PilS-PilR controls expression of systems required for virulence of P. aeruginosa and highlight the potential of these proteins as possible therapeutic targets.
|
344 |
Product Performance and Contracts in Multi-component System Industries: Theory and Evidence / Shekari, Saeed January 2017 (has links)
This dissertation investigates how product performance contracts are organized in the multi-component system contexts that proliferate in contemporary OEM industries. The last two decades have seen significant change both in practice and in the product engineering technologies that form the ecosystem within which suppliers and buyers negotiate the scale and scope of their transaction contracts. While the focus of industrial procurement has moved from specification-based contracts to performance-based contracts, we are also witnessing a burgeoning technological capability that allows remote monitoring of product performance. These capabilities are part of the interconnectivity driving the much-touted Internet of Things (IoT) technology and sit at the heart of the industrial big data ecosystem. The dissertation explains three major phenomena in the industrial buyer-seller relationship in the context of multi-component system industries.
First, we uncover the factors that explain the choice of product performance contract specificity between the OEM and its suppliers. We set up an analytical model to formalize the notion of an optimal contract specificity level, and then predict and empirically test the role of different factors in the choice of contract specificity. We find that while technology uncertainty decreases the level of optimal contract specificity, the OEM's transaction-specific investment, unconstrained mixing and matching of branded components, and the extent of product monitoring technology increase it.
Second, we provide empirical evidence that any deviation from optimal contract specificity erodes value in the form of an increase in total transaction cost. In our transaction cost efficiency model, we also illustrate, at a fine level of granularity, that under-specified contracts lead to higher ex-post dispute costs, while over-specified contracts lead to higher ex-post contract monitoring costs and ex-ante contract writing costs.
Third, we investigate how contracts, investments in strategic capabilities such as monitoring technology, the overall firm strategy, and transaction costs determine firm performance. We find that not every transaction cost is a deadweight loss in terms of product performance. Most notably, we find that ex-post dispute costs are associated with higher product performance when there is a major incident, such as a component failure, between the OEM and the supplier.
Methodologically, this dissertation proposes to use a combination of field work, mathematical modeling, conceptual theory building, and empirical analysis of primary data about firm practices. / Thesis / Doctor of Philosophy (PhD)
|
345 |
Investigating the Beverage Patterns of Children and Youth with Obesity at the Time of Enrollment into Canadian Pediatric Weight Management Programs / Beverage Intake of Children and Youth with Obesity / Bradbury, Kelly January 2019 (has links)
Introduction: Beverages influence diet quality; however, beverage intake among youth with obesity is not well described in the literature. Dietary pattern analysis can identify how beverages cluster together and enables exploration of population characteristics.
Objectives: 1) Assess the proportion of children and youth with obesity who fail to meet the following thresholds: no sugar-sweetened beverages (SSB), <1 serving/week of SSB, and ≥2 servings/day of milk, and identify factors influencing the likelihood of failing these cut-offs. 2) Derive patterns of beverage intake and examine related social and behavioural factors and health outcomes at entry into Canadian pediatric weight management programs.
Methods: Beverage intake of youth (2–17 years) enrolled in the CANPWR study (n=1425) was reported at baseline visits from 2013 to 2017. Beverage thresholds identified weekly SSB consumers and approximated Canadian recommendations. The relationship of sociodemographic factors (income, guardian education, race, household status) and behaviours (eating habits, physical activity, screen time) to the likelihood of failing the cut-offs was explored using multivariable logistic regression. Beverage patterns were derived using Principal Component Analysis. Associations with sociodemographic and behavioural factors and with health outcomes (lipid profile, fasting glucose, HbA1c, liver enzymes) were evaluated with multiple linear regression.
Results: Nearly 80% of youth consumed ≥1 serving/week of SSB. This was more common in males and in families with lower education, and was related to eating habits and higher screen time. Two-thirds failed to drink ≥2 servings of milk/day; these youth were more likely to be female, demonstrated favourable eating habits, and reported lower screen time. Five beverage patterns were identified: 1) SSB, 2) 1% Milk, 3) 2% Milk, 4) Alternatives, 5) Sports Drinks/Flavoured Milks. Patterns were related to social and lifestyle determinants; the only related health outcome was HDL.
Conclusion: Many children and youth with obesity consumed SSB weekly. Fewer drank milk twice daily. Beverage intake was predicted by sex, socioeconomic status and other behaviours; however, most beverage patterns were unrelated to health outcomes. / Thesis / Master of Science (MSc) / Beverage intake can influence diet and health outcomes in population-based studies. However, patterns of beverage consumption are not well described among youth with obesity. This study examined beverage intake and its relationships with sociodemographic information, behaviours and health outcomes among youth (2–17 years) at the time of entry into Canadian pediatric weight management programs (n=1425). In contrast to current recommendations, 80% of youth consumed ≥1 serving/week of sugar-sweetened beverages and 66% failed to consume ≥2 servings/day of milk. Additionally, five distinct patterns of beverage intake were identified using dietary pattern analysis. Social factors (age, sex, socioeconomic status) and behaviours (screen time, eating habits) were related to the risk of failing to meet recommendations and to beverage patterns. Identifying the sociodemographic characteristics and behaviours of youth with obesity who fail to meet beverage intake thresholds, or who adhere to certain patterns of consumption, may provide insight for clinicians to guide youth to improved health in weight management settings.
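Dietary pattern analysis of the kind used here can be sketched with a small PCA on a beverage-frequency matrix. The beverage columns and data below are synthetic stand-ins, not CANPWR data; the point is only how "patterns" fall out as component loadings with per-subject adherence scores:

```python
import numpy as np

def beverage_patterns(freq, n_patterns=2):
    """Derive beverage intake patterns: standardize each beverage's
    weekly frequency, then take the top principal components as
    'patterns' (loadings), with per-subject adherence scores."""
    Z = (freq - freq.mean(axis=0)) / freq.std(axis=0)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    loadings = Vt[:n_patterns]              # pattern definitions
    scores = Z @ loadings.T                 # each subject's adherence
    explained = (S ** 2 / (S ** 2).sum())[:n_patterns]
    return loadings, scores, explained
```

Subjects scoring high on a pattern can then be compared on sociodemographic factors or health outcomes with the regression models described in the Methods.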
|
346 |
New robust and fragile watermarking scheme for colour images captured by mobile phone cameras / Jassim, Taha Dawood, Abd-Alhameed, Raed, Al-Ahmad, Hussain January 2013 (has links)
No / This paper examines and evaluates a new robust and fragile watermarking scheme for colour images captured by mobile phone cameras. Authentication is provided by the fragile watermark, while copyright protection is provided by the robust one. The mobile phone number, including the international code, is unique worldwide and is used as the robust watermark. The number is embedded in the frequency domain using the discrete wavelet transform. Hash codes, on the other hand, are used as fragile watermarks and are inserted in the spatial domain of the RGB image. The scheme is blind, and the extraction process of the watermarks (robust and fragile) does not require the original image. The fragile watermark can detect any tampering in the image, while the robust watermark is strong enough to survive several attacks. The watermarking algorithm causes minimal distortion to the images. The proposed algorithm has been successfully tested, evaluated and compared with other algorithms.
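Blind embedding in a wavelet detail band can be sketched in numpy alone. This uses quantization-index modulation on a one-level Haar transform, which differs in detail from the paper's scheme (the paper embeds a phone number via DWT and hash codes in the spatial domain); the step size and bit payload here are illustrative:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D orthonormal Haar transform (numpy only)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(LL, LH, HL, HH):
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL - LH + HL - HH) / 2
    out[1::2, 0::2] = (LL + LH - HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

def embed_bits(img, bits, step=8.0):
    """Blind watermark sketch: each bit forces one detail-band
    coefficient to an even (bit 0) or odd (bit 1) multiple of step."""
    LL, LH, HL, HH = haar_dwt2(img.astype(float))
    flat = HL.ravel().copy()
    for i, bit in enumerate(bits):
        q = int(np.round(flat[i] / step))
        if q % 2 != bit:
            q += 1
        flat[i] = q * step
    return haar_idwt2(LL, LH, flat.reshape(HL.shape), HH)

def extract_bits(img, n, step=8.0):
    """Blind extraction: only the step size is needed, not the original."""
    _, _, HL, _ = haar_dwt2(img.astype(float))
    return [int(np.round(c / step)) % 2 for c in HL.ravel()[:n]]
```

Because the transform is orthonormal, extraction recovers the coefficients exactly, and each embedded bit perturbs only a 2x2 pixel block by a bounded amount.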
|
347 |
Open Digital Libraries / Suleman, Hussein 26 November 2002 (has links)
Digital Libraries (DLs) are software systems specifically designed to assist users in information seeking activities. Stemming from the intersection of library sciences and computer networking, traditional DL systems impose library philosophies of structure and management on the sprawling collections of data that are made possible through the Internet.
DLs evolve to keep pace with innovation on the Internet, so there is little standardization in the architecture of such systems. However, in attempting to provide users with the highest possible levels of service with the minimum possible effort, many systems work collaboratively with others, e.g., meta-search engines. This type of system interoperability is encouraged by the emergence of simple data transfer protocols such as the Open Archives Initiative's Protocol for Metadata Harvesting (OAI-PMH).
Open Digital Libraries are an extension of the work of the OAI. It is proposed in this dissertation that the philosophy and approach adopted by the OAI can easily be extended to support inter-component interaction within a componentized DL. In particular, DLs can be built by connecting small components that communicate through a family of lightweight protocols, using XML as the data interchange mechanism. In order to test the feasibility of this, a set of protocols was designed based on a generalization of the work of the OAI. Components adhering to these protocols were implemented and integrated into production and research DLs. These systems were then evaluated for simplicity, reusability, and performance.
On the whole, this study has shown promise in the approach of applying the fundamental concepts of the OAI protocol to the task of DL component design and implementation. Further, it has shown the feasibility of building componentized DL systems using techniques that are a precursor to the Web Services approach to system design. / Ph. D.
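The harvesting protocol that the ODL components generalize can be illustrated with a short parser for an OAI-PMH ListIdentifiers response. The repository identifiers below are invented; in a live harvester the XML would come from an HTTP GET against a repository base URL:

```python
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"

def parse_identifiers(xml_text):
    """Extract record identifiers from an OAI-PMH ListIdentifiers
    response, e.g. the reply to
    <baseURL>?verb=ListIdentifiers&metadataPrefix=oai_dc."""
    root = ET.fromstring(xml_text)
    return [h.findtext(OAI + "identifier")
            for h in root.iter(OAI + "header")]

# A canned response fragment standing in for a repository's reply.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <responseDate>2002-11-26T00:00:00Z</responseDate>
  <ListIdentifiers>
    <header><identifier>oai:example:341</identifier>
      <datestamp>2002-01-01</datestamp></header>
    <header><identifier>oai:example:342</identifier>
      <datestamp>2002-02-01</datestamp></header>
  </ListIdentifiers>
</OAI-PMH>"""
```

The appeal for componentized DLs is exactly this simplicity: XML over plain HTTP requests, parseable with stock tools, which is what makes a family of lightweight inter-component protocols practical.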
|
348 |
Specifying and Verifying Collaborative Behavior in Component-Based Systems / Yilmaz, Levent 03 April 2002 (has links)
In a parameterized collaboration design, one views software as a collection of components that play specific roles in interacting, giving rise to collaborative behavior. From this perspective, collaboration designs revolve around reusing collaborations that typify certain design patterns. Unfortunately, verifying that active, concurrently executing components obey the synchronization and communication requirements needed for the collaboration to work is a serious problem. At least two major complications arise in concurrent settings: (1) it may not be possible to analytically identify components that violate the synchronization constraints required by a collaboration, and (2) evolving participants in a collaboration independently often gives rise to unanticipated synchronization conflicts. This work presents a solution technique that addresses both of these problems. Local (that is, role-to-role) synchronization consistency conditions are formalized and associated decidable inference mechanisms are developed to determine mutual compatibility and safe refinement of synchronization behavior. More specifically, given generic parameterized collaborations and components with specific roles, mutual compatibility analysis verifies that the provided and required synchronization models are consistent and integrate correctly. Safe refinement, on the other hand, guarantees that the local synchronization behavior is maintained consistently as the roles and the collaboration are refined during development. This form of local consistency is necessary, but insufficient to guarantee a consistent collaboration overall. As a result, a new notion of global consistency (that is, among multiple components playing multiple roles) is introduced: causal process constraint analysis. A method for capturing, constraining, and analyzing global causal processes, which arise due to causal interference and interaction of components, is presented. 
Principally, the method allows one to: (1) represent the intended causal processes in terms of interactions depicted in UML collaboration graphs; (2) formulate constraints on such interactions and their evolution; and (3) check that the causal process constraints are satisfied by the observed behavior of the component(s) at run-time. / Ph. D.
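The flavour of a local synchronization-compatibility check can be conveyed with a toy refinement test between deterministic labelled transition systems. This is an illustration of the idea only, not the thesis's decidable inference mechanism; the state and action names are invented:

```python
def simulates(spec, impl, s0, i0):
    """Check that deterministic transition system `impl` refines `spec`:
    every action `impl` can take, `spec` must allow, with the successor
    pair checked recursively (a simulation relation via DFS).
    Systems are dicts: state -> {action: next_state}."""
    seen = set()
    stack = [(s0, i0)]
    while stack:
        s, i = stack.pop()
        if (s, i) in seen:
            continue
        seen.add((s, i))
        for action, i_next in impl[i].items():
            if action not in spec[s]:
                return False   # impl violates spec's synchronization
            stack.append((spec[s][action], i_next))
    return True
```

A component whose role protocol, say, uses a shared resource without first acquiring it would fail this check against a lock-use-unlock specification, which is the kind of synchronization violation the mutual compatibility analysis is designed to catch.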
|
349 |
Macroeconomic Forecasting: Statistically Adequate, Temporal Principal Components / Dorazio, Brian Arthur 05 June 2023 (has links)
The main goal of this dissertation is to expand upon the use of Principal Component Analysis (PCA) in macroeconomic forecasting, particularly in cases where traditional principal components fail to account for all of the systematic information making up common macroeconomic and financial indicators. At the outset, PCA is viewed as a statistical model derived from the reparameterization of the Multivariate Normal model in Spanos (1986). To motivate a PCA forecasting framework prioritizing sound model assumptions, it is demonstrated through simulation experiments that model mis-specification erodes the reliability of inferences. The Vector Autoregressive (VAR) model at the center of these simulations allows for the Markov (temporal) dependence inherent in macroeconomic data and serves as the basis for extending conventional PCA. Stemming from the relationship between PCA and the VAR model, an operational out-of-sample forecasting methodology is prescribed incorporating statistically adequate, temporal principal components, i.e., principal components which capture not only Markov dependence but all of the other relevant information in the original series. The macroeconomic forecasts produced by applying this framework to several common macroeconomic indicators are shown to outperform standard benchmarks in terms of predictive accuracy over longer forecasting horizons. / Doctor of Philosophy / The landscape of macroeconomic forecasting and nowcasting has shifted drastically with the advent of big data. Armed with significant growth in computational power and data collection resources, economists have augmented their arsenal of statistical tools to include those which can produce reliable results in big data environments. At the forefront of such tools is Principal Component Analysis (PCA), a method which reduces a large number of predictors to a few factors containing the majority of the variation making up the original data series.
This dissertation expands upon the use of PCA in the forecasting of key macroeconomic indicators, particularly in instances where traditional principal components fail to account for all of the systematic information comprising the data. Ultimately, a forecasting methodology which incorporates temporal principal components, ones capable of capturing both time dependence and the other relevant information in the original series, is established. In the final analysis, the methodology is applied to several common macroeconomic and financial indicators. The forecasts produced using this framework are shown to outperform standard benchmarks in terms of predictive accuracy over longer forecasting horizons.
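The conventional two-step baseline that this work extends (extract principal components, then fit a time-series model to the component scores) can be sketched as follows. This is only the standard PCA-then-AR(1) baseline, not the dissertation's statistically adequate temporal components, which model the Markov dependence jointly:

```python
import numpy as np

def pc_ar_forecast(X, n_pc=2, horizon=1):
    """Two-step component forecast for a T x N panel: take the top
    n_pc principal components of the centered panel, fit an AR(1) to
    each score series by least squares, iterate `horizon` steps ahead,
    and map the forecast scores back to the original variables."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_pc].T                    # loadings (N x n_pc)
    F = Xc @ V                         # component scores (T x n_pc)
    phi = np.array([(F[1:, j] @ F[:-1, j]) / (F[:-1, j] @ F[:-1, j])
                    for j in range(n_pc)])   # per-component AR(1)
    f = F[-1].copy()
    for _ in range(horizon):
        f = phi * f
    return mu + V @ f
```

With `horizon=0` and all components retained, the mapping back is exact (the PCA round trip reconstructs the last observation), which is a useful sanity check on the machinery before assessing predictive accuracy out of sample.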
|
350 |
Reduction of Printed Circuit Card Placement Time Through the Implementation of Panelization / Tester, John T. 09 October 1999 (has links)
Decreasing the cycle time of panels in the printed circuit card manufacturing process has been a significant research topic over the past decade. The research objective in this literature has been to reduce placement machine cycle times by finding the optimal placement sequences and component-feeder allocation for a given, fixed panel component layout on a given machine type. Until now, no research has been found which allows the alteration of the panel configuration itself when panelization is a part of the electronic panel design. This research is the first effort to incorporate panelization into the cycle time reduction field. The PCB circuit design is not altered; rather, the panel design (i.e., the arrangement of the PCBs in the panel) is altered to reduce the panel assembly time. Component placement problem models are developed for three types of machines: the automated insertion machine (AIM), the pick-and-place machine (PAPM), and the rotary turret head machine (RTHM). Two solution procedures are developed, both based upon a genetic algorithm (GA) approach. One procedure simultaneously produces solutions for the best panel design and component placement sequence. The other first selects a best panel design based upon an estimate of its worth to the minimization problem, and then uses a more traditional GA to solve the component placement and component type allocation problem for that panel design. Experiments were conducted to discover situations where the consideration of panelization can make a significant difference in panel assembly times. It was shown that the PAPM scenario benefits most from panelization and the RTHM the least, though all three machine types show improvements under certain conditions established in the experiments.
NOTE: An updated copy of this ETD was added on 09/17/2010. / Ph. D.
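A stripped-down, mutation-only genetic algorithm for the placement-sequence subproblem might look as follows. The dissertation's GA additionally evolves the panel design and component-feeder allocation jointly, which this sketch omits; the Manhattan travel metric and GA parameters are illustrative assumptions:

```python
import random

def tour_length(seq, pts):
    """Placement-head travel distance (Manhattan metric) for visiting
    the placement points in the given order."""
    return sum(abs(pts[a][0] - pts[b][0]) + abs(pts[a][1] - pts[b][1])
               for a, b in zip(seq, seq[1:]))

def ga_sequence(pts, pop_size=40, gens=200, seed=0):
    """Evolve a placement sequence: permutation encoding, binary
    tournament selection, swap mutation, one-elite survival."""
    rng = random.Random(seed)
    n = len(pts)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        best = min(pop, key=lambda s: tour_length(s, pts))
        nxt = [best]                                   # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop, 2)                  # tournament of two
            child = min(a, b, key=lambda s: tour_length(s, pts))[:]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]    # swap mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda s: tour_length(s, pts))
```

Extending the chromosome so that part of it encodes a panel-design choice, with the rest encoding the sequence, gives the flavour of the simultaneous procedure described in the abstract.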
|