  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

A Principal Component Algorithm for Feedforward Active Noise and Vibration Control

Cabell, Randolph H. III 28 April 1998 (has links)
A principal component least mean square (PC-LMS) adaptive algorithm is described that has considerable benefits for large control systems used to implement feedforward control of single frequency disturbances. The algorithm is a transform domain version of the multichannel filtered-x LMS algorithm. The transformation corresponds to the principal components of the transfer function matrix between the sensors and actuators in a control system at a single frequency. The method is similar to other transform domain LMS algorithms because the transformation can be used to accelerate convergence when the control system is ill-conditioned. This ill-conditioning is due to actuator and sensor placement on a continuous structure. The principal component transformation rotates the control filter coefficient axes to a more convenient coordinate system where (1) independent convergence factors can be used on each coordinate to accelerate convergence, (2) insignificant control coordinates can be eliminated from the controller, and (3) coordinates that require excessive control effort can be eliminated from the controller. The resulting transform domain algorithm has lower computational requirements than the filtered-x LMS algorithm. The formulation of the algorithm given here applies only to single frequency control problems, and computation of the decoupling transforms requires an estimate of the transfer function matrix between control actuators and error sensors at the frequency of interest. The feasibility of the method was demonstrated in real-time noise control experiments involving 48 microphones and 12 control actuators mounted on a closed cylindrical shell. Convergence of the PC-LMS algorithm was more stable than that of the filtered-x LMS algorithm. In addition, the PC-LMS controller produced more noise reduction with less control effort than the filtered-x LMS controller in several tests. / Ph. D.
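As a rough numerical sketch of the scheme this abstract describes (not the author's implementation; the plant matrix, disturbance, and step sizes below are synthetic), the principal component transform of a single-frequency transfer matrix and the decoupled per-coordinate updates might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-frequency plant: complex transfer-function matrix G
# (error sensors x control actuators) and a tonal disturbance d.
n_sensors, n_actuators = 8, 3
G = (rng.standard_normal((n_sensors, n_actuators))
     + 1j * rng.standard_normal((n_sensors, n_actuators)))
d = rng.standard_normal(n_sensors) + 1j * rng.standard_normal(n_sensors)

# Principal component transform: SVD of the plant at this frequency.
U, s, Vh = np.linalg.svd(G, full_matrices=False)

# Drop insignificant control coordinates (tiny singular values).
keep = s > 1e-8 * s[0]
U, s, Vh = U[:, keep], s[keep], Vh[keep, :]

# Independent convergence factor per coordinate: dividing by s_i**2
# equalizes convergence speed across well- and ill-conditioned directions.
mu = 0.5 / s**2

q = np.zeros(s.size, dtype=complex)       # control filter in PC coordinates
for _ in range(50):
    w = Vh.conj().T @ q                   # back to actuator coordinates
    e = d + G @ w                         # residual at the error sensors
    q = q - mu * s * (U.conj().T @ e)     # decoupled LMS update per coordinate

initial = np.linalg.norm(d)
final = np.linalg.norm(d + G @ (Vh.conj().T @ q))
```

Because the updates are fully decoupled, coordinates that demand excessive control effort could simply be zeroed out, which is the kind of truncation the abstract refers to.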
52

Principal component analysis uncovers cytomegalovirus-associated NK cell activation in Ph+ leukemia patients treated with dasatinib / 主成分分析により明らかになったダサチニブ治療中のフィラデルフィア染色体陽性白血病患者におけるサイトメガロウイルス関連NK細胞の活性化

Ishiyama, Ken-ichi 23 January 2017 (has links)
Kyoto University / 0048 / New-system doctoral course / Doctor of Medical Science / Kō No. 20072 / I-Haku No. 4165 / 新制||医||1018 (University Library) / 33188 / Department of Medicine, Graduate School of Medicine, Kyoto University / (Chief examiner) Prof. Taira Maekawa, Prof. Seishi Ogawa, Prof. Yoshio Koyanagi / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
53

Machine Learning Driven Model Inversion Methodology To Detect Reniform Nematodes In Cotton

Palacharla, Pavan Kumar 09 December 2011 (has links)
Rotylenchulus reniformis is a nematode species affecting the cotton crop and quickly spreading throughout the southeastern United States. Effective use of nematicides at a variable rate is the only economic countermeasure. It requires knowledge of the intra-field variation in nematode population, which in turn depends on collecting soil samples from the field and analyzing them in the laboratory. This process is economically prohibitive. Hence, estimating the nematode infestation on the cotton crop using remote sensing and machine learning techniques, which are cost- and time-effective, is the motivation for this study. In the current research, the concept of multi-temporal remote sensing has been implemented in order to design a robust and generalized nematode-detection regression model. Finally, a user-friendly web service was created that gives trustworthy results for the given input data, thereby helping reduce nematode infestation in the crop and the expenses on nematicides.
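The listing does not specify the regression model used. Purely as a hedged illustration of the general idea of mapping multi-temporal remote-sensing features to a nematode-population estimate (all feature names and data below are synthetic, not the thesis's), a ridge regression could be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for multi-temporal features, e.g. a vegetation index
# sampled at 6 acquisition dates for 60 field locations.
n_fields, n_features = 60, 6
X = rng.standard_normal((n_fields, n_features))
true_w = np.array([1.5, -0.8, 0.3, 0.0, 0.7, -1.2])   # hypothetical weights
y = X @ true_w + 0.1 * rng.standard_normal(n_fields)  # "nematode counts"

# Ridge regression: w = (X^T X + lam*I)^-1 X^T y
lam = 1e-2
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

pred = X @ w_hat
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

A fitted model of this kind, wrapped behind a web endpoint, is the sort of service the abstract describes.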
54

The Objective Assessment of Movement Quality Using Motion Capture and Machine Learning

Ross, Gwyneth Butler 05 January 2022 (has links)
Background: Movement screens are frequently used to identify abnormal movement patterns that may increase risk of injury and/or hinder performance. However, abnormal patterns are often detected visually based on the observations of a coach or clinician, leading to poor inter- and intra-rater reliability. In addition, they have been criticized for having poor validity and sensitivity. Quantitative, data-driven methods can increase objectivity, remove issues related to inter-rater reliability, and offer the potential to detect new and important features that may not be observable by the human eye. The combination of motion capture data, pattern recognition and machine learning could provide a quantitative method to better assess movement competency. Purpose: The purpose of this doctoral thesis was to create the foundation for the development of an objective movement screening tool that combines motion capture data, pattern recognition and machine learning. This doctoral thesis is part of a larger project to bring an objective movement screening tool for use in the field to market. Methods: This thesis comprises four studies based on a single data collection and a common series of pre-processing steps. Data from 542 athletes were collected by Motus Global, a for-profit biomechanics company, with athletes ranging in competition level from youth to professional and competing in a wide range of sports. For the first study of this thesis, an online software program was developed to examine the inter- and intra-rater reliability of a movement screen, with intra-rater reliability further examined to compare reliability when body shape was and was not modified. The second study developed the objective movement screen framework that utilized motion capture, pattern recognition and machine learning.
Studies 3 and 4 assessed different types of input data, classification goals (e.g., skill level and sport played), feature reduction and selection methods, and increasingly complex machine learning algorithms. Results: For Study 1, when looking at inter- and intra-rater reliability of expert assessors during subjective scoring of movements, intra-rater reliability was better than inter-rater reliability. When assessing the effects of body shape, on average, reliability worsened when body shape was manipulated. Study 2 provided proof of principle that athletes could be classified by skill level using marker-based optical motion capture data, principal component analysis (PCA) and linear discriminant analysis. For Study 3, PCA in combination with linear classifiers outperformed non-linear classifiers when classifying athletes based on skill level; feature selection increased classification rates, and classification rates when using simulated inertial measurement unit data as the input were on average better than when using marker-based optical motion capture data. In Study 4, athletes could be differentiated based on sport played, and recurrent neural networks (RNNs) and PCA in combination with traditional linear classifiers were the optimal machine learning algorithms when classifying athletes based on skill level and sport played. Conclusion: This thesis demonstrates that objective methods can differentiate athletes based on desired demographics using motion capture, pattern recognition and machine learning. This thesis is part of a larger project to bring an objective movement screening tool for field-use to market and provides a solid foundation for the continued development of an objective movement screening tool.
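A minimal sketch of the PCA-plus-linear-discriminant idea used in Study 2, with synthetic "movement features" standing in for the thesis's motion-capture dataset (group sizes, dimensions, and the separation below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two groups (e.g. two skill levels) that differ along one feature direction.
n_per, n_feat = 40, 20
shift = np.zeros(n_feat)
shift[0] = 3.0
A = rng.standard_normal((n_per, n_feat))
B = rng.standard_normal((n_per, n_feat)) + shift
X = np.vstack([A, B])
y = np.array([0] * n_per + [1] * n_per)

# PCA: project centered data onto the top principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T                      # keep 5 components

# Fisher LDA (two classes): w is proportional to Sw^-1 (m1 - m0).
m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)  # pooled within-class scatter
w = np.linalg.solve(Sw, m1 - m0)
threshold = w @ (m0 + m1) / 2          # midpoint between projected means

pred = (Z @ w > threshold).astype(int)
accuracy = (pred == y).mean()
```

The same pipeline shape (dimensionality reduction, then a linear classifier) is what the later studies compare against non-linear alternatives.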
55

Non-destructive Analysis Of Trace Textile Fiber Evidence Via Room-temperature Fluorescence Spectroscopy

Appalaneni, Krishnaveni 01 January 2013 (has links)
Forensic fiber evidence plays an important role in many criminal investigations. Non-destructive techniques that can either discriminate between similar fibers or match a known to a questioned fiber - and still preserve the physical integrity of the fibers for further court examination - are highly valuable in forensic science. When fibers cannot be discriminated by non-destructive tests, the next reasonable step is to extract the questioned and known fibers for dye analysis with a more selective technique such as high-performance liquid chromatography (HPLC) and/or gas chromatography-mass spectrometry (GC-MS). The common denominator among chromatographic techniques is that they focus primarily on the dyes used to color the fibers and do not investigate other potential discriminating components present on the fiber. Differentiating among commercial dyes with very similar chromatographic behaviors and almost identical absorption spectra and/or fragmentation patterns is a challenging task. This dissertation explores a different aspect of fiber analysis as it focuses on the total fluorescence emission of fibers. In addition to the contribution of the textile dye (or dyes) to the fluorescence spectrum of the fiber, we investigate the contribution of intrinsic fluorescence impurities - i.e. impurities embedded into the fibers during fabrication of garments - as a reproducible source of fiber comparison. Differentiation of visually indistinguishable fibers is achieved by comparing excitation-emission matrices (EEMs) recorded from single textile fibers with the aid of a commercial spectrofluorimeter coupled to an epi-fluorescence microscope. Statistical data comparison was carried out via principal component analysis.
An application of this statistical approach is demonstrated using challenging dyes with similarities both in two-dimensional absorbance spectra and in three-dimensional EEM data. High accuracy of fiber identification was observed in all cases, and no false-positive identifications were observed at the 99% confidence level.
56

De-Mixing Decision Representations in Rodent dmPFC to Investigate Strategy Change During Delay Discounting

White, Shelby M. 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Several pathological disorders are characterized by maladaptive decision-making (Dalley & Robbins, 2017). Decision-making tasks, such as Delay Discounting (DD), are used to assess the behavioral manifestations of maladaptive decision-making in both clinical and preclinical settings (de Wit, Flory, Acheson, McCloskey, & Manuck, 2007). DD measures cognitive impulsivity and broadly refers to the inability to delay gratification (Hamilton et al., 2015). How decisions are made in tasks that measure DD can be understood by assessing patterns of behavior that are observable in the sequences of choices or the statistics that accompany each choice (e.g. response latency). These measures have led to insights that suggest strategies that are used by the agent to facilitate the decision (Linsenbardt, Smoker, Janetsian-Fritz, & Lapish, 2016). The current set of analyses aims to use individual trial data to identify the neural underpinnings associated with strategy transition during DD. A greater understanding of how strategy change occurs at a neural level will be useful for developing cognitive and behavioral strategies aimed at reducing impulsive choice. The rat dorso-medial prefrontal cortex (dmPFC) has been implicated as an important brain region for recognizing the need to change strategy during DD (Powell & Redish, 2016). Using advanced statistical techniques, such as demixed principal component analysis (dPCA), we can begin to understand how decision representations evolve over the decision-making process to impact behaviors such as strategy change. This study was the first known attempt at using dPCA applied to individual sessions to accurately model how decision representations evolve across individual trials. Evidence exists that representations follow a breakdown and remapping at the individual trial level (Karlsson, Tervo, & Karpova, 2012; Powell & Redish, 2016).
Furthermore, these representational changes across individual trials have previously been proposed to act as a signal to change strategies (Powell & Redish, 2016). This study aimed to test the hypothesis that a ‘breakdown’ followed by a ‘remapping’ of the decision representation would act as a signal to change strategy that is observable in the behavior of the animal. To investigate the relationship between trials surrounding the breakdown and/or subsequent remapping of the decision representation and trials surrounding strategy changes, sequences of trials surrounding the breakdown and/or remapping were compared to sequences of 9 trials surrounding the strategy-change trial. Strategy types consisted of exploiting the immediate lever (IM-Exploit), exploiting the delay lever (DEL-Exploit), or exploring between the two lever options (Explore). Contrary to the hypothesis, breakdown and remapping trial sequences were not, overall, associated with change-trial sequences. In partial support of the hypothesis, however, at the 4-sec delay, when the subjective value of the immediate reward was high, a relationship between breakdown sequences and strategy-change sequences was detected when the animal was exploiting the delay lever (i.e., the DEL-Exploit strategy). This result suggests that a breakdown in decision representation may act as a signal to prompt strategy change in certain contexts. One notable finding of this study was that the decision representation was much more robust at the 4-sec delay than at the 8-sec delay, suggesting that decisions at the 4-sec delay contain more context differentiating the two choice options (immediate or delay). In other words, the encoding of the two choice options was more dissociable at the 4-sec delay than at the 8-sec delay, which was quantified by measuring the average distance between the two representations (immediate and delay) on a given trial.
Given that Wistar rats are equally likely to choose between the immediate and delay choice alternatives at the 8-sec delay (Linsenbardt et al., 2016), this finding provides further support for current prevalent theories of how animals use a cognitive search process to mentally imagine choice alternatives during deliberation. If context which differentiates choice options at the 8-sec delay is less dissociable, it is likely that the cognitive search process would be equally likely to find either choice option. If the choice options are equally likely to be found, it would be assumed that the choice alternatives would also be equally likely to be chosen, which is what has been observed in Wistar rats at the 8-sec delay.
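The separability measure described above (average distance between the immediate and delay representations on a given trial) can be sketched with synthetic state-space data; the actual demixed-PCA components and recordings are not reproduced here, and the numbers below are invented to mirror the reported 4-sec versus 8-sec contrast:

```python
import numpy as np

rng = np.random.default_rng(4)

n_trials, n_comp = 30, 3
# Synthetic low-dimensional neural states for the two choice options:
# well-separated at the 4-s delay, much less so at the 8-s delay.
imm_4 = rng.standard_normal((n_trials, n_comp))
dly_4 = rng.standard_normal((n_trials, n_comp)) + 2.0
imm_8 = rng.standard_normal((n_trials, n_comp))
dly_8 = rng.standard_normal((n_trials, n_comp)) + 0.3

def mean_separation(a, b):
    """Average per-trial Euclidean distance between two condition states."""
    return np.linalg.norm(a - b, axis=1).mean()

sep_4 = mean_separation(imm_4, dly_4)
sep_8 = mean_separation(imm_8, dly_8)
```

A larger `sep_4` than `sep_8` is the pattern the study interprets as more dissociable encoding at the shorter delay.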
57

Limitations of Principal Component Analysis for Dimensionality-Reduction for Classification of Hyperspectral Data

Cheriyadat, Anil Meerasa 13 December 2003 (has links)
It is a popular practice in the remote-sensing community to apply principal component analysis (PCA) on a higher-dimensional feature space to achieve dimensionality-reduction. Several factors that have led to the popularity of PCA include its simplicity, ease of use, availability as part of popular remote-sensing packages, and optimal nature in terms of mean square error. These advantages have prompted the remote-sensing research community to overlook many limitations of PCA when used as a dimensionality-reduction tool for classification and target-detection applications. This thesis addresses the limitations of PCA when used as a dimensionality-reduction technique for extracting discriminating features from hyperspectral data. Theoretical and experimental analyses are presented to demonstrate that PCA is not necessarily an appropriate feature-extraction method for high-dimensional data when the objective is classification or target-recognition. The influence of certain data-distribution characteristics, such as within-class covariance, between-class covariance, and correlation, on the PCA transformation is analyzed in this thesis. The classification accuracies obtained using PCA features are compared to accuracies obtained using other feature-extraction methods, such as variants of the Karhunen-Loève transform and greedy search algorithms on spectral and wavelet domains. Experimental analyses are conducted for both two-class and multi-class cases. The classification accuracies obtained from higher-order PCA components are compared to the classification accuracies of features extracted from different regions of the spectrum. The comparative study of the classification accuracies obtained using the above feature-extraction methods ascertains that PCA may not be an appropriate tool for dimensionality-reduction of certain hyperspectral data-distributions when the objective is classification or target-recognition.
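The central limitation can be reproduced in a few lines: PC1 maximizes variance, not class separation, so when the discriminating direction carries little variance, projecting onto PC1 destroys it. The data below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two features: one high-variance but class-independent, one low-variance
# but carrying all of the class information.
n = 200
noise_dir = rng.standard_normal(n) * 5.0            # high variance, no class info
signal = np.concatenate([np.full(n // 2, -1.0), np.full(n // 2, 1.0)])
signal = signal + 0.2 * rng.standard_normal(n)      # low variance, separates classes
X = np.column_stack([noise_dir, signal])
y = np.array([0] * (n // 2) + [1] * (n // 2))

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                                    # top-variance direction

def best_threshold_accuracy(z, y):
    """Accuracy of the best 1-D threshold classifier (either sign convention)."""
    return max(max(((z > t) == y).mean(), ((z < t) == y).mean())
               for t in np.sort(z))

acc_pc1 = best_threshold_accuracy(pc1, y)           # near chance
acc_signal = best_threshold_accuracy(Xc[:, 1], y)   # near perfect
```

Here the between-class scatter lies in a low-variance direction, exactly the kind of data distribution for which the thesis argues PCA is a poor feature extractor.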
58

A Fuzzy Model for Estimating Remaining Lifetime of a Diesel Engine

Fanegan, Julius Bolude January 2007 (has links)
No description available.
59

Three Dimensional Face Recognition Using Two Dimensional Principal Component Analysis

Aljarrah, Inad A. 14 April 2006 (has links)
No description available.
60

Atmospheric circulation types associated with cause-specific daily mortality in the central United States

Coleman, Jill S. M. 10 August 2005 (has links)
No description available.
