51 |
Non-destructive Analysis Of Trace Textile Fiber Evidence Via Room-temperature Fluorescence SpectroscopyAppalaneni, Krishnaveni 01 January 2013 (has links)
Forensic fiber evidence plays an important role in many criminal investigations. Non-destructive techniques that can either discriminate between similar fibers or match a known to a questioned fiber - and still preserve the physical integrity of the fibers for further court examination - are highly valuable in forensic science. When fibers cannot be discriminated by non-destructive tests, the next reasonable step is to extract the questioned and known fibers for dye analysis with a more selective technique such as high-performance liquid chromatography (HPLC) and/or gas chromatography-mass spectrometry (GC-MS). The common denominator among chromatographic techniques is that they focus primarily on the dyes used to color the fibers and do not investigate other potentially discriminating components present on the fiber. Differentiating among commercial dyes with very similar chromatographic behaviors and almost identical absorption spectra and/or fragmentation patterns is a challenging task. This dissertation explores a different aspect of fiber analysis as it focuses on the total fluorescence emission of fibers. In addition to the contribution of the textile dye (or dyes) to the fluorescence spectrum of the fiber, we investigate the contribution of intrinsic fluorescent impurities - i.e., impurities embedded in the fibers during fabrication of garments - as a reproducible source of fiber comparison. Differentiation of visually indistinguishable fibers is achieved by comparing excitation-emission matrices (EEMs) recorded from single textile fibers with the aid of a commercial spectrofluorimeter coupled to an epi-fluorescence microscope. Statistical data comparison was carried out via principal component analysis. An application of this statistical approach is demonstrated using challenging dyes with similarities both in two-dimensional absorbance spectra and in three-dimensional EEM data. High accuracy of fiber identification was observed in all cases and no false positive identifications were observed at the 99% confidence level.
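As a rough illustration of the statistical comparison step - not the dissertation's actual pipeline - the sketch below unfolds a set of single-fiber EEMs into vectors and projects them onto a few principal components; the array shapes, wavelength grid, and component count are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: one excitation-emission matrix (EEM) per fiber, recorded on
# a common excitation x emission wavelength grid. Shapes are placeholders.
eems = np.random.rand(40, 50, 120)        # (n_fibers, n_excitation, n_emission)

# Unfold each EEM into a single vector so that fibers become rows of a data matrix.
X = eems.reshape(eems.shape[0], -1)

# Project the fibers onto a few principal components; fibers that look identical
# but differ in dye or impurity fluorescence should separate in this score space.
pca = PCA(n_components=3)
scores = pca.fit_transform(X)
print(scores.shape)                        # (40, 3)
print(pca.explained_variance_ratio_)
```

Grouping of the resulting scores for questioned versus known fibers would then stand in for the comparison the abstract describes.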
|
52 |
De-Mixing Decision Representations in Rodent dmPFC to Investigate Strategy Change During Delay DiscountingWhite, Shelby M. 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Several pathological disorders are characterized by maladaptive decision-making (Dalley & Robbins, 2017). Decision-making tasks, such as Delay Discounting (DD), are used to assess the behavioral manifestations of maladaptive decision-making in both clinical and preclinical settings (de Wit, Flory, Acheson, Mccloskey, & Manuck, 2007). DD measures cognitive impulsivity and broadly refers to the inability to delay gratification (Hamilton et al., 2015). How decisions are made in tasks that measure DD can be understood by assessing patterns of behavior that are observable in the sequences of choices or the statistics that accompany each choice (e.g. response latency). These measures have led to insights into the strategies used by the agent to facilitate the decision (Linsenbardt, Smoker, Janetsian-Fritz, & Lapish, 2016).
The current set of analyses aims to use individual trial data to identify the neural underpinnings associated with strategy transition during DD. A greater understanding of how strategy change occurs at a neural level will be useful for developing cognitive and behavioral strategies aimed at reducing impulsive choice. The rat dorso-medial prefrontal cortex (dmPFC) has been implicated as an important brain region for recognizing the need to change strategy during DD (Powell & Redish, 2016).
Using advanced statistical techniques, such as demixed principal component analysis (dPCA), we can then begin to understand how decision representations evolve over the decision-making process to impact behaviors such as strategy change. This study was the first known attempt to apply dPCA to individual sessions to accurately model how decision representations evolve across individual trials. Evidence exists that representations follow a breakdown and remapping at the individual trial level (Karlsson, Tervo, & Karpova, 2012; Powell & Redish, 2016). Furthermore, these representational changes across individual trials have previously been proposed to act as a signal to change strategies (Powell & Redish, 2016). This study aimed to test the hypothesis that a ‘breakdown’ followed by a ‘remapping’ of the decision representation would act as a signal to change strategy that is observable in the behavior of the animal.
To investigate the relationship between trials surrounding the breakdown and/or subsequent remapping of the decision representation and trials surrounding strategy changes, sequences of trials surrounding the breakdown and/or remapping were compared to sequences of
trials surrounding the strategy-change trial. Strategy types consisted of either exploiting the immediate lever (IM-Exploit), exploiting the delay lever (DEL-Exploit), or exploring between the two lever options (Explore). Contrary to the hypothesis, breakdown and remapping trial sequences were not associated with change-trial sequences overall. In partial support of the hypothesis, however, at the 4-sec delay, when the subjective value of the immediate reward was high, a relationship between breakdown sequences and strategy-change sequences was detected when the animal was exploiting the delay lever (i.e., the DEL-Exploit strategy). This result suggests that a breakdown in the decision representation may act as a signal to prompt strategy change in certain contexts.
One notable finding of this study was that the decision representation was much more robust at the 4-sec delay than at the 8-sec delay, suggesting that decisions at the 4-sec delay contain more context that differentiates the two choice options (immediate or delay). In other words, the encoding of the two choice options was more dissociable at the 4-sec delay than at the 8-sec delay, which was quantified by measuring the average distance between the two representations (immediate and delay) on a given trial. Given that Wistar rats are equally likely to choose between the immediate and delay alternatives at the 8-sec delay (Linsenbardt et al., 2016), this finding provides further support for current prevalent theories of how animals use a cognitive search process to mentally imagine choice alternatives during deliberation. If the context differentiating the choice options at the 8-sec delay is less dissociable, the cognitive search process would be about equally likely to find either choice option. If the choice options are equally likely to be found, the choice alternatives would also be expected to be chosen equally often, which is what has been observed in Wistar rats at the 8-sec delay.
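As a minimal sketch of the separability measure described above - not the study's actual analysis - the snippet below computes the average distance between two low-dimensional representations on a single trial; the trajectories, their dimensionality, and the projection step are all assumed.

```python
import numpy as np

# Hypothetical low-dimensional trajectories of dmPFC population activity on one
# trial, e.g. after projection onto decision-related (d)PCA components.
rng = np.random.default_rng(0)
imm_traj = rng.normal(0.0, 1.0, size=(200, 3))   # "immediate" representation
del_traj = rng.normal(0.5, 1.0, size=(200, 3))   # "delay" representation

# Separability proxy: the average Euclidean distance between the two
# representations across the trial. Larger values suggest the two choice options
# are encoded more distinctly, as reported for the 4-sec delay.
dist = np.linalg.norm(imm_traj - del_traj, axis=1).mean()
print(f"mean representational distance: {dist:.2f}")
```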
|
53 |
Limitations of Principal Component Analysis for Dimensionality-Reduction for Classification of Hyperspectral DataCheriyadat, Anil Meerasa 13 December 2003 (has links)
It is a popular practice in the remote-sensing community to apply principal component analysis (PCA) on a higher-dimensional feature space to achieve dimensionality-reduction. Several factors that have led to the popularity of PCA include its simplicity, ease of use, availability as part of popular remote-sensing packages, and optimality in terms of mean square error. These advantages have prompted the remote-sensing research community to overlook many limitations of PCA when used as a dimensionality-reduction tool for classification and target-detection applications. This thesis addresses the limitations of PCA when used as a dimensionality-reduction technique for extracting discriminating features from hyperspectral data. Theoretical and experimental analyses are presented to demonstrate that PCA is not necessarily an appropriate feature-extraction method for high-dimensional data when the objective is classification or target-recognition. The influence of certain data-distribution characteristics, such as within-class covariance, between-class covariance, and correlation, on the PCA transformation is analyzed in this thesis. The classification accuracies obtained using PCA features are compared to accuracies obtained using other feature-extraction methods, such as variants of the Karhunen-Loève transform and greedy search algorithms in the spectral and wavelet domains. Experimental analyses are conducted for both two-class and multi-class cases. The classification accuracies obtained from higher-order PCA components are compared to the classification accuracies of features extracted from different regions of the spectrum. The comparative study of the classification accuracies obtained using the above feature-extraction methods ascertains that PCA may not be an appropriate tool for dimensionality-reduction of certain hyperspectral data-distributions when the objective is classification or target-recognition.
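The core argument - that directions of maximum total variance need not be directions of class separation - can be illustrated with a small hedged sketch; the synthetic data, the component counts, and the choice of Fisher's LDA as the supervised comparison are assumptions, not the thesis's actual experiments.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Placeholder two-class "hyperspectral" data: rows are pixels, columns are bands.
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 200))
y = rng.integers(0, 2, size=600)

# PCA keeps the directions of maximum total variance, which need not be the
# directions that best separate the classes.
pca_clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier())

# A supervised projection (here Fisher's LDA) uses the class labels directly.
lda_clf = make_pipeline(LinearDiscriminantAnalysis(), KNeighborsClassifier())

print("PCA features:", cross_val_score(pca_clf, X, y, cv=5).mean())
print("LDA features:", cross_val_score(lda_clf, X, y, cv=5).mean())
```

With this random placeholder data both pipelines sit near chance; the point is only the shape of the comparison, with real hyperspectral cubes substituted for X and y.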
|
54 |
A FUZZY MODEL FOR ESTIMATING REMAINING LIFETIME OF A DIESEL ENGINEFANEGAN, JULIUS BOLUDE January 2007 (has links)
No description available.
|
55 |
Three Dimensional Face Recognition Using Two Dimensional Principal Component AnalysisAljarrah, Inad A. 14 April 2006 (has links)
No description available.
|
56 |
Atmospheric circulation types associated with cause-specific daily mortality in the central United StatesColeman, Jill S. M. 10 August 2005 (has links)
No description available.
|
57 |
The Effects of Agriculture on Canada's Major WatershedsRamunno, Daniel 10 1900 (has links)
Water contamination is one of the major environmental issues that negatively impacts the water quality of watersheds. It affects drinking water and aquatic wildlife, which can indirectly harm everyone's health. Many different institutions collected samples of water from four of Canada's major watersheds and counted the number of bacteria in each sample. The data used in this paper were taken from one of these institutions and were analysed to investigate whether agricultural waste impacts the water quality of these four watersheds. It was found that the agricultural waste produced by nearby farms significantly impacts the water quality of three of these watersheds. Principal component analysis was also performed on these data, and it was found that all of the data can be expressed in terms of one variable without losing much of the information in the data. The bootstrap distributions of the principal component analysis parameters were estimated, and it was found that the sampling distributions of these parameters are stable. There was also evidence that the variables in the data are not normally distributed and that not all of the variables are independent. / Master of Science (MSc)
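A minimal sketch of the bootstrap step described above, under assumed data: rows of a placeholder water-quality matrix are resampled with replacement and PCA is refit each time, giving an empirical sampling distribution for the first-component loadings.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder water-quality matrix: rows are samples, columns are measured
# variables (e.g. bacterial counts at several sites). Values are synthetic.
rng = np.random.default_rng(42)
X = rng.lognormal(mean=2.0, sigma=0.5, size=(150, 5))

def first_pc_loadings(data):
    """Loadings of the first principal component, with a consistent sign."""
    v = PCA(n_components=1).fit(data).components_[0]
    return v if v.sum() >= 0 else -v      # undo PCA's arbitrary sign flip

# Nonparametric bootstrap: resample rows with replacement and refit PCA.
boot = np.array([
    first_pc_loadings(X[rng.integers(0, len(X), size=len(X))])
    for _ in range(1000)
])
print("loading means:    ", boot.mean(axis=0))
print("loading std devs: ", boot.std(axis=0))
```

Tight spreads across resamples would correspond to the "stable" sampling distributions the abstract reports.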
|
58 |
Investigating the Beverage Patterns of Children and Youth with Obesity at the Time of Enrollment into Canadian Pediatric Weight Management Programs / Beverage Intake of Children and Youth with ObesityBradbury, Kelly January 2019 (has links)
Introduction: Beverages influence diet quality; however, beverage intake among youth with obesity is not well described in the literature. Dietary pattern analysis can identify how beverages cluster together and enable exploration of population characteristics.
Objectives: 1) Assess the frequency of children and youth with obesity who fail to meet beverage thresholds (no sugar-sweetened beverages (SSB), <1 serving/week of SSB, ≥2 servings/day of milk) and the factors influencing the likelihood of failing these cut-offs. 2) Derive patterns of beverage intake and examine related social and behavioural factors and health outcomes at entry into Canadian pediatric weight management programs.
Methods: Beverage intake of youth (2–17 years) enrolled in the CANPWR study (n=1425) was reported at baseline visits from 2013-2017. Beverage thresholds identified weekly SSB consumers and approximated Canadian recommendations. The relationship of sociodemographic factors (income, guardian education, race, household status) and behaviours (eating habits, physical activity, screen time) to the likelihood of failing the cut-offs was explored using multivariable logistic regression. Beverage patterns were derived using Principal Component Analysis. Related sociodemographic and behavioural factors and health outcomes (lipid profile, fasting glucose, HbA1c, liver enzymes) were evaluated with multiple linear regression.
Results: Nearly 80% of youth consumed ≥1 serving/week of SSB. This was more common in males and lower-educated families and was related to eating habits and higher screen time. Two-thirds failed to drink ≥2 servings of milk/day; these youth were more likely to be female and demonstrated favourable eating habits and lower screen time. Five beverage patterns were identified: 1) SSB, 2) 1% Milk, 3) 2% Milk, 4) Alternatives, 5) Sports Drinks/Flavoured Milks. Patterns were related to social and lifestyle determinants; the only related health outcome was HDL.
Conclusion: Many children and youth with obesity consumed SSB weekly. Fewer drank milk twice daily. Beverage intake was predicted by sex, socioeconomic status and other behaviours; however, most beverage patterns were unrelated to health outcomes. / Thesis / Master of Science (MSc) / Beverage intake can influence diet and health outcomes in population-based studies. However, patterns of beverage consumption are not well described among youth with obesity. This study examined beverage intake and its relationships with sociodemographic information, behaviours and health outcomes among youth (2-17 years) at the time of entry into Canadian pediatric weight management programs (n=1425). In contrast to current recommendations, 80% of youth consumed ≥1 serving/week of sugar-sweetened beverages and 66% consumed fewer than 2 servings/day of milk. Additionally, five distinct patterns of beverage intake were identified using dietary pattern analysis. Social factors (age, sex, socioeconomic status) and behaviours (screen time, eating habits) were related to the risk of failing to meet recommendations and to beverage patterns. Identifying the sociodemographic characteristics and behaviours of youth with obesity who fail to meet beverage intake thresholds and adhere to certain patterns of consumption may provide insight for clinicians to guide youth to improved health in weight management settings.
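A hedged sketch of the pattern-derivation and outcome-modelling steps described in the Methods; the column names, the fake frequency data, and the use of HDL as the outcome are illustrative assumptions, not the CANPWR variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative beverage-frequency table (servings/week); data and column names
# are made up, loosely echoing the pattern names reported in the abstract.
rng = np.random.default_rng(7)
bev = pd.DataFrame(
    rng.poisson(lam=3, size=(300, 4)).astype(float),
    columns=["ssb", "milk_1pct", "milk_2pct", "sports_drinks"],
)

# Derive beverage patterns: standardize intakes, then keep PCA component scores.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(bev))

# Relate pattern scores to a health outcome (here a synthetic HDL variable)
# with multiple linear regression.
hdl = rng.normal(1.2, 0.2, size=300)
model = sm.OLS(hdl, sm.add_constant(scores)).fit()
print(model.params)
```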
|
59 |
Macroeconomic Forecasting: Statistically Adequate, Temporal Principal ComponentsDorazio, Brian Arthur 05 June 2023 (has links)
The main goal of this dissertation is to expand upon the use of Principal Component Analysis (PCA) in macroeconomic forecasting, particularly in cases where traditional principal components fail to account for all of the systematic information making up common macroeconomic and financial indicators. At the outset, PCA is viewed as a statistical model derived from the reparameterization of the Multivariate Normal model in Spanos (1986). To motivate a PCA forecasting framework prioritizing sound model assumptions, it is demonstrated, through simulation experiments, that model mis-specification erodes the reliability of inferences. The Vector Autoregressive (VAR) model at the center of these simulations allows for the Markov (temporal) dependence inherent in macroeconomic data and serves as the basis for extending conventional PCA. Stemming from the relationship between PCA and the VAR model, an operational out-of-sample forecasting methodology is prescribed incorporating statistically adequate, temporal principal components, i.e. principal components which capture not only Markov dependence but all of the other relevant information in the original series. The macroeconomic forecasts produced from applying this framework to several common macroeconomic indicators are shown to outperform standard benchmarks in terms of predictive accuracy over longer forecasting horizons. / Doctor of Philosophy / The landscape of macroeconomic forecasting and nowcasting has shifted drastically with the advent of big data. Armed with significant growth in computational power and data collection resources, economists have augmented their arsenal of statistical tools to include those which can produce reliable results in big data environments. At the forefront of such tools is Principal Component Analysis (PCA), a method which reduces the number of predictors to a few factors containing the majority of the variation making up the original data series. This dissertation expands upon the use of PCA in the forecasting of key macroeconomic indicators, particularly in instances where traditional principal components fail to account for all of the systematic information comprising the data. Ultimately, a forecasting methodology which incorporates temporal principal components, ones capable of capturing both time dependence and the other relevant information in the original series, is established. In the final analysis, the methodology is applied to several common macroeconomic and financial indicators. The forecasts produced using this framework are shown to outperform standard benchmarks in terms of predictive accuracy over longer forecasting horizons.
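The sketch below is only a crude stand-in for the idea of pairing principal components with temporal (Markov) dependence: it augments ordinary PCA factors with a lag in a direct forecasting regression, which is not the dissertation's estimator; the panel, the horizon, and the lag order are all assumed.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Hypothetical panel of macroeconomic indicators (T periods x N series).
rng = np.random.default_rng(3)
T, N, h = 240, 30, 12                      # sample length, number of series, horizon
panel = rng.normal(size=(T, N)).cumsum(axis=0)
target = panel[:, 0]                       # the series to be forecast

factors = PCA(n_components=3).fit_transform(panel)   # static PCs, one row per period

# Design matrix at time t: current factors plus their first lag, so the predictor
# set carries time dependence as well as cross-sectional variation.
X = np.hstack([factors[1:T - h], factors[:T - h - 1]])   # rows t = 1 .. T-h-1
y = target[1 + h:]                                        # target at t + h

model = LinearRegression().fit(X, y)
latest = np.hstack([factors[-1], factors[-2]]).reshape(1, -1)
print("h-step-ahead forecast:", model.predict(latest)[0])
```

The dissertation itself derives the temporal components from the PCA-VAR relationship rather than bolting lags onto ordinary PCA scores, so this is illustration only.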
|
60 |
A Statistical Examination of the Climatic Human Expert System, The Sunset Garden Zones for CaliforniaLogan, Ben 11 January 2008 (has links)
Twentieth Century climatology was dominated by two great figures: Wladimir Köppen and C. Warren Thornthwaite. The first carefully developed climatic parameters to match the major world vegetation communities. The second developed complex formulas of "Moisture Factors" that provided an efficient understanding of how evapotranspiration influences plant growth and health, both for native and non-native communities.
In the latter half of the Twentieth Century, the Sunset Magazine Corporation developed a purely empirical set of Garden Zones, first for California, then for the thirteen states of the West, and now for the entire nation in the National Garden Maps. The Sunset Garden Zones are well recognized and respected in the Western States for illustrating the several factors of climate that distinguish zones. But the Sunset Garden Zones have never before been digitized and examined statistically for validation of their demarcations.
This thesis examines the digitized zones with reference to PRISM climate data. Variable coverages resembling those described by Sunset are extracted from the PRISM data. These variable coverages are collected for two buffered areas, one in northern California and one in southern California. The coverages are exported from ArcGIS 9.1 to SAS®, where they are processed first through a Principal Component Analysis, and then the first five principal components are entered into a Ward's Hierarchical Cluster Analysis. The resulting clusters are translated back into ArcGIS as a raster coverage, in which the clusters represent climatic regions. This process is readily extensible to the examination of other regions of California. / Master of Science
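A minimal sketch of the PCA-then-Ward's-clustering step, with placeholder data standing in for the PRISM-derived variable coverages; the grid size, variable count, and the cut at eight clusters are assumptions, and the thesis runs this step in SAS rather than Python.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical grid of climate variables extracted for buffered map cells
# (e.g. temperature and precipitation summaries); values are placeholders.
rng = np.random.default_rng(11)
cells = rng.normal(size=(2000, 12))        # rows: map cells, cols: climate variables

# Step 1: PCA on the standardized variables, keeping the first five components
# as in the thesis workflow.
pcs = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(cells))

# Step 2: Ward's hierarchical clustering of the component scores; the cluster
# labels could then be written back to a raster as candidate climatic regions.
Z = linkage(pcs, method="ward")
labels = fcluster(Z, t=8, criterion="maxclust")   # cut the tree into 8 clusters (assumed)
print(np.bincount(labels)[1:])                    # cells per climatic region
```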
|