71 |
Investigating the Beverage Patterns of Children and Youth with Obesity at the Time of Enrollment into Canadian Pediatric Weight Management Programs / Beverage Intake of Children and Youth with Obesity
Bradbury, Kelly January 2019 (has links)
Introduction: Beverages influence diet quality; however, beverage intake among youth with obesity is not well described in the literature. Dietary pattern analysis can identify how beverages cluster together and enables exploration of population characteristics.
Objectives: 1) Assess the proportion of children and youth with obesity who fail beverage thresholds (no sugar-sweetened beverages (SSB); <1 serving/week of SSB; ≥2 servings/day of milk) and identify factors influencing the likelihood of failing to meet these cut-offs. 2) Derive patterns of beverage intake and examine related social and behavioural factors and health outcomes at entry into Canadian pediatric weight management programs.
Methods: Beverage intake of youth (2–17 years) enrolled in the CANPWR study (n=1425) was reported at baseline visits from 2013–2017. Beverage thresholds identified weekly SSB consumers and approximated Canadian recommendations. The relationship of sociodemographic factors (income, guardian education, race, household status) and behaviours (eating habits, physical activity, screen time) to the likelihood of failing cut-offs was explored using multivariable logistic regression. Beverage patterns were derived using Principal Component Analysis. Associations with sociodemographic factors, behaviours and health outcomes (lipid profile, fasting glucose, HbA1c, liver enzymes) were evaluated with multiple linear regression.
Results: Nearly 80% of youth consumed ≥1 serving/week of SSB. This was more common among males and families with lower education, and was related to eating habits and higher screen time. Two-thirds failed to drink ≥2 servings of milk/day; these youth were more likely to be female and demonstrated favourable eating habits and lower screen time. Five beverage patterns were identified: 1) SSB, 2) 1% Milk, 3) 2% Milk, 4) Alternatives, 5) Sports Drinks/Flavoured Milks. Patterns were related to social and lifestyle determinants; the only related health outcome was HDL.
Conclusion: Many children and youth with obesity consumed SSB weekly; fewer drank milk twice daily. Beverage intake was predicted by sex, socioeconomic status and other behaviours; however, most beverage patterns were unrelated to health outcomes. / Thesis / Master of Science (MSc) / Beverage intake can influence diet and health outcomes in population-based studies. However, patterns of beverage consumption are not well described among youth with obesity. This study examined beverage intake and its relationships with sociodemographic information, behaviours and health outcomes among youth (2–17 years) at the time of entry into Canadian pediatric weight management programs (n=1425). In contrast to current recommendations, 80% of youth consumed ≥1 serving/week of sugar-sweetened beverages and 66% did not consume 2 servings/day of milk. Additionally, five distinct patterns of beverage intake were identified using dietary pattern analysis. Social factors (age, sex, socioeconomic status) and behaviours (screen time, eating habits) were related to the risk of failing to meet recommendations and to beverage patterns. Identifying the sociodemographic characteristics and behaviours of youth with obesity who fail to meet beverage intake thresholds or adhere to certain patterns of consumption may provide insight for clinicians to guide youth to improved health in weight management settings.
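A minimal sketch of the dietary pattern step described above: PCA applied to a standardized participant-by-beverage intake matrix, with component loadings read as beverage patterns. The data, beverage names and component count here are illustrative stand-ins, not the CANPWR data or code.

```python
# Hypothetical sketch of deriving beverage patterns with PCA, in the spirit
# of the dietary pattern analysis described above (not the CANPWR pipeline).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# intake: rows = participants, columns = servings/week of each beverage
rng = np.random.default_rng(0)
beverages = ["ssb", "milk_1pct", "milk_2pct", "plant_alternatives",
             "sports_drinks", "flavoured_milk", "juice", "water"]
intake = pd.DataFrame(rng.poisson(3, size=(500, len(beverages))),
                      columns=beverages)

# Standardize so high-frequency beverages do not dominate the components.
z = StandardScaler().fit_transform(intake)

pca = PCA(n_components=5).fit(z)

# Loadings show which beverages cluster together within each pattern;
# per-participant pattern scores can then be regressed on covariates.
loadings = pd.DataFrame(pca.components_.T, index=beverages,
                        columns=[f"pattern_{k + 1}" for k in range(5)])
print(loadings.round(2))
scores = pca.transform(z)   # per-participant pattern scores
```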
|
72 |
Macroeconomic Forecasting: Statistically Adequate, Temporal Principal Components
Dorazio, Brian Arthur 05 June 2023 (has links)
The main goal of this dissertation is to expand upon the use of Principal Component Analysis (PCA) in macroeconomic forecasting, particularly in cases where traditional principal components fail to account for all of the systematic information making up common macroeconomic and financial indicators. At the outset, PCA is viewed as a statistical model derived from the reparameterization of the Multivariate Normal model in Spanos (1986). To motivate a PCA forecasting framework prioritizing sound model assumptions, it is demonstrated through simulation experiments that model mis-specification erodes the reliability of inferences. The Vector Autoregressive (VAR) model at the center of these simulations allows for the Markov (temporal) dependence inherent in macroeconomic data and serves as the basis for extending conventional PCA. Stemming from the relationship between PCA and the VAR model, an operational out-of-sample forecasting methodology is prescribed incorporating statistically adequate, temporal principal components, i.e., principal components which capture not only Markov dependence but all of the other relevant information in the original series. The macroeconomic forecasts produced from applying this framework to several common macroeconomic indicators are shown to outperform standard benchmarks in terms of predictive accuracy over longer forecasting horizons. / Doctor of Philosophy / The landscape of macroeconomic forecasting and nowcasting has shifted drastically with the advent of big data. Armed with significant growth in computational power and data collection resources, economists have augmented their arsenal of statistical tools to include those which can produce reliable results in big-data environments. At the forefront of such tools is Principal Component Analysis (PCA), a method which reduces a large number of predictors to a few factors containing the majority of the variation making up the original data series. This dissertation expands upon the use of PCA in the forecasting of key macroeconomic indicators, particularly in instances where traditional principal components fail to account for all of the systematic information comprising the data. Ultimately, a forecasting methodology which incorporates temporal principal components, ones capable of capturing both time dependence and the other relevant information in the original series, is established. In the final analysis, the methodology is applied to several common macroeconomic and financial indicators. The forecasts produced using this framework are shown to outperform standard benchmarks in terms of predictive accuracy over longer forecasting horizons.
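As a rough illustration of PCA forecasting with temporal dependence, the sketch below augments each indicator with its own lags before extracting components, then forecasts from the component scores. This is a generic lag-augmented factor forecast in the spirit of the methodology above, not the dissertation's statistically adequate estimator; all names, lag choices and dimensions are invented.

```python
# Illustrative "temporal" principal-component forecast: stack lags so the
# extracted components can carry Markov dependence, then run a direct
# h-step regression on the component scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def lag_matrix(X, p):
    """Stack X with p lags of itself: [X_t, X_{t-1}, ..., X_{t-p}]."""
    T = X.shape[0]
    return np.hstack([X[p - j: T - j] for j in range(p + 1)])

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10)).cumsum(axis=0)   # fake random-walk indicators
y = X[:, 0]                                         # series to forecast

p, h, k = 2, 4, 3                                   # lags, horizon, factors
Z = lag_matrix(X, p)                                # (T - p) x (10 * (p + 1))
F = PCA(n_components=k).fit_transform(Z)            # temporal components

# Direct h-step-ahead forecast: regress y_{t+h} on the factors at time t.
model = LinearRegression().fit(F[:-h], y[p + h:])
forecast = model.predict(F[-1:])
```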
|
73 |
Facilitating Self-As-Context: A Treatment Component Study
Williams, Neville Farley 31 July 2015 (has links)
A crucial step in assessing the scientific basis of a psychotherapeutic intervention is examining the individual components of the treatment to determine if they are additive or important to treatment outcomes. The construct of self-as-context (S-A-C), a central process in the acceptance and commitment therapy (ACT) approach, has not yet been studied in a component analysis. A previous dismantling trial, however, has shown this process has an additive effect as part of an ACT package (Williams, 2006). The current study is a preliminary trial of feasibility and efficacy to determine a) the practicality of assessing S-A-C in isolation in a laboratory setting, and b) the impact of manipulating S-A-C on theoretically related variables, including theorized mechanisms of change in various clinical approaches. Sixty-eight participants (55 female, 13 male) were randomly assigned to receive either a brief S-A-C intervention employing a common therapeutic metaphor (the chessboard metaphor) or a control condition, which involved discussing a mildly positive topic with the researcher. Results from the main analyses showed no group-by-time interaction on measures assessing immediate impact on the construct, previously validated therapeutic mediation measures, or symptom measures. Several possible explanations for the failure to identify significant findings are discussed, including limitations of construct measurement. When analyses were repeated using only those participants whose scores were in the mild range or higher for stress, anxiety, or depression, time-by-condition interactions were significant for stress and approached significance for depression, with participants in the S-A-C group doing better than those in the control group, offering tentative support for the utility of this process among individuals with clinical difficulties. Implications for future studies are reported. / Ph. D.
|
74 |
A Statistical Examination of the Climatic Human Expert System, The Sunset Garden Zones for California
Logan, Ben 11 January 2008 (has links)
Twentieth-century climatology was dominated by two great figures: Wladimir Köppen and C. Warren Thornthwaite. The first carefully developed climatic parameters to match the larger world vegetation communities. The second developed complex formulas of "Moisture Factors" that provided an efficient understanding of how evapotranspiration influences plant growth and health, both for native and non-native communities.
In the latter half of the twentieth century, the Sunset Magazine Corporation developed a purely empirical set of Garden Zones, first for California, then for the thirteen states of the West, and now for the entire nation in the National Garden Maps. The Sunset Garden Zones are well recognized and respected in the Western states for illustrating the several factors of climate that distinguish zones. But the Sunset Garden Zones have never before been digitized and examined statistically to validate their demarcations.
This thesis examines the digitized zones with reference to PRISM climate data. Variable coverages resembling those described by Sunset are extracted from the PRISM data. These variable coverages are collected for two buffered areas, one in northern California and one in southern California. The coverages are exported from ArcGIS 9.1 to SAS®, where they are processed first through a Principal Component Analysis; the first five principal components are then entered into a Ward's hierarchical cluster analysis. The resulting clusters are translated back into ArcGIS as a raster coverage, where they represent climatic regions. This process is readily amenable to further examination of other regions of California.
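The pipeline in the preceding paragraph translates into a few lines of code. The thesis works in ArcGIS 9.1 and SAS; the sketch below re-expresses the same steps (standardize, take five principal components, Ward's hierarchical clustering) in Python, with an illustrative random array standing in for the PRISM-derived coverages.

```python
# Minimal sketch of the PCA -> Ward's clustering pipeline described above;
# array shapes and the zone count are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
climate = rng.standard_normal((1000, 12))   # grid cells x climate variables

# First five principal components, mirroring the thesis workflow.
pcs = PCA(n_components=5).fit_transform(
    StandardScaler().fit_transform(climate))

# Ward's hierarchical clustering on the component scores; each resulting
# cluster plays the role of a candidate climatic (garden) zone.
tree = linkage(pcs, method="ward")
zones = fcluster(tree, t=8, criterion="maxclust")   # cut into 8 zones
```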
|
75 |
Iterative issues of ICA, quality of separation and number of sources: a study for biosignal applications
Naik, Ganesh Ramachandra, ganesh.naik@rmit.edu.au January 2009 (has links)
This thesis has evaluated the use of Independent Component Analysis (ICA) on Surface Electromyography (sEMG), focusing on biosignal applications. This research has identified and addressed the following four issues related to the use of ICA for biosignals:
1) The iterative nature of ICA
2) The order and magnitude ambiguity problems of ICA
3) Estimation of the number of sources based on the dependency and independency nature of the signals
4) Source separation for non-quadratic ICA (undercomplete and overcomplete)
This research first establishes the applicability of ICA for sEMG and identifies the shortcomings related to order and magnitude ambiguity. It then develops a mitigation strategy for these issues by using a single unmixing matrix and a neural network weight matrix corresponding to the specific user. The research reports experimental verification of the technique and investigates the impact of inter-subject and inter-experimental variations. The results demonstrate that while using sEMG without separation gives only 60% accuracy and sEMG separated using traditional ICA gives 65%, this approach gives 99% accuracy for the same experimental data. Besides the marked improvement in accuracy, the other advantages of such a system are that it is suitable for real-time operation and is easy for a lay user to train.
The second part of this thesis reports research conducted to evaluate the use of ICA for the separation of bioelectric signals when the number of active sources may not be known. The work proposes using the value of the determinant of the global matrix, generated using sparse sub-band ICA, to identify the number of active sources. The results indicate that the technique successfully identifies the number of active muscles for complex hand gestures, supporting applications such as human-computer interfaces. This thesis has also developed a method of determining the number of independent sources in a given mixture and demonstrated that, using this information, it is possible to separate the signals in an undercomplete situation and reduce the redundancy in the data using standard ICA methods. Experimental verification demonstrated that the quality of separation using this method is better than that of other techniques such as Principal Component Analysis (PCA) and selective PCA. This has a number of applications, such as audio separation and sensor networks.
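A hedged sketch of the core separation step: FastICA applied to synthetic mixtures standing in for multi-channel sEMG, with the global (unmixing times mixing) matrix inspected afterwards, echoing the determinant-based source-count check described above. The sources, mixing matrix and channel count are fabricated for illustration.

```python
# Toy ICA separation for sEMG-like mixtures; signals are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 2000)
sources = np.c_[np.sin(40 * np.pi * t),            # surrogate source signals
                np.sign(np.sin(17 * np.pi * t)),
                rng.laplace(size=t.size)]
A = rng.standard_normal((3, 3))                    # unknown mixing matrix
observed = sources @ A.T                           # what the electrodes record

ica = FastICA(n_components=3, random_state=0)
estimated = ica.fit_transform(observed)            # separated sources

# Order/magnitude ambiguity: the global matrix (unmixing x mixing) should be
# a scaled permutation; a determinant well away from zero is one indicator
# that the assumed number of sources is supported by the data.
G = ica.components_ @ A
print("global matrix:\n", np.round(G, 2))
print("det(G) =", np.linalg.det(G))
```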
|
76 |
The application of multivariate statistical analysis and batch process control in industrial processes
Lin, Haisheng January 2010 (has links)
To manufacture safe, effective and affordable medicines with greater efficiency, process analytical technology (PAT) has been introduced by the Food and Drug Administration to encourage the pharmaceutical industry to develop and design well-understood processes. PAT requires chemical imaging techniques to be used to collect process variables for real-time process analysis. Multivariate statistical analysis tools and process control tools are important for implementing PAT in the development and manufacture of pharmaceuticals, as they enable information to be extracted from the PAT measurements. Multivariate statistical analysis methods such as principal component analysis (PCA) and independent component analysis (ICA) are applied in this thesis to extract information regarding a pharmaceutical tablet. ICA was found to outperform PCA and was able to identify the presence of five different materials and their spatial distribution around the tablet.
Another important area for PAT is in improving the control of processes. In the pharmaceutical industry, many processes operate in a batch strategy, which introduces difficult control challenges. Near-infrared (NIR) spectroscopy is a non-destructive analytical technique that has been used extensively to extract chemical and physical information from a product sample based on the scattering effect of light. In this thesis, NIR measurements were incorporated as feedback information into several control strategies. Although these controllers performed reasonably well, they could only regulate the NIR spectrum at a number of wavenumbers, rather than over the full spectrum.
In an attempt to regulate the entire NIR spectrum, a novel control algorithm was developed. This controller was found to be superior to the only comparable controller while regulating the NIR spectrum similarly well. The benefits of the proposed controller were demonstrated using a benchmark simulation of a batch reactor.
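As an illustration of the imaging analysis described in the first paragraph, the sketch below unfolds a hyperspectral tablet image into a pixels-by-wavelengths matrix, runs ICA, and folds each component back into a spatial distribution map. The cube dimensions and component count are assumptions, not values from the thesis.

```python
# Unfold-ICA-refold sketch for chemical imaging of a tablet; the data cube
# here is random filler standing in for a measured hyperspectral image.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
rows, cols, wavelengths = 64, 64, 120
cube = rng.random((rows, cols, wavelengths))     # stand-in image cube

X = cube.reshape(rows * cols, wavelengths)       # one spectrum per pixel
ica = FastICA(n_components=5, random_state=0)    # five candidate materials
scores = ica.fit_transform(X)                    # per-pixel abundances

# Each column of scores, reshaped, maps where one material sits in the tablet.
maps = [scores[:, k].reshape(rows, cols) for k in range(5)]
spectra = ica.mixing_.T                          # per-component spectral signatures
```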
|
77 |
Detection And Classification Of Buried Radioactive Materials
Wei, Wei 09 December 2011 (has links)
This dissertation develops new approaches for the detection and classification of buried radioactive materials. Different spectral transformation methods are proposed to effectively suppress noise and to better distinguish signal features in the transformed space. The contributions of this dissertation are detailed as follows.
1) Propose an unsupervised method for buried radioactive material detection. In the experiments, the original Reed-Xiaoli (RX) algorithm performs similarly to the gross count (GC) method; however, the constrained energy minimization (CEM) method performs better if using feature vectors selected from the RX output. Thus, an unsupervised method is developed by combining the RX and CEM methods, which can efficiently suppress the background noise when applied to dimensionality-reduced data from principal component analysis (PCA).
2) Propose an approach for buried target detection and classification which applies spectral transformation followed by noise-adjusted PCA (NAPCA). To meet the requirements of practical survey mapping, we focus on the circumstance where sensor dwell time is very short. The results show that spectral transformation can alleviate the effects of spectral noise variation and background clutter, while NAPCA, a better choice than PCA, can extract key features for the subsequent detection and classification.
3) Propose a particle swarm optimization (PSO)-based system to automatically determine the optimal partition for spectral transformation. Two PSOs are incorporated in the system, with the outer one responsible for selecting the optimal number of bins and the inner one for optimal bin widths. The experimental results demonstrate that using variable bin widths is better than a fixed bin width, and that PSO can provide better results than the traditional Powell's method.
4) Develop parallel implementation schemes for the PSO-based spectral partition algorithm. Both cluster and graphics processing unit (GPU) implementations are designed. The computational burden of the serial version has been greatly reduced. The experimental results also show that the GPU algorithm achieves speedup similar to the cluster-based algorithm.
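A minimal sketch of the building blocks behind contribution 1): PCA for dimensionality reduction followed by the RX statistic (Mahalanobis distance to the background) to flag anomalous spectra. The spectra here are synthetic stand-ins, and the combination with CEM and the spectral transformations are omitted.

```python
# PCA + Reed-Xiaoli (RX) anomaly scoring on stand-in gamma spectra.
import numpy as np
from sklearn.decomposition import PCA

def rx_scores(X):
    """RX statistic: squared Mahalanobis distance to the background mean."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

rng = np.random.default_rng(5)
spectra = rng.standard_normal((5000, 1024))       # stand-in measured spectra
spectra[:10] += 4.0                               # a few "buried source" samples

reduced = PCA(n_components=10).fit_transform(spectra)
scores = rx_scores(reduced)
detections = np.argsort(scores)[-10:]             # highest-RX candidates
```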
|
78 |
Feature Extraction using Dimensionality Reduction Techniques: Capturing the Human Perspective
Coleman, Ashley B. January 2015 (has links)
No description available.
|
79 |
Utilização de análise de componentes principais em séries temporais / Use of principal component analysis in time series
Teixeira, Sérgio Coichev 12 April 2013 (has links)
One of the main objectives of principal component analysis (PCA) is to reduce a set of observed variables to a smaller number of uncorrelated variables, called principal components, giving the researcher a means of understanding the variability and correlation structure of the observed data with fewer variables. The technique is very simple and widely used in studies across many fields. In its construction, the linear relationship between the observed variables is measured through the covariance or correlation matrix. However, the covariance and correlation matrices may fail to capture important information when the data are sequentially correlated in time (autocorrelated), discarding a part of the data that matters for interpreting the components. This work studies a form of principal component analysis that makes the autocorrelation structure of the observed data interpretable. To this end, it explores principal component analysis in the frequency domain, which for autocorrelated data yields more specific and detailed results than classical PCA. In the SSA (Singular Spectrum Analysis) and MSSA (Multichannel Singular Spectrum Analysis) methods, principal component analysis is based on the correlation both over time and between the different observed variables. These techniques are widely used with atmospheric data to identify patterns such as trend and periodicity.
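A compact sketch of basic single-channel SSA as referenced above: embed the series in a trajectory (Hankel) matrix, take its SVD, and reconstruct the leading components by diagonal averaging, which is how trend and periodicity are typically extracted. The window length and toy series are illustrative choices.

```python
# Basic SSA: trajectory matrix -> SVD -> diagonal-averaged components.
import numpy as np

def ssa_components(x, L, n_components):
    """Reconstruct series components from the leading SSA eigentriples."""
    N = len(x)
    K = N - L + 1
    traj = np.column_stack([x[i:i + L] for i in range(K)])   # L x K Hankel
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    series = []
    for k in range(n_components):
        Xk = s[k] * np.outer(U[:, k], Vt[k])                 # rank-1 piece
        # Diagonal averaging (Hankelization): anti-diagonal i of Xk holds
        # every term contributing to time point i of the component.
        comp = np.array([Xk[::-1].diagonal(i - (L - 1)).mean()
                         for i in range(N)])
        series.append(comp)
    return series

rng = np.random.default_rng(9)
t = np.arange(240)
x = 0.05 * t + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)
components = ssa_components(x, L=60, n_components=3)         # trend + cycle
```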
|