  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Scalable Estimation and Testing for Complex, High-Dimensional Data

Lu, Ruijin 22 August 2019 (has links)
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, etc. These data provide a rich source of information on disease development, cell evolution, engineering systems, and many other scientific phenomena. To achieve a clearer understanding of the underlying mechanisms, one needs fast and reliable analytical approaches to extract useful information from this wealth of data. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex data, powerful testing of functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a wavelet-based approximate Bayesian computation approach that is likelihood-free and computationally scalable. This approach is applied to two problems: estimating mutation rates of a generalized birth-death process from fluctuation-experiment data and estimating target parameters from foliage echoes. The second part focuses on functional testing. We consider multiple testing in basis space via p-value-guided compression. Our theoretical results demonstrate that, under regularity conditions, the Westfall-Young randomization test in basis space achieves strong control of the family-wise error rate and asymptotic optimality. Furthermore, appropriate compression in basis space leads to improved power compared to point-wise testing in the data domain or basis-space testing without compression.
The effectiveness of the proposed procedure is demonstrated through two applications: the detection of regions of spectral curves associated with pre-cancer using 1-dimensional fluorescence spectroscopy data and the detection of disease-related regions using 3-dimensional Alzheimer's Disease neuroimaging data. The third part focuses on analyzing data measured on the cortical surfaces of monkeys' brains during their early development, where subjects are measured at misaligned time markers. In this analysis, we examine asymmetric patterns and increasing or decreasing trends in the monkeys' brains across time. / Doctor of Philosophy / With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, and biological measurements. These data provide a rich source of information on disease development, engineering systems, and many other scientific phenomena. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex biological and engineering data, powerful testing of high-dimensional functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a computation-based statistical approach that achieves efficient parameter estimation scalable to high-dimensional functional data. The second part focuses on developing a powerful testing method for functional data that can be used to detect important regions. We show that this approach enjoys desirable statistical properties.
The effectiveness of this testing approach is demonstrated using two applications: the detection of regions of the spectrum related to pre-cancer using fluorescence spectroscopy data and the detection of disease-related regions using brain image data. The third part focuses on analyzing brain cortical thickness data, measured on the cortical surfaces of monkeys' brains during early development, where subjects are measured at misaligned time markers. Using functional data estimation and testing approaches, we are able to: (1) identify asymmetric regions between the right and left brains across time, and (2) identify spatial regions on the cortical surface that reflect increases or decreases in cortical measurements over time.
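The approximate Bayesian computation (ABC) approach of the first part is likelihood-free: parameters are drawn from a prior, data are forward-simulated, and a draw is kept only when simulated summaries land close to the observed ones. A minimal rejection-ABC sketch, using a toy event-counting process and the sample mean as the summary statistic (the dissertation's wavelet-based summaries and applications are far richer; all names, data, and parameter values here are illustrative):

```python
import random
import statistics

def simulate_counts(rate, n, rng):
    # Forward-simulate n event counts over a unit time window using
    # exponential waiting times; stands in for a model whose likelihood
    # is awkward to write down in closed form.
    counts = []
    for _ in range(n):
        t, k = 0.0, 0
        while True:
            t += rng.expovariate(rate)
            if t > 1.0:
                break
            k += 1
        counts.append(k)
    return counts

def abc_rejection(observed, prior_draw, n_sims, tol, seed=0):
    """Keep prior draws whose simulated summary statistic (here the
    sample mean) falls within `tol` of the observed summary."""
    rng = random.Random(seed)
    obs_mean = statistics.mean(observed)
    accepted = []
    for _ in range(n_sims):
        theta = prior_draw(rng)
        sim = simulate_counts(theta, len(observed), rng)
        if abs(statistics.mean(sim) - obs_mean) <= tol:
            accepted.append(theta)
    return accepted

# "Observed" data generated at a known rate of 3.0, then recovered blindly.
rng = random.Random(42)
observed = simulate_counts(3.0, 200, rng)
posterior = abc_rejection(observed, lambda r: r.uniform(0.5, 8.0),
                          n_sims=2000, tol=0.2)
estimate = statistics.mean(posterior)
```

Shrinking `tol` tightens the approximate posterior at the cost of rejecting more draws; the wavelet step in the dissertation plays the role of choosing informative, low-dimensional summaries for functional data.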
32

Gait Variability for Predicting Individual Performance in Military-Relevant Tasks

Ulman, Sophia Marie 03 October 2019 (has links)
Human movement is inherently complex, requiring the control and coordination of many neurophysiological and biomechanical degrees of freedom, and the extent to which individuals exhibit variation in their movement patterns is captured by the construct of motor variability (MV). MV is increasingly used to describe movement quality and function among clinical populations and elderly individuals. However, current evidence presents conflicting views on whether increased MV benefits or hinders performance. To better understand the utility of MV for performance prediction, we focused on current research needs in the military domain. Dismounted soldiers, in particular, are expected to perform at a high level in complex environments and under demanding physical conditions. Hence, it is critical to understand what strategies allow soldiers to better adapt to fatigue and diverse environmental factors, and to develop predictive tools for estimating changes in soldier performance. Different aspects of performance, such as motor learning, experience, and adaptability to fatigue, were investigated as soldiers performed various gait tasks, and gait variability (GV) was quantified using four types of measures (spatiotemporal, joint kinematics, detrended fluctuation analysis, and Lyapunov exponents). During a novel obstacle course task, we found that frontal-plane coordination variability of the hip-knee and knee-ankle joint couples exhibited a strong association with the rate of learning the novel task, explaining 62% of the variance, and that higher joint kinematic variability during the swing phase of baseline gait was associated with a faster learning rate.
In a load carriage task, GV measures were more sensitive than average gait measures in discriminating between experience and load condition: experienced cadets exhibited reduced GV (in spatiotemporal measures and joint kinematics) and lower long-term local dynamic stability at the ankle compared to the novice group. In the final study, which investigated multiple measures of obstacle performance and variables predictive of changes in performance following intense whole-body fatigue, joint kinematic variability of baseline gait explained 28-59% of the variance in individual performance changes. In summary, these results support the feasibility of anticipating and augmenting task performance based on individual motor variability. This work also provides guidelines for future research and the development of training programs specifically for improving military training, performance prediction, and performance enhancement. / Doctor of Philosophy / All people move with some level of inherent variability, even when doing the same activity, and the extent to which individuals exhibit variation in their movement patterns is captured by the construct of motor variability (MV). MV is increasingly used to describe movement quality and function among clinical populations and elderly individuals. However, it is still unclear whether increased MV benefits or hinders performance. To better understand the utility of MV for performance prediction, we focused on current research needs in the military domain. Dismounted soldiers, in particular, are expected to perform at a high level in complex environments and under demanding physical conditions. Hence, it is critical to understand what strategies allow soldiers to better adapt to fatigue and diverse environmental factors, and to develop tools that might predict changes in soldier performance.
Different aspects of performance were investigated, including learning a new activity, experience, and adaptability to fatigue, and gait variability was quantified through different approaches. When examining how individuals learn a novel obstacle course task, we found that certain aspects of gait variability had strong associations with learning rate. In a load carriage task, variability measures were more sensitive to differences in experience level and load condition than typical average measures of gait. Specifically, variability increased with load, and the experienced group was less variable overall and more stable in the long term. Lastly, a subset of gait variability measures was associated with individual differences in fatigue-related changes in performance during an obstacle course. In summary, the results presented here suggest that it may be possible to both anticipate and enhance task performance based on individual variability. This work also provides guidelines for future research and the development of training programs specifically for improving military training, performance prediction, and performance enhancement.
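Of the four families of gait-variability measures named above, the spatiotemporal ones are the simplest to compute: for example, the coefficient of variation (CV) of stride times. A small sketch with made-up stride-time series (the numbers are illustrative, not study data):

```python
import statistics

def stride_time_cv(stride_times):
    """Coefficient of variation (%) of stride times, a common
    spatiotemporal gait-variability measure."""
    mean = statistics.mean(stride_times)
    sd = statistics.stdev(stride_times)
    return 100.0 * sd / mean

# Hypothetical stride-time series (seconds) for a steady and a variable gait.
steady = [1.02, 1.00, 1.01, 0.99, 1.00, 1.01, 1.00, 0.98]
variable = [1.10, 0.92, 1.05, 0.88, 1.15, 0.95, 1.08, 0.90]

cv_steady = stride_time_cv(steady)      # ~1.2%
cv_variable = stride_time_cv(variable)  # ~10%
```

The other measure families (joint-kinematics variability, detrended fluctuation analysis, Lyapunov exponents) capture coordination, long-range correlation, and local dynamic stability, respectively, and require full kinematic time series rather than a single scalar per stride.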
33

Modelo matemático para estudo da variabilidade da frequência cardíaca / A mathematical model for studying heart rate variability

Evaristo, Ronaldo Mendes 08 December 2017 (has links)
In recent years, the increasing incidence of cardiovascular disease in the world population has motivated the scientific community to seek new techniques and technological innovations to complement existing methods for assessing heart performance. Among them, the analysis of heart rate variability (HRV) via the electrocardiogram (ECG) stands out: an important non-invasive method for detecting the mild and moderate pathologies that are increasingly frequent in humans, such as coronary disease, arrhythmias, bradycardia, and tachycardia, as well as disturbances in the relationship between the sympathetic and parasympathetic nervous systems. This work introduces an innovation in a mathematical model, based on modulated Gaussian exponentials, used to reproduce the morphology of the human ECG: tachograms generated by an autoregressive (AR) stochastic process are introduced prior to the integration of the model's differential equations, allowing the model to reproduce HRV with greater fidelity when compared with experimental data from healthy adults and from adults with coronary artery disease (CAD). To validate the model, the simulated results are compared with experimental data via the discrete wavelet transform (DWT) power spectrum, Poincaré plots, and detrended fluctuation analysis (DFA). We verified that CAD does not alter the morphology of the ECG at rest, but it significantly influences HRV, and the proposed mathematical model captures and reproduces this behavior.
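The tachogram innovation described above feeds an autoregressive series of RR intervals into the ECG model before its differential equations are integrated. A minimal sketch of generating such a tachogram with a first-order AR process (the order, coefficients, and noise scale here are illustrative; the thesis fits the AR process to real HRV data):

```python
import random

def ar1_tachogram(n, mean_rr=0.8, phi=0.9, sigma=0.02, seed=1):
    """Generate n RR intervals (seconds) from an AR(1) process around
    mean_rr; phi sets short-term correlation, sigma the innovation scale."""
    rng = random.Random(seed)
    x = 0.0
    rr = []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        rr.append(mean_rr + x)
    return rr

rr = ar1_tachogram(500)
mean_rr = sum(rr) / len(rr)  # close to 0.8 s
```

In the model, each generated RR interval sets the duration of one simulated heartbeat, so the correlation structure of the AR process (rather than a fixed period) is what produces realistic beat-to-beat variability in the synthetic ECG.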
34

Análise de agrupamentos dos dados de DFA oriundos de perfis elétricos de indução de poços de petróleo / Clustering analysis of DFA data from electric induction logs of oil wells

Mata, Maria das Vitórias Medeiros da 24 July 2009 (has links)
The main objective of this study was to apply recently developed methods from statistical physics to time-series analysis, in particular to electrical induction log data from 54 oil wells located in the Campo de Namorado (Bacia de Campos, RJ), in order to study the petrophysical similarity of those wells in a spatial distribution. To this end, we used the DFA method to determine whether or not this technique can be used to characterize the field spatially. After obtaining the DFA values for all wells, we applied clustering analysis using the non-hierarchical K-means method. Usually based on the Euclidean distance, K-means divides the elements of a data matrix into k well-defined groups, such that the similarities between elements belonging to different groups are as small as possible. To test whether a dataset grouped by the K-means method, or a randomly grouped dataset, forms spatial patterns, we created the parameter Ω (neighborhood index). High values of Ω indicate aggregated data; low values indicate scattered data or data without spatial correlation. Using the Monte Carlo method, we observed that randomly grouped data present a distribution of Ω below the empirical value. We therefore conclude that the DFA data from the 54 wells are grouped and can be used for the spatial characterization of fields. Cross-checking contour-level maps against the K-means results confirmed its effectiveness for correlating wells in a spatial distribution.
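Detrended fluctuation analysis, the tool at the heart of the two preceding abstracts, reduces a time series to a single scaling exponent α (near 0.5 for uncorrelated noise, near 1.0 for 1/f-like signals), which is what makes per-well or per-subject comparison and clustering possible. A compact first-order DFA sketch in pure Python (the box sizes and white-noise test signal are illustrative):

```python
import math
import random

def _linfit(x, y):
    # Ordinary least-squares slope and intercept.
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def dfa_exponent(series, box_sizes=(4, 8, 16, 32, 64)):
    """First-order DFA: integrate the mean-centred series, detrend it
    linearly within non-overlapping boxes of each size, and return the
    slope of log F(n) versus log n."""
    mean = sum(series) / len(series)
    profile, s = [], 0.0
    for v in series:
        s += v - mean
        profile.append(s)
    log_n, log_f = [], []
    for n in box_sizes:
        msq = []
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            xs = list(range(n))
            slope, icept = _linfit(xs, seg)
            msq.append(sum((seg[i] - (slope * i + icept)) ** 2
                           for i in xs) / n)
        log_n.append(math.log(n))
        log_f.append(math.log(math.sqrt(sum(msq) / len(msq))))
    alpha, _ = _linfit(log_n, log_f)
    return alpha

rng = random.Random(0)
white = [rng.gauss(0.0, 1.0) for _ in range(1024)]
alpha = dfa_exponent(white)  # near 0.5 for uncorrelated noise
```

In the well-log study, one such α per well becomes the feature fed to K-means; in the HRV study, α distinguishes healthy from CAD tachograms.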
