271. A wearable real-time system for physical activity recognition and fall detection. Yang, Xiuxin, 23 September 2010.
This thesis designs and implements a wearable system to recognize physical activities and detect falls in real time. Recognizing people's physical activity has a broad range of applications: helping people maintain their energy balance through health assessment and intervention tools, investigating the links between common diseases and levels of physical activity, and providing feedback to motivate individuals to exercise. In addition, fall detection has become an active research topic due to the growing population over 65 worldwide and the serious harm that falls can cause.
In this work, the Sun SPOT wireless sensor system is used as the hardware platform to recognize physical activity and detect falls. Sensors with tri-axis accelerometers collect acceleration data, from which useful features are extracted. Evaluation of several algorithms indicates that the Naive Bayes algorithm outperforms other popular algorithms in both accuracy and ease of implementation for this application.
The wearable system works in two modes, indoor and outdoor, depending on the user's needs. The Naive Bayes classifier is successfully implemented on the Sun SPOT sensor. Evaluation of sampling rates shows that 20 Hz is an optimal sampling frequency for this application. If only one sensor is available to recognize physical activity, the best placement is the thigh. If two sensors are available, the combination of left thigh and right thigh is the best option, achieving 90.52% overall accuracy in the experiment.
For fall detection, a master sensor is attached to the chest and a slave sensor to the thigh to collect acceleration data. The results show that all falls are successfully detected: forward, backward, leftward, and rightward falls are distinguished from standing and walking by the fall detection algorithm. Normal physical activities are not misclassified as falls, and no false alarms occurred while the user wore the system in daily life.
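As a reading aid, the sketch below illustrates the kind of pipeline the abstract describes: windowed tri-axial accelerometer features fed to a Gaussian Naive Bayes classifier, plus a simple impact-threshold check for falls. It is not the thesis's Sun SPOT Java implementation; the window length, feature set, and 2.5 g threshold are illustrative assumptions.

```python
# Illustrative sketch only: Gaussian Naive Bayes over windowed accelerometer
# features, in the spirit of the system described above. The window length,
# feature set, and fall threshold are assumptions, not values from the thesis.
import numpy as np
from sklearn.naive_bayes import GaussianNB

FS = 20          # Hz, the sampling rate the thesis identifies as optimal
WIN = 2 * FS     # 2-second analysis windows (assumed)

def window_features(acc):
    """acc: (WIN, 3) array of x/y/z acceleration (in g) for one window."""
    mag = np.linalg.norm(acc, axis=1)
    return np.concatenate([acc.mean(axis=0), acc.std(axis=0),
                           [mag.mean(), mag.std(), mag.max()]])

def fall_suspected(acc, threshold_g=2.5):
    """Crude impact check: peak acceleration magnitude above an assumed threshold."""
    return np.linalg.norm(acc, axis=1).max() > threshold_g

# Training on labelled windows (feature matrix X, activity labels y):
#   clf = GaussianNB().fit(X, y)
#   activity = clf.predict(window_features(new_window)[None, :])
```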
272. Animal Movement in Pelagic Ecosystems: from Communities to Individuals. Schick, Robert Schilling, January 2009.
Infusing models for animal movement with more behavioral realism has been a goal of movement ecologists for several years. As ecologists have begun to collect more and more data on animal distribution and abundance, a clear need has arisen for more sophisticated analysis. Such analysis could include more realistic movement behavior, more information on the organism-environment interaction, and more ways to separate observation error from process error. Because landscape ecologists and behavioral ecologists typically study these same themes at very different scales, it has been proposed that their union could be productive for all (Lima and Zollner, 1996).

By understanding how animals interact with their land- and seascapes, we can better understand how species partition resources at large spatial scales. Accordingly, I begin this dissertation with a large-spatial-scale analysis of distribution data for marine mammals from Nova Scotia through the Gulf of Mexico. I analyze these data in three separate regions and, in the two data-rich regions, find compelling separation between the different communities. In the northernmost region, this separation falls broadly along diet-based partitions. This research provides a baseline for future study of marine mammal systems and, more importantly, highlights several gaps in current data collections.

In the last six years, several movement ecologists have begun to imbue sophisticated statistical analyses with increasing amounts of movement behavior. This has changed the way movement ecologists think about movement data and movement processes. In this dissertation I continue this trend. I review the state of movement modeling and then propose a new Bayesian movement model built around three questions: behavior, organism-environment interaction, and process-based inference from noisy data.

Applying this model to two different datasets, migrating right whales in the NW Atlantic and foraging monk seals in the Northwest Hawaiian Islands, provides for the first time estimates of how moving animals assess the suitability of patches within their perceptual range. By estimating the parameters governing this suitability, I give right whale managers a clear depiction of the gaps in protection along this vulnerable and understudied migratory corridor. For monk seals, I provide a behaviorally based view of how animals in different colonies and age and sex groups move throughout their range. This information is crucial for managers who translocate individuals to new habitat, as it offers a quantitative glimpse of how members of certain groups perceive their landscape.

This model provides critical information about the behaviorally based movement choices animals make. The results can be used to understand the ecology of these patterns and to help inform conservation actions. Finally, this modeling framework provides a way to unite the fields of movement ecology and graph theory.
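The abstract does not state the model's functional form. Purely as an illustration of "choosing among patches within a perceptual range", the sketch below draws a patch with probability proportional to an exponential function of its suitability score; the softmax form, the parameter beta, and all names are assumptions, not the dissertation's model.

```python
# Hypothetical patch-choice step: the animal picks one patch inside its perceptual
# range with probability proportional to exp(beta * suitability). Illustrative only.
import numpy as np

def choose_patch(suitability, beta, rng=None):
    """suitability: covariate-derived scores for the candidate patches in range."""
    rng = rng or np.random.default_rng()
    w = np.exp(beta * np.asarray(suitability, float))
    return rng.choice(len(w), p=w / w.sum())
```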
273. Bayesian Model Uncertainty and Prior Choice with Applications to Genetic Association Studies. Wilson, Melanie Ann, January 2010.
The Bayesian approach to model selection allows for uncertainty both in model-specific parameters and in the models themselves. Much of the recent Bayesian model uncertainty literature has focused on defining these prior distributions in an objective manner, providing conditions under which Bayes factors lead to the correct model selection, particularly when the number of variables, p, increases with the sample size, n. This is certainly the case in our motivating area: genetic association studies involving single nucleotide polymorphisms (SNPs). While the most common approach to this problem has been to apply a marginal test to every genetic marker, we employ analytical strategies that improve upon these marginal methods by modeling the outcome variable as a function of a multivariate genetic profile using Bayesian variable selection. In doing so, we perform variable selection on a large number of correlated covariates within studies involving modest sample sizes.
In particular, we present an efficient Bayesian model search strategy that searches over the space of genetic markers and their genetic parametrizations. The resulting method, Multilevel Inference of SNP Associations (MISA), allows computation of multilevel posterior probabilities and Bayes factors at the global, gene, and SNP levels. We use simulated data sets to characterize MISA's statistical power and show that MISA has higher power to detect association than standard procedures. Using data from the North Carolina Ovarian Cancer Study (NCOCS), MISA identifies variants that were not identified by standard methods and that have been externally 'validated' in independent studies.
In the context of Bayesian model uncertainty for problems involving a large number of correlated covariates, we characterize commonly used prior distributions on the model space and investigate their implicit multiplicity-correction properties, first in the extreme case where the model includes an increasing number of redundant covariates and then in the case of full-rank design matrices. We provide conditions on the asymptotic (in n and p) behavior of the model space prior required to achieve consistent selection of the global hypothesis of at least one associated variable using global posterior probabilities (i.e., under 0-1 loss). In particular, under the assumption that the null model is true, we show that the commonly used uniform prior on the model space leads to inconsistent selection of the global hypothesis via global posterior probabilities (the posterior probability of at least one association goes to 1) when the rank of the design matrix is finite. In the full-rank case, we also show inconsistency when p goes to infinity faster than the square root of n. Alternatively, we show that any model space prior for which the global prior odds of association increase at a rate slower than the square root of n results in consistent selection of the global hypothesis in terms of posterior probabilities.
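As a reading aid, the standard identity below relates the global prior odds, the global Bayes factor, and the global posterior probability of at least one association; the abstract's consistency condition constrains how the prior odds may grow with n. The notation is assumed here, not taken from the dissertation.

```latex
% Standard identity (notation assumed): global posterior probability of the
% hypothesis H_A that at least one variable is associated, given prior odds O_A
% and global Bayes factor BF(H_A : H_0).
\[
\Pr(H_A \mid \text{data})
  = \frac{O_A \,\mathrm{BF}(H_A : H_0)}{1 + O_A \,\mathrm{BF}(H_A : H_0)},
\qquad
O_A = \frac{\Pr(H_A)}{\Pr(H_0)}.
\]
% The condition quoted above: selection via this posterior probability is
% consistent when O_A grows more slowly than \sqrt{n}.
```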
274. Nonparametric Bayesian Methods for Multiple Imputation of Large Scale Incomplete Categorical Data in Panel Studies. Si, Yajuan, January 2012.
This thesis develops nonparametric Bayesian models to handle incomplete categorical variables in high-dimensional data sets using the framework of multiple imputation. It presents methods for ignorable missing data in cross-sectional studies and for potentially non-ignorable missing data in panel studies with refreshment samples.

The first contribution is a fully Bayesian, joint modeling approach to multiple imputation for categorical data based on Dirichlet process mixtures of multinomial distributions. The approach automatically models complex dependencies while remaining computationally expedient. I illustrate the repeated sampling properties of the approach using simulated data; it offers better performance than the default chained equations methods often used in such settings. I apply the methodology to impute missing background data in the 2007 Trends in International Mathematics and Science Study.

For the second contribution, I extend the nonparametric Bayesian imputation engine to handle a mix of potentially non-ignorable attrition and ignorable item nonresponse in multiple-wave panel studies. Ignoring attrition in models for panel data can result in biased inference if the reason for attrition is systematic and related to the missing values. Panel data alone cannot estimate the attrition effect without untestable assumptions about the missing data mechanism. Refreshment samples offer an extra data source that can be used to estimate the attrition effect while reducing reliance on strong assumptions about the missing data mechanism.

I consider two novel Bayesian approaches to handle attrition and item nonresponse simultaneously under multiple imputation in a two-wave panel with one refreshment sample, when the variables involved are categorical and high dimensional. First, I present a semi-parametric selection model that includes an additive non-ignorable attrition model with main effects of all variables, including demographic variables and outcome measures in wave 1 and wave 2. The survey variables are modeled jointly using a Bayesian mixture of multinomial distributions. I develop posterior computation algorithms for the semi-parametric selection model under different prior choices for the regression coefficients in the attrition model. Second, I propose two Bayesian pattern mixture models for this scenario that use latent classes to model the dependency among the variables and the attrition: a dependent Bayesian latent pattern mixture model, in which the variables are modeled via latent classes and attrition is treated as a covariate in the class allocation weights, and a joint Bayesian latent pattern mixture model, in which attrition and the variables are modeled jointly via latent classes.

I show via simulation studies that the pattern mixture models can recover true parameter estimates even when inferences based on the panel alone are biased by attrition. I apply both the selection and pattern mixture models to data from the 2007-2008 Associated Press/Yahoo News election panel study.
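A minimal sketch, under simplifying assumptions, of the kind of imputation engine described in the first contribution: a truncated latent-class (mixture-of-multinomials) model fit by Gibbs sampling, with missing categorical cells re-imputed from their class-specific multinomials. The truncation level, priors, and data layout are assumptions; the dissertation's Dirichlet process mixture is more general.

```python
# Illustrative truncated latent-class (mixture-of-multinomials) imputer for
# categorical data with ignorable missingness. A simplified stand-in for the
# Dirichlet process mixture described above; priors and truncation are assumptions.
import numpy as np

def gibbs_impute(X, n_levels, K=20, iters=200, alpha=1.0, seed=0):
    """X: (n, p) int array with missing cells coded as -1; n_levels[j] = number of
    categories of variable j. Returns one completed copy of X."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xc = X.copy()
    for j in range(p):                                   # initialize missing cells
        miss = Xc[:, j] < 0
        Xc[miss, j] = rng.integers(0, n_levels[j], miss.sum())
    z = rng.integers(0, K, n)                            # latent class labels
    for _ in range(iters):
        pi = rng.dirichlet(alpha / K + np.bincount(z, minlength=K))
        theta = [np.vstack([rng.dirichlet(1.0 + np.bincount(Xc[z == k, j],
                                                            minlength=n_levels[j]))
                            for k in range(K)])
                 for j in range(p)]                      # class-specific multinomials
        logp = np.log(pi) + sum(np.log(theta[j][:, Xc[:, j]]).T for j in range(p))
        logp -= logp.max(axis=1, keepdims=True)
        prob = np.exp(logp)
        prob /= prob.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(K, p=prob[i]) for i in range(n)])   # update classes
        for j in range(p):                               # re-impute the missing cells
            for i in np.where(X[:, j] < 0)[0]:
                Xc[i, j] = rng.choice(n_levels[j], p=theta[j][z[i]])
    return Xc
```

Repeating the call with different seeds, or saving spaced draws from one chain, yields the multiple completed data sets that multiple imputation combines.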
275. Efficient Tools for Reliability Analysis Using Finite Mixture Distributions. Cross, Richard J. (Richard John), 02 December 2004.
The complexity of many failure mechanisms and variations in component manufacture often make standard probability distributions inadequate for reliability modeling. Finite mixture distributions provide the necessary flexibility for modeling such complex phenomena but add considerable difficulty to the inference. This difficulty is overcome by drawing an analogy to neural networks: with appropriate modifications, a neural network can represent a finite mixture CDF or PDF exactly. Training with Bayesian regularization gives an efficient empirical Bayesian inference of the failure-time distribution. Training also yields an effective number of parameters from which the number of components in the mixture can be estimated. Credible sets for functions of the model parameters can be estimated using a simple closed-form expression. Complete, censored, and inspection samples can be handled by appropriate choice of the likelihood function.
In this work, architectures for Exponential, Weibull, Normal, and Log-Normal mixture networks have been derived. The capabilities of mixture networks have been demonstrated for complete, censored, and inspection samples from Weibull and Log-Normal mixtures. Furthermore, the ability of mixture networks to model arbitrary failure distributions has been demonstrated. A sensitivity analysis has been performed to determine how mixture network estimator errors are affected by mixture component spacing and sample size. It is shown that mixture network estimators are asymptotically unbiased and that their errors decay with sample size at least as well as those of maximum likelihood estimation (MLE).
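The sketch below shows the likelihood ingredient the abstract alludes to, assuming a Weibull mixture and right censoring: exact failures contribute the mixture density and censored observations the mixture survival function. The neural-network parameterization and Bayesian regularization used in the thesis are not reproduced here.

```python
# Illustrative log-likelihood for a Weibull mixture with right censoring: exact
# failures contribute the mixture density, censored times the mixture survival
# function. Parameter layout and values are assumptions for this sketch.
import numpy as np
from scipy.stats import weibull_min

def mixture_loglik(t, censored, weights, shapes, scales):
    """t: failure or censoring times; censored: True where the time is right-censored."""
    t = np.asarray(t, float)
    pdf = sum(w * weibull_min.pdf(t, c, scale=s) for w, c, s in zip(weights, shapes, scales))
    sf = sum(w * weibull_min.sf(t, c, scale=s) for w, c, s in zip(weights, shapes, scales))
    return np.sum(np.where(censored, np.log(sf), np.log(pdf)))

# Example: a two-component mixture evaluated on a small censored sample.
ll = mixture_loglik(t=[120.0, 340.0, 800.0, 1000.0],
                    censored=[False, False, False, True],
                    weights=[0.6, 0.4], shapes=[1.2, 3.0], scales=[300.0, 900.0])
```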
276. Cross-Lingual Category Integration Technique. Tzeng, Guo-han, 30 August 2006.
The emergence of the Internet has stimulated many innovative applications across countries, and e-commerce is becoming increasingly pervasive. In this environment, a tremendous amount of information expressed in different languages is exchanged and shared, not only by organizations but also by individuals. A large proportion of this information is available as textual documents managed using categories. Consequently, developing a practical and effective technique for cross-lingual category integration (CLCI) is an essential issue. Several category integration techniques have been proposed, but all of them deal only with monolingual documents. In response, this study combines existing cross-lingual text categorization techniques with an existing monolingual category integration technique (specifically, Enhanced Naive Bayes) and proposes a CLCI solution. Empirical evaluation demonstrates the feasibility and superior effectiveness of the proposed CLCI technique.
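As a hedged illustration of the category-integration idea (not necessarily the exact Enhanced Naive Bayes weighting used in this study), the sketch below re-scores documents from one source category by boosting the master categories where that source category's documents landed in a first Naive Bayes pass; the exponent w and the smoothing are assumptions.

```python
# Hedged sketch of Naive Bayes-based category integration: documents from one
# source category S first get plain Naive Bayes scores, then the master categories
# where S's documents landed receive a prior boost and documents are re-scored.
# The exponent w and the smoothing are illustrative assumptions.
import numpy as np

def enhanced_scores(log_likelihoods, log_prior, w=5.0):
    """log_likelihoods: (n_docs, n_classes) log P(d | c) for the documents of one
    source category; log_prior: (n_classes,) log P(c). Returns boosted log-scores."""
    base = log_likelihoods + log_prior                     # plain Naive Bayes pass
    counts = np.bincount(base.argmax(axis=1), minlength=len(log_prior))
    boost = w * np.log((counts + 1) / (counts.sum() + len(counts)))  # smoothed share
    return base + boost                                    # re-score with the boost
```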
277. Summarizing FLARE assay images in colon carcinogenesis. Leyk Williams, Malgorzata, 12 April 2006.
Intestinal tract cancer is one of the more common cancers in the United States. While in some individuals a genetic component causes the cancer, the rate of cancer in the remainder of the population is believed to be affected by diet. Since cancer usually develops slowly, the amount of oxidative damage to DNA can be used as a cancer biomarker. This dissertation examines effective ways of analyzing FLARE assay data, which quantify oxidative damage. The statistical methods are applied to data from a FLARE assay experiment that examines cells from the duodenum and the colon to see whether the risk of cancer differs between corn oil and fish oil diets. Treatments with the oxidizing agent dextran sodium sulfate (DSS), DSS followed by a recovery period, and a control are also used.
Previous methods in the literature examined FLARE data by summarizing the DNA damage of each cell with a single number, such as the relative tail moment (RTM). Variable skewness is proposed as an alternative measure and shown to be as effective as the RTM in detecting diet and treatment differences in the standard analysis. The RTM and skewness data are then analyzed using a hierarchical model, with both the skewness and the RTM showing diet/treatment differences. Simulated data for this model are also considered and show that the Bayes factor (BF) for higher-dimensional models does not follow the guidelines presented by Kass and Raftery (1995).
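For concreteness, the per-cell skewness summary compared above can be computed directly from each cell's damage measurements; the data layout in the sketch below is assumed.

```python
# Per-cell sample skewness as a single-number summary of DNA damage, the proposed
# alternative to the relative tail moment. The data layout here is assumed.
import numpy as np
from scipy.stats import skew

def cell_skewness(damage_by_cell):
    """damage_by_cell: list of 1-D arrays, one array of damage measurements per cell."""
    return np.array([skew(np.asarray(d, float)) for d in damage_by_cell])
```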
It is hypothesized that more information is obtained by describing the DNA damage functions rather than summarizing them with a single number. From each function, seven points are picked. When these points are modeled independently, only diet effects are found. However, when the correlation between points at the cell and rat levels is modeled, much stronger diet and treatment differences appear in both the colon and the duodenum than for any of the previous methods. These results are also easier to interpret and to represent graphically, showing that the correlated-points model is an effective way of analyzing the FLARE data.
278. Domain knowledge, uncertainty, and parameter constraints. Mao, Yi, 24 August 2010.
No description available.
279. Validity generalization and transportability: an investigation of random-effects meta-analytic methods. Kisamore, Jennifer L., January 2003.
Thesis (Ph.D.)--University of South Florida, 2003. Includes bibliographical references.

ABSTRACT: Validity generalization work over the past 25 years has called into question the veracity of the assumption that validity is situationally specific. Recent theoretical and methodological work has suggested that validity coefficients may be transportable even if true validity is not a constant. Most transportability work is based on the assumption that the distribution of rho (ρ) is normal, yet no empirical evidence exists to support this assumption. The present study used a competing-model approach in which a new procedure for assessing transportability was compared with two more commonly used methods: empirical Bayes estimation (Brannick, 2001; Brannick & Hall, 2003) was evaluated alongside the Schmidt-Hunter multiplicative model (Hunter & Schmidt, 1990) and a corrected Hedges-Vevea model (see Hall & Brannick, 2002; Hedges & Vevea, 1998). The purpose of the study was two-fold. The first part compared the accuracy of estimates of the mean, the standard deviation, and the lower bound of 90 and 99 percent credibility intervals computed from the three methods across 32 simulated conditions; the mean, variance, and shape of the distribution varied across conditions. The second part compared results from the three methods applied to previously published validity coefficients, to show whether the choice of method for determining whether transportability is warranted matters in practice. Results of the simulation analyses suggest that the Schmidt-Hunter method is superior to the other methods even when the distribution of true validity parameters violates the assumption of normality. Results of analyses conducted on real data show trends consistent with those evident in the analyses of the simulated data. Conclusions regarding transportability, however, did not change as a function of the method used for any of the real data sets. Limitations of the present study as well as recommendations for practice and future research are provided.
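A sketch of the bare-bones Schmidt-Hunter style computation behind the quantities compared in the simulation: the sample-size-weighted mean validity, the residual standard deviation of true validities after removing sampling-error variance, and the lower bound of a credibility interval under a normality assumption. Artifact corrections (range restriction, unreliability) and the empirical Bayes and Hedges-Vevea variants are omitted; function and variable names are assumptions.

```python
# Bare-bones Schmidt-Hunter style estimates: sample-size-weighted mean validity,
# residual SD of true validities after subtracting sampling-error variance, and a
# credibility-interval lower bound assuming rho is normally distributed.
# Artifact corrections are omitted; names and layout are assumptions.
import numpy as np
from scipy.stats import norm

def bare_bones(r, n, cred=0.90):
    """r: observed validity coefficients; n: per-study sample sizes."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    rbar = np.sum(n * r) / np.sum(n)                      # weighted mean validity
    var_obs = np.sum(n * (r - rbar) ** 2) / np.sum(n)     # observed variance of r
    var_err = (1 - rbar ** 2) ** 2 / (n.mean() - 1)       # expected sampling-error variance
    sd_rho = np.sqrt(max(var_obs - var_err, 0.0))         # residual SD of rho
    lower = rbar + norm.ppf((1 - cred) / 2) * sd_rho      # lower bound of the interval
    return rbar, sd_rho, lower
```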
280. A Bayesian model for curve clustering with application to gene expression data analysis. Zhou, Chuan, January 2003.
Thesis (Ph.D.)--University of Washington, 2003. Includes bibliographical references (leaves 178-195).