About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Empirical and Kinetic Models for the Determination of Pharmaceutical Product Stability

Khalifa, Nagwa 24 January 2011 (has links)
Drug stability is a vital subject in the pharmaceutical industry. All drug products should be kept stable and protected against chemical, physical, and microbiological degradation to ensure their efficacy and safety until released for public use. Hence, estimating or predicting stability is very important. This work studied the stability of three different drug agents using three different mathematical models: two empirical models (linear regression and an artificial neural network) and a mechanistic (kinetic) model. The stability of each drug in the three cases studied was expressed in terms of concentration, hardness, temperature, and humidity. The predicted values obtained from the models were compared to the observed drug concentrations obtained experimentally and then evaluated by calculating the mean squared error. Among the models used in this work, the mechanistic model was found to be the most accurate and reliable method of stability testing, since it had the smallest calculated errors. Overall, the accuracy of these mathematical models, as indicated by the proximity of their stability measurements to the observed values, suggests that such models can be reliable and time-saving alternatives to the analytical techniques used in practice.
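The abstract does not reproduce the thesis's data or code, but a minimal sketch of the comparison it describes (fitting an empirical linear model and a first-order kinetic model to degradation data, then ranking them by mean squared error) might look like this. The time points, concentrations, and starting values are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative stability data: drug concentration (%) over time (months).
t = np.array([0, 3, 6, 9, 12, 18, 24], dtype=float)
conc = np.array([100.0, 97.1, 94.5, 91.8, 89.4, 84.9, 80.6])

# Empirical model: straight-line regression of concentration on time.
slope, intercept = np.polyfit(t, conc, 1)
pred_linear = intercept + slope * t

# Mechanistic (kinetic) model: first-order degradation C(t) = C0 * exp(-k t).
def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

(c0_hat, k_hat), _ = curve_fit(first_order, t, conc, p0=(100.0, 0.01))
pred_kinetic = first_order(t, c0_hat, k_hat)

# Compare models by mean squared error, as in the abstract.
mse_linear = np.mean((conc - pred_linear) ** 2)
mse_kinetic = np.mean((conc - pred_kinetic) ** 2)
print(f"linear MSE:  {mse_linear:.4f}")
print(f"kinetic MSE: {mse_kinetic:.4f} (k = {k_hat:.4f} per month)")
```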
12

Automatic instant messaging dialogue using statistical models and dialogue acts

Ivanovic, Edward January 2008 (has links)
Instant messaging dialogue is used for communication by hundreds of millions of people worldwide, but has received relatively little attention in computational linguistics. We describe methods aimed at providing a shallow interpretation of messages sent via instant messaging. This is done by assigning labels known as dialogue acts to utterances within messages. Since messages may contain more than one utterance, we explore automatic message segmentation using combinations of parse trees and various statistical models, achieving high accuracy on both the classification and segmentation tasks. Finally, we gauge the immediate usefulness of dialogue acts in conversation management by presenting a dialogue simulation program that uses dialogue acts to predict utterances during a conversation. The predictions are evaluated qualitatively, with very encouraging results.
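The abstract names dialogue-act classification with statistical models without specifying them; as a hypothetical illustration of the basic task, a bag-of-words naive Bayes classifier over utterances could be sketched as follows. The tag set and training utterances below are invented, not the thesis's corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training utterances with dialogue-act labels (invented examples;
# the thesis's actual tag set and corpus are not reproduced here).
utterances = [
    "hi there", "hello!", "how are you?", "i'm good thanks",
    "what time is the meeting?", "it starts at 3pm",
    "thanks a lot", "no problem", "bye", "see you later",
]
acts = [
    "GREET", "GREET", "QUESTION", "STATEMENT",
    "QUESTION", "STATEMENT",
    "THANK", "STATEMENT", "BYE", "BYE",
]

# Bag-of-words features + multinomial naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(utterances, acts)

print(model.predict(["when does it start?", "hey", "goodbye"]))
```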
13

Surrogate variable analysis

Leek, Jeffrey Tullis. January 2007 (has links)
Thesis (Ph. D.)--University of Washington, 2007. / Vita. Includes bibliographical references (p. 113-121).
14

Statistical Models for Predicting College Success

Nunez, Yelen 13 November 2013 (has links)
Colleges base their admission decisions on a number of factors to determine which applicants have the potential to succeed. This study utilized data for students who graduated from Florida International University between 2006 and 2012. Two models were developed (one using SAT and the other using ACT as the principal explanatory variable) to predict college success, measured as the student's college grade point average at graduation. Other factors used to make these predictions included high school performance, socioeconomic status, major, gender, and ethnicity. The model using ACT had a higher R^2, but the model using SAT had a lower mean squared error. African Americans had a significantly lower college grade point average than graduates of other ethnicities, and females had a significantly higher college grade point average than males.
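The FIU data behind this study are not public, so the following is only a hedged sketch of the kind of regression the abstract describes: college GPA regressed on a test score plus background factors, with fit summarized by R^2 and mean squared error. All variables and coefficients are synthetic assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for the study's variables: test score, high-school
# GPA, and a binary socioeconomic indicator.
sat = rng.normal(1100, 150, n)
hs_gpa = rng.normal(3.2, 0.4, n)
low_ses = rng.integers(0, 2, n)
college_gpa = (0.9 + 0.0012 * sat + 0.35 * hs_gpa - 0.1 * low_ses
               + rng.normal(0, 0.3, n))

# One model per admission test; the ACT model would swap in ACT scores.
X = sm.add_constant(np.column_stack([sat, hs_gpa, low_ses]))
fit = sm.OLS(college_gpa, X).fit()

mse = np.mean(fit.resid ** 2)
print(f"R^2 = {fit.rsquared:.3f}, MSE = {mse:.4f}")
```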
15

Assessment of Potential Changes in Crop Yields in the Central United States Under Climate Change Regimes

Matthews-Pennanen, Neil 01 May 2018 (has links)
Climate change is one of the great challenges facing agriculture in the 21st century. The goal of this study was to produce projections of crop yields for the central United States in the 2030s, 2060s, and 2090s, based on the relationship between weather and yield in historical crop yields from 1980 to 2010. The projections were made across 16 states, from Louisiana in the south to Minnesota in the north, and cover maize, soybeans, cotton, spring wheat, and winter wheat. Simulated weather variables based on three climate scenarios were used to project future crop yields; soil characteristics, topography, and fertilizer application were also included in the crop production models. Two technology scenarios were used: one simulating a future in which crop technology continues to improve, and another in which crop technology remains similar to today's. Results showed future crop yields to be responsive to both the climate scenarios and the technology scenarios. The effects of a changing climate regime on crop yields varied both geographically throughout the study area and from crop to crop; one broad geographic trend was greater potential for yield losses in the south and greater potential for gains in the north. Whether new technologies enable crop yields to keep increasing as the climate becomes less favorable is a major factor in agricultural production in the coming century, and the results indicate that the degree to which society must rely on such technologies will depend largely on the degree of warming that occurs. Continued research into the potential negative impacts of climate change on the current crop system in the United States is needed to mitigate the widespread losses in crop productivity that could result, and research should also explore potential new opportunities for crop development under higher temperatures. Broad-geographic studies like this one should be complemented by studies of narrower scope that can manipulate climatic variables under controlled conditions. Investment in these types of agricultural studies will give the United States agricultural sector greater tools with which to mitigate the disruptive effects of a changing climate.
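As a rough illustration of the modeling approach the abstract describes (yield regressed on weather, inputs, and a technology trend, then projected under warming scenarios), consider the following sketch. The variables, coefficients, and scenario values are invented for illustration and are not the study's data or model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000

# Synthetic county-year records (illustrative; the study's 1980-2010
# yield and weather data are not reproduced here).
growing_temp = rng.normal(24, 2, n)      # season-average temperature, C
precip = rng.normal(500, 100, n)         # growing-season precipitation, mm
fertilizer = rng.normal(150, 30, n)      # N applied, kg/ha
year = rng.integers(1980, 2011, n)       # technology proxy: linear time trend

# A quadratic temperature term lets yield peak at an optimum and fall off
# under warming, the pattern behind the south-loses / north-gains result.
yield_t = (4.0 + 0.9 * growing_temp - 0.02 * growing_temp**2
           + 0.004 * precip + 0.01 * fertilizer
           + 0.08 * (year - 1980) + rng.normal(0, 0.8, n))

X = sm.add_constant(np.column_stack(
    [growing_temp, growing_temp**2, precip, fertilizer, year - 1980]))
fit = sm.OLS(yield_t, X).fit()

# Project a warmer future: +3 C, with technology continued vs. frozen.
base = np.array([1, 24, 24**2, 500, 150, 30])          # recent conditions
warm_tech = np.array([1, 27, 27**2, 500, 150, 80])     # +3 C, trend continues
warm_frozen = np.array([1, 27, 27**2, 500, 150, 30])   # +3 C, technology frozen
print(fit.predict(np.vstack([base, warm_tech, warm_frozen])))
```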
16

A Teleological Approach to Robot Programming by Demonstration

Sweeney, John Douglas 01 February 2011 (has links)
This dissertation presents an approach to robot programming by demonstration based on two key concepts: demonstrator intent is the most meaningful signal that the robot can observe, and the robot should have a basic level of behavioral competency from which to interpret observed actions. Intent is a teleological, robust teaching signal, invariant to many common sources of noise in training. The robot can use the knowledge encapsulated in sensorimotor schemas to interpret the demonstration, and knowledge gained in prior demonstrations can be applied to future sessions. I argue that programming by demonstration should be organized into declarative and procedural components. The declarative component represents a reusable outline of underlying behavior that can be applied to many different contexts. The procedural component represents the dynamic portion of the task, based on features observed at run time. I describe how statistical models, and Bayesian methods in particular, can be used to model these components. These models have many features that are beneficial for learning in this domain, such as tolerance for uncertainty and the ability to incorporate prior knowledge into inferences. I demonstrate this architecture through experiments on a bimanual humanoid robot using tasks from the pick-and-place domain. Additionally, I develop and experimentally validate a model, learned from demonstration data, for generating grasp candidates from visual features. This model is especially useful in the context of pick-and-place tasks.
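The dissertation's models are not given in the abstract; as a loose, hypothetical illustration of a Bayesian classifier learned from demonstration data (here, choosing a grasp type from simple visual features), one might sketch the following. The features, labels, and data are invented and far simpler than the thesis's actual grasp model.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
n = 200

# Invented visual features for demonstrated grasps: object width, height,
# and elongation. The thesis's actual feature set and robot data differ.
width = rng.uniform(2, 15, n)       # cm
height = rng.uniform(2, 20, n)
elongation = height / width

# Label each demonstration with the grasp the demonstrator used:
# 1 = top grasp (flat/wide objects), 0 = side grasp (tall/narrow objects).
grasp_type = (elongation < 1.0).astype(int)
X = np.column_stack([width, height, elongation])

# A simple Bayesian classifier over demonstration data: posterior over
# grasp type given observed features, with priors learned from the corpus.
model = GaussianNB().fit(X, grasp_type)

candidate = np.array([[12.0, 4.0, 4.0 / 12.0]])   # a wide, flat object
print(model.predict_proba(candidate))              # P(side), P(top)
```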
17

Likelihood Inference of Some Cure Rate Models and Applications

Liu, Xiaofeng 04 1900 (has links)
In this thesis, we perform a survival analysis for right-censored data from populations with a cure rate. We consider two cure rate models, based on the geometric and Poisson distributions, which are special cases of the Conway-Maxwell-Poisson distribution. The models assume that the number of competing causes of the event of interest follows a Conway-Maxwell-Poisson distribution. For various sample sizes, we implement a simulation process to generate samples with a cure rate. Under this setup, we obtain the maximum likelihood estimates (MLEs) of the model parameters using the gamlss R package. Using the asymptotic distribution of the MLE as well as the parametric bootstrap method, we discuss the construction of confidence intervals for the model parameters; their performance is then assessed through Monte Carlo simulations. / Master of Science (MSc)
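The thesis fits these models with the gamlss R package; as a language-neutral illustration of the Poisson special case, the following Python sketch simulates right-censored data with a cure fraction and recovers the parameters by maximum likelihood. The parameter values and the exponential latent-time assumption are illustrative choices, not the thesis's setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, theta_true, lam_true, cens_time = 500, 1.5, 0.5, 8.0

# Poisson cure rate (promotion time) model: N ~ Poisson(theta) competing
# causes; subjects with N = 0 are cured. Latent cause times ~ Exp(lam).
N = rng.poisson(theta_true, n)
latent = np.array([rng.exponential(1 / lam_true, k).min() if k > 0 else np.inf
                   for k in N])
time = np.minimum(latent, cens_time)
event = (latent <= cens_time).astype(float)   # 1 = event, 0 = censored

def neg_loglik(params):
    log_theta, log_lam = params            # log-parameterize for positivity
    theta, lam = np.exp(log_theta), np.exp(log_lam)
    F = 1 - np.exp(-lam * time)            # latent-time CDF
    log_f = np.log(lam) - lam * time       # latent-time log-density
    # Population survival S(t) = exp(-theta * F(t)); events add the
    # density term log(theta * f(t)).
    ll = np.sum(event * (np.log(theta) + log_f) - theta * F)
    return -ll

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
theta_hat, lam_hat = np.exp(res.x)
print(f"theta = {theta_hat:.3f}, lambda = {lam_hat:.3f}, "
      f"cure fraction = {np.exp(-theta_hat):.3f} (true {np.exp(-theta_true):.3f})")
```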
18

Novel Statistical Models for Quantitative Shape-Gene Association Selection

Dai, Xiaotian 01 December 2017 (has links)
Prior research has reported that genetic mechanisms play a major role in the development of biological shapes. The primary goal of this dissertation is to develop novel statistical models to investigate the quantitative relationships between biological shapes and genetic variants. These problems can be extremely challenging for traditional statistical models for a number of reasons: 1) the biological phenotypes cannot be effectively represented by single-valued traits, while traditional regression handles only one dependent variable; and 2) in real-life genetic data, the number of candidate genes to be investigated is extremely large, and the signal-to-noise ratio among candidate genes is expected to be very low. To address these challenges, we propose three statistical models to handle multivariate, functional, and multilevel functional phenotypes, with applications to biological shape data using different shape descriptors. To the best of our knowledge, no statistical model has previously been developed for multilevel functional phenotypes. Even though multivariate regression has been well explored and such approaches can be applied to genetic studies, we show through simulation examples and real data examples that the proposed model can outperform the alternatives in both variable selection and prediction. Although motivated by genetic research, the proposed models can be used as general-purpose machine learning algorithms with far-reaching applications.
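The abstract does not specify the proposed models, so the following is only a hypothetical sketch of the multivariate variable-selection setting it describes: a multi-task lasso that selects a sparse set of genetic variants jointly associated with a multivariate shape descriptor. The genotypes, effect sizes, and dimensions are synthetic.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLassoCV

rng = np.random.default_rng(4)
n, p, q = 100, 200, 5   # subjects, candidate variants, shape coordinates

# Synthetic genotypes (0/1/2 minor-allele counts) with only 3 truly
# associated variants, a sparse low-signal setting as in the abstract.
G = rng.integers(0, 3, size=(n, p)).astype(float)
B = np.zeros((p, q))
B[[5, 40, 120], :] = rng.normal(0, 1.0, (3, q))   # sparse true effects
shape = G @ B + rng.normal(0, 1.0, (n, q))        # multivariate phenotype

# Multi-task lasso: one penalty shared across all shape coordinates, so a
# variant is selected (or dropped) for the whole shape at once.
fit = MultiTaskLassoCV(cv=5).fit(G, shape)
selected = np.flatnonzero(np.linalg.norm(fit.coef_.T, axis=1) > 1e-8)
print("selected variants:", selected)
```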
19

Statistical models for catch-at-length data with birth cohort information

Chung, Sai-ho., 鍾世豪. January 2005 (has links)
published_or_final_version / abstract / Social Sciences / Doctoral / Doctor of Philosophy
20

Developing An Alternative Way to Analyze NanoString Data

Shen, Shu 01 January 2016 (has links)
NanoString technology provides a new method to measure gene expression. It is more sensitive than microarrays and can measure more genes than RT-PCR with similar sensitivity. The system produces counts for each target gene and tabulates them. Counts can be normalized using an Excel macro or nSolver before analysis; both methods rely on data normalization prior to statistical analysis to identify differentially expressed genes. Alternatively, we propose to model gene expression as a function of positive-control and reference-gene measurements. Simulations and examples are used to compare this model with the NanoString normalization methods. The results show that our model is more stable, more efficient, and better able to control false positive proportions. In addition, we derive asymptotic properties of a normalized test of control versus treatment.
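As a hedged sketch of the proposed alternative (regressing target-gene measurements on positive-control and reference-gene measurements, with the group effect tested in the same model rather than after separate normalization), one might write the following; the counts and coefficients are simulated stand-ins, not NanoString data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 24   # samples across control and treatment groups

# Synthetic NanoString-style log counts (illustrative values only).
pos_ctrl = rng.normal(10.0, 0.5, n)           # positive-control summary
ref_gene = rng.normal(8.0, 0.4, n)            # housekeeping/reference summary
treatment = np.repeat([0, 1], n // 2)         # group indicator
target = (1.0 + 0.8 * pos_ctrl + 0.5 * ref_gene
          + 0.6 * treatment + rng.normal(0, 0.3, n))

# Instead of normalizing counts first, put the controls directly into the
# model as covariates and test the treatment effect in one step.
X = sm.add_constant(np.column_stack([pos_ctrl, ref_gene, treatment]))
fit = sm.OLS(target, X).fit()
print(fit.summary().tables[1])   # treatment coefficient and its p-value
```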
