41. Quantitative Physiologically-Based Sleep Modeling: Dynamical Analysis and Clinical Applications
Fulcher, Benjamin David. Master of Science thesis, January 2009.
In this thesis, a recently developed physiologically-based model of the sleep-wake switch is analyzed and applied to a variety of clinically relevant protocols. In contrast to the phenomenological models that have dominated sleep modeling in the past, the present work demonstrates the advantages of the physiologically-based approach. Dynamical and linear stability analyses of the Phillips-Robinson sleep model allow us to create a general framework for determining its response to arbitrary external stimuli. The effects of near-stable wake and sleep ghosts on the model's dynamics are found to have implications for arousal during sleep, sleep deprivation, and sleep inertia. Impulsive sensory stimuli during sleep are modeled according to their known physiological mechanism, and the predicted arousal threshold variation matches experimental data from the literature. In simulating a sleep fragmentation protocol, the model simultaneously reproduces the body temperature and arousal threshold variations measured in another existing clinical study. In the second part of the thesis, we simulate sleep deprivation by introducing a wake-effort drive that is required to maintain wakefulness during normal sleeping periods. We interpret this drive both physiologically and psychologically, and demonstrate quantitative agreement between the model's output and experimental subjective fatigue data. In addition to subjective fatigue, the model simultaneously reproduces adrenaline excretion and body temperature variations. In the final part of the thesis, the model is extended to include the orexinergic neurons of the lateral hypothalamic area. Due to the dynamics of the orexin group, the extended model exhibits sleep inertia, and an inhibitory circadian projection to the orexin group produces a post-lunch dip in performance, both of which are well-known behavioral features. Including both homeostatic and circadian inputs to the orexin group, the model produces a waking arousal variation that quantitatively matches published clinical data.
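The flip-flop structure at the heart of this class of model can be sketched compactly: two mutually inhibitory neuronal populations (a wake-active monoaminergic group and a sleep-active VLPO group) driven by a homeostatic sleep pressure and a circadian input. The Python sketch below is a minimal illustration of that structure; the equations follow the general form of Phillips-Robinson-type models, but the parameter values, circadian waveform, and initial conditions are illustrative assumptions rather than the thesis's fitted values.

```python
import numpy as np

# Sigmoid firing-rate parameters and coupling strengths (illustrative values,
# loosely in the range used by Phillips-Robinson-type models).
Q_MAX, THETA, SIGMA = 100.0, 10.0, 3.0      # s^-1, mV, mV
NU_VM, NU_MV = -2.1, -1.8                   # mutual inhibition weights (mV s)
NU_VH, NU_VC = 1.0, -2.9                    # homeostatic and circadian weights
D_M = 1.3                                   # constant cholinergic drive to MA group (mV)
TAU, CHI, MU = 10.0, 45.0 * 3600.0, 4.4     # population and homeostatic time scales

def firing_rate(v):
    """Population firing rate as a sigmoid function of mean voltage."""
    return Q_MAX / (1.0 + np.exp(-(v - THETA) / SIGMA))

def simulate(days=3, dt=1.0):
    """Forward-Euler integration of the two-population sleep-wake switch."""
    n = int(days * 86400 / dt)
    v_vlpo, v_ma, h = -10.0, 0.0, 10.0      # assumed initial (awake) state
    out = np.zeros((n, 3))
    for i in range(n):
        t = i * dt
        c = 0.5 * (1.0 + np.cos(2.0 * np.pi * t / 86400.0))   # toy circadian input
        d_v = NU_VH * h + NU_VC * c                            # total drive to VLPO
        dv_vlpo = (-v_vlpo + NU_VM * firing_rate(v_ma) + d_v) / TAU
        dv_ma = (-v_ma + NU_MV * firing_rate(v_vlpo) + D_M) / TAU
        dh = (-h + MU * firing_rate(v_ma)) / CHI               # homeostatic sleep pressure
        v_vlpo += dt * dv_vlpo
        v_ma += dt * dv_ma
        h += dt * dh
        out[i] = v_vlpo, v_ma, h
    return out

trace = simulate()
wake_fraction = np.mean(trace[:, 1] > trace[:, 0])  # crude "awake" indicator
```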

42. The Generalized DEA Model of Fundamental Analysis of Public Firms, with Application to Portfolio Selection
Zhang, Xin. 01 December 2007.
Fundamental analysis is an approach for evaluating a public firm's investment worthiness by looking at its business at the basic, or fundamental, financial level. The focus of this thesis is on utilizing financial statement data and a new generalization of Data Envelopment Analysis, termed the GDEA model, to determine a relative financial strength (RFS) indicator that represents the underlying business strength of a firm. This approach is based on maximizing a correlation metric between the GDEA-based score of financial strength and stock price performance. The correlation maximization problem is a difficult binary nonlinear optimization problem that requires iteratively reconfiguring which financial statement parameters serve as inputs and outputs. A two-step heuristic algorithm that combines random sampling and local search optimization is developed, and theoretical optimality conditions are derived for checking solutions of the GDEA model. Statistical tests are developed for validating the utility of the RFS indicator for portfolio selection, and the approach is computationally tested and compared with competing approaches.
The GDEA model is further extended by incorporating expert information on input/output selection. In addition to deriving theoretical properties of the extended model, a new methodology is developed for testing whether such exogenous expert knowledge is significant in obtaining stronger RFS indicators. Finally, the RFS approach under expert information is applied in a case study involving more than 800 firms covering all sectors of the U.S. stock market, to determine optimized RFS indicators for stock selection. The selected stocks are then used within portfolio optimization models to demonstrate the superiority of the techniques developed in this thesis.
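A minimal sketch of the two-step heuristic described above is given below: candidate assignments of financial statement items to inputs and outputs are drawn at random, scored by the correlation between an efficiency score and subsequent stock returns, and then refined by a one-flip local search. Standard input-oriented CCR DEA (solved with scipy's linprog) stands in for the thesis's generalized GDEA formulation; the data, sample sizes, and correlation measure are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import spearmanr

def ccr_score(X, Y, o):
    """Input-oriented CCR DEA efficiency of firm o (multiplier form).
    X: (n, p) inputs, Y: (n, q) outputs; data assumed strictly positive."""
    n, p = X.shape
    q = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(p)])             # maximize u . y_o
    A_ub = np.hstack([Y, -X])                            # u . y_j - v . x_j <= 0
    A_eq = np.concatenate([np.zeros(q), X[o]])[None, :]  # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (q + p), method="highs")
    return -res.fun

def config_fitness(data, returns, is_output):
    """Rank correlation between firm efficiency scores and stock returns for one
    assignment of statement items to outputs (True) or inputs (False)."""
    if is_output.all() or not is_output.any():
        return -np.inf                                   # need both inputs and outputs
    X, Y = data[:, ~is_output], data[:, is_output]
    scores = [ccr_score(X, Y, o) for o in range(len(data))]
    return spearmanr(scores, returns).correlation

def two_step_search(data, returns, n_samples=200, seed=0):
    """Step 1: random sampling of configurations; step 2: one-flip local search."""
    rng = np.random.default_rng(seed)
    k = data.shape[1]
    best, best_fit = None, -np.inf
    for _ in range(n_samples):
        cand = rng.random(k) < 0.5
        fit = config_fitness(data, returns, cand)
        if fit > best_fit:
            best, best_fit = cand, fit
    improved = True
    while improved:
        improved = False
        for j in range(k):
            cand = best.copy()
            cand[j] = ~cand[j]                           # flip one item's role
            fit = config_fitness(data, returns, cand)
            if fit > best_fit:
                best, best_fit, improved = cand, fit, True
    return best, best_fit
```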

43. Data Mining with Multivariate Kernel Regression Using Information Complexity and the Genetic Algorithm
Beal, Dennis Jack. 01 December 2009.
Kernel density estimation is a data-smoothing technique that depends heavily on bandwidth selection. The current literature has focused on optimal, primarily data-driven selectors for the univariate case; plug-in and cross-validation selectors have recently been extended to the general multivariate case.
This dissertation introduces and develops novel techniques for data mining with multivariate kernel regression, using information complexity and the genetic algorithm as a heuristic optimizer to choose the optimal bandwidth and the best predictors in kernel regression models. Simulated and real data are used to cross-validate the optimal bandwidth selectors using information complexity. The genetic algorithm is used in conjunction with information complexity to determine kernel density estimates for variable selection from high-dimensional multivariate data sets.
Kernel regression is also hybridized with the implicit enumeration algorithm to determine the set of independent variables for the global optimal solution using information criteria as the objective function. The results from the genetic algorithm are compared to the optimal solution from the implicit enumeration algorithm and the known global optimal solution from an explicit enumeration of all possible subset models.
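As a concrete sketch of this scheme, the code below couples a Nadaraya-Watson kernel regression estimator with a small genetic algorithm that searches jointly over a predictor-selection mask and a bandwidth vector. A leave-one-out lack-of-fit score with an AIC-style penalty is used as a simplified stand-in for the information-complexity (ICOMP) criterion developed in the dissertation; the population size, mutation rate, and simulated data are illustrative assumptions.

```python
import numpy as np

def nw_predict(X_train, y_train, X_test, h):
    """Multivariate Nadaraya-Watson estimate with a product Gaussian kernel."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) / h) ** 2
    w = np.exp(-0.5 * d2.sum(axis=2))
    return w @ y_train / (w.sum(axis=1) + 1e-12)

def loo_criterion(X, y, mask, h):
    """Leave-one-out lack of fit plus a rough AIC-style complexity penalty
    (a simplified stand-in for ICOMP)."""
    if not mask.any():
        return np.inf
    Xm, hm = X[:, mask], h[mask]
    d2 = ((Xm[:, None, :] - Xm[None, :, :]) / hm) ** 2
    w = np.exp(-0.5 * d2.sum(axis=2))
    np.fill_diagonal(w, 0.0)                       # leave each point out of its own fit
    resid = y - w @ y / (w.sum(axis=1) + 1e-12)
    n = len(y)
    return n * np.log(resid @ resid / n + 1e-12) + 2.0 * mask.sum()

def ga_select(X, y, pop=30, gens=40, seed=1):
    """Genetic search over predictor masks and bandwidths (assumes >= 2 predictors)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    masks = rng.random((pop, p)) < 0.5
    bands = rng.uniform(0.1, 2.0, size=(pop, p)) * X.std(axis=0)
    for _ in range(gens):
        fits = np.array([loo_criterion(X, y, m, h) for m, h in zip(masks, bands)])
        order = np.argsort(fits)                   # lower criterion value is better
        masks, bands = masks[order], bands[order]
        for i in range(pop // 2, pop):             # replace the worst half
            a, b = rng.integers(0, pop // 2, size=2)
            cut = rng.integers(1, p)               # single-point crossover
            masks[i] = np.concatenate([masks[a][:cut], masks[b][cut:]])
            bands[i] = 0.5 * (bands[a] + bands[b]) * rng.uniform(0.8, 1.2, p)
            if rng.random() < 0.2:                 # mutation: flip one predictor
                j = rng.integers(p)
                masks[i, j] = ~masks[i, j]
    best = min(zip(masks, bands), key=lambda mh: loo_criterion(X, y, *mh))
    return best                                    # (selected mask, bandwidths)

# Hypothetical usage on simulated data.
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 2] + 0.1 * rng.normal(size=120)
mask, h = ga_select(X, y)
```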

44. Algorithms for Multi-Sample Cluster Analysis
Almutairi, Fahad. 01 August 2007.
In this study, we develop algorithms to solve the Multi-Sample Cluster Analysis (MSCA) problem. This problem arises when we have multiple samples and need to find the statistical model that best fits the cluster structure of these samples. One important application area, among others, is international market segmentation: samples of customers' preferences and characteristics are collected from different regions of the market, and the goal is to join regions with similar customer characteristics into clusters (segments).
We develop branch and bound algorithms and a genetic algorithm. In these algorithms, any of the available information criteria (AIC, CAIC, SBC, and ICOMP) can be used as the objective function to be optimized. Our algorithms use the Clique Partitioning Problem (CPP) formulation. They are the first algorithms to use information criteria with the CPP formulation.
When the branch and bound algorithms are allowed to run to completion, they converge to the optimal MSCA alternative; they also find good solutions when stopped short of convergence. In particular, we develop a branching strategy that uses a "look-ahead" technique, which we refer to as the complete adaptive branching strategy. This strategy makes the branch and bound algorithm quickly search for the optimal solution in multiple branches of the enumeration tree before switching to a depth-first branching strategy. In computational tests, its performance was superior to that of the other branching methods as well as to the genetic algorithm.
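The objective these algorithms optimize can be illustrated with the toy example below: every grouping of a handful of samples is enumerated and scored with AIC under a univariate Gaussian model per cluster, and the grouping with the lowest score is the preferred MSCA alternative. Brute-force enumeration and the univariate Gaussian model are simplifications standing in for the CPP-based branch and bound and genetic algorithms of the thesis; the simulated data and sample sizes are assumptions.

```python
import numpy as np

def set_partitions(items):
    """Yield every partition of a list of items into non-empty blocks."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):        # put `first` into an existing block
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller                  # or give `first` its own block

def aic_of_partition(samples, partition):
    """AIC = -2 log L + 2k, with one (mean, variance) pair fitted per cluster."""
    loglik, k = 0.0, 0
    for block in partition:
        x = np.concatenate([samples[j] for j in block])
        var = x.var() + 1e-12                      # MLE variance of the pooled cluster
        loglik += -0.5 * len(x) * (np.log(2.0 * np.pi * var) + 1.0)
        k += 2
    return -2.0 * loglik + 2.0 * k

rng = np.random.default_rng(0)
samples = [rng.normal(m, 1.0, size=30) for m in (0.0, 0.1, 3.0, 3.2)]   # four samples
best = min(set_partitions(list(range(len(samples)))),
           key=lambda part: aic_of_partition(samples, part))
print(best)   # expected to group samples {0, 1} and {2, 3} for these means
```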

45. Approximation Methods for the Standard Deviation of Flow Times in the G/G/s Queue
Zhao, Xiaofeng. 01 August 2007.
We provide approximation methods for the standard deviation of flow time in system for a general multi-server queue with infinite waiting capacity (G/G/s). The approximations require only the mean and standard deviation (or coefficient of variation) of the inter-arrival and service time distributions, and the number of servers.
These approximations are simple enough to be implemented in manual or spreadsheet calculations, and comparisons with Monte Carlo simulations show that they give good approximations (within ±10%) for cases in which the coefficients of variation of the inter-arrival and service times are between 0 and 1. The approximations also have the desirable property of being exact for the Markovian M/M/s queue, as well as for some embedded Markov queuing models (Ek/M/1 and M/Eα/1).
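For concreteness, the sketch below shows the kind of two-moment calculation such approximations build on: the Erlang-C probability of waiting (exact for M/M/s) combined with an Allen-Cunneen-type estimate of the mean wait, computed from the arrival rate, service rate, number of servers, and squared coefficients of variation. This is a standard textbook baseline under illustrative parameters, not the thesis's own standard-deviation formula.

```python
import math

def erlang_c(lam, mu, s):
    """P(wait > 0) in an M/M/s queue; requires utilization lam / (s * mu) < 1."""
    a = lam / mu                                   # offered load
    rho = a / s
    summation = sum(a ** k / math.factorial(k) for k in range(s))
    top = a ** s / (math.factorial(s) * (1.0 - rho))
    return top / (summation + top)

def allen_cunneen_wait(lam, mu, s, ca2, cs2):
    """Approximate mean wait in queue for G/G/s (two-moment Allen-Cunneen form)."""
    rho = lam / (s * mu)
    wq_mms = erlang_c(lam, mu, s) / (s * mu * (1.0 - rho))   # exact M/M/s mean wait
    return wq_mms * (ca2 + cs2) / 2.0                         # two-moment correction

# Hypothetical example: 3 servers at 80% utilization, moderately variable service.
lam, mu, s = 2.4, 1.0, 3
wq = allen_cunneen_wait(lam, mu, s, ca2=1.0, cs2=0.64)
w = wq + 1.0 / mu          # mean flow time = mean wait + mean service time
```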
The practical significance of this research is that (1) many real-world queuing problems involve G/G/s queuing systems, and (2) predicting the range of variation of the time in system (rather than just the average) is needed for decision making. For example, one job shop facility with which the authors have worked guarantees its customers a nine-day turnaround time and must determine the minimum number of machines of each type required to achieve nine days as a "worst case" time in system. In many systems, the "worst case" value of flow time is very relevant because it represents the lead time that can safely be promised to customers. Estimating it requires both the average and the standard deviation of the time in system.
The usefulness of our results stems from the fact that they are computationally simple and thus provide quick approximations without resorting to complex numerical techniques or Monte Carlo simulations. While many accurate approximations for the G/G/s queue have been proposed previously, they often result in algebraically intractable expressions. This hinders attempts to derive closed-form solutions for the decision variables incorporated in optimization models and inevitably leads to the use of complex numerical methods. Furthermore, applying many of these approximations often requires specifying the actual distributions of the inter-arrival and service times. These results have also tended to focus on delay probabilities and average waiting time, and do not provide a means of estimating the standard deviation of the time in system.
We also extend the approximations to computing the standard deviation of flow times of each priority class in G/G/s priority queues and compare the results to those obtained via Monte Carlo simulations. These simulation experiments reveal good approximations for all priority classes, with the exception of the lowest priority class in queuing systems with high utilization. In addition, we use the approximations to estimate the average and the standard deviation of the total flow time through queuing networks and have validated these results via Monte Carlo simulations.
The primary theoretical contribution of this work is the derivation of an original expression for the coefficient of variation of waiting time in the G/G/s queue, which holds exactly for G/M/s and M/G/1 queues. We also perform an error sensitivity analysis of the formula and develop interpolation models for the probability of waiting, since an estimate of the probability of waiting in the G/G/s queue is needed to calculate the coefficient of variation of waiting time.
More broadly, we develop a general queuing system performance predictor that can be used to estimate a wide range of performance measures for steady-state queues with infinite waiting capacity, and we intend to make available a user-friendly tool implementing our approximation methods. The advantage of these models is that they make no assumptions about the distributions of the inter-arrival and service times. Our techniques generalize previously developed approximations and can also be used in queuing networks and priority queues. We hope these approximation methods will be beneficial to practitioners who want simple, quick, practical answers for their multi-server queuing systems.
Key words and Phrases: Queuing System, Standard Deviation, Waiting Time, Stochastic Process, Heuristics, G/G/s, Approximation Methods, Priority Queue, and Queuing Networks.

46. Financial Fraud: A Game of Cat and Mouse
Gornall, William. January 2010.
This thesis models rational criminals and regulators with flawed incentives. In it we develop a rational model of crime and regulation that we use to show the SEC's current incentive structure is ineffective at preventing fraud. Under our model, criminals balance the monetary rewards of larger frauds against an increased chance of being apprehended, and regulators design regulations to minimize either the damage caused by fraud or some other metric. We show that under this model, the SEC's focus on 'stats' and 'quick hits' leads to large frauds and a large social loss. We argue that regulators need to focus not just on successful prosecutions, but also on harm reduction and deterrence.
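The trade-off the model describes can be illustrated numerically under assumed functional forms, as in the toy sketch below: a criminal chooses a fraud size to maximize expected payoff, where the probability of being caught grows with both the fraud's size and the regulator's enforcement effort. The detection function, penalty, and parameter values are hypothetical and are not taken from the thesis.

```python
import numpy as np

def detection_prob(fraud_size, effort):
    """Assumed detection probability, increasing in fraud size and enforcement effort."""
    return 1.0 - np.exp(-effort * fraud_size)

def expected_payoff(fraud_size, effort, penalty=5.0):
    """Criminal's expected gain: keep the fraud if undetected, pay a penalty if caught."""
    p = detection_prob(fraud_size, effort)
    return fraud_size * (1.0 - p) - penalty * p

sizes = np.linspace(0.01, 20.0, 2000)
for effort in (0.05, 0.2, 0.5):                    # weak vs. strong enforcement
    best = sizes[np.argmax(expected_payoff(sizes, effort))]
    print(f"enforcement effort {effort:.2f}: optimal fraud size ~ {best:.2f}")
```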

47. The development of processing methods for a quantitative histological investigation of rat hearts
Jetton, Emily Hope. 15 November 2004.
In order to understand the mechanical function of cardiac muscle, it is important to first understand the microstructure of the tissue. Young et al. (1998) recognized that quantitative three-dimensional information about the ventricular myocardium is necessary to analyze myocardial mechanics, and developed a technique using confocal fluorescence laser scanning microscopy to obtain three-dimensional images. While this method worked well in rebuilding the myocardial tissue image by image, it was quite time-consuming and costly. Costa et al. (1999) also developed a method for three-dimensional reconstruction; their method, while less expensive and much less time-consuming, required sheet assumptions and did not look directly at the cross-fiber plane.
From Dr. Criscione's previous work on canines (Ashikaga et al., 2004), we found that the sheet structure can be accurately determined from cross-fiber sections without making any sheet assumptions. We have now expanded on those ideas and created a method to perform a quantitative histological investigation of rat hearts in a way that is both time- and cost-effective. We developed a processing method that preserves the orientation of the fiber and sheet angles. This method uses plastic embedding, since the dehydration process used in paraffin embedding tends to grossly distort tissue. Once the heart was fixed in formalin, we removed the septum and sliced it vertically several times. This allowed us to image the tissue at several depths and find an average fiber angle for each slice. Next, the specimen was hardened and the sheet orientation was evaluated using polarized light. Once both fiber and sheet angles were obtained from several depths within the septum, we constructed a three-dimensional model of the wall. This method is more cost-effective and less time-consuming than previous ones and can be used in the future to compare the myocardial tissue of diseased and healthy rat hearts, so that we may better understand the mechanical function of the heart as it remodels due to disease.

48. QTL analysis of physiological and biochemical traits contributing to drought resistance in Stylosanthes
Thumma, Bala Reddy. Ph.D. thesis, University of Queensland, 2001. Includes bibliographical references.

49. Computational methods for stochastic control problems with applications in finance
Mitchell, Daniel Allen. 01 July 2014.
Stochastic control is a broad tool with applications in several areas of academic interest. The financial literature is full of examples of decisions made under uncertainty, and stochastic control is a natural framework for dealing with these problems. Problems such as optimal trading, option pricing, and economic policy all fall under the purview of stochastic control. These problems often face nonlinearities that make analytical solutions infeasible, so numerical methods must be employed to find approximate solutions. In this dissertation, three types of stochastic control formulations are used to model applications in finance, and numerical methods are developed to solve the resulting nonlinear problems. To begin with, optimal stopping is applied to option pricing. Next, impulse control is used to study the problem of interest rate control faced by a nation's central bank, and finally a new type of hybrid control is developed and applied to an investment decision faced by money managers.
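As a pointer to what the first of these formulations looks like in practice, the sketch below prices an American put on a Cox-Ross-Rubinstein binomial tree: at every node the holder compares the value of stopping (exercising) with the value of continuing, which is exactly an optimal stopping problem. This is a textbook baseline with illustrative parameters, not the numerical methods developed in the dissertation.

```python
import numpy as np

def american_put_crr(s0, strike, r, sigma, maturity, steps=500):
    """Price an American put by backward induction on a CRR binomial tree."""
    dt = maturity / steps
    u = np.exp(sigma * np.sqrt(dt))                # up factor
    d = 1.0 / u                                    # down factor
    p = (np.exp(r * dt) - d) / (u - d)             # risk-neutral up probability
    disc = np.exp(-r * dt)
    j = np.arange(steps + 1)
    value = np.maximum(strike - s0 * u ** j * d ** (steps - j), 0.0)  # terminal payoff
    for n in range(steps - 1, -1, -1):
        j = np.arange(n + 1)
        prices = s0 * u ** j * d ** (n - j)
        cont = disc * (p * value[1:] + (1.0 - p) * value[:-1])        # continuation value
        value = np.maximum(cont, strike - prices)  # optimal stopping: exercise vs. continue
    return value[0]

price = american_put_crr(s0=100.0, strike=100.0, r=0.05, sigma=0.2, maturity=1.0)
```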

50. Reading without bounds: How different magnification methods affect the performance of students with low vision
Hallett, Elyse C. 18 November 2015.
Computer users with low vision must use additional methods to enlarge on-screen content in order to perceive it comfortably. One common method is a screen magnifier, which typically requires horizontal scrolling. Another is the web browser's zoom controls; when paired with the coding technique responsive web design (RWD), content remains within the browser window as it is enlarged. The purpose of the present study was to assess how these different magnification methods affect the reading comprehension and visual fatigue of people with low vision when reading on a computer screen. After reading with a screen magnifier for about an hour, participants tended to report higher levels of nausea. Younger participants also completed the second half of reading passages more quickly than the first with this method. This finding was likely due to a strong aversion to using a screen magnifier for extended periods, driven by the need to scroll horizontally.