About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

Minimizing Travel Time Through Multiple Media With Various Borders

Miick, Tonja 01 May 2013 (has links)
This thesis consists of two main chapters along with an introduction and conclusion. In the introduction, we address the inspiration for the thesis, which originates in a common calculus problem wherein travel time is minimized across two media separated by a single, straight boundary line. We then discuss the connection of this problem with physics via Snell's Law. The first core chapter develops this idea to include two media with a circular border. To make the problem easier to discuss, we describe it in terms of running and swimming speeds. We first address the case where the starting and ending points of the passage both lie on the boundary. We find the possible optimal paths and determine the conditions under which we travel along each path. Next we move the starting point to a location outside the boundary. While we are not able to determine the exact optimal path, we do arrive at some conclusions about what does not constitute the optimal path. In the second chapter, we alter this problem to address a rectangular enclosed boundary, which we refer to as a swimming pool. The variations in this scenario prove complex enough that we focus on the case where both starting and ending points are on the boundary. We start by considering starting and ending points on adjacent sides of the rectangle. We identify three possibilities for the fastest path and the conditions that make each path optimal. We then address the case where the points are on opposite sides of the pool. We identify the possible minimum-time paths and once again ascertain the conditions that make each path optimal. We conclude by briefly designating some other scenarios that we began to investigate but were not able to explore in depth. They promise insightful results, and we hope to address them in the future.
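A minimal sketch of the introductory two-media problem, with assumed speeds and geometry rather than values from the thesis: the crossing point on a straight boundary is found numerically, and Snell's Law is checked at the optimum.

```python
# Classic two-media travel time problem: choose a crossing point x on a
# straight boundary to minimize total time, then verify Snell's law there.
# Speeds v1, v2 and the geometry (a, b, d) are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

v1, v2 = 5.0, 2.0         # "running" and "swimming" speeds (assumed)
a, b, d = 3.0, 4.0, 10.0  # start at (0, a) above the boundary, end at (d, -b)

def travel_time(x):
    # Time in medium 1 plus time in medium 2, crossing the boundary at (x, 0).
    return np.hypot(a, x) / v1 + np.hypot(b, d - x) / v2

res = minimize_scalar(travel_time, bounds=(0.0, d), method="bounded")
x_opt = res.x

# Snell's law: sin(theta1)/v1 == sin(theta2)/v2 at the optimal crossing point.
sin1 = x_opt / np.hypot(a, x_opt)
sin2 = (d - x_opt) / np.hypot(b, d - x_opt)
print(f"x* = {x_opt:.4f}, sin(t1)/v1 = {sin1 / v1:.6f}, sin(t2)/v2 = {sin2 / v2:.6f}")
```

At the optimum the two printed ratios agree, which is exactly Snell's Law for the minimum-time path.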
262

Eradicating Malaria: Improving a Multiple-Timestep Optimization Model of Malarial Intervention Policy

Ohashi, Taryn M 18 May 2013 (has links)
Malaria is a preventable and treatable blood-borne disease whose complications can be fatal. Although many interventions exist to reduce the impact of malaria, the optimal method of distributing these interventions in a geographical area with limited resources must be determined. This thesis refines a model that couples an integer linear program with a compartmental model of epidemiology, an SIR model of ordinary differential equations. The objective of the model is to find an intervention strategy over multiple time steps and multiple geographic regions that minimizes the number of days people spend infected with malaria. We refine the resolution of the model and conduct a sensitivity analysis on its parameter values.
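A minimal sketch, with assumed parameter values, of the SIR building block described above; an intervention is represented here simply as a fractional reduction of the transmission rate, whereas the thesis couples the dynamics to an integer linear program over regions and time steps.

```python
# SIR compartmental model with a single intervention knob `u` (e.g., bed-net
# coverage) that scales down the transmission rate beta. All values assumed.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

beta, gamma = 0.3, 0.1   # transmission and recovery rates (illustrative)
u = 0.4                  # intervention coverage, reduces transmission (assumed)

def sir(t, y):
    S, I, R = y
    N = S + I + R
    dS = -(1 - u) * beta * S * I / N
    dI = (1 - u) * beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

sol = solve_ivp(sir, (0, 365), [990.0, 10.0, 0.0], dense_output=True)
t = np.linspace(0, 365, 366)
# Person-days infected over the horizon: the quantity the model minimizes.
infected_days = trapezoid(sol.sol(t)[1], t)
print(f"Person-days infected over one year: {infected_days:.0f}")
```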
263

Efficient Simulation, Accurate Sensitivity Analysis and Reliable Parameter Estimation for Delay Differential Equations

ZivariPiran, Hossein 03 March 2010 (has links)
Delay differential equations (DDEs) are a class of differential equations that have received considerable recent attention and been shown to model many real-life problems, traditionally formulated as systems of ordinary differential equations (ODEs), more naturally and more accurately. Ideally, a DDE modeling package should provide facilities for approximating the solution, performing a sensitivity analysis, and estimating unknown parameters. In this thesis we propose new techniques for efficient simulation, accurate sensitivity analysis and reliable parameter estimation of DDEs. We propose a new framework for designing a delay differential equation (DDE) solver which works with any supplied initial value problem (IVP) solver that is based on a general linear method (GLM) and can provide dense output. This is done by treating a general DDE as a special example of a discontinuous IVP. We identify a precise process for the numerical techniques used when solving the implicit equations that arise on a time step, such as when the underlying IVP solver is implicit or the delay vanishes. We introduce an equation governing the dynamics of sensitivities for the most general system of parametric DDEs. Then, taking the same view as in the simulation (DDEs as discontinuous ODEs), we introduce a formula for finding the size of the jumps that appear at discontinuity points when the sensitivity equations are integrated. This leads to an algorithm which can compute sensitivities for various kinds of parameters very accurately. We also develop an algorithm for reliable parameter identification of DDEs. We propose a method for adding extra constraints to the optimization problem, converting a possibly non-smooth optimization into a smooth problem. These constraints are handled effectively using information from the simulator and the sensitivity analyzer. Finally, we discuss the structure of our evolving modeling package DDEM. We present a process that has been used for incorporating existing codes to reduce the implementation time. We discuss the object-oriented paradigm as a way of achieving a manageable design with reusable and customizable components. The package is programmed in C++ and provides user-friendly calling sequences. The numerical results are very encouraging and show the effectiveness of the techniques.
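A minimal method-of-steps sketch, using an assumed example equation rather than the DDEM package: the scalar DDE y'(t) = -y(t - tau) with constant history is integrated by treating the delayed value as already-computed input, in line with the discontinuous-IVP view described above.

```python
# Method of steps for y'(t) = -y(t - tau), y(t) = 1 for t <= 0 (assumed example).
# On each interval of length tau the delayed term is a known function of t,
# so the DDE reduces to a sequence of ODE initial value problems.
import numpy as np

tau, h, T = 1.0, 0.001, 5.0
n_hist = int(tau / h)                 # number of steps spanning one delay
ts = np.arange(-tau, T + h, h)
ys = np.empty_like(ts)
ys[: n_hist + 1] = 1.0                # constant initial history on [-tau, 0]

for i in range(n_hist, len(ts) - 1):
    # Forward Euler; the delayed value y(t - tau) is the stored ys[i - n_hist].
    ys[i + 1] = ys[i] + h * (-ys[i - n_hist])

print(f"y(5) = {ys[-1]:.4f}")
```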
265

En arbeidsplass, to arbeidsgivere : Opplevelser av trygghet, ansettelsesbarhet, engasjement og solidaritet på en arbeidsplass som bruker innleid arbeidskraft / One workplace, two employers : Experiences of security, employability, commitment and solidarity in a company that use temporary hired employees

Bru, Linn Sunniva January 2012 (has links)
The purpose of the study is to examine, from a broad perspective, how the work-life situation and workplace interaction are experienced by different types of employees at a workplace that uses temporary agency staff. The study also seeks to build an understanding of how permanent and agency-hired personnel in a company experience their work-life situation through four perspectives: security, employability, commitment, and solidarity. Ten qualitative interviews were conducted. Drawing on the collected empirical material, earlier research, and the theoretical starting points, an analysis is carried out that describes and explains the reasons for the similarities and differences in how the two groups of personnel experience the four perspectives. How the four perspectives influence one another is analyzed with the help of a matrix. The broad perspective was valuable for understanding the experiences of personnel affected by the use of temporary agency staff. The study's most important contribution to research is a deeper understanding of how permanent and agency-hired employees experience their work-life situation and interaction at a workplace. / The research intention is to study how the workplace situation and interaction are perceived by employees working at a workplace that also uses temporary staff hired through employment agencies. I investigate how ordinary employees and temporarily hired employees experience four different aspects: security, employability, commitment, and solidarity. The study is based on ten qualitative interviews. Using the collected empirical material, earlier research, and theory, I carry out an analysis that describes and explains the reasons for the similarities and differences between the two groups of employees in their views of the four aspects studied. How the four aspects influence one another is analyzed using a matrix. Through the broad perspective on security, employability, organizational commitment, and solidarity, I found that the experiences of the permanent employees are affected by the company's use of temporarily hired staff.
266

Further discussion in considering structural break for the long-term relationship between health policy and GDP per capita

Feng, I-ling 26 August 2010 (has links)
This paper uses panel data for 11 OECD countries over the period from 1971 to 2006. Unlike traditional cointegration models, which omit the impact of structural breaks from the analysis, this paper applies the panel cointegration test with structural breaks proposed by Westerlund (2006), a panel unit root test, and a panel dynamic OLS test. The empirical results indicate that health care expenditure (HCE) and economic growth (GDP per capita) are non-stationary series and that a long-term cointegration relationship exists between the two variables. Moreover, a positive correlation between HCE and economic growth is found in the panel dynamic OLS model. The researcher concludes that investing in health capital improves human capital, which in turn boosts economic growth in the sample countries, and vice versa. More importantly, allowing structural breaks in the cointegration analysis improves the reliability of the estimation and provides more detailed and specific information on the consequences of momentous events for the two variables, enabling policy makers and health economists to propose more effective strategies.
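A rough illustration of the testing sequence on synthetic data: Westerlund's (2006) panel test with structural breaks is not available in standard Python libraries, so per-country unit root and Engle-Granger cointegration tests stand in for it here; all series are simulated and the setup is assumed.

```python
# Simplified stand-in for the paper's tests: per-country ADF unit root tests
# and Engle-Granger cointegration tests on simulated HCE / GDP series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(0)
countries, T = [f"c{i}" for i in range(11)], 36   # 11 countries, 1971-2006

results = {}
for c in countries:
    gdp = np.cumsum(rng.normal(0.02, 0.05, T))    # I(1) log GDP per capita
    hce = 0.8 * gdp + rng.normal(0, 0.02, T)      # cointegrated HCE (assumed)
    p_unit_root = adfuller(gdp)[1]                # H0: series has a unit root
    p_coint = coint(hce, gdp)[1]                  # H0: no cointegration
    results[c] = (p_unit_root, p_coint)

df = pd.DataFrame(results, index=["p(unit root)", "p(no coint.)"]).T
print(df.round(3))
```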
267

Estimation of Orthogonal Regression Under Censored Data

Ho, Chun-shian 19 July 2008 (has links)
The method of least squares is generally used for regression analysis. It is usually assumed that the errors are confined to the dependent variable, but in many cases both dependent and independent variables are measured with stochastic errors. Orthogonal regression is the statistical method used when both variables under investigation are subject to stochastic errors. Furthermore, the measurements sometimes are not exact but have been censored. In this situation, performing orthogonal regression directly on the censored data may yield incorrect estimates of the relationship between the two variables. In this work we discuss the estimation of orthogonal regression when the data are censored in one variable, and we provide an estimation method together with two criteria for when the method is applicable. When the observations satisfy the criteria provided here, the estimated orthogonal regression line will not differ greatly from the theoretical orthogonal regression line.
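A minimal sketch of orthogonal regression (total least squares) on fully observed, uncensored data, the building block the thesis extends to censored samples; all data here are simulated.

```python
# Orthogonal (total least squares) regression via SVD: the fitted line
# minimizes perpendicular distances, appropriate when both x and y are noisy.
import numpy as np

rng = np.random.default_rng(1)
x_true = np.linspace(0, 10, 50)
x = x_true + rng.normal(0, 0.3, 50)               # noise in the independent variable
y = 2.0 * x_true + 1.0 + rng.normal(0, 0.3, 50)   # and in the dependent one

Z = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(Z, full_matrices=False)
a, b = vt[-1]                  # normal vector of the best-fit line: a*dx + b*dy = 0
slope = -a / b
intercept = y.mean() - slope * x.mean()
print(f"orthogonal fit: y = {slope:.3f} x + {intercept:.3f}")
```

The fitted direction is the right singular vector with the smallest singular value, so the line minimizes perpendicular rather than vertical distances.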
268

Parametric and Bayesian Modeling of Reliability and Survival Analysis

Molinares, Carlos A. 01 January 2011 (has links)
The objective of this study is to compare Bayesian and parametric approaches to determine which is best for estimating reliability in complex systems. Determining reliability is particularly important in business and medical contexts. As expected, the Bayesian method showed the best results in assessing the reliability of systems. In the first study, the Bayesian reliability function under the Higgins-Tsokos loss function, using the Jeffreys prior, performs similarly to the Bayesian reliability function based on the squared-error loss. In addition, the Higgins-Tsokos loss function was found to be as robust as the squared-error loss function and slightly more efficient. In the second study, we illustrated that, through the power law intensity function, Bayesian analysis is applicable to the power law process. The power law intensity function is the key entity of the power law process (also called the Weibull process or the non-homogeneous Poisson process). It gives the rate of change of a system's reliability as a function of time. First, using real data, we demonstrated that one of our two parameters behaves as a random variable. With the generated estimates, we obtained a probability density function that characterizes the behavior of this random variable. Using this information, under the commonly used squared-error loss function and with a proposed adjusted estimate for the second parameter, we obtained a Bayesian reliability estimate of the failure probability distribution characterized by the power law process. Then, using a Monte Carlo simulation, we showed the superiority of the Bayesian estimate over the maximum likelihood estimate, as well as the better performance of the proposed estimate compared with its maximum likelihood counterpart. In the third study, a Bayesian sensitivity analysis was performed via Monte Carlo simulation, using the same parameter as in the previous study, under the commonly used squared-error loss function and using mean square error comparison. The analysis was extended to the second parameter as a function of the first, based on the relationship between their maximum likelihood estimates. The simulation procedure demonstrated that the Bayesian estimates are superior to the maximum likelihood estimates and that they are sensitive to the selection of the prior distribution. We also found that the proposed adjusted estimate for the second parameter performs better under a noninformative prior. In the fourth study, a Bayesian approach was applied to real data from breast cancer research. The purpose of the study was to investigate the applicability of a Bayesian analysis to the survival times of breast cancer data and to justify the applicability of the Bayesian approach to this domain. The estimation of one parameter, the survival function, and the hazard function were analyzed. The simulation analysis showed that the Bayesian estimate of the parameter performed better than the estimated value under the Wheeler procedure. The excellent performance of the Bayesian estimate is reflected even for small sample sizes. The Bayesian survival function was also found to be more efficient than its parametric counterpart. In the last study, a Bayesian analysis was carried out to investigate the sensitivity to the choice of the loss function. One of the parameters of the distribution that characterizes the survival times for breast cancer data was estimated using a Bayesian approach under two different loss functions, and the estimates of the survival function were determined in the same setting. The simulation analysis showed that the choice of the squared-error loss function is robust in estimating the parameter and the survival function.
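A hedged illustration, not the thesis's exact derivation, of the MLE-versus-Bayes comparison for the shape parameter of the power law process: conditional on n failures in (0, T], the scaled failure times have density beta * u**(beta - 1) on (0, 1), which makes a Gamma prior on beta conjugate; the prior hyperparameters below are assumed.

```python
# Compare the MLE of the power law process shape parameter with a conjugate
# Bayes estimate (posterior mean = Bayes rule under squared-error loss).
import numpy as np

rng = np.random.default_rng(2)
beta_true, T, n = 1.8, 100.0, 25
u = rng.random(n) ** (1.0 / beta_true)   # t_i / T with density beta*u**(beta-1)
t = np.sort(u * T)

S = np.sum(np.log(T / t))                # sufficient statistic
beta_mle = n / S                         # maximum likelihood estimate
a0, b0 = 1.0, 0.5                        # Gamma(a0, b0) prior on beta (assumed)
beta_bayes = (a0 + n) / (b0 + S)         # posterior mean under squared-error loss
print(f"true {beta_true}, MLE {beta_mle:.3f}, Bayes {beta_bayes:.3f}")
```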
269

Capturing random utility maximization behavior in continuous choice data : application to work tour scheduling

Lemp, Jason David 06 November 2012 (has links)
Recent advances in travel demand modeling have concentrated on adding behavioral realism by focusing on an individual's activity participation, and, to account for trip-chaining, tour-based methods are largely replacing trip-based methods. Alongside these advances and innovations in dynamic traffic assignment (DTA) techniques, however, time-of-day (TOD) modeling remains an Achilles' heel. As congestion worsens and operators turn to variable road pricing, sensors are added to networks, cell phones are GPS-enabled, and DTA techniques become practical, accurate time-of-day forecasts become critical. In addition, most models highlight tradeoffs between travel time and cost while neglecting variations in travel time. Research into stated and revealed choices suggests that travel time variability can be highly consequential. This dissertation introduces a method for imputing travel time variability information as a continuous function of time-of-day, while utilizing an existing method for imputing average travel times (by TOD). The methods employ ordinary least squares (OLS) regression techniques and rely on reported travel time information from survey data (typically available to researchers), as well as travel time and distance estimates by origin-destination (OD) pair for free-flow and peak-period conditions from network data. This dissertation also develops two models of activity timing that recognize the imputed average travel times and travel time variability. Both models are grounded in random utility theory, and both recognize potential correlations across time-of-day alternatives. In addition, both models are estimated in a Bayesian framework using Gibbs sampling and Metropolis-Hastings (MH) algorithms, and model estimation relies on San Francisco Bay Area data collected in 2000. The first model is the continuous cross-nested logit (CCNL), which represents tour outbound departure time choice in a continuous context (rather than discretizing time) over an entire day. The model is formulated as a generalization of the discrete cross-nested logit (CNL) for continuous choice and is the first random utility maximization model able to capture correlations across alternatives in a continuous choice context. The model is then compared to the continuous logit, which represents a generalization of the multinomial logit (MNL) for continuous choice. Empirical results suggest that the CCNL outperforms the continuous logit in terms of predictive accuracy and reasonableness of predictions for three tolling policy simulations. Moreover, while this dissertation focuses on time-of-day modeling, the CCNL could be used in a number of other continuous choice contexts (e.g., location/destination, vehicle usage, trip durations, and profit-maximizing production). The second model is a bivariate multinomial probit (BVMNP) model. While the model relies on discretization of time (into 30-minute intervals), it captures both key dimensions of a tour's timing (rather than just one, as in this dissertation's application of the CCNL model), which is important for tour- and activity-based models of travel demand. The BVMNP's ability to capture correlations across scheduling alternatives is something no existing two-dimensional choice model of tour timing can claim. Both models represent substantial contributions for continuous choice modeling in transportation, business, biology, and various other fields.
In addition, the empirical results of the models evaluated here enhance our understanding of individuals' time-of-day decisions. For instance, average travel time and its variance are estimated to have a negative effect on workers' utilities, as expected, but are not found to be of great practical relevance here, probably because most workers are rather constrained in their activity scheduling and/or work hours. However, correlations are found to be rather strong in both models, particularly for home-to-work journeys, suggesting that if models fail to accommodate such correlations, biased application results may emerge.
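A minimal sketch of the random-utility building block behind both models: multinomial logit probabilities over discretized departure-time bins. The CCNL and BVMNP of the dissertation generalize this by allowing correlation across alternatives; all utilities and coefficients below are assumed for illustration, with negative time and variability coefficients echoing the empirical findings.

```python
# Multinomial logit over departure-time bins: utility falls with average
# travel time and its variability, both of which peak near 8:00 here.
import numpy as np

times = np.arange(5.0, 11.0, 0.5)                        # departure times, 5:00-10:30
avg_tt = 20 + 15 * np.exp(-0.5 * (times - 8.0) ** 2)     # peak congestion at 8:00
tt_var = 0.2 * avg_tt                                    # variability proxy (assumed)
b_time, b_var = -0.08, -0.05                             # negative, per the findings

v = b_time * avg_tt + b_var * tt_var                     # systematic utility per bin
p = np.exp(v - v.max())                                  # numerically stable softmax
p /= p.sum()                                             # logit choice probabilities
print(dict(zip(times.tolist(), p.round(3))))
```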
270

On the Aubry-Mather theory for partial differential equations and the stability of stochastically forced ordinary differential equations

Blass, Timothy James 01 June 2011 (has links)
This dissertation is organized into four chapters: an introduction followed by three chapters, each based on one of three separate papers. In Chapter 2 we consider gradient descent equations for energy functionals of the type [mathematical equation], where A is a second-order uniformly elliptic operator with smooth coefficients. We consider the gradient descent equation for S, where the gradient is an element of the Sobolev space H^β, β ∈ (0, 1), with a metric that depends on A and a positive number γ > sup |V₂₂|. The main result of Chapter 2 is a weak comparison principle for such a gradient flow. We extend our methods to the case where A is a fractional power of an elliptic operator, and we provide an application to the Aubry-Mather theory for partial differential equations and pseudo-differential equations by finding plane-like minimizers of the energy functional. In Chapter 3 we investigate the differentiability of the minimal average energy associated to the functionals [mathematical equation] using numerical and perturbation methods. We use the Sobolev gradient descent method as a numerical tool to compute solutions of the Euler-Lagrange equations with periodicity conditions; this is the cell problem in homogenization. We use these solutions to determine the minimal average energy as a function of the slope. We also obtain a representation of the solutions to the Euler-Lagrange equations as a Lindstedt series in the perturbation parameter ε, and use this to confirm our numerical results. Additionally, we prove convergence of the Lindstedt series. In Chapter 4 we present a method for determining the stability of a class of stochastically forced ordinary differential equations, where the forcing term is obtained by passing white noise through a filter of arbitrarily high degree. We use the Fokker-Planck equation to write a partial differential equation for the second moments, which we turn into an eigenvalue problem for a second-order differential operator. We develop ladder operators to determine analytic expressions for the eigenvalues and eigenfunctions of this differential operator, and thus determine the stability.
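A hedged sketch of the Chapter 4 stability question, solved here with a Lyapunov equation rather than the dissertation's ladder-operator construction: white noise passed through a first-order linear filter (a low-degree stand-in for the arbitrarily high-degree filter) forces a damped oscillator, and mean-square stability is read off the stationary second-moment matrix; all parameter values are assumed.

```python
# Second-moment stability of a filtered-noise-forced linear ODE via the
# stationary Lyapunov equation A M + M A^T + B B^T = 0.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

k, c = 1.0, 0.4   # oscillator stiffness and damping (assumed)
rho = 2.0         # filter rate: dz = -rho*z dt + dW (assumed first-order filter)

# Augmented state (x, v, z): dx = v dt, dv = (-k x - c v + z) dt, z filtered noise.
A = np.array([[0.0, 1.0, 0.0],
              [-k, -c, 1.0],
              [0.0, 0.0, -rho]])
B = np.array([[0.0], [0.0], [1.0]])

M = solve_continuous_lyapunov(A, -B @ B.T)   # stationary second moments
stable = np.all(np.linalg.eigvals(A).real < 0) and np.all(np.linalg.eigvalsh(M) > 0)
print(f"E[x^2] = {M[0, 0]:.4f}, mean-square stable: {stable}")
```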
