611

Probabilistic bicriteria models : sampling methodologies and solution strategies

Rengarajan, Tara 14 December 2010 (has links)
Many complex systems involve simultaneous optimization of two or more criteria, with uncertainty of system parameters being a key driver in decision making. In this thesis, we consider probabilistic bicriteria models in which we seek to operate a system reliably while keeping operating costs low. High reliability translates into low risk of uncertain events that can adversely impact the system. In bicriteria decision making, a good solution must, at the very least, have the property that the criteria cannot both be improved relative to it. The problem of identifying a broad spectrum of such solutions can be highly involved, with no analytical or robust numerical techniques readily available, particularly when the system involves nontrivial stochastics. This thesis serves as a step toward addressing this issue. We show how to construct, using Monte Carlo sampling, approximate solutions that are sufficiently close to optimal, easily calculable, and subject to a low margin of error. Our approximations can be used in bicriteria decision making across several domains that involve significant risk, such as finance, logistics and revenue management.
As a first approach, we place a premium on a low risk threshold and examine the effects of a sampling technique that guarantees a prespecified upper bound on risk. Our model incorporates a novel construct in the form of an uncertain disrupting event whose time and magnitude of occurrence are both random. We show that stratifying the sample observations in an optimal way can yield savings of a high order. We also demonstrate the existence of generalized stratification techniques which enjoy this property and which can be used without full distributional knowledge of the parameters that govern the time of disruption. Our work thus provides a computationally tractable approach for solving a wide range of bicriteria models via sampling with a probabilistic guarantee on risk. Improved proximity to the efficient frontier is illustrated in the context of a perishable inventory problem.
In contrast to this approach, we next aim to solve a bicriteria facility sizing model in which risk is the probability that the system fails to jointly satisfy a vector-valued random demand. Here, instead of seeking a probabilistic guarantee on risk, we seek to approximate well the efficient frontier for a range of risk levels of interest. Replacing the risk measure with an empirical measure induced by a random sample, we proceed to solve a family of parametric chance-constrained and cost-constrained models. These two sampling-based approximations differ substantially in terms of what is known regarding their asymptotic behavior, their computational tractability, and even their feasibility as compared to the underlying "true" family of models. We establish, however, that in the bicriteria setting we have the freedom to employ either the chance-constrained or the cost-constrained family of models, improving our ability both to characterize the quality of the efficient frontiers arising from these sampling-based approximations and to solve the approximating model itself. Our computational results reinforce the need for such flexibility and enable us to understand the behavior of confidence bounds for the efficient frontier.
As a final step, we further study the efficient frontier in the cost-versus-risk tradeoff for the facility sizing model in the special case in which the (cumulative) distribution function of the underlying demand vector is concave in a region defined by a highly reliable system. In this case, the "true" efficient frontier is convex. We show that the convex hull of the efficient frontier of a sampling-based approximation: (i) can be computed in strongly polynomial time by relying on a reformulation as a max-flow problem via the well-studied selection problem; and (ii) converges uniformly to the true efficient frontier when the latter is convex. We conclude with numerical studies that demonstrate the aforementioned properties.
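Illustrative sketch (not taken from the thesis): the short Python example below mimics the sampling-based, chance-constrained view of a cost-versus-risk frontier for a hypothetical single-facility sizing problem. The unit cost, demand distribution, and risk levels are invented; the point is only that, once the risk measure is replaced by the empirical measure induced by a Monte Carlo sample, the cheapest capacity meeting each risk level is an order statistic of that sample.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-facility sizing problem: cost grows linearly with capacity,
# risk is the probability that random demand exceeds capacity.
unit_cost = 3.0
demand = rng.lognormal(mean=2.0, sigma=0.5, size=5000)   # Monte Carlo sample
sorted_demand = np.sort(demand)
n = len(sorted_demand)

def empirical_risk(capacity):
    # The "true" risk measure replaced by the empirical measure induced by the sample.
    return np.mean(demand > capacity)

# Chance-constrained view: for each risk level alpha, the cheapest capacity whose
# empirical risk does not exceed alpha is an order statistic of the sample.
frontier = []
for alpha in np.linspace(0.01, 0.25, 25):
    k = int(np.ceil((1.0 - alpha) * n)) - 1      # index of the covering order statistic
    capacity = sorted_demand[k]
    frontier.append((unit_cost * capacity, empirical_risk(capacity)))

for cost, risk in frontier[:5]:
    print(f"cost {cost:8.2f}   empirical risk {risk:.3f}")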
612

Characterization of insoluble carbonaceous material in atmospheric particulates by pyrolysis/gas chromatography/mass spectrometry procedures

Kunen, Steven Maxwell January 1978 (has links)
No description available.
613

The Use of Sampling in Archaeological Survey

Mueller, James W. January 1972 (has links)
No description available.
614

Sampling Frequency for Semi-Arid Streams and Rivers: Implications for National Parks in the Sonoran Desert Network

Lindsey, Melanie January 2010 (has links)
In developing a water quality monitoring program, the sampling frequency chosen should be able to reliably detect changes in water quality trends. Three datasets are evaluated for Minimal Detectable Change in surface water quality to examine the loss of trend detectability as sampling frequency decreases for sites within the National Park Service's Sonoran Desert Network. The records are re-sampled as quarterly and annual datasets, and step and linear trends are superimposed over the natural data to estimate the time it takes the Seasonal Kendall Test to detect trends of a specific threshold. Wilcoxon rank-sum analyses found that monthly and quarterly sampling consistently draw from the same distribution of trend detection times; however, annual sampling can take significantly longer. Therefore, even with a loss in power from reduced sampling, quarterly sampling of Park waters adequately detects trends (70%) compared to monthly sampling, whereas annual sampling is insufficient for trend detection (30%).
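Illustrative sketch (synthetic data, not the Park Service records): the example below builds a ten-year monthly series with a small superimposed linear trend, resamples it to quarterly and annual frequency, and computes a simplified Seasonal Kendall S statistic for each, in the spirit of the frequency comparison described above.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic 10-year monthly record with a small upward trend plus noise
# (stand-in for a water-quality time series; parameter values are made up).
dates = pd.date_range("2000-01-01", periods=120, freq="MS")
monthly = pd.Series(10 + 0.02 * np.arange(120) + rng.normal(0, 1.0, 120), index=dates)

def seasonal_kendall_s(series, season_labels):
    # Sum the Mann-Kendall S statistic within each season so that seasonal
    # cycles are not mistaken for (or allowed to mask) a monotonic trend.
    s = 0.0
    labels = np.asarray(season_labels)
    for season in np.unique(labels):
        x = series.to_numpy()[labels == season]
        for i in range(len(x) - 1):
            s += np.sign(x[i + 1:] - x[i]).sum()
    return s

quarterly = monthly.resample("QS").mean()
annual = monthly.resample("YS").mean()

print("monthly   S =", seasonal_kendall_s(monthly, monthly.index.month))
print("quarterly S =", seasonal_kendall_s(quarterly, quarterly.index.quarter))
print("annual    S =", seasonal_kendall_s(annual, np.zeros(len(annual))))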
615

Models and Methods for Multiple Resource Constrained Job Scheduling under Uncertainty

Keller, Brian January 2009 (has links)
We consider a scheduling problem where each job requires multiple classes of resources, which we refer to as the multiple resource constrained scheduling problem (MRCSP). Potential applications include team scheduling problems that arise in service industries such as consulting and operating room scheduling. We focus on two general cases of the problem. The first case considers uncertainty of processing times, due dates, and resource availability, which we denote as the stochastic MRCSP with uncertain parameters (SMRCSP-U). The second case considers uncertainty in the number of jobs to schedule, which arises in consulting and defense contracting when companies bid on future contracts but may or may not win the bid. We call this problem the stochastic MRCSP with job bidding (SMRCSP-JB).
We first provide formulations of each problem under the framework of two-stage stochastic programming with recourse. We then develop solution methodologies for both problems. For the SMRCSP-U, we develop an exact solution method based on the L-shaped method for problems with a moderate number of scenarios. Several algorithmic enhancements are added to improve efficiency. Then, we embed the L-shaped method within a sampling-based solution method for problems with a large number of scenarios. We modify a sequential sampling procedure to allow for approximate solution of integer programs and prove desired properties. The sampling-based method is applicable to two-stage stochastic integer programs with integer first-stage variables. Finally, we compare the solution methodologies on a set of test problems.
For SMRCSP-JB, we utilize the disjunctive decomposition (D2) algorithm for stochastic integer programs with mixed-binary subproblems. We develop several enhancements to the D2 algorithm. First, we explore the use of a cut generation problem restricted to a subspace of the variables, which yields significant computational savings. Then, we examine generating alternative disjunctive cuts based on the generalized upper bound (GUB) constraints that appear in the second stage of the SMRCSP-JB. We establish convergence of all D2 variants and present computational results on a set of instances of SMRCSP-JB.
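Illustrative sketch (a toy problem, not the thesis's MRCSP formulations or L-shaped method): the example below shows the basic shape of a two-stage stochastic program with recourse solved by sample average approximation, where an integer first-stage decision is evaluated against a sample of second-stage scenarios.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-stage problem with simple recourse: the first stage reserves
# x units of a resource at a known unit cost; once the uncertain workload is
# revealed, any shortfall is covered at a higher recourse cost.
reserve_cost = 5.0
recourse_cost = 12.0
workloads = rng.poisson(lam=20, size=1000)    # sampled second-stage scenarios

def saa_objective(x):
    # First-stage cost plus the sample average of the second-stage recourse cost.
    shortfall = np.maximum(workloads - x, 0)
    return reserve_cost * x + recourse_cost * shortfall.mean()

candidates = range(0, 61)                     # integer first-stage decisions
best_x = min(candidates, key=saa_objective)
print("SAA-optimal reservation:", best_x)
print("estimated total cost   :", round(saa_objective(best_x), 2))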
616

Situational and Trait Influences on Dynamic Justice

Stein, Jordan January 2010 (has links)
As the past twenty years of justice research have demonstrated, perceiving the workplace as fair is associated with higher levels of organizational commitment, job satisfaction, work-related effort, and acceptance of work-related policies and procedures, and with decreased absenteeism. However, although not always explicitly stated in theories of fairness, there has been a tacit understanding that justice perceptions are not static but are influenced by a variety of factors. In short, extant justice theories assume there are underlying dynamic elements within the construct, but the measures and previous research examining justice have assessed it as if it were a stable and static perception. The purpose of this research, therefore, was to take a first step toward exploring and describing the frequency and intensity of injustice perceptions at work and how individuals' affective states and traits influence these perceptions. A snowball sample of working individuals from across the United States provided ESM data by responding to palmtop computers at randomly scheduled intervals several times a day for three work weeks. Additionally, participants provided event-contingent injustice data when they perceived unfair events during their workday. The results of this examination, as well as the use of experience sampling for the study of dynamic workplace injustice, are discussed.
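Illustrative sketch (hypothetical protocol parameters, not the study's actual schedule): the example below generates the kind of randomly scheduled prompt times used in signal-contingent experience sampling, with five prompts per workday over fifteen workdays.

import random
from datetime import datetime, timedelta

random.seed(3)

# Hypothetical signal-contingent schedule in the spirit of experience sampling:
# each 9:00-17:00 workday is split into equal blocks and one prompt time is
# drawn uniformly at random within each block, for 15 workdays (3 work weeks).
def esm_schedule(first_day, workdays=15, prompts_per_day=5):
    prompts, day, scheduled = [], first_day, 0
    block = timedelta(hours=8) / prompts_per_day
    while scheduled < workdays:
        if day.weekday() < 5:                                  # Monday to Friday only
            start = day.replace(hour=9, minute=0, second=0, microsecond=0)
            for b in range(prompts_per_day):
                offset = timedelta(seconds=random.uniform(0, block.total_seconds()))
                prompts.append(start + b * block + offset)
            scheduled += 1
        day += timedelta(days=1)
    return prompts

for prompt in esm_schedule(datetime(2024, 1, 1))[:5]:
    print(prompt.strftime("%Y-%m-%d %H:%M"))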
617

A Model of Information Sampling using Visual Occlusion

Chen, Huei-Yen Winnie 08 January 2014 (has links)
Three stages of research were carried out to investigate the use of the self-paced visual occlusion technique and to model visual information sampling.
Stage 1. A low-fidelity driving simulator study was carried out to investigate the effect of glance duration, a key parameter of the self-paced occlusion technique, on occlusion times. Results from this experiment, paired with analysis of data available from an on-road driving study, found an asymptotic relationship between the two variables. This finding has practical implications for establishing the appropriate glance duration in experimental studies that use self-paced visual occlusion.
Stage 2. A model of visual information sampling was proposed, which incorporates elements of uncertainty development, subjective thresholds, and an awareness of past and current states of the system during occlusion. Using this modelling framework, average information sampling behaviour in occlusion studies can be analysed via mean occlusion times, and moment-by-moment responses to system output can be analysed via individual occlusion times. Analysis using the on-road driving data found that experienced drivers demonstrated a more complex and dynamic sampling strategy than inexperienced drivers.
Stage 3. Findings from Stage 2 led to a simple monitoring experiment that investigated whether human operators are in fact capable of predicting system output when temporarily occluded. The platform was designed such that the dynamics of the system naturally facilitated predictions without making the monitoring task trivial. Results showed that participants were able to take predictive information into account in their sampling decisions, in addition to using the content of the information they observed from each visual sample.
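Illustrative sketch (assumed parameter values, not the thesis's model): the example below simulates a simple uncertainty-threshold account of occlusion behaviour, in which uncertainty grows during occlusion and a glance is taken once it crosses a subjective threshold, yielding a record of occlusion times.

import numpy as np

# Toy uncertainty-threshold sampler loosely in the spirit of the proposed model;
# all parameter values are assumed for illustration. While the display is occluded,
# the operator's uncertainty about a drifting signal grows with the square root of
# elapsed time (the standard deviation of an unseen random walk); a new glance is
# taken once uncertainty crosses a subjective threshold, which resets it to zero.
drift_sd = 0.8        # signal variability per time step
threshold = 2.0       # subjective uncertainty threshold
steps = 500

uncertainty, occluded_for, occlusion_times = 0.0, 0, []
for t in range(steps):
    if uncertainty < threshold:
        occluded_for += 1
        uncertainty = drift_sd * np.sqrt(occluded_for)
    else:
        occlusion_times.append(occluded_for)   # glance: record the occlusion interval
        uncertainty, occluded_for = 0.0, 0

print("glances taken      :", len(occlusion_times))
print("mean occlusion time:", np.mean(occlusion_times), "time steps")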
619

Inference from finite population sampling : a unified approach.

January 2007 (has links)
In this thesis, we have considered the inference aspects of sampling from a finite population. There are significant differences between traditional statistical inference and finite population sampling inference. In the case of finite population sampling, the statistician is free to choose his own sampling design and is not confined to independent and identically distributed observations, as is often the case with traditional statistical inference. We look at the correspondence between the sampling design and the sampling scheme. We also look at methods used for drawing samples. The non-existence theorems (Godambe (1955), Hanurav and Basu (1971)) are also discussed. Since a minimum variance unbiased estimator does not exist in finite populations, a number of estimators need to be considered for estimating the same parameter. We discuss the admissibility properties of estimators and the use of sufficient statistics and the Rao-Blackwell Theorem for the improvement of inefficient, inadmissible estimators. Sampling strategies using auxiliary information relating to the population need to be used, as no sampling strategy can provide an efficient estimator of the population parameter in all situations. Finally, a few well-known sampling strategies are studied and compared under a superpopulation model. / Thesis (M.Sc.)-University of KwaZulu-Natal, Westville, 2007.
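Illustrative sketch (synthetic population values, no auxiliary information): the example below shows design-based estimation of a population total under simple random sampling without replacement, one of the basic strategies compared in work of this kind.

import numpy as np

rng = np.random.default_rng(5)

# Design-based estimation under simple random sampling without replacement:
# N * (sample mean) is design-unbiased for the population total, and the
# estimated variance carries the finite population correction.
population = rng.gamma(shape=2.0, scale=50.0, size=1000)
N, n = len(population), 80

sample = rng.choice(population, size=n, replace=False)
estimated_total = N * sample.mean()
fpc = 1.0 - n / N
var_hat = (N ** 2) * fpc * sample.var(ddof=1) / n

print("true total      :", round(population.sum(), 1))
print("estimated total :", round(estimated_total, 1))
print("estimated SE    :", round(float(np.sqrt(var_hat)), 1))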
620

The Application of FROID in MR Image Reconstruction

Vu, Linda January 2010 (has links)
In magnetic resonance imaging (MRI), sampling methods that lead to incomplete data coverage of k-space are used to accelerate imaging and reduce overall scan time. Non-Cartesian sampling trajectories such as radial, spiral, and random trajectories are employed to facilitate advanced imaging techniques, such as compressed sensing, or to provide more efficient coverage of k-space for a shorter scan period. When k-space is undersampled or unevenly sampled, traditional methods of transforming Fourier data to obtain the desired image, such as the FFT, may no longer be applicable. The Fourier reconstruction of optical interferometer data (FROID) algorithm is a novel reconstruction method developed by A. R. Hajian that has been successful in the field of optical interferometry in reconstructing images from sparsely and unevenly sampled data. It is applicable to cases where the collected data are a Fourier representation of the desired image or spectrum. The framework presented allows a priori information, such as the positions of the sampled points, to be incorporated into the reconstruction of images. Initially, FROID assumes a guess of the real-valued spectrum or image in the form of an interpolated function and calculates the corresponding integral Fourier transform. Amplitudes are then sampled in the Fourier space at locations corresponding to the acquired measurements to form a model dataset. The guess spectrum or image is then adjusted such that the model dataset in the Fourier space is least-squares fitted to the measured values.
In this thesis, FROID has been adapted and implemented for use in MRI, where k-space is the Fourier transform of the desired image. By forming a continuous mapping of the image and modelling data in the Fourier space, a comparison and optimization with respect to data acquired in k-space that is either undersampled or irregularly sampled can be performed as long as the sampling positions are known. To apply FROID to the reconstruction of magnetic resonance images, an appropriate objective function that expresses the desired least-squares fit criteria was defined, and the model for interpolating Fourier data was extended to include complex values of an image. When an image with two Gaussian functions was tested, FROID was able to reconstruct images from data randomly sampled in k-space and was not restricted to data sampled evenly on a Cartesian grid. An MR image of a bone with complex values was also reconstructed using FROID, and the magnitude image was compared to that reconstructed by the FFT. It was found that FROID outperformed the FFT in certain cases, even when data were rectilinearly sampled.
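Illustrative sketch (a one-dimensional toy analogue, not the FROID implementation): the example below fits an image to Fourier-domain data measured at irregular, non-integer positions by least squares, which is the core idea described above.

import numpy as np

rng = np.random.default_rng(6)

# Propose a model image, evaluate its Fourier transform at the irregular sampling
# positions actually acquired, and fit the image so those model samples match the
# measured data in a least-squares sense. With fewer samples than unknowns, prior
# information or regularization would be needed; here the system is overdetermined.
M = 64
x = np.arange(M)
true_image = np.exp(-0.5 * ((x - 20) / 3.0) ** 2) + 0.6 * np.exp(-0.5 * ((x - 45) / 5.0) ** 2)

k = rng.uniform(-0.5, 0.5, size=100)                  # irregular k-space positions (cycles/pixel)
A = np.exp(-2j * np.pi * np.outer(k, x))              # forward model: image -> k-space samples
measured = A @ true_image                             # noiseless "acquired" data

recon, *_ = np.linalg.lstsq(A, measured, rcond=None)  # least-squares fit of the image
print("max reconstruction error:", float(np.max(np.abs(recon - true_image))))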
