611
Estimating Network Features and Associated Measures of Uncertainty and Their Incorporation in Network Generation and Analysis. Goyal, Ravi (19 November 2012)
The efficacy of interventions to control HIV spread depends on many features of the communities where they are implemented, including not only prevalence, incidence, and per-contact risk of transmission, but also properties of the sexual or transmission network. For this reason, HIV epidemic models have to take into account network properties, including degree distribution and mixing patterns. The use of sampled data to estimate properties of a network is common practice; however, current network generation methods do not account for the uncertainty in the estimates due to sampling. In chapter 1, we present a framework for constructing collections of networks using sampled data collected from ego-centric surveys. The constructed networks not only target estimates for density, degree distributions, and mixing frequencies, but also incorporate the uncertainty due to sampling. Our method is applied to the National Longitudinal Study of Adolescent Health and considers two sampling procedures. We demonstrate that collections of networks constructed using the proposed methods are useful in investigating variation in unobserved network topology, and therefore also insightful for studying processes that operate on networks. In chapter 2, we focus on the degree to which the impact of concurrency on HIV incidence in a community may be overshadowed by differences in unobserved, but local, network properties. Our results demonstrate that even after controlling for cumulative ego-centric properties, i.e. degree distribution and concurrency, other network properties, including degree mixing and clustering, can strongly influence the size of the potential epidemic. In chapter 3, we demonstrate the need to incorporate information about degree mixing patterns in such modeling. We present a procedure to construct collections of bipartite networks, given point estimates for the degree distribution, that either makes use of information on the degree mixing matrix or assumes that no such information is available. These methods permit a demonstration of the differences between the two network collections, even when the degree sequence is fixed. Methods are also developed to estimate degree mixing patterns, given a point estimate for the degree distribution.
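The chapter 1 idea of generating a collection of networks, rather than a single network matching point estimates, can be illustrated with a short sketch. The sketch below bootstraps degree sequences from a hypothetical ego-centric sample and feeds each into a configuration model, so the collection reflects sampling uncertainty. It is an illustration of the general approach, not the dissertation's algorithm; the sample degrees, sizes, and use of networkx are assumptions.

```python
# Sketch: build a collection of networks whose degree sequences are
# bootstrap-resampled from an ego-centric sample, so the collection
# reflects sampling uncertainty rather than a single point estimate.
import random
import networkx as nx

def network_collection(sampled_degrees, n_nodes, n_networks, seed=0):
    rng = random.Random(seed)
    collection = []
    for _ in range(n_networks):
        # Bootstrap a degree sequence from the ego-centric sample.
        degrees = [rng.choice(sampled_degrees) for _ in range(n_nodes)]
        if sum(degrees) % 2:            # configuration model needs an even sum
            degrees[0] += 1
        g = nx.configuration_model(degrees, seed=rng.randrange(2**31))
        g = nx.Graph(g)                 # collapse parallel edges
        g.remove_edges_from(list(nx.selfloop_edges(g)))
        collection.append(g)
    return collection

# Example: 50 networks of 500 nodes from a hypothetical degree sample.
nets = network_collection(sampled_degrees=[0, 1, 1, 2, 2, 3, 5],
                          n_nodes=500, n_networks=50)
print(sum(g.number_of_edges() for g in nets) / len(nets))
```

Summarizing a topological property (here, edge count) across the collection, rather than on one network, is what propagates the sampling uncertainty into downstream analyses.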
612
Probabilistic bicriteria models: sampling methodologies and solution strategies. Rengarajan, Tara (14 December 2010)
Many complex systems involve simultaneous optimization of two or more criteria, with uncertainty in system parameters being a key driver in decision making. In this thesis, we consider probabilistic bicriteria models in which we seek to operate a system reliably while keeping operating costs low. High reliability translates into low risk of uncertain events that can adversely impact the system. In bicriteria decision making, a good solution must, at the very least, have the property that the criteria cannot both be improved relative to it. The problem of identifying a broad spectrum of such solutions can be highly involved, with no analytical or robust numerical techniques readily available, particularly when the system involves nontrivial stochastics. This thesis serves as a step toward addressing this issue. We show how to use Monte Carlo sampling to construct approximate solutions that are sufficiently close to optimal, easily computable, and subject to a low margin of error. Our approximations can be used in bicriteria decision making across several domains that involve significant risk, such as finance, logistics, and revenue management.
As a first approach, we place a premium on a low risk threshold and examine the effects of a sampling technique that guarantees a prespecified upper bound on risk. Our model incorporates a novel construct in the form of an uncertain disrupting event whose time and magnitude of occurrence are both random. We show that stratifying the sample observations in an optimal way can yield substantial savings. We also demonstrate the existence of generalized stratification techniques that enjoy this property and that can be used without full distributional knowledge of the parameters governing the time of disruption. Our work thus provides a computationally tractable approach for solving a wide range of bicriteria models via sampling with a probabilistic guarantee on risk. Improved proximity to the efficient frontier is illustrated in the context of a perishable inventory problem.
In contrast to this approach, we next solve a bicriteria facility sizing model in which risk is the probability that the system fails to jointly satisfy a vector-valued random demand. Here, rather than seeking a probabilistic guarantee on risk, we seek to approximate well the efficient frontier for a range of risk levels of interest. Replacing the risk measure with an empirical measure induced by a random sample, we proceed to solve a family of parametric chance-constrained and cost-constrained models. These two sampling-based approximations differ substantially in terms of what is known regarding their asymptotic behavior, their computational tractability, and even their feasibility as compared to the underlying "true" family of models. We establish, however, that in the bicriteria setting we have the freedom to employ either the chance-constrained or the cost-constrained family of models, improving our ability to characterize the quality of the efficient frontiers arising from these sampling-based approximations and to solve the approximating model itself. Our computational results reinforce the need for such flexibility and enable us to understand the behavior of confidence bounds for the efficient frontier.
As a final step, we further study the efficient frontier in the cost-versus-risk tradeoff for the facility sizing model in the special case in which the (cumulative) distribution function of the underlying demand vector is concave in a region defined by a highly reliable system. In this case, the "true" efficient frontier is convex. We show that the convex hull of the efficient frontier of a sampling-based approximation: (i) can be computed in strongly polynomial time by relying on a reformulation as a max-flow problem via the well-studied selection problem; and (ii) converges uniformly to the true efficient frontier when the latter is convex. We conclude with numerical studies that demonstrate the aforementioned properties.
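As a rough illustration of a sampling-based cost-versus-risk frontier for facility sizing, the sketch below scans componentwise quantile designs for a three-facility problem against a Monte Carlo demand sample, recording the empirical joint failure probability at each cost. The demand distribution, cost vector, and quantile-scan heuristic are assumptions; the thesis's models optimize designs rather than scanning a fixed family.

```python
# Sketch: trace an approximate cost-vs-risk frontier by scanning
# componentwise quantile designs against a Monte Carlo demand sample.
# A heuristic illustration of the sampling-based approximation idea.
import numpy as np

rng = np.random.default_rng(0)
demand = rng.lognormal(mean=[3.0, 3.5, 2.5], sigma=0.4, size=(5000, 3))
cost = np.array([4.0, 2.0, 3.0])    # hypothetical per-unit capacity costs

frontier = []
for p in np.linspace(0.80, 0.999, 40):
    x = np.quantile(demand, p, axis=0)          # candidate sizing vector
    risk = np.mean((demand > x).any(axis=1))    # empirical joint failure prob.
    frontier.append((cost @ x, risk))

for c, r in frontier[::8]:
    print(f"cost={c:8.1f}  empirical risk={r:.3f}")
```

The empirical risk here plays the role of the sample-induced measure in the text; the true frontier would be traced by solving the chance- or cost-constrained model at each level rather than by this scan.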
613
Characterization of insoluble carbonaceous material in atmospheric particulates by pyrolysis/gas chromatography/mass spectrometry procedures. Kunen, Steven Maxwell (January 1978)
No description available.
614
The Use of Sampling in Archaeological Survey. Mueller, James W. (January 1972)
No description available.
615
Sampling Frequency for Semi-Arid Streams and Rivers: Implications for National Parks in the Sonoran Desert Network. Lindsey, Melanie (January 2010)
In developing a water quality monitoring program, the sampling frequency chosen should be able to reliably detect changes in water quality trends. Three datasets are evaluated for minimal detectable change in surface water quality to examine the loss of trend detectability as sampling frequency decreases for sites within the National Park Service's Sonoran Desert Network. The records are re-sampled as quarterly and annual datasets, and step and linear trends are superimposed over the natural data to estimate the time the Seasonal Kendall test takes to detect trends of a specified magnitude. Wilcoxon rank-sum analyses found that monthly and quarterly sampling consistently draw from the same distribution of trend detection times; however, annual sampling can take significantly longer. Therefore, even with a loss in power from reduced sampling, quarterly sampling of Park waters adequately detects trends (70%) relative to monthly sampling, whereas annual sampling is insufficient for trend detection (30%).
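The Seasonal Kendall statistic and the effect of reduced sampling frequency can be sketched briefly. The toy example below computes a simplified seasonal Kendall S (no variance correction or adjustment for serial dependence) on synthetic monthly data carrying a superimposed linear trend, then subsamples the record to quarterly and annual frequency; all data and parameters are hypothetical.

```python
# Sketch: a simplified seasonal Kendall S statistic and the effect of
# subsampling a monthly record to quarterly and annual frequency.
import numpy as np

def mann_kendall_s(x):
    # S = concordant minus discordant pairs, taken in time order.
    x = np.asarray(x)
    return sum(np.sign(x[j] - x[i])
               for i in range(len(x)) for j in range(i + 1, len(x)))

def seasonal_kendall_s(values, seasons):
    # Sum Mann-Kendall S within each season (e.g., month-of-year).
    return sum(mann_kendall_s(values[seasons == s])
               for s in np.unique(seasons))

rng = np.random.default_rng(1)
years, months = 10, 12
t = np.arange(years * months)
conc = 5 + 0.02 * t + rng.normal(0, 1, t.size)  # upward trend + noise
season = t % months

print("monthly   S =", seasonal_kendall_s(conc, season))
quarterly = slice(None, None, 3)                # keep every third month
print("quarterly S =", seasonal_kendall_s(conc[quarterly], season[quarterly]))
annual = slice(None, None, 12)                  # one value per year
print("annual    S =", mann_kendall_s(conc[annual]))
```

Because S accumulates over fewer pairs as the record thins, the annual statistic sits much closer to zero for the same trend, which is the power loss the abstract quantifies.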
616
Models and Methods for Multiple Resource Constrained Job Scheduling under Uncertainty. Keller, Brian (January 2009)
We consider a scheduling problem where each job requires multiple classes of resources, which we refer to as the multiple resource constrained scheduling problem (MRCSP). Potential applications include team scheduling problems that arise in service industries such as consulting, as well as operating room scheduling. We focus on two general cases of the problem. The first case considers uncertainty in processing times, due dates, and resource availability and consumption, which we denote the stochastic MRCSP with uncertain parameters (SMRCSP-U). The second case considers uncertainty in the number of jobs to schedule, which arises in consulting and defense contracting when companies bid on future contracts but may or may not win the bid. We call this problem the stochastic MRCSP with job bidding (SMRCSP-JB).

We first provide formulations of each problem under the framework of two-stage stochastic programming with recourse. We then develop solution methodologies for both problems. For the SMRCSP-U, we develop an exact solution method based on the L-shaped method for problems with a moderate number of scenarios. Several algorithmic enhancements are added to improve efficiency. We then embed the L-shaped method within a sampling-based solution method for problems with a large number of scenarios. We modify a sequential sampling procedure to allow for approximate solution of integer programs and prove the desired properties. The sampling-based method is applicable to two-stage stochastic integer programs with integer first-stage variables. Finally, we compare the solution methodologies on a set of test problems.

For the SMRCSP-JB, we utilize the disjunctive decomposition (D2) algorithm for stochastic integer programs with mixed-binary subproblems. We develop several enhancements to the D2 algorithm. First, we explore the use of a cut generation problem restricted to a subspace of the variables, which yields significant computational savings. Then, we examine generating alternative disjunctive cuts based on the generalized upper bound (GUB) constraints that appear in the second stage of the SMRCSP-JB. We establish convergence of all D2 variants and present computational results on a set of instances of the SMRCSP-JB.
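A minimal sketch of the L-shaped method that the abstract builds on is given below, applied to a toy two-stage problem with a continuous first stage and analytically computed subproblem duals. It omits everything that makes the SMRCSP hard (integer variables, multiple resource classes, D2 cuts), and the toy costs and scenarios are assumptions.

```python
# Sketch: a single-cut L-shaped (Benders) loop on the toy two-stage
# problem  min c*x + E[ q*max(d - x, 0) ]  over x >= 0. Subproblem
# duals are computed in closed form (pi = q when the scenario is short).
import numpy as np
from scipy.optimize import linprog

c, q = 1.0, 3.0                      # first-stage and shortfall costs
d = np.array([20.0, 40.0, 60.0])     # demand scenarios
p = np.array([0.3, 0.4, 0.3])        # scenario probabilities

cuts = []                            # each optimality cut: theta >= a - b*x
for it in range(50):
    # Master: min c*x + theta subject to the accumulated cuts.
    A_ub = [[-b, -1.0] for a, b in cuts]      # -b*x - theta <= -a
    b_ub = [-a for a, b in cuts]
    res = linprog([c, 1.0], A_ub=A_ub or None, b_ub=b_ub or None,
                  bounds=[(0, None), (0, None)], method="highs")
    x, theta = res.x
    # Exact expected recourse and its subgradient at the master solution.
    pi = np.where(d > x, q, 0.0)
    v = float(p @ (pi * (d - x)))
    if v - theta <= 1e-8:            # cuts support the recourse: optimal
        break
    cuts.append((float(p @ (pi * d)), float(p @ pi)))   # new (a, b)

print(f"x* = {x:.2f}, total cost = {c * x + v:.2f}, iterations = {it + 1}")
```

Each pass solves the master for a trial first-stage decision, prices its recourse, and adds a supporting cut; the loop terminates when the master's lower bound meets the true expected recourse, converging here in a handful of iterations.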
617
Situational and Trait Influences on Dynamic Justice. Stein, Jordan (January 2010)
As the past twenty years of justice research have demonstrated, perceiving the workplace as fair is associated with higher levels of organizational commitment, job satisfaction, work-related effort, and acceptance of work-related policies and procedures, as well as decreased absenteeism. However, although not always explicitly stated in theories of fairness, there has been a tacit understanding that justice perceptions are not static but are influenced by a variety of factors. In short, extant justice theories assume there are underlying dynamic elements within the construct, but the measures and previous research examining justice have assessed it as if it were a stable, static perception. The purpose of this research, therefore, was to take a first step toward exploring and describing the frequency and intensity of injustice perceptions at work and how individuals' affective states and traits influence these perceptions. A snowball sample of working individuals from across the United States provided experience sampling method (ESM) data by responding on palmtop computers at randomly scheduled intervals several times a day for three work weeks. Additionally, participants provided event-contingent injustice data when they perceived unfair events during their workday. The results of this examination, as well as the use of experience sampling for the study of dynamic workplace injustice, are discussed.
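A small sketch of how randomly scheduled ESM prompts might be generated is shown below; the workday window, number of prompts, and minimum gap are hypothetical parameters for illustration, not the study's actual protocol.

```python
# Sketch: a signal-contingent experience-sampling schedule that draws
# random prompt times within a workday, enforcing a minimum gap.
import random
from datetime import datetime, timedelta

def daily_prompts(day, n_prompts=5, start_hour=9, end_hour=17,
                  min_gap_minutes=45, seed=7):
    rng = random.Random(seed)
    window = (end_hour - start_hour) * 60    # workday length in minutes
    while True:
        offsets = sorted(rng.sample(range(window), n_prompts))
        if all(b - a >= min_gap_minutes
               for a, b in zip(offsets, offsets[1:])):
            break                            # schedule satisfies the gap
    base = day.replace(hour=start_hour, minute=0, second=0, microsecond=0)
    return [base + timedelta(minutes=m) for m in offsets]

for ts in daily_prompts(datetime(2010, 3, 1)):
    print(ts.strftime("%Y-%m-%d %H:%M"))
```

Randomizing within the day, rather than signaling at fixed hours, is what keeps momentary reports from being confounded with routine daily events.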
618
A Model of Information Sampling using Visual Occlusion. Chen, Huei-Yen Winnie (08 January 2014)
Three stages of research were carried out to investigate the use of the self-paced visual occlusion technique, and to model visual information sampling.
Stage 1. A low-fidelity driving simulator study was carried out to investigate the effect of glance duration, a key parameter of the self-paced occlusion technique, on occlusion times. Results from this experiment, together with analysis of data from an on-road driving study, revealed an asymptotic relationship between glance duration and occlusion time. This finding has practical implications for establishing an appropriate glance duration in experimental studies that use self-paced visual occlusion.
Stage 2. A model of visual information sampling was proposed, which incorporates elements of uncertainty development, subjective thresholds, and an awareness of past and current states of the system during occlusion. Using this modelling framework, average information sampling behaviour in occlusion studies can be analysed via mean occlusion times, and moment-by-moment responses to system output can be analysed via individual occlusion times. Analysis using the on-road driving data found that experienced drivers demonstrated a more complex and dynamic sampling strategy than inexperienced drivers.
Stage 3. Findings from Stage 2 led to a simple monitoring experiment that investigated whether human operators are in fact capable of predicting system output when temporarily occluded. The platform was designed such that the dynamics of the system naturally facilitated predictions without making the monitoring task trivial. Results showed that participants were able to take predictive information into account in their sampling decisions, in addition to using the content of the information they observed from each visual sample.
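A toy simulation in the spirit of the Stage 2 framework is sketched below: subjective uncertainty accrues while the display is occluded, a visual sample is taken when it crosses a threshold, and the accrual rate is updated from the observed change. The random-walk dynamics, threshold, and update rule are assumptions for illustration, not the thesis's fitted model.

```python
# Sketch: an uncertainty-threshold sampling model. The true state
# drifts as a random walk; the operator samples it only when accrued
# subjective uncertainty exceeds a threshold, and the accrual rate is
# re-estimated from each new observation.
import numpy as np

rng = np.random.default_rng(3)
steps, drift_sd, threshold = 5000, 0.05, 1.0

state = np.cumsum(rng.normal(0, drift_sd, steps))   # true system output
occlusions, last_look = [], 0
uncertainty, rate = 0.0, drift_sd
for t in range(1, steps):
    uncertainty += rate                  # uncertainty grows while occluded
    if uncertainty >= threshold:         # subjective threshold reached: look
        occlusions.append(t - last_look)
        # believed rate of change, from the observed change since last look
        rate = max(abs(state[t] - state[last_look]) / (t - last_look), 1e-3)
        last_look, uncertainty = t, 0.0

print(f"{len(occlusions)} glances, mean occlusion "
      f"{np.mean(occlusions):.1f} steps, sd {np.std(occlusions):.1f}")
```

Because the accrual rate tracks the recently observed drift, occlusion times vary from glance to glance, mirroring the moment-by-moment responses the Stage 2 analysis examines via individual occlusion times.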
620
Inference from finite population sampling: a unified approach (January 2007)
In this thesis, we consider the inference aspects of sampling from a finite population. There are significant differences between traditional statistical inference and finite population sampling inference. In finite population sampling, the statistician is free to choose the sampling design and is not confined to independent and identically distributed observations, as is often the case in traditional statistical inference. We examine the correspondence between the sampling design and the sampling scheme, and we also look at methods used for drawing samples. The non-existence theorems (Godambe (1955); Hanurav and Basu (1971)) are also discussed. Since a minimum variance unbiased estimator does not exist for finite populations, a number of estimators need to be considered for estimating the same parameter. We discuss the admissibility properties of estimators and the use of sufficient statistics and the Rao-Blackwell theorem for the improvement of inefficient, inadmissible estimators. Sampling strategies using auxiliary information relating to the population need to be used, as no sampling strategy can provide an efficient estimator of the population parameter in all situations. Finally, a few well-known sampling strategies are studied and compared under a superpopulation model.

Thesis (M.Sc.), University of KwaZulu-Natal, Westville, 2007.
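Two standard design-based estimators from this literature can be sketched concretely: the expansion (Horvitz-Thompson) estimator of a population total under simple random sampling without replacement, and a ratio estimator that exploits an auxiliary variable correlated with the study variable. The population, sample sizes, and variables below are hypothetical.

```python
# Sketch: expansion (Horvitz-Thompson) vs. ratio estimation of a finite-
# population total under simple random sampling without replacement.
import numpy as np

rng = np.random.default_rng(5)
N, n = 10_000, 200
aux = rng.gamma(shape=3.0, scale=10.0, size=N)    # auxiliary variable x
y = 2.0 * aux + rng.normal(0, 8.0, N)             # study variable, corr. with x
true_total = y.sum()

reps = 2000
ht_est, ratio_est = np.empty(reps), np.empty(reps)
for r in range(reps):
    s = rng.choice(N, size=n, replace=False)      # SRSWOR sample
    ht_est[r] = N * y[s].mean()                   # expansion estimator
    ratio_est[r] = aux.sum() * y[s].mean() / aux[s].mean()  # ratio estimator

for name, est in [("Horvitz-Thompson", ht_est), ("ratio", ratio_est)]:
    rmse = np.sqrt(np.mean((est - true_total) ** 2))
    print(f"{name:18s} mean={est.mean():12.0f}  RMSE={rmse:10.0f}")
```

The simulation illustrates the thesis's point about auxiliary information: the ratio estimator is slightly biased but, when y and x are strongly correlated, typically has a much smaller RMSE than the plain expansion estimator, while neither dominates in all situations.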