1 |
Statistical estimation and changepoint detection methods in public health surveillance. Reynolds, Sue Bath, 27 May 2016.
This thesis focuses on assessing and improving statistical methods implemented in two areas of public health research. The first topic involves estimation of national influenza-associated mortality rates via mathematical modeling. The second topic involves the timely detection of infectious disease outbreaks using statistical process control monitoring.

For over fifty years, the Centers for Disease Control and Prevention has been estimating annual rates of U.S. deaths attributable to influenza. These estimates have been used to determine costs and benefits associated with influenza prevention and control strategies. Quantifying the effect of influenza on mortality, however, can be challenging, since influenza infections typically are neither confirmed virologically nor specified on death certificates. Consequently, a wide range of ecologically based mathematical modeling approaches have been applied to specify the association between influenza and mortality. To date, all influenza-associated death estimates have been based on mortality data first aggregated at the national level and then modeled. Unfortunately, a number of local-level seasonal factors may confound the association between influenza and mortality, suggesting that data be modeled at the local level and then pooled to make national estimates of death.

The first component of the mortality-estimation topic addresses this issue by introducing and implementing a two-stage hierarchical Bayesian modeling approach. In the first stage, city-level data with varying trends in mortality and weather were modeled using semi-parametric, generalized additive models. In the second stage, the log-relative-risk estimates calculated for each city in stage 1 served as the "outcome" variable and were modeled two ways: (1) assuming spatial independence across cities, using a Bayesian generalized linear model, and (2) assuming correlation among cities, using a Bayesian spatial correlation model. Results from these models were compared to those from a more conventional approach.

The second component examines the extent to which seasonal confounding and collinearity affect the relationship between influenza and mortality at the local (city) level. Disentangling the effects of temperature, humidity, and other seasonal confounders on the association between influenza and mortality is challenging, since these covariates are often temporally collinear with influenza activity. Three modeling strategies with varying representations of background seasonality were compared. Seasonal covariates entered into the model may be measured (e.g., ambient temperature) or unmeasured (e.g., time-based smoothing splines or Fourier terms). An advantage of modeling background seasonality via time splines is that the amount of seasonal curvature can be controlled by the number of degrees of freedom specified for the spline. A comparison of the effects of influenza activity on mortality under these varying representations of seasonal confounding is presented.

The third component explores the relationship between mortality rates and influenza activity using a flexible, natural cubic spline function to model the influenza term. The conventional approach of fitting influenza-activity terms linearly in regression was found to be too constraining; results show that the association is best represented nonlinearly.
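As a rough illustration of the spline-based modeling in the second and third components, the sketch below fits simulated weekly mortality counts with Poisson GLMs, once with a linear influenza term and once with a natural cubic spline influenza term, while a time spline absorbs background seasonality. The data and all parameter values are hypothetical; this is a minimal sketch of the general approach, not the thesis's actual models or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# ten years of simulated weekly data for one city (all values hypothetical)
n = 520
t = np.arange(n)
flu = rng.gamma(2.0, 2.0, n) * (1 + np.sin(2 * np.pi * t / 52))  # flu-activity proxy
mu = np.exp(4.0 + 0.30 * np.log1p(flu) + 0.10 * np.sin(2 * np.pi * t / 52))
df = pd.DataFrame({"deaths": rng.poisson(mu), "flu": flu, "t": t})

# background seasonality via patsy's natural cubic spline cr();
# influenza entered linearly vs. through a natural cubic spline
m_lin = smf.glm("deaths ~ flu + cr(t, df=8)", df, family=sm.families.Poisson()).fit()
m_spl = smf.glm("deaths ~ cr(flu, df=4) + cr(t, df=8)", df, family=sm.families.Poisson()).fit()
print(f"AIC, linear flu term: {m_lin.aic:.1f}")
print(f"AIC, spline flu term: {m_spl.aic:.1f}")  # lower AIC favors the nonlinear fit
```

Note that the degrees of freedom passed to cr() control the amount of seasonal curvature, which is the tuning knob the abstract describes.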
The second area of focus in this thesis involves infectious disease outbreak detection. A fundamental goal of public health surveillance, particularly syndromic surveillance, is the timely detection of increases in the rate of unusual events. In syndromic surveillance, a significant increase in the incidence of monitored disease outcomes would trigger an alert, possibly prompting the implementation of an intervention strategy. Public health surveillance generally monitors count data (e.g., counts of influenza-like illness, sales of over-the-counter remedies, and numbers of visits to outpatient clinics). Statistical process control charts, designed for quality-control monitoring in industry, have been widely adapted for use in disease and syndromic surveillance. The behavior of these detection methods on discrete distributions, however, has not been explored in detail. For this component of the thesis, a simulation study was conducted to compare the CUSUM and EWMA methods for detecting increases in negative binomial rates with varying amounts of dispersion. The goal of each method is to detect an increase in the mean number of cases as soon as possible after an upward rate shift has occurred. The performance of the CUSUM and EWMA detection methods is evaluated using the conditional expected delay criterion, a measure of the detection delay, i.e., the time between the occurrence of a shift and when that shift is detected. Detection capabilities were explored under varying shift sizes and shift times.
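The simulation below sketches how the conditional expected delay can be estimated for CUSUM and EWMA charts monitoring negative binomial counts. The chart parameters (reference value, decision interval, smoothing weight, control limit), rates, dispersion, and shift time are hypothetical choices for illustration, not the thesis's study design.

```python
import numpy as np

rng = np.random.default_rng(1)

def nb_counts(mean, disp, size, rng):
    # negative binomial with variance = mean + mean**2 / disp
    return rng.negative_binomial(disp, disp / (disp + mean), size)

def detection_delay(chart, pre_mean, post_mean, disp, tau, horizon, rng):
    """Delay between the rate shift at time tau and the chart's first alarm.

    Runs that alarm before the shift (false alarms) or never alarm return None,
    so averaging the surviving delays estimates the conditional expected delay."""
    x = np.concatenate([nb_counts(pre_mean, disp, tau, rng),
                        nb_counts(post_mean, disp, horizon - tau, rng)])
    for t, alarm in enumerate(chart(x)):
        if alarm:
            return None if t < tau else t - tau
    return None

def cusum(x, k=6.0, h=10.0):          # reference value k, decision interval h
    s = 0.0
    for xt in x:
        s = max(0.0, s + xt - k)
        yield s > h

def ewma(x, lam=0.2, ucl=8.0, start=5.0):
    z = start
    for xt in x:
        z = lam * xt + (1 - lam) * z
        yield z > ucl

for name, chart in [("CUSUM", cusum), ("EWMA", ewma)]:
    delays = [d for _ in range(2000)
              if (d := detection_delay(chart, 5.0, 8.0, 2.0, 50, 300, rng)) is not None]
    print(f"{name} conditional expected delay ≈ {np.mean(delays):.2f}")
```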
|
2 |
Generalised bootstrap procedures. Lee, Stephen Man Sing, January 1993.
No description available.
|
3 |
Sequential learning in artificial neural networks. Kadirkamanathan, Visakan, January 1991.
No description available.
|
4 |
Sensor Fusion: Applying sensor fusion in a district heating substation. Kangerud, Jim, January 2005.
Many machines today carry sensors that collect information from the world they inhabit, and the correctness of this information is crucial for correct operation. Sensors are not always reliable, however: they are affected by various kinds of noise and can therefore report incorrect information. Another drawback can be a lack of information due to a shortage of sensors. Sensor fusion tries to overcome these drawbacks by integrating or combining information from multiple sensors. The heating of a building is a slow and time-consuming process, so neither the flow nor the energy consumption changes drastically. The tap water system, i.e. the heating of tap water, can by contrast cause severe changes in both flow and energy consumption, because the flow in the tap water system is stochastic: at any given moment a tap may be opened or closed, drastically changing the flow. The purpose of this thesis is to investigate whether sensor fusion can be used to obtain accurate continuous flow values from a district heating substation. This is done by integrating different sensor fusion algorithms in a district heating substation simulator.
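The abstract does not name particular algorithms, so as one concrete example of fusing redundant, noisy flow sensors, here is a minimal one-dimensional Kalman filter that combines two simulated flow sensors under a random-walk flow model; the sensor noise levels and flow values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# simulate a "true" flow with an abrupt tap-opening event (hypothetical values, l/s)
n = 200
true_flow = np.where(np.arange(n) < 120, 0.8, 1.6)
z1 = true_flow + rng.normal(0, 0.20, n)   # noisy flow sensor
z2 = true_flow + rng.normal(0, 0.35, n)   # second, noisier sensor

# one-dimensional Kalman filter fusing both sensors under a random-walk flow model
q, r1, r2 = 1e-3, 0.20**2, 0.35**2        # process and measurement noise variances
x, p = z1[0], 1.0                         # state estimate and its variance
fused = np.empty(n)
for t in range(n):
    p += q                                # predict: flow follows a random walk
    for z, r in ((z1[t], r1), (z2[t], r2)):   # sequential measurement updates
        k = p / (p + r)                   # Kalman gain
        x += k * (z - x)
        p *= (1 - k)
    fused[t] = x

print("sensor-1 RMSE:", np.sqrt(np.mean((z1 - true_flow) ** 2)))
print("fused   RMSE:", np.sqrt(np.mean((fused - true_flow) ** 2)))
```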
|
5 |
Statistics: Raising the Bar for the Seventh Grade Classroom. Mullins, Sherry Lynn, 15 August 2006.
After recognizing the need for a more thorough treatment of statistics at the seventh-grade level, the author concluded that the material should cover both the seventh- and eighth-grade Virginia Standards of Learning. Many years of administering the SOL mathematics test at the eighth-grade level led the author to the understanding that some of the more advanced seventh graders would miss key concepts taught in eighth grade, because those advanced students would be taking algebra instead. In this thesis, the author has developed four units that she feels are appropriate for this level and will fill the gap.
|
6 |
Statistical Estimation and Reduction of Leakage Current by Input Vector Control with Process Variations Considered. Krishnamurthy, Anusha, 03 April 2006.
No description available.
|
7 |
Image Processing for Quanta Image Sensors. Elgendy, Omar A., 13 August 2019.
Since the birth of charge-coupled devices (CCD) and complementary metal-oxide-semiconductor (CMOS) active pixel sensors, the pixel pitch of digital image sensors has been continuously shrinking to meet the resolution and size requirements of cameras. However, shrinking pixels reduces the maximum number of photons a sensor can hold, a phenomenon broadly known as the full-well capacity limit. The drop in full-well capacity causes a drop in signal-to-noise ratio and dynamic range.

The Quanta Image Sensor (QIS) is a class of solid-state image sensors proposed by Eric Fossum in 2005 as a potential solution to the limited full-well capacity problem. QIS is envisioned to be the next-generation image sensor after CCD and CMOS, since it enables sub-diffraction-limit pixels without the inherited problems of pixel shrinking. Equipped with a massive number of detectors that have single-photon sensitivity, the sensor counts the incoming photons and triggers a binary response "1" if the photon count exceeds a threshold, or "0" otherwise. To acquire an image, the sensor oversamples space and time to generate a sequence of binary bit maps. Because of this binary sensing mechanism, the full-well capacity, signal-to-noise ratio, and dynamic range can all be improved using an appropriate image reconstruction algorithm. The contribution of this thesis is to address three image processing problems in QIS: (1) image reconstruction, (2) threshold design, and (3) color filter array design.

Part 1 of the thesis focuses on reconstructing the latent grayscale image from the QIS binary measurements. Image reconstruction is a necessary step for QIS because the raw binary measurements are not images. Previous methods in the literature use iterative algorithms, which are computationally expensive. By modeling the QIS binary measurements as quantized Poisson random variables, a new non-iterative image reconstruction method based on the Transform-Denoise framework is proposed. Experimental results show that the new method produces better-quality images while requiring less computing time.

Part 2 of the thesis considers the threshold design problem of a QIS. A spatially varying threshold can significantly improve the reconstruction quality and the dynamic range; however, no method of achieving this can be found in the literature. The theoretical analysis of this part shows that the optimal threshold should match the underlying pixel intensity. In addition, the analysis proves the existence of a set of thresholds around the optimal threshold that give asymptotically unbiased reconstructions. The asymptotic unbiasedness has a phase-transition behavior. A new threshold update scheme based on this idea is proposed. Experimentally, the new method provides good estimates of the thresholds with less computing budget than existing methods.

Part 3 of the thesis extends QIS capabilities to color imaging by studying how a color filter array should be designed. Because of the small pixel pitch of QIS, crosstalk between neighboring pixels is inevitable and should be considered when designing the color filter array. However, optimizing light efficiency while suppressing aliasing and crosstalk in a color filter array are conflicting tasks. A new optimization framework is proposed to solve the problem. The new framework unifies several mainstream design criteria while offering generality and flexibility. Extensive experimental comparisons demonstrate the effectiveness of the framework.
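To make the binary sensing model concrete, here is a minimal sketch of QIS-style acquisition and a simple per-pixel maximum-likelihood reconstruction for the one-photon threshold case. Under the quantized-Poisson model, P(bit = 1) = 1 − exp(−θ), so θ̂ = −ln(1 − K/T) for K ones among T binary frames; this is a textbook estimator, not the Transform-Denoise method proposed in the thesis, and all sizes and intensities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical ground-truth intensity map (mean photons per pixel per frame)
theta = np.linspace(0.05, 2.5, 64).reshape(1, -1) * np.ones((64, 1))

# QIS-style acquisition: T oversampled binary frames, threshold q = 1 photon
T = 200
photons = rng.poisson(theta, size=(T,) + theta.shape)
bits = (photons >= 1).astype(np.uint8)      # sequence of binary bit maps

# per-pixel maximum-likelihood inversion of P(bit = 1) = 1 - exp(-theta)
K = np.clip(bits.sum(axis=0), 0, T - 1)     # clip avoids log(0) at saturated pixels
theta_hat = -np.log1p(-K / T)

print("mean absolute error:", np.abs(theta_hat - theta).mean())
```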
|
8 |
The Study of Inverting Sediment Sound Speed Profile Using a Geoacoustic Model for a Nonhomogenous Seabed. Yang, Shih-Feng, 03 July 2007.
The objective of this thesis is to develop and implement an algorithm for inverting the sound speed profile via estimation of the parameters embedded in a geoacoustic model. The environmental model describes a continuously varying marine sediment layer with density and sound speed distributions represented by generalized-exponential and inverse-square functions, respectively. Based upon a forward problem of plane-wave reflection from a non-uniform sediment layer overlying a uniform elastic basement, an inversion procedure for estimating the sound speed profile from the reflected sound field under the influence of noise is established and numerically implemented. The inversion invokes a probabilistic approach, quantified by the posterior probability density, for measuring the uncertainties of the parameters estimated from synthetic noisy data. A preliminary analysis of the solution of the forward problem and the sensitivity of the model parameters is first conducted, leading to a determination of the parameters chosen for inversion in the ensuing study. The parameter uncertainties, referenced to 1-D and 2-D marginal posterior probability densities, are then examined, followed by statistical estimation of the sound speed profile in terms of a 99% credibility interval. The effects of the signal-to-noise ratio (SNR), the dimension of the data vector, and the region in which the data are sampled on the statistical estimation of the sound speed profile are demonstrated and discussed.
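As a generic illustration of the probabilistic inversion machinery described above (posterior density, parameter uncertainty, credibility interval), the sketch below grid-evaluates a posterior over a single parameter of a stand-in forward model and extracts a 99% credibility interval. The exponential forward model and all numerical values are placeholders, not the plane-wave reflection model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

# stand-in forward model: field amplitude as a function of one parameter c
def forward(c, x):
    return np.exp(-x / c)            # hypothetical; the thesis uses plane-wave reflection

c_true, sigma = 1.5, 0.05            # true parameter and noise level (sets the SNR)
x = np.linspace(0.1, 3.0, 40)        # sampling locations of the "data vector"
data = forward(c_true, x) + rng.normal(0, sigma, x.size)

# grid-evaluated posterior with a flat prior: p(c | data) ∝ exp(-SSE / (2 sigma^2))
grid = np.linspace(0.5, 3.0, 2000)
sse = np.array([np.sum((data - forward(c, x)) ** 2) for c in grid])
post = np.exp(-(sse - sse.min()) / (2 * sigma**2))
post /= np.trapz(post, grid)

# 99% credibility interval from the posterior CDF
cdf = np.cumsum(post) * (grid[1] - grid[0])
lo, hi = grid[np.searchsorted(cdf, 0.005)], grid[np.searchsorted(cdf, 0.995)]
print(f"posterior mean {np.trapz(grid * post, grid):.3f}, 99% CI [{lo:.3f}, {hi:.3f}]")
```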
|
9 |
Conditional Noise-Contrastive Estimation: With Application to Natural Image Statistics. Ceylan, Ciwan, January 2017.
Unnormalised parametric models are an important class of probabilistic models which are difficult to estimate. The models are important since they occur in many different areas of application, e.g. in modelling of natural images, natural language and associative memory. However, standard maximum likelihood estimation is not applicable to unnormalised models, so alternative methods are required. Noise-contrastive estimation (NCE) has been proposed as an effective estimation method for unnormalised models. The basic idea is to transform the unsupervised estimation problem into a supervised classification problem. The parameters of the unnormalised model are learned by training the model to differentiate the given data samples from generated noise samples. However, the choice of the noise distribution has been left open to the user, and as the performance of the estimation may be sensitive to this choice, it is desirable for it to be automated. In this thesis, the ambiguity in the choice of the noise distribution is addressed by presenting the previously unpublished conditional noise-contrastive estimation (CNCE) method. Like NCE, CNCE estimates unnormalised models by classifying data and noise samples. However, the choice of noise distribution is partly automated via the use of a conditional noise distribution that is dependent on the data. In addition to introducing the core theory for CNCE, the method is empirically validated on data and models where the ground truth is known. Furthermore, CNCE is applied to natural image data to show its applicability in a realistic application.
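For context, this sketch implements the basic (unconditional) NCE procedure that the abstract builds on: an unnormalised 1-D Gaussian, with the log-normaliser treated as a free parameter c, is estimated by logistic classification of data against noise samples. CNCE's conditional noise distribution is not shown, and the model and noise choices are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

# data from an unnormalised Gaussian model: log p~(x) = -b (x - mu)^2 + c
mu_true, b_true = 1.0, 0.5            # c absorbs the unknown normaliser
X = rng.normal(mu_true, np.sqrt(1 / (2 * b_true)), 5000)

# noise distribution (left to the user in plain NCE)
noise = norm(loc=0.0, scale=3.0)
Y = noise.rvs(size=5000, random_state=6)

def nce_loss(theta):
    mu, log_b, c = theta
    b = np.exp(log_b)
    # G(u) = log p~(u) - log p_noise(u); classify data as 1, noise as 0
    logratio = lambda u: (-b * (u - mu) ** 2 + c) - noise.logpdf(u)
    return (np.mean(np.logaddexp(0, -logratio(X)))    # -log sigmoid(G) on data
            + np.mean(np.logaddexp(0, logratio(Y))))  # -log(1 - sigmoid(G)) on noise

res = minimize(nce_loss, x0=np.zeros(3), method="Nelder-Mead")
mu_hat, b_hat = res.x[0], np.exp(res.x[1])
print(f"mu ≈ {mu_hat:.3f} (true 1.0), b ≈ {b_hat:.3f} (true 0.5)")
```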
|
10 |
Primal dual pursuit: a homotopy based algorithm for the Dantzig selector. Asif, Muhammad Salman, 10 July 2008.
Consider the following system model
y = Ax + e,
where x is an n-dimensional sparse signal, y is the measurement vector in a much lower dimension m, A is the measurement matrix, and e is the measurement error. The Dantzig selector estimates x by solving the following optimization problem:

minimize || x ||₁ subject to || A'(Ax − y) ||∞ ≤ ε. (DS)

This is a convex program that can be recast as a linear program and solved using any modern optimization method, e.g., interior point methods. We propose a fast and efficient scheme for solving the Dantzig selector (DS), which we call "primal-dual pursuit". This algorithm can be thought of as a "primal-dual homotopy" approach to solving (DS). It computes the solution to (DS) for a range of successively relaxed problems, starting with a large artificial ε and moving towards the desired value. Our algorithm iteratively updates the primal and dual supports as ε is reduced to the desired value, which yields the final solution. The homotopy path that the solution of (DS) traces out as ε varies is piecewise linear. At certain critical values of ε on this path, either new elements enter the support of the signal or existing elements leave it. We derive the optimality and feasibility conditions used to update the solution at these critical points. We also present a detailed analysis of primal-dual pursuit for sparse signals in the noiseless case. We show that if the signal is S-sparse, then we can find all S of its elements in exactly S steps using about S² log n random measurements, with very high probability.
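Since (DS) can be recast as a linear program, the sketch below solves a small instance with an off-the-shelf LP solver via the standard x = u − v splitting. This illustrates the (DS) program itself, not the primal-dual homotopy algorithm the thesis proposes; the problem sizes and ε are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)

# sparse recovery test problem: m measurements of an n-dimensional S-sparse signal
n, m, S = 60, 30, 4
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, S, replace=False)] = rng.normal(0, 2, S)
y = A @ x_true + rng.normal(0, 0.01, m)

# Dantzig selector as an LP: x = u - v with u, v >= 0,
# minimize sum(u + v) subject to -eps <= A'(A(u - v) - y) <= eps
eps = 0.05
M = A.T @ A
Aty = A.T @ y
A_ub = np.block([[M, -M], [-M, M]])
b_ub = np.concatenate([eps + Aty, eps - Aty])
c = np.ones(2 * n)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("relative recovery error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```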
|