  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Efficient Computation of Probabilities of Events Described by Order Statistics and Application to a Problem of Queues

Jones, Lee K., Larson, Richard C., 1943- 05 1900
Consider a set of N i.i.d. random variables in [0, 1]. When the experimental values of the random variables are arranged in ascending order, one has the order statistics of the set of random variables. In this note an O(N³) algorithm is developed for computing the probability that the order statistics vector lies in a given rectangle. The new algorithm is then applied to a problem of statistical inference in queues. Illustrative computational results are included.
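The abstract does not reproduce the O(N³) algorithm itself, but the quantity it computes can be sanity-checked by brute force. A minimal Monte Carlo sketch (function name, trial count, and seed are this note's own, not the paper's):

```python
import numpy as np

def order_stat_rectangle_prob_mc(a, b, n_trials=200_000, seed=0):
    """Monte Carlo estimate of P(a_i <= X_(i) <= b_i for all i), where
    X_(1) <= ... <= X_(N) are the order statistics of N i.i.d.
    Uniform(0, 1) random variables."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    rng = np.random.default_rng(seed)
    # Sort each trial's sample to obtain its order statistics.
    x = np.sort(rng.random((n_trials, a.size)), axis=1)
    hits = np.all((x >= a) & (x <= b), axis=1)
    return hits.mean()

# N = 2: P(X_(1) <= 0.5 and X_(2) >= 0.5) = 1 - 2*(1/2)^2 = 0.5
est = order_stat_rectangle_prob_mc([0.0, 0.5], [0.5, 1.0])
```

The point of the paper's algorithm is precisely to avoid this kind of sampling and compute such probabilities exactly in O(N³) time.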
32

Estimation procedures using order statistics

Diebolt, Daniel T. 03 June 2011
In recent times, the role of statistical methods based on order statistics has become more and more significant in statistical inference. Let Y1 < Y2 < ··· < Yn be the order statistics corresponding to a random sample of size n from a continuous distribution having probability density function f(x; θ), θ ∈ Ω. The purpose of this thesis is mainly to examine procedures for estimating the parameter θ using order statistics. The usual procedures for estimation of unknown parameters are based on the whole sample, without taking into account the order in which the sample is taken or arranging the observations in order of magnitude. Order statistics, and estimators based on them, are becoming more popular due to their frequent use in nonparametric inference and in robust procedures. Procedures based on order statistics are particularly useful when the examined data contain one or more extreme values or outliers. This thesis will provide useful insight into the problem of estimation using order statistics. Some works in this field will be studied, reviewed, and updated. Estimation based on order statistics using full and censored samples, for small and large data sets, will be investigated with reference to continuous distributions such as the normal distribution. In particular, estimation problems, hypothesis testing for location and scale parameters of some continuous distributions, and estimation of quantiles based on order statistics will be examined. / Ball State University, Muncie, IN 47306
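As one illustration of the kind of order-statistics-based estimation the thesis studies, here is a minimal sketch for the normal distribution: the sample median estimates location, and the interquartile range, rescaled by the standard-normal IQR of about 1.349, estimates scale. The function name and constants are this note's own, not the thesis's:

```python
import numpy as np

def robust_normal_estimates(sample):
    """Estimate normal location and scale from order statistics:
    median for location, IQR / 1.349 for scale, since
    Phi^-1(0.75) - Phi^-1(0.25) ~= 1.349 for the standard normal."""
    x = np.sort(np.asarray(sample, dtype=float))  # the order statistics
    mu_hat = np.median(x)
    q1, q3 = np.quantile(x, [0.25, 0.75])
    sigma_hat = (q3 - q1) / 1.349
    return mu_hat, sigma_hat

rng = np.random.default_rng(1)
mu_hat, sigma_hat = robust_normal_estimates(rng.normal(10.0, 2.0, size=50_000))
```

Both estimators are functions of a few order statistics only, so they remain stable when the sample is contaminated by outliers, which is the motivation the abstract gives.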
33

Tremor quantification and parameter extraction

Bejugam, Santosh January 2011
Tremor is a neurodegenerative disease causing involuntary muscle movements in human limbs. There are many types of tremor, caused by damage to the nerve cells that surround the thalamus of the forebrain. It is hard to distinguish or classify tremors, as there are many reasons behind the formation of a specific category, so every tremor type is named after its frequency type. Proper medication for a cure by a physician is possible only when the disease is identified. Because of the argument given above, there is a need for a device or technique to analyze the tremor and to extract the parameters associated with the signal. These extracted parameters can be used to classify the tremor for onward identification of the disease. Various diagnostic and treatment-monitoring equipment is available for many neuromuscular diseases. This thesis is concerned with tremor analysis for the purpose of recognizing certain other neurological disorders. A recording and analysis system for human tremor is developed. The analysis was performed based on frequency and amplitude parameters of the tremor. The Fast Fourier Transform (FFT) and higher-order spectra were used to extract frequency parameters (e.g., peak amplitude, fundamental frequency of tremor, etc.). In order to diagnose subjects' condition, classification was implemented by statistical significance tests (t-test).
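A minimal sketch of the FFT-based frequency-parameter extraction the abstract describes, applied to a synthetic 5 Hz "tremor" signal; the sampling rate, amplitude, and noise level are assumed for illustration:

```python
import numpy as np

fs = 200.0                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)  # 10 s recording
rng = np.random.default_rng(0)
# Synthetic tremor: a 5 Hz oscillation of amplitude 1.5 buried in noise.
signal = 1.5 * np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.standard_normal(t.size)

# Magnitude spectrum via the real FFT; bin spacing is fs / len(t) = 0.1 Hz.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

peak_idx = np.argmax(spectrum[1:]) + 1          # skip the DC bin
peak_freq = freqs[peak_idx]                     # fundamental tremor frequency
peak_amp = 2 * spectrum[peak_idx] / t.size      # rough amplitude estimate
```

The fundamental frequency and peak amplitude recovered this way are exactly the kind of parameters the thesis feeds into its classification step.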
34

Extreme value distribution quantile estimation

Buck, Debra L. January 1983
This thesis considers estimation of the quantiles of the smallest extreme value distribution, sometimes referred to as the log-Weibull distribution. The estimators considered are linear combinations of two order statistics. A table of the best linear unbiased estimates (BLUEs) is presented for sample sizes two through twenty. These estimators are compared to the asymptotic estimators of Kubat and Epstein (1980).
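The BLUE coefficient tables of the thesis are not reproduced here, but a simple estimator that is likewise a linear combination of two (adjacent) order statistics can be sketched as follows, applied to samples from a Gumbel (extreme value) distribution. The interpolation scheme is generic, not the thesis's optimal coefficients:

```python
import numpy as np

def two_order_stat_quantile(sample, p):
    """Estimate the p-th quantile as a linear combination of two
    adjacent order statistics X_(k) and X_(k+1) (plain interpolation,
    not the BLUE coefficients of the thesis)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    h = (n - 1) * p          # fractional index into the order statistics
    k = int(np.floor(h))
    g = h - k                # interpolation weight
    if k + 1 >= n:
        return x[-1]
    return (1 - g) * x[k] + g * x[k + 1]

rng = np.random.default_rng(2)
sample = rng.gumbel(0.0, 1.0, size=100_000)  # extreme value distribution
# True 0.90 quantile of Gumbel(0, 1) is -ln(-ln 0.9) ~= 2.2504.
q90 = two_order_stat_quantile(sample, 0.90)
```

The thesis's contribution is choosing the pair of order statistics and their weights optimally (in the BLUE sense) for small samples, rather than by this generic rule.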
35

Advances in statistical inference and outlier related issues.

Childs, Aaron Michael; Balakrishnan, N. Unknown Date
Thesis (Ph.D.)--McMaster University (Canada), 1996. / Source: Dissertation Abstracts International, Volume: 57-10, Section: B, page: 6347. Adviser: N. Balakrishnan.
36

Three essays on quantal response equilibrium model

Yi, Kang-Oh, January 1999
Thesis (Ph. D.)--University of California, San Diego, 1999. / Vita. Includes bibliographical references.
37

Three Essays on Complex Systems: Self-Sorting in a One-Dimensional Gas, Collective Motion in a Two-Dimensional Ensemble of Disks, and Environment-Driven Seasonality of Mosquito Abundance

Young, Alexander L. January 2017
Complex systems offer broad, unique research challenges due to their inability to be understood through a classic reductionist perspective, as they exhibit emergent phenomena that arise through the interactions of their components. In this thesis, we briefly review some characteristics of complex systems and the interplay of mathematical and computational methods to study them. We then discuss these approaches, how they are implemented, and how they support one another in three settings. First, we present a study that connects weather data to seasonal population-abundance of mosquitoes, using a microscopic model. Second, we consider the collective motions that arise in ensembles of disks interacting through non-elastic collisions and investigate how such behaviors affect macroscopic transport properties. Finally, we consider a 'self-sorting' one-dimensional collection of point-particles. In all of these cases, agent-based models and simulations are used to guide analysis, and in the final example, we explain how the simulations led to new theorems. Articles and molecular dynamics computer codes are provided as appendices.
38

New Analytics Paradigms in Online Advertising and Fantasy Sports

Singal, Raghav January 2020
Over the last two decades, digitization has been drastically shifting the way businesses operate and has provided access to high volume, variety, velocity, and veracity data. Naturally, access to such granular data has opened a wider range of possibilities than previously available. We leverage such data to develop application-driven models in order to evaluate current systems and make better decisions. We explore three application areas. In Chapter 1, we develop models and algorithms to optimize portfolios in daily fantasy sports (DFS). We use opponent-level data to predict behavior of fantasy players via a Dirichlet-multinomial process, and our predictions feed into a novel portfolio construction model. The model is solved via a sequence of binary quadratic programs, motivated by its connection to outperforming stochastic benchmarks, the submodularity of the objective function, and the theory of order statistics. In addition to providing theoretical guarantees, we demonstrate the value of our framework by participating in DFS contests. In Chapter 2, we develop an axiomatic framework for attribution in online advertising, i.e., assessing the contribution of individual ads to product purchase. Leveraging a user-level dataset, we propose a Markovian model to explain user behavior as a function of the ads she is exposed to. We use our model to illustrate limitations of existing heuristics and propose an original framework for attribution, which is motivated by causality and game theory. Furthermore, we establish that our framework coincides with an adjusted "unique-uniform" attribution scheme. This scheme is efficiently implementable and can be interpreted as a correction to the commonly used uniform attribution scheme. We supplement our theory with numerics using a real-world large-scale dataset. In Chapter 3, we propose a decision-making algorithm for personalized sequential marketing.
As in attribution, using a user-level dataset, we propose a state-based model to capture user behavior as a function of the ad interventions. In contrast with existing approaches that model only the myopic value of an intervention, we also model the long-run value. The objective of the firm is to maximize the probability of purchase and a key challenge it faces is the lack of understanding of the state-specific effects of interventions. We propose a model-free learning algorithm for decision-making in such a setting. Our algorithm inherits the simplicity of Thompson sampling for a multi-armed bandit setting and we prove its asymptotic optimality. We supplement our theory with numerics on an email marketing dataset.
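A minimal sketch of the Dirichlet-multinomial view of opponent behavior from Chapter 1: historical selection counts act as Dirichlet pseudo-counts, and each opponent's picks are multinomial given a sampled preference vector. The athlete pool, prior counts, and portfolio sizes below are invented for illustration, not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pseudo-counts over a pool of 4 athletes.
alpha = np.array([40.0, 25.0, 20.0, 15.0])

def sample_opponent_portfolios(n_opponents, picks_per_opponent):
    """Draw opponent selections from a Dirichlet-multinomial process:
    each opponent gets a preference vector from Dirichlet(alpha),
    then picks athletes multinomially according to it."""
    probs = rng.dirichlet(alpha, size=n_opponents)
    return np.array([rng.multinomial(picks_per_opponent, p) for p in probs])

portfolios = sample_opponent_portfolios(10_000, 5)
# Aggregate selection shares should track alpha / alpha.sum().
expected_share = portfolios.sum(axis=0) / portfolios.sum()
```

In the dissertation, predictions of this kind feed a portfolio-construction model whose objective involves order statistics of opponents' scores; only the behavioral sampling step is sketched here.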
39

Inference procedures based on order statistics

Frey, Jesse C. 01 August 2005
No description available.
40

Automated Tracking of Mouse Embryogenesis from Large-scale Fluorescence Microscopy Data

Wang, Congchao 03 June 2021
Recent breakthroughs in microscopy techniques and fluorescence probes enable the recording of mouse embryogenesis at the cellular level for days, easily generating terabyte-level 3D time-lapse data. Since millions of cells are involved, this information-rich data brings a natural demand for an automated tool for its comprehensive analysis. This tool should automatically (1) detect and segment cells at each time point and (2) track cell migration across time. Most existing cell tracking methods cannot scale to the data with such large size and high complexity. For those purposely designed for embryo data analysis, the accuracy is heavily sacrificed. Here, we present a new computational framework for the mouse embryo data analysis with high accuracy and efficiency. Our framework detects and segments cells with a fully probability-principled method, which not only has high statistical power but also helps determine the desired cell territories and increase the segmentation accuracy. With the cells detected at each time point, our framework reconstructs cell traces with a new minimum-cost circulation-based paradigm, CINDA (CIrculation Network-based Data Association). Compared with the widely used minimum-cost flow-based methods, CINDA guarantees the global optimal solution with the best-of-known theoretical worst-case complexity and hundreds to thousands of times practical efficiency improvement. Since the information extracted from a single time point is limited, our framework iteratively refines cell detection and segmentation results based on the cell traces which contain more information from other time points. Results show that this dramatically improves the accuracy of cell detection, segmentation, and tracking. To make our work easy to use, we designed a standalone software, MIVAQ (Microscopic Image Visualization, Annotation, and Quantification), with our framework as the backbone and a user-friendly interface.
With MIVAQ, users can easily analyze their data and visually check the results. / Doctor of Philosophy / Mouse embryogenesis studies mouse embryos from fertilization to tissue and organ formation. The current microscope and fluorescent labeling technique enable the recording of the whole mouse embryo for a long time with high resolution. The generated data can be terabyte-level and contains more than one million cells. This information-rich data brings a natural demand for an automated tool for its comprehensive analysis. This tool should automatically (1) detect and segment cells at each time point to get the information of cell morphology and (2) track cell migration across time. However, the development of analytical tools lags far behind the capability of data generation. Existing tools either cannot scale to the data with such large size and high complexity or sacrifice accuracy heavily for efficiency. In this dissertation, we present a new computational framework for the mouse embryo data analysis with high accuracy and efficiency. To make our framework easy to use, we also designed a standalone software, MIVAQ, with a user-friendly interface. With MIVAQ, users can easily analyze their data and visually check the results.
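CINDA itself solves a global minimum-cost circulation over all frames; as a much smaller illustration of the data-association idea, here is a per-frame linking step via the Hungarian algorithm (scipy's `linear_sum_assignment`), with invented cell coordinates. This greedy frame-by-frame sketch is not the dissertation's method:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(cells_t, cells_t1):
    """Match each detected cell at time t to a cell at time t+1 by
    minimizing total squared displacement. CINDA instead solves a
    global minimum-cost circulation over every frame at once."""
    diff = cells_t[:, None, :] - cells_t1[None, :, :]
    cost = np.sum(diff ** 2, axis=2)            # pairwise squared distances
    rows, cols = linear_sum_assignment(cost)    # optimal 1-to-1 matching
    return list(zip(rows.tolist(), cols.tolist()))

cells_t = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
cells_t1 = np.array([[10.2, 0.1], [0.1, 0.2], [5.2, 4.9]])  # same cells, shuffled
links = link_frames(cells_t, cells_t1)
```

Chaining such per-frame matchings is the baseline that global formulations like minimum-cost flow and CINDA's circulation network improve upon, since a globally optimal solution can trade off costs across many frames at once.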
