41

Sensitivity analysis of optimization: Examining sensitivity of bottleneck optimization to input data models

Ekberg, Marie January 2016 (has links)
The aim of this thesis is to examine the sensitivity of SCORE optimization to the accuracy of particular input data models used in a simulation model of a production line. The purpose is to evaluate whether it is sufficient to model input data using the sample mean and default distributions instead of fitted distributions. An existing production line was modeled for the simulation study. SCORE maximizes a key performance measure of the production line while simultaneously minimizing the number of improvements necessary to achieve maximum performance. The sensitivity to the input models should become more apparent the more changes are required. The experiments showed that the optimization struggles to converge when fitted distribution models are used. Tuning the optimization's input parameters might yield better results. The final conclusion is that the optimization is sensitive to which input data models are used in the simulation model.
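The trade-off under study can be illustrated with a small tandem-line experiment (a hypothetical Python sketch, not the SCORE tool or the thesis's actual production-line model): the same two-station line is simulated once with a mean-only input model and once with a fitted lognormal input model, and throughput is compared.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical observed cycle times (seconds) for one station
observed = rng.lognormal(mean=1.5, sigma=0.4, size=200)

def line_throughput(t1, t2):
    """Two stations in series, no buffer: classic tandem recursion.
    Station 2 starts a part only when station 1 finishes it and
    station 2 has finished the previous part."""
    f1 = np.cumsum(t1)                    # finish times at station 1
    f2 = np.empty_like(f1)
    f2[0] = f1[0] + t2[0]
    for i in range(1, len(t1)):
        f2[i] = max(f1[i], f2[i - 1]) + t2[i]
    return len(t1) / f2[-1]               # parts per second

n = 2000
# Input model A: sample mean only (deterministic cycle times)
t_mean = np.full(n, observed.mean())
tp_mean = line_throughput(t_mean, t_mean)

# Input model B: fitted lognormal via log-moments
mu, s = np.log(observed).mean(), np.log(observed).std()
tp_fitted = line_throughput(rng.lognormal(mu, s, n),
                            rng.lognormal(mu, s, n))
```

Because mean-only inputs hide the blocking and starvation losses that variability causes, the two input models give different throughputs, which is the kind of input-model sensitivity the thesis examines.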
42

Modeling and Survival Analysis of Breast Cancer: A Statistical, Artificial Neural Network, and Decision Tree Approach

Mudunuru, Venkateswara Rao 26 March 2016 (has links)
Survival analysis is today widely applied in the medical and biological sciences, the social sciences, econometrics, and engineering. Its basic principle is a statistical approach designed to account for the time elapsed during a study period, or the time between entry into observation and a subsequent event. Here the event of interest is death, and the analysis consists of following each subject until death. Events or outcomes are defined by a transition from one discrete state to another at an instantaneous moment in time. In recent years, research in survival analysis has grown greatly because of its wide use in the biosciences and pharmaceutical studies. After identifying the probability density function that best characterizes the tumors and survival times of women with breast cancer, one purpose of this research is to compare the efficiency of competing estimators of the survival function. Our study includes parametric, semi-parametric, and nonparametric analysis of probability survival models. Artificial Neural Networks (ANNs), recently applied to a range of clinical, business, forecasting, time-series prediction, and other problems, are computational systems consisting of artificial neurons called nodes arranged in layers with interconnecting links. The main interest in neural networks comes from their ability to approximate complex nonlinear functions. Among the wide range of available neural networks, most research concentrates on feed-forward networks called multi-layer perceptrons (MLPs). One important component of an ANN is the activation function, and a number of activation functions are in common use; this work discusses their properties in multilayer neural networks applied to breast cancer stage classification.
The main objective of this work is to compare and analyze the performance of MLPs trained with the back-propagation algorithm, using various activation functions for the neurons of the hidden and output layers, on the stage classification of breast cancer data. Survival analysis can be cast as a classification problem, to which machine-learning methods apply: by establishing meaningful time intervals for a particular situation, survival analysis is easily seen as a classification task. Survival analysis methods deal with waiting time, i.e. the time until occurrence of an event. A commonly used method to classify this sort of data is logistic regression, but the underlying assumptions of that model do not always hold. In model building, choosing an appropriate model depends on the complexity and characteristics of the data. Two strategies now in frequent use, artificial neural networks (ANNs) and decision trees (DTs), require only minimal assumptions. DTs and ANNs are widely used methodological tools based on nonlinear models, and they can provide better prediction and classification results than traditional methodologies such as logistic regression. This study compares the predictions of ANN, DT, and logistic models of breast cancer survival. Our goal is to design models, using both artificial neural networks and logistic regression, that precisely predict the output (survival) of breast cancer patients; finally, we compare the performance of these models using receiver operating characteristic (ROC) analysis.
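As an illustration of the model comparison described above, the following sketch fits a logistic regression and an MLP (back-propagation, tanh activation) and scores both with ROC AUC. The dataset and hyperparameters are stand-ins, not the thesis's breast cancer data or settings.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the breast cancer records used in the thesis
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    # MLP trained by back-propagation; the activation function is the
    # knob the thesis compares (e.g. 'logistic', 'tanh', 'relu')
    "mlp_tanh": MLPClassifier(hidden_layer_sizes=(16,), activation="tanh",
                              max_iter=2000, random_state=0),
}
# Fit each model and compare via area under the ROC curve
auc = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
       for name, m in models.items()}
```

Swapping the `activation` argument (or adding a `DecisionTreeClassifier` to the dictionary) reproduces the kind of side-by-side ROC comparison the study performs.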
43

Optimalizační metody s využitím simulací v MS Excel / The Optimization Methods with Utilization of the Simulation in MS Excel

Škulavíková, Štěpánka January 2008 (has links)
The thesis is based on an original application programmed in VBA in MS Excel 2007. The reason to build this application was to integrate Monte Carlo simulation with chosen optimization methods. The application allows simulation of the knapsack problem and of the assignment problem under uncertainty. The parameters of these models can be set up as varying values drawn from a chosen probability distribution. The output of the simulation is a probabilistic recommendation of which objects should be used; the choice of objects depends on the optimized models. The results of both models are presented through statistical indexes, tables of parameters, and graphs.
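A minimal Python sketch of the idea (the original is a VBA application; the item data, distribution, and objective here are assumptions): sample uncertain item values repeatedly, solve the knapsack exactly on each draw, and report how often each item appears in the optimum.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = np.array([3, 4, 5, 6, 7])
capacity = 12
# Item values are uncertain: modeled as normal around a nominal mean
value_means = np.array([6.0, 7.0, 9.0, 10.0, 12.0])
value_sd = 1.5

def solve_knapsack(values, weights, capacity):
    """Exact 0/1 knapsack by enumeration (5 items -> 32 subsets)."""
    best, best_set = -1.0, ()
    n = len(values)
    for mask in range(1 << n):
        idx = [i for i in range(n) if mask >> i & 1]
        if sum(weights[i] for i in idx) <= capacity:
            v = sum(values[i] for i in idx)
            if v > best:
                best, best_set = v, tuple(idx)
    return best_set

# Monte Carlo loop: one knapsack solve per sampled value vector
counts = np.zeros(len(weights))
n_runs = 500
for _ in range(n_runs):
    sampled = rng.normal(value_means, value_sd)
    for i in solve_knapsack(sampled, weights, capacity):
        counts[i] += 1
selection_prob = counts / n_runs   # how often each item is optimal
```

The resulting frequencies play the role of the application's probabilistic recommendation of which objects to use.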
44

Desenho de polígonos e sequenciamento de blocos de minério para planejamento de curto prazo procurando estacionarização dos teores / Polygon design and ore-block sequencing for short-term planning seeking grade stationarity

Toledo, Augusto Andres Torres January 2018 (has links)
Short-term planning in open-pit mines requires the definition of polygons identifying the successive mining advances. These polygons are traditionally drawn in a labour-intensive process attempting to delineate ore with quantity and quality within established limits. The delineated ore should have the least possible quality variability, helping to maximize recovery at the processing plant. This thesis develops a workflow for drawing short-term polygons automatically and for sequencing all ore blocks within each polygon, leading to a mineable, connected sequence of polygons. The workflow is also tested under grade uncertainty obtained through multiple stochastic simulated models. Genetic algorithms were developed in the Python programming language and implemented as a plug-in for the Ar2GeMS geostatistical software. Multiple iterations are generated for each individual advance, producing regions (or polygons), and the region with the lowest grade variability is selected. The probability distribution of block grades within each advance is compared with the global grade distribution computed from all blocks of the ore body. Results show that the polygons generated this way comprise block grades similar to the reference distribution, leading to a mining sequence whose grade distribution is as close as possible to the global one, maintaining quasi-stationarity. Equally probable models provide the means to assess the uncertainty in the proposed solution.
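The selection step can be sketched as a tiny genetic algorithm in Python (a stand-in with a made-up block model and a simple quantile-matching fitness, not the thesis's Ar2GeMS plug-in): evolve a fixed-size subset of blocks whose grade distribution best matches the global one.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical block model: grades for 200 blocks (the thesis uses
# stochastic simulated realizations; this is a synthetic stand-in)
grades = rng.lognormal(0.5, 0.6, 200)
advance_size = 30            # blocks per short-term advance

def fitness(mask):
    """Closeness of the advance's grade distribution to the global one,
    measured on a few matched quantiles (higher = better)."""
    qs = [0.25, 0.5, 0.75]
    return -float(np.abs(np.quantile(grades[mask], qs)
                         - np.quantile(grades, qs)).sum())

def random_mask():
    idx = rng.choice(len(grades), advance_size, replace=False)
    m = np.zeros(len(grades), dtype=bool)
    m[idx] = True
    return m

# Tiny GA: binary tournament + swap mutation, steady-state replacement
pop = [random_mask() for _ in range(30)]
for _ in range(100):
    a, b = rng.integers(len(pop), size=2)
    winner = pop[a] if fitness(pop[a]) >= fitness(pop[b]) else pop[b]
    child = winner.copy()
    out = rng.choice(np.flatnonzero(child))     # remove one block
    inn = rng.choice(np.flatnonzero(~child))    # add one block
    child[out], child[inn] = False, True
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    if fitness(child) > fitness(pop[worst]):
        pop[worst] = child
best = max(pop, key=fitness)
```

In the thesis the candidate regions must additionally be connected and mineable; that spatial constraint is omitted here for brevity.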
45

Odhady diskrétních rozdělení pravděpodobnosti pro aplikace / Estimates of Discrete Probability Distributions for Applications

Mašek, Jakub January 2016 (has links)
This master's thesis focuses on the solution of the statistical problem of finding the probability distribution of a discrete random variable on the basis of observed data. These estimates are obtained by minimizing a pseudo-quasinorm, which is introduced here. The thesis further examines the attributes of this pseudo-quasinorm. It also contains practical applications of these methods.
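The general shape of such an estimation problem can be sketched with a standard solver (the thesis's pseudo-quasinorm is its own construction; here the Euclidean norm stands in as the objective): minimize a norm of the probability vector subject to constraints implied by the observed data.

```python
import numpy as np
from scipy.optimize import minimize

# Support of the discrete random variable and an observed sample mean
# (both assumed for illustration, e.g. a six-sided die)
support = np.arange(1, 7)
observed_mean = 4.1

cons = [
    # probabilities must sum to one ...
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    # ... and reproduce the observed mean
    {"type": "eq", "fun": lambda p: p @ support - observed_mean},
]
# Minimizing ||p||^2 picks the "most uniform" feasible distribution
res = minimize(lambda p: p @ p, x0=np.full(6, 1 / 6),
               bounds=[(0, 1)] * 6, constraints=cons)
p_hat = res.x
```

Replacing the objective `p @ p` with another (quasi-)norm changes which feasible distribution is selected, which is exactly the design choice the thesis studies.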
46

An Aggregate Stochastic Model Incorporating Individual Dynamics for Predation Movements of Anelosimus Studiosus

Quijano, Alex John, Joyner, Michele L., Seier, Edith, Hancock, Nathaniel, Largent, Michael, Jones, Thomas C. 01 June 2015 (has links)
In this paper, we discuss methods for developing a stochastic model that incorporates behavioral differences in the predation movements of Anelosimus studiosus (a subsocial spider). Stochastic models for animal movement, and spider predation movement in particular, have been developed previously; this paper focuses on the mathematical and statistical methods required to expand such a model to capture a variety of distinct behaviors. A least-squares optimization algorithm is used to fit a single stochastic model to each individual spider during predation, resulting in unique parameter values per spider. Similarities and variations in parameter values across spiders are analyzed and used to estimate probability distributions for the variable parameters. An aggregate stochastic model is then created that incorporates the individual dynamics. The comparison between the optimal individual models and the aggregate model indicates that the methodology and algorithm developed in this paper are appropriate for simulating a range of individualistic behaviors.
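The per-individual fitting step can be illustrated with a toy model (an assumed Ornstein-Uhlenbeck-type walk, not the paper's actual movement model): least squares recovers each individual's parameter, and the fitted values are then summarized across individuals for an aggregate model.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_path(k, n=400, dt=0.1, sigma=0.3):
    """One individual's 1-D track: attraction of strength k toward the
    prey at x = 0 plus noise (dx = -k*x*dt + sigma*sqrt(dt)*N(0,1))."""
    x = np.empty(n)
    x[0] = 5.0
    for t in range(1, n):
        x[t] = (x[t-1] - k * x[t-1] * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

def fit_k(x, dt=0.1):
    """Least-squares estimate of k: regress the observed increments
    (x[t] - x[t-1]) on the predictor -x[t-1]*dt."""
    dx = np.diff(x)
    pred = -x[:-1] * dt
    return float(pred @ dx / (pred @ pred))

# 25 individuals, each with its own true parameter
true_ks = rng.gamma(shape=4.0, scale=0.1, size=25)
fitted = np.array([fit_k(simulate_path(k)) for k in true_ks])

# Aggregate model: treat k as random across individuals, summarized
# here by its sample mean and variance
k_mean, k_var = fitted.mean(), fitted.var()
```

Drawing `k` from the fitted across-individual distribution when simulating new tracks is the essence of the aggregate model described above.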
47

Kvazinormy diskrétních rozdělení pravděpodobnosti a jejich aplikace / Quasinorms of Discrete Probability Distributions and their Applications

Šácha, Jakub January 2013 (has links)
This dissertation focuses on the solution of the statistical problem of finding the probability distribution of a discrete random variable on the basis of observed data. These estimates are obtained by minimizing quasi-norms under given constraints. The thesis further focuses on deriving confidence intervals for the estimated probabilities. It also contains practical applications of these methods.
48

The Interquartile Range: Theory and Estimation.

Whaley, Dewey Lonzo 16 August 2005 (has links) (PDF)
The interquartile range (IQR) is used to describe the spread of a distribution. In an introductory statistics course, the IQR might be introduced as simply the “range within which the middle half of the data points lie.” In other words, it is the distance between the two quartiles, IQR = Q3 - Q1. We will compute the population IQR, the expected value, and the variance of the sample IQR for various continuous distributions. In addition, a bootstrap confidence interval for the population IQR will be evaluated.
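The quantities described can be computed directly; the sketch below (with an assumed normal sample, not data from the thesis) computes the sample IQR = Q3 - Q1 and a percentile bootstrap confidence interval for the population IQR.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=100, scale=15, size=500)   # hypothetical sample

# Sample IQR: distance between the first and third quartiles
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1

# Percentile bootstrap: resample with replacement, recompute the IQR,
# and take the middle 95% of the bootstrap distribution
boot = np.array([
    np.subtract(*np.percentile(rng.choice(data, data.size), [75, 25]))
    for _ in range(2000)
])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

For a N(100, 15) population the true IQR is 2(0.6745)(15), about 20.2, so the interval can be checked against a known target in this synthetic setting.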
49

An Integrated Framework for Automated Data Collection and Processing for Discrete Event Simulation Models

Rodriguez, Carlos 01 January 2015 (has links)
Discrete Event Simulation (DES) is a powerful modeling and analysis tool used across disciplines. DES models require data to determine the parameters that drive the simulations. The literature on DES input data management indicates that preparing the necessary input data is often a highly manual process, causing inefficiencies, significant time consumption, and a negative user experience. This research addresses the manual data collection and processing (MDCAP) problem prevalent in DES projects. It presents an integrated framework that solves the MDCAP problem by classifying the data needed for DES projects into three generic classes. This classification permits automating and streamlining data preparation, allowing DES modelers to collect, update, visualize, fit, validate, tally, and test data in real time through intuitive actions. In addition to the theoretical framework, the project introduces an innovative user interface programmed around the framework's ideas, called DESI (Discrete Event Simulation Inputs). The framework was evaluated against benchmark measures from the literature to show its positive impact on DES input data management. This investigation demonstrates that the proposed framework, instantiated by the DESI interface, addresses current gaps in the field, reduces the time devoted to input data management within DES projects, and advances the state of the art in DES input data management automation.
50

RISK BASED ANALYSIS AND DESIGN OF STIFFENED PLATES

Dwire, Heather B. 08 May 2008 (has links)
No description available.
