591

Investigating Statistics Teachers' Knowledge of Probability in the Context of Hypothesis Testing

Dolor, Jason Mark Asis 05 October 2017 (has links)
In the last three decades, there has been a significant growth in the number of undergraduate students taking introductory statistics. As a result, there is a need by universities and community colleges to find well-qualified instructors and graduate teaching assistants to teach the growing number of statistics courses. Unfortunately, research has shown that even teachers of introductory statistics struggle with concepts they are employed to teach. The data presented in this research sheds light on the statistical knowledge of graduate teaching assistants (GTAs) and community college instructors (CCIs) in the realm of probability by analyzing their work on surveys and task-based interviews on the p-value. This research could be useful for informing professional development programs to better support present and future teachers of statistics.
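The p-value reasoning probed by the surveys and task-based interviews can be illustrated with a minimal sketch of the computation students and teachers are asked to interpret (the test settings and numbers here are hypothetical, not drawn from the study's instruments):

```python
from math import erf, sqrt

def z_test_p_value(sample_mean, mu0, sigma, n):
    """Two-sided p-value for a one-sample z-test with known sigma."""
    z = (sample_mean - mu0) / (sigma / sqrt(n))
    # Standard normal CDF expressed via the error function.
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return 2.0 * (1.0 - phi(abs(z)))

# Hypothetical task: observed sample mean 52 against H0: mu = 50,
# with sigma = 10 and n = 100.
p = z_test_p_value(52, 50, 10, 100)
```

The point of such tasks is the interpretation: the p-value is the probability, assuming the null hypothesis is true, of observing a result at least as extreme as the one obtained, not the probability that the null hypothesis is true.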
592

Asynchronous policy iteration algorithms for Bounded-parameter Markov Decision Processes

Reis, Willy Arthur Silva 02 August 2019 (has links)
A Markov Decision Process (MDP) can be used to model sequential decision problems. However, there may be limitations in obtaining the probabilities needed to model state transitions, or the available information about these probabilities may be unreliable. A less restrictive model that addresses this problem is the Bounded-parameter Markov Decision Process (BMDP), which allows imprecise representation of the transition probabilities and reasoning about a robust solution. To solve infinite-horizon BMDPs, there are synchronous algorithms such as Interval Value Iteration and Robust Policy Iteration, which are inefficient for large state spaces. In this work, we propose new asynchronous Policy Iteration algorithms based on partitioning the state space into random subsets (Robust Asynchronous Policy Iteration, RAPI) or into strongly connected components (Robust Topological Policy Iteration, RTPI). We also propose ways to initialize the value function and policy of the algorithms in order to improve their convergence. The performance of the proposed algorithms is evaluated against the Robust Policy Iteration algorithm for BMDPs on existing planning domains and on a proposed new domain. The experimental results show that (i) the more structured the domain, the better the performance of the RTPI algorithm; (ii) the use of parallel computing in the RAPI algorithm yields a small computational gain over its sequential version; and (iii) a good initialization of the value function and policy can positively impact the convergence time of the algorithms.
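The robust backup underlying these methods can be sketched on a toy problem. The code below is a simplified illustration, not the thesis's algorithms: it implements the pessimistic Bellman backup (the adversary picks the worst transition distribution inside the interval bounds) and iterates it synchronously; all states, rewards, and intervals are hypothetical.

```python
def worst_case_expectation(values, lows, highs):
    """Adversarial transition choice inside interval bounds: pile as much
    probability as the upper bounds allow onto the lowest-valued
    successors first, starting every successor at its lower bound."""
    p = list(lows)
    remaining = 1.0 - sum(lows)
    for i in sorted(range(len(values)), key=lambda i: values[i]):
        extra = min(highs[i] - lows[i], remaining)
        p[i] += extra
        remaining -= extra
    return sum(pi * v for pi, v in zip(p, values))

def robust_value_iteration(n_states, rewards, bounds, gamma=0.9, eps=1e-8):
    """Synchronous robust Bellman backups until convergence.
    bounds[s][a][t] is the (low, high) interval for the probability of
    moving from state s to state t under action a."""
    V = [0.0] * n_states
    while True:
        V_new = [
            max(rewards[s][a] + gamma * worst_case_expectation(
                    V,
                    [bounds[s][a][t][0] for t in range(n_states)],
                    [bounds[s][a][t][1] for t in range(n_states)])
                for a in range(len(rewards[s])))
            for s in range(n_states)
        ]
        if max(abs(a - b) for a, b in zip(V_new, V)) < eps:
            return V_new
        V = V_new
```

When every interval degenerates to a point, the backup reduces to ordinary value iteration; the asynchronous algorithms in the thesis change the order in which states receive these backups, not the backup itself.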
593

Spatial sampling and prediction

Schelin, Lina January 2012 (has links)
This thesis discusses two aspects of spatial statistics: sampling and prediction. In spatial statistics, we observe some phenomenon in space. Space is typically of two or three dimensions, but can be of higher dimension. Questions in mind could be: What is the total amount of gold in a gold mine? How much precipitation could we expect at a specific unobserved location? What is the total tree volume in a forest area? In spatial sampling the aim is to estimate global quantities, such as population totals, based on samples of locations (papers III and IV). In spatial prediction the aim is to estimate local quantities, such as the value at a single unobserved location, with a measure of uncertainty (papers I, II and V). In papers III and IV, we propose sampling designs for selecting representative probability samples in the presence of auxiliary variables. If the phenomena under study have clear trends in the auxiliary space, estimation of population quantities can be improved by using representative samples. Such samples also enable estimation of population quantities in subspaces and are especially needed for multi-purpose surveys, when several target variables are of interest. In papers I and II, the objective is to construct valid prediction intervals for the value at a new location, given observed data. Prediction intervals typically rely on the kriging predictor having a Gaussian distribution. In paper I, we show that the distribution of the kriging predictor can be far from Gaussian, even asymptotically. This motivated us to propose a semiparametric method that does not require distributional assumptions. Prediction intervals are constructed from the plug-in ordinary kriging predictor.
The semi-naive method is compared to one model-based method and two naive methods, all based on variants of the kriging predictor.
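The ordinary kriging predictor that the prediction intervals are built around can be sketched as follows. This is a minimal plug-in version: the covariance model (here an exponential, cov(h) = exp(-h)) is an assumption for illustration, whereas in practice it is estimated from the data.

```python
import numpy as np

def ordinary_kriging(coords, z, target, cov=lambda h: np.exp(-h)):
    """Plug-in ordinary kriging predictor at a single target location.
    Solves the bordered kriging system, whose last row/column is the
    Lagrange multiplier enforcing that the weights sum to one."""
    n = len(z)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(h)                 # covariances between observations
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)[:n]      # kriging weights (sum to 1)
    return float(w @ z)

# Toy data: three observed locations and a prediction at one of them.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([1.0, 2.0, 3.0])
pred = ordinary_kriging(coords, z, np.array([0.0, 0.0]))
```

A useful sanity check on any kriging implementation is exactness: predicting at an observed location returns the observation itself.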
594

Queueing Analysis of a Priority-based Claim Processing System

Ibrahim, Basil January 2009 (has links)
We propose a situation in which a single employee is responsible for processing incoming claims to an insurance company that can be classified as being one of two possible types. More specifically, we consider a priority-based system having separate buffers to store high priority and low priority incoming claims. We construct a mathematical model and perform queueing analysis to evaluate the performance of this priority-based system, which incorporates the possibility of claims being redistributed, lost, or prematurely processed.
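The basic two-class priority effect can be illustrated with a textbook stand-in: Cobham's mean-wait formulas for a non-preemptive two-class M/M/1 priority queue. This is a simplified sketch, not the thesis's model, which additionally allows claims to be redistributed, lost, or prematurely processed; the arrival and service rates below are hypothetical.

```python
def priority_mm1_waits(lam_hi, lam_lo, mu):
    """Mean time spent waiting in the buffer for high- and low-priority
    claims in a non-preemptive two-class M/M/1 queue with a common
    exponential service rate mu (Cobham's formulas)."""
    rho_hi, rho_lo = lam_hi / mu, lam_lo / mu
    assert rho_hi + rho_lo < 1.0, "unstable system"
    # Mean residual work an arriving claim finds in service:
    # sum_i lambda_i * E[S^2] / 2, with E[S^2] = 2 / mu^2 for exponential S.
    w0 = (lam_hi + lam_lo) / mu ** 2
    w_hi = w0 / (1.0 - rho_hi)
    w_lo = w0 / ((1.0 - rho_hi) * (1.0 - rho_hi - rho_lo))
    return w_hi, w_lo

w_hi, w_lo = priority_mm1_waits(lam_hi=0.3, lam_lo=0.3, mu=1.0)
```

As expected, the high-priority buffer empties faster: low-priority claims pay for the priority discipline with a longer mean wait.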
596

An evolving-requirements technology assessment process for advanced propulsion concepts

McClure, Erin Kathleen 07 July 2006 (has links)
This dissertation investigates the development of a methodology suitable for the evaluation of advanced propulsion concepts. At early stages of development, both the future performance of these concepts and their requirements are highly uncertain, making it difficult to forecast their future value. A systematic methodology to identify potential advanced propulsion concepts and assess their robustness is necessary to reduce the risk of developing advanced propulsion concepts. Existing advanced design methodologies have evaluated the robustness of technologies or concepts to variations in requirements, but they are not suitable for evaluating a large number of dissimilar concepts. Variations in requirements have been shown to impact the development of advanced propulsion concepts, and any method designed to evaluate these concepts must incorporate the possible variations of the requirements into the assessment. In order to do so, a methodology had to do two things. First, it had to systematically identify a probabilistic distribution for the future requirements. Such a distribution would allow decision-makers to quantify the uncertainty introduced by variations in requirements. Second, the methodology must assess the robustness of the propulsion concepts as a function of that distribution. These enabling elements have been synthesized into a new methodology, the Evolving Requirements Technology Assessment (ERTA) method. The ERTA method was used to evaluate and compare advanced propulsion systems as possible power systems for a hurricane-tracking, High Altitude, Long Endurance (HALE) unmanned aerial vehicle (UAV). The problem served as a good demonstration of the ERTA methodology because conventional propulsion systems will not be sufficient to power the UAV, but the requirements for such a vehicle are still uncertain.
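The two enabling elements can be sketched together with a Monte Carlo computation: draw requirement scenarios from a forecast distribution, then score each concept by the probability that it satisfies the drawn requirement. Everything in this sketch is hypothetical (the endurance requirement, its Normal(120, 15) distribution, and the concept performance figures), chosen only to make the robustness-as-a-function-of-the-distribution idea concrete.

```python
import random

def robustness(performance, draw_requirement, n_draws=10000, seed=0):
    """Monte Carlo robustness score: the estimated probability that a
    concept's (assumed fixed) performance meets a requirement drawn
    from its forecast distribution."""
    rng = random.Random(seed)
    met = sum(performance >= draw_requirement(rng) for _ in range(n_draws))
    return met / n_draws

# Hypothetical: required endurance in hours, forecast as Normal(120, 15).
endurance_req = lambda rng: rng.gauss(120.0, 15.0)
score_strong = robustness(140.0, endurance_req)  # concept with 140 h endurance
score_weak = robustness(100.0, endurance_req)    # concept with 100 h endurance
```

A concept whose performance sits well above the bulk of the requirement distribution scores near one; a marginal concept's score reveals exactly how exposed it is to requirements evolving upward.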
597

Homogenization Relations for Elastic Properties Based on Two-Point Statistical Functions

Peydaye Saheli, Ghazal 06 April 2006 (has links)
In this research, the homogenization relations for elastic properties in isotropic and anisotropic materials are studied by applying two-point statistical functions to composite and polycrystalline materials. The validity of the results is investigated by direct comparison with experimental results. In today's technology, where advanced processing methods can provide materials with a variety of morphologies and features at different scales, a methodology to link property to microstructure is necessary to develop a framework for material design. Statistical distribution functions are commonly used for the representation of microstructures and also for homogenization of material properties. The use of two-point statistics allows the materials designer to consider morphology and distribution, in addition to the properties of individual phases and components, in the design space. This work is focused on studying the effect of anisotropy on the homogenization technique based on two-point statistics. The contributions of one-point and two-point statistics to the calculation of elastic properties of isotropic and anisotropic composites and textured polycrystalline materials are investigated. For this purpose, an isotropic and an anisotropic composite are simulated, and an empirical form of the two-point probability functions is used, which allows the construction of a composite hull. The homogenization technique is also applied to two samples of Al-SiC composite that were fabricated through extrusion with two different particle size ratios (PSR). To validate the applied methodology, the elastic properties of the composites are measured by ultrasonic methods. This methodology is then extended to completely random and textured polycrystalline materials with hexagonal crystal symmetry, and the effect of cold rolling on the annealing texture of a near-α titanium alloy is presented.
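The central statistical object here, the two-point probability function, is simple to estimate empirically. The sketch below (an illustration on a synthetic random two-phase microstructure, not the thesis's materials) computes P2(r), the probability that two points separated by a lattice vector r both lie in phase 1; at zero separation it reduces to the one-point statistic (the volume fraction), and for a structureless random medium it decays to the volume fraction squared.

```python
import numpy as np

def two_point_probability(micro, shift):
    """Empirical two-point probability P2(r) for phase 1 of a 2-D
    two-phase microstructure, with periodic boundaries assumed:
    the fraction of sites where both the site and the site displaced
    by `shift` are phase 1."""
    m = (micro == 1)
    return float(np.mean(m & np.roll(m, shift, axis=(0, 1))))

# Illustrative uncorrelated two-phase microstructure, ~30% phase 1.
rng = np.random.default_rng(0)
micro = (rng.random((128, 128)) < 0.3).astype(int)
vf = micro.mean()
```

In a homogenization setting, these P2 values feed the property bounds; morphology and phase distribution enter the design space precisely through how P2 deviates from the structureless limit vf².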
598

A Study on Integrating Credit Risk Models via Service-Oriented Architecture

Lin, Yueh-Min 26 June 2011 (has links)
This thesis establishes an information system that combines three credit risk models through a Service-Oriented Architecture (SOA). The system requires the bank user to input finance-related data and select options to generate a series of credit-risk-related results, including the probabilities of default, the recovery rates, the expected market value of assets, the volatilities of the expected market value of assets, the default points, the default distances, and four indexes from principal component analyses. In addition to the numerical results, graphical results are also available to the user. The three credit risk models joined in this system are the Moody's KMV Model with Default Point Modified, the Risk-Neutral Probability Measure Model, and the Time-Varying Jointly Estimated Model. Several previous studies have demonstrated the validity of these credit risk models; hence the purpose of this study is not to examine the practicability of these models, but to see whether they can be connected effectively and eventually establish a process to evaluate the credit risk of enterprises and industries using testing samples. The testing samples are data from the Taiwan Small and Medium Enterprise Credit Guarantee Fund. The finance-related data include the loan amounts, the book value of assets, the data used to calculate the default point threshold (such as the short-term debt and the long-term debt), and the financial ratios with regard to growth ability (such as the revenue growth rate and the pre-tax profit growth rate), operation ability (such as the accounts receivable turnover rate and the inventory turnover rate), liability-paying ability (such as the current ratio and the debt ratio), and profitability (such as the return on assets and the return on equity).
In addition to inputting the finance-related data, the system also requires the user to select the industrial category, the default point threshold, the data weighting method, the data period, and the borrowing rates from the option page for every enterprise in order to acquire the results. During the computing process, the user is required to select a weighted average method, either weighted by loan amounts or weighted by market value of assets, to obtain "the weighted average probability of default of the industry" and "the weighted average recovery rate of the industry", which are both used by the Time-Varying Jointly Estimated Model. This study also makes use of quartiles to simulate the situations when the user is near the bottom and top of the business cycle. Furthermore, the "Supremum Strategy" and the "Infimum Strategy" are added to this study to let the user see the best and worst conditions of the "Time-Varying Industrial Marginal Probabilities of Default".
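The industry-level aggregation step is a plain weighted average. A minimal sketch, with hypothetical enterprise PDs and loan amounts (none of these figures come from the thesis's samples):

```python
def weighted_average(values, weights):
    """Weighted average, as used for the industry-level probability of
    default and recovery rate; weights can be loan amounts or market
    values of assets, per the user's selected weighting method."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

# Hypothetical industry of three enterprises: PDs weighted by loan amounts.
industry_pd = weighted_average([0.02, 0.05, 0.10], [500.0, 300.0, 200.0])
```

Switching the weights from loan amounts to asset market values changes only the second argument, which is why the system can expose the choice as a user option.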
599

Improved State Estimation For Jump Markov Linear Systems

Orguner, Umut 01 December 2006 (has links) (PDF)
This thesis presents a comprehensive framework for improving current multiple model state estimation algorithms for jump Markov linear systems. The possible improvements are categorized as: design of multiple model state estimation algorithms using new criteria; and improvements obtained using existing multiple model state estimation algorithms. In the first category, risk-sensitive estimation is proposed for jump Markov linear systems. Two types of cost functions related to risk-sensitive estimation, namely the instantaneous and cumulative cost functions, are examined, and for each one the corresponding multiple model state estimation algorithm is derived. For the cumulative cost function, the derivation involves the reference probability method, in which one defines and uses a new probability measure under which the involved processes have independence properties. The performance of the proposed risk-sensitive filters is illustrated and compared with conventional algorithms using simulations. The thesis addresses the second category of improvements by proposing: two new online transition probability estimation schemes for jump Markov linear systems; and a mixed multiple model state estimation scheme which combines desirable properties of two different multiple model state estimation methods. The two proposed online transition probability estimators use the recursive Kullback-Leibler (RKL) procedure and the maximum likelihood (ML) criterion to derive the corresponding identification schemes. When used in state estimation, these methods yield an average decrease in the root mean square (RMS) state estimation errors, which is demonstrated through simulation studies. The mixed multiple model estimation procedure, which builds on an analysis of the single-Gaussian approximation of Gaussian mixtures in Bayesian filtering, efficiently combines the IMM (Interacting Multiple Model) filter and the GPB2 (2nd-order Generalized Pseudo-Bayesian) filter. The resulting algorithm reaches the performance of GPB2 with fewer Kalman filters.
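The single-Gaussian approximation that IMM and GPB2 both rely on is moment matching: a mixture of Gaussians is collapsed to the one Gaussian sharing its first two moments, which is what keeps the number of hypotheses (and hence Kalman filters) bounded. A scalar sketch of that merging step, with illustrative numbers:

```python
def moment_match(weights, means, variances):
    """Collapse a scalar Gaussian mixture into the single Gaussian with
    the same mean and variance. The variance combines the within-component
    variances and the spread of the component means (law of total variance)."""
    mu = sum(w * m for w, m in zip(weights, means))
    var = sum(w * (v + (m - mu) ** 2)
              for w, m, v in zip(weights, means, variances))
    return mu, var

# Equal-weight mixture of N(0, 1) and N(2, 1).
mu, var = moment_match([0.5, 0.5], [0.0, 2.0], [1.0, 1.0])
```

In the vector-valued filtering setting the same formula applies with outer products in place of squared differences; the algorithms differ in when (before or after the filtering step) and over which hypothesis sets this collapse is performed.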
600

Hydro-climatic forecasting using sea surface temperatures

Chen, Chia-Jeng 20 June 2012 (has links)
A key determinant of atmospheric circulation patterns and regional climatic conditions is sea surface temperature (SST). This has been the motivation for the development of various teleconnection methods aiming to forecast hydro-climatic variables. Among such methods are linear projections based on teleconnection gross indices (such as the ENSO, IOD, and NAO) or leading empirical orthogonal functions (EOFs). However, these methods deteriorate drastically if the predefined indices or EOFs cannot account for climatic variability in the region of interest. This study introduces a new hydro-climatic forecasting method that identifies SST predictors in the form of dipole structures. An SST dipole that mimics major teleconnection patterns is defined as a function of average SST anomalies over two oceanic areas of appropriate sizes and geographic locations. The screening process of SST-dipole predictors is based on an optimization algorithm that sifts through all possible dipole configurations (with progressively refined data resolutions) and identifies dipoles with the strongest teleconnection to the external hydro-climatic series. The strength of the teleconnection is measured by the Gerrity Skill Score. The significant dipoles are cross-validated and used to generate ensemble hydro-climatic forecasts. The dipole teleconnection method is applied to the forecasting of seasonal precipitation over the southeastern US and East Africa, and the forecasting of streamflow-related variables in the Yangtze and Congo Rivers. These studies show that the new method is indeed able to identify dipoles related to well-known patterns (e.g., ENSO and IOD) as well as to quantify more prominent predictor-predictand relationships at different lead times. Furthermore, the dipole method compares favorably with existing statistical forecasting schemes. 
An operational forecasting framework to support better water resources management through coupling with detailed hydrologic and water resources models is also demonstrated.
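The dipole predictor itself is the difference between SST anomalies averaged over two oceanic boxes; the optimization in the method searches over box sizes and locations. A minimal sketch on a toy anomaly field (the grid and box indices below are hypothetical, not the study's actual regions):

```python
import numpy as np

def dipole_index(sst_anom, box1, box2):
    """SST-dipole predictor: difference of mean anomalies over two
    rectangular boxes, each given as a (row_slice, col_slice) pair."""
    return float(sst_anom[box1].mean() - sst_anom[box2].mean())

# Toy anomaly field with one warm pole and one cold pole.
field = np.zeros((10, 20))
field[0:3, 0:5] = 1.0       # warm box
field[5:8, 10:15] = -1.0    # cold box
idx = dipole_index(field,
                   (slice(0, 3), slice(0, 5)),
                   (slice(5, 8), slice(10, 15)))
```

The screening algorithm would evaluate such an index for every candidate box pair against the hydro-climatic predictand, retaining the configurations with the highest skill score.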
