About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11

Portfolio s maximálním výnosem / Maximum Return Portfolio

Palko, Maximilián January 2019 (has links)
The classical method of portfolio selection is based on minimizing the variability of the portfolio. The Law of Large Numbers suggests that, over a long enough investment horizon, it should be enough to invest in the asset with the highest expected return, which will eventually outperform any other portfolio. In this thesis we suggest several portfolio creation methods that produce Maximum Return Portfolios. These methods are based on finding the asset with the maximal expected return, which avoids the problem of estimation errors in expected returns. Two of the methods are selected on the basis of a simulation analysis, then tested on real stock data and compared with the S&P 500 index. The results suggest that our portfolios could have a real-world application, mainly because they proved significantly better than the index over a 10-year investment horizon.
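The abstract does not give the exact construction rules; as a minimal sketch of the core idea — pick the asset with the highest estimated expected return instead of minimizing variance — here is a Python fragment. The simulated return data and the sample-mean estimator are illustrative assumptions, not the thesis's methods.

```python
import numpy as np

def max_return_asset(returns: np.ndarray) -> int:
    """Return the index of the asset with the highest sample mean return.

    returns : (T, n) array of historical per-period returns,
              one column per asset.
    """
    expected = returns.mean(axis=0)   # naive estimate of expected returns
    return int(np.argmax(expected))   # single-asset "maximum return" portfolio

# Toy example: 3 assets, 250 trading days of simulated returns.
rng = np.random.default_rng(0)
rets = rng.normal(loc=[0.0002, 0.0005, 0.0003], scale=0.01, size=(250, 3))
print("Selected asset:", max_return_asset(rets))
```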
12

Η Bootstrap σαν μέσο αντιμετώπισης του θορύβου της DEA / The bootstrap as a means of dealing with the noise in DEA

Γιαννακόπουλος, Βασίλειος 03 May 2010 (has links)
This diploma thesis studies the bootstrap method as a way of filling in the deficiencies of DEA when estimating the technical efficiency of decision-making units. More specifically, the bootstrap is examined as a means of estimating the bias of, and confidence intervals for, the efficiency scores produced by DEA. As will become clear, DEA, an application of linear programming, is a non-parametric method for estimating technical efficiency, and it suffers both from a lack of statistical measures and from an inability to separate noise from inefficiency. The bootstrap, on the other hand, is a repeated application of DEA intended to solve these problems. The purpose of this thesis is to examine the degree to which the bootstrap accomplishes this mission. To this end, real data on fish farms operating in Greek territory are used, with the computations carried out in the programs DEAP and PIM-DEA v2.0.
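The thesis performs its computations in DEAP and PIM-DEA; purely to illustrate the resampling logic, here is a naive bootstrap sketch around a basic input-oriented CCR DEA linear program. The consistent procedure in the literature (Simar and Wilson's smoothed bootstrap) and the thesis's own setup may differ from this; the data and model below are toy assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def dea_score(x0, y0, X, Y):
    """Input-oriented CCR efficiency of one DMU (x0, y0) against a
    reference set with inputs X (n, m) and outputs Y (n, s).
    Decision vector z = [theta, lambda_1, ..., lambda_n]; minimize theta."""
    n, m = X.shape
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-x0[:, None], X.T]                  # X'lambda <= theta*x0
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]   # Y'lambda >= y0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -y0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

def naive_bootstrap_ci(X, Y, dmu, B=500, alpha=0.05, seed=0):
    """Naive bootstrap bias correction and percentile CI for one DMU's
    efficiency score: resample the reference DMUs, re-score against the
    bootstrap frontier. Only a sketch of the resampling idea."""
    rng = np.random.default_rng(seed)
    theta_hat = dea_score(X[dmu], Y[dmu], X, Y)
    boot = []
    for _ in range(B):
        idx = rng.integers(0, X.shape[0], X.shape[0])  # resample DMUs
        boot.append(dea_score(X[dmu], Y[dmu], X[idx], Y[idx]))
    boot = np.array(boot)
    bias = boot.mean() - theta_hat
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return theta_hat - bias, (lo, hi)

# Toy data: 20 DMUs, 2 inputs, 1 output.
rng = np.random.default_rng(1)
X = rng.uniform(1, 10, (20, 2))
Y = (X.sum(1) * rng.uniform(0.5, 1.0, 20))[:, None]
print(naive_bootstrap_ci(X, Y, dmu=0, B=200))
```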
13

On the Application of the Bootstrap : Coefficient of Variation, Contingency Table, Information Theory and Ranked Set Sampling

Amiri, Saeid January 2011 (has links)
This thesis deals with the bootstrap method. Three decades after the seminal paper by Bradley Efron, the horizons of this method still need more exploration. The research presented herein steps into different fields of statistics where the bootstrap can be utilized as a fundamental statistical tool in almost any application. The thesis considers various statistical problems, which are explained briefly below. Bootstrap method: a comparison of the parametric and the nonparametric bootstrap of the variance is presented. The bootstrap of ranked set sampling (RSS) is dealt with, along with the wealth of theory and applications on the RSS bootstrap that exists nowadays, and the performance of RSS in resampling is explored. Furthermore, the application of the bootstrap method to inference for contingency table tests is studied. Coefficient of variation: this part shows the capacity of the bootstrap for inference on the coefficient of variation, a task which the asymptotic method does not perform very well. Information theory: there are few works on inference in information theory, especially on the inference of entropy; the papers included in this thesis pursue inference on entropy using the bootstrap method.
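As a concrete instance of the coefficient-of-variation part, a minimal nonparametric bootstrap percentile interval can be sketched as follows; the thesis's own estimators and designs are not reproduced here, and the lognormal toy data are an assumption.

```python
import numpy as np

def bootstrap_cv_ci(x, B=2000, alpha=0.05, seed=0):
    """Nonparametric bootstrap percentile CI for the coefficient of
    variation (sd/mean), whose asymptotic intervals can be poor for
    small samples -- the motivation cited in the abstract."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    cv_hat = x.std(ddof=1) / x.mean()
    samples = rng.choice(x, size=(B, x.size), replace=True)
    cv_boot = samples.std(axis=1, ddof=1) / samples.mean(axis=1)
    lo, hi = np.quantile(cv_boot, [alpha / 2, 1 - alpha / 2])
    return cv_hat, (lo, hi)

# Example: skewed data, n = 30.
rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=0.5, size=30)
print(bootstrap_cv_ci(x))
```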
14

Advanced Statistical Methodologies in Determining the Observation Time to Discriminate Viruses Using FTIR

Luo, Shan 13 July 2009 (has links)
Fourier transform infrared (FTIR) spectroscopy, a method that uses electromagnetic radiation to detect specific cellular molecular structures, can be used to discriminate different types of cells. The objective is to find the minimum time (among 2, 4 and 6 hours) at which to record FTIR readings such that different viruses can be discriminated. A new method is adopted for the datasets. Briefly, inner differences are created as the control group, and the Wilcoxon signed-rank test is used as a first variable-selection procedure to prepare for the discrimination stage. In the second stage we propose either the partial least squares (PLS) method or simply taking significant differences as the discriminator. Finally, k-fold cross-validation is used to estimate the shrinkage of the goodness measures, such as sensitivity, specificity and area under the ROC curve (AUC). The results leave no doubt that 6 hours is enough to discriminate mock from HSV-1 and Coxsackie viruses; adenovirus is an exception.
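The two-stage pipeline can be sketched roughly as follows, with simulated spectra standing in for the FTIR data. The pairing of mock and infected spectra, the 0.05 cut-off, and the two PLS components are assumptions for illustration, not the thesis's actual settings.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated absorbances: 40 paired (mock, virus) spectra, 200 wavenumbers;
# only the first 20 wavenumbers truly differ between the groups.
n_pairs, p = 40, 200
mock = rng.normal(0.0, 1.0, (n_pairs, p))
virus = rng.normal(0.0, 1.0, (n_pairs, p))
virus[:, :20] += 0.8

# Stage 1: Wilcoxon signed-rank test on paired differences, per wavenumber.
pvals = np.array([wilcoxon(virus[:, j] - mock[:, j]).pvalue for j in range(p)])
selected = np.where(pvals < 0.05)[0]

# Stage 2: PLS on the selected wavenumbers, class label as the response.
X = np.vstack([mock[:, selected], virus[:, selected]])
y = np.r_[np.zeros(n_pairs), np.ones(n_pairs)]

# Stage 3: k-fold cross-validated AUC as the goodness measure.
aucs = []
for train, test in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
    pls = PLSRegression(n_components=2).fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], pls.predict(X[test]).ravel()))
print(f"{len(selected)} wavenumbers kept, CV AUC = {np.mean(aucs):.3f}")
```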
15

Dichotomous-Data Reliability Models with Auxiliary Measurements

俞一唐, Yu, I-Tang Unknown Date (has links)
We propose a new reliability model, DwACM (Dichotomous-data with Auxiliary Continuous Measurements model), to describe a data set consisting of a classical dichotomous response (Go or No-Go) associated with a set of continuous auxiliary measurements. In this model, the lifetime of each individual is treated as a latent variable. Given the value of the latent variable, the dichotomous response is either 0 or 1 depending on whether the individual has failed by the measuring time. The continuous measurements can be regarded as observations of an underlying candidate degradation process whose descent is a function of the lifetime. Under the assumption that failure is defined as the time at which the continuous measurement reaches a threshold, the two kinds of measurement can be linked in the proposed model. Statistical inference under this model is carried out in both frequentist and Bayesian frameworks. To evaluate the continuous measurements, we provide a criterion, CCP (correct classification probability), for selecting the best degradation measurement. We also report simulation studies of the performance of the parameter estimators and of CCP.
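A tiny simulation may help fix the model's structure: a latent lifetime, a Go/No-Go response at the inspection time, and a noisy continuous measurement of a degradation process that reaches the failure threshold exactly at the lifetime. The distributions and the linear degradation path below are illustrative assumptions, not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t_m, threshold = 1000, 5.0, 1.0

T = rng.weibull(2.0, n) * 8.0            # latent lifetimes (assumed Weibull)
go_nogo = (T <= t_m).astype(int)         # dichotomous response: failed by t_m?

# Degradation defined to hit `threshold` exactly at time T (linear descent
# from 2*threshold), plus noise -> the auxiliary continuous measurement.
degradation = 2 * threshold - threshold * (t_m / T)
aux = degradation + rng.normal(0.0, 0.05, n)

print("observed failure fraction:", go_nogo.mean())
print("mean auxiliary measurement | failed   :", aux[go_nogo == 1].mean())
print("mean auxiliary measurement | survived :", aux[go_nogo == 0].mean())
```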
16

拔靴法在線性結構關係模式適合度指標之應用 / Bootstrap procedures for evaluating goodness-of-fit indices of linear structural equation models

羅靖霖, Lo, Chin Lin Unknown Date (has links)
Linear structural relation (LISREL) modeling is a statistical method that analyzes causal relations among variables through a system of linear equations, combining the strengths of path analysis and factor analysis within a single overall model. Once the parameters of such a model have been estimated, the quality of the whole model must be assessed, and many researchers have therefore proposed goodness-of-fit indices, such as the commonly used chi-square test, the root mean square residual, the goodness-of-fit index, the adjusted goodness-of-fit index, and the normed fit index. Some of these indices are affected by sample size or by the distribution of the sample; some are affected by the number of latent variables or factor indicators in the model; some are applicable only under strict conditions and assumptions (e.g., normally distributed samples); and for some the sampling distribution is unknown, so interval estimation, hypothesis testing, or tests of significant differences based on them are impossible. Given these shortcomings, this thesis applies the bootstrap method, resampling to obtain bootstrap distributions that address the above problems. Because the traditional bootstrap is not applicable to linear structural relation models, a modified bootstrap procedure is proposed; the resulting bootstrap distribution serves as a basis for evaluating model fit, the modified bootstrap is used for tests of significant differences between nested models, and the notions of sampling error and non-sampling error are used to assess model fit.
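The modified procedure itself is not spelled out in the abstract; the sketch below illustrates one standard way to adapt the bootstrap to structural equation models (a Bollen-Stine-style rotation of the data so that the hypothesized model holds before resampling), which may or may not coincide with this thesis's proposal. The fit index and toy data are assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def modified_sem_bootstrap(X, sigma_model, fit_index, B=500, seed=0):
    """Bootstrap distribution of a SEM fit index.

    The naive bootstrap resamples data for which the hypothesized model
    need not hold, so the index's bootstrap distribution is centered in
    the wrong place.  The rotation below makes the sample covariance
    equal the model-implied covariance `sigma_model` before resampling.

    fit_index : callable mapping a sample covariance matrix to a scalar.
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    Z = Xc @ np.real(sqrtm(inv(S))) @ np.real(sqrtm(sigma_model))
    n = Z.shape[0]
    return np.array([fit_index(np.cov(Z[rng.integers(0, n, n)], rowvar=False))
                     for _ in range(B)])

# Toy example: index = root mean square residual against sigma_model.
rng = np.random.default_rng(1)
sigma0 = np.array([[1.0, 0.5], [0.5, 1.0]])
X = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=200)
rmr = lambda S: np.sqrt(np.mean((S - sigma0) ** 2))
dist = modified_sem_bootstrap(X, sigma0, rmr, B=300)
print("95% bootstrap quantile of RMR:", np.quantile(dist, 0.95))
```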
17

Decision making and modelling uncertainty for the multi-criteria analysis of complex energy systems / La prise de décision et la modélisation d’incertitude pour l’analyse multi-critère des systèmes complexes énergétiques

Wang, Tairan 08 July 2015 (has links)
This Ph.D. work addresses the vulnerability analysis of safety-critical systems (e.g., nuclear power plants) within a framework that combines the disciplines of risk analysis and multi-criteria decision-making. The scientific contribution follows four directions: (i) a quantitative hierarchical model is developed to characterize the susceptibility of safety-critical systems to multiple types of hazard, within the 'all-hazard' view of the problem currently emerging in the risk analysis field; (ii) the quantitative assessment of vulnerability is tackled by an empirical classification framework: to this aim, a model relying on the Majority Rule Sorting (MR-Sort) method, typically used in the decision analysis field, is built on the basis of a (limited-size) set of data representing a priori-known vulnerability classification examples; (iii) three different approaches (namely, a model-retrieval-based method, the bootstrap method and the leave-one-out cross-validation technique) are developed and applied to provide a quantitative assessment of the performance of the classification model (in terms of accuracy and confidence in the assignments), accounting for the uncertainty introduced into the analysis by the empirical construction of the vulnerability model; (iv) on the basis of the models developed, an inverse classification problem is solved to identify a set of protective actions which effectively reduce the level of vulnerability of the critical system under consideration. Two approaches are developed to this aim: the former is based on a novel sensitivity indicator, the latter on optimization. Applications on fictitious and real case studies in the nuclear power plant risk field demonstrate the effectiveness of the proposed methodology.
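The MR-Sort assignment rule that the vulnerability model relies on is simple to state in code. The sketch below assumes higher criterion values mean higher vulnerability and uses invented profiles, weights, and majority threshold; the thesis's calibrated parameters are not reproduced.

```python
def mr_sort(a, profiles, weights, lam):
    """Assign an alternative to an ordered category with the MR-Sort rule.

    a        : criterion values of the alternative (higher = worse here).
    profiles : category lower-bound profiles, worst to best.
    weights  : normalized criterion weights (sum to 1).
    lam      : majority threshold in (0.5, 1].

    `a` outranks profile b when the total weight of criteria on which
    a >= b reaches lam; the alternative goes to the highest category
    whose lower profile it outranks.
    """
    category = 0
    for b in profiles:
        support = sum(w for aj, bj, w in zip(a, b, weights) if aj >= bj)
        if support >= lam:
            category += 1
        else:
            break
    return category

# Toy example: 3 criteria, 3 categories (0 = low, 2 = high vulnerability).
profiles = [[0.3, 0.3, 0.3], [0.7, 0.7, 0.7]]   # lower bounds of cats 1 and 2
weights, lam = [0.5, 0.3, 0.2], 0.6
print(mr_sort([0.8, 0.4, 0.5], profiles, weights, lam))  # -> 1
```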
18

Statistical Methods For Kinetic Modeling Of Fischer Tropsch Synthesis On A Supported Iron Catalyst

Critchfield, Brian L. 15 December 2006 (has links) (PDF)
Fischer-Tropsch synthesis (FTS) is a promising technology for the production of ultra-clean fuels and chemical feedstocks from biomass, coal, or natural gas. Iron catalysts are ideal for conversion of coal and biomass. However, precipitated iron catalysts used in slurry-bubble column reactors suffer from high attrition, resulting in difficulty separating catalyst from product and in increased slurry viscosity. Thus, development of an active and selective supported iron catalyst to manage attrition is needed. This thesis focuses on the development of a supported iron catalyst and on kinetic models of FTS over that catalyst, using advanced statistical methods for experimental design and analysis. A high-surface-area alumina, modified by the addition of approximately 2 wt% lanthanum, was impregnated with approximately 20 wt% Fe and 1 wt% Pt in a two-step procedure, with approximately 10 wt% Fe and 0.5 wt% Pt added in each step. The catalyst had a CO uptake of 702 μmol/g and an extent of reduction of 69%, and was reduced at 450°C. The catalyst was stable over H2 partial pressures of 4-10 atm, CO partial pressures of 1-4 atm, and temperatures of 220-260°C. Weisz modulus values were less than 0.15. A Langmuir-Hinshelwood type rate expression, derived from a proposed FTS mechanism, was used with the D-optimal criterion to design experiments sequentially at 220°C and 239°C. Joint likelihood confidence regions for the rate-expression parameters as a function of run number indicate rapid convergence to precise parameter estimates. Difficulty controlling the process at the designed conditions, together with steep gradients around the D-optimal criterion, resulted in consecutive runs having the same optimal condition; in these situations another process condition was chosen to avoid consecutive replication. A kinetic model incorporating temperature effects was also regressed. Likelihood and bootstrap confidence intervals suggested that the model parameters were precise. Histograms and skewness statistics calculated from bootstrap resampling show that parameter-effects nonlinearities were small.
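Neither the derived rate expression nor the exact design algorithm is given in the abstract; the sketch below pairs a representative Langmuir-Hinshelwood form with a naive sequential D-optimal step (maximizing det(JᵀJ) over a candidate grid spanning the studied pressure ranges). The rate form, parameter values, and grid are assumptions.

```python
import numpy as np

def lh_rate(pH2, pCO, k, K):
    """A representative Langmuir-Hinshelwood FTS rate form (an assumption;
    the thesis derives its own expression from a proposed mechanism):
        r = k * pH2 * pCO / (1 + K * pCO)**2
    """
    return k * pH2 * pCO / (1.0 + K * pCO) ** 2

def d_optimal_next(candidates, past, theta, eps=1e-6):
    """Pick the candidate (pH2, pCO) maximizing det(J'J), where J stacks
    the rate's parameter sensitivities at past runs plus the candidate."""
    def grad(pt):
        g = np.empty(2)
        for i in range(2):
            t1, t2 = list(theta), list(theta)
            t1[i] += eps
            t2[i] -= eps
            g[i] = (lh_rate(*pt, *t1) - lh_rate(*pt, *t2)) / (2 * eps)
        return g
    J_past = np.array([grad(pt) for pt in past])
    return max(candidates,
               key=lambda c: np.linalg.det(
                   (J := np.vstack([J_past, grad(c)])).T @ J))

# Candidate grid over the studied ranges: H2 4-10 atm, CO 1-4 atm.
cands = [(h, c) for h in np.linspace(4, 10, 7) for c in np.linspace(1, 4, 7)]
past = [(4.0, 1.0), (10.0, 4.0)]   # runs already made
theta = (0.05, 0.8)                # current estimates of (k, K), invented
print("next run (pH2, pCO):", d_optimal_next(cands, past, theta))
```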
19

Exact Analysis of Exponential Two-Component System Failure Data

Zhang, Xuan 01 1900 (has links)
A survival distribution is developed for exponential two-component systems that can survive as long as at least one of the two components in the system functions. It is assumed that the two components are initially independent and non-identical. If one of the two components fails (repair is impossible), the surviving component is subject to a different failure rate due to the stress caused by the failure of the other.

In this thesis, we consider such an exponential two-component system failure model when the observed failure time data are (1) complete, (2) Type-I censored, (3) Type-I censored with partial information on component failures, (4) Type-II censored and (5) Type-II censored with partial information on component failures. In these situations, we discuss the maximum likelihood estimates (MLEs) of the parameters, assuming the lifetimes to be exponentially distributed. The exact distributions (whenever possible) of the MLEs are then derived using the conditional moment generating function approach. Construction of confidence intervals for the model parameters is discussed using the exact conditional distributions (when available), asymptotic distributions, and two parametric bootstrap methods. The performance of these four confidence intervals, in terms of coverage probabilities, is then assessed through Monte Carlo simulation studies. Finally, some examples are presented to illustrate all the methods of inference developed here.

In the case of Type-I and Type-II censored data, since there are no closed-form expressions for the MLEs, we present an iterative maximum likelihood estimation procedure for determining the MLEs of all the model parameters. We also carry out a Monte Carlo simulation study to examine the bias and variance of the MLEs.

In the case of Type-II censored data, since the exact distributions of the MLEs depend on the data, we discuss exact conditional confidence intervals and asymptotic confidence intervals for the unknown parameters by conditioning on the observed data. / Thesis / Doctor of Philosophy (PhD)
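As a stripped-down illustration of one ingredient — a parametric bootstrap confidence interval from complete exponential data — consider the sketch below. The thesis's actual model has two dependent components and censoring, which this does not attempt to handle; the sample size and true rate are assumptions.

```python
import numpy as np

def exp_parametric_bootstrap_ci(x, B=2000, alpha=0.05, seed=0):
    """Parametric bootstrap percentile CI for an exponential rate.

    The MLE of the rate from complete exponential data is 1/mean;
    resampling from the fitted exponential gives a bootstrap
    distribution for it.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    lam_hat = 1.0 / x.mean()   # MLE of the rate
    boot = 1.0 / rng.exponential(1.0 / lam_hat, (B, x.size)).mean(axis=1)
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return lam_hat, (lo, hi)

# Example: n = 25 complete failure times with true rate 0.5.
rng = np.random.default_rng(7)
x = rng.exponential(scale=2.0, size=25)
print(exp_parametric_bootstrap_ci(x))
```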
