361

Predicting Plans and Actions in Two-Player Repeated Games

Mathema, Najma 22 September 2020 (has links)
Artificial intelligence (AI) agents will need to interact with both other AI agents and humans. One way to enable effective interaction is to create models of associates that help predict the modeled agents' actions, plans, and intentions. If AI agents can predict what other agents in their environment will do and can understand those agents' intentions, they can use these predictions in their own planning, decision-making, and self-assessment. Prior work [13, 14] introduced the S# algorithm, designed as a robust algorithm for many two-player repeated games (RGs) that enables cooperation among players. Because S# generates actions, has (internal) experts that each pursue an internal intent, and associates plans with each expert, it is a useful algorithm for exploring intent, plan, and action in RGs. This thesis presents a graphical Bayesian model for predicting the actions, plans, and intents of an S# agent; the same model is also used to predict human actions. The actions, plans, and intentions associated with each S# expert are (a) identified from the literature and (b) grouped by expert type. The Bayesian model then uses its transition probabilities to predict the action and expert type from observed human or S# play. Two techniques were explored for translating probability distributions into specific predictions: Maximum A Posteriori (MAP) and an Aggregation approach. The Bayesian model was evaluated on three RGs (Prisoner's Dilemma, Chicken, and Alternator) as follows. Its prediction accuracy was compared to predictions from machine learning models (J48, Multilayer Perceptron, and Random Forest) as well as to the fixed strategies presented in [20]. Action-prediction accuracy was obtained by comparing the model's predictions against the player's actual actions; accuracy for plan and intent prediction was measured by comparing predictions to the actual plans and intents followed by the S# agent.
Since the plans and intents of human players were not recorded in the dataset, this thesis does not measure the accuracy of the Bayesian model against actual human plans and intents. Results show that the Bayesian model effectively models the actions, plans, and intents of the S# algorithm across the various games. Additionally, the Bayesian model outperforms the other methods at predicting human actions. When the games do not allow players to communicate using so-called cheap talk, the MAP-based predictions are significantly better than the Aggregation-based predictions. When cheap talk is allowed, there is no significant difference between MAP-based and Aggregation-based predictions of human behavior, except in the game of Chicken.
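The two prediction rules compared in this abstract can be sketched as follows. This is a minimal illustration of the general idea, not the thesis's implementation, and all names are hypothetical: MAP commits to the single most probable expert type and reads off that expert's action distribution, while Aggregation marginalizes the action distribution over all expert types.

```python
def predict_action(expert_posterior, action_given_expert, mode="map"):
    """Predict the next action from a posterior over expert types.

    expert_posterior: list of E probabilities over expert types.
    action_given_expert: E lists of A action probabilities, one per expert.
    mode="map" uses the action distribution of the single most probable
    expert; mode="aggregate" marginalizes over all expert types.
    Returns (most likely action index, action distribution).
    """
    if mode == "map":
        best = max(range(len(expert_posterior)), key=expert_posterior.__getitem__)
        action_dist = list(action_given_expert[best])
    else:  # aggregation: P(a) = sum_e P(e) * P(a | e)
        n_actions = len(action_given_expert[0])
        action_dist = [
            sum(p * dist[a] for p, dist in zip(expert_posterior, action_given_expert))
            for a in range(n_actions)
        ]
    best_action = max(range(len(action_dist)), key=action_dist.__getitem__)
    return best_action, action_dist
```

The two rules can disagree: if the most probable single expert plays action 0 but a majority of the posterior mass sits on experts that play action 1, MAP predicts 0 while Aggregation predicts 1.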
362

Volatility Forecasting Performance: Evaluation of GARCH type volatility models on Nordic equity indices

Wennström, Amadeus January 2014 (has links)
This thesis examines the volatility forecasting performance of six commonly used forecasting models: the simple moving average, the exponentially weighted moving average, the ARCH model, the GARCH model, the EGARCH model, and the GJR-GARCH model. The dataset consists of three Nordic equity indices: OMXS30, OMXC20, and OMXH25. The objective is to compare the volatility models in terms of in-sample and out-of-sample fit. The results were mixed. In terms of in-sample fit, the result was clear and unequivocal: assuming a heavier-tailed error distribution than the normal distribution and modeling the conditional mean significantly improve the fit. A main conclusion is that the more complex models do provide a better in-sample fit than the more parsimonious models. In terms of out-of-sample forecasting performance, however, the result was inconclusive: no single volatility model is preferred under all the loss functions. An important finding is not only that the ranking differs across loss functions but how dramatically it can differ, which illuminates the importance of choosing a loss function adequate to the intended purpose of the forecast. Moreover, the model with the best in-sample fit does not necessarily produce the best out-of-sample forecast. Since out-of-sample forecast performance is so vital to the objective of the analysis, one can question whether in-sample fit should be used at all to support the choice of a specific volatility model.
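As one concrete example of the model family compared here, a GARCH(1,1) one-step-ahead variance forecast can be sketched as below. This is an illustrative recursion assuming the parameters (omega, alpha, beta) have already been estimated, e.g. by maximum likelihood; it is not the thesis's code.

```python
def garch11_forecast(returns, omega, alpha, beta):
    """One-step-ahead conditional variance under GARCH(1,1):
        sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t].
    The recursion is initialized at the sample variance of the returns
    (a common, if arbitrary, choice)."""
    sigma2 = sum(r * r for r in returns) / len(returns)
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    return sigma2
```

Setting alpha = beta = 0 collapses the model to a constant variance omega, and alpha = 1, beta = 0 makes the forecast track the last squared return, which is a quick sanity check on the recursion.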
363

Calculating power for the Finkelstein and Schoenfeld test statistic

Zhou, Thomas J. 07 March 2022 (has links)
The Finkelstein and Schoenfeld (FS) test is a popular generalized pairwise comparison approach to analyzing prioritized composite endpoints (i.e., components assessed in order of clinical importance). Power and sample size estimation for the FS test, however, is generally done via simulation studies. This simulation approach can be extremely computationally burdensome, a cost compounded by an increasing number of endpoint components and by increasing sample size. We propose an analytic solution for calculating power and sample size for commonly encountered two-component hierarchical composite endpoints. The power formulas are derived assuming population-level underlying distributions for each component outcome, which provides a computationally efficient and practical alternative to the standard simulation approach. The proposed analytic approach is extended to derive conditional power formulas, which are used in combination with the promising-zone methodology to perform sample size re-estimation in adaptive clinical trials. Prioritized composite endpoints with more than two components are also investigated. Extensive Monte Carlo simulation studies demonstrate that the performance of the proposed analytic approach is consistent with that of the standard simulation approach. We also demonstrate through simulations that the proposed methodology possesses generally desirable properties, including robustness to mis-specified underlying distributional assumptions. We illustrate the proposed methods by calculating power and sample size for the Transthyretin Amyloidosis Cardiomyopathy Clinical Trial (ATTR-ACT) and the EMPULSE trial of empagliflozin treatment for acute heart failure.
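The pairwise comparison at the heart of the FS test can be sketched as follows for a two-component hierarchical endpoint. This is a minimal illustration of the scoring rule only — it omits the variance estimate and the significance test — with hypothetical names, and it assumes larger outcome values are clinically better.

```python
def fs_score(patient_a, patient_b):
    """Compare two patients on a prioritized composite endpoint.
    Each patient is a tuple of outcomes ordered by clinical priority.
    Compare on the first component; break ties on the next.
    Returns +1 if a wins, -1 if b wins, 0 if tied throughout."""
    for a, b in zip(patient_a, patient_b):
        if a > b:
            return 1
        if a < b:
            return -1
    return 0

def fs_statistic(treatment, control):
    """Sum of pairwise scores over all treatment-control pairs
    (the numerator of the FS test statistic)."""
    return sum(fs_score(t, c) for t in treatment for c in control)
```

The simulation burden mentioned above comes from repeating this all-pairs comparison over many simulated trials; the thesis's analytic formulas avoid that loop entirely.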
364

Educational Attainment: An Agent-Based Model

Truman, Anna Christine 09 May 2022 (has links)
No description available.
365

Risk Analysis of Wind Energy Company Stocks

Jiang, Xin January 2020 (has links)
In this thesis, probability theory and risk analysis are used to determine the risk of wind energy stocks. Three stocks of wind energy companies and three stocks of technology companies are gathered and their risks compared, using three different risk measures: variance, value at risk, and conditional value at risk. The conclusions drawn are that the risks of wind energy company stocks are not significantly lower than those of the other companies' stocks. Furthermore, for the studied time period and under the different risk measures, optimal portfolios should include short positions in one or two of the energy companies.
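Two of the risk measures used here can be sketched with a simple historical-simulation estimator. This is a generic illustration, not the thesis's implementation; it assumes losses are negated returns and uses the empirical quantile of the observed sample.

```python
def var_cvar(returns, alpha=0.95):
    """Historical value at risk and conditional value at risk.
    VaR is the alpha-quantile of the loss distribution (losses are
    negated returns); CVaR is the mean loss at or beyond the VaR."""
    losses = sorted(-r for r in returns)
    idx = min(int(alpha * len(losses)), len(losses) - 1)
    var = losses[idx]
    tail = losses[idx:]
    cvar = sum(tail) / len(tail)
    return var, cvar
```

CVaR is always at least as large as VaR, which is one reason it is preferred when tail severity, not just tail frequency, matters for ranking assets.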
366

Model for Bathtub-Shaped Hazard Rate: Monte Carlo Study

Leithead, Glen S. 01 May 1970 (has links)
A new model developed for the entire bathtub-shaped hazard rate curve has been evaluated as to its usefulness as a method of reliability estimation. The model is of the form F(t) = 1 − exp[−(θ1·t^L + θ2·t + θ3·t^M)], where "L" and "M" are assumed known. The reliability estimate obtained from the new model was compared with the traditional restricted-sample estimate for four different time intervals and was found to have less bias and variance at all time points. This was a Monte Carlo study, and the generated data showed that the new model has much potential as a method for estimating reliability. (51 pages)
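The reliability implied by this model follows directly from the quoted distribution function; the sketch below uses illustrative parameter values, not ones from the study. With L < 1 the first hazard term decreases in t, the second is constant, and with M > 1 the third increases, which together produce the bathtub shape.

```python
from math import exp

def reliability(t, theta1, theta2, theta3, L, M):
    """Reliability R(t) = 1 - F(t) for the bathtub-shaped hazard model
    F(t) = 1 - exp(-(theta1*t**L + theta2*t + theta3*t**M)),
    with shape exponents L and M assumed known."""
    return exp(-(theta1 * t ** L + theta2 * t + theta3 * t ** M))

def hazard(t, theta1, theta2, theta3, L, M):
    """Hazard rate h(t) = -d/dt log R(t), valid for t > 0:
    a decreasing term (L < 1), a constant, and an increasing term (M > 1)."""
    return theta1 * L * t ** (L - 1) + theta2 + theta3 * M * t ** (M - 1)
```

A quick check: R(0) = 1 for any parameters, and with only the linear term active the model reduces to the exponential distribution.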
367

Sur l'estimation non paramétrique des modèles conditionnels pour variables fonctionnelles spatialement dépendantes / On the nonparametric estimation of certain conditional models in functional spatial data

Kaid, Zoulikha 09 December 2012 (has links)
The main purpose of this thesis concerns the problem of spatial prediction using nonparametric conditional models in which the covariate is a functional variable. More precisely, we treat the nonparametric estimation of the conditional mode and of the conditional quantiles as spatial prediction tools, alternatives to the classical spatial regression of a real response variable given a functional variable. The first model, the conditional mode, is estimated by maximizing the spatial version of the kernel estimate of the conditional density. Under a general mixing condition and concentration properties of the probability measure of the functional variable, we establish the almost complete convergence (with rate), the Lp consistency (with rate), and the asymptotic normality of the considered estimator. The usefulness of this estimation is illustrated by an application to real meteorological data. The model of the conditional quantiles is considered in the second part of this thesis and is treated as the inverse of the conditional cumulative distribution function, which is estimated by a double-kernel estimator. Under the same general conditions as in the first model, we give the convergence rate in the Lp norm and show the asymptotic normality of the constructed estimator. These asymptotic results are closely related to the concentration properties, on small balls, of the probability measure of the underlying explanatory variable and to the regularity of the conditional cumulative distribution function. Our study generalizes to the spatial case several results already established for functional time series. Finally, since the estimation of our models rests on a preliminary estimation of the conditional density and conditional distribution function, it permits the construction of predictive regions, highlighting what these models bring compared to classical regression.
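The kernel estimator of the conditional mode described above can be sketched in a scalar stand-in for the functional setting. All names here are hypothetical: dist plays the role of the semi-metric on the functional space, and a grid search stands in for the maximization of the conditional density estimate.

```python
from math import exp

def conditional_mode(X, Y, x0, y_grid, h_x, h_y, dist):
    """Kernel estimate of the conditional mode of Y given X = x0.
    X: covariates (functional in the thesis; scalar here for brevity),
    Y: real responses, dist: a semi-metric on the covariate space.
    The conditional density estimate at y is a Nadaraya-Watson-type
    ratio of kernel sums; the mode maximizes it over y_grid."""
    K = lambda u: exp(-0.5 * u * u)                 # Gaussian kernel
    w = [K(dist(x, x0) / h_x) for x in X]           # covariate weights
    def f_hat(y):                                   # density up to a constant
        return sum(wi * K((yi - y) / h_y) for wi, yi in zip(w, Y)) / (sum(w) or 1.0)
    return max(y_grid, key=f_hat)
```

Observations whose covariate lies far from x0 (in the chosen semi-metric) receive vanishing weight, so the estimated mode reflects the local conditional distribution, exactly the behavior the concentration-of-measure hypotheses control in the asymptotic analysis.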
368

Multilevel Methods for Stochastic Forward and Inverse Problems

Ballesio, Marco 02 February 2022 (has links)
This thesis studies novel and efficient computational sampling methods for applications in three types of stochastic inversion problems: seismic waveform inversion, filtering problems, and static parameter estimation. A primary goal of a large class of seismic inverse problems is to detect parameters that characterize an earthquake. We are interested in solving this task by analyzing the full displacement time series at a given set of seismographs, but approaching full waveform inversion with the standard Monte Carlo (MC) method is prohibitively expensive, so we study tools that can make this computation feasible. As part of the inversion problem, we must evaluate the misfit between recorded and synthetic seismograms efficiently. As the misfit function we employ the Wasserstein metric, originally suggested to measure the distance between probability distributions, which is becoming increasingly popular in seismic inversion. To compute the expected values of the misfits, we use a sampling algorithm called Multilevel Monte Carlo (MLMC). MLMC performs most of the sampling at a coarse space-time resolution, with only a few corrections at finer scales, without compromising the overall accuracy. We further investigate the Wasserstein metric and the MLMC method in the context of filtering problems for partially observed diffusions with observations at periodic time intervals. Particle filters can be enhanced by considering hierarchies of discretizations to reduce the computational effort needed to achieve a given tolerance; this methodology is called the Multilevel Particle Filter (MLPF). However, particle filters, and consequently MLPFs, suffer from particle ensemble collapse, which requires the implementation of a resampling step. For one-dimensional processes, we suggest a resampling procedure based on optimal Wasserstein coupling and show that it is beneficial in terms of computational cost compared to standard resampling procedures.
Finally, we consider static parameter estimation for a class of continuous-time state-space models. Unbiasedness of the gradient of the log-likelihood is important for ensuring the convergence of gradient ascent (descent) methods. We propose a novel unbiased estimator of the gradient of the log-likelihood based on a double-randomization scheme, and we use this estimator in stochastic gradient ascent to recover unknown parameters of the dynamics.
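The MLMC telescoping idea used throughout this abstract can be sketched generically. This is an illustrative skeleton, not the thesis's solver, and the sampler interface is an assumption: sampler(l) must return a coupled pair (P_l, P_{l-1}) computed from the same random input, with P_{-1} taken as 0 at the coarsest level.

```python
def mlmc(sampler, n_per_level):
    """Multilevel Monte Carlo telescoping estimator:
        E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}].
    Each level's correction is averaged independently, so most samples
    can be spent on the cheap coarse levels while the fine levels,
    whose corrections have small variance, need only a few samples."""
    estimate = 0.0
    for level, n in enumerate(n_per_level):
        corrections = [fine - coarse
                       for fine, coarse in (sampler(level) for _ in range(n))]
        estimate += sum(corrections) / n
    return estimate
```

The coupling of fine and coarse solves within each pair is what shrinks the variance of the corrections at higher levels; without it, the telescoping sum would cost as much as standard Monte Carlo at the finest resolution.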
369

Identifying Student Difficulties in Conditional Probability within Statistical Reasoning

Fabby, Carol January 2021 (has links)
No description available.
370

A Study of Energy Literacy among Lower Secondary School Students in Japan / 日本の中学生のエネルギーリテラシー研究

Akitsu, Yutaka 26 March 2018 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Energy Science / Kō No. 21188 / Ene-Haku No. 362 / 新制||エネ||71 (University Library) / Department of Socio-Environmental Energy Science, Graduate School of Energy Science, Kyoto University / (Chief Examiner) Prof. 石原 慶一, Prof. 東野 達, Prof. 吉田 純 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Energy Science / Kyoto University / DFAM