171

Nouvel algorithme d'optimisation bayésien utilisant une approche Monte-Carlo séquentielle. / New Bayesian optimization algorithm using a sequential Monte-Carlo approach

Benassi, Romain 19 June 2013 (has links)
This thesis deals with the problem of global optimization of expensive-to-evaluate functions in a Bayesian framework. We say that a function is expensive-to-evaluate when its evaluation requires a significant amount of resources (e.g., very long numerical simulations). In this context, it is important to use optimization algorithms that need only a small number of function evaluations. We consider here a Bayesian approach, which consists in assigning a prior to the function in the form of a Gaussian random process. The idea is then to choose the next evaluation points by maximizing a probabilistic criterion that indicates, conditional on the previous evaluations, the most interesting regions of the search domain for the optimizer. Two difficulties arise in this approach: the choice of the values of the Gaussian process parameters and the efficient maximization of the criterion. The first difficulty is usually handled by plugging in the maximum likelihood estimate of the parameters, a method that is not very robust and to which we prefer a fully Bayesian approach. The contribution of this thesis is a new Bayesian optimization algorithm that maximizes the Expected Improvement (EI) criterion at each step and addresses both difficulties jointly through a Sequential Monte Carlo approach. Numerical results, obtained on benchmark test cases and industrial applications, show that our algorithm performs well compared to competing algorithms.
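
To make the Expected Improvement criterion concrete, here is a minimal Python sketch with a Gaussian process surrogate from scikit-learn. The fully Bayesian, Sequential Monte Carlo treatment of the GP hyperparameters developed in the thesis is only caricatured by averaging EI over a few fixed length scales; the test function, kernel and candidate grid are illustrative assumptions, not the author's setup.

    # Hedged sketch: EI with a GP surrogate; hyperparameter uncertainty is only
    # mimicked by averaging EI over a few fixed length scales (an illustration,
    # not the SMC scheme of the thesis).
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expected_improvement(mu, sigma, f_best):
        # EI for minimization: E[max(f_best - F(x), 0)] with F(x) ~ N(mu, sigma^2)
        sigma = np.maximum(sigma, 1e-12)
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    rng = np.random.default_rng(0)
    f = lambda x: np.sin(3 * x) + 0.5 * x          # cheap stand-in for an expensive function
    X = rng.uniform(0, 3, size=(6, 1))             # initial design
    y = f(X).ravel()

    candidates = np.linspace(0, 3, 200).reshape(-1, 1)
    ei = np.zeros(len(candidates))
    for length_scale in (0.2, 0.5, 1.0):           # crude average over GP hyperparameters
        gp = GaussianProcessRegressor(kernel=RBF(length_scale), optimizer=None,
                                      normalize_y=True).fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        ei += expected_improvement(mu, sigma, y.min()) / 3

    x_next = candidates[np.argmax(ei)]             # next point to evaluate
    print("next evaluation point:", x_next)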
172

Is Swedish monetary policy current or forward-looking? : A study using Taylor rules to explain the setting of the repo rate

Veskoukis, Andreas, Willman, Anna January 2019 (has links)
The purpose of this paper is to examine how well a current-looking Taylor rule explains the setting of the repo rate by the Riksbank over 1995-2018 vis-à-vis a forward-looking Taylor rule. Furthermore, we investigate whether the explanatory power of these rules changes after the financial crisis. The implied Taylor rates are calculated using our own estimates of the natural rate of interest. These rates are then plotted, creating a span of uncertainty within which the repo rate can be set. Finally, we regress the repo rate on the Taylor rates. In this way, we examine which rule is more in line with the repo rate. The results show that a forward-looking Taylor rule based on a varying real interest rate is more in line with the repo rate than the current-looking rule, both for the period as a whole and after 2008. The explanatory power of both rules decreases in the period following 2008.
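
As a point of reference for the two rule forms, a minimal Python sketch of a Taylor rule follows; the textbook weights of 0.5 and the example inputs are illustrative assumptions, not the authors' estimates of the natural rate or their regression coefficients.

    # Illustrative Taylor rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*gap.
    # The current-looking version plugs in observed inflation and output gap;
    # the forward-looking version uses forecasts instead. All numbers are made up.

    def taylor_rate(r_star, pi, pi_target, output_gap, w_pi=0.5, w_gap=0.5):
        return r_star + pi + w_pi * (pi - pi_target) + w_gap * output_gap

    current = taylor_rate(r_star=0.5, pi=1.8, pi_target=2.0, output_gap=-0.5)  # observed values
    forward = taylor_rate(r_star=0.5, pi=2.2, pi_target=2.0, output_gap=0.3)   # one-year-ahead forecasts

    print(f"current-looking rule: {current:.2f} %, forward-looking rule: {forward:.2f} %")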
173

Are Highly Dispersed Variables More Extreme? The Case of Distributions with Compact Support

Adjogah, Benedict E 01 May 2014 (has links)
We consider discrete and continuous symmetric random variables X taking values in [0, 1], and thus having expected value 1/2. The main thrust of this investigation is to study the relationship between the variance Var(X) of X and the expected maximum E(Mn) = E(max(X1,...,Xn)) of n independent and identically distributed random variables X1, X2,...,Xn, each distributed as X. Many special cases are studied, some leading to very interesting alternating sums, and some progress is made towards a general theory.
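
A quick Monte Carlo experiment illustrates the question; the use of Beta(a, a) distributions (symmetric on [0, 1] with mean 1/2) and n = 5 is an assumption made here for illustration and is not taken from the thesis.

    # Does a larger Var(X) go with a larger expected maximum of n i.i.d. copies?
    # Beta(a, a) has mean 1/2 and Var(X) = 1/(4(2a+1)), so smaller a means more spread.
    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 5, 200_000

    for a in (0.5, 1.0, 2.0, 5.0):
        samples = rng.beta(a, a, size=(reps, n))
        exp_max = samples.max(axis=1).mean()
        var_x = 1.0 / (4.0 * (2.0 * a + 1.0))
        print(f"a={a}: Var(X)={var_x:.4f}, estimated E(max of {n} draws) = {exp_max:.4f}")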
174

Étude de la mesure du paramètre $\alpha$ dans le cadre de la violation de la symétrie CP à l'aide du canal $B^0_d \rightarrow \pi^+\pi^-\pi^0$ dans l'expérience LHCb / Study of the measurement of the $\alpha$ parameter in the context of CP violation using the $B^0_d \rightarrow \pi^+\pi^-\pi^0$ channel in the LHCb experiment

Robert, Arnaud 27 June 2005 (has links) (PDF)
No abstract available.
175

A robust multi-objective statistical improvement approach to electric power portfolio selection

Murphy, Jonathan Rodgers 13 November 2012 (has links)
Motivated by an electric power portfolio selection problem, a sampling method is developed for simulation-based robust design that builds on existing multi-objective statistical improvement methods. It uses a Bayesian surrogate model regressed on both design and noise variables, and makes use of methods for estimating epistemic model uncertainty in environmental uncertainty metrics. Regions of the design space are sequentially sampled in a manner that balances exploration of unknown designs and exploitation of designs thought to be Pareto optimal, while regions of the noise space are sampled to improve knowledge of the environmental uncertainty. A scalable test problem is used to compare the method with design of experiments (DoE) and crossed array methods, and the method is found to be more efficient for restrictive sample budgets. Experiments with the same test problem are used to study the sensitivity of the methods to numbers of design and noise variables. Lastly, the method is demonstrated on an electric power portfolio simulation code.
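
A small building block that such multi-objective improvement methods rely on is the non-dominance filter sketched below (a hedged illustration, assuming all objectives are minimized; it is not the sampling criterion developed in the thesis).

    # Filter a set of candidate designs down to the non-dominated (Pareto) set.
    import numpy as np

    def pareto_mask(objectives):
        """objectives: (n_designs, n_objectives) array, smaller is better."""
        n = objectives.shape[0]
        mask = np.ones(n, dtype=bool)
        for i in range(n):
            if not mask[i]:
                continue
            # point i is dominated if some point is <= in every objective
            # and strictly < in at least one
            dominates_i = (np.all(objectives <= objectives[i], axis=1)
                           & np.any(objectives < objectives[i], axis=1))
            if dominates_i.any():
                mask[i] = False
        return mask

    costs = np.array([[1.0, 9.0], [2.0, 4.0], [3.0, 3.0], [4.0, 3.5], [6.0, 1.0]])
    print(costs[pareto_mask(costs)])   # keeps [1,9], [2,4], [3,3], [6,1]; drops [4,3.5]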
176

Methodologische Aspekte biomechanischer Messungen unter Laborbedingungen / Methodological aspects of biomechanical measurements under laboratory conditions

Oriwol, Doris 30 March 2012 (has links) (PDF)
"Now tell me, where do you stand on measurement in the laboratory?" In this form, or something like it, runs the Gretchen question that this thesis raises about biomechanical analyses and studies of running carried out under laboratory conditions. It is commonly assumed that a laboratory measurement is a valid experimental operationalization of endurance running. Because of the spatial constraints, only a comparatively small number of individual trials can be recorded. For the statistical analysis, individual parameters of the time series are then usually computed, which, aggregated as means, have to represent the subject. When discrete parameters are used, the information captured in the time series is reduced considerably. Along with this, the question must be settled whether a subject's variability should be assessed from discrete values or from the entire curve. Furthermore, the question arises to what extent the arithmetic mean over a large number of trials can be used as the measure representing the subject, and to what extent its variability can be characterized by a finite number of repetitions. Two studies were first carried out for these investigations, in which ground reaction forces and angular velocity were recorded in the laboratory over 100 runs on each of two measurement days. The statistical analyses cover the convergence of sequences of cumulative means, standard deviations and root mean square errors, both for discrete parameters and for the complete recorded ground reaction force and angular velocity signals, as well as the examination of prediction bands. In addition, different algorithms were developed to determine the minimum number of trials that need to be recorded. These include nonlinear regression models fitted to the cumulative area of the prediction bands of entire curves and the analysis of differences between successive standard deviation curves. In summary, this thesis shows that the postulated sufficient and stable characterization of a subject by the arithmetic mean, and the complete and solid description of variability, could not be demonstrated for discrete parameters. For entire curves the picture was different: subjects could be characterized stably and sufficiently by the mean vertical ground reaction forces and by the ground reaction forces in the anterior-posterior direction, whereas this was not confirmed for the ground reaction forces in the mediolateral direction or for the angular velocity curve. The possibility of characterizing a subject's variability was also verified. If the original measurement procedure is retained, it is very likely that the resulting error influences the outcome of the statistical analysis and thus may misrepresent properties of the underlying population. The use of the mean of discrete parameters should therefore be avoided. The error, and its unknown magnitude, are partly uncontrollable, and its effects on further biomechanical quantities cannot be checked. The assumption that a laboratory measurement can be regarded as a valid experimental operationalization of endurance running is therefore untenable.
In the future it will be necessary to push forward research into new recording and analysis procedures, the alternative use of entire curves, and the development of new test methods.
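
To illustrate the kind of question the thesis asks about the number of trials, here is a hedged sketch that tracks the cumulative mean of a discrete parameter across simulated trials; the synthetic data and the 1 % tolerance band are assumptions for illustration, not the regression-based criteria developed in the thesis.

    # Track the cumulative mean of a discrete parameter (e.g. peak vertical ground
    # reaction force) and report the first trial count after which it stays within
    # a small band around the final mean.
    import numpy as np

    rng = np.random.default_rng(2)
    trials = rng.normal(loc=2.4, scale=0.12, size=100)     # 100 simulated trials (in body weights)

    cum_mean = np.cumsum(trials) / np.arange(1, len(trials) + 1)
    final = cum_mean[-1]
    band = 0.01 * abs(final)                               # +/- 1 % of the final mean

    inside = np.abs(cum_mean - final) <= band
    # first index from which the cumulative mean never leaves the band again
    n_min = next(i + 1 for i in range(len(inside)) if inside[i:].all())
    print(f"cumulative mean stabilizes after about {n_min} trials")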
177

A Study of the Relationship between Customer's Expected Future Use, Customer Satisfaction and Customer Retention on ADSL Broadband Internet Service- A Case of Greater Kaohsiung Area.

Wang, Jiann-hwa 18 July 2006 (has links)
Under the four forces of the opening up of the domestic telecommunication market, breakthroughs in information and communication technology, changes in consumer demand, and suppliers' race for economic efficiency, industry structures have transformed. What used to be three distinct industries with divergent directions of development - network services (computers), telecommunications (telephones) and cable television (television) - are now interacting. The market is growing rapidly with the combined digital flow of the three industries. Broadband internet played a key role in this transformation. ADSL broadband internet service has prospered in recent years. As the internet population approaches maturity, the market has shifted its focus from quantity to quality. There are two major challenges for fixed-network internet service providers: to increase market share by attracting more customers to their service base, and to retain customers so as to minimize customer loss. Telecommunication service providers attract new customers by offering special prices with gifts, and retain customers by raising switching costs and through long-term contracts. But the result of price-cutting competition has been an erosion of profit margins, and customer loyalty is swayed by low prices matched by competitors. In such an intensely competitive environment, service providers are actively developing digital content and investing in fiber-optic broadband to mitigate losses. It is hoped that richer internet content and higher broadband speeds can attract more customers and increase customer revenue contribution. The subjects of this study are ADSL broadband internet users in the greater Kaohsiung area; the regions surveyed include Kaohsiung County and Kaohsiung City. A questionnaire was used as the data collection tool, and the survey was conducted on a convenience sample. Data were analyzed using SPSS statistical software. The influence of customers' expected future benefits, overall customer satisfaction and marketing strategies on customer retention was investigated, and the interaction effects were also reviewed. The results of the study are: I. The most important factor influencing customer retention is overall customer satisfaction, followed by marketing strategy and customers' expected future use. Further results derived from the analysis: 1. Overall customer satisfaction, customers' expected future use and marketing strategy showed significant positive influence on customer retention. 2. The influence of overall customer satisfaction, customers' expected future use and marketing strategy on customer retention varies in strength. 3. Marketing strategy showed significant positive influence on overall customer satisfaction and customers' expected future use. 4. With respect to the degree of influence on customer retention, overall customer satisfaction showed greater influence than marketing strategy and customers' expected future use. II. Customers exhibit high expectations for future use. Telecommunication service providers satisfying the following aspects of "customers' expected future use" will significantly enhance customer retention: 1. Provide value-adding services. 2. Provide more value-adding application content. 3. Offer better prices. 4. Replace ADSL with fiber optic in the future free of charge.
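
The statistical core of the study is a multiple regression of customer retention on overall satisfaction, expected future use and marketing strategy (originally run in SPSS). A hedged sketch of the same kind of model on synthetic Likert-scale data, using statsmodels rather than SPSS, is shown below; the scores and coefficients are invented, chosen only to loosely mirror the reported ordering of effects.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 200
    df = pd.DataFrame({
        "satisfaction": rng.integers(1, 6, n),   # synthetic 5-point Likert scores
        "expected_use": rng.integers(1, 6, n),
        "marketing": rng.integers(1, 6, n),
    })
    # synthetic retention score; weights are illustrative only
    df["retention"] = (0.6 * df["satisfaction"] + 0.3 * df["marketing"]
                       + 0.2 * df["expected_use"] + rng.normal(0, 0.5, n))

    exog = sm.add_constant(df[["satisfaction", "expected_use", "marketing"]])
    print(sm.OLS(df["retention"], exog).fit().summary())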
178

A pareto frontier intersection-based approach for efficient multiobjective optimization of competing concept alternatives

Rousis, Damon 01 July 2011 (has links)
The expected growth of civil aviation over the next twenty years places significant emphasis on revolutionary technology development aimed at mitigating the environmental impact of commercial aircraft. As the number of technology alternatives grows along with model complexity, current methods for Pareto finding and multiobjective optimization quickly become computationally infeasible. Coupled with the large uncertainty in the early stages of design, optimal designs are sought while avoiding the computational burden of excessive function calls when a single design change or technology assumption could alter the results. This motivates the need for a robust and efficient evaluation methodology for quantitative assessment of competing concepts. This research presents a novel approach that combines Bayesian adaptive sampling with surrogate-based optimization to efficiently place designs near Pareto frontier intersections of competing concepts. Efficiency is increased over sequential multiobjective optimization by focusing computational resources specifically on the location in the design space where optimality shifts between concepts. At the intersection of Pareto frontiers, the selection decisions are most sensitive to preferences placed on the objectives, and small perturbations can lead to vastly different final designs. These concepts are incorporated into an evaluation methodology that ultimately reduces the number of failed cases, infeasible designs, and Pareto dominated solutions across all concepts. A set of algebraic test problems along with a truss design problem are presented as canonical examples for the proposed approach. The methodology is applied to the design of ultra-high bypass ratio turbofans to guide NASA's technology development efforts for future aircraft. Geared-drive and variable geometry bypass nozzle concepts are explored as enablers for increased bypass ratio and potential alternatives to traditional configurations. The method is shown to improve sampling efficiency and provide clusters of feasible designs that motivate a shift towards revolutionary technologies that reduce fuel burn, emissions, and noise on future aircraft.
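
To make the notion of a Pareto frontier intersection concrete, the toy sketch below locates the crossover between two invented analytic trade-off curves (both objectives minimized); the thesis itself works with surrogate models of engine concepts, not closed-form frontiers.

    # Toy Pareto-frontier intersection: each curve gives a concept's best
    # objective-2 value for a given objective-1 value (both minimized).
    import numpy as np

    f1 = np.linspace(0.1, 1.0, 500)          # objective 1 (e.g., normalized fuel burn)
    frontier_a = 1.0 / f1                    # concept A (invented frontier)
    frontier_b = 0.5 / f1 + 1.0              # concept B (invented frontier)

    diff = frontier_a - frontier_b
    crossings = np.where(np.diff(np.sign(diff)) != 0)[0]
    for i in crossings:
        x0, x1, d0, d1 = f1[i], f1[i + 1], diff[i], diff[i + 1]
        x_cross = x0 - d0 * (x1 - x0) / (d1 - d0)     # linear interpolation
        print(f"preferred concept switches near objective-1 = {x_cross:.3f}")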
179

Efficient Simulations in Finance

Sak, Halis January 2008 (has links) (PDF)
Measuring the risk of a credit portfolio is a challenge for financial institutions because of the regulations brought by the Basel Committee. In recent years many models and state-of-the-art methods, which utilize Monte Carlo simulation, were proposed to solve this problem. In most of the models, factors are used to account for the correlations between obligors. We concentrate on the normal copula model, which assumes multivariate normality of the factors. Computation of value at risk (VaR) and expected shortfall (ES) for realistic credit portfolio models is subtle, since (i) there is dependency throughout the portfolio; (ii) an efficient method is required to compute tail loss probabilities and conditional expectations at multiple points simultaneously. This is why Monte Carlo simulation must be improved by variance reduction techniques such as importance sampling (IS). Thus a new method is developed for simulating tail loss probabilities and conditional expectations for a standard credit risk portfolio. The new method is an integration of IS with inner replications using a geometric shortcut for dependent obligors in a normal copula framework. Numerical results show that the new method is better than naive simulation for computing tail loss probabilities and conditional expectations at a single x and VaR value. Finally, it is shown that, compared to the standard t statistic, the skewness-correction method of Peter Hall is a simple and more accurate alternative for constructing confidence intervals. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
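
For orientation, here is a plain (non-importance-sampled) Monte Carlo sketch of the quantities such methods target: the tail loss probability and the conditional expectation of the loss for dependent obligors in a one-factor normal copula model. Parameter values are illustrative assumptions, and none of the thesis's variance-reduction machinery is included.

    # One-factor normal copula: obligor i defaults if
    # sqrt(rho)*Z + sqrt(1-rho)*eps_i < Phi^{-1}(pd). Plain Monte Carlo only.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    m, rho, pd_, ead = 100, 0.3, 0.02, 1.0     # obligors, asset correlation, default prob., exposure
    threshold = norm.ppf(pd_)
    n_sims, x = 100_000, 10.0                  # scenarios, loss level of interest

    z = rng.standard_normal(n_sims)                         # common factor
    eps = rng.standard_normal((n_sims, m))                  # idiosyncratic terms
    latent = np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps
    losses = ead * (latent < threshold).sum(axis=1)         # portfolio loss per scenario

    tail = losses > x
    print("estimated P(L > x):", tail.mean())
    print("estimated E[L | L > x]:", losses[tail].mean() if tail.any() else float("nan"))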
180

On The Expected Value Of The Linear Complexity Of Periodic Sequences

Ozakin, Cigdem 01 July 2004 (has links) (PDF)
In cryptography, periodic sequences with terms in F2 are used almost everywhere. These sequences should have large linear complexity to be cryptographically strong. In fact, the linear complexity of a sequence should be close to its period. In this thesis, we study the expected value of the linear complexity of N-periodic sequences with terms in the finite field Fq. This study is entirely devoted to W. Meidl and Harald Niederreiter's paper "On the Expected Value of the Linear Complexity and the k-Error Linear Complexity of Periodic Sequences". We only expand on this paper; there is no improvement. The paper contains important theorems and results about the expected value of the linear complexity of periodic sequences.
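
The thesis is about expected values rather than algorithms, but the linear complexity of a concrete sequence is what the Berlekamp-Massey algorithm computes; a small F2 version is sketched below for illustration (it is not part of the thesis).

    # Berlekamp-Massey over F_2: length of the shortest LFSR generating the bits.
    def linear_complexity_gf2(s):
        n = len(s)
        c, b = [0] * n, [0] * n                 # connection polynomials C(x), B(x)
        c[0] = b[0] = 1
        L, m = 0, -1
        for i in range(n):
            d = s[i]                            # discrepancy of the current term
            for j in range(1, L + 1):
                d ^= c[j] & s[i - j]
            if d == 1:
                t = c[:]
                shift = i - m
                for j in range(n - shift):
                    c[j + shift] ^= b[j]
                if 2 * L <= i:
                    L, m, b = i + 1 - L, i, t
        return L

    print(linear_complexity_gf2([1, 1, 1, 1, 1, 1]))   # constant sequence  -> 1
    print(linear_complexity_gf2([1, 0, 1, 0, 1, 0]))   # alternating sequence -> 2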
