151

Hospodářská změna v Rusku: závislost na ropě / Economic change in Russia: Oil dependency

Krupa, Mikuláš January 2019 (has links)
This thesis concentrates on the Russian economy and an assessment of its dependence on oil. Russia is often cited as an example of a country suffering from the resource curse, as its natural wealth forms a significant share of the country's exports and revenues. The thesis first concentrates on the factors determining the current state of the Russian economy. The presence of symptoms of Dutch disease in the Russian economy is studied using a vector error correction model (VECM) applied to the country's real effective exchange rate (REER). The thesis also contains an assessment of the Russian institutional environment, checking for other symptoms predicted by resource curse theory. An analysis of the latest federal budget is used to evaluate the sustainability of Russian federal finances. The thesis concludes with a discussion of the results and possible paths of future development for the Russian economy.
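By way of illustration, the following is a minimal sketch of how such a VECM might be fitted to a REER series, assuming Python with statsmodels; the two monthly series are synthetic stand-ins for the REER and an oil-price index, not the thesis's data, and the lag and rank choices are assumptions that would be tested in practice.

```python
# Minimal VECM sketch for a Dutch-disease-style REER/oil relationship.
# Synthetic data only; a real analysis would load actual monthly series.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
n = 240  # 20 years of monthly observations

# Oil price as a random walk; REER error-corrects toward it, mimicking the
# commodity-currency link that the resource-curse literature predicts.
oil = np.cumsum(rng.normal(0, 1, n))
reer = np.empty(n)
reer[0] = oil[0]
for t in range(1, n):
    reer[t] = reer[t - 1] + 0.3 * (oil[t - 1] - reer[t - 1]) + rng.normal(0, 0.5)

data = pd.DataFrame({"reer": reer, "oil": oil})

# One cointegrating relation, one lagged difference, constant inside the
# cointegration relation -- assumed here, tested (e.g. Johansen) in practice.
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()

# alpha holds the loading (error-correction) coefficients: a significantly
# negative alpha in the REER equation indicates adjustment toward the
# long-run oil/REER relation.
print(res.alpha)
print(res.beta)
```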
152

Norsko a Botswana jako výjimky z teorie prokletí přírodními zdroji / Norway and Botswana as exceptions to the theory of the curse of natural resources

Drozdová, Miroslava January 2020 (has links)
This thesis compares Norway, Botswana and Venezuela and their sovereign wealth funds. The first two countries are often cited as exceptions to the resource curse theory, which describes the phenomenon whereby countries highly dependent on income from the export of natural resources show slower economic, political and institutional development. Venezuela, by contrast (although it was considered an exception in the past), is severely affected by this phenomenon and thus serves as a negative example in this thesis. The thesis focuses on sovereign wealth funds and examines whether, and under what conditions, these funds help reverse the resource curse. Based on the theoretical part, five key characteristics are identified that a fund must meet in order to counteract the negative manifestations of the resource curse: (1) offsetting the effects of volatility, (2) diversifying the economy, (3) budgetary policy, (4) controlling the allocation of expenditure, and (5) transparency of funds. Judged against these characteristics, the Norwegian sovereign wealth fund works best of the selected funds as a defence against the resource curse, followed by the Botswana fund and, third, the Venezuelan fund. Norway and Botswana...
153

Reaktioner på stöld i antikens Rom : En känslohistorisk undersökning av defixiones från den heliga källan i Sulis Minervas helgedom i Bath / Reactions to theft in ancient Rome : An emotional history on the defixiones found in the sacred spring of the temple of Sulis Minerva at Bath

Andersson, Linus January 2023 (has links)
This paper examines, by means of close reading, the Tabellae Sulis (a series of curse tablets against thieves, found in the sacred spring of the temple of Sulis Minerva at Bath) in order to explore their emotional content and societal context. The 32 tablets studied are concerned with the theft of minor sums of silver and various items of clothing, crimes most likely committed while the victim was soaking in the sacred spring. The tablets can be considered a sort of quasi-legal agreement between the victim and the goddess: the latter is granted partial ownership of the stolen object, or in some cases of the thief themselves, and is expected to punish the thief until they return the object in question to the temple where it was stolen. In terms of punishment, the tablets attack everything from the thief's mind, motor functions and senses to their ability to reproduce and even their very lives. Most commonly they request that the thief pay the value of the stolen object in their own blood. On an emotional level, the tablets give expression to the victims' anger and hunger for vengeance. In this way, they can be considered to have served as an emotional control mechanism, a safe and generally accepted way to express and act on feelings that might otherwise have proven socially problematic.
154

Navigating the Metric Zoo: Towards a More Coherent Model For Quantitative Evaluation of Generative ML Models

Dozier, Robbie 26 August 2022 (has links)
No description available.
155

Влияние мирового нефтяного рынка на экономику Ирака : магистерская диссертация / Impact of the global oil market on the economy of Iraq

Альристим, М. Х. А., Alristim, M. H. A. January 2020 (has links)
The relevance of this master's thesis stems from the fact that Iraq today is a vivid example of a resource-dependent country, one affected by the resource curse and facing many problems. The purpose of the research is to assess the impact of the volatility of the global oil market on the economic development of Iraq and to determine the prospects for its development. In line with this purpose, the thesis sets the following tasks: to consider the theoretical aspects of resource-dependent countries and the features of their development; to study the theory of the "resource curse" and ways to overcome it; to analyze the world oil market, the features of its development and its driving factors; to consider OPEC and its role in regulating the global oil market; to analyze the economic development of Iraq as a resource-dependent country; to conduct a SWOT analysis of the Iraqi economy; and to develop recommendations for the development of the economy of Iraq. The scientific novelty lies in identifying the factors that make Iraq a resource-dependent country and contribute to the resource curse, and in conducting a SWOT analysis highlighting the strengths, weaknesses, threats and opportunities for the economy of Iraq. The practical significance of the research consists in developing recommendations to help the country escape its resource dependence; implementing these recommendations would help improve the situation in Iraq.
156

PHYSICS INFORMED MACHINE LEARNING METHODS FOR UNCERTAINTY QUANTIFICATION

Sharmila Karumuri (14226875) 17 May 2024 (has links)
The need to carry out uncertainty quantification (UQ) is ubiquitous in science and engineering. However, carrying out UQ for real-world problems is not straightforward, and it demands a large computational budget and resources. The objective of this thesis is to develop computationally efficient, machine-learning-based approaches to UQ. Specifically, we address two problems.

The first problem is that it is difficult to carry out uncertainty propagation (UP) in systems governed by elliptic PDEs with spatially varying uncertain fields in the coefficients and boundary conditions. Because the uncertainties are functional, the number of uncertain parameters is large. In these situations, UP requires solving the PDE a large number of times to obtain convergent statistics of the quantity governed by the PDE, and solving the PDE repeatedly with a numerical solver imposes a heavy computational burden. To address this, we propose to learn a surrogate of the PDE solution in a data-free manner, utilizing the physics available in the form of the PDE itself. We represent the solution of the PDE as a deep-neural-network-parameterized function of space and the uncertain parameters, and we introduce a physics-informed loss function derived from variational principles to learn the parameters of the network. The accuracy of the learned surrogate is validated against the corresponding ground-truth estimate from the numerical solver. We demonstrate the merit of this approach by solving UP problems and inverse problems faster than a standard numerical solver.

The second problem we focus on concerns inverse problems. The state-of-the-art approach poses the inverse problem as a Bayesian inference task and estimates the distribution of the input parameters conditioned on the observed data (the posterior). Markov chain Monte Carlo (MCMC) methods and variational inference methods provide ways to estimate the posterior, but these inference techniques must be re-run whenever a new set of observed data is given, again leading to a computational burden. To address this, we propose to learn a Bayesian inverse map, i.e., the map from the observed data to the posterior, which enables on-the-fly inference. We demonstrate the approach on various examples, validating the posteriors learned with our approach against corresponding ground-truth posteriors from the MCMC method.
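As a rough illustration of the variational physics-informed loss described above, here is a minimal data-free training sketch for the 1D Poisson problem, assuming PyTorch; it omits the uncertain-parameter inputs the thesis uses and is not the thesis's implementation.

```python
# Deep-Ritz-style sketch for -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.
import torch

torch.manual_seed(0)

# Small network; the thesis would also feed the uncertain parameters as inputs.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def u(x):
    # The factor x(1 - x) enforces the Dirichlet conditions exactly,
    # so the loss needs no boundary penalty term.
    return x * (1.0 - x) * net(x)

def f(x):
    # Forcing chosen so the exact solution is sin(pi x).
    return torch.pi ** 2 * torch.sin(torch.pi * x)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)  # Monte Carlo quadrature nodes
    ux = u(x)
    du = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    # Ritz energy, integral of (1/2) u'^2 - f u, estimated by sampling;
    # its minimizer over the trial space is the weak solution of the PDE.
    loss = (0.5 * du ** 2 - f(x) * ux).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare the surrogate against the known solution at a few points.
xt = torch.linspace(0.1, 0.9, 5).reshape(-1, 1)
print(torch.cat([u(xt).detach(), torch.sin(torch.pi * xt)], dim=1))
```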
157

Välsignad förbannelse : En retorisk analys av bibliskt material i Black Metallyrik / Blessed curse : A rhetorical analysis of biblical material in Black Metal lyrics

Jonsäll, Hans January 2015 (has links)
This bachelor thesis offers a rhetorical analysis of the album Maranatha by the Swedish Black Metal artist Funeral Mist. Its main focus is the intertextuality between the song "Blessed Curse" and the biblical book of Deuteronomy, especially Deut 28, from which the song samples a large portion of text. In the analysis I uncover the similarities and differences between the two texts in order to explain how the biblical fragments take on new meanings when rearranged and taken out of their original context. The analysis concludes by relating the material to its new context, i.e. the album Maranatha and the Black Metal scene, explaining other intertexts and references to the Bible and discussing which genre is best suited to describe the album as a whole. The results of the study show that the biblical quotations in the lyrics convey radically different messages and meanings compared to their original content in Deut 28. This in turn shows how dependent linguistic symbols are on their context. I finish the thesis with a few reflections on the moral and ethical implications of this use of biblical material, concerning the anti-Christian agenda supported by members of the Black Metal scene and specifically how Daniel Rostén of Funeral Mist views his own work and agenda.
158

Protection of petroleum resources in Africa : a comparative analysis of oil and gas laws of selected African States

Mailula, Douglas Tlogane 08 July 2014 (has links)
The resource curse is a defining feature of the African continent. Despite vast resource wealth, Africa remains the poorest and most underdeveloped continent in the world. The aim of this study is to conduct a comparative analysis of the primary laws regulating oil and gas exploration and production activities in Angola, Nigeria and South Africa, in order to determine their effectiveness in protecting the continent's depleting petroleum resources. Different regulatory models apply: Angola follows the Norwegian carried-interest model; Nigeria has retained a British discretionary model; and South Africa has developed a unique model of its own. The comparison is conducted by analysing these regulatory systems in terms of their legal frameworks; the legal nature of the regulatory systems; ownership of the oil and gas resources; the legal nature of licences; organisational or institutional structures; fiscal systems; benefits to local communities from the proceeds of oil and gas resources; local content; state/government participation arrangements; and environmental challenges. The study evaluates the effectiveness of these regimes by examining the extent to which they recognise and enforce state ownership of the oil and gas resources in situ; recognise and enforce the doctrine of Permanent Sovereignty over Natural Resources (PSNR); protect the environment; provide institutional capacity for the management of resources; and protect local communities from exploitation and abuse by recognising their rights to benefit from the revenues derived from these resources. An overall assessment of the three systems reveals that there is no ideal model for oil and gas regulation in Africa, although the Norwegian model might well be considered ideal if applied correctly and with care in Angola. The study hopes to gain practical importance for the proper regulation of the oil and gas industries' upstream activities in Africa and, through the recommendations made, to assist the governments of the selected jurisdictions in their policy revisions. / Economics / LLD.
159

Efficient estimation using the characteristic function : theory and applications with high frequency data

Kotchoni, Rachidi 05 1900 (has links)
The attached file is created with Scientific WorkPlace LaTeX. / In estimating the integrated volatility of financial assets using noisy high-frequency data, the time-series properties assumed for the microstructure noise determine the proper choice of the volatility estimator.

In the first chapter of this thesis, we propose a new model for the microstructure noise with three important features. First, the noise is L-dependent. Second, the memory lag L is allowed to increase with the sampling frequency. Third, the noise may include an endogenous part, that is, a component correlated with the latent returns. The main difference between this microstructure model and existing ones is that it implies a first-order autocorrelation that converges to 1 as the sampling frequency goes to infinity. We use this semi-parametric model to derive a new shrinkage estimator for the integrated volatility. The proposed estimator makes an optimal signal-to-noise trade-off by combining a consistent estimator with an inconsistent one, optimality being defined in terms of variance minimization. Simulation results show that the shrinkage estimator behaves better than the best of the two combined estimators. We also propose estimators for the parameters of the noise model. An empirical study based on stocks listed in the Dow Jones Industrials shows the relevance of accounting for possible time dependence in the noise process.

Chapters 2, 3 and 4 pertain to the generalized method of moments based on the characteristic function. The likelihood functions of many financial econometrics models are not known in closed form; this is the case, for example, for the stable distribution and for discretely observed continuous-time models. In these cases, one may estimate the parameter of interest by specifying a moment condition based on the difference between the theoretical (conditional) characteristic function and its empirical counterpart. The challenge is then to exploit the whole continuum of moment conditions so defined in order to achieve maximum-likelihood efficiency. This problem was solved by Carrasco and Florens (2000), who propose the CGMM (continuum GMM) procedure. The objective function of the CGMM is a quadratic form on the Hilbert space defined by the moment function, and it involves a Tikhonov-type regularized inverse of the covariance operator associated with the continuum of moment conditions. Carrasco and Florens (2000) show that the estimator obtained by minimizing this objective function is asymptotically as efficient as the maximum likelihood estimator, provided that the regularization parameter (α) converges to zero as the sample size goes to infinity. However, the nature of this objective function raises two important questions: first, how should α be selected in practice; and second, how can the CGMM be implemented when the multiplicity (d) of the integrals embedded in the objective function is large? These questions are tackled in the last three chapters of the thesis.

In Chapter 2, we propose to choose α by minimizing the approximate mean squared error (MSE) of the estimator. Following an approach similar to Newey and Smith (2004), we derive a higher-order expansion of the estimator, from which we characterize the finite-sample dependence of the MSE on α. We then provide two data-driven methods for selecting the regularization parameter in practice: the first relies on the higher-order expansion of the MSE, whereas the second uses only Monte Carlo simulations. We show that the simulation technique delivers a consistent estimator of the optimal α, and our Monte Carlo experiments confirm the importance of selecting α optimally.

The goal of Chapter 3 is to illustrate how to implement the CGMM efficiently for d ≤ 2. We first review the consistency and asymptotic normality properties of the CGMM estimator, then suggest numerical recipes for its implementation, and finally carry out a simulation study with the stable distribution that confirms the accuracy of the CGMM as an inference method. An empirical application based on the autoregressive gamma variance model leads to a well-known conclusion: investors require a positive premium for bearing expected risk, while a negative premium is attached to unexpected risk.

In implementing the characteristic-function-based CGMM, a major difficulty lies in the iterative numerical evaluation of the multiple integrals embedded in the objective function. Numerical quadratures are among the most accurate methods available in this context; unfortunately, the number of quadrature points grows exponentially with d, so accurate implementation of the CGMM becomes practically unfeasible for multivariate or non-Markovian models where d ≥ 3. In Chapter 4, we propose an alternative procedure, frequency-domain resampling, which consists in creating univariate samples by taking linear combinations of the elements of the original vector process, with the weights drawn randomly from a normalized subset of ℝ^{d}. Each univariate index generated in this way is a frequency-domain bootstrap sample that can be used to compute an estimator of the parameter of interest, and the final estimator is an optimal linear combination of all the estimators so obtained. The method is illustrated by a simulation study and an empirical application based on autoregressive gamma models.

Throughout, this thesis makes extensive use of the bootstrap, a technique whereby the statistical properties of an unknown distribution can be estimated from an estimate of that distribution. Our simulations and empirical results could thus in principle be improved by using state-of-the-art refinements of the bootstrap methodology.
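The variance-minimizing combination behind such a shrinkage estimator can be sketched generically, as below; in this hypothetical example the two estimators are stand-ins treated as unbiased, whereas the thesis combines a consistent realized-volatility estimator with an inconsistent one, so the real trade-off also involves bias.

```python
# Generic sketch of the variance-minimizing linear combination underlying a
# shrinkage estimator. T1 and T2 are stand-ins, not the thesis's estimators.
import numpy as np

rng = np.random.default_rng(1)
theta = 2.0  # true quantity (e.g., integrated volatility)

# Many replications of two correlated, unbiased estimators with different
# variances (assumed known here; estimated in practice).
n_rep = 100_000
cov = np.array([[1.0, 0.3], [0.3, 0.5]])
draws = rng.multivariate_normal([theta, theta], cov, size=n_rep)
T1, T2 = draws[:, 0], draws[:, 1]

# For T = w*T1 + (1 - w)*T2, Var(T) is minimized at
#   w* = (Var(T2) - Cov(T1, T2)) / (Var(T1) + Var(T2) - 2*Cov(T1, T2)).
v1, v2, c12 = cov[0, 0], cov[1, 1], cov[0, 1]
w = (v2 - c12) / (v1 + v2 - 2 * c12)
T = w * T1 + (1 - w) * T2

print(f"w* = {w:.3f}")
print(f"Var(T1) = {T1.var():.3f}, Var(T2) = {T2.var():.3f}, Var(T) = {T.var():.3f}")
# The combined estimator has a smaller variance than either component.
```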
160

Apprentissage machine efficace : théorie et pratique / Efficient machine learning: theory and practice

Delalleau, Olivier 03 1900 (has links)
Despite constant progress in terms of available computational power, memory and amount of data, machine learning algorithms need to be efficient in how they use these resources. Although minimizing cost is an obvious major concern, another motivation is to design learning mechanisms that can learn as efficiently as intelligent beings. This thesis tackles the problem of efficient learning through several papers dealing with a wide range of machine learning algorithms, viewing the topic both from the point of view of computational efficiency (the processing time and memory required by the algorithms) and of statistical efficiency (the number of examples needed to accomplish a given task).

The first contribution of this thesis is to shed light on various statistical inefficiencies in existing algorithms. We show that decision trees do not generalize well on tasks with certain particular properties (chapter 3), and that a similar flaw affects typical graph-based semi-supervised learning algorithms (chapter 5); in each case the flaw is a form of the curse of dimensionality specific to that algorithm. For a subclass of neural networks called sum-product networks, we prove that representing certain functions with a single hidden layer can be exponentially less efficient than with deep networks (chapter 4). These analyses help in understanding some problems inherent to these algorithms and steer research towards approaches that may overcome them. We also exhibit computational inefficiencies in popular graph-based semi-supervised learning algorithms (chapter 5) as well as in the learning of mixtures of Gaussians with missing values (chapter 6); in both cases we propose new algorithms that make it possible to scale to significantly larger datasets.

The last two chapters also deal with computational efficiency, but in different ways. Chapter 7 gives a theoretical analysis of an existing algorithm for the efficient training of restricted Boltzmann machines, contrastive divergence, providing additional insight into the reasons for this algorithm's success. Finally, chapter 8 describes an application of machine learning to video games, where computational efficiency is tied to software and hardware engineering constraints which, although often ignored in research papers, are ubiquitous in practice.
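As an illustration of the training rule analyzed in chapter 7, here is a minimal CD-1 sketch for a binary restricted Boltzmann machine; the sizes, data and learning rate are toy values, not drawn from the thesis.

```python
# One-step contrastive divergence (CD-1) for a small binary RBM. Toy setup.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, batch = 6, 4, 8

W = 0.01 * rng.normal(size=(n_vis, n_hid))  # visible-hidden weights
b_v = np.zeros(n_vis)                        # visible biases
b_h = np.zeros(n_hid)                        # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """One CD-1 parameter update from a batch of binary visible vectors v0."""
    global W, b_v, b_h
    # Positive phase: hidden activations given the data.
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # One Gibbs step: reconstruct visibles, then hidden probabilities again.
    pv1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)
    # CD-1 approximates the log-likelihood gradient by the difference of
    # data-driven and reconstruction-driven correlations.
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

data = (rng.random((batch, n_vis)) < 0.5).astype(float)
for _ in range(100):
    cd1_update(data)
print(W.round(3))
```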
