252
Algorithmes d'accélération générique pour les méthodes d'optimisation en apprentissage statistique / Generic acceleration schemes for gradient-based optimization in machine learning
Lin, Hongzhou. 16 November 2017 (has links)
Optimization problems arise naturally in machine learning for supervised problems. A typical example is the empirical risk minimization (ERM) formulation, which aims to find the best a posteriori estimator minimizing the regularized risk on a given dataset. The current challenge is to design efficient optimization algorithms that are able to handle large amounts of data in high-dimensional feature spaces. Classical optimization methods, such as the gradient descent algorithm and its accelerated variants, are computationally expensive in this setting because they require a full pass through the dataset at each evaluation of the gradient. This was the motivation for the recent development of incremental algorithms. By loading a single data point (or a minibatch) for each update, incremental algorithms reduce the computational cost per iteration, yielding a significant improvement over classical methods, both in theory and in practice. A natural question arises: is it possible to further accelerate these incremental methods? We provide a positive answer by introducing several generic acceleration schemes for first-order optimization methods, which is the main contribution of this manuscript.

In chapter 2, we develop a proximal variant of the Finito/MISO algorithm, an incremental method originally designed for smooth, strongly convex problems. In order to deal with a possibly non-smooth regularization penalty, we modify the update by introducing an additional proximal step. The resulting algorithm enjoys a linear convergence rate similar to that of the original algorithm when the problem is strongly convex.

In chapter 3, we introduce a generic acceleration scheme, called Catalyst, for accelerating gradient-based optimization methods in the sense of Nesterov. Our approach applies to a large class of algorithms, including gradient descent, block coordinate descent, incremental algorithms such as SAG, SAGA, SDCA, SVRG, and Finito/MISO, and their proximal variants. For all of these methods, we provide acceleration and explicit support for non-strongly convex objectives. The Catalyst algorithm can be viewed as an inexact accelerated proximal point algorithm, applying a given optimization method to approximately compute the proximal operator at each iteration. The key to achieving acceleration is to appropriately choose an inexactness criterion and control the required computational effort. The resulting rate matches the worst-case optimal rate of incremental methods up to a logarithmic factor, so the approach is not only generic but also nearly optimal from a theoretical point of view. We provide a global complexity analysis and show that acceleration is useful in practice, especially for ill-conditioned problems.

In chapter 4, we present another generic approach, called QNing, which applies Quasi-Newton principles to accelerate gradient-based optimization methods. The algorithm combines an inexact L-BFGS algorithm with Moreau-Yosida regularization and applies to the same class of functions as Catalyst. To the best of our knowledge, QNing is the first Quasi-Newton-type algorithm compatible with both composite objectives and the finite-sum setting. We provide extensive experiments showing that QNing yields significant improvements over competing methods on large-scale machine learning problems.
We conclude the thesis by extending the Catalyst algorithm to the nonconvex setting, in joint work with Courtney Paquette and Dmitriy Drusvyatskiy of the University of Washington and my PhD advisors. The strength of the approach lies in its ability to adapt automatically to convexity: no information about the convexity of the objective function is required before running the algorithm. When the objective is convex, the proposed approach enjoys the same convergence rates as the convex Catalyst algorithm, leading to acceleration; when the objective is nonconvex, it achieves the best known convergence rate to stationary points for first-order methods. Promising experimental results have been observed when applying the method to sparse matrix factorization problems and to the training of neural network models.
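To make the inexact accelerated proximal-point view of Catalyst concrete, here is a minimal Python sketch of the outer loop, using plain gradient descent as the inner solver on a toy ridge-regularized least-squares problem. The fixed inner iteration budget, the parameter choices, and the toy problem are illustrative assumptions; the actual scheme controls the inexactness of each proximal step through a stopping criterion, as described above.

```python
import numpy as np

def catalyst(grad_f, x0, mu, L, kappa, n_outer=50, n_inner=20):
    """Sketch of the Catalyst outer loop for a mu-strongly convex, L-smooth f.

    Each outer step approximately minimizes the proximal subproblem
        g(x) = f(x) + (kappa / 2) * ||x - y||^2
    with a fixed budget of gradient steps (a simplification of the
    criterion-based inexactness control), then extrapolates in the
    sense of Nesterov.
    """
    q = mu / (mu + kappa)            # inverse condition number of the subproblem
    alpha = np.sqrt(q)               # alpha_0 for the strongly convex case
    x, y = x0.copy(), x0.copy()
    step = 1.0 / (L + kappa)         # the subproblem is (L + kappa)-smooth
    for _ in range(n_outer):
        x_new = x.copy()             # warm start at the previous outer iterate
        for _ in range(n_inner):     # inexact proximal step
            x_new -= step * (grad_f(x_new) + kappa * (x_new - y))
        # Solve alpha_k^2 = (1 - alpha_k) * alpha_{k-1}^2 + q * alpha_k.
        c = alpha**2 - q
        alpha_new = 0.5 * (np.sqrt(c**2 + 4.0 * alpha**2) - c)
        beta = alpha * (1.0 - alpha) / (alpha**2 + alpha_new)
        y = x_new + beta * (x_new - x)   # extrapolation step
        x, alpha = x_new, alpha_new
    return x

# Toy usage: f(x) = ||Ax - b||^2 / (2n) + (lam / 2) * ||x||^2.
rng = np.random.default_rng(0)
n, d, lam = 200, 50, 1e-2
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
grad_f = lambda x: A.T @ (A @ x - b) / n + lam * x
eigs = np.linalg.eigvalsh(A.T @ A / n)
x_hat = catalyst(grad_f, np.zeros(d), mu=eigs[0] + lam, L=eigs[-1] + lam, kappa=0.1)
```

The choice of kappa trades off the conditioning of each subproblem against the strength of the acceleration; tuning it is part of the global complexity analysis mentioned above.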
253
Machine Learning Strategies for Large-scale Taxonomies / Strategies d'apprentissage pour la classification dans les grandes taxonomies
Babbar, Rohit. 17 October 2014 (has links)
In the era of Big Data, we need efficient and scalable machine learning algorithms that can automatically classify terabytes of data. In this thesis, we study the machine learning challenges posed by classification in large-scale taxonomies. These challenges include the computational complexity of training and prediction as well as the performance on unseen data. In the first part of the thesis, we study the power-law distribution underlying large-scale taxonomies. This analysis motivates the derivation of bounds on the space complexity of hierarchical classifiers. Exploiting this distribution further, we design a classification scheme that leads to better accuracy on large-scale power-law distributed categories. We also propose an efficient method for model selection when training multi-class classifiers such as Support Vector Machines and Logistic Regression. Finally, we address another key model selection problem in large-scale classification: the choice between flat and hierarchical classification, from a learning-theoretic perspective. The generalization error bound we derive identifies the cases in which hierarchical classification is preferable to flat classification and explains the empirical findings of many recent studies in large-scale hierarchical classification. We further exploit the developed bounds to propose two methods for adapting a given taxonomy of categories into output taxonomies that yield better test accuracy when used in a top-down setup.
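As a toy illustration of the top-down setup mentioned above, the sketch below trains one logistic model per internal node of a small taxonomy and routes each example from the root to a leaf. The taxonomy, the routing rule, and the per-node models are assumptions made for illustration, not the methods developed in the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class TopDownClassifier:
    """Top-down classification over a taxonomy: one multi-class logistic
    model per internal node routes examples toward a child subtree."""

    def __init__(self, children, leaf_path):
        self.children = children    # internal node -> list of child nodes
        self.leaf_path = leaf_path  # leaf label -> root-to-leaf path (incl. leaf)
        self.models = {}

    def fit(self, X, y):
        for node in self.children:
            rows, targets = [], []
            for i, label in enumerate(y):
                path = self.leaf_path[label]
                if node in path[:-1]:            # example passes through this node
                    rows.append(i)
                    targets.append(path[path.index(node) + 1])
            if len(set(targets)) > 1:            # skip degenerate single-child nodes
                self.models[node] = LogisticRegression(max_iter=1000).fit(X[rows], targets)
        return self

    def predict_one(self, x):
        node = "root"
        while node in self.children:             # walk down until a leaf is reached
            m = self.models.get(node)
            node = m.predict(x.reshape(1, -1))[0] if m else self.children[node][0]
        return node

# Tiny synthetic taxonomy with four leaf categories.
children = {"root": ["sci", "sport"],
            "sci": ["physics", "biology"], "sport": ["soccer", "tennis"]}
leaf_path = {leaf: ["root", parent, leaf]
             for parent in ("sci", "sport") for leaf in children[parent]}
rng = np.random.default_rng(0)
centers = {leaf: 3.0 * rng.normal(size=5) for leaf in leaf_path}
X = np.vstack([centers[leaf] + rng.normal(size=(20, 5)) for leaf in leaf_path])
y = np.array([leaf for leaf in leaf_path for _ in range(20)])
clf = TopDownClassifier(children, leaf_path).fit(X, y)
print(clf.predict_one(X[0]), y[0])
```

In a flat setup, a single model discriminates among all leaves at once; in the top-down setup above, errors made near the root cannot be recovered lower down, which is one side of the trade-off the generalization error analysis quantifies.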
254
Effects of anthropogenic pressure on large mammal species in the Hyrcanian forest, Iran / Effects of poaching, logging and livestock grazing on large mammals
Soofi, Mahmood. 08 December 2017 (has links)
No description available.
255
Large smooth cylindrical elements located in a rectangular channel : upstream hydraulic conditions and drag force evaluation
Turcotte, Benoit. 11 1900 (has links)
Classical approaches to evaluating the stability of large woody debris (LWD) introduced in streams for habitat restoration or flood management purposes are usually based on inappropriate assumptions and hydraulic equations. Results suggest that the physics of small cylindrical elements located in large channels cannot be transferred to the case of a large roughness element placed in a small channel. The introduction of LWD in a small channel can significantly modify the upstream hydraulic conditions, and this modification has direct implications for the stability of the LWD.
Experiments were performed in a controlled environment: a small stream section was represented by a low-roughness rectangular flume, and LWD were modeled with smooth PVC cylinders. Direct force measurements were performed with a load cell, and the results were used to identify an equation that evaluates the drag force acting on a large cylindrical element placed in a rectangular channel. This equation does not depend on a drag coefficient. Water depths were also measured during the experiments, and the results were used to develop an approach that evaluates the upstream hydraulic impacts of a large cylinder introduced in a rectangular channel. The effects of varying the unit discharge (discharge per unit of width), cylinder size, cylinder elevation above the channel bed, and downstream hydraulic conditions could be related to the upstream hydraulic conditions with relative success. Dimensionless parameters were developed to increase the versatility of the approach.
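For context, a classical route to a drag force that likewise involves no drag coefficient is a momentum balance between an upstream and a downstream cross-section of the channel. The sketch below implements that textbook balance; it is an illustration of the general idea under stated assumptions, not necessarily the equation identified in this work.

```python
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def specific_force(q, h):
    """Specific force (momentum function) per unit width, in m^2:
    M(h) = q^2 / (g h) + h^2 / 2, for unit discharge q (m^2/s) and depth h (m)."""
    return q**2 / (G * h) + h**2 / 2.0

def obstruction_force(q, h_up, h_down, width):
    """Streamwise force (N) exerted by the flow on an obstruction spanning a
    rectangular channel, from a momentum balance between an upstream and a
    downstream section. Assumes steady flow, hydrostatic pressure at both
    sections, and negligible bed friction between them."""
    return RHO * G * width * (specific_force(q, h_up) - specific_force(q, h_down))

# Example: unit discharge of 0.5 m^2/s, depth backed up from 0.30 m to 0.42 m.
print(obstruction_force(q=0.5, h_up=0.42, h_down=0.30, width=1.0))  # ~186 N
```

A positive value means the obstruction removes streamwise momentum from the flow, consistent with the backwater rise upstream of the cylinder described above.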
The application of this approach to field cases is expected to require adjustments, mainly because the roughness of natural environments differs from the smoothness of the controlled environment described in this work. / Applied Science, Faculty of / Civil Engineering, Department of / Graduate
256
Grandes déviations de systèmes stochastiques modélisant des épidémies / Large deviations for stochastic systems modeling epidemics
Samegni Kepgnou, Brice. 13 July 2017 (has links)
In this thesis, we develop the Freidlin-Wentzell theory for the "natural" Poissonian random perturbations of ODE models in epidemic dynamics (and similarly for models in ecology or population dynamics), in order to predict the time taken by random perturbations to extinguish a "stable" endemic situation. We start with a shorter proof of a recent result of Kratz and Pardoux (2017), which establishes the large deviations principle for epidemic models, under a somewhat different hypothesis that is satisfied in all the examples of infectious disease models we have in mind. Next, we establish the large deviations principle for Poissonian SDEs reflected at the boundary of a sufficiently regular open set. We also identify the most likely boundary region through which the solution of the Poissonian SDE exits the domain of attraction of a stable equilibrium of its law-of-large-numbers limit. We conclude this thesis with a presentation of "non-standard finite difference" methods, suitable for numerically approximating the solutions of our ODEs, and with the resolution of an optimal control problem that yields a good approximation of the extinction time of an endemic situation.
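To illustrate the quantity at stake, here is a minimal sketch of a stochastic SIS epidemic, one standard example of a Poissonian epidemic model (chosen purely for illustration; the thesis treats a general class). The Gillespie algorithm simulates the jump process until the infection, started at the endemic equilibrium, goes extinct; the large deviations principle predicts that the mean extinction time grows exponentially with the population size.

```python
import numpy as np

def sis_extinction_time(N=100, beta=1.5, gamma=1.0, rng=None):
    """Gillespie simulation of a stochastic SIS epidemic until extinction.

    Transitions for the number of infectives I in a population of size N:
      I -> I + 1 at rate beta * I * (N - I) / N   (infection)
      I -> I - 1 at rate gamma * I                (recovery)
    Starts at the deterministic endemic level I* = N * (1 - gamma / beta),
    which requires R0 = beta / gamma > 1, and returns the extinction time.
    """
    rng = rng or np.random.default_rng()
    I = int(N * (1.0 - gamma / beta))
    t = 0.0
    while I > 0:
        up = beta * I * (N - I) / N
        down = gamma * I
        t += rng.exponential(1.0 / (up + down))   # time to the next event
        I += 1 if rng.random() < up / (up + down) else -1
    return t

# The mean of these times grows roughly like exp(c * N) for some c > 0, which
# is the exponential scale that the large deviations principle and the
# optimal control problem above quantify.
print(np.mean([sis_extinction_time(N=50) for _ in range(20)]))
```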
257
Single layer routing : mapping topological to geometric solutions
Hong, Won-kook. January 1986 (has links)
No description available.
258
A Theoretical and Experimental Investigation of Power Transmission in a Large Diameter Optical Fiber
Carter, Frances D. 07 August 2004 (has links)
The effect of varying the angle of incidence of a Gaussian beam from a He-Ne laser upon a large-radius optical fiber is investigated theoretically and experimentally. The modes of a weakly guiding, step-index fiber were determined by using an analytical approximation technique to calculate the corresponding eigenvalues. An expression was developed for the fractional power per mode as a function of the angle of incidence for such a fiber, and it was used to calculate the fractional power in each of the 171 lowest-order modes. This in turn allowed the calculation of the fractional power per order and the total power. Comparing these theoretical results with our experimental results shows that the theoretical method is accurate at normal incidence and gives qualitative, but not quantitative, agreement at larger angles.
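For a weakly guiding step-index fiber, the LP-mode eigenvalues solve the textbook characteristic equation u J_{l+1}(u)/J_l(u) = w K_{l+1}(w)/K_l(w) with u² + w² = V²; whether this matches the exact analytical approximation used in this work is an assumption. The sketch below locates the eigenvalues numerically by scanning a pole-free form of the equation for sign changes.

```python
import numpy as np
from scipy.special import jv, kv
from scipy.optimize import brentq

def lp_mode_eigenvalues(V):
    """Eigenvalues u of the guided LP_{lm} modes of a weakly guiding
    step-index fiber with normalized frequency V, found as the roots of
        f(u) = u J_{l+1}(u) K_l(w) - w K_{l+1}(w) J_l(u),  w = sqrt(V^2 - u^2),
    a cross-multiplied (pole-free) form of the characteristic equation.
    Returns a dict mapping the azimuthal order l to a list of eigenvalues."""
    def f(u, l):
        w = np.sqrt(V**2 - u**2)
        return u * jv(l + 1, u) * kv(l, w) - w * kv(l + 1, w) * jv(l, u)

    modes, l = {}, 0
    while True:
        grid = np.linspace(1e-6, V - 1e-6, 2000)
        vals = [f(u, l) for u in grid]
        roots = [brentq(f, a, b, args=(l,))
                 for a, b, fa, fb in zip(grid, grid[1:], vals, vals[1:])
                 if fa * fb < 0]
        if not roots:        # no guided mode of this azimuthal order: stop
            break
        modes[l] = roots
        l += 1
    return modes

# A large-diameter (large-V) fiber supports many modes; each LP_{lm} mode
# with l >= 1 is additionally two-fold degenerate in orientation.
modes = lp_mode_eigenvalues(V=20.0)
print({l: len(us) for l, us in modes.items()})
```

Getting from these eigenvalues to the fractional power per mode additionally requires the overlap integrals between the tilted incident Gaussian beam and each mode profile; such overlap integrals are the standard route to the kind of expression developed above.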
259
Influence of stream corridor geomorphology on large wood jams and associated fish assemblages in mixed deciduous-conifer forest in Upper Michigan
Morris, Arthur E. L. 24 August 2005 (has links)
No description available.
260
High energy resummation and electroweak corrections in dijet production at hadronic colliders
Medley, Jack James. January 2016 (has links)
Coloured final states are ubiquitous at hadron colliders such as the Large Hadron Collider (LHC). Understanding high energy perturbative quantum chromodynamics (QCD) at these experiments is therefore essential, not only as a test of the Standard Model but also because these processes form the dominant background to many searches for new physics. One such `standard candle' is the production of a dilepton pair in association with dijets. Here we present a new description of this final state (through the production of a Z⁰ boson and γ*). This calculation adds to the fixed-order accuracy the dominant logarithms in the limit of large partonic centre-of-mass energy, to all orders in the strong coupling αs. This is achieved within the framework of High Energy Jets. The calculation is made possible by extending the high energy treatment to take into account the multiple t-channel exchanges arising from Z⁰ and γ* emissions off several quark lines. The correct description of the interference effects among the various t-channel exchanges requires an extension of the subtraction terms in the all-order calculation. We describe this construction and compare the resulting predictions to a number of recent analyses of LHC data. The description of a wide range of observables is good and, as expected, stands out from other approaches, in particular in the regions of large dijet invariant mass and large dijet rapidity span.

In addition, we present the application of the High Energy Jets framework to two new experimental scenarios. First, we compare High Energy Jets matched to the ARIADNE parton shower against an ATLAS study of gap activity in dijet events. Our description agrees well with the data throughout, and in many distributions it gives the best theoretical description, showing that the extra logarithmic corrections were already essential for describing data in LHC Run I. Second, we present a study of Z⁰/γ* plus dijets at 100 TeV. We compare the behaviour of the high energy logarithmic enhancements to the QCD perturbative series at 7 TeV and 100 TeV and find that at any high energy hadronic Future Circular Collider (FCC) the effects described by our resummation become significantly more important.
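For reference, the two kinematic quantities singled out above follow directly from the jet four-momenta. The sketch below (with an invented back-to-back configuration as input) computes the dijet invariant mass and the rapidity span, the variable in which the high energy logarithms grow.

```python
import math

def rapidity(p):
    """Rapidity y = 0.5 * ln((E + pz) / (E - pz)) of a four-momentum (E, px, py, pz)."""
    return 0.5 * math.log((p[0] + p[3]) / (p[0] - p[3]))

def dijet_observables(jet1, jet2):
    """Dijet invariant mass (GeV) and rapidity span |y1 - y2| of two jets."""
    E, px, py, pz = (a + b for a, b in zip(jet1, jet2))
    m_jj = math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))
    return m_jj, abs(rapidity(jet1) - rapidity(jet2))

def massless_jet(pt, y, phi):
    """Four-momentum of a massless jet from transverse momentum, rapidity, azimuth."""
    return (pt * math.cosh(y), pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(y))

# Two 100 GeV jets, back to back in azimuth, with a rapidity span of 4 units:
# m_jj ~ 752 GeV, delta_y = 4.
print(dijet_observables(massless_jet(100.0, 2.0, 0.0), massless_jet(100.0, -2.0, math.pi)))
```

For massless jets the dijet mass satisfies m_jj² = 2 p_{T,1} p_{T,2} (cosh Δy − cos Δφ), so at large Δy it grows like p_{T,1} p_{T,2} e^{Δy}; this is why the rapidity span, rather than the invariant mass alone, controls the size of the resummed logarithms.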