About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
21

Design of safe control laws for the locomotion of biped robots / Conception de lois de commandes sûres pour la locomotion des robots bipèdes

Bohorquez Dorante, Nestor 14 December 2018 (has links)
A biped robot must be able to walk safely in a crowd. This involves two aspects: balance and collision avoidance. Maintaining balance means avoiding kinematic and dynamic failures of the robot's unstable walking dynamics; collision avoidance means avoiding contact between the robot and people. We want to satisfy both constraints simultaneously, not only now but also in the future. We can ensure balance indefinitely by entering a cyclic walking motion or by making the robot stop after a few steps. No comparable guarantee is possible for collision avoidance, for several reasons: the impossibility of knowing with certainty where people are heading, the kinematic and dynamical limitations of the robot, adversarial crowd motion, etc. We address this limitation with a standard crowd-navigation strategy known as passive safety, which allows us to formulate a unified Model Predictive Control approach for balance and collision avoidance in which the robot is required to stop safely in finite time. In addition, we define a novel safe navigation strategy, based on the premise of avoiding collisions for as long as possible, that minimizes their occurrence and severity, and we propose a lexicographic formulation that produces motions complying with this premise.
We increase the degrees of freedom of biped locomotion by allowing the duration and orientation of the steps to vary online. This introduces nonlinearities in the constraints of the optimization problems we solve. We approximate these nonlinear constraints with safe linear constraints, such that satisfying the latter implies satisfying the former, and we propose a novel method (Safe Sequential Quadratic Programming) that ensures feasible Newton iterates in the solution of nonlinear problems based on this redefinition of constraints. We run a series of simulations of a biped robot walking in a crowd to evaluate the performance of the proposed controllers. Our navigation strategy yields a statistically significant reduction in the number and severity of collisions compared with passive safety, especially when the motion of people is uncertain. We show typical robot behaviors that arise when step duration and orientation can vary online, and how this further improves collision avoidance. Finally, we report the computational cost of the proposed numerical method compared with a standard one: a single Newton iteration suffices to reach a feasible solution, but the CPU time depends on the number of active-set factorizations needed to reach the optimal active set.
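The passive-safety premise in this abstract — only execute a motion after which the robot can come to rest before any predicted collision — can be sketched in a toy 1-D setting. Everything below (the point-mass braking model, the 0.3 m clearance, the pedestrian predictor) is a hypothetical simplification for illustration, not the thesis's actual MPC formulation.

```python
# Toy illustration of "passive safety": a plan is accepted only if the robot,
# braking at full deceleration from its current state, reaches zero velocity
# without coming too close to any predicted pedestrian position.

def braking_plan(x, v, a_max, dt):
    """Predicted positions of a 1-D point-mass robot decelerating to a stop."""
    plan = []
    while v > 0.0:
        v = max(0.0, v - a_max * dt)   # constant maximum deceleration
        x = x + v * dt
        plan.append(x)
    return plan

def passively_safe(x, v, a_max, dt, pedestrian_pred):
    """True if no predicted pedestrian comes within the clearance margin
    before the robot is at rest. `pedestrian_pred(t)` returns the predicted
    pedestrian positions at time t (a hypothetical prediction model)."""
    margin = 0.3  # hypothetical clearance in metres
    for k, xr in enumerate(braking_plan(x, v, a_max, dt)):
        for xp in pedestrian_pred(k * dt):
            if abs(xr - xp) < margin:
                return False
    return True
```

In the thesis this check is not applied after the fact: the stop-in-finite-time requirement is embedded as constraints of the MPC problem itself, so every computed plan is passively safe by construction.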
22

Desenvolvimento de algoritmo de controle de tração para regeneração de energia metroviária - ACTREM: melhoria da eficiência energética de sistemas de tração metroviária. / Development of a traction control algorithm for subway energy regeneration (ACTREM): improving the energy efficiency of subway traction systems.

Carlos Alberto de Sousa 01 September 2015 (has links)
This thesis proposes a subway energy regeneration model based on the control of train stops and departures along a trip, using the energy recovered by regenerative braking in the traction system. The goal is to optimize energy consumption and improve efficiency from the perspective of sustainable management. Applying a Genetic Algorithm (GA) to find the best train traffic configuration, the research develops and tests the Traction Control Algorithm for Subway Energy Regeneration (ACTREM), implemented in C++. To analyze the performance of the ACTREM control algorithm in improving energy efficiency, fifteen simulations were carried out applying ACTREM to Line 4 (Yellow) of the São Paulo metro. These simulations showed that ACTREM can automatically generate timetable diagrams optimized for energy savings in metro systems while respecting the system's operational constraints, such as the maximum capacity of each train, total waiting time, total travel time, and headway between trains. The results show that the proposed algorithm can save 9.5% of the energy without significant impact on the system's passenger transport capacity, and they suggest directions for further study.
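The genetic-algorithm search that ACTREM builds on can be sketched generically. The real-valued encoding, truncation selection, one-point crossover, and Gaussian mutation below are generic GA placeholders; the thesis's actual chromosome encodes timetable parameters, and its fitness trades energy savings against the operational constraints listed above.

```python
# Generic elitist GA loop: keep the top half of the population, refill the
# rest with crossover + mutation of those parents. Fitness is maximized.
import random

def genetic_search(fitness, n_genes, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_genes)               # point mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0.0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Because the parents survive each generation unchanged, the best fitness found is monotonically non-decreasing, which is the usual justification for elitist schemes in timetable optimization.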
23

Systematic process development by simultaneous modeling and optimization of simulated moving bed chromatography

Bentley, Jason A. 10 January 2013 (has links)
Adsorption separation processes are extremely important to the chemical industry, especially in the manufacturing of food, pharmaceutical, and fine chemical products. This work addresses three main topics: first, systematic decision-making between rival gas-phase adsorption processes for the same separation problem; second, process development for liquid-phase simulated moving bed chromatography (SMB); third, accelerated startup for SMB units. All of the work in this thesis uses model-based optimization to answer complicated questions about process selection, process development, and control of transient operation. It is shown in this thesis that there is a trade-off between productivity and product recovery in the gaseous separation of enantiomers using SMB and pressure swing adsorption (PSA). These processes are considered rivals for the same separation problem, and each is found to have a particular advantage that may be exploited depending on the production goals and economics. The processes are compared on a fair basis of equal capital investment, and the same multi-objective optimization problem is solved with equal constraints on the operating parameters. Secondly, this thesis demonstrates by experiment a systematic algorithm for SMB process development that uses dynamic optimization, transient experimental data, and parameter estimation to arrive at optimal operating conditions for a new separation problem in a matter of hours. By comparison, conventional process development for SMB relies on careful system characterization using single-column experiments and manual tuning of operating parameters, which may take days or weeks. The optimal operating conditions found by this new method ensure that both the high-purity constraints and optimal productivity are satisfied. The proposed algorithm proceeds until the SMB process is optimized, without manual tuning.
In some case studies, it is shown with both linear and nonlinear isotherm systems that the optimal performance can be reached in only two changes of operating conditions following the proposed algorithm. Finally, it is shown experimentally that the startup time for a real SMB unit is significantly reduced by solving model-based startup optimization problems using the SMB model developed from the proposed algorithm. The startup acceleration with purity constraints is shown to be successful at reducing the startup time by about 44%, and it is confirmed that the product purities are maintained during the operation. Significant cost savings in terms of decreased processing time and increased average product concentration can be attained using a relatively simple startup acceleration strategy.
24

Objective Quality Assessment and Optimization for High Dynamic Range Image Tone Mapping

Ma, Kede 03 June 2014 (has links)
Tone mapping operators aim to compress high dynamic range (HDR) images to low dynamic range ones so as to visualize HDR images on standard displays. Most existing works were demonstrated on specific examples without being thoroughly tested on well-established and subject-validated image quality assessment models. A recent tone mapped image quality index (TMQI) made the first attempt at objective quality assessment of tone mapped images. TMQI consists of two fundamental building blocks: structural fidelity and statistical naturalness. In this thesis, we propose an enhanced tone mapped image quality index (eTMQI) by 1) constructing an improved nonlinear mapping function to better account for the local contrast visibility of HDR images and 2) developing an image-dependent statistical naturalness model to quantify the unnaturalness of tone mapped images based on a subjective study. Experiments show that the modified structural fidelity and statistical naturalness terms in eTMQI better correlate with subjective quality evaluations. Furthermore, we propose an iterative optimization algorithm for tone mapping. The advantages of this algorithm are twofold: 1) eTMQI and TMQI can be compared in a more straightforward way; 2) better-quality tone mapped images can be automatically generated by using eTMQI as the optimization goal. Numerical and subjective experiments demonstrate that eTMQI is a superior objective quality assessment metric for tone mapped images and consistently outperforms TMQI.
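A toy 1-D analogue of the structural-fidelity building block mentioned above: compare zero-mean, unit-variance local patches of an HDR signal and its tone-mapped version, so that a monotone tone curve that preserves local structure scores near 1. This is an illustrative simplification; the published TMQI/eTMQI fidelity term builds on SSIM-style local statistics with contrast-visibility weighting, not this bare patch correlation.

```python
# Mean correlation of normalized local patches: 1.0 means local structure is
# fully preserved by the tone mapping, -1.0 means it is inverted.
import math

def _normalize(patch):
    m = sum(patch) / len(patch)
    sd = math.sqrt(sum((x - m) ** 2 for x in patch) / len(patch)) or 1e-12
    return [(x - m) / sd for x in patch]

def structural_fidelity(hdr, ldr, win=4):
    """Toy structural fidelity between an HDR signal and its tone-mapped
    version, computed over sliding windows of length `win`."""
    scores = []
    for i in range(len(hdr) - win + 1):
        a = _normalize(hdr[i : i + win])
        b = _normalize(ldr[i : i + win])
        scores.append(sum(x * y for x, y in zip(a, b)) / win)
    return sum(scores) / len(scores)
```

The iterative optimization idea in the abstract then amounts to adjusting the tone-mapped image to increase such a quality score (plus a naturalness term) rather than hand-tuning the operator.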
25

Numerical optimization for mixed logit models and an application

Dogan, Deniz 08 January 2008 (has links)
In this thesis an algorithm (MLOPT) for mixed logit models is proposed. Mixed logit models are flexible discrete choice models, but their estimation with large datasets involves the solution of a nonlinear optimization problem with a high dimensional integral in the objective function, which is the log-likelihood function. This complex structure is a general problem that occurs in statistics and optimization. MLOPT uses sampling from the dataset of individuals to generate a data sample. In addition to this, Monte Carlo samples are used to generate an integration sample to estimate the choice probabilities. MLOPT estimates the log-likelihood function values for each individual in the dataset by controlling and adaptively changing the data sample and the size of the integration sample at each iteration. Furthermore, MLOPT incorporates statistical testing for the quality of the solution obtained within the optimization problem. MLOPT is tested with a benchmark study from the literature (AMLET) and further applied to real-life applications in the automotive industry by predicting market shares in the Low Segment of the new car market. The automotive industry is particularly interesting in that understanding the behavior of buyers and how rebates affect their preferences is very important for revenue management. Real transaction data is used to generate and test the mixed logit models developed in this study. Another new aspect of this study is that the sales transactions are differentiated with respect to the transaction type of the purchases made. These mixed logit models are used to estimate demand and analyze market share changes under different what-if scenarios. An analysis and discussion of the results obtained are also presented.
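The computational core described above — a log-likelihood whose choice probabilities are high-dimensional integrals estimated by Monte Carlo — can be sketched for a one-coefficient mixed logit. The normal taste distribution and linear utilities are standard textbook assumptions for illustration; MLOPT's contribution (adaptive data and integration sample sizes with statistical testing) is not reproduced here.

```python
# Simulated log-likelihood for a 1-coefficient mixed logit: the logit choice
# probability is averaged over Monte Carlo draws of the random coefficient.
import math
import random

def simulated_loglik(choices, attrs, beta_mean, beta_sd, n_draws=200, seed=1):
    """choices[i]: index of the alternative chosen by individual i;
    attrs[i][j]: attribute value of alternative j faced by individual i."""
    rng = random.Random(seed)
    ll = 0.0
    for chosen, x in zip(choices, attrs):
        p = 0.0
        for _ in range(n_draws):                      # integration sample
            beta = rng.gauss(beta_mean, beta_sd)      # random taste draw
            utils = [beta * xj for xj in x]
            mx = max(utils)
            exps = [math.exp(u - mx) for u in utils]  # overflow-safe softmax
            p += exps[chosen] / sum(exps)
        ll += math.log(p / n_draws)                   # simulated probability
    return ll
```

Estimation then maximizes this simulated log-likelihood over (beta_mean, beta_sd); the cost of the nested Monte Carlo loop is exactly what motivates controlling the data and integration sample sizes adaptively.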
26

Simulation et optimisation de forme d'hydroliennes à flux transverse / Simulation and shape optimization of vertical axis hydrokinetic turbines

Guillaud, Nathanaël 29 March 2017 (has links)
Within the framework of renewable electricity production, this thesis aims to contribute to improving the hydrodynamic performance of the cross-flow (vertical-axis) hydrokinetic turbines designed by HydroQuest. Two main lines of study are pursued. The first seeks to improve, by numerical means, the understanding of the turbine's efficiency and of the flow through it; the influence of the tip-speed ratio and of the turbine solidity are investigated. Because the flows involved are complex, a 3D Large Eddy Simulation approach is used to capture them as faithfully as possible, and the dynamic stall phenomenon, which occurs at certain operating regimes of the turbine, is the subject of a dedicated study on an oscillating-blade configuration. The second line of study focuses on the turbine's channeling devices, which undergo a numerical shape optimization procedure. To perform the many simulations required in a realistic time, cheaper 2D Unsteady Reynolds-Averaged Navier-Stokes methods are used, which provide sufficient accuracy for this type of study.
27

Imagerie sismique appliquée à la caractérisation géométrique des fondations de pylônes électriques très haute tension / High resolution seismic imaging applied to the geometrical characterization of very high voltage electric pylons

Roques, Aurélien 15 October 2012 (has links)
Near-surface imaging is essential in geotechnics, since characterizing and identifying the first few metres of the ground is required in many land-development applications. Classical seismic imaging methods are valued because they are simple to deploy and to interpret; the tools used in civil engineering were generally first developed for petroleum prospecting. The problem we address concerns the French electricity transmission operator RTE (Réseau de Transport d'Électricité): identifying the geometry of the foundations of very-high-voltage electric pylons using seismic imaging methods that have proven themselves in reservoir geophysics. In particular, we assess the performance of full waveform inversion (FWI) and reverse-time migration. We present the principle of these methods and implement them in a tool based on modeling wave propagation in a 2D elastic medium; in this setting the computing time of the inversion is reasonable today, which is far from the case for a 3D elastic medium. We then present imaging results on synthetic and real data. On 2D synthetic data, inversion identifies the dimensions of the foundation provided the velocity ratio between the foundation and the surrounding ground does not exceed 3, while migration satisfactorily images much higher contrasts. On real data, our tests do not identify the geometry of the foundation with these methods; by inverting 3D synthetic data with our 2D tool, we show that the 3D character of the data is a major obstacle to applying it to real data carrying a strong 3D signature of the structure to be imaged.
28

Imagerie 2-D/3-D de la teneur en eau en milieu hétérogène par méthode RMP : biais et incertitudes / SNMR 2-D/3-D water content imagery in heterogeneous media : bias and uncertainties

Chevalier, Antoine 02 July 2014 (has links)
Non-destructive characterization of the spatial and temporal variability of water content in heterogeneous near-surface media is a major challenge for understanding the hydrogeological functioning of critical zones. A promising candidate is Surface Nuclear Magnetic Resonance (SNMR), a recent geophysical technique that images the water content of the subsurface directly, whereas most geophysical methods only derive it from intermediate physical parameters. Its operational 2-D and 3-D variants are only emerging and require an in-depth study of their possibilities and limitations. The integrative character of the SNMR measurement, combined with the experimental noise that contaminates it, limits the resolution of the method: translating a set of measurements into a spatial distribution of water content does not have a unique solution. Some solutions are more desirable than others, so geometrical priors are used to restrict the space of possible solutions. The resulting water-content estimate is therefore biased (by the priors) and affected by uncertainties (by the noise), neither of which had previously been quantified in more than one dimension.
To expose the processes that influence the spatial resolution and the estimated water volume, the properties of the forward SNMR imaging problem are analyzed with unbiased tools such as correlations and linear regressions. Since the SNMR inverse problem is nonlinear, this thesis proposes a methodology for reconstructing water-content images that gives access to uncertainty analyses, built on a Monte Carlo inverse sampling algorithm (Metropolis-Hastings). An intensive study of the bias introduced by the prior on the reconstruction of different 3-D volumes is carried out. Finally, the possibilities of SNMR imaging are illustrated on two real cases in heterogeneous karstic and thermo-karstic media, validated by comparison with other sources of information. The first case is a 2-D image of the Poumeyssens karst conduit, whose geometry is precisely mapped. The second is a 3-D image of a subglacial water pocket in the Tête Rousse glacier (French Alps), which has been explored by numerous boreholes and other geophysical methods.
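A minimal random-walk Metropolis-Hastings sampler of the kind named in this abstract, shown on a 1-D Gaussian target as a stand-in for the (much larger) posterior over water-content images; the step size and target are illustrative assumptions, not the thesis's parameterization.

```python
# Random-walk Metropolis-Hastings: propose a Gaussian perturbation of the
# current state and accept it with probability min(1, posterior ratio).
# Working with log-posteriors keeps the ratio numerically safe.
import math
import random

def metropolis_hastings(log_post, x0, n_samples, step=0.5, seed=0):
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)            # symmetric proposal
        lp_cand = log_post(cand)
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand                  # accept
        samples.append(x)                          # else keep current state
    return samples
```

The retained samples approximate the posterior, so water-content estimates and their uncertainties (means, quantiles) are read off the sample population rather than from a single inverted image.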
29

Optimization in an Error Backpropagation Neural Network Environment with a Performance Test on a Pattern Classification Problem

Fischer, Manfred M., Staufer-Steinnocher, Petra 03 1900 (has links) (PDF)
Various techniques for optimizing the multiple-class cross-entropy error function to train single-hidden-layer neural network classifiers with softmax output transfer functions are investigated on a real-world multispectral pixel-by-pixel classification problem that is of fundamental importance in remote sensing. These techniques include epoch-based and batch versions of error backpropagation with gradient descent, PR-conjugate gradient, and BFGS quasi-Newton methods. The method of choice depends upon the nature of the learning task and whether one wants to optimize learning for speed or for generalization performance. It was found that, comparatively considered, gradient descent error backpropagation provided the best and most stable out-of-sample performance results across batch and epoch-based modes of operation. If the goal is to maximize learning speed and a sacrifice in generalization is acceptable, then PR-conjugate gradient error backpropagation tends to be superior. If the training set is very large, stochastic epoch-based versions of the local optimizers should be chosen, utilizing a larger rather than a smaller epoch size to avoid unacceptable instabilities in the generalization results. (authors' abstract) / Series: Discussion Papers of the Institute for Economic Geography and GIScience
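All of the optimizers compared above consume the same output-layer error signal: for softmax outputs trained under multiple-class cross-entropy, the gradient with respect to the logits reduces to softmax(z) minus the one-hot target. A minimal sketch of that identity, with a finite-difference check (the logits and target below are arbitrary illustration values):

```python
# Softmax + cross-entropy and the analytic gradient w.r.t. the logits.
import math

def softmax(z):
    mx = max(z)                        # subtract max for numerical stability
    e = [math.exp(v - mx) for v in z]
    s = sum(e)
    return [v / s for v in e]

def cross_entropy(z, target):
    """Multiple-class cross-entropy loss for logits z and class index target."""
    return -math.log(softmax(z)[target])

def output_error(z, target):
    """Analytic gradient of cross_entropy w.r.t. z: softmax(z) - one_hot."""
    p = softmax(z)
    return [pj - (1.0 if j == target else 0.0) for j, pj in enumerate(p)]
```

Gradient descent, PR-conjugate gradient, and BFGS then differ only in how they turn this gradient (backpropagated through the hidden layer) into a weight update.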
30

Engineering of Temperature Profiles for Location-Specific Control of Material Micro-Structure in Laser Powder Bed Fusion Additive Manufacturing

Lewandowski, George 15 June 2020 (has links)
No description available.
