  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Imagerie sismique appliquée à la caractérisation géométrique des fondations de pylônes électriques très haute tension / High resolution seismic imaging applied to the geometrical characterization of very high voltage electric pylons

Roques, Aurélien 15 October 2012 (has links)
Near-surface imaging is essential in geotechnics: characterizing and identifying the first few metres of the ground (roughly 0 to 10 m) is required in many land-development applications. Classical seismic imaging methods are valued because they are simple to deploy and to interpret; the tools used in civil engineering were generally first developed for petroleum exploration. The problem addressed in this work, posed by the French transmission system operator RTE (Réseau de Transport d'Électricité), is to identify the geometry of the foundations of very high voltage electric pylons using seismic imaging methods proven in reservoir geophysics. In particular, we assess the performance of full waveform inversion (FWI) and reverse-time migration. We present the principles of these methods and implement them in a tool based on 2D elastic wave-propagation modelling; in this setting the inversion runs in reasonable time, which is far from the case for a 3D elastic medium. We then present imaging results on synthetic and real data. On 2D synthetic data, inversion identifies the dimensions of the foundation provided the velocity ratio between the foundation and the surrounding ground does not exceed 3; migration satisfactorily images much higher contrasts. On real data, our tests did not identify the foundation geometry with these methods; by inverting 3D synthetic data with our 2D tool, we show that the 3D character of the data is a major obstacle to applying the tool to real data carrying a strong 3D signature of the structure to be imaged.
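The inversion idea this abstract relies on can be sketched in one dimension: treat the recorded data as the convolution of a short source pulse with an unknown reflectivity, and fit the reflectivity by gradient descent on the least-squares waveform misfit. This is a heavily simplified stand-in for elastic 2D FWI; the pulse, model size, and reflector positions below are invented for illustration.

```python
import numpy as np

# Toy 1D "full-waveform" inversion: data = convolution of a short source
# pulse with an unknown reflectivity; recover the reflectivity by
# gradient descent on the least-squares misfit. All numbers are assumed.
w = np.array([0.3, 1.0, 0.3])                # assumed source pulse
n = 60
true_r = np.zeros(n)
true_r[[15, 35, 50]] = [1.0, -0.7, 0.4]      # "foundation" reflectors

# Convolution matrix W so that data = W @ r
W = np.zeros((n + len(w) - 1, n))
for j in range(n):
    W[j:j + len(w), j] = w

d_obs = W @ true_r                           # noise-free synthetic data

r = np.zeros(n)                              # starting model
step = 1.0 / np.linalg.norm(W, 2) ** 2       # safe gradient-descent step
for _ in range(2000):
    r -= step * (W.T @ (W @ r - d_obs))      # gradient of 0.5*||Wr - d||^2

print(np.round(r[[15, 35, 50]], 2))          # recovers ~[1.0, -0.7, 0.4]
```

In the thesis the forward model is 2D elastic wave propagation rather than a convolution, which is precisely what makes the real inversion expensive and its 3D extension impractical.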
32

Imagerie 2-D/3-D de la teneur en eau en milieu hétérogène par méthode RMP : biais et incertitudes / SNMR 2-D/3-D water content imagery in heterogeneous media : bias and uncertainties

Chevalier, Antoine 02 July 2014 (has links)
Non-destructive observation of the spatial and temporal variability of ground water content in heterogeneous near-surface media is a major issue for understanding the hydrogeological functioning of critical zones. While many geophysical methods derive water content from intermediate physical parameters, the surface nuclear magnetic resonance (SNMR) method provides a direct estimate of subsurface water content. Its operational 2-D and 3-D tomographic applications are only emerging and require in-depth analysis to understand their possibilities and limitations. The resolution of the method is limited because each measurement integrates over a large volume and is contaminated by electromagnetic noise. Consequently, translating a set of measurements into a spatial distribution of water content admits many solutions; some are more desirable than others from a structural standpoint, so geometrical prior knowledge is used to restrict the solution space. The resulting water content estimate is therefore biased (by the prior) and affected by uncertainties (from the noise), aspects never before quantified in more than one dimension for SNMR data sets. To expose the processes that control spatial resolution and the estimated water volume, the properties of the forward SNMR imaging problem are analysed with unbiased tools such as correlations. Since the inverse SNMR imaging problem is non-linear, this thesis proposes a water content reconstruction methodology that gives access to uncertainty analysis, built on a Monte Carlo inverse sampling algorithm (Metropolis-Hastings). An intensive study of the bias introduced by the prior in the reconstruction of various 3-D volumes is carried out. Finally, the imaging possibilities of SNMR are illustrated on two real heterogeneous cases, karstic and thermo-karstic, validated by comparison with other sources of information. The first case is a 2-D image of the Poumeyssens karst conduit, whose geometry is precisely mapped. The second is a 3-D image of a subglacial water pocket inside the Tête-Rousse glacier in the French Alps, explored by numerous boreholes and other geophysical methods.
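The Metropolis-Hastings machinery the thesis builds on can be illustrated with a deliberately tiny stand-in problem: a single water-content parameter observed through noisy repeated measurements, with a flat prior on [0, 1]. The forward model and all numbers below are assumptions; the real SNMR problem involves an integral kernel and many unknowns.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: one water-content parameter observed through noisy repeats.
# (The real SNMR forward model is an integral operator; this is an assumption.)
true_theta, sigma = 0.30, 0.05
data = true_theta + sigma * rng.standard_normal(50)

def log_posterior(theta):
    if not 0.0 <= theta <= 1.0:              # flat prior on [0, 1]
        return -np.inf
    return -0.5 * np.sum((data - theta) ** 2) / sigma ** 2

# Random-walk Metropolis-Hastings
theta = 0.5
lp = log_posterior(theta)
samples = []
for _ in range(20000):
    prop = theta + 0.02 * rng.standard_normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:  # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5000:])              # discard burn-in
print(f"mean = {post.mean():.3f}, std = {post.std():.4f}")  # mean near 0.3
```

The posterior standard deviation is exactly the kind of uncertainty estimate the thesis extracts, here in one dimension instead of a full 2-D/3-D water content image.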
33

Optimization in an Error Backpropagation Neural Network Environment with a Performance Test on a Pattern Classification Problem

Fischer, Manfred M., Staufer-Steinnocher, Petra 03 1900 (has links) (PDF)
Various techniques for optimizing the multiple-class cross-entropy error function to train single hidden layer neural network classifiers with softmax output transfer functions are investigated on a real-world multispectral pixel-by-pixel classification problem that is of fundamental importance in remote sensing. These techniques include epoch-based and batch versions of error backpropagation using gradient descent, Polak-Ribière (PR) conjugate gradient, and BFGS quasi-Newton updates. The method of choice depends upon the nature of the learning task and whether one wants to optimize learning for speed or for generalization performance. It was found that, comparatively considered, gradient descent error backpropagation provided the best and most stable out-of-sample performance results across batch and epoch-based modes of operation. If the goal is to maximize learning speed and a sacrifice in generalization is acceptable, then PR-conjugate gradient error backpropagation tends to be superior. If the training set is very large, stochastic epoch-based versions of local optimizers should be chosen, utilizing a larger rather than a smaller epoch size to avoid unacceptable instabilities in the generalization results. (authors' abstract) / Series: Discussion Papers of the Institute for Economic Geography and GIScience
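The training setup the abstract describes — one hidden layer, softmax outputs, multiple-class cross-entropy, batch gradient descent — can be sketched as follows. The synthetic 4-band, 3-class "pixel" data and all hyperparameters are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "multispectral pixel" data: 4 bands, 3 classes (all assumed)
X = rng.standard_normal((300, 4))
y = rng.integers(0, 3, 300)
X[np.arange(300), y] += 2.0                  # class-dependent band shift
T = np.eye(3)[y]                             # one-hot targets

W1 = 0.1 * rng.standard_normal((4, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal((8, 3)); b2 = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lr = 1.0
for _ in range(1000):                        # batch gradient descent
    H = np.tanh(X @ W1 + b1)
    P = softmax(H @ W2 + b2)
    dZ2 = (P - T) / len(X)                   # softmax + cross-entropy delta
    dZ1 = (dZ2 @ W2.T) * (1.0 - H ** 2)      # backpropagate through tanh
    W2 -= lr * H.T @ dZ2; b2 -= lr * dZ2.sum(0)
    W1 -= lr * X.T @ dZ1; b1 -= lr * dZ1.sum(0)

P = softmax(np.tanh(X @ W1 + b1) @ W2 + b2)
acc = float((P.argmax(1) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

The combination of softmax outputs with cross-entropy is what yields the simple output delta `P - T`; swapping the inner update for conjugate gradient or BFGS steps is exactly the comparison the paper performs.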
34

Engineering of Temperature Profiles for Location-Specific Control of Material Micro-Structure in Laser Powder Bed Fusion Additive Manufacturing

Lewandowski, George 15 June 2020 (has links)
No description available.
35

Optimization of a Floor Grinding Machine for Uniform Grinding Pattern

Srikantha Dath, Adithya January 2023 (has links)
Husqvarna Construction is one of the leading construction machinery manufacturers in the world. To stay at the forefront, investing in novel methods to model, test, and optimize machinery is crucial. The most important part of development and testing is to bridge the gap between desired and actual results, and model-based simulation plays a key role in testing by visualizing possibilities while cutting down resource usage. Floor grinders are common in industrial and commercial settings for achieving desired floor results, and like any machinery they must be optimized toward better results. The purpose of this thesis is to develop a methodology to optimize Husqvarna Construction's floor-grinding machine through its grinding pattern, and further to study and gather data about the key indicators of an optimum grinding pattern. This is done by setting up a grinding pattern simulation of the PG 690 floor grinder in SIMGRIND (Husqvarna Construction's own simulation application). A metric was developed to determine whether a grinding pattern is good, and by using this metric as an optimization goal, the impact of different machine parameters on the grinding pattern was established. The grinding and travel speeds were treated as ratios, and it was observed that optimized patterns were attained at particular ratios. Another crucial factor studied was the impact of oscillations, and the impact of grinding head size on the grinding pattern was also examined. The investigation was limited to a simulation study, since physical validation opened up several uncertainties beyond the scope of this work. The work closes with recommendations for developing physical validation setups to test the simulation results.
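One way to make "a metric for whether a grinding pattern is good" concrete is to accumulate how many times a circular head covers each floor cell along a path and score the uniformity of those counts. The thesis defines its own metric inside SIMGRIND, so the coefficient-of-variation score, head radius, and raster path below are only assumed illustrations.

```python
import numpy as np

floor = np.zeros((100, 100))                 # floor as a grid of cells
yy, xx = np.mgrid[0:100, 0:100]
radius = 12                                  # assumed head radius (cells)

def grind_pass(cx, cy):
    """Increment the pass count of every cell under the circular head."""
    floor[(xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2] += 1

# Raster path over the floor; row spacing controls the overlap between rows
for cy in range(10, 90, 6):
    for cx in range(10, 90, 2):
        grind_pass(cx, cy)

worked = floor[10:90, 10:90]                 # ignore unreached borders
cv = float(worked.std() / worked.mean())     # low CV = more uniform pattern
print(f"pattern non-uniformity (CV): {cv:.3f}")
```

Treating the CV as the objective and the path parameters (row spacing, speed ratios, oscillation) as decision variables mirrors the optimization loop the thesis runs in simulation.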
36

Parallel paradigms in optimal structural design

Van Huyssteen, Salomon Stephanus 12 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2011. / Modern-day processors are not getting any faster: because frequency scaling has hit its power-consumption limit, parallel processing is increasingly used to decrease computation time. In this thesis, several parallel paradigms are used to improve the performance of commonly serial SAO (sequential approximate optimization) programs. Four novelties are discussed. First, double precision solvers are replaced with single precision solvers, in an attempt to exploit the anticipated factor-2 speed advantage of single precision computations over double precision. However, single precision routines show unpredictable performance characteristics and struggle to converge to the required accuracies, which is unfavourable for optimization solvers. Second, QP and dual statements are pitted against one another in a parallel environment: because it is not always easy to see a priori which will perform best, both are started in parallel and the competing threads are cancelled as soon as one returns a valid point. Parallel QP vs. dual statements prove very attractive, converging within the minimum number of outer iterations, with the most appropriate solver selected as the problem properties change during the iteration steps. Thread cancellation poses problems, however: threads must wait to arrive at appropriate checkpoints, so the winning routine can suffer unnecessarily long wait times because of struggling competitors. Third, multiple global searches are started in parallel on a shared-memory system, yielding a speedup of nearly 4x across all problems; dynamically scheduled threads remove the need for fixed thread counts, as required in message passing implementations. Lastly, replacing existing matrix-vector multiplication routines with optimized BLAS routines, especially BLAS routines targeting GPGPU technologies (general-purpose graphics processing units), proves superior when solving large matrix-vector products in an iterative environment. These problems scale well within the hardware capabilities, and speedups of up to 36x are recorded.
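The "start both, cancel the loser" scheme for the QP-vs-dual race can be sketched with cooperative cancellation, which also exposes the checkpoint-waiting issue the abstract mentions: a thread can only stop when it next polls the stop signal. The solver names and timings below are made up stand-ins for the real subproblem solvers.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

stop = threading.Event()

def solver(name, iters, delay):
    """Stand-in for a QP or dual subproblem solver (timings are invented)."""
    for _ in range(iters):
        if stop.is_set():          # cooperative cancellation checkpoint
            return None
        time.sleep(delay)          # pretend to do one solver iteration
    return name                    # "valid point" found

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = {pool.submit(solver, "QP", 500, 0.01),
               pool.submit(solver, "dual", 5, 0.01)}
    done, pending = wait(futures, return_when=FIRST_COMPLETED)
    stop.set()                     # ask the slower competitor to quit
    winner = next(iter(done)).result()

print("first valid result from:", winner)   # "dual" with these timings
```

Note that the loser only exits at its next checkpoint, so the overall step still waits on it briefly — the same wait-time penalty the thesis observes with struggling competing routines.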
37

Trajectory generation for autonomous unmanned aircraft using inverse dynamics

Drury, R. G. January 2010 (has links)
The problem addressed in this research is the in-flight generation of trajectories for autonomous unmanned aircraft, which requires a method of generating pseudo-optimal trajectories in near-real-time, on-board the aircraft, and without external intervention. The focus of this research is the enhancement of a particular inverse dynamics direct method that is a candidate solution to the problem. This research introduces the following contributions to the method. A quaternion-based inverse dynamics model is introduced that represents all orientations without singularities, permits smooth interpolation of orientations, and generates more accurate controls than the previous Euler-angle model. Algorithmic modifications are introduced that: overcome singularities arising from parameterization and discretization; combine analytic and finite difference expressions to improve the accuracy of controls and constraints; remove roll ill-conditioning when the normal load factor is near zero, and extend the method to handle negative-g orientations. It is also shown in this research that quadratic interpolation improves the accuracy and speed of constraint evaluation. The method is known to lead to a multimodal constrained nonlinear optimization problem. The performance of the method with four nonlinear programming algorithms was investigated: a differential evolution algorithm was found to be capable of over 99% successful convergence, to generate solutions with better optimality than the quasi-Newton and derivative-free algorithms against which it was tested, but to be up to an order of magnitude slower than those algorithms. The effects of the degree and form of polynomial airspeed parameterization on optimization performance were investigated, and results were obtained that quantify the achievable optimality as a function of the parameterization degree. 
Overall, it was found that the method is a potentially viable method of on-board near-real-time trajectory generation for unmanned aircraft but for this potential to be realized in practice further improvements in computational speed are desirable. Candidate optimization strategies are identified for future research.
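The differential evolution algorithm credited above with over 99% successful convergence can be sketched in its standard DE/rand/1/bin form on a stock test function. The Rosenbrock objective, population size, and F/CR settings are assumptions; the thesis tunes DE for its own trajectory-generation problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def rosenbrock(x):
    """Stock test function; global minimum 0 at the all-ones vector."""
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

dim, pop_size, F, CR = 5, 40, 0.7, 0.9       # assumed DE settings
pop = rng.uniform(-2.0, 2.0, (pop_size, dim))
cost = np.array([rosenbrock(p) for p in pop])

for _ in range(1000):                        # generations
    for i in range(pop_size):
        # DE/rand/1: combine three distinct individuals other than i
        idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)
        # Binomial crossover with at least one mutant component
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        trial_cost = rosenbrock(trial)
        if trial_cost <= cost[i]:            # greedy one-to-one selection
            pop[i], cost[i] = trial, trial_cost

print(f"best cost found: {cost.min():.4f}")  # should approach 0
```

The population-based, derivative-free character of DE is what buys its robustness on the multimodal constrained problem, at the cost of the many extra objective evaluations that make it up to an order of magnitude slower.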
38

Statistical and numerical optimization for speckle blind structured illumination microscopy / Optimisation numérique et statistique pour la microscopie à éclairement structuré non contrôlé

Liu, Penghuan 25 May 2018 (has links)
Conventional structured illumination microscopy (SIM) can surpass the diffraction-limited resolution of optical microscopy by illuminating the object with a set of perfectly known harmonic patterns. Controlling the illumination patterns is, however, a difficult task; worse, the sample itself can strongly distort the light grid within the investigated volume, producing strong artifacts in the reconstructed images. Recently, blind-SIM strategies were proposed in which images are acquired through unknown, non-harmonic speckle illumination patterns that are much easier to generate in practice. The super-resolution capacity of such approaches has been observed, although it was not well understood theoretically. This thesis presents two new reconstruction methods for SIM with unknown speckle patterns (blind-speckle-SIM): a joint reconstruction approach and a marginal reconstruction approach. In the joint approach, the object and the speckle patterns are estimated together through a basis pursuit denoising (BPDN) model with lp,q-norm regularization, with p >= 1 and 0 < q <= 1; the lp,q-norm is introduced to encode a sparsity assumption on the object. In the marginal approach, only the object is reconstructed, while the unknown speckle patterns are treated as nuisance parameters. The contribution is twofold. First, a theoretical analysis demonstrates that, using the second-order statistics of the data, blind-speckle-SIM yields a super-resolution factor of two, provided the support of the speckle spectral density equals the frequency support of the microscope point spread function. Second, the numerical computation of the solution is addressed: to reduce both the computational burden and the memory requirement of the marginal approach, a patch-based marginal estimator is proposed, whose key idea is to neglect correlations between pixels belonging to different patches. Simulation results and experiments with real data demonstrate the super-resolution capacity of the methods, which apply not only to 2D reconstruction of thin samples but also to 3D imaging of thicker objects.
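For the q = 1 corner of the lp,q-regularized BPDN model, the iterative soft-thresholding algorithm (ISTA) gives a compact sketch of sparse recovery. The random sensing matrix, sparsity level, and regularization weight below are assumptions standing in for the real SIM imaging operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse object observed through a random linear operator: y = A x + noise
n, m, k = 100, 60, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = np.array([3.0, -2.0, 4.0, -3.0, 2.0])
y = A @ x_true + 0.01 * rng.standard_normal(m)

# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1  (the q = 1 case of lp,q)
lam = 0.05
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - (A.T @ (A @ x - y)) / L          # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print(f"recovery error: {np.linalg.norm(x - x_true):.3f}")
```

The soft-threshold step is the proximal operator of the l1 norm; the thesis's joint estimator additionally updates the unknown speckle patterns, and its 0 < q < 1 penalties require non-convex variants of this proximal step.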
40

Μιμιδικοί και εξελικτικοί αλγόριθμοι στην αριθμητική βελτιστοποίηση και στη μη γραμμική δυναμική / Memetic and evolutionary algorithms in numerical optimization and nonlinear dynamics

Πεταλάς, Ιωάννης 18 September 2008 (has links)
The main objective of this thesis is the study of evolutionary algorithms. In the first part, memetic algorithms are introduced: hybrid schemes that combine evolutionary algorithms with local search methods. The memetic algorithms were compared with evolutionary algorithms on a variety of global optimization problems and achieved better results. In the second part, problems from nonlinear dynamics are studied: estimating the stability region of conservative maps, detecting resonances, and computing periodic orbits. The results were satisfactory.
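The memetic recipe — an evolutionary outer loop whose individuals are refined by local search before re-entering the population — can be sketched as follows. The sphere objective, population sizes, and Lamarckian (write-back) learning are illustrative assumptions, not the thesis's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective with global minimum 0 at the origin (an assumption)."""
    return float(np.sum(x ** 2))

def local_search(x, step=0.1, iters=20):
    """Simple stochastic hill climbing: the 'meme' refining each individual."""
    fx = sphere(x)
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.size)
        fc = sphere(cand)
        if fc < fx:
            x, fx = cand, fc
    return x

dim, mu = 5, 20
pop = rng.uniform(-5.0, 5.0, (mu, dim))

for _ in range(50):                          # evolutionary outer loop
    fit = np.array([sphere(p) for p in pop])
    parents = pop[np.argsort(fit)[: mu // 2]]             # truncation selection
    children = parents + 0.3 * rng.standard_normal(parents.shape)
    pop = np.vstack([parents, children])
    pop = np.array([local_search(p) for p in pop])        # Lamarckian refinement

best = min(sphere(p) for p in pop)
print(f"best value found: {best:.4f}")
```

Writing the locally improved point back into the population (Lamarckian learning) is one design choice; the alternative, Baldwinian learning, uses the improved fitness for selection but keeps the original genotype.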
