11

Conception d'algorithmes hybrides pour l'optimisation de l'énergie mémoire dans les systèmes embarqués et de fonctions multimodales / Design of hybrid algorithms for memory energy optimization in embedded systems and multimodal functions

Idrissi Aouad, Maha 04 July 2011 (has links)
Memory is considered to be greedy in energy consumption, a sensitive issue, especially in embedded systems. The global optimization of multimodal functions is also a difficult problem because of the large number of local optima of these functions. In this thesis I present several new hybrid and distributed algorithms to solve these two optimization problems. These algorithms are compared with the conventional methods used in the literature, and the results obtained are encouraging. Indeed, these results show a reduction in memory energy consumption of about 76% to more than 98% on our benchmarks on the one hand. On the other hand, in the case of the global optimization of multimodal functions, our hybrid algorithms converge more often to the global optimum. Distributed and cooperative versions of these new hybrid algorithms are also proposed; they are faster than their respective sequential versions.
12

Hybrid parallel algorithms for solving nonlinear Schrödinger equation / Hibridni paralelni algoritmi za rešavanje nelinearne Šredingerove jednačine

Lončar Vladimir 17 October 2017 (has links)
Numerical methods and algorithms for solving partial differential equations, especially parallel algorithms, are an important research topic, given their very broad range of applicability in all areas of science. Rapid advances in computer technology open up new possibilities for the development of faster algorithms and numerical simulations of higher resolution. This is achieved through parallelization at the different levels that practically all current computers support.

In this thesis we develop parallel algorithms for solving one kind of partial differential equation known as the nonlinear Schrödinger equation (NLSE) with a convolution integral kernel. Equations of this type arise in many fields of physics, such as nonlinear optics, plasma physics and the physics of ultracold atoms, as well as in economics and quantitative finance. We focus on a special type of NLSE, the dipolar Gross-Pitaevskii equation (GPE), which characterizes the behavior of ultracold atoms in the state of Bose-Einstein condensation.

We present novel parallel algorithms for numerically solving the GPE on a wide range of modern parallel computing platforms, from shared memory systems and dedicated hardware accelerators in the form of graphics processing units (GPUs) to heterogeneous computer clusters. For shared memory systems, we provide an algorithm and implementation targeting multi-core processors using OpenMP. We also extend the algorithm to GPUs using the CUDA toolkit, and combine the OpenMP and CUDA approaches into a hybrid, heterogeneous algorithm that is capable of utilizing all available resources on a single computer. Given the inherent memory limitation of a single computer, we develop a distributed memory algorithm based on the Message Passing Interface (MPI) and the previous shared memory approaches. To maximize the performance of the hybrid implementations, we optimize the parameters governing the distribution of data and workload using a genetic algorithm. Visualization of the increased volume of output data, enabled by the efficiency of the newly developed algorithms, represents a challenge in itself. To address this, we integrate the implementations with a state-of-the-art visualization tool (VisIt), and use it to study two use cases which demonstrate how the developed programs can be applied to simulate real-world systems.
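The thesis's actual solver handles the dipolar GPE with a convolution term across OpenMP, CUDA and MPI; as a minimal serial sketch of the underlying numerical idea, the following shows the standard second-order split-step Fourier method for the plain cubic NLSE (grid size, time step and coupling constant are illustrative values, not taken from the thesis):

```python
import numpy as np

def split_step_nlse(psi0, x, dt, steps, g=1.0):
    """Evolve i psi_t = -0.5 psi_xx + g |psi|^2 psi with the second-order
    split-step Fourier method (Strang splitting, periodic boundary)."""
    n = x.size
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    # kinetic propagator for a half time step, applied in k-space
    half_kinetic = np.exp(-0.25j * k**2 * dt)
    psi = psi0.astype(complex)
    for _ in range(steps):
        psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
        psi *= np.exp(-1j * g * np.abs(psi)**2 * dt)  # full nonlinear step
        psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
    return psi

x = np.linspace(-10, 10, 256, endpoint=False)
psi0 = np.exp(-x**2)
psi = split_step_nlse(psi0, x, dt=1e-3, steps=100)
# Both substeps are unitary, so the norm is conserved up to roundoff.
norm0 = np.sum(np.abs(psi0)**2)
norm1 = np.sum(np.abs(psi)**2)
print(norm0, norm1)
```

Each substep multiplies the wave function by a unit-modulus phase (in k-space for the kinetic part, in real space for the nonlinear part), which is why norm conservation is a natural sanity check for such solvers.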
13

Σχεδιασμός και ανάπτυξη αλγορίθμου συσταδοποίησης μεγάλης κλίμακας δεδομένων / Design and development of a clustering algorithm for large-scale data

Γούλας, Χαράλαμπος January 2015 (has links)
In the spectrum of the new, emerging information society, the convergence of computers and telecommunications has led to the continuously increasing production and storage of huge amounts of data in almost every field of human activity. If data are the recorded facts of human activity, then information consists of the rules that govern them. Society depends on and earnestly seeks new information; all that remains is its discovery. The field of computer science that deals with the systematic analysis of data in order to extract useful knowledge is called machine learning. In this light, this thesis discusses machine learning as a hope for scientists to elucidate the structures that govern data and to discover and understand the rules that "move" the natural world. First, a general description of machine learning as one of the main components of artificial intelligence is given, presenting a variety of problems to which machine learning can provide solutions, together with a brief historical overview of its progress and milestones. Next, a more detailed description of machine learning is presented, using extensive literature, diagrams and working examples of its major branches, such as supervised learning (decision trees, neural networks), unsupervised learning (data clustering), and more specialized forms such as semi-supervised learning and genetic algorithms. In addition, a new probabilistic data clustering algorithm is designed and implemented, which is essentially a hybrid of a hierarchical clustering algorithm and a partitioning algorithm. The algorithm was tested on a number of different datasets, achieving quite encouraging results compared to other well-known algorithms such as k-means and single-linkage. More specifically, the algorithm constructs data clusters that are in most cases more homogeneous than those of the above algorithms, while its most important advantage is that it needs no parameter k in order to operate. Finally, suggestions are made both for further improvement of the algorithm and for the development of new techniques and methods, in keeping with current market trends and oriented toward the demanding needs of the new, emerging information society.
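The thesis does not publish its algorithm here, but the general idea of a hierarchical/partitioning hybrid that needs no user-supplied k can be sketched as follows: a single-linkage agglomerative pass discovers the number of clusters via a merge-distance threshold, and k-means-style Lloyd iterations then refine the centers (the threshold and toy data are illustrative assumptions):

```python
import numpy as np

def hybrid_cluster(X, merge_dist, iters=10):
    """Toy hybrid clustering: single-linkage merging decides how many
    clusters there are (no k parameter), then Lloyd iterations refine."""
    D = np.linalg.norm(X[:, None] - X[None], axis=2)  # pairwise distances
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > 1:
        # find the closest pair of clusters under single linkage
        best = (0, 0, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = D[np.ix_(clusters[a], clusters[b])].min()
                if d < best[2]:
                    best = (a, b, d)
        a, b, d = best
        if d > merge_dist:          # nothing close enough left: stop
            break
        clusters[a].extend(clusters.pop(b))
    # partitioning phase: Lloyd refinement of the discovered centers
    centers = np.array([X[c].mean(axis=0) for c in clusters])
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([X[labels == k].mean(axis=0)
                            if np.any(labels == k) else centers[k]
                            for k in range(len(centers))])
    labels = np.argmin(
        np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
    return labels, centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (12, 2)), rng.normal(3, 0.1, (12, 2))])
labels, centers = hybrid_cluster(X, merge_dist=1.0)
print(len(centers))  # two well-separated blobs -> two clusters found
```

The agglomerative pass plays the role of model selection (choosing k implicitly), while the partitioning pass supplies the homogeneity that pure single-linkage often lacks.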
14

Optimalizace pro registraci obrazů založená na genetických algoritmech / Optimization based on genetic algorithms for image registration

Horáková, Pavla January 2012 (has links)
This diploma thesis focuses on global optimization methods and their use for medical image registration. The main aim is the creation of a genetic algorithm and the testing of its functionality on synthetic data. Besides test functions and test images, the algorithm was also applied to real medical images. For this purpose, a graphical user interface was created with a choice of parameters according to the current requirements. After adding an iterative gradient method, the result is a hybrid genetic algorithm.
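The hybridization described above — a genetic algorithm for global search followed by an iterative gradient method for local refinement — can be sketched generically. The fitness function below is a stand-in quadratic over two transform parameters, not a real image-similarity metric, and the GA is simplified to selection plus mutation:

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(p):
    # Stand-in for an image-similarity cost over registration parameters
    # (e.g. a 2-D shift); a real registration metric would go here.
    x, y = p
    return (x - 1.0)**2 + (y + 2.0)**2

def num_grad(f, p, h=1e-6):
    """Central-difference gradient, as a gradient method would use when
    the similarity metric has no analytic derivative."""
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

def hybrid_ga(f, bounds, pop=30, gens=40, lr=0.1, grad_steps=50):
    lo, hi = bounds
    P = rng.uniform(lo, hi, (pop, 2))
    for _ in range(gens):                        # GA global search phase
        scores = np.array([f(p) for p in P])
        elite = P[np.argsort(scores)[:pop // 2]]            # selection
        children = elite + rng.normal(0, 0.2, elite.shape)  # mutation
        P = np.vstack([elite, children])
    best = min(P, key=f).copy()
    for _ in range(grad_steps):                  # gradient refinement phase
        best -= lr * num_grad(f, best)
    return best

best = hybrid_ga(fitness, bounds=(-5.0, 5.0))
print(best)  # converges to the optimum at (1, -2)
```

The GA supplies robustness against local optima of the similarity surface; the gradient phase supplies the sub-pixel precision that a GA alone reaches only slowly.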
15

Hybrid Parallel Computing Strategies for Scientific Computing Applications

Lee, Joo Hong 10 October 2012 (has links)
Multi-core, multi-processor, and Graphics Processing Unit (GPU) computer architectures pose significant challenges with respect to the efficient exploitation of parallelism for large-scale, scientific computing simulations. For example, a simulation of the human tonsil at the cellular level involves the computation of the motion and interaction of millions of cells over extended periods of time. Also, the simulation of Radiative Heat Transfer (RHT) effects by the Photon Monte Carlo (PMC) method is an extremely computationally demanding problem. The PMC method is an example of the Monte Carlo simulation method, an approach extensively used in a wide range of application areas. Although the basic algorithmic framework of these Monte Carlo methods is simple, they can be extremely computationally intensive. Therefore, an efficient parallel realization of these simulations depends on a careful analysis of the nature of these problems and the development of an appropriate software framework. The overarching goal of this dissertation is to develop and understand what the appropriate parallel programming model should be to exploit these disparate architectures, both from the metric of efficiency and from a software engineering perspective. In this dissertation we examine these issues through a performance study of PathSim2, a software framework for the simulation of large-scale biological systems, on two different parallel architectures: distributed and shared memory. First, a message-passing implementation of a multiple germinal center simulation with PathSim2 is developed and analyzed for distributed memory architectures. Second, a germinal center simulation is implemented on a shared memory architecture with two parallelization strategies, based on Pthreads and OpenMP. Finally, we present work targeting a complete hybrid, parallel computing architecture. With this work we develop and analyze a software framework for generic Monte Carlo simulations implemented on multiple, distributed memory nodes consisting of a multi-core architecture with attached GPUs. This simulation framework is divided into two asynchronous parts: (a) a threaded, GPU-accelerated pseudo-random number generator (or producer), and (b) a multi-threaded Monte Carlo application (or consumer). The advantage of this approach is that the software framework can be used directly within any Monte Carlo application code, without requiring application-specific programming of the GPU. We examine this approach through a performance study of the simulation of RHT effects by the PMC method on a hybrid computing architecture. We present a theoretical analysis of our proposed approach, discuss methods to optimize performance based on this analysis, and compare this analysis to experimental results obtained from simulations run on two different hybrid, parallel computing architectures. / Ph. D.
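The producer/consumer decoupling described above can be sketched in miniature: a producer thread (standing in for the GPU-accelerated generator) fills a bounded queue with batches of random numbers, and the Monte Carlo consumer draws from the queue without ever touching the generator. The pi-estimation workload and batch sizes are illustrative, not from the dissertation:

```python
import threading
import queue
import numpy as np

def producer(q, batches, batch_size, seed=0):
    """Stand-in for the GPU-accelerated generator: pre-computes batches
    of uniforms and hands them to the consumer through a bounded queue."""
    rng = np.random.default_rng(seed)
    for _ in range(batches):
        q.put(rng.random((batch_size, 2)))
    q.put(None)  # sentinel: no more batches

def consumer(q):
    """Monte Carlo application: estimates pi from the produced numbers,
    never calling the generator directly."""
    inside = total = 0
    while (batch := q.get()) is not None:
        inside += int(np.sum(batch[:, 0]**2 + batch[:, 1]**2 <= 1.0))
        total += len(batch)
    return 4.0 * inside / total

# Bounded queue: the producer runs ahead but memory use stays fixed.
q = queue.Queue(maxsize=4)
t = threading.Thread(target=producer, args=(q, 100, 10_000))
t.start()
pi_est = consumer(q)
t.join()
print(pi_est)  # should be close to 3.14159
```

Because the consumer only sees a queue of batches, the generator behind it can be swapped (CPU thread, GPU kernel) without changing the application code — the property the framework above exploits.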
16

Um novo método híbrido aplicado à solução de sistemas não-lineares com raízes múltiplas / A new hybrid method applied to the solution of nonlinear systems with multiple roots

Maurício Rodrigues Silva 22 June 2009 (has links)
This work aims to present solutions of nonlinear systems with multiple roots using a hybrid algorithm. For this purpose, a random search algorithm based on the method proposed by Luus and Jaakola (1973) was developed and implemented as the random search stage for initial points, which are then refined by the Hooke and Jeeves algorithm. The contribution of this work is a hybrid algorithm that uses the Luus-Jaakola and Hooke and Jeeves algorithms as its search and refinement stages, respectively. To this end, the above algorithms are encapsulated as functions within the hybrid algorithm. Besides these two stages, the hybrid algorithm has two other important characteristics: it is executed repeatedly until a sufficient number of distinct solutions is reached, and these solutions then undergo a classification process by interval, where each interval generates a set of nearby solutions which are in turn submitted to a final minimization stage, resulting in a single solution value per class. Thus each class produces a unique solution belonging to the final solution set of the problem, since the algorithm is applied to problems with multiple solutions. The hybrid algorithm was then tested on several classical nonlinear programming problems, in particular unconstrained problems with multiple solutions. After the tests, the results were compared with the Luus-Jaakola algorithm and with the Interval Newton/Generalized Bisection (IN/GB) method, in order to obtain a quantitative and qualitative analysis of its performance. Finally, the hybrid algorithm was found to obtain superior results compared with the others.
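The two core stages named above — Luus-Jaakola random search followed by Hooke-Jeeves pattern-search refinement — can be sketched on a standard test problem. This is a generic illustration of the two classical methods, not the author's implementation (which adds repeated runs and interval classification of the collected solutions); Himmelblau's function and all tuning constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def luus_jaakola(f, x0, radius, iters=200, shrink=0.97):
    """Luus-Jaakola random search: sample uniformly in a box around the
    incumbent, keep improvements, contract the box each iteration."""
    x, r = np.array(x0, float), float(radius)
    for _ in range(iters):
        cand = x + rng.uniform(-r, r, x.size)
        if f(cand) < f(x):
            x = cand
        r *= shrink
    return x

def hooke_jeeves(f, x0, step=0.1, tol=1e-8):
    """Hooke-Jeeves pattern search: exploratory moves along each axis,
    halving the step whenever no axis move improves."""
    x = np.array(x0, float)
    while step > tol:
        improved = False
        for i in range(x.size):
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                if f(trial) < f(x):
                    x, improved = trial, True
                    break
        if not improved:
            step /= 2
    return x

# Himmelblau's function has four global minima (a "multiple roots" case).
f = lambda p: (p[0]**2 + p[1] - 11)**2 + (p[0] + p[1]**2 - 7)**2
x = hooke_jeeves(f, luus_jaakola(f, [0.0, 0.0], radius=5.0))
print(x, f(x))
```

Rerunning the pair from different seeds lands on different minima, which is exactly why the full algorithm collects many runs and classifies the solutions by interval before the final minimization.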
18

Machine learning in predictive maintenance of industrial robots

Morettini, Simone January 2021 (has links)
Industrial robots are a key component for several industrial applications. Like all mechanical tools, they do not last forever. The solution to extend the life of the machine is to perform maintenance on the degraded components. The optimal approach is called predictive maintenance, which aims to forecast the best moment for performing maintenance on the robot. This minimizes maintenance costs and prevents mechanical failures that can lead to unplanned production stops. There already exist methods to perform predictive maintenance on industrial robots, but these methods require additional sensors. This research aims to predict the anomalies by only using data from the sensors that are already used to control the robot. A machine learning approach is proposed for implementing predictive maintenance of industrial robots, using the torque profiles as input data. The selected algorithms are tested on simulated data created using wear and temperature models. The torque profiles from the simulator are used to extract a health index for each joint, which in turn is used to detect anomalous states of the robot. The health index has a fast exponential growth trend which is difficult to predict in advance. A Gaussian process regressor, an Exponentron, and hybrid algorithms are applied to the prediction of the health-state time series to implement the predictive maintenance. The predictions are evaluated considering the accuracy of the time series prediction and the precision of anomaly forecasting. The investigated methods are shown to be able to predict the development of the wear and to detect the anomalies in advance. The results reveal that the hybrid approach obtained by combining predictions from different algorithms outperforms the other solutions. Finally, the analysis of the results shows that the algorithms are sensitive to the quality of the data and do not perform well when the data have a low sampling rate or missing samples.
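One of the predictors named above, Gaussian process regression on a health-index time series, can be sketched in a few lines. This is a generic zero-mean GP with an RBF kernel, not the thesis's Exponentron or hybrid predictor, and the exponential wear curve, kernel length scale and noise level below are made-up illustrative values:

```python
import numpy as np

def gp_regress(X_train, y_train, X_test, length=2.0, sigma_n=1e-2):
    """Minimal Gaussian process regression (RBF kernel, zero mean):
    posterior mean at X_test given noisy observations y_train."""
    def rbf(A, B):
        return np.exp(-0.5 * (A[:, None] - B[None, :])**2 / length**2)
    K = rbf(X_train, X_train) + sigma_n**2 * np.eye(len(X_train))
    return rbf(X_test, X_train) @ np.linalg.solve(K, y_train)

# Synthetic health index: slow exponential wear growth, in the spirit of
# the simulated torque-based index described above (rate is illustrative).
t = np.arange(0.0, 30.0)
health = np.exp(0.1 * t) - 1.0
train = (t % 2 == 0) | (t == t[-1])     # observe every other sample
pred = gp_regress(t[train], health[train], t[~train])
err = float(np.max(np.abs(pred - health[~train])))
print(err)  # held-out samples are recovered closely
```

A GP also yields a posterior variance (omitted here for brevity), which is what makes it attractive for forecasting: the widening uncertainty band tells the maintenance planner how far ahead the prediction can be trusted.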
19

Représentation de solution en optimisation continue, multi-objectif et applications / Representation of solutions in continuous and multi-objective optimization, with applications

Zidani, Hafid 26 October 2013 (has links)
The main objective of this work is to develop new global algorithms for solving single- and multi-objective optimization problems, based on representation formulas whose main task is to generate initial points belonging to an area close to the global minimum. In this context, a new approach called RFNM is proposed and tested on several nonlinear, non-differentiable and multimodal functions. An extension to the infinite-dimensional case is also established, with an approach proposed for finding the global minimum. Moreover, several mechanical design problems with random parameters were considered and solved using this approach, together with an improvement of the NNC multi-objective method. Finally, a new multi-objective optimization method called RSMO is presented; it solves multi-objective optimization problems by generating a sufficient number of points to represent the Pareto-optimal front.
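RFNM's representation formulas are specific integral formulas for locating the global minimum; as a loose, generic stand-in for the overall scheme (generate candidate points concentrated near the global minimum, then refine locally), the following multistart sketch uses plain random sampling and gradient descent. Everything here — the test function, sample counts and step sizes — is an illustrative assumption, not the thesis's method:

```python
import numpy as np

rng = np.random.default_rng(7)

def multistart_min(f, dim, n_samples=2000, n_keep=5, lr=0.05, steps=400):
    """Multistart sketch: draw many random points, keep the best few
    (a cloud near the global minimum), refine each with gradient
    descent, and return the overall winner."""
    pts = rng.uniform(-10, 10, (n_samples, dim))
    scores = np.array([f(p) for p in pts])
    starts = pts[np.argsort(scores)[:n_keep]]

    def refine(x):
        x = x.copy()
        for _ in range(steps):
            g = np.zeros(dim)
            for i in range(dim):            # central-difference gradient
                e = np.zeros(dim); e[i] = 1e-6
                g[i] = (f(x + e) - f(x - e)) / 2e-6
            x -= lr * g
        return x

    return min((refine(s) for s in starts), key=f)

# Multimodal test function: many local minima, unique global minimum at 0.
f = lambda x: float(0.1 * x @ x - np.cos(x).sum() + x.size)
x = multistart_min(f, dim=2)
print(x)  # close to the global minimum at the origin
```

The point of the sketch is the division of labor the abstract describes: the sampling stage only needs to land some starts in the global basin, after which cheap local refinement does the rest.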
