About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Automatic parameter tuning in localization algorithms / Automatisk parameterjustering av lokaliseringsalgoritmer

Lundberg, Martin January 2019 (has links)
Many algorithms today require a number of parameters to be set in order to perform well in a given application. Tuning these parameters manually is often difficult and tedious, especially when the number of parameters is large, and it is unlikely that a human can find the best possible solution for difficult problems. Being able to find good sets of parameters automatically could therefore both provide better results and save a lot of time. In this work, two prominent methods, Bayesian optimization and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), are evaluated for automatic parameter tuning in localization algorithms. Both methods are evaluated using a localization algorithm on different datasets and compared in terms of computational time and the precision and recall of the final solutions. This study shows that it is feasible to automatically tune the parameters of localization algorithms using the evaluated methods. In all experiments performed in this work, Bayesian optimization made the biggest improvements early in the optimization, but CMA-ES always overtook it and went on to reach the best final solutions after some time. The study also shows that automatic parameter tuning is feasible even with noisy real-world data collected from 3D cameras.
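As a rough, hedged illustration of the two methods compared in this abstract (not code from the thesis), the sketch below tunes two hypothetical localization parameters by minimizing 1 minus the F1-score, once with Bayesian optimization and once with CMA-ES. The placeholder objective, the parameter names, and the use of the third-party scikit-optimize and cma packages are assumptions.

```python
# Minimal sketch: tuning two hypothetical localization parameters by minimizing
# 1 - F1, with Bayesian optimization (scikit-optimize) and CMA-ES (the cma package).
import numpy as np
import cma
from skopt import gp_minimize

def localization_f1(params):
    """Placeholder for running a localization algorithm on a dataset and
    returning the F1-score of its detections; 'voxel_size' and 'max_iters'
    are hypothetical parameters."""
    voxel_size, max_iters = params
    return np.exp(-((voxel_size - 0.05) ** 2) / 0.01) * min(max_iters / 50.0, 1.0)

def objective(params):
    # Both optimizers minimize, so return 1 - F1.
    return 1.0 - localization_f1(params)

# Bayesian optimization: fits a Gaussian-process model of the objective and
# picks the next parameters by maximizing an acquisition function.
bo_result = gp_minimize(objective, dimensions=[(0.01, 0.5), (10, 200)],
                        n_calls=40, random_state=0)
print("BO best:", bo_result.x, "F1:", 1.0 - bo_result.fun)

# CMA-ES: samples candidates from a multivariate Gaussian and adapts its mean
# and covariance from the best-ranked candidates. In practice the parameters
# should be rescaled to comparable ranges before choosing sigma0.
xbest, es = cma.fmin2(objective, [0.1, 100.0], 0.3,
                      {"maxfevals": 200, "verbose": -9})
print("CMA-ES best:", xbest, "F1:", 1.0 - es.result.fbest)
```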
22

Fault detection and model-based diagnostics in nonlinear dynamic systems

Nakhaeinejad, Mohsen 09 February 2011 (has links)
Modeling, fault assessment, and diagnostics of rolling element bearings and induction motors were studied. A dynamic model of rolling element bearings with faults was developed using vector bond graphs. The model incorporates gyroscopic and centrifugal effects, contact deflections and forces, contact slip and separations, and localized faults. Dents and pits on the inner race, outer race, and balls were modeled through surface profile changes. Experiments with healthy and faulty bearings validated the model. Bearing load zones under various radial loads and clearances were simulated, and the model was used to study the dynamics of faulty bearings. The effects of the type, size, and shape of faults on the vibration response, and on the dynamics of contacts in the presence of localized faults, were studied. A signal processing algorithm, called the feature plot, based on variable-window averaging and time-feature extraction, was proposed for diagnostics of rolling element bearings. In experiments, faults such as dents, pits, and rough surfaces on the inner race, balls, and outer race were detected and isolated using the feature plot technique. Time features such as shape factor, skewness, kurtosis, peak value, crest factor, impulse factor, and mean absolute deviation were used in the feature plots. The performance of feature plots in bearing fault detection when only a finite number of samples is available was demonstrated. Results suggest that the feature plot technique can detect and isolate localized faults and rough surface defects in rolling element bearings, and that the proposed diagnostic algorithm has potential for other applications such as gearboxes. A model-based diagnostic framework consisting of modeling, nonlinear observability analysis, and parameter tuning was developed for three-phase induction motors. A bond graph model was developed and verified with experiments. Nonlinear observability analysis based on Lie derivatives identified the most observable configuration of sensors and parameters. A continuous-discrete Extended Kalman Filter (EKF) was used for parameter tuning to detect stator and rotor faults, bearing friction, and mechanical loads from current and speed signals. A dynamic process noise technique based on the validation index was implemented for the EKF, and a complex-step Jacobian technique improved the computational performance of the EKF and the observability analysis. Results suggest that motor faults, bearing rotational friction, and the mechanical load of induction motors can be detected using model-based diagnostics as long as the configuration of sensors and parameters is observable.
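The time features named in this abstract have standard definitions. The following sketch (not the thesis's feature-plot code) computes them for one window of a vibration signal with NumPy/SciPy; the injected-impulse example at the end is a hypothetical illustration of how a localized fault changes them.

```python
# Minimal sketch: time-domain features commonly used for bearing diagnostics.
import numpy as np
from scipy.stats import skew, kurtosis

def time_features(x):
    """Return the time features listed in the abstract for a 1-D signal window."""
    x = np.asarray(x, dtype=float)
    abs_mean = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    return {
        "shape_factor": rms / abs_mean,        # RMS relative to mean absolute value
        "skewness": skew(x),                   # asymmetry of the amplitude distribution
        "kurtosis": kurtosis(x),               # impulsiveness (excess kurtosis)
        "peak_value": peak,
        "crest_factor": peak / rms,            # peak relative to RMS
        "impulse_factor": peak / abs_mean,     # peak relative to mean absolute value
        "mean_abs_deviation": np.mean(np.abs(x - np.mean(x))),
    }

# Hypothetical usage: a signal with periodic impulses (as a localized fault
# would produce) yields clearly different features than a healthy-looking one.
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 4096)
faulty = healthy.copy()
faulty[::256] += 8.0  # inject impulses
print(time_features(healthy)["kurtosis"], time_features(faulty)["kurtosis"])
```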
23

Hyper-optimalizace neuronových sítí založená na Gaussovských procesech / Gaussian Processes Based Hyper-Optimization of Neural Networks

Coufal, Martin January 2020 (has links)
The goal of this master's thesis is to create a tool for optimizing the hyper-parameters of artificial neural networks. The tool must be able to optimize several hyper-parameters, which may moreover be correlated. I solved this problem by implementing an optimizer that uses Gaussian processes to predict the influence of the individual hyper-parameters on the resulting accuracy of the neural network. From experiments performed on several benchmark functions, I found that the implemented tool is able to achieve better results than optimizers based on random search, and thus to reduce the average number of optimization steps needed. Optimization based on random search achieved better results only in the first steps of the optimization, before the Gaussian-process-based optimizer builds a sufficiently accurate model of the problem. However, almost all experiments performed on the MNIST dataset showed better results for the random-search optimizer. These differences across the experiments are probably due to the complexity of the chosen benchmark functions or to the chosen parameters of the implemented optimizer.
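As a hedged sketch of the general technique the thesis describes (not its actual tool), the loop below performs Gaussian-process optimization of a single hypothetical hyper-parameter using an expected-improvement acquisition; the placeholder validation-error function, the Matérn kernel, and the bounds are assumptions.

```python
# Minimal sketch: Gaussian-process hyper-parameter optimization with
# expected improvement, for one hyper-parameter.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def validation_error(log_lr):
    """Placeholder for training a network and returning its validation error;
    a 'log learning rate' hyper-parameter is used as a hypothetical example."""
    return (log_lr + 3.0) ** 2 + 0.05 * np.random.rand()

bounds = (-6.0, 0.0)
X = list(np.random.uniform(*bounds, size=3))   # a few random initial evaluations
y = [validation_error(x) for x in X]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):
    gp.fit(np.array(X).reshape(-1, 1), np.array(y))
    cand = np.linspace(*bounds, 500).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    best = min(y)
    # Expected improvement for minimization.
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (best - mu) / sigma
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        ei[sigma == 0.0] = 0.0
    x_next = float(cand[np.argmax(ei), 0])
    X.append(x_next)
    y.append(validation_error(x_next))

print("best hyper-parameter:", X[int(np.argmin(y))], "error:", min(y))
```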
24

Evoluční algoritmy pro vícekriteriální optimalizaci / Evolutionary Algorithms for Multiobjective Optimization

Pilát, Martin January 2013 (has links)
Multi-objective evolutionary algorithms have gained a lot of attention in recent years. They have proven to be among the best multi-objective optimizers and have been used in many industrial applications. However, their usability is hindered by the large number of evaluations of the objective functions they require, which can be expensive when solving practical tasks. In order to reduce the number of objective function evaluations, surrogate models can be used. These are simple and fast approximations of the real objectives. In this work we present the results of research carried out between the years 2009 and 2013. We present a multi-objective evolutionary algorithm with an aggregate surrogate model, and its newer version, which also uses a surrogate model for the pre-selection of individuals. In the next part we discuss the problem of selecting a particular type of model. We show which characteristics of the various models are important and desirable, and provide a framework which combines surrogate modeling with meta-learning. Finally, in the last part, we apply multi-objective optimization to the problem of hyper-parameter tuning. We show that additional objectives can make finding good parameters for classifiers faster.
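A hedged sketch of the pre-selection idea mentioned above (not the algorithms from the thesis): many offspring are generated cheaply, a surrogate trained on previously evaluated points ranks them, and only the most promising candidates are evaluated with the expensive objectives. The weighted-sum aggregation, the random-forest surrogate, and the toy bi-objective problem are assumptions.

```python
# Minimal sketch: surrogate-based pre-selection in an evolutionary loop.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expensive_objectives(x):
    """Placeholder for two expensive objectives (a tiny bi-objective test problem)."""
    return np.array([np.sum(x ** 2), np.sum((x - 2.0) ** 2)])

def aggregate(f, weights=(0.5, 0.5)):
    # Simple weighted-sum aggregation of the objectives (an assumption here).
    return float(np.dot(weights, f))

rng = np.random.default_rng(1)
dim, pop_size, n_offspring, k_evaluate = 5, 10, 100, 10

pop = rng.uniform(-5, 5, size=(pop_size, dim))
archive_X = [x for x in pop]
archive_y = [aggregate(expensive_objectives(x)) for x in pop]

for gen in range(10):
    # Train a cheap surrogate on all exactly evaluated points.
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(np.array(archive_X), np.array(archive_y))

    # Generate many cheap candidates by Gaussian mutation of random parents.
    parents = pop[rng.integers(0, pop_size, n_offspring)]
    offspring = parents + rng.normal(0.0, 0.5, size=parents.shape)

    # Pre-selection: only the candidates the surrogate ranks best are evaluated.
    promising = offspring[np.argsort(surrogate.predict(offspring))[:k_evaluate]]
    for x in promising:
        archive_X.append(x)
        archive_y.append(aggregate(expensive_objectives(x)))

    # Keep the best exactly evaluated points as the next population.
    order = np.argsort(archive_y)[:pop_size]
    pop = np.array(archive_X)[order]

print("best aggregated objective:", min(archive_y))
```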
25

A Software Product Line for Parameter Tuning

Pukhkaiev, Dmytro 09 August 2023 (has links)
Optimization is omnipresent in our world. Its numerous applications range from industrial cases, such as logistics, construction management, or production planning, to the private sphere, with problems such as selecting daycare or planning a vacation. In this thesis, we concentrate on expensive black-box optimization (EBBO) problems, a subset of optimization problems (OPs) characterized by an expensive cost of evaluating the objective function. Such OPs recur in various domains, being known as hyperparameter optimization in machine learning, performance configuration optimization or parameter tuning in search-based software engineering, simulation optimization in operations research, and meta-optimization or parameter tuning in the optimization domain itself. This high diversity of domains has produced a plethora of solving approaches, which adhere to a similar structure and workflow but differ in details. The software frameworks stemming from different areas possess only partially intersecting manageability points, i.e., they lack manageability. In this thesis, we argue that the lack of manageability in EBBO is a major problem, which leads to underachieving optimization quality. The goal of this thesis is to study the role of manageability in EBBO and to investigate whether improving the manageability of EBBO frameworks increases optimization quality. To reach this goal, we appeal to software product line engineering (SPLE), a methodology for developing highly manageable software systems. Based on the foundations of SPLE, we introduce a novel framework for EBBO called BRISE. It offers: 1) a loosely-coupled software architecture, separating the concerns of the experiment designer and the developer of EBBO strategies; 2) full coverage of all EBBO problem types; and 3) a context-aware variability model, which captures the experiment-designer-defined OP with a content model, and manageability points, including their variants and constraints, with a cardinality-based feature model. The high manageability of the BRISE framework enables us: 1) to extend the framework with novel efficient strategies, such as adaptive repetition management; and 2) to introduce novel EBBO mechanisms, such as multi-objective compositional surrogate modeling, dynamic sampling, and hierarchical surrogate modeling. The evaluation of the novel approaches on a set of case studies, including the WFG benchmark for multi-objective optimization, combined selection and parameter control of meta-heuristics, and energy optimization, demonstrated their superiority over the state-of-the-art competitors. This supports the research hypothesis of the thesis: improving the manageability of an EBBO framework makes it possible to increase optimization quality.
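To make the notion of manageability points more concrete, here is a minimal, hypothetical sketch (not BRISE itself) of a loosely-coupled tuning loop in which the sampling and repetition-management strategies are exchangeable components, separating the experiment designer's wiring from the strategy developer's implementations; all names and the toy objective are assumptions.

```python
# Minimal sketch: exchangeable components ("manageability points") in a tuning loop.
from abc import ABC, abstractmethod
import random

class Sampler(ABC):
    """Strategy for proposing the next configuration to evaluate."""
    @abstractmethod
    def propose(self, history): ...

class Repeater(ABC):
    """Strategy deciding how many times a noisy configuration is re-evaluated."""
    @abstractmethod
    def repetitions(self, config, history): ...

class RandomSampler(Sampler):
    def __init__(self, bounds):
        self.bounds = bounds
    def propose(self, history):
        return {name: random.uniform(lo, hi) for name, (lo, hi) in self.bounds.items()}

class FixedRepeater(Repeater):
    def __init__(self, n):
        self.n = n
    def repetitions(self, config, history):
        return self.n

def run_experiment(objective, sampler, repeater, budget):
    """The experiment designer only wires components together; strategy
    developers provide new Sampler/Repeater implementations independently."""
    history = []
    for _ in range(budget):
        config = sampler.propose(history)
        runs = [objective(config) for _ in range(repeater.repetitions(config, history))]
        history.append((config, sum(runs) / len(runs)))
    return min(history, key=lambda item: item[1])

# Hypothetical use: tune one parameter of a noisy black-box objective.
best = run_experiment(
    objective=lambda c: (c["x"] - 3.0) ** 2 + random.gauss(0, 0.1),
    sampler=RandomSampler({"x": (0.0, 10.0)}),
    repeater=FixedRepeater(3),
    budget=50,
)
print(best)
```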
26

Parameter Tuning in a Jet Printing Machine using Reinforcement Learning / Parameterjustering i en jet printermaskin med en Förstärkande inlärningsalgoritm

MURTAZA, ALEXANDER January 2021 (has links)
Surface mount technology is a common way to assemble electrical components onto Printed Circuit Boards (PCBs). To assemble the components, solder paste is used, and one way to apply solder paste onto a PCB is jet printing. The quality of the solder paste deposits on the PCB depends on the properties of the solder paste and the ejection parameter settings of the jet printer. Every solder paste is unique, with its own characteristics. Solder paste dots are of good quality if the positioning of the dot is good, the dot is circular, and the number of satellites is at a minimum; a satellite is a droplet that has fallen outside the main droplet. The parameters that have the largest effect on the solder paste are the waveform parameters Rise time and Voltage level. This master thesis examined the possibility of designing and implementing a feedback-based machine learning algorithm that can find the most suitable values for the Rise time and Voltage level, giving good quality of the solder paste deposits. The algorithm used was a Reinforcement Learning algorithm: a reward-based learning approach where an agent learns to interact with an environment by trial and error. The specific algorithm used was a Deep-Q-Learning algorithm. The thesis also examined how the camera resolution affects the decisions of the algorithm. To see the effect of the camera resolution, two machines were used, an older and a newer one, where one of the biggest differences is the camera resolution. It was concluded that a Deep-Q-Learning algorithm can be used to find the most suitable values for the waveform parameters Rise time and Voltage level, resulting in the specified quality of the solder paste deposits. It was also concluded that the algorithm converges faster with the lower camera resolution, but the results obtained with the higher camera resolution are closer to optimal; the parameters that are optimal for the lower-resolution camera are not optimal for the higher-resolution camera.
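As a simplified, hypothetical stand-in for the Deep-Q-Learning agent described above, the sketch below uses tabular Q-learning over a discretized Rise time/Voltage grid with a placeholder dot-quality reward; the parameter ranges, reward model, and hyper-parameters are assumptions, not values from the thesis.

```python
# Minimal sketch: tabular Q-learning stand-in for a DQN tuning Rise time and Voltage.
import numpy as np

rise_times = np.linspace(0.1, 1.0, 10)      # ms, hypothetical range
voltages = np.linspace(20.0, 80.0, 13)      # V, hypothetical range
actions = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # adjust rise time / voltage index

def dot_quality(rt, v):
    """Placeholder for measuring deposit quality from a camera image:
    higher is better (good position, circular dot, few satellites)."""
    return -((rt - 0.45) ** 2 / 0.05 + (v - 52.0) ** 2 / 300.0) + np.random.normal(0, 0.05)

q = np.zeros((len(rise_times), len(voltages), len(actions)))
alpha, gamma, eps = 0.2, 0.9, 0.2
state = (5, 6)
rng = np.random.default_rng(0)

for step in range(5000):
    # Epsilon-greedy action selection over the discretized parameter grid.
    a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(q[state]))
    di, dj = actions[a]
    nxt = (int(np.clip(state[0] + di, 0, len(rise_times) - 1)),
           int(np.clip(state[1] + dj, 0, len(voltages) - 1)))
    reward = dot_quality(rise_times[nxt[0]], voltages[nxt[1]])
    # Standard Q-learning update towards the bootstrapped target.
    q[state][a] += alpha * (reward + gamma * np.max(q[nxt]) - q[state][a])
    state = nxt

best = np.unravel_index(np.argmax(np.max(q, axis=2)), q.shape[:2])
print("suggested Rise time:", rise_times[best[0]], "ms, Voltage:", voltages[best[1]], "V")
```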
27

Prédiction de suites individuelles et cadre statistique classique : étude de quelques liens autour de la régression parcimonieuse et des techniques d'agrégation / Prediction of individual sequences and prediction in the statistical framework : some links around sparse regression and aggregation techniques

Gerchinovitz, Sébastien 12 December 2011 (has links)
The topics addressed in this thesis lie in statistical machine learning. Our main framework is the prediction of arbitrary deterministic sequences (or individual sequences). It includes online learning tasks for which we cannot make any stochasticity assumption on the data to be predicted, which calls for robust methods. In this work, we analyze several connections between the theory of individual sequences and the classical statistical setting, e.g., the regression model with fixed or random design, where stochastic assumptions are made. These two frameworks benefit from one another: some statistical methods can be adapted to the online learning setting to satisfy deterministic performance guarantees; conversely, some individual-sequence techniques are useful to tune the parameters of a statistical method and to get risk bounds that are adaptive to the unknown variance. We study such connections for several related problems: high-dimensional online linear regression under a sparsity scenario (with an application to the stochastic setting), online linear regression on L1-balls, and aggregation of nonlinear models in a model selection framework (regression on a fixed design). We also use and develop stochastic techniques to compute the minimax rates of game-theoretic online measures of performance (e.g., internal and swap regrets) in a deterministic or stochastic environment.
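One of the basic aggregation techniques from the individual-sequences literature referred to above is the exponentially weighted average forecaster; the sketch below (not code from the thesis) aggregates a few hypothetical experts under square loss on an arbitrary outcome sequence, with the learning rate eta chosen arbitrarily.

```python
# Minimal sketch: exponentially weighted average forecaster for an arbitrary
# (individual) sequence of outcomes; no stochastic assumption is made.
import numpy as np

def ewa_forecaster(expert_predictions, outcomes, eta=0.5):
    """Aggregate T x N expert predictions sequentially under square loss."""
    T, N = expert_predictions.shape
    cum_loss = np.zeros(N)
    total_loss = 0.0
    for t in range(T):
        weights = np.exp(-eta * (cum_loss - cum_loss.min()))   # shift for numerical stability
        weights /= weights.sum()
        prediction = float(weights @ expert_predictions[t])    # convex aggregation of experts
        total_loss += (prediction - outcomes[t]) ** 2
        cum_loss += (expert_predictions[t] - outcomes[t]) ** 2
    return total_loss, cum_loss

# Hypothetical example: a deterministic outcome sequence and three experts.
t = np.arange(200)
outcomes = np.sin(t / 10.0)
experts = np.stack([np.sin(t / 10.0) + 0.1,
                    np.cos(t / 10.0),
                    np.zeros_like(t, dtype=float)], axis=1)
forecaster_loss, expert_losses = ewa_forecaster(experts, outcomes)
print("forecaster loss:", forecaster_loss, "best expert loss:", expert_losses.min())
```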
