151 |
Unsupervised Spatio-Temporal Activity Learning and Recognition in a Stream Processing Framework / Oövervakad maskininlärning och klassificering av spatio-temporala aktiviteter i ett ström-baserat ramverk
Tiger, Mattias January 2014 (has links)
Learning to recognize and predict common activities, performed by objects and observed by sensors, is an important and challenging problem related to both artificial intelligence and robotics. In this thesis, the general problem of dynamic adaptive situation awareness is considered, and we argue for the need for an on-line, bottom-up approach. A candidate for a bottom layer is proposed, which we consider capable of future extensions that can bring us closer to the goal. We present a novel approach to adaptive activity learning, in which a mapping between raw data and primitive activity concepts is learned and continuously improved on-line and unsupervised. The approach takes streams of object observations as input and learns a probabilistic representation of both the observed spatio-temporal activities and their causal relations. The dynamics of the activities are modeled using sparse Gaussian processes, and their causal relations using probabilistic graphs. The learned model supports both estimating the most likely current activity and predicting the most likely future (and past) activities. Methods and ideas from a wide range of previous work are combined to provide a uniform and efficient way to handle a variety of common problems related to learning, classifying, and predicting activities. The framework is evaluated both by learning activities in a simulated traffic monitoring application and by learning the flight patterns of an internally developed autonomous quadcopter system. The conclusion is that our framework is capable of learning the observed activities in real time with good accuracy. We see this work as a step towards unsupervised learning of activities for robotic systems, so that they can adapt to new circumstances autonomously and learn new activities on the fly that can be detected and predicted immediately.
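To make the flavor of the approach concrete, the sketch below fits one probabilistic trajectory model per activity and scores a new observation stream by its log-likelihood under each model. It is only an illustration of the idea: the thesis uses sparse Gaussian processes, while this sketch fits an exact GP (scikit-learn) to a small subsample of points, and all data and function names are invented for the example.

```python
# Model each activity's dynamics as a GP over normalized time, then score a
# new trajectory by its log-likelihood under each learned activity model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_activity_model(times, positions, n_subset=30):
    """Fit a GP mapping normalized time -> position for one activity."""
    idx = np.linspace(0, len(times) - 1, min(n_subset, len(times))).astype(int)
    gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1e-3),
                                  normalize_y=True)
    return gp.fit(times[idx].reshape(-1, 1), positions[idx])

def score_trajectory(gp, times, positions):
    """Gaussian log-likelihood of an observed trajectory under the model."""
    mean, std = gp.predict(times.reshape(-1, 1), return_std=True)
    return -0.5 * np.sum(((positions - mean) / std) ** 2
                         + np.log(2 * np.pi * std ** 2))

# Two synthetic activities (straight pass vs. turn), then classify a stream.
t = np.linspace(0, 1, 200)
straight = fit_activity_model(t, 2.0 * t + 0.02 * np.random.randn(200))
turn = fit_activity_model(t, np.sin(3 * t) + 0.02 * np.random.randn(200))
obs = 2.0 * t + 0.03 * np.random.randn(200)
print("straight:", score_trajectory(straight, t, obs))
print("turn:    ", score_trajectory(turn, t, obs))
```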
|
152 |
Bayesovská optimalizace / Bayesian optimization
Kostovčík, Peter January 2017 (has links)
Optimization is an important part of mathematics and is mostly used in practical applications. For specific types of objective functions, many different methods exist, but choosing a method is difficult when the objective is unknown and/or expensive to evaluate. One answer is Bayesian optimization, which, instead of optimizing the objective directly, builds a probabilistic model of it and uses that model to construct an easily optimizable auxiliary function. It is an iterative method that uses information from previous iterations to choose the next point at which the objective is evaluated, aiming to find the optimum within fewer iterations. This thesis introduces Bayesian optimization, summarizes its different approaches in lower and higher dimensions, and shows when it is suitable to use. An important part of the thesis is my own optimization algorithm, which is applied to different practical problems, e.g., parameter optimization in a machine learning algorithm.
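The core loop the abstract describes (a probabilistic surrogate plus an easily optimizable auxiliary function) can be sketched in a few lines. This is a generic illustration, not the thesis's own algorithm; the Expected Improvement acquisition, the grid search, and the toy objective are assumptions for brevity.

```python
# A minimal Bayesian optimization loop for a 1-D objective: fit a GP
# surrogate, maximize Expected Improvement on a grid, evaluate the objective.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(gp, X_grid, y_best, xi=0.01):
    mu, sigma = gp.predict(X_grid, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu - xi) / sigma            # minimization convention
    return (y_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

objective = lambda x: np.sin(3 * x) + 0.5 * x     # "expensive" black box
X = np.array([[0.2], [1.5], [2.8]])               # initial design
y = objective(X).ravel()
grid = np.linspace(0, 3, 500).reshape(-1, 1)

for _ in range(15):                               # BO iterations
    gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)
    x_next = grid[np.argmax(expected_improvement(gp, grid, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmin(y)].item(), "best f:", y.min())
```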
|
153 |
Modelagem experimental de um link FSO com inserção de feixes não difrativos / Modelling a link of free space optical communication experimental with insertion of non-diffracting beams
Aleixo Júnior, José Francisco Meireles, 1977- 22 August 2018 (has links)
Advisor: Michel Zamboni Rached / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Previous issue date: 2013
Abstract: The main aims of this work were to build a free space optical (FSO) communication link and subsequently insert a Bessel (nondiffracting) beam into the link. After a brief review of the concepts behind the operation and limitations of FSO systems, we built an optical link spanning a distance of 50 meters at a much reduced cost. To reduce the possible impact of optical beam diffraction and scattering, we proposed the use of nondiffracting beams to implement the link. After studying the basic characteristics of nondiffracting Bessel beams and their experimental generation, we inserted such a beam into a short-distance optical link in order to demonstrate the real possibility of using these beams in free space optical communications. / Master's degree in Electrical Engineering (Telecommunications and Telematics)
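As background to the entry above: the transverse intensity of an ideal zeroth-order Bessel beam follows I(ρ) ∝ J₀(k_ρ ρ)², independent of propagation distance, which is what makes such beams attractive against diffraction in FSO links. A small illustrative computation follows; the wavelength and axicon cone angle are assumed values, not taken from the dissertation.

```python
# Transverse profile and central-lobe radius of an ideal Bessel beam.
import numpy as np
from scipy.special import j0, jn_zeros

wavelength = 632.8e-9                  # He-Ne laser, metres (assumed)
k = 2 * np.pi / wavelength
theta = 0.001                          # axicon cone half-angle, rad (assumed)
k_rho = k * np.sin(theta)              # transverse wavenumber

rho = np.linspace(0, 2e-3, 1000)       # radial coordinate, metres
intensity = j0(k_rho * rho) ** 2       # normalized transverse intensity

spot_radius = jn_zeros(0, 1)[0] / k_rho    # first zero of J0 -> central lobe
print(f"central lobe radius: {spot_radius * 1e6:.1f} um")
```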
|
154 |
Modelling Bitcell Behaviour
Sebastian, Maria Treesa January 2020 (has links)
With advancements in technology, the dimensions of transistors are scaling down. This shrinks the size of memory bitcells and increases their sensitivity to process variations introduced during manufacturing. Failure of a single bitcell can cause the failure of an entire memory; hence careful statistical analysis is essential to estimate the highest reliable performance of a bitcell before using it in memory design. Given the high repetitiveness of bitcells, the traditional method of Monte Carlo simulation would require a long time to accurately estimate rare failure events. A more practical approach is importance sampling, where more samples are collected from the failure region. Even though importance sampling is much faster than Monte Carlo simulation, it is still fairly time-consuming, as it demands an iterative search, making it impractical for large simulation sets. This thesis proposes two machine learning models that can be used to estimate the performance of a bitcell. The first model predicts the time taken by the bitcell for a read or write operation. The second model predicts the minimum voltage required to maintain bitcell stability. The models were trained using the k-nearest neighbors algorithm and Gaussian process regression. Three sparse approximations were implemented in the time prediction model, as a bigger dataset was available. The obtained results show that the models trained using Gaussian process regression provided promising results.
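As a rough illustration of the comparison described above, the sketch below trains a k-nearest-neighbors regressor and a Gaussian process regressor on synthetic data standing in for bitcell read time versus process parameters. The data, features, and hyperparameters are invented for the example and are not the thesis dataset.

```python
# Compare KNN and GP regression on synthetic "read time vs. process
# variation" data (purely illustrative stand-in for SPICE simulation data).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))              # e.g. threshold-voltage shifts
y = 1.0 + 0.3 * X[:, 0] - 0.2 * X[:, 1] ** 2 + 0.05 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knn = KNeighborsRegressor(n_neighbors=10).fit(X_tr, y_tr)
gp = GaussianProcessRegressor(RBF([1.0] * 3) + WhiteKernel(),
                              normalize_y=True).fit(X_tr, y_tr)

print("KNN R^2:", knn.score(X_te, y_te))
print("GP  R^2:", gp.score(X_te, y_te))
```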
|
155 |
Reliability Analysis of Linear Dynamic Systems by Importance Sampling-Separable Monte Carlo Technique
Thapa, Badal January 2020 (has links)
No description available.
|
156 |
Joint Calibration of a Cladding Oxidation and a Hydrogen Pick-up Model for Westinghouse Electric Sweden AB
Nyman, Joakim January 2020 (has links)
Knowledge regarding a nuclear power plant's potential and limitations is of utmost importance when working in the nuclear field. One way to extend this knowledge is to use fuel performance codes that mimic real-world phenomena to the best of their ability. Fuel performance codes involve a system of interlinked and complex models to predict the thermo-mechanical behaviour of the fuel rods. These models use several model parameters that can be imprecise, and the parameters therefore need to be fitted/calibrated against measurement data. This thesis presents two methods to calibrate model parameters in the presence of unknown sources of uncertainty. The case where these methods have been tested is the oxidation and hydrogen pick-up of the zirconium cladding around the fuel rods. Initially, training and testing data were sampled using the Dakota software in combination with the fuel performance code TRANSURANUS so that a Gaussian process surrogate model could be built. The model parameters were then calibrated in a Bayesian way by an MCMC algorithm. Additionally, two models are presented to handle unknown sources of uncertainty that may arise from model inadequacies, nuisance parameters, or hidden measurement errors: the marginal likelihood optimization method and the margin method. To calibrate the model parameters, data from two sources were used: one that only had data on oxide thickness, but in large quantity, and another that had both oxide data and hydrogen concentration data, but less of it. The model parameters were calibrated using the presented methods, but an unforeseen non-linearity in the joint oxidation and hydrogen pick-up case, when predicting the correlation of the model parameters, made this result unreliable.
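The calibration step can be illustrated with a minimal Metropolis-Hastings sampler: an inexpensive surrogate stands in for the fuel performance code, and a fixed model-discrepancy term loosely plays the role of the margin method. All functions, values, and data below are assumptions for the sketch, not the actual TRANSURANUS surrogate.

```python
# Bayesian calibration of two model parameters against noisy measurements,
# with a fixed discrepancy term sigma_d absorbing unknown uncertainty sources.
import numpy as np

def surrogate(theta):
    """Stand-in for the GP surrogate of the fuel performance code."""
    return theta[0] * np.linspace(0, 1, 20) ** theta[1]   # oxide vs. exposure

rng = np.random.default_rng(1)
data = surrogate([2.0, 1.5]) + 0.05 * rng.normal(size=20)  # synthetic data
sigma_d = 0.05                                             # assumed margin term

def log_post(theta):
    if not (0 < theta[0] < 10 and 0 < theta[1] < 5):
        return -np.inf                                     # uniform prior bounds
    r = data - surrogate(theta)
    return -0.5 * np.sum(r ** 2) / sigma_d ** 2

theta, lp = np.array([1.0, 1.0]), -np.inf
samples = []
for _ in range(20000):
    prop = theta + 0.05 * rng.normal(size=2)               # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:               # MH accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

print("posterior mean:", np.mean(samples[5000:], axis=0))  # discard burn-in
```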
|
157 |
LANE TRACKING USING DEPENDENT EXTENDED TARGET MODELS
Akbari, Behzad January 2021 (has links)
Detection of multiple lane markings (lane-lines) on road surfaces is an essential aspect of autonomous vehicles. Although several approaches have been proposed to detect lanes, detecting multiple lane-lines consistently, particularly across a stream of frames and under varying lighting conditions, is still a challenging problem. Since road markings are designed to be smooth and parallel, lane-line sampled features tend to be spatially and temporally correlated within and between frames. In this thesis, we develop novel methods to model these spatial and temporal dependencies in the form of a target tracking problem. Instead of resorting to the conventional method of processing each frame to detect lanes only in the space domain, we treat the overall problem as a Multiple Extended Target Tracking (METT) problem.

In the first step, we modelled lane-lines as multiple "independent" extended targets and developed a spline mathematical model for the shape of the targets. We showed that expanding the estimation across the time domain could improve the estimation results. We identify a set of control points for each spline, which are tracked over time. To overcome the clutter problem, we developed an integrated probabilistic data association filter (IPDAF) as our basis, and formulated a METT algorithm to track multiple splines corresponding to each lane-line.

In the second part of our work, we investigated the coupling between multiple extended targets. We considered the non-parametric case and modeled target dependency using a Multi-Output Gaussian Process. We showed that considering dependency between extended targets could improve shape estimation results. We exploit the dependency between extended targets by proposing a novel recursive approach called the Multi-Output Spatio-Temporal Gaussian Process Kalman Filter (MO-STGP-KF). We used the MO-STGP-KF to estimate and track multiple dependent lane markings that are possibly degraded or obscured by traffic. Our method was tested for tracking multiple lane-lines but can be employed to track multiple dependent rigid-shape targets by using the measurement model in the radial space.

In the third section, we developed a Spatio-Temporal Joint Probabilistic Data Association Filter (ST-JPDAF). In multiple extended target tracking problems with clutter, extended targets sometimes share measurements: for example, in lane-line detection, when two lane markings pass or merge together. In single-point target tracking, this problem can be solved using the well-known Joint Probabilistic Data Association (JPDA) filter. In the single-point case, even when measurements are dependent, we can stack them in the coupled form of JPDA. In this last chapter, we expanded JPDA for tracking multiple dependent extended targets using an approach called ST-JPDAF. We managed the dependency of measurements in space (within a frame) and time (between frames) using different kernel functions, which can be learned from training data. This extension can be used to track the shape and dynamics of dependent extended targets within clutter when targets share measurements.

The performance of the proposed methods in all three chapters is quantified on real data scenarios, and their results are compared against well-known model-based, semi-supervised, and fully-supervised methods. The proposed methods offer very promising results. / Thesis / Doctor of Philosophy (PhD)
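A stripped-down sketch of the recursion underlying such trackers is a linear Kalman filter over spline control points, shown below. The thesis's MO-STGP-KF additionally couples control points across lane-lines through a multi-output GP kernel; that coupling, the spline measurement model, and the data association step are all omitted here, and every dimension and noise level is an assumed placeholder.

```python
# Kalman filter over the lateral offsets of spline control points, with a
# constant-position motion model and direct (identity) observation.
import numpy as np

n = 8                                  # control points per lane-line
F = np.eye(n)                          # constant-position dynamics
H = np.eye(n)                          # control points observed directly
Q = 0.01 * np.eye(n)                   # process noise
R = 0.25 * np.eye(n)                   # measurement noise

def kf_step(x, P, z):
    x_pred = F @ x                     # predict
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R           # innovation covariance
    K = P_pred @ H.T @ np.linalg.solve(S, np.eye(n))   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)              # update
    P_new = (np.eye(n) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(2)
truth = np.linspace(-1, 1, n)          # true lateral offsets
x, P = np.zeros(n), np.eye(n)
for _ in range(50):                    # one step per video frame
    z = truth + 0.5 * rng.normal(size=n)
    x, P = kf_step(x, P, z)
print("RMSE:", np.sqrt(np.mean((x - truth) ** 2)))
```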
|
158 |
Parameter Stability in Additive Normal Tempered Stable Processes for Equity Derivatives
Alcantara Martinez, Eduardo Alberto January 2023 (has links)
This thesis focuses on the parameter stability of additive normal tempered stable processes when calibrating a volatility surface. The studied processes arise as a generalization of Lévy normal tempered stable processes, and their main characteristic is their time-dependent parameters. The theoretical background of the subject is presented, and the construction of these processes is discussed taking the definition of Lévy processes as a starting point. The implementation of an option valuation model using Fourier techniques and the calibration process of the model are described. The thesis analyzes the parameter stability of the model when calibrating the volatility surface of a market index (EURO STOXX 50) during three time spans: Dec 2016 to Dec 2017 (after Brexit and the US presidential elections), Nov 2019 to Nov 2020 (during the COVID-19 pandemic), and a more recent period, April 2023. The findings contribute to the understanding of the model itself and the behavior of the parameters under particular economic conditions.
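The calibration exercise the abstract refers to can be caricatured as follows: for each maturity slice, fit that slice's parameters by minimizing the squared error between model and market implied volatilities, then study how the fitted values move across slices and calibration dates. The "pricer" below is a deliberate placeholder (the actual model uses Fourier-based valuation of the additive process), and all names and numbers are illustrative.

```python
# Per-maturity least-squares calibration against a synthetic volatility smile.
import numpy as np
from scipy.optimize import minimize

def model_vols(params, strikes, maturity):
    """Placeholder smile; the thesis uses Fourier-based pricing instead."""
    sigma, kappa = params
    return sigma + kappa * np.log(strikes / 100.0) ** 2 / np.sqrt(maturity)

def calibrate_slice(market_vols, strikes, maturity, x0=(0.2, 0.5)):
    loss = lambda p: np.sum((model_vols(p, strikes, maturity) - market_vols) ** 2)
    return minimize(loss, x0, method="Nelder-Mead").x

strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
smile_6m = np.array([0.26, 0.23, 0.21, 0.22, 0.24])     # synthetic market vols
smile_1y = np.array([0.25, 0.23, 0.215, 0.22, 0.235])

for T, smile in [(0.5, smile_6m), (1.0, smile_1y)]:
    sigma, kappa = calibrate_slice(smile, strikes, T)
    print(f"T={T}: sigma={sigma:.3f}, kappa={kappa:.3f}")  # stability check
```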
|
159 |
MULTI-FIDELITY MODELING AND MULTI-OBJECTIVE BAYESIAN OPTIMIZATION SUPPORTED BY COMPOSITIONS OF GAUSSIAN PROCESSES
Homero Santiago Valladares Guerra (15383687) 01 May 2023 (has links)
Practical design problems in engineering and science involve the evaluation of expensive black-box functions, the optimization of multiple (often conflicting) targets, and the integration of data generated by multiple sources of information, e.g., numerical models with different levels of fidelity. If not properly handled, the complexity of these design problems can lead to lengthy and costly development cycles. In recent years, Bayesian optimization has emerged as a powerful alternative for solving optimization problems that involve the evaluation of expensive black-box functions. Bayesian optimization has two main components: a probabilistic surrogate model of the black-box function and an acquisition function that drives the optimization. Its ability to find high-performance designs within a limited number of function evaluations has attracted the attention of many fields, including the engineering design community. The practical relevance of strategies able to fuse information from different sources, and the need to optimize multiple targets, have motivated the development of multi-fidelity modeling techniques and multi-objective Bayesian optimization methods. A key component in the vast majority of these methods is the Gaussian process (GP), due to its flexibility and mathematical properties.

The objective of this dissertation is to develop new approaches in the areas of multi-fidelity modeling and multi-objective Bayesian optimization. To achieve this goal, this study explores the use of linear and non-linear compositions of GPs to build probabilistic models for Bayesian optimization. Additionally, motivated by the rationale behind well-established multi-objective methods, this study presents a novel acquisition function to solve multi-objective optimization problems in a Bayesian framework. This dissertation presents four contributions. First, the auto-regressive model, one of the most prominent multi-fidelity models in engineering design, is extended to include informative mean functions that capture prior knowledge about the global trend of the sources. This additional information enhances the predictive capabilities of the surrogate. Second, the non-linear auto-regressive Gaussian process (NARGP) model, a non-linear multi-fidelity model, is integrated into a multi-objective Bayesian optimization framework. The NARGP model offers the possibility of leveraging sources that present non-linear cross-correlations to enhance the performance of the optimization process. Third, GP classifiers, which employ non-linear compositions of GPs, are combined with conditional probabilities to solve multi-objective problems. Finally, a new multi-objective acquisition function is presented. This function employs two terms: a distance-based metric, the expected Pareto distance change, that captures the optimality of a given design, and a diversity index that prevents the evaluation of non-informative designs. The proposed acquisition function generates informative landscapes that produce Pareto front approximations that are both broad and diverse.
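A building block behind any such multi-objective acquisition function, including the expected Pareto distance change, is extracting the current Pareto front from the evaluated designs. A minimal sketch for two minimized objectives follows; the data is illustrative.

```python
# Non-dominated filtering: keep the designs no other design dominates.
import numpy as np

def pareto_mask(Y):
    """Boolean mask of the non-dominated rows of Y (objectives minimized)."""
    mask = np.ones(len(Y), dtype=bool)
    for i, yi in enumerate(Y):
        # yi is dominated if some row is <= in every objective and < in one
        dominated = np.all(Y <= yi, axis=1) & np.any(Y < yi, axis=1)
        mask[i] = not dominated.any()
    return mask

rng = np.random.default_rng(3)
Y = rng.uniform(size=(50, 2))            # 50 designs, 2 objectives
front = Y[pareto_mask(Y)]
print(f"{len(front)} of {len(Y)} designs are Pareto-optimal")
```

A distance-based acquisition can then score a candidate by how much its predicted outcome would shift this front, which is the intuition behind the metric named in the abstract.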
|
160 |
Investigation of Machine Learning Regression Techniques to Predict Critical Heat Flux
Helmryd Grosfilley, Emil January 2022 (has links)
A unifying model for Critical Heat Flux (CHF) prediction has been elusive for over 60 years. With the release of the data utilized in the making of the 2006 Groeneveld lookup table (LUT), by far the largest public CHF database available to date, data-driven predictions over a large variable space can be performed. The popularization of machine learning techniques for solving regression problems allows for deeper and more advanced tools when analyzing the data. We compare three different machine learning algorithms for predicting the occurrence of CHF in vertical, uniformly heated round tubes. For each selected algorithm (ν-support vector regression, Gaussian process regression, and neural network regression), an optimized hyperparameter set is fitted. The best performing algorithm is the neural network, which achieves a standard deviation of the predicted/measured ratio three times lower than the LUT, while Gaussian process regression and ν-support vector regression both achieve a two times lower standard deviation. All algorithms significantly outperform the LUT in prediction performance. The neural network model and training methodology are designed to prevent overfitting, which is confirmed by analysis of the predictions. Additionally, a feasibility study of transfer learning and uncertainty quantification is performed to investigate potential future applications.
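As a loose illustration of the comparison, the sketch below fits the same three model families (ν-SVR, GP regression, and a neural network) with scikit-learn on synthetic data and reports the standard deviation of the predicted/measured ratio. Features, data, and hyperparameters are invented stand-ins, not the Groeneveld database or the thesis's tuned models.

```python
# Compare nu-SVR, GP regression, and an MLP on a synthetic regression task,
# scoring each by the spread of the predicted/measured (P/M) ratio.
import numpy as np
from sklearn.svm import NuSVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.uniform(size=(1000, 3))        # stand-ins for pressure, mass flux, quality
y = 1.0 + np.exp(-2 * X[:, 2]) * (1 + X[:, 0]) * X[:, 1] \
    + 0.02 * rng.normal(size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "nu-SVR": make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0)),
    "GP":     GaussianProcessRegressor(normalize_y=True),
    "MLP":    make_pipeline(StandardScaler(),
                            MLPRegressor((64, 64), max_iter=2000,
                                         random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    ratio = model.predict(X_te) / y_te               # predicted / measured
    print(f"{name}: std of P/M = {np.std(ratio):.3f}")
```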
|