201

Forecasting hourly electricity consumption for sets of households using machine learning algorithms

Linton, Thomas January 2015 (has links)
To address inefficiency, waste, and the negative consequences of electricity generation, companies and government entities are looking to behavioural change among residential consumers. To drive behavioural change, consumers need better feedback about their electricity consumption. A monthly or quarterly bill provides the consumer with almost no useful information about the relationship between their behaviours and their electricity consumption. Smart meters are now widely deployed in developed countries and are capable of providing electricity consumption readings at an hourly resolution, but this data is mostly used as a basis for billing and not as a tool to help consumers reduce their consumption. One component required to deliver innovative feedback mechanisms is the capability to forecast hourly electricity consumption at the household scale. This thesis evaluates the effectiveness of a selection of kernel-based machine learning methods at forecasting the hourly aggregate electricity consumption for different sized sets of households. The work demonstrates that k-Nearest Neighbour Regression and Gaussian Process Regression are the most accurate methods within the constraints of the problem considered. In addition to accuracy, the advantages and disadvantages of each machine learning method are evaluated, and a simple comparison of each algorithm's computational performance is made.
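To make the forecasting task concrete, the following is a minimal sketch of k-Nearest Neighbour regression applied to hourly load data, assuming scikit-learn. The synthetic data and the features (hour of day plus a 24-hour lag) are invented for illustration; they are not the feature set used in the thesis.

```python
# Sketch: k-NN regression for hourly aggregate load forecasting.
# Assumes scikit-learn; the lag/hour features are illustrative only.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
# Synthetic aggregate consumption: a daily cycle plus noise.
load = 5 + 2 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.3, hours.size)

# Features: hour of day and the load 24 hours earlier.
X = np.column_stack([hours[24:] % 24, load[:-24]])
y = load[24:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)

model = KNeighborsRegressor(n_neighbors=10).fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```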
202

Situation understanding and risk assessment framework for preventive driver assistance

Armand, Alexandre 31 May 2016 (has links)
Modern vehicles include advanced driver assistance systems for comfort and active safety features. While these systems contribute to the reduction of road accidents, their deployment has shown that performance is constrained by their limited situation understanding capabilities. This is mainly due to perception constraints and to ignoring the context within which these vehicles evolve. The result is last-minute risk assessment, and thus curative assistance in the form of warning alerts or automatic braking. This thesis focuses on the introduction of contextual information into the decision processes of driving assistance systems. The overall purpose is to infer risk earlier than conventional driving assistance systems, as well as to enhance the level of trust in the information provided to drivers.

Several factors govern vehicle behaviour. These include the road network and traffic rules, as well as other road users such as vehicles and pedestrians with which the vehicle interacts. This results in strong interdependencies amongst all entities. Further, while traffic rules apply equally to all participants, each driver interacts differently with the immediate environment, so the same behaviour can carry a different level of risk depending on the driver. This information must be incorporated within the decision-making processes of these systems. In this thesis, a framework is proposed that combines a priori information from digital navigation maps with real-time information from on-board vehicle sensors and/or external sources via wireless communication links, to infer a better situation understanding and thus anticipate risks. This tenet is similar to the task of a co-pilot using a priori annotated road information. The proposed approach is constrained to data from close-to-production sensors. The framework consists of two phases: situation understanding and risk assessment.

The situation understanding phase performs a high-level interpretation of all observations by including a priori information within the framework. The purpose is to understand how the perceived road entities interact, and how the interactions constrain the vehicle's behaviour. This phase establishes the spatio-temporal relationships between the perceived entities to determine their relevance with respect to the subject vehicle's motion, and then to identify which entities should be tracked. For this purpose, an ontology is proposed that stores a priori information about how different road entities relate and interact. This initial phase was tested in real time using data recorded on a passenger vehicle evolving in constrained environments.

The risk assessment phase then looks into the perceived situation and the manner in which it becomes dangerous. To demonstrate the framework's applicability, a use case applied to road intersections was chosen; intersections are complex parts of the road network where different entities converge and where most accidents occur. To detect risk situations, the manner in which the driver reacts in a given situation is learned through Gaussian Processes. This knowledge about the driver is then used within a context-aware Bayesian network to estimate whether the driver is likely to interact as expected with the relevant entities. The probabilistic approach taken makes it possible to account for the uncertainties embedded in the observations. Field trials were performed using a passenger vehicle to validate the proposed approach. The results show that by incorporating drivers' individualities and their actuations together with the observation of the vehicle state, it is possible to better estimate whether the driver interacts as expected with the environment, and thus to anticipate risk. Further, it is shown that it is possible to generate assistance earlier than conventional safety systems.
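As a rough illustration of the Gaussian process step described above (learning how a driver normally approaches an intersection and flagging deviations), here is a sketch assuming scikit-learn. The speed-versus-distance data, kernel choices, and deviation test are invented stand-ins for the thesis's learned driver models.

```python
# Sketch: learn a nominal speed-vs-distance profile for intersection
# approach with GP regression, then flag an observation that deviates.
# Assumes scikit-learn; data and thresholds are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
dist = rng.uniform(0, 100, 200)                      # metres to intersection
speed = 0.45 * dist + rng.normal(0, 2.0, dist.size)  # recorded approach speeds

gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1.0))
gp.fit(dist.reshape(-1, 1), speed)

d_now, v_now = 30.0, 55.0 / 3.6                      # current observation (m, m/s)
mu, sigma = gp.predict([[d_now]], return_std=True)
z = abs(v_now - mu[0]) / sigma[0]
print("deviation (std devs):", z)                    # large z -> unexpected behaviour
```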
203

GENERATIVE MODELS WITH MARGINAL CONSTRAINTS

Bingjing Tang (16380291) 16 June 2023 (has links)
Generative models form powerful tools for learning data distributions and simulating new samples. Recent years have seen significant advances in the flexibility and applicability of such models, with Bayesian approaches like nonparametric Bayesian models and deep neural network models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) finding use in a wide range of domains. However, the black-box nature of these models means that they are often hard to interpret, and they often come with modeling implications that are inconsistent with side knowledge derived from domain expertise. This thesis studies situations where the modeler has side knowledge represented as probability distributions on functionals of the objects being modeled, and studies methods to incorporate this particular kind of side knowledge into flexible generative models. The dissertation covers three main parts.

The first part focuses on incorporating a special case of the aforementioned side knowledge into flexible nonparametric Bayesian models. Practitioners often have additional distributional information about a subset of the coordinates of the observations being modeled. The flexibility of nonparametric Bayesian models usually implies incompatibility with this side information, which creates the need for methods to incorporate such knowledge into these models. We design a specialized generative process to build in this side knowledge and propose a novel sigmoid Gaussian process conditional model. We also develop a corresponding posterior sampling method based on data augmentation to overcome a doubly intractable problem. We illustrate the efficacy of the proposed constrained nonparametric Bayesian model in a variety of real-world scenarios, including modeling environmental and earthquake data.

The second part discusses neural network approaches to satisfying the general side knowledge described above. The generative models considered in this part broaden into black-box models. We formulate the side knowledge incorporation problem as a constrained divergence minimization problem and propose two scalable neural network approaches as its solution. We demonstrate their practicality using various synthetic and real examples.

The third part concentrates on a specific generative model of individual pixels of fMRI data constructed from a latent group image. There is usually two-fold side knowledge about the latent group image: spatial structure and partial activation zones. The former can be captured by modeling the prior for the group image with Markov random fields; the latter, often obtained from previous related studies, is left for future research. We propose a novel Bayesian model with Markov random fields and aim to estimate the maximum a posteriori for the group image. We also derive a variational Bayes algorithm to overcome local optima in the optimization.
204

Reconstructing Historical Earthquake-Induced Tsunamis: Case Study of 1820 Event Near South Sulawesi, Indonesia

Paskett, Taylor Jole 13 July 2022 (has links) (PDF)
We build on the method introduced by Ringer et al., applying it to an 1820 event that happened near South Sulawesi, Indonesia. We utilize other statistical models to aid our Metropolis-Hastings sampler, including a Gaussian process which informs the prior. We apply the method to multiple possible fault zones to determine which fault is the most likely source of the earthquake and tsunami. After collecting nearly 80,000 samples, we find that between the two most likely fault zones, the Walanae fault zone matches the anecdotal accounts much better than Flores. However, to support the anecdotal data, both samplers tend toward powerful earthquakes that may not be supported by the faults in question. This indicates that further research is warranted, and may suggest that some other type of event took place, such as a multiple-fault rupture or a landslide-generated tsunami.
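The sampler at the core of this approach is standard Metropolis-Hastings. The sketch below, in plain NumPy, shows the generic random-walk variant on a toy Gaussian target; the actual posterior in the thesis couples fault-source parameters to a tsunami simulation and anecdotal-account likelihoods, which is far more expensive to evaluate.

```python
# Sketch: a generic random-walk Metropolis-Hastings sampler of the kind
# used to sample earthquake source parameters. The Gaussian target here
# is a stand-in for the thesis's simulation-based posterior.
import numpy as np

def metropolis_hastings(log_post, x0, n_samples, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        prop = x + step * rng.normal(size=x.shape)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy target: a standard 2-D Gaussian posterior.
chain = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), [0.0, 0.0], 5000)
print("mean:", chain.mean(axis=0), "std:", chain.std(axis=0))
```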
205

MULTI-FIDELITY MODELING AND MULTI-OBJECTIVE BAYESIAN OPTIMIZATION SUPPORTED BY COMPOSITIONS OF GAUSSIAN PROCESSES

Homero Santiago Valladares Guerra (15383687) 01 May 2023 (has links)
Practical design problems in engineering and science involve the evaluation of expensive black-box functions, the optimization of multiple (often conflicting) targets, and the integration of data generated by multiple sources of information, e.g., numerical models with different levels of fidelity. If not properly handled, the complexity of these design problems can lead to lengthy and costly development cycles. In recent years, Bayesian optimization has emerged as a powerful alternative for optimization problems that involve the evaluation of expensive black-box functions. Bayesian optimization has two main components: a probabilistic surrogate model of the black-box function and an acquisition function that drives the optimization. Its ability to find high-performance designs within a limited number of function evaluations has attracted the attention of many fields, including the engineering design community. The practical relevance of strategies able to fuse information from different sources, together with the need to optimize multiple targets, has motivated the development of multi-fidelity modeling techniques and multi-objective Bayesian optimization methods. A key component in the vast majority of these methods is the Gaussian process (GP), due to its flexibility and mathematical properties.

The objective of this dissertation is to develop new approaches in the areas of multi-fidelity modeling and multi-objective Bayesian optimization. To achieve this goal, this study explores the use of linear and non-linear compositions of GPs to build probabilistic models for Bayesian optimization. Additionally, motivated by the rationale behind well-established multi-objective methods, this study presents a novel acquisition function to solve multi-objective optimization problems in a Bayesian framework. The dissertation presents four contributions. First, the auto-regressive model, one of the most prominent multi-fidelity models in engineering design, is extended to include informative mean functions that capture prior knowledge about the global trend of the sources; this additional information enhances the predictive capabilities of the surrogate. Second, the non-linear auto-regressive Gaussian process (NARGP) model, a non-linear multi-fidelity model, is integrated into a multi-objective Bayesian optimization framework. The NARGP model offers the possibility to leverage sources that present non-linear cross-correlations to enhance the performance of the optimization process. Third, GP classifiers, which employ non-linear compositions of GPs, are combined with conditional probabilities to solve multi-objective problems. Finally, a new multi-objective acquisition function is presented. This function employs two terms: a distance-based metric, the expected Pareto distance change, that captures the optimality of a given design, and a diversity index that prevents the evaluation of non-informative designs. The proposed acquisition function generates informative landscapes that produce Pareto front approximations that are both broad and diverse.
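The first contribution builds on the auto-regressive (Kennedy and O'Hagan) multi-fidelity model, in which the high-fidelity response is modeled as a scaled low-fidelity GP plus a GP discrepancy. Below is a simplified two-step sketch of that idea using scikit-learn; the test functions are common illustrations, and the least-squares estimate of the scale factor is a shortcut, not the dissertation's joint inference.

```python
# Sketch of the auto-regressive two-fidelity idea:
#   f_high(x) ~ rho * f_low(x) + delta(x), with GPs for f_low and delta.
# Assumes scikit-learn; functions and sample sizes are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f_lo = lambda x: np.sin(8 * np.pi * x)                 # cheap source
f_hi = lambda x: (x - np.sqrt(2)) * f_lo(x) ** 2       # expensive source

X_lo = np.linspace(0, 1, 50).reshape(-1, 1)            # many cheap runs
X_hi = np.linspace(0, 1, 8).reshape(-1, 1)             # few expensive runs

gp_lo = GaussianProcessRegressor(RBF(0.1)).fit(X_lo, f_lo(X_lo).ravel())

# Estimate the scale factor rho by least squares at the high-fidelity
# sites, then model the discrepancy delta(x) with a second GP.
lo_at_hi = gp_lo.predict(X_hi)
y_hi = f_hi(X_hi).ravel()
rho = np.dot(lo_at_hi, y_hi) / np.dot(lo_at_hi, lo_at_hi)
gp_delta = GaussianProcessRegressor(RBF(0.2)).fit(X_hi, y_hi - rho * lo_at_hi)

X_test = np.linspace(0, 1, 200).reshape(-1, 1)
pred_hi = rho * gp_lo.predict(X_test) + gp_delta.predict(X_test)
print(pred_hi[:3])
```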
206

Early-Stage Prediction of Lithium-Ion Battery Cycle Life Using Gaussian Process Regression

Wikland, Love January 2020 (has links)
Data-driven prediction of battery health has gained increased attention over the past couple of years, in both academia and industry. Accurate early-stage predictions of battery performance would create new opportunities for production and use. Using data from only the first 100 cycles, in a data set of 124 cells whose lifetimes span between 150 and 2300 cycles, this work combines parametric linear models with non-parametric Gaussian process regression to achieve cycle lifetime predictions with an overall mean error of 8.8%. This work presents a relevant contribution to current research, as this combination of methods has not previously been applied to regressing battery lifetime on a high-dimensional feature space. The study and the results presented further show that Gaussian process regression can serve as a valuable contributor to future data-driven implementations of battery health prediction.
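A minimal sketch of the combination the abstract describes (a parametric linear trend with a non-parametric GP fitted to its residuals) is given below, assuming scikit-learn. The synthetic features stand in for the real early-cycle summary statistics.

```python
# Sketch: parametric linear model plus a GP on the residuals,
# predicting (log) cycle life from early-cycle features.
# Assumes scikit-learn; data and coefficients are invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(124, 5))                  # early-cycle features per cell
true_w = np.array([0.5, -0.3, 0.2, 0.0, 0.1])
y = X @ true_w + 0.1 * np.sin(3 * X[:, 0]) + rng.normal(0, 0.05, 124)

linear = LinearRegression().fit(X, y)          # parametric trend
resid = y - linear.predict(X)
gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(X, resid)

y_hat = linear.predict(X) + gp.predict(X)      # combined prediction
print("mean abs error:", np.mean(np.abs(y_hat - y)))
```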
207

Inference of buffer queue times in data processing systems using Gaussian Processes : An introduction to latency prediction for dynamic software optimization in high-end trading systems

Hall, Otto January 2017 (has links)
This study investigates whether Gaussian Process Regression can be applied to evaluate buffer queue times in large-scale data processing systems. It additionally considers whether high-frequency data stream rates can be generalized into a small subset of the sample space. With the aim of providing a basis for dynamic software optimization, a promising foundation for continued research is introduced. The study is intended to contribute to Direct Market Access financial trading systems, which process immense amounts of market data daily. Due to certain limitations, we adopt a naïve approach and model latencies as a function of only data throughput in eight small historical intervals. The training and test sets are constructed from raw market data, and we resort to pruning operations to shrink the datasets by a factor of approximately 0.0005 in order to achieve computational feasibility. We further consider four different implementations of Gaussian Process Regression. The resulting algorithms perform well on pruned datasets, with an average R² statistic of 0.8399 over six test sets of approximately equal size to the training set. Testing on non-pruned datasets indicates shortcomings in the generalization procedure, where input vectors corresponding to low-latency target values are associated with less accuracy. We conclude that, depending on the application, these shortcomings may make the model intractable. However, for the purposes of this study, it is found that buffer queue times can indeed be modelled by regression algorithms. We discuss several methods for improvement, with regard to both pruning procedures and Gaussian Processes, and open up promising avenues for continued research.
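The workflow the abstract describes (prune a very large raw dataset, fit Gaussian Process Regression, score R² on held-out data) can be sketched as follows, assuming scikit-learn. The synthetic throughput and latency data, the pruning by uniform subsampling, and the kernel are illustrative stand-ins for the study's actual procedure.

```python
# Sketch: prune a large dataset by a factor of ~0.0005, fit GP
# regression on the subset, report R^2 on held-out points.
# Assumes scikit-learn; data are synthetic stand-ins for market data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n = 200_000
throughput = rng.uniform(0, 1, size=(n, 8))            # 8 historical intervals
latency = throughput @ rng.uniform(0.5, 2.0, 8) + rng.normal(0, 0.1, n)

keep = rng.choice(n, size=int(n * 0.0005), replace=False)   # pruning step
X_tr, y_tr = throughput[keep], latency[keep]

gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(X_tr, y_tr)
test = rng.choice(n, size=keep.size, replace=False)
print("R^2:", gp.score(throughput[test], latency[test]))
```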
208

Design, development and investigation of innovative indoor approaches for healthcare solutions. Design and simulation of RFID and reconfigurable antenna for wireless indoor applications; modelling and Implementation of ambient and wearable sensing, activity recognition, using machine learning, neural network for unobtrusive health monitoring

Oguntala, George A. January 2019 (has links)
The continuous integration of wireless communication systems in medical and healthcare applications has made reliable healthcare applications and services for patient care and the smart home a reality. Diverse indoor approaches are sought to improve quality of life and, consequently, longevity. The research centres on the development of smart healthcare solutions using various indoor technologies and techniques for active and assisted living. First, smart health solutions for ambient and wearable assisted living in smart homes are sought. This requires a detailed study of indoor localisation. Different indoor localisation technologies, including acoustic, magnetic, optical and radio frequency, are evaluated and compared. From the evaluation, radio-frequency-based technologies, with interest in wireless fidelity (Wi-Fi) and radio frequency identification (RFID), are isolated for smart healthcare. The research then focuses on auto-identification technologies, with design considerations and performance constraints evaluated. Moreover, the design of various antennas for different indoor technologies to achieve innovative healthcare solutions is of interest. First, a meander-line passive RFID tag antenna resonating at the European ultra-high frequency is designed, simulated and evaluated. Second, a frequency-reconfigurable patch antenna with the capability to resonate at ten distinct frequencies, supporting Wi-Fi and worldwide interoperability for microwave access applications, is designed and simulated. Afterwards, a low-profile, lightweight, textile patch antenna using a denim material substrate is designed and experimentally verified. It is established that, by loading proper rectangular slots and introducing strip lines, substantial antenna miniaturisation is achieved. Novel wearable and ambient methodologies are then developed to further improve smart healthcare and smart homes. Machine learning and deep learning methods using a multivariate Gaussian model and a long short-term memory (LSTM) recurrent neural network are used to experimentally validate the viability of the new approaches. This work follows the construction of a SmartWall of passive RFID tags to achieve non-invasive data acquisition that is highly unobtrusive. / Tertiary Education Trust Fund (TETFund) of the Federal Government of Nigeria
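Of the methods named above, the multivariate Gaussian is the simplest to illustrate. The sketch below, assuming NumPy and SciPy, fits a multivariate Gaussian to windows of ambient-sensor features and flags low-likelihood windows; the features and threshold are invented for the example.

```python
# Sketch: unobtrusive activity/anomaly scoring with a multivariate
# Gaussian fitted to ambient sensor features. Assumes NumPy/SciPy;
# the features (RSSI mean, RSSI variance, motion count) are invented.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
# Training windows of "normal" activity.
train = rng.normal(loc=[-50.0, 4.0, 12.0], scale=[3.0, 1.0, 2.0], size=(500, 3))

mu = train.mean(axis=0)
cov = np.cov(train, rowvar=False)
model = multivariate_normal(mean=mu, cov=cov)

window = np.array([-49.0, 4.5, 11.0])              # a new observation window
score = model.logpdf(window)
threshold = np.percentile(model.logpdf(train), 1)  # 1st percentile of training
print("anomalous:", score < threshold)
```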
209

Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond

Maroñas Molano, Juan 28 February 2022 (has links)
This thesis is framed at the intersection between modern machine learning techniques, such as deep neural networks, and reliable probabilistic modeling. In many machine learning applications, we care not only about the prediction made by a model (e.g. this lung image presents cancer) but also about how confident the model is in making that prediction (e.g. this lung image presents cancer with 67% probability). In such applications, the model assists the decision-maker (in this case a doctor) in making the final decision. As a consequence, the probabilities provided by a model must reflect the true proportions of outcomes in the sets to which those probabilities are assigned; otherwise, the model is useless in practice. When this holds, we say that a model is perfectly calibrated. This thesis explores three ways to provide more calibrated models. First, it is shown how to implicitly calibrate models that are decalibrated by data augmentation techniques; a cost function is introduced that resolves this decalibration, taking as a starting point ideas derived from decision making with Bayes' rule. Second, it shows how to calibrate models using a post-calibration stage implemented with a Bayesian neural network. Finally, based on the limitations studied in the Bayesian neural network, which we hypothesize stem from a misspecified prior, a new stochastic process is introduced that serves as the prior distribution in a Bayesian inference problem. / Maroñas Molano, J. (2022). Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181582
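The calibration property described above is commonly measured with the expected calibration error (ECE), which the sketch below computes in plain NumPy on synthetic predictions from a deliberately overconfident model. The equal-width binning scheme is the standard one; this illustrates the concept, not the thesis's evaluation code.

```python
# Sketch: expected calibration error (ECE) on synthetic predictions.
# A well-calibrated model has confidence ~= accuracy in every bin.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """probs: predicted confidences in [0,1]; labels: 1 if correct else 0."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())
            ece += mask.mean() * gap       # weight gap by bin population
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 10_000)
# An overconfident model: true accuracy sits below reported confidence.
correct = (rng.uniform(size=conf.size) < conf - 0.1).astype(float)
print("ECE:", expected_calibration_error(conf, correct))   # ~0.1
```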
210

Graph Cut Based Mesh Segmentation Using Feature Points and Geodesic Distance

Liu, L., Sheng, Y., Zhang, G., Ugail, Hassan January 2015 (has links)
Both prominent feature points and geodesic distance are key factors for mesh segmentation. With these two factors, this paper proposes a graph cut based mesh segmentation method. The mesh is first preprocessed by Laplacian smoothing. According to the Gaussian curvature, candidate feature points are then selected by a predefined threshold. With DBSCAN (Density-Based Spatial Clustering of Applications with Noise), the selected candidate points are separated into clusters, and the point with the maximum curvature in each cluster is regarded as a final feature point. We label these feature points and regard the faces of the mesh as nodes for graph cut. Our energy function is constructed by utilizing the ratio between the geodesic distance and the Euclidean distance of vertex pairs of the mesh. The final segmentation result is obtained by minimizing the energy function using graph cut. The proposed algorithm is pose-invariant and can robustly segment the mesh into different parts in line with the selected feature points.
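The feature-point selection step (threshold by Gaussian curvature, cluster with DBSCAN, keep the maximum-curvature point per cluster) can be sketched as follows, assuming scikit-learn. The random points and curvature values are stand-ins for real mesh vertices.

```python
# Sketch: curvature thresholding + DBSCAN clustering to pick one
# feature point per cluster. Assumes scikit-learn; data are synthetic
# stand-ins for mesh vertices with Gaussian curvature values.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(300, 3))        # vertex positions
curvature = rng.gamma(2.0, 1.0, size=300)         # per-vertex curvature

candidates = curvature > np.percentile(curvature, 90)   # threshold step
cand_pts, cand_curv = points[candidates], curvature[candidates]

labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(cand_pts)
feature_points = [
    cand_pts[labels == k][np.argmax(cand_curv[labels == k])]
    for k in set(labels) if k != -1               # skip DBSCAN noise label
]
print(len(feature_points), "feature points selected")
```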
