211

THERMODYNAMIC-BASED, ONE-DIMENSIONAL LIQUID-SOLID MIXTURE MODEL TO PREDICT WAX DEPOSITION

FABIO GASPAR SANTOS JUNIOR 14 December 2021
Wax deposition is one of the main problems in flow assurance. The hot fluid leaving the reservoir is transported through pipelines that exchange heat with the cold environment, and solid particles precipitate when the fluid reaches temperatures below the WAT (Wax Appearance Temperature). This causes not only an increase in viscosity but also the formation of deposits on the pipe walls, reducing the cross-flow area and the production, or even totally obstructing the pipe, resulting in significant maintenance effort and large capital losses. Since the problem is highly recurrent, predicting wax deposition is crucial for pipeline design and operation, and numerical models that provide accurate results efficiently are needed. Considering that pipelines are very long, the present work proposes a one-dimensional hydrodynamic model coupled with a two-dimensional heat transfer model, obtained through a marching process along the pipeline. All properties and the solid volume fraction are determined as functions of fluid composition, pressure, and temperature by interpolating tables created with a thermodynamic model in a pre-processing step. The model considers a deposit to exist on the pipe wall when the solid volume fraction is equal to or greater than 2 percent. The proposed model was evaluated in different situations, at laboratory and field scales, and the predicted temperature, pressure, and deposit thickness show reasonable agreement with literature data, indicating that the implemented model satisfactorily reproduces the physical behavior of the wax deposition phenomenon.
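To make the table-lookup step concrete, here is a minimal sketch (not the thesis code): the solid volume fraction is precomputed on a (pressure, temperature) grid in a pre-processing step, interpolated during the march along the pipeline, and a deposit is assumed wherever the interpolated fraction reaches the 2 percent threshold. The grid values and the assumed WAT are invented for illustration.

```python
# Sketch of the pre-processed property table and the deposit criterion.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

pressures = np.linspace(1e5, 1e7, 20)           # Pa
temperatures = np.linspace(280.0, 330.0, 50)    # K
wat = 310.0                                     # hypothetical WAT, K

# Hypothetical table: solid fraction grows as T drops below the WAT.
solid_fraction_table = np.clip(
    0.01 * (wat - temperatures)[None, :] * np.ones((20, 1)), 0.0, 0.3)

solid_fraction = RegularGridInterpolator(
    (pressures, temperatures), solid_fraction_table)

def deposit_forms(p, t, threshold=0.02):
    """Deposit is assumed on the wall when solid volume fraction >= 2%."""
    return solid_fraction((p, t)) >= threshold

print(deposit_forms(5e6, 305.0))   # True: 305 K is below the assumed WAT
```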
212

Machine learning multicriteria optimization in radiation therapy treatment planning

Zhang, Tianfang January 2019
In radiation therapy treatment planning, recent works have used machine learning based on historically delivered plans to automate the process of producing clinically acceptable plans. Compared to traditional approaches such as repeated weighted-sum optimization or multicriteria optimization (MCO), automated planning methods generally offer low computational times and minimal user interaction, but lack the flexibility of general-purpose frameworks such as MCO. Machine learning approaches can be especially sensitive to deviations in their dose prediction due to certain properties of the optimization functions usually used for dose mimicking and, moreover, suffer from the fact that there exists no general causality between prediction accuracy and optimized plan quality. In this thesis, we present a means of unifying ideas from machine learning planning methods with the well-established MCO framework. More precisely, given prior knowledge in the form of either a previously optimized plan or a set of historically delivered clinical plans, we are able to automatically generate Pareto optimal plans spanning a dose region corresponding to plans which are achievable as well as clinically acceptable. In the former case, this is achieved by introducing dose-volume constraints; in the latter case, by fitting a weighted-data Gaussian mixture model on pre-defined dose statistics using the expectation-maximization algorithm, modifying it with exponential tilting, and using specially developed optimization functions to take prediction uncertainties into account. Numerical results for conceptual demonstration are obtained for a prostate cancer case treated with a volumetric-modulated arc therapy technique. The methods developed in the thesis succeed in automatically generating Pareto optimal plans of satisfactory quality and diversity while excluding clinically irrelevant dose regions. When historical plans are used as prior knowledge, the computational times are significantly shorter than those typical of conventional MCO.
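As a conceptual sketch of the weighted-data Gaussian mixture fit mentioned above, the following one-dimensional toy version (synthetic dose statistics and uniform plan weights assumed; not the thesis implementation) simply carries a per-plan weight through both steps of expectation-maximization:

```python
# Weighted-data EM for a 1-D Gaussian mixture.
import numpy as np

def weighted_gmm_em(x, w, k=2, iters=100, seed=0):
    """EM for a 1-D Gaussian mixture where sample i carries weight w[i]."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)                     # random initial means
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities, scaled by the per-sample weights
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        rw = r * w[:, None]
        # M-step: weighted parameter updates
        nk = rw.sum(axis=0)
        mu = (rw * x[:, None]).sum(axis=0) / nk
        var = (rw * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / w.sum()
    return pi, mu, var

rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(45, 2, 200),   # hypothetical dose statistic (Gy)
                    rng.normal(60, 1, 100)])
w = np.ones_like(x)                            # uniform weights over plans
print(weighted_gmm_em(x, w))
```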
213

Adaptive Estimation using Gaussian Mixtures

Pfeifer, Tim 25 October 2023
This thesis offers a probabilistic solution to robust estimation using a novel adaptive estimator. Reliable state estimation is a mandatory prerequisite for autonomous systems interacting with the real world. The presence of outliers challenges the Gaussian assumption of numerous estimation algorithms, resulting in a potentially skewed estimate that compromises reliability. Many approaches attempt to mitigate erroneous measurements by using a robust loss function, which often comes with a trade-off between robustness and numerical stability. The proposed approach is purely probabilistic and enables adaptive large-scale estimation with non-Gaussian error models. The introduced Adaptive Mixture algorithm combines a nonlinear least squares backend with Gaussian mixtures as the measurement error model. Factor graphs as graphical representations allow an efficient and flexible application to real-world problems, such as simultaneous localization and mapping or satellite navigation. The proposed algorithms are constructed using an approximate expectation-maximization approach, which justifies their design probabilistically. This expectation-maximization is further generalized to enable adaptive estimation with arbitrary probabilistic models. Evaluating the proposed Adaptive Mixture algorithm in simulated and real-world scenarios demonstrates its versatility and robustness. A synthetic range-based localization experiment shows that it provides reliable estimation results, even under extreme outlier ratios. Real-world satellite navigation experiments prove its robustness in harsh urban environments. The evaluation on indoor simultaneous localization and mapping datasets extends these results to typical robotic use cases. The proposed adaptive estimator provides robust and reliable estimation under various instances of non-Gaussian measurement errors.
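A toy illustration of the core idea, simplified and assumed rather than taken from the thesis: a scalar state is estimated from outlier-contaminated measurements by alternating an EM-style responsibility computation under a two-component Gaussian mixture error model with a weighted least-squares update.

```python
# Robust scalar estimation with a Gaussian mixture error model.
import numpy as np

rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(5.0, 0.5, 90),    # inliers around true state 5
                    rng.normal(20.0, 5.0, 10)])  # gross outliers

x = np.median(z)                  # robust initialization
sig = np.array([0.5, 5.0])        # component std devs: inlier, outlier
pi = np.array([0.9, 0.1])         # component weights
for _ in range(20):
    r = z - x                                     # residuals
    # E-step: responsibility of the inlier component for each residual
    p = pi / sig * np.exp(-0.5 * (r[:, None] / sig) ** 2)
    w = p[:, 0] / p.sum(axis=1)
    # M-step / weighted least squares: downweight likely outliers
    x = np.sum(w * z) / np.sum(w)
print(x)   # close to 5.0 despite 10% outliers
```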
214

Incorporating Metadata Into the Active Learning Cycle for 2D Object Detection

Stadler, Karsten January 2021
In the past years, deep convolutional neural networks have proven very useful for 2D object detection in many applications. These networks require large amounts of labeled data, which can become increasingly costly for companies deploying detectors in practice if the label quality is lacking. Pool-based active learning is an iterative process of collecting subsets of data to be labeled by a human annotator and used for training, so as to optimize performance per labeled image. The detectors used in active learning cycles are conventionally pre-trained on a small subset, approximately 2% of the available data, labeled uniformly at random. This is something I challenge in this thesis by using image metadata. Since many machine learning models are a "jack of all trades, master of none", making it hard to train a model that generalizes to the entire data domain, it can be worthwhile to develop a detector for a specific target metadata domain. A simple Monte Carlo method, rejection sampling, can be implemented to sample according to a metadata target domain. This requires a target and a proposal metadata distribution. The proposal metadata distribution is a parametric model in the form of a Gaussian mixture model learned from the training metadata; the parametric model of the target distribution is learned in a similar manner, but from a target dataset. In this way, only the training images whose metadata is most similar to the target metadata distribution are sampled. This sampling approach was employed and tested with a 2D object detector: Faster R-CNN with a ResNet-50 backbone. The rejection sampling approach was tested against conventional uniform random sampling and a classical active learning baseline: min-entropy sampling. Performance was measured and compared on two different target metadata distributions inferred from a specific target dataset. With a labeling budget of 2% per cycle, the maximum mean average precision at 0.5 intersection over union on the target set was calculated for each cycle. My proposed approach has a 40% relative performance advantage over uniform random sampling in the first cycle, and 10% after 9 cycles. Overall, my approach required only 37% of the labeled data to beat the next best tested sampler: conventional uniform random sampling.
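The sketch below shows the rejection sampling step described above, with Gaussian mixture models standing in for the proposal (training) and target metadata distributions; the two-dimensional metadata features, component counts, and data are hypothetical:

```python
# Rejection sampling of training images toward a target metadata domain.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train_meta = rng.normal([0, 0], [2, 2], size=(5000, 2))    # e.g. (sun angle, distance)
target_meta = rng.normal([1.5, -1.0], [0.5, 0.5], size=(300, 2))

proposal = GaussianMixture(n_components=4, random_state=0).fit(train_meta)
target = GaussianMixture(n_components=2, random_state=0).fit(target_meta)

# Estimate the envelope constant M >= p_target / p_proposal over the pool.
ratio = np.exp(target.score_samples(train_meta)
               - proposal.score_samples(train_meta))
M = ratio.max()

# Accept each image with probability proportional to the density ratio.
accept = rng.uniform(size=len(train_meta)) < ratio / M
print(f"accepted {accept.sum()} of {len(train_meta)} images for labeling")
```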
215

Nuevas contribuciones a la teoría y aplicación del procesado de señal sobre grafos [New contributions to the theory and application of graph signal processing]

Belda Valls, Jordi 16 January 2023
Graph signal processing is an emerging field of techniques that combine concepts from two highly consolidated areas: signal processing and graph theory. From the perspective of signal processing, a much more general definition of a signal can be obtained by assigning each of its values to a vertex of a graph; conventional signals can be considered particular cases in which the sample values are assigned to a uniform (temporal or spatial) grid. From the perspective of graph theory, new transformations of the graph can be defined that extend classical signal processing concepts such as filtering, prediction, and spectral analysis. Furthermore, graph signal processing is finding new applications in detection and classification due to its flexibility in modeling general dependencies between variables. This thesis makes new contributions to graph signal processing. First, it considers the problem of estimating the Laplacian matrix associated with a graph, which determines the relationships between nodes. Conventional methods are based on the precision matrix, where Gaussianity is implicitly assumed. This thesis proposes new methods to estimate the Laplacian matrix from partial correlations, assuming two different non-Gaussian models in the observation space: Gaussian mixtures and independent component analysis. The proposed methods have been tested with simulated data and with real data in selected biomedical applications. It is demonstrated that better estimates of the Laplacian matrix can be obtained with the new methods in cases where Gaussianity is not a correct assumption. The problem of generating synthetic signals in scenarios where the scarcity of real signals can be an issue has also been considered. Graph models allow more general pairwise dependence models between signal samples; thus, a new method based on the complex graph Fourier transform and on the concept of surrogate data is proposed. It has been applied to the challenging problem of hand gesture recognition, where it is demonstrated that extending the original training set with graph surrogate replicas significantly improves the accuracy of the hand gesture classifier. / Belda Valls, J. (2022). Nuevas contribuciones a la teoría y aplicación del procesado de señal sobre grafos [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/191333
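For context, here is a minimal sketch of the conventional, Gaussianity-based route from data to a graph Laplacian via the precision matrix and partial correlations, i.e., the baseline that the thesis's Gaussian-mixture and ICA-based estimators improve upon; the data and the 0.1 edge threshold are illustrative:

```python
# Graph Laplacian from partial correlations (Gaussian baseline).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))             # 1000 observations of 5 graph nodes
X[:, 1] += 0.8 * X[:, 0]                   # induce dependence between nodes 0 and 1

theta = np.linalg.inv(np.cov(X, rowvar=False))           # precision matrix
d = np.sqrt(np.diag(theta))
partial_corr = -theta / np.outer(d, d)                   # partial correlations
np.fill_diagonal(partial_corr, 0.0)

W = np.abs(partial_corr) * (np.abs(partial_corr) > 0.1)  # threshold weak edges
L = np.diag(W.sum(axis=1)) - W                           # combinatorial Laplacian
print(np.round(L, 2))
```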
216

Combining Subject Expert Experimental Data with Standard Data in Bayesian Mixture Modeling

Xiong, Hui 26 September 2011
No description available.
217

Speaker Diarization System for Call-center data

Li, Yi January 2020
To answer the question of who spoke when, speaker diarization (SD) is a critical step for many speech applications in practice. The task of this project is to build an MFCC-based speaker diarization system on top of a speaker verification (SV) system, an existing call-center application that checks a customer's identity from a phone call. Our speaker diarization system uses 13-dimensional MFCCs as features and performs voice activity detection (VAD), segmentation, linear clustering, and hierarchical clustering based on GMMs and the BIC score. By applying it, we decrease the equal error rate (EER) of the SV system from 18.1% in the baseline experiment to 3.26% on general call-center conversations. To better analyze and evaluate the system, we also simulated a set of call-center data based on the public ICSI corpus.
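A sketch of the BIC-based merge test at the heart of the hierarchical clustering stage, under common assumptions (a full-covariance single Gaussian per segment, penalty weight lambda = 1) rather than the exact configuration of this system:

```python
# Delta-BIC test: merge two segments when one Gaussian explains both well.
import numpy as np

def delta_bic(x, y, lam=1.0):
    """Positive value favors merging segments x and y (frames x dims)."""
    z = np.vstack([x, y])
    n, d = z.shape
    logdet = lambda s: np.linalg.slogdet(np.cov(s, rowvar=False))[1]
    # Likelihood gain of separate models over the pooled model
    gain = 0.5 * (n * logdet(z) - len(x) * logdet(x) - len(y) * logdet(y))
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
    return penalty - gain   # > 0: likely the same speaker, merge

rng = np.random.default_rng(0)
a = rng.normal(0, 1, (200, 13))    # 13-dim MFCC frames, speaker A
b = rng.normal(0, 1, (150, 13))    # same distribution -> should merge
c = rng.normal(3, 1, (150, 13))    # different speaker -> should not
print(delta_bic(a, b) > 0, delta_bic(a, c) > 0)   # True False
```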
218

Exploring Single-molecule Heterogeneity and the Price of Cell Signaling

Wang, Tenglong 25 January 2022
No description available.
219

Unraveling Complexity: Panoptic Segmentation in Cellular and Space Imagery

Emanuele Plebani 03 June 2024
Advancements in machine learning, especially deep learning, have facilitated the creation of models capable of performing tasks previously thought impossible. This progress has opened new possibilities across diverse fields such as medical imaging and remote sensing. However, the performance of these models relies heavily on the availability of extensive labeled datasets. Collecting large amounts of labeled data poses a significant financial burden, particularly in specialized fields like medical imaging and remote sensing, where annotation requires expert knowledge. To address this challenge, various methods have been developed to mitigate the need for labeled data or to leverage the information contained in unlabeled data. These include self-supervised learning, few-shot learning, and semi-supervised learning. This dissertation centers on the application of semi-supervised learning to segmentation tasks.

We focus on panoptic segmentation, a task that combines semantic segmentation (assigning a class to each pixel) and instance segmentation (grouping pixels into different object instances). We choose two segmentation tasks in different domains: nerve segmentation in microscopic imaging and hyperspectral segmentation in satellite images of Mars. Our study reveals that, while direct application of methods developed for natural images may yield low performance, targeted modifications or the development of robust models can provide satisfactory results, thereby unlocking new applications like machine-assisted annotation of new data.

This dissertation begins with a challenging panoptic segmentation problem in microscopic imaging, systematically exploring model architectures to improve generalization. Subsequently, it investigates how semi-supervised learning may mitigate the need for annotated data. It then moves to hyperspectral imaging, introducing a hierarchical Bayesian model (HBM) to robustly classify single pixels. Key contributions include developing a state-of-the-art U-Net model for nerve segmentation, improving the model's ability to segment different cellular structures, evaluating semi-supervised learning methods in the same setting, and proposing the HBM for hyperspectral segmentation. The dissertation also provides a dataset of labeled CRISM pixels and mineral detections, and a software toolbox implementing the full HBM pipeline, to facilitate the development of new models.
220

Perfect simulation in stochastic geometry

Sadil, Antonín January 2010
Perfect simulations are methods which convert suitable Markov chain Monte Carlo (MCMC) algorithms into algorithms that return exact draws from the target distribution, instead of approximations based on long-time convergence to equilibrium. In recent years, a variety of perfect simulation algorithms have been developed. This work provides a unified exposition of some perfect simulation algorithms, with applications to spatial point processes, especially the Strauss process and the area-interaction process. The described algorithms and their properties are compared theoretically and in a simulation study.
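As a minimal illustration of the flavor of these algorithms, the sketch below implements coupling from the past (Propp-Wilson) for a small monotone Markov chain; spatial point processes such as the Strauss process require the dominated variants discussed in the thesis, so this toy chain is only an illustrative stand-in. If the chains started from the minimal and maximal states at time -T have coalesced by time 0, their common state is an exact draw from the stationary distribution.

```python
# Coupling from the past on a monotone random walk over {0,...,4}.
import random

def update(x, u):
    """Monotone update driven by shared noise u (same u for all chains)."""
    if u < 0.3:
        return max(x - 1, 0)
    if u < 0.7:
        return min(x + 1, 4)
    return x

def cftp(seed=0):
    rng = random.Random(seed)
    us = []            # noise for times -T..-1, earliest first
    T = 1
    while True:
        # Extend further into the past, reusing the noise already drawn
        us = [rng.random() for _ in range(T - len(us))] + us
        lo, hi = 0, 4                  # extremal chains started at time -T
        for u in us:
            lo, hi = update(lo, u), update(hi, u)
        if lo == hi:                   # coalescence: exact stationary draw
            return lo
        T *= 2                         # otherwise restart from twice as far back

print([cftp(seed=s) for s in range(10)])
```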
