61

Probabilistic Estimation of Unobserved Process Events

Rogge-Solti, Andreas January 2014 (has links)
Organizations strive to gain competitive advantages and to increase customer satisfaction. To ensure the quality and efficiency of their business processes, they perform business process management. An important part of process management that happens on the daily operational level is process controlling. A prerequisite of controlling is process monitoring, i.e., keeping track of the performed activities in running process instances. Only by process monitoring can business analysts detect delays and react to deviations from the expected or guaranteed performance of a process instance. To enable monitoring, process events need to be collected from the process environment. When a business process is orchestrated by a process execution engine, monitoring is available for all orchestrated process activities. Many business processes, however, do not lend themselves to automatic orchestration, e.g., because of required freedom of action. This situation is often encountered in hospitals, where most business processes are enacted manually. Hence, in practice it is often inefficient or infeasible to document and monitor every process activity. Additionally, manual process execution and documentation are prone to errors, e.g., documentation of activities can be forgotten. Thus, organizations face the challenge of process events that occur but are not observed by the monitoring environment. These unobserved process events can serve as a basis for operational process decisions, even without exact knowledge of when they happened or when they will happen. An exemplary decision is whether to invest more resources to manage the timely completion of a case, anticipating that the process end event will occur too late. This thesis offers means to reason about unobserved process events in a probabilistic way. We address decisive questions of process managers (e.g., "when will the case be finished?", or "when did we perform the activity that we forgot to document?"). As the main contribution, we introduce an advanced probabilistic model to business process management that is based on a stochastic variant of Petri nets. We present a holistic approach to use the model effectively along the business process lifecycle: we provide techniques to discover such models from historical observations, to predict the termination time of processes, and to ensure documentation quality by detecting missing data. We propose mechanisms to optimize the monitoring and prediction configuration, i.e., to offer guidance in selecting important activities to monitor. An open-source prototype implementation is provided as a proof of concept. For evaluation, we compare the accuracy of the approach with that of state-of-the-art approaches using real process data of a hospital, and we show its more general applicability in other domains by applying it to process data from logistics and finance.
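The prediction technique rests on a stochastic model of activity durations. As a minimal sketch of the underlying idea, assuming a purely sequential process with independently resampled empirical durations and ignoring the Petri-net structure (choices, parallelism) that the thesis actually models, the remaining time of a case can be estimated by Monte-Carlo simulation; all names and data below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Historical durations (hours) per activity, mined from past cases.
history = {
    "triage":   [0.5, 0.7, 0.4, 0.6],
    "lab_test": [2.0, 3.5, 2.8, 4.1],
    "report":   [1.0, 0.9, 1.3, 1.1],
}

def sample_remaining_time(pending, n=10_000):
    """Monte-Carlo estimate of remaining case time: resample each pending
    activity's duration from its empirical history and sum the draws."""
    return sum(rng.choice(history[a], size=n) for a in pending)

draws = sample_remaining_time(["lab_test", "report"])
print(f"expected remaining time: {draws.mean():.2f} h")
print(f"90% of cases finish within: {np.quantile(draws, 0.9):.2f} h")
```

The second printed quantity is the kind of output an operational decision ("invest more resources?") would be based on.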
62

Jointly integrating current context and social influence for improving recommendation / Intégration simultanée du contexte actuel et de l'influence sociale pour l'amélioration de la recommandation

Bambia, Meriam 13 June 2017 (has links)
Due to the diversity of available content and the variability of users' preferences, real-time prediction of user preferences is increasingly hard for recommender systems. Most existing context-aware approaches use only current time and location, separately, and ignore other contextual information on which users' preferences may undoubtedly depend (e.g., weather, occasion). Furthermore, they fail to consider this contextual information jointly with social interactions between users. Solving the classic recommender-system problems is also of significant importance: the cold-start problem (a new user has no viewing history) and the sparsity problem (too few items co-rated by users with similar preferences) have been addressed by several prior works. In this thesis, we propose an approach that jointly leverages current contextual information and social influence to improve item recommendation. In particular, we propose a probabilistic model that predicts the relevance of items with respect to the user's current context, considering several context elements such as occasion, day of the week, location, and weather. To avoid overconfident probability estimates arising from sparse counts, we apply the Laplace smoothing technique. We further argue that information from social relationships influences users' preferences, and we assume that social influence depends not only on friends' ratings but also on the social similarity between users. We propose a social model that estimates an item's relevance from the social influence surrounding the user; user-friend similarity is established from social interactions between users and their friends (e.g., recommendations, tags, comments) and is integrated through a similarity measure to estimate user preferences. We conducted a comprehensive evaluation on a real dataset crawled from the Pinhole social TV platform, which includes viewer-video access histories and the viewers' friendship networks. For each access, the platform also captures and records the contextual information the viewer was exposed to while watching the video. In our evaluation, we adopt Time-aware Collaborative Filtering, Time-Dependent Profile, and Social Network-aware Matrix Factorization as baseline models, and we evaluate two recommendation tasks: ranked video-list recommendation and video rating prediction. We measure the impact of each viewing-context element on prediction performance and test the ability of our model to cope with data sparsity and viewer cold start.
Experimental results demonstrate that our approach outperforms the time-aware and social network-based baselines, returning consistently accurate predictions at different levels of data sparsity and in cold-start settings.
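The Laplace smoothing step can be made concrete with a toy estimator of P(item | context); the data, names, and single flat context key are illustrative assumptions, not the thesis's actual feature set:

```python
from collections import Counter

# Toy viewing log: (context, item) pairs; in the thesis the context
# combines occasion, day of the week, location, and weather.
log = [
    ("rainy_evening", "movie_a"), ("rainy_evening", "movie_a"),
    ("rainy_evening", "series_b"), ("sunny_morning", "news_c"),
]
items = {"movie_a", "series_b", "news_c"}

def p_item_given_context(item, context, alpha=1.0):
    """Laplace-smoothed estimate of P(item | context): add alpha to every
    count so unseen (context, item) pairs keep a non-zero probability."""
    pair_counts = Counter(log)
    ctx_count = sum(1 for c, _ in log if c == context)
    return (pair_counts[(context, item)] + alpha) / (ctx_count + alpha * len(items))

print(p_item_given_context("news_c", "rainy_evening"))  # unseen pair, still > 0
```

Without the `alpha` terms this unseen pair would get probability zero, which is exactly the degenerate behavior the smoothing avoids.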
63

Análise de resíduos em modelos de regressão von Mises / Analysis of residuals in von Mises regression models

LEAL, Grayci-Mary Gonçalves. 10 July 2018 (has links)
CAPES / Data involving angular measurements are present in the most diverse areas of knowledge. Analyzing them requires a specific and appropriate statistical theory, different from the one used for linear data, particularly when the interest is in formulating, fitting, and running diagnostics on regression models, since in this context the nature of the variable must be considered. In this work, we use von Mises regression models to investigate circular-linear association and present two standardized residuals, obtained from the components of the deviance function, whose probability distributions can be approximated by the standard normal distribution defined for linear data.
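For readers less familiar with the distribution at the core of these models, the von Mises density is f(theta; mu, kappa) = exp(kappa cos(theta - mu)) / (2 pi I0(kappa)). A quick numerical check of that definition against scipy, with purely illustrative parameter values:

```python
import numpy as np
from scipy.stats import vonmises
from scipy.special import i0  # modified Bessel function of order 0

mu, kappa = 0.5, 2.0                      # mean direction (rad), concentration
theta = np.linspace(-np.pi, np.pi, 5)

# Density straight from the definition above.
manual = np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * i0(kappa))
assert np.allclose(manual, vonmises.pdf(theta, kappa, loc=mu))
print(np.round(manual, 4))
```

In a circular-linear regression the mean direction mu is tied to linear covariates through a link function; the residual analysis discussed in the work diagnoses that fit.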
64

Probabilidade para o ensino médio / Probability for high school

José Nobre Dourado Júnior 27 June 2014 (has links)
This work aims to introduce the basic concepts of probability theory and to present notions of some probabilistic models to high-school students. Chapter 1 presents the notions of deterministic experiment, random experiment, sample space, and events, followed by some definitions of probability; these concepts form the basis of the theory. Chapter 2 addresses conditional probability and independence of events, presenting some important theorems that follow from these concepts as well as some of their applications. Chapter 3 presents, in a simple way, some discrete probabilistic models that are quite useful because they effectively model a good number of random experiments, thereby helping in the calculation of the probabilities of their outcomes. Finally, Chapter 4 presents the probabilistic model known as the Poisson distribution, which allows us to calculate the probability that an event occurs in a given time interval or in a given spatial region.
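The Chapter 4 model has a one-line formula, P(X = k) = exp(-lambda) lambda^k / k!. A minimal sketch with an invented classroom-style example (the rate and count are illustrative, not from the work):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson random variable with rate lam."""
    return exp(-lam) * lam**k / factorial(k)

# Calls arrive at a help desk at 3 per minute on average;
# probability of exactly 5 calls in one minute.
print(f"{poisson_pmf(5, 3.0):.4f}")   # ~0.1008
```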
65

O ensino dos modelos probabilísticos discretos no ensino médio / The teaching of discrete probabilistic models in high school

Santana, Jailson Santos 16 April 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / This work aims to support Basic Education teachers by providing detailed material for teaching combinatorial analysis, probability, and probabilistic models, taking into account everyday situations in which mathematical concepts are applied to problem solving. We also propose a teaching sequence on the topics mentioned above so that Basic Education teachers can broaden and diversify their teaching strategies.
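As an example of the kind of problem situation such material works through, combining combinatorial analysis with a discrete probabilistic model (the numbers are purely illustrative, not taken from the work):

```python
from math import comb

# Probability of exactly 2 heads in 4 fair coin tosses,
# via the binomial model C(n, k) * p**k * (1 - p)**(n - k).
n, k, p = 4, 2, 0.5
prob = comb(n, k) * p**k * (1 - p)**(n - k)
print(prob)  # 6 * 0.0625 = 0.375
```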
66

Paralelní evoluční algoritmus EDA využívající teorii kopulí / Parallel Evolutionary Algorithm EDA Based on Copulas

Hyrš, Martin Unknown Date (has links)
In my thesis I deal with the design, implementation, and testing of an advanced parallel Estimation of Distribution Algorithm (EDA) utilizing copula theory to create the probabilistic model. A new population is created by sampling the joint distribution function, which models the current distribution of the subpopulation of promising individuals. The usage of copulas increases the efficiency of learning and sampling the probabilistic model: it can be separated into mutually independent marginal distributions and the copula, which represents the correlations between the variables of the solved problem. This concept motivated the usage of a parallel island architecture, in which the migration of probabilistic models belonging to the individual islands' subpopulations was used instead of the migration of individuals. Statistical tests comparing the proposed algorithm (mCEDA = migrating Copula-based Estimation of Distribution Algorithm) with the algorithms of other authors confirmed the effectiveness of the proposed concept.
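A minimal sketch of the model-build-and-sample core of a copula-based EDA, assuming a Gaussian copula with empirical marginals; the thesis's specific copula family, selection scheme, and island migration are not reproduced here:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def copula_eda_step(selected, n_offspring):
    """One EDA step with a Gaussian copula: marginals stay empirical,
    the dependence structure is a correlation matrix of normal scores."""
    n, d = selected.shape
    # 1. Map each variable to uniforms via ranks, then to normal scores.
    ranks = selected.argsort(axis=0).argsort(axis=0) + 1
    z = norm.ppf(ranks / (n + 1))
    # 2. The copula parameter: correlation of the normal scores.
    corr = np.corrcoef(z, rowvar=False)
    # 3. Sample dependent normals, map back through the empirical quantiles.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_offspring)
    u_new = norm.cdf(z_new)
    return np.column_stack(
        [np.quantile(selected[:, j], u_new[:, j]) for j in range(d)]
    )

pop = rng.normal(size=(30, 2))        # stand-in for selected individuals
offspring = copula_eda_step(pop, 50)
print(offspring.shape)                # (50, 2)
```

The separation in steps 1-3 is exactly the marginals-plus-copula factorization the abstract credits for the efficiency gain, and it is also what makes migrating the model (corr plus marginals) cheaper than migrating individuals.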
67

Interactive Machine Assistance: A Case Study in Linking Corpora and Dictionaries

Black, Kevin P 01 November 2015 (has links) (PDF)
Machine learning can provide assistance to humans in making decisions, including linguistic decisions such as determining the part of speech of a word. Supervised machine learning methods derive patterns indicative of possible labels (decisions) from annotated example data. For many problems, including most language analysis problems, acquiring annotated data requires human annotators who are trained to understand the problem and to disambiguate among multiple possible labels. Hence, the availability of experts can limit the scope and quantity of annotated data. Machine-learned pre-annotation assistance, which suggests probable labels for unannotated items, can enable expert annotators to work more quickly and thus to produce broader and larger annotated resources more cost-efficiently. Yet, because annotated data is required to build the pre-annotation model, bootstrapping is an obstacle to utilizing pre-annotation assistance, especially for low-resource problems where little or no annotated data exists. Interactive pre-annotation assistance can mitigate bootstrapping costs, even for low-resource problems, by continually refining the pre-annotation model with new annotated examples as the annotators work. In practice, continually refining models has seldom been done except for the simplest of models which can be trained quickly. As a case study in developing sophisticated, interactive, machine-assisted annotation, this work employs the task of corpus-dictionary linkage (CDL), which is to link each word token in a corpus to its correct dictionary entry. CDL resources, such as machine-readable dictionaries and concordances, are essential aids in many tasks including language learning and corpus studies. We employ a pipeline model to provide CDL pre-annotations, with one model per CDL sub-task. We evaluate different models for lemmatization, the most significant CDL sub-task since many dictionary entry headwords are usually lemmas. The best performing lemmatization model is a hybrid which uses a maximum entropy Markov model (MEMM) to handle unknown (novel) word tokens and other component models to handle known word tokens. We extend the hybrid model design to the other CDL sub-tasks in the pipeline. We develop an incremental training algorithm for the MEMM which avoids wasting previous computation as would be done by simply retraining from scratch. The incremental training algorithm facilitates the addition of new dictionary entries over time (i.e., new labels) and also facilitates learning from partially annotated sentences which allows annotators to annotate words in any order. We validate that the hybrid model attains high accuracy and can be trained sufficiently quickly to provide interactive pre-annotation assistance by simulating CDL annotation on Quranic Arabic and classical Syriac data.
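A toy sketch of the hybrid known/unknown split and the incremental refinement loop described above; the class name is invented, and a crude suffix rule stands in for the MEMM component that the thesis uses for novel tokens:

```python
from collections import Counter, defaultdict

class HybridLemmatizer:
    """Hybrid pre-annotation: a lookup model for known tokens, a guesser
    for novel ones, refined incrementally as annotators confirm labels."""
    def __init__(self):
        self.counts = defaultdict(Counter)   # token -> lemma frequencies

    def update(self, token, lemma):
        # Incremental refinement: one confirmed annotation at a time,
        # with no retraining from scratch.
        self.counts[token][lemma] += 1

    def predict(self, token):
        if self.counts[token]:
            return self.counts[token].most_common(1)[0][0]
        # Novel token: the thesis applies a MEMM here; a suffix strip
        # is only a placeholder for that component.
        return token.rstrip("s")

model = HybridLemmatizer()
model.update("links", "link")
print(model.predict("links"))   # "link" (known token, lookup)
print(model.predict("words"))   # "word" (novel token, guesser)
```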
68

Understanding, improving, and generalizing generative models

Jolicoeur-Martineau, Alexia 08 1900 (has links)
Generative models are powerful tools to generate samples (e.g., images, music, text) from an unknown distribution given a finite set of examples. Generative models are hard to train successfully, but they have the potential to revolutionize arts, science, and business. These models can generate samples from various data types (e.g., text, images, audio, videos, 3D). In the future, we can envision generative models being used to create movies or episodes of a TV show given a script (possibly also generated by a generative model). One of the most successful methods for generating images is Generative Adversarial Networks (GANs). This approach consists of a game between two players, the Discriminator and the Generator. The goal of the Discriminator is to classify an image as real or fake, while the Generator attempts to fool the Discriminator into thinking that the fake images it generates are real. Through this game, GANs are able to generate very high-quality samples, such as photo-realistic images. Humans are still generally able to distinguish real images (from the training dataset) from fake images (generated by GANs), but the gap is lessening as GANs improve over time. The biggest weakness of GANs is that they have trouble generating diverse data representative of the full range of the data distribution; these methods are also often very unstable to train. Thus, there is still much progress to be made before GANs reach their full potential. New methods performing better than GANs are also appearing; one prime example is score-based diffusion models. This thesis therefore focuses on the generative models that seemed most promising at the time for continuous data generation: GANs and score-based diffusion models. I seek to improve generative models so that they reach their full potential (Objective 1: Improving), to understand these approaches better on a theoretical level (Objective 2: Theoretical understanding), and to generalize them beyond their original setting (Objective 3: Generalizing), allowing the discovery of new connections between different concepts and fields. My first contribution is to propose using a relativistic discriminator, which estimates the probability that a given real sample is more realistic than a randomly sampled fake one. Relativistic GANs form a new class of GAN loss functions that are much more stable with respect to optimization hyperparameters. My second contribution is to take a more rigorous look at relativistic GANs and prove that they are proper statistical divergences. My third contribution is to devise an adversarial variant of denoising score matching, which leads to higher-quality data with score-based diffusion models. My fourth contribution is to significantly improve the speed of score-based diffusion models through a carefully devised Stochastic Differential Equation (SDE) solver.
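The relativistic discriminator of the first contribution admits a compact formulation. The sketch below shows the standard relativistic variant (RSGAN), where the discriminator estimates P(real more realistic than fake) = sigmoid(C(x_r) - C(x_f)); `c_real` and `c_fake` are assumed to be raw pre-sigmoid critic outputs, with random tensors standing in for a real network:

```python
import torch
import torch.nn.functional as F

def rsgan_losses(c_real, c_fake):
    """Relativistic standard GAN losses from raw critic outputs.
    BCE-with-logits against ones computes -log sigmoid(difference)."""
    ones = torch.ones_like(c_real)
    d_loss = F.binary_cross_entropy_with_logits(c_real - c_fake, ones)
    g_loss = F.binary_cross_entropy_with_logits(c_fake - c_real, ones)
    return d_loss, g_loss

c_real = torch.randn(8, 1)   # stand-ins for C(x) on a real batch
c_fake = torch.randn(8, 1)   # ... and on a generated batch
d_loss, g_loss = rsgan_losses(c_real, c_fake)
print(d_loss.item(), g_loss.item())
```

Note how the generator's loss depends on real samples too, unlike in the standard GAN; this coupling is part of what the thesis credits for the improved training stability.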
69

Data analysis of rainfall event characteristics and derivation of flood frequency distribution equations for urban stormwater management purposes

Hassini, Sonia January 2018 (has links)
Further development of the simple and promising analytical probabilistic approach / Urban stormwater management aims at mitigating the adverse impacts of urbanization. Hydrological models are used in support of stormwater management planning and design. Three main approaches can be applied for this modeling purpose: (1) the continuous simulation approach, which is accurate but time-consuming; (2) the design storm approach, which is widely used but whose accuracy depends strongly on the selected antecedent moisture conditions and temporal distribution of design storms; and (3) the analytical probabilistic approach, which is recently developed and not yet used in practice. Although it is time-effective and can produce results as accurate as the other two approaches, the analytical probabilistic approach requires further development to make it more reliable and accurate. For this purpose, three subtopics are investigated in this thesis. (1) Rainfall data analysis as required by the analytical probabilistic approach, with emphasis on testing the exponentiality of rainfall event duration, volume, and interevent time (i.e., the time separating an event from the preceding rainfall event); a goodness-of-fit testing procedure suitable for this kind of data analysis is proposed. (2) Derivation of new analytical probabilistic models for the peak discharge rate, incorporating trapezoidal and triangular hydrograph shapes in order to cover all possible catchment responses. (3) The infiltration process is commonly assumed to continue until the end of the rainfall event; however, the soil may become saturated earlier, and the excess amount then contributes to the runoff volume, with possibly adverse impacts if not taken into consideration. Thus, in addition to infiltration, saturation-excess runoff is included and new flood frequency models are developed. All the models developed in this thesis are tested and compared to methods used in practice, and reasonable results were obtained. / Thesis / Doctor of Philosophy (PhD) / Urban stormwater management aims at mitigating the adverse impacts of urbanization. Hydrological models are used in support of stormwater management planning and design. The analytical probabilistic stormwater management model (APSWM) is a promising tool for planning and design analysis. The purpose of this thesis is to further develop APSWM in order to make it more reliable and accurate. First, a clear procedure for rainfall data analysis as required by APSWM is provided. Second, a new APSWM is derived incorporating other runoff temporal-distribution patterns. Finally, the possibility of the soil layer saturating while it is still raining is added to the model. All the models developed in this thesis are tested and compared to methods used in engineering practice, and reasonable results were obtained.
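The exponentiality assumption at the heart of subtopic (1) can be checked numerically. The sketch below is a generic stand-in, not the tailored procedure the thesis proposes: it fits the exponential scale to synthetic interevent times and runs a Kolmogorov-Smirnov test, with the caveat noted in the comments:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Interevent times (h) between independent rainfall events; the analytical
# probabilistic approach assumes these are exponentially distributed.
interevent = rng.exponential(scale=12.0, size=200)  # stand-in for gauge data

# Fit the scale, then test against the fitted exponential.
# Caveat: estimating the parameter from the same data makes the nominal
# KS p-value optimistic, one reason a tailored procedure is needed.
scale = interevent.mean()
stat, p = stats.kstest(interevent, "expon", args=(0, scale))
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
```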
70

Développement de modèles graphiques probabilistes pour analyser et remailler les maillages triangulaires 2-variétés / Development of probabilistic graphical models to analyze and remesh 2-manifold triangular meshes

Vidal, Vincent 09 December 2011 (has links)
The work in this thesis concerns the structural analysis of 2-manifold triangular meshes and their processing towards quality enhancement (remeshing) or simplification. In existing work, the repositioning of mesh vertices necessary for remeshing is done either locally, or globally but without local control of the introduced geometric error; current results are therefore either not globally optimal or introduce uncontrolled geometric error. Other promising remeshing and approximation techniques are based on a decomposition into simple geometric primitives (planes, cylinders, spheres, etc.), but they generally fail to find the best decomposition, i.e., the one which jointly optimizes the residual geometric error as well as the number and type of selected primitives. To tackle the weaknesses of existing remeshing approaches, we propose a method based on a global model, namely a probabilistic graphical model integrating soft constraints based on geometry (approximation error), mesh quality, and the number of mesh vertices. In the same manner, for segmentation purposes and in order to improve algorithms delivering decompositions into simple primitives, a probabilistic graphical model has been chosen.
The graphical models used in this work are Markov Random Fields, which allow an optimal configuration to be found through the global minimization of an objective function. We propose three contributions concerning 2-manifold triangular meshes: (i) a statistically robust method for extracting feature edges, applicable to mechanical objects; (ii) an algorithm for segmentation into regions approximable by simple geometric primitives, robust to outliers and to noise in the vertex positions; and (iii) an algorithm for mesh optimization which jointly optimizes triangle quality, vertex valence quality, the number of vertices, and the geometric fidelity to the initial surface.
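The MRF machinery can be illustrated on a toy problem far simpler than mesh segmentation: a chain of sites with unary (data) costs plus a pairwise smoothness term, minimized greedily by iterated conditional modes. Everything here is an invented stand-in for the mesh-specific energies of the thesis, which also uses global rather than greedy minimization:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy MRF: 6 sites on a chain, 2 labels; unary costs plus a penalty
# whenever neighbouring sites disagree (the soft-constraint pattern).
unary = rng.random((6, 2))   # cost of assigning label l at site i
smooth = 0.5                 # disagreement penalty between neighbours

def energy(labels):
    data = unary[np.arange(len(labels)), labels].sum()
    pair = smooth * np.sum(labels[:-1] != labels[1:])
    return data + pair

labels = np.zeros(6, dtype=int)
for _ in range(10):          # ICM sweeps: keep a flip if it lowers energy
    for i in range(6):
        for l in (0, 1):
            trial = labels.copy()
            trial[i] = l
            if energy(trial) < energy(labels):
                labels = trial
print(labels, energy(labels))
```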
