
Alocação de tarefas de desastre na plataforma RMASBench : uma abordagem baseada em passagem de mensagens e formação de grupos / Allocation of disaster tasks in the RMASBench platform : an approach based on message passing and group formation

Corrêa, Abel January 2015 (has links)
In urban disaster environments, groups of rescue agents must solve tasks so as to minimize the damage inflicted on a city. Such environments are dynamic and partially observable, with features covering spatial distance, the amount of resources, the difficulty of each disaster task, and the capability of an agent to handle it. Communication between agents can be noisy or non-existent. Multiagent systems are designed to solve complex, large-scale problems that are beyond the capability of any single agent. In this context, agents are autonomous computational elements, each responsible for a piece of the solution. Agents are situated in an environment and may have social ability, interacting with other agents to solve tasks. The urban disaster domain is commonly formalized as a task allocation problem and modelled as a constraint optimization problem distributed among heterogeneous agents, in which each agent chooses the tasks that maximize its individual utility or minimize its individual cost. This master's thesis proposes a model for forming groups of agents based on minimizing a distance metric. The model, called Formation of Groups by Minimization of Distance (FGMD), is formalized as a distributed constraint optimization problem solved with message-passing algorithms; its agents are autonomous and self-organize without centralized control. We applied FGMD to the RMASBench platform, a simulator for urban disaster situations, and compared it with the most recent message-passing algorithms, verifying that FGMD uses less non-parallel computation. With respect to minimizing damage to the city, we show that it can obtain better results than state-of-the-art approaches with a slight increase in computational effort.
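A minimal sketch of the group-formation idea in Python: agents gravitate toward the nearest task and each resulting group is checked against the task's difficulty. The agent fields, the capability rule, and the greedy assignment are illustrative assumptions, not the FGMD specification, which couples group formation with distributed message passing.

```python
import math

# Hypothetical illustration of distance-based group formation for
# disaster tasks. Names and rules here are assumptions, not FGMD.
agents = [{"id": 0, "pos": (0, 0), "capability": 3},
          {"id": 1, "pos": (5, 1), "capability": 2},
          {"id": 2, "pos": (1, 4), "capability": 4}]
tasks  = [{"id": "fire_A", "pos": (1, 1), "difficulty": 5},
          {"id": "fire_B", "pos": (6, 2), "difficulty": 2}]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Each agent joins the group of its nearest task; each group then
# checks whether its combined capability covers the task difficulty.
groups = {t["id"]: [] for t in tasks}
for a in agents:
    nearest = min(tasks, key=lambda t: dist(a["pos"], t["pos"]))
    groups[nearest["id"]].append(a)

for t in tasks:
    cap = sum(a["capability"] for a in groups[t["id"]])
    status = "covered" if cap >= t["difficulty"] else "under-resourced"
    print(t["id"], [a["id"] for a in groups[t["id"]]], status)
```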

Graphical models and point set matching / Modelos Gráficos e Casamento de Padrões de Pontos

Caetano, Tiberio Silva January 2004 (has links)
Point pattern matching in Euclidean spaces is one of the fundamental problems in pattern recognition, with applications ranging from computer vision to computational chemistry. Whenever two complex patterns are encoded by two sets of points identifying their key features, their comparison can be seen as a point pattern matching problem. This work proposes a unified approach to both exact and inexact point set matching in Euclidean spaces of arbitrary dimension. In the case of exact matching, the method is guaranteed to find an optimal solution; for inexact matching (when noise is involved), experimental results confirm the validity of the approach. We start by regarding point pattern matching as a weighted graph matching problem, which we then formulate as Bayesian inference in a probabilistic graphical model. By exploiting fundamental constraints that hold for patterns embedded in Euclidean spaces, we prove that for exact point set matching a simple graphical model is equivalent to the full model. Exact probabilistic inference in this simple model has polynomial time complexity with respect to the number of elements in the patterns to be matched. This gives rise to a technique that, for exact matching, provably finds a global optimum in polynomial time for any dimensionality of the underlying Euclidean space. Computational experiments comparing this technique with well-known probabilistic relaxation labeling show significant performance improvement for inexact matching, and the proposed approach is significantly more robust as the size of the involved patterns grows. In the absence of noise, the results are always perfect.
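To make the exploited constraint concrete, here is a brute-force sketch in Python: an exact match must preserve all pairwise Euclidean distances. The enumeration below only illustrates that invariance; the thesis obtains polynomial time via inference in the simplified graphical model, not by enumerating permutations.

```python
import itertools
import numpy as np

# Illustration only: a correspondence is an exact match iff it
# preserves every pairwise Euclidean distance between the points.
def is_exact_match(P, Q, perm, tol=1e-9):
    """True if mapping P[i] -> Q[perm[i]] preserves pairwise distances."""
    for i, j in itertools.combinations(range(len(P)), 2):
        dP = np.linalg.norm(P[i] - P[j])
        dQ = np.linalg.norm(Q[perm[i]] - Q[perm[j]])
        if abs(dP - dQ) > tol:
            return False
    return True

P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
Q = P[[2, 0, 1]] + np.array([5.0, -3.0])   # permuted, translated copy

match = next(p for p in itertools.permutations(range(3))
             if is_exact_match(P, Q, p))
print("correspondence:", match)   # maps each point of P to its copy in Q
```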

Desenvolvimento de modelos de causalidade com informações de QTLs para estudo do relacionamento de caracteres fenotípicos relativos à absorção de fósforo em milho / Development of causal models with QTL information to the study of relationship among traits associated with phosphorus uptake in maize

Adriana Cheavegatti Gianotto 26 March 2015 (has links)
Modern QTL mapping approaches are multivariate and take advantage of the phenotypic covariance matrix to improve estimates of QTL positions and effects. However, phenotypic correlation can also arise from causal relationships among phenotypes, and even modern multivariate QTL analyses do not take these relationships into account. Structural equation models and graphical models are the main methodologies for studying causality in observational data. We studied a set of phenotypes related to root morphology, biomass accumulation, and phosphorus acquisition in maize. These phenotypes were measured in a maize population from the EMBRAPA Milho e Sorgo breeding program, composed of 145 recombinant inbred lines (RILs) derived from a cross between two lines divergent in phosphorus acquisition efficiency. QTL mapping for the traits was performed using univariate (MIM) and multivariate (MT-MIM) multiple interval mapping. The MIM analysis revealed QTL affecting root diameter, fine-root surface area, shoot dry weight, and root dry weight. The MT-MIM analysis revealed 12 QTL with different pleiotropy patterns and with marginal effects on all seven studied traits. A causal model for the phenotypes was developed using prior knowledge and structural equation modelling. The structural equation model showed a unidirectional causal flow among the variables, with root morphology traits exerting causal effects on biomass traits, which in turn affect phosphorus acquisition traits. The PC algorithm, an automated search for causal models based on conditional independence patterns, was unable to orient all the causal relationships it discovered, but it revealed a more intricate structure than the structural equation model, with potential causal feedback loops among the traits. Causal search algorithms based on QTL information (QDG and QPSO) allowed the orientation of all causal relationships detected by the PC algorithm and confirmed the presence of two neighbouring causal cycles among the studied traits. As a general rule, the pleiotropic QTL detected by the MT-MIM approach exerted effects on traits consistent with the causal models discovered by the PC and QDG algorithms, suggesting that some of the detected QTL effects are indirect effects of QTL located upstream in the proposed causal model. Combining the MT-MIM approach with causal analysis allowed a better understanding of the genetic architecture underlying root morphology, biomass accumulation, and phosphorus acquisition traits in maize.
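As a concrete illustration of the kind of test the PC algorithm runs, the sketch below implements a standard partial-correlation conditional-independence test (Fisher's z) on simulated data with a root-to-biomass-to-phosphorus chain. The variable names and the simulation are illustrative assumptions, not the study's data or code.

```python
import numpy as np
from scipy import stats

def partial_corr(data, i, j, cond):
    """Partial correlation of columns i, j given the columns in cond."""
    idx = [i, j] + list(cond)
    prec = np.linalg.pinv(np.corrcoef(data[:, idx], rowvar=False))
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

def ci_test(data, i, j, cond, alpha=0.05):
    """True if X_i is judged independent of X_j given X_cond."""
    n, r = data.shape[0], partial_corr(data, i, j, cond)
    z = 0.5 * np.log((1 + r) / (1 - r))          # Fisher's z-transform
    stat = np.sqrt(n - len(cond) - 3) * abs(z)
    return 2 * (1 - stats.norm.cdf(stat)) > alpha

rng = np.random.default_rng(0)
root = rng.normal(size=500)                  # e.g., a root trait
biomass = 0.8 * root + rng.normal(size=500)  # caused by root
phosphorus = 0.7 * biomass + rng.normal(size=500)
X = np.column_stack([root, biomass, phosphorus])

print(ci_test(X, 0, 2, []))    # False: root and P uptake correlate
print(ci_test(X, 0, 2, [1]))   # True: independent given biomass (chain)
```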

Lois de Wishart sur les cônes convexes / Wishart laws on convex cones

Mamane, Salha 20 March 2017 (has links)
In high-dimensional multivariate analysis, the Wishart distributions defined in the context of graphical models are of great importance because they provide parsimony and modularity. In the framework of Gaussian graphical models governed by a graph G, Wishart distributions can be defined on two alternative restrictions of the cone of symmetric positive definite matrices: the cone PG of symmetric positive definite matrices x satisfying xij = 0 for all non-adjacent vertices i and j, and its dual cone QG. In this thesis, we provide a harmonious construction of Wishart exponential families on the cones PG and QG. Our simple method is based on analysis on convex cones. The focus is on nearest-neighbour interaction graphical models, governed by a graph An, which have the advantage of being relatively simple while including examples of all the particular cases of interest: the univariate case, a symmetric cone case, a non-symmetric homogeneous cone case, and an infinite number of non-homogeneous cone cases. The Wishart distributions on QAn are constructed as the exponential family generated by the gamma function on QAn, and the Wishart distributions on PAn as the corresponding Diaconis-Ylvisaker conjugate family. The developed methods are then used to solve the Letac-Massam conjecture on the set of parameters of type I Wishart distributions on QAn. Finally, we introduce and study exponential families of distributions parametrized by a segment of means, with an emphasis on their Fisher information; the focus is on distributions with matrix parameters, and the particular cases of Gaussian and Wishart exponential families are examined further.
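For orientation, the classical full-cone case (the complete graph) of this construction can be written out explicitly; on the cones QAn and PAn of the thesis, the determinant and the Gamma function are replaced by graph-adapted analogues. A sketch in LaTeX:

```latex
% Classical full-cone case: \Omega is the cone of n x n symmetric
% positive definite matrices; the Gamma function of the cone
% generates the Wishart exponential family.
\[
  \Gamma_{\Omega}(s) \;=\; \int_{\Omega} e^{-\operatorname{tr}(x)}
    (\det x)^{\,s - \frac{n+1}{2}} \, dx ,
  \qquad s > \tfrac{n-1}{2},
\]
\[
  f_{s,\sigma}(x) \;=\; \frac{(\det \sigma)^{s}}{\Gamma_{\Omega}(s)}
    \, (\det x)^{\,s - \frac{n+1}{2}} \, e^{-\operatorname{tr}(\sigma x)} ,
  \qquad x \in \Omega .
\]
```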

Study on the Use of Vision and Laser Range Sensors with Graphical Models for the SLAM Problem / Étude sur l'exploitation de la vision et d'un télémètre laser avec des modèles graphiques probabilistes appliqués au problème de la cartographie et localisation simultanées

Paiva Mendes, Ellon 12 July 2017 (has links)
A strong requirement for deploying autonomous mobile robots is their capacity to localize themselves with a certain precision relative to their environment. Localization exploits data gathered by sensors that either observe the inner states of the robot, such as acceleration and speed, or observe the environment, such as cameras and Light Detection and Ranging (LIDAR) sensors. The use of environment sensors has triggered the development of localization solutions that jointly estimate the robot position and the position of elements in the environment, referred to as Simultaneous Localization and Mapping (SLAM) approaches. To handle the noise inherent in sensor data, SLAM solutions are implemented in a probabilistic framework. Early developments were based on Extended Kalman Filters, while more recent developments use probabilistic graphical models to model the estimation problem and solve it through optimization. This thesis exploits the latter approach to develop two distinct techniques for autonomous ground vehicles: one using monocular vision, the other using LIDAR. The lack of depth information in camera images has fostered the use of specific landmark parametrizations that isolate the unknown depth in one variable, concentrating its large uncertainty into a single parameter. One of these parametrizations, the Parallax Angle Parametrization, was originally introduced in the context of the Bundle Adjustment problem, which processes all the gathered data in a single global optimization step. We present how to exploit this parametrization in an incremental graph-based SLAM approach in which robot motion measures are also incorporated. LIDAR sensors can be used to build odometry-like localization solutions by sequentially registering the point clouds acquired along a robot trajectory. We define a graphical-model layer on top of a LIDAR odometry layer that uses the Iterative Closest Points (ICP) algorithm as the registration technique. Keyframes are defined along the robot trajectory, and ICP results are used to build a pose graph, which is used to solve an optimization problem that corrects the whole robot trajectory and the environment map upon loop closures. After an introduction to the theory of graphical models applied to the SLAM problem, the manuscript presents these two approaches. Simulated and experimental results illustrate the developments throughout the manuscript, using classic and in-house datasets.
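A minimal one-dimensional pose-graph sketch (an illustrative setup, not the thesis code) shows the principle: odometry-like constraints chain poses together, a loop closure ties the trajectory back on itself, and least-squares optimization redistributes the accumulated drift, the same mechanism the LIDAR pose graph applies to ICP-derived relative poses.

```python
import numpy as np

# Relative measurements: (from, to, measured displacement).
edges = [(0, 1, 1.1), (1, 2, 1.0), (2, 3, 0.9),
         (3, 0, -2.7)]            # loop closure: should return to start

n = 4
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for k, (i, j, z) in enumerate(edges):
    A[k, i], A[k, j], b[k] = -1.0, 1.0, z   # residual: x_j - x_i - z
A[-1, 0] = 1.0                              # gauge: anchor x_0 = 0

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))   # drift is spread over the whole trajectory
```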

Semantic Description of Activities in Videos

Dias Moreira De Souza, Fillipe 07 April 2017 (has links)
Describing human activities in videos requires not only detecting actions and objects but also identifying their active semantic relationships in the scene. Towards this broader goal, we present a combinatorial approach that assumes the availability of algorithms for detecting and labeling objects and actions, albeit with some errors. Given these uncertain labels and detected objects, we link them into interpretative structures using domain knowledge encoded with concepts of Grenander's general pattern theory. Here a semantic video description is built from basic units, termed generators, that represent labels of objects or actions. Each generator has multiple out-bonds, each associated with a type of domain semantics, spatial constraints, temporal constraints, or image/video evidence. Generators combine with each other, according to a set of pre-defined combination rules that capture domain semantics, to form larger connected structures known as configurations, which here represent video descriptions. This framework offers a powerful representational scheme, flexible enough to span a space of interpretative structures (configurations) of varying sizes and structural complexity. We impose a probability distribution on the configuration space, with inferences generated using a Markov Chain Monte Carlo-based simulated annealing algorithm. The primary advantage of the approach is that it handles known computer vision challenges (appearance variability, errors in object label annotation, object clutter, simultaneous events, temporal dependency encoding, etc.) without the need for an exponentially large labeled training data set.
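A schematic sketch of the inference step, with an invented toy energy: simulated annealing proposes local relabelings of an interpretation and accepts them by the Metropolis rule. In the actual framework the energy would score how well generator bonds satisfy semantic, spatial, and temporal relations plus video evidence; everything named below is an assumption for illustration.

```python
import math
import random

detections = ["person", "knife?", "cutting?", "bread"]   # uncertain labels
alternatives = {"knife?": ["knife", "pen"], "cutting?": ["cutting", "writing"]}

def energy(config):
    # Toy compatibility score: reward semantically coherent pairs.
    score = 0.0
    if "knife" in config and "cutting" in config:
        score -= 2.0
    if "pen" in config and "writing" in config:
        score -= 2.0
    return score

config = [alternatives.get(d, [d])[0] for d in detections]
T = 2.0
random.seed(1)
for step in range(200):
    i = random.randrange(len(detections))
    options = alternatives.get(detections[i], [config[i]])
    proposal = config[:]
    proposal[i] = random.choice(options)
    dE = energy(proposal) - energy(config)
    if dE < 0 or random.random() < math.exp(-dE / T):
        config = proposal           # Metropolis acceptance
    T *= 0.98                       # annealing schedule
print(config)
```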

Multi-label classification based on sum-product networks / Classificação multi-rótulo baseada em redes soma-produto

Julissa Giuliana Villanueva Llerena 06 September 2017 (has links)
Multi-label classification consists of learning a function capable of mapping an object to a set of relevant labels. It has applications such as associating genes with biological functions, semantic classification of scenes, and text categorization. Traditional (single-label) classification is thus a particular case of multi-label classification in which each object is associated with exactly one label. A successful approach to constructing classifiers is to obtain a probabilistic model of the relation between object attributes and labels. This model can then be used to classify objects, finding the most likely prediction by computing the marginal probability or the most probable explanation (MPE) of the labels given the attributes. Depending on the family of probabilistic models chosen, such inferences may be intractable when the number of labels is large. Sum-Product Networks (SPNs) are deep probabilistic models that allow tractable marginal inference. Nevertheless, as in many other probabilistic models, MPE inference in SPNs is NP-hard. Although SPNs have already been used successfully for traditional (single-label) classification tasks, there has been no in-depth investigation of SPNs for multi-label classification. In this work we investigate the use of SPNs for multi-label classification. We compare several algorithms for learning SPNs combined with different proposed approaches to classification. We show that SPN-based multi-label classifiers are competitive with state-of-the-art classifiers, such as Random k-Labelsets with Support Vector Machines and MPE inference on CutNets, on a collection of benchmark datasets.
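A hand-built toy SPN over two binary variables shows why marginal inference stays tractable: marginalizing a variable amounts to setting its indicator leaves to 1, and evaluation remains a single bottom-up pass. The structure and weights below are illustrative assumptions, not a learned network.

```python
# Leaves are indicators; sum nodes mix weighted children; product
# nodes combine children over disjoint variable scopes.
def leaf(var, val):
    # evidence maps var -> observed value, or None for "marginalized"
    return lambda ev: 1.0 if ev.get(var) is None else float(ev[var] == val)

def product(*children):
    def f(ev):
        r = 1.0
        for c in children:
            r *= c(ev)
        return r
    return f

def weighted_sum(pairs):
    return lambda ev: sum(w * c(ev) for w, c in pairs)

x1, nx1 = leaf("X1", 1), leaf("X1", 0)
x2, nx2 = leaf("X2", 1), leaf("X2", 0)
spn = weighted_sum([(0.6, product(x1, x2)),
                    (0.4, product(nx1, weighted_sum([(0.3, x2), (0.7, nx2)])))])

print(spn({"X1": 1, "X2": 1}))        # joint P(X1=1, X2=1) = 0.6
print(spn({"X1": 0, "X2": None}))     # marginal P(X1=0)    = 0.4
```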

Estimating Dependence Structures with Gaussian Graphical Models : A Simulation Study in R / Beroendestruktur Skattning med Gaussianska Grafiska Modeller : En Simuleringsstudie i R

Angelchev Shiryaev, Artem, Karlsson, Johan January 2021 (has links)
Graphical models are powerful tools for estimating complex dependence structures among large sets of variables. This thesis restricts its scope to undirected Gaussian graphical models. An initial predefined sparse precision matrix was specified to generate multivariate normally distributed data. Using the generated data, a simulation study was conducted reviewing the accuracy, sensitivity, and specificity of the estimated precision matrix. The graphical LASSO was applied using four different packages available in R, with seven selection criteria for estimating the tuning parameter. The findings are mostly in line with previous research: the graphical LASSO is generally faster and feasible in high dimensions, in contrast to stepwise model selection, and a portion of the selection methods for the tuning parameter recovered the true network structure. The results provide an estimate of how well each model recovers the true, predefined dependence structure featured in our simulation. As the simulated data used in this thesis is merely an approximation of real-world data, the results should not be the only consideration when choosing a model.
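The thesis works in R; an analogous workflow can be sketched in Python with scikit-learn (a stand-in, not one of the four R packages compared): simulate Gaussian data from a sparse precision matrix, fit the graphical LASSO with a cross-validated tuning parameter, and score the recovered edges for sensitivity and specificity.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV
from sklearn.datasets import make_sparse_spd_matrix

rng = np.random.default_rng(42)
p, n = 10, 500
# True sparse precision matrix defines the dependence graph.
prec_true = make_sparse_spd_matrix(p, alpha=0.8, random_state=42)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec_true), size=n)

# Graphical LASSO with internal CV choosing the tuning parameter.
model = GraphicalLassoCV().fit(X)
est_edges = (np.abs(model.precision_) > 1e-4) & ~np.eye(p, dtype=bool)
true_edges = (np.abs(prec_true) > 1e-4) & ~np.eye(p, dtype=bool)

tp = (est_edges & true_edges).sum()
sensitivity = tp / true_edges.sum()
specificity = (~est_edges & ~true_edges).sum() / (~true_edges).sum()
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```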

Structured Prediction: Statistical and Computational Guarantees in Learning and Inference

Kevin Segundo Bello Medina 28 July 2021 (has links)
Structured prediction consists of receiving a structured input and producing a combinatorial structure such as a tree, clustering, network, sequence, or permutation. From the computational viewpoint, structured prediction is in general considered intractable because the size of the output space is exponential in the input size. For instance, in image segmentation tasks, the number of admissible segments is exponential in the number of pixels. A second factor is the input dimensionality combined with the amount of available data: in structured prediction it is common for the input to live in a high-dimensional space, which involves jointly reasoning about thousands or millions of variables while contending with a limited amount of data. Thus, learning and inference methods with strong computational and statistical guarantees are desired. The focus of our research is to propose principled methods for structured prediction that are both polynomial time, i.e., computationally efficient, and require a polynomial number of data samples, i.e., statistically efficient.

The main contributions of this thesis are as follows:

(i) We develop an efficient and principled method for learning latent-variable models for structured prediction under Gaussian perturbations. We derive a Rademacher-based generalization bound and argue that the use of non-convex formulations in learning latent-variable models leads to tighter bounds on the Gibbs decoder distortion.

(ii) We study the fundamental limits of structured prediction, i.e., we characterize the sample complexity necessary for learning factor graph models in the context of structured prediction. In particular, we show that finiteness of our novel MaxPair-dimension is necessary for learning. Lastly, we show a connection between the MaxPair-dimension and the VC-dimension, which allows existing results on the VC-dimension to be used to calculate the MaxPair-dimension.

(iii) We analyze a generative model based on connected graphs and find the structural conditions of the graph that allow exact recovery of the node labels. In particular, we show that exact recovery is achievable in polynomial time for a large class of graphs. Our analysis is based on convex relaxations: we thoroughly analyze a semidefinite program and a degree-4 sum-of-squares program. Finally, we extend this model to incorporate linear constraints (e.g., fairness) and formally explain the effect of the added constraints on the probability of exact recovery.
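The tension named above can be made concrete with a toy chain model: the output space contains K^L label sequences, yet dynamic programming (Viterbi) finds the highest-scoring one in O(L·K²) time. The scores below are arbitrary stand-ins; this illustrates tractable inference on a chain factor graph, not any specific model from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 6, 4                           # sequence length, labels per site
unary = rng.normal(size=(L, K))       # per-position label scores
pairwise = rng.normal(size=(K, K))    # label-transition scores

# Viterbi: best[l, k] = max score of a prefix ending with label k at l.
best = unary.copy()
back = np.zeros((L, K), dtype=int)
for l in range(1, L):
    scores = best[l - 1][:, None] + pairwise + unary[l][None, :]
    back[l] = scores.argmax(axis=0)   # best previous label per current
    best[l] = scores.max(axis=0)

seq = [int(best[-1].argmax())]        # backtrack the optimal sequence
for l in range(L - 1, 0, -1):
    seq.append(int(back[l, seq[-1]]))
print("best of", K ** L, "structures:", seq[::-1])
```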

Cognitive Modeling of high-level cognition through Discrete State Dynamic processes

D'Alessandro, Marco 17 February 2021 (has links)
Modeling complex cognitive phenomena is a challenging task, especially when one must account for a cognitive system interacting with an uncertain and changing environment. Psychometrics offers a heterogeneous corpus of computational tools for inferring latent cognitive constructs from observed behavioral outcomes. However, there is no explicit consensus regarding the optimal way to take into account the intrinsic dynamic properties of the environment, as well as the dynamic nature of cognitive states. In the present dissertation, we explore the potential of discrete-state dynamic models to formally account for the unfolding of cognitive sub-processes in changing task environments. In particular, we propose Probabilistic Graphical Models (PGMs) as an ideal and unifying mathematical language for representing cognitive dynamics as structured graphs, codifying (causal) relationships between cognitive sub-components that unfold in discrete time. We present several works demonstrating the advantages and representational power of this modeling framework, providing dynamic models of cognition specified at different levels of abstraction.
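A minimal example of the ingredients involved (illustrative, not a model from the dissertation): a two-state hidden Markov model in which a discrete cognitive state evolves over trials and a forward pass updates the belief about it from behavioral responses. The state names and probabilities are assumptions.

```python
import numpy as np

T = np.array([[0.9, 0.1],     # state-transition probabilities
              [0.3, 0.7]])    # states: "engaged", "lapsed" (invented)
E = np.array([[0.8, 0.2],     # P(correct | state), P(error | state)
              [0.4, 0.6]])
belief = np.array([0.5, 0.5])

responses = [0, 0, 1, 1, 0]   # 0 = correct, 1 = error
for r in responses:
    belief = T.T @ belief               # predict next state
    belief = belief * E[:, r]           # weight by trial likelihood
    belief = belief / belief.sum()      # normalize
    print(np.round(belief, 3))          # P(engaged), P(lapsed)
```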
