21

Target Classification Based on Kinematics / Klassificering av flygande objekt med hjälp av kinematik

Hallberg, Robert January 2012 (has links)
Modern aircraft are getting more and better sensors. As a result, pilots receive more information than they can handle. To solve this problem one can automate the information processing and instead provide the pilots with conclusions drawn from the sensor information. An aircraft's movement can be used to determine which class (e.g. commercial aircraft, large military aircraft or fighter) it belongs to. This thesis focuses on comparing three classification schemes: a Bayesian classification scheme with uniform priors, the Transferable Belief Model, and a Bayesian classification scheme with entropic priors. The target is modeled by a jump Markov linear system that switches between different modes (fly straight, turn left, etc.) over time. A marginalized particle filter that spreads its particles over the possible mode sequences is used for state estimation. Simulations show that the results from the Bayesian classification scheme with uniform priors and the Bayesian classification scheme with entropic priors are almost identical. The results also show that the Transferable Belief Model is less decisive than the Bayesian classification schemes. This effect is argued to come from the least committed principle within the Transferable Belief Model. A fixed-lag smoothing algorithm is introduced to the filter and it is shown that the classification results are improved. The advantage of having a filter that remembers the full mode sequence (such as the marginalized particle filter), rather than only determining the current mode (such as an interacting multiple model filter), is also discussed.
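
To make the recursive Bayesian classification idea concrete, here is a minimal sketch (entirely illustrative, not the thesis's implementation) in which a posterior over three hypothetical target classes is updated from made-up turn-rate likelihoods; a real system would use the mode-sequence likelihoods produced by the marginalized particle filter.

```python
import numpy as np

# Hypothetical classes and a made-up per-class likelihood of an observed
# turn rate (rad/s); the scales below are illustrative assumptions only.
classes = ["commercial", "large_military", "fighter"]

def turn_rate_likelihood(turn_rate, cls):
    """Toy zero-mean Gaussian likelihood of a measured turn rate per class."""
    scale = {"commercial": 0.01, "large_military": 0.03, "fighter": 0.15}[cls]
    return np.exp(-0.5 * (turn_rate / scale) ** 2) / scale

def update_posterior(prior, turn_rate):
    """One recursive Bayes step: posterior is proportional to likelihood times prior."""
    post = np.array([turn_rate_likelihood(turn_rate, c) for c in classes]) * prior
    return post / post.sum()

# Uniform priors, then a short sequence of observed turn rates.
posterior = np.ones(len(classes)) / len(classes)
for z in [0.02, 0.08, 0.12]:
    posterior = update_posterior(posterior, z)
print(dict(zip(classes, posterior.round(3))))
```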
22

4D Segmentation of Cardiac MRI Data Using Active Surfaces with Spatiotemporal Shape Priors

Abufadel, Amer Y. 17 November 2006 (has links)
This dissertation presents a fully automatic segmentation algorithm for cardiac MR data. Some of the currently published methods are automatic, but they only work well in 2D and sometimes in 3D and do not perform well near the extremities (apex and base) of the heart. Additionally, they require substantial user input to make them feasible for use in a clinical environment. This dissertation introduces novel approaches to improve the accuracy, robustness, and consistency of existing methods. Segmentation accuracy can be improved by knowing as much about the data as possible. Accordingly, we compute a single 4D active surface that performs segmentation in space and time simultaneously. The segmentation routine can now take advantage of information from neighboring pixels that can be adjacent either spatially or temporally. Robustness is improved further by using confidence labels on shape priors. Shape priors are deduced from manual segmentation of training data. This data may contain imperfections that may impede proper manual segmentation. Confidence labels indicate the level of fidelity of the manual segmentation to the actual data. The contribution of regions with low confidence levels can be attenuated or excluded from the final result. The specific advantages of using the 4D segmentation along with shape priors and regions of confidence are highlighted throughout the dissertation. Performance of the new method is measured by comparing the results to traditional 3D segmentation and to manual segmentation performed by a trained clinician.
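
As a loose sketch of how confidence labels can attenuate unreliable training shapes, the toy function below (my own illustration, not the dissertation's algorithm; the array layout and example values are assumptions) builds a confidence-weighted mean shape from manually segmented contours.

```python
import numpy as np

def confidence_weighted_mean_shape(shapes, confidences):
    """Mean shape from training contours, down-weighting points whose
    manual segmentation is labelled as low-confidence.

    shapes:      array of shape (n_shapes, n_points, dim)
    confidences: array of shape (n_shapes, n_points), values in [0, 1]
    """
    shapes = np.asarray(shapes, dtype=float)
    w = np.asarray(confidences, dtype=float)[..., None]  # broadcast over dim
    return (w * shapes).sum(axis=0) / np.clip(w.sum(axis=0), 1e-12, None)

# Toy example: three training contours of four 2-D points each;
# the second contour is flagged as unreliable near its last point.
shapes = np.random.rand(3, 4, 2)
conf = np.ones((3, 4)); conf[1, 3] = 0.1
mean_shape = confidence_weighted_mean_shape(shapes, conf)
```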
23

Essays on belief formation and pro-sociality

Mohlin, Erik January 2010 (has links)
This thesis consists of four independent papers. The first two papers use experimental methods to study pro-social behaviors. The other two use theoretical methods to investigate questions about belief formation. The first paper, “Communication: Content or Relationship?”, investigates the effect of communication on generosity in a dictator game. In the basic experiment (the control), subjects in one room are dictators and subjects in another room are recipients. The subjects are anonymous to each other throughout the whole experiment. Each dictator gets to allocate a sum of 100 SEK between herself and an unknown recipient in the other room. In the first treatment we allow each recipient to send a free-form message to his dictator counterpart before the dictator makes her allocation decision. In order to separate the effect of the content of the communication from the relationship-building effect of communication, we carry out a third treatment, where we take the messages from the previous treatment and give each of them to a dictator in this new treatment. The dictators are informed that the recipients who wrote the messages are not the recipients they will have the opportunity to send money to. We find that this still increases donations compared to the baseline, but not as much as in the other treatment. This suggests that both the impersonal content of the communication and the relationship effect matter for donations. The second paper, “Limbic Justice – Amygdala Drives Rejection in the Ultimatum Game”, is about the neurological basis for the tendency to punish norm violators in the Ultimatum Game. In the Ultimatum Game, a proposer proposes a way to divide a fixed sum of money. The responder accepts or rejects the proposal. If the proposal is accepted the proposed split is realized, and if the proposal is rejected both subjects get zero. Subjects were randomly allocated to receive either the benzodiazepine oxazepam or a placebo substance, and then played the Ultimatum Game in the responder role while lying in an fMRI scanner. The rejection rate is significantly lower in the treatment group than in the control group. Moreover, the amygdala was relatively more activated in the placebo group than in the oxazepam group for unfair offers. This is mirrored by differences in activation in the medial prefrontal cortex (mPFC) and right ACC. Our findings suggest that the automatic and emotional response to unfairness, or norm violations, is driven by the amygdala, and that the balancing of such automatic behavioral responses is associated with parts of the prefrontal cortex. The conflict of motives is monitored by the ACC. In order to decide what strategy to choose, a player needs to form beliefs about what other players will do. This requires the player to have a model of how other people form beliefs – what psychologists call a theory of mind. In the third paper, “Evolution of Theories of Mind”, I study the evolution of players' models of how other players think. When people play a game for the first time, their behavior is often well predicted by the level-k and related models. According to this model, people think in a limited number of steps when they form beliefs about other people's behavior. Moreover, people differ with respect to how they form beliefs. The heterogeneity is represented by a set of cognitive types {0, 1, 2, ...}, such that type 0 randomizes uniformly and type k > 0 plays a k-times iterated best response to this. Empirically one finds that most experimental subjects behave as if they are of type 1 or 2, and individuals of type 3 and above are very rare. When people play the same game more than once, they may use their experience to predict how others will behave. Fictitious play is a prominent model of learning, according to which all individuals believe that the future will be like the past, and best respond to the average of past play. I define a model of heterogeneous fictitious play, according to which there is a hierarchy of types {1, 2, ...}, such that type k plays a k-times iterated best response to the average of past play. The level-k and fictitious play models implicitly assume that players lack specific information about the cognitive types of their opponents. I extend these models to allow for the possibility that types are partially observed. I study the evolution of types in a number of games separately. In contrast to most of the literature on evolution and learning, I also study the evolution of types across different games. I show that an evolutionary process, based on payoffs earned in different games, both with and without partial observability, can lead to a polymorphic population where relatively unsophisticated types survive, often resulting in initial behavior that does not correspond to a Nash equilibrium. Two important mechanisms behind these results are the following: (i) There are games, such as the Hawk-Dove game, where there is an advantage to not thinking and behaving like others, since choosing the same action as the opponent yields an inefficient outcome. This mechanism is at work even if types are not observed. (ii) If types are partially observed, then there are social dilemmas where lower types may have a commitment advantage; lower types may be able to commit to strategies that result in more efficient payoffs. The importance of categorical reasoning in human cognition is well established in psychology and cognitive science, and one of the most important functions of categorization is to facilitate prediction. Prediction on the basis of categorical reasoning is relevant when one has to predict the value of a variable on the basis of one's previous experience with similar situations, but where the past experience does not include any situation that was identical to the present situation in all relevant aspects. In such situations one can classify the situation as belonging to some category, and use the past experiences in that category to make a prediction about the current situation. In the fourth paper, “Optimal Categorization”, I provide a model of categorizations that are optimal in the sense that they minimize prediction error. From an evolutionary perspective we would expect humans to have developed categories that generate predictions which induce behavior that maximizes fitness, and it seems reasonable to assume that fitness is generally increasing in how accurate the predictions are. In the model a subject starts out with a categorization that she has learnt or inherited early in life. The categorization divides the space of objects into categories. In the beginning of each period, the subject observes a two-dimensional object in one dimension, and wants to predict the object's value in the other dimension. She has a database of objects that were observed in both dimensions in the past. The subject determines what category the new object belongs to on the basis of observation of its first dimension. She predicts that its value in the second dimension will be equal to the average value among the past observations in the corresponding category. At the end of each period the second dimension is observed, and the observation is stored in the database. The main result is that the optimal number of categories is determined by a trade-off between (a) decreasing the size of categories in order to enhance category homogeneity, and (b) increasing the size of categories in order to enhance category sample size. In other words, the advantage of fine-grained categorizations is that objects in a category are similar to each other. The advantage of coarse categorizations is that a prediction about a category is based on a large number of observations, thereby reducing the risk of over-fitting. Comparative statics reveal how the optimal categorization depends on the number of observations as well as on the frequency of objects with different properties. The set-up does not presume the existence of an objectively true categorization “out there”. The optimal categorization is a framework we impose on our environment in order to predict it. / Diss. Stockholm: Handelshögskolan, 2010. Summary together with four essays.
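
Since the level-k construction is spelled out above (type 0 randomizes uniformly, type k best-responds to type k-1), a minimal generic sketch of it follows; the function name and the Hawk-Dove payoff numbers are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def level_k_strategies(payoff_row, payoff_col, k_max):
    """Level-k strategies for a two-player bimatrix game.

    payoff_row[i, j], payoff_col[i, j]: payoffs when row plays i and column plays j.
    Type 0 randomizes uniformly; type k plays a best response to type k-1.
    Returns a list of (row_strategy, col_strategy) pairs indexed by type.
    """
    n_row, n_col = payoff_row.shape
    strategies = [(np.ones(n_row) / n_row, np.ones(n_col) / n_col)]
    for _ in range(k_max):
        prev_row, prev_col = strategies[-1]
        # Best response = pure strategy maximizing expected payoff against the previous type.
        row = np.eye(n_row)[np.argmax(payoff_row @ prev_col)]
        col = np.eye(n_col)[np.argmax(prev_row @ payoff_col)]
        strategies.append((row, col))
    return strategies

# Hawk-Dove payoffs (illustrative numbers only): actions are (Hawk, Dove).
hawk_dove = np.array([[-2, 4],
                      [ 0, 2]])
strats = level_k_strategies(hawk_dove, hawk_dove.T, k_max=3)
```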
24

Knowledge-based image segmentation using sparse shape priors and high-order MRFs / Segmentation d’images avec des a priori de forme parcimonieux et des champs de Markov aléatoires d’ordre supérieur

Xiang, Bo 28 November 2013 (has links)
In this thesis, we propose a novel framework for knowledge-based segmentation using high-order Markov Random Fields (MRFs). We represent the shape model as a point distribution graphical model which encodes pose-invariant shape priors through L1 sparse higher-order cliques. Each triplet clique encodes the local shape variation statistics on the angle measurements, which inherit invariance to global transformations (i.e., translation, rotation and scale). A sparse higher-order graph structure is learned through MRF training using dual decomposition, boosting efficiency while preserving its ability to represent the shape variation. We incorporate the prior knowledge in a novel framework for model-based segmentation. We address the segmentation problem as a maximum a posteriori (MAP) estimation in a probabilistic framework. A global MRF energy function is defined to jointly combine regional statistics, boundary support as well as shape prior knowledge for estimating the optimal model parameters (i.e., the positions of the control points). The pose-invariant priors are encoded in second-order MRF potentials, while regional statistics acting on a derived image feature space can be exactly factorized using the Divergence theorem. Furthermore, we propose a novel framework for joint model-pixel segmentation towards a more refined segmentation when exact boundary delineation is of interest. A unified model-based and pixel-driven integrated graphical model is developed to combine both top-down and bottom-up modules simultaneously. The consistency between the model and the image space is introduced by a model decomposition which associates the model parts with the pixel labeling. Both of the considered higher-order MRFs are optimized efficiently using state-of-the-art MRF optimization algorithms. Promising results on computer vision and medical image applications demonstrate the potential of the proposed segmentation methods.
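
A small sketch (toy code, not the thesis implementation) of why angle measurements on triplet cliques are pose invariant: the angle at a landmark depends only on the shape's relative geometry, so it is unchanged by translation, rotation and uniform scaling.

```python
import numpy as np

def triplet_angle(p, q, r):
    """Angle at landmark q formed by the triplet (p, q, r).

    The angle depends only on the directions of (p - q) and (r - q), so it is
    invariant to translating, rotating or uniformly scaling the whole shape,
    which is the invariance exploited by triplet shape-prior cliques.
    """
    u, v = np.asarray(p) - np.asarray(q), np.asarray(r) - np.asarray(q)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# The same triplet, and a translated + rotated + scaled copy of it.
tri = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 1.0])]
theta, s, t = 0.7, 2.5, np.array([3.0, -1.0])
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
tri2 = [s * R @ x + t for x in tri]
assert np.isclose(triplet_angle(*tri), triplet_angle(*tri2))
```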
25

Sophisticated and small versus simple and sizeable: When does it pay off to introduce drifting coefficients in Bayesian VARs?

Feldkircher, Martin, Huber, Florian, Kastner, Gregor 01 1900 (has links) (PDF)
We assess the relationship between model size and complexity in the time-varying parameter VAR framework via thorough predictive exercises for the Euro Area, the United Kingdom and the United States. It turns out that sophisticated dynamics through drifting coefficients are important in small data sets, while simpler models tend to perform better in sizeable data sets. To combine the best of both worlds, novel shrinkage priors help to mitigate the curse of dimensionality, resulting in competitive forecasts for all scenarios considered. Furthermore, we discuss dynamic model selection to improve upon the best performing individual model for each point in time. / Series: Department of Economics Working Paper Series
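
As a hedged illustration of what drifting coefficients mean in this setting, the sketch below simulates a toy time-varying-parameter AR(1); it is not the paper's VAR specification or its shrinkage priors, and all numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time-varying-parameter AR(1): y_t = b_t * y_{t-1} + e_t,
# with a drifting coefficient b_t = b_{t-1} + w_t (random walk).
T, sigma_e, sigma_w = 200, 0.5, 0.02
b = np.empty(T); y = np.empty(T)
b[0], y[0] = 0.8, 0.0
for t in range(1, T):
    b[t] = b[t - 1] + sigma_w * rng.standard_normal()
    y[t] = b[t] * y[t - 1] + sigma_e * rng.standard_normal()

# A constant-coefficient benchmark would instead fix b[t] = b[0] for all t;
# the paper's question is when the extra flexibility of drift pays off in
# forecasting once shrinkage priors keep the parameter count under control.
```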
26

Segmentation of facade images with shape priors / Segmentation des images de façade avec à priori sur la forme

Kozinski, Mateusz 30 June 2015 (has links)
The aim of this work is to propose a framework for facade segmentation with user-defined shape priors. In such a framework, the user specifies a shape prior using a rigorously defined shape prior formalism. The prior expresses a number of hard constraints and soft preferences on the spatial configuration of segments constituting the final segmentation. Existing approaches to the problem are affected by a compromise between the type of constraints whose satisfaction can be guaranteed by the segmentation algorithm, and the capability to approximate optimal segmentations consistent with a prior. In this thesis we explore a number of approaches to facade parsing that combine a prior formalism featuring high expressive power, guarantees of conformance of the resulting segmentations to the prior, and effective inference. We evaluate the proposed algorithms on a number of datasets. Since one of our focus points is the accuracy gain resulting from more effective inference algorithms, we perform a fair comparison to existing methods, using the same data term. Our contributions include a combination of graph grammars for expressing variation of facade structure with graphical models encoding the energy of models of given structures for different positions of facade elements. We also present the first linear programming formulation of facade parsing with shape priors. Finally, we propose a shape prior formalism that enables formulating the problem of optimal segmentation as inference in a Markov random field over the standard four-connected grid of pixels. The last method advances the state of the art by combining the flexibility of a user-defined grammar with segmentation accuracy that was previously reserved for frameworks with pre-defined priors. It also enables handling occlusions by simultaneously recovering the structure of the occluded facade and segmenting the occluding objects. We believe that it can be extended in many directions, including semantizing three-dimensional point clouds and parsing images of general urban scenes.
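
For readers unfamiliar with the last formulation, the sketch below shows a generic Potts-style energy on the standard four-connected pixel grid; it is illustrative only, and the random unary costs and the pairwise weight stand in for the thesis's actual data term and shape-prior terms.

```python
import numpy as np

def grid_mrf_energy(labels, unary, pairwise_weight=1.0):
    """Energy of a labeling on the standard 4-connected pixel grid.

    labels: (H, W) integer label map.
    unary:  (H, W, L) per-pixel cost of each of L labels (the data term;
            in practice it would come from a pixel classifier).
    The pairwise term is a Potts penalty on neighboring pixels that disagree;
    a shape-prior formalism would add structured terms on top of this.
    """
    H, W = labels.shape
    rows, cols = np.indices((H, W))
    data_term = unary[rows, cols, labels].sum()
    # 4-connectivity: penalize label changes between right and down neighbors.
    smooth = (labels[:, :-1] != labels[:, 1:]).sum() + (labels[:-1, :] != labels[1:, :]).sum()
    return data_term + pairwise_weight * smooth

# Toy usage with random unaries and an all-zero labeling.
unary = np.random.rand(4, 5, 3)
labels = np.zeros((4, 5), dtype=int)
print(grid_mrf_energy(labels, unary))
```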
27

Modeling Unbalanced Nested Repeated Measures Data In The Presence of Informative Drop-out with Application to Ambulatory Blood Pressure Monitoring Data

Ghulam, Enas M., Ph.D. 01 October 2019 (has links)
No description available.
28

The Time Course of Spatial Frequency Use in Autism Spectrum Disorders / Le décours temporel de l'utilisation des fréquences spatiales dans les troubles du spectre autistique

Caplette, Laurent 08 1900 (has links)
Our visual system usually samples low spatial frequency (SF) information before higher SF information. The coarse information thereby extracted can activate hypotheses in regard to the object's identity and guide further extraction of specific finer information. In autism spectrum disorder (ASD), however, SF perception is atypical. Moreover, individuals with ASD seem to rely less on their prior knowledge when perceiving objects. In the present study, we aimed to verify whether the prior according to which we sample visual information in a coarse-to-fine fashion is present in ASD. We compared the time course of SF sampling in neurotypical and ASD subjects by randomly and exhaustively sampling the SF × time space. Neurotypicals were found to sample low SFs before higher ones, thereby replicating the finding from many other studies, but characterizing it with much greater precision. ASD subjects, for their part, were found to extract SFs in a more fine-to-coarse fashion, extracting all relevant SFs from the start. This indicated that they did not possess a coarse-to-fine prior. Thus, individuals with ASD seem to sample information in a purely bottom-up fashion, without guidance from hypotheses activated by coarse information.
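
A hedged sketch of what sampling specific spatial frequencies amounts to in practice: keeping only a band of frequencies in an image's Fourier spectrum. This is generic image-processing code, not the study's stimulus pipeline, and the pixels-per-degree figure is an arbitrary assumption.

```python
import numpy as np

def spatial_frequency_bandpass(image, low_cpd, high_cpd, pixels_per_degree):
    """Keep only spatial frequencies between low_cpd and high_cpd
    (in cycles per degree of visual angle) in a grayscale image."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * pixels_per_degree   # cycles per degree, vertical
    fx = np.fft.fftfreq(w) * pixels_per_degree   # cycles per degree, horizontal
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = (radius >= low_cpd) & (radius <= high_cpd)
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

# Toy usage: keep only low SFs (0-2 cpd) of a random image, assuming
# 32 pixels per degree of visual angle (an arbitrary figure).
img = np.random.rand(256, 256)
low_sf_only = spatial_frequency_bandpass(img, 0.0, 2.0, pixels_per_degree=32)
```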
29

Structural priors in deep neural networks

Ioannou, Yani Andrew January 2018 (has links)
Deep learning has in recent years come to dominate the previously separate fields of research in machine learning, computer vision, natural language understanding and speech recognition. Despite breakthroughs in training deep networks, there remains a lack of understanding of both the optimization and structure of deep networks. The approach advocated by many researchers in the field has been to train monolithic networks with excess complexity and strong regularization --- an approach that leaves much to be desired in efficiency. Instead we propose that carefully designing networks in consideration of our prior knowledge of the task and learned representation can improve the memory and compute efficiency of state-of-the-art networks, and even improve generalization --- what we propose to denote as structural priors. We present two such novel structural priors for convolutional neural networks, and evaluate them in state-of-the-art image classification CNN architectures. The first of these methods proposes to exploit our knowledge of the low-rank nature of most filters learned for natural images by structuring a deep network to learn a collection of mostly small, low-rank filters. The second addresses the filter/channel extents of convolutional filters by learning filters with limited channel extents. The size of these channel-wise basis filters increases with the depth of the model, giving a novel sparse connection structure that resembles a tree root. Both methods are found to improve the generalization of these architectures while also decreasing the size and increasing the efficiency of their training and test-time computation. Finally, we present work towards conditional computation in deep neural networks, moving towards a method of automatically learning structural priors in deep networks. We propose a new discriminative learning model, conditional networks, that jointly exploits the accurate representation learning capabilities of deep neural networks with the efficient conditional computation of decision trees. Conditional networks yield smaller models, and offer test-time flexibility in the trade-off of computation vs. accuracy.
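
A minimal numpy sketch of the low-rank filter idea mentioned above (my own illustration, not the thesis's architecture): a rank-1 k×k filter is the outer product of two 1-D filters, so the 2-D convolution factors into two cheaper 1-D convolutions.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))

# A rank-1 3x3 filter, built as the outer product of a column and a row filter.
v = rng.standard_normal((3, 1))   # vertical 1-D filter
h = rng.standard_normal((1, 3))   # horizontal 1-D filter
full_filter = v @ h               # rank-1 3x3 filter

# Applying the 3x3 filter directly ...
direct = convolve2d(image, full_filter, mode="same")
# ... equals applying the two 1-D filters in sequence (fewer multiply-adds),
# which is the kind of structure low-rank filter layers exploit.
separable = convolve2d(convolve2d(image, v, mode="same"), h, mode="same")

print(np.allclose(direct, separable))  # True
```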
30

Bayesian Models for the Analyzes of Noisy Responses From Small Areas: An Application to Poverty Estimation

Manandhar, Binod 26 April 2017 (has links)
We implement techniques of small area estimation (SAE) to study consumption, a welfare indicator, which is used to assess poverty in the 2003-2004 Nepal Living Standards Survey (NLSS-II) and the 2001 census. NLSS-II has detailed information on consumption, but it can give estimates only at stratum level or higher. While population variables are available for all households in the census, they do not include the information on consumption; the survey has the 'population' variables nonetheless. We combine these two sets of data to provide estimates of poverty indicators (incidence, gap and severity) for small areas (wards, village development committees and districts). Consumption is the aggregate of all food and all non-food items consumed. In the welfare survey the respondents are asked to recall all information about consumption throughout the reference year. Therefore, such data are likely to be noisy, possibly due to response errors or recall errors. The consumption variable is continuous and positively skewed, so a statistician might use a logarithmic transformation, which can reduce skewness and help meet the normality assumption required for model building. However, it could be problematic since back-transformation may produce inaccurate estimates and there are difficulties in interpretation. Without using the logarithmic transformation, we develop hierarchical Bayesian models to link the survey to the census. In our models for consumption, we incorporate the 'population' variables as covariates. First, we assume that consumption is noiseless, and it is modeled using three scenarios: the exponential distribution, the gamma distribution and the generalized gamma distribution. Second, we assume that consumption is noisy, and we fit the generalized beta distribution of the second kind (GB2) to consumption. We consider three more scenarios of GB2: a mixture of exponential and gamma distributions, a mixture of two gamma distributions, and a mixture of two generalized gamma distributions. We note that there are difficulties in fitting the models for noisy responses because these models have non-identifiable parameters. For each scenario, after fitting two hierarchical Bayesian models (with and without area effects), we show how to select the most plausible model and we perform a Bayesian data analysis on Nepal's poverty data. We show how to predict the poverty indicators for all wards, village development committees and districts of Nepal (a big data problem) by combining the survey data with the census. This is a computationally intensive problem because Nepal has about four million households, with about four thousand households in the survey, and there is no record linkage between households in the survey and the census. Finally, we perform empirical studies to assess the quality of our survey-census procedure.
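
For reference, a small sketch of the generalized beta distribution of the second kind (GB2) density in its standard four-parameter form; this is illustrative code, not the dissertation's hierarchical model, and the parameter values in the example are arbitrary.

```python
import numpy as np
from scipy.special import beta

def gb2_pdf(y, a, b, p, q):
    """Density of the generalized beta distribution of the second kind.

    f(y) = a * y**(a*p - 1) / (b**(a*p) * B(p, q) * (1 + (y/b)**a)**(p + q)),
    for y > 0. The generalized gamma arises as a limiting case, which is one
    reason GB2-type models are convenient for skewed, noisy consumption data.
    """
    y = np.asarray(y, dtype=float)
    return (a * y ** (a * p - 1)
            / (b ** (a * p) * beta(p, q) * (1 + (y / b) ** a) ** (p + q)))

# Toy check that the density integrates to (approximately) one on a fine grid.
grid = np.linspace(1e-6, 200.0, 200_000)
dy = grid[1] - grid[0]
print((gb2_pdf(grid, a=2.0, b=10.0, p=1.5, q=2.0) * dy).sum())  # close to 1.0
```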
