Precise localization in 3D prior map for autonomous driving

Tazir, Mohamed Lamine 17 December 2018 (has links)
Self-driving vehicles are becoming a tangible reality and will soon share our roads with other, conventional vehicles. For a self-driving car to move safely through its environment, it must sense its immediate surroundings and, above all, localize itself in six degrees of freedom (position and orientation) so that it can plan a safe trajectory. Precise localization is therefore essential: it lets the vehicle not only position itself accurately but also plan an optimal path and effectively avoid collisions with the static and dynamic objects it encounters. For years, the Global Positioning System (GPS) has been the most widespread navigation solution, but it offers only limited precision (on the order of several meters). Although Differential GPS (DGPS) and Real-Time Kinematic (RTK) systems reach much higher accuracy, they remain sensitive to signal masking and multipath reflections and are unreliable in dense urban areas. These deficiencies make such systems unsuitable for hard real-time tasks such as collision avoidance. A prevailing alternative that has recently attracted the attention of researchers and industry is to load a prior map into the system so that the vehicle has a reliable reference to lean on. Maps facilitate the navigation process and add an extra layer of safety and semantic understanding: the vehicle uses its onboard sensors to compare what it perceives at a given instant with what is stored in memory, which lets it focus its sensors and computing power on moving objects, predict what should happen, observe what actually happens in real time, and decide what to do. The purpose of this thesis is therefore to develop tools for precise localization of an autonomous vehicle in an environment known a priori. Localization is performed by matching (map-matching) a 3D prior map against the point clouds collected as the vehicle moves, in two distinct phases: map building and localization. The first phase constructs the map with centimeter accuracy using static or dynamic laser surveying techniques; the experimental setup and data acquisition campaigns carried out during this work are described in detail. The aim is to build efficient maps that can be updated over time, so that the 3D environment representation remains compact and robust, and that do not rely on any dedicated infrastructure such as GPS, inertial measurements (IMU), or beacons, in keeping with the goal of flexible mapping and localization. The map-building phase, which registers the captured point clouds into a single, unified representation of the environment, corresponds to the simultaneous localization and mapping (SLAM) problem. To build maps incrementally, we rely on our own implementation of the state-of-the-art Iterative Closest Point (ICP) algorithm, extended with new variants and compared with other implementations available in the literature. However, accurate maps require very dense point clouds, which makes them inefficient for real-time use. The second objective therefore deals with point cloud reduction.
The proposed approach exploits both the color information and the geometry of the scene: it finds sets of 3D points that share the same color within a very small region and replaces each set with a single point. The volume of the map is thereby reduced significantly, while its properties, such as the shape and color of the scanned objects, are preserved. The third objective concerns efficient, precise, and reliable localization once the maps have been built and processed. For this purpose, the online data processing must be accurate and fast, with low computational effort, while maintaining a coherent model of the explored space. To this end, a Velodyne HDL-32 laser scanner is used. (...)
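
Illustration (not code from the thesis): the color-and-geometry reduction described above can be sketched as a color-aware voxel grid filter, where points that fall in the same small cell and share a quantized color are replaced by their centroid. The array layout, cell size, and color step below are assumptions chosen for the example.

```python
import numpy as np

def reduce_point_cloud(points, voxel_size=0.05, color_step=16):
    """Collapse points that fall in the same small voxel AND share the same
    quantized color into a single representative point (their centroid).

    points     : (N, 6) array of [x, y, z, r, g, b]
    voxel_size : edge length of the grouping cell, in map units (assumed)
    color_step : quantization step for the 0-255 RGB channels (assumed)
    """
    xyz, rgb = points[:, :3], points[:, 3:]
    voxel_idx = np.floor(xyz / voxel_size).astype(np.int64)   # spatial bin
    color_idx = np.floor(rgb / color_step).astype(np.int64)   # color bin
    keys = np.concatenate([voxel_idx, color_idx], axis=1)
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)
    reduced = np.zeros((counts.size, 6))
    np.add.at(reduced, inverse, points)        # sum the points of each group
    return reduced / counts[:, None]           # centroid per (voxel, color) group

# Toy usage: a random colored cloud shrinks to far fewer representative points.
cloud = np.random.rand(100_000, 6) * [10.0, 10.0, 2.0, 255.0, 255.0, 255.0]
print(cloud.shape, "->", reduce_point_cloud(cloud).shape)
```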

Contributions to Bayesian Computing for Complex Models

Grazian, Clara 15 April 2016
Recently, the great complexity of modern applications, for instance in genetics, computer science, finance, and climate science, has led to the proposal of new models that can describe reality more faithfully. In these cases, classical MCMC methods fail to approximate the posterior distribution because they are too slow to explore the full parameter space. New algorithms have been proposed to handle these situations, where the likelihood function is unavailable. We investigate several features of complex models: how to eliminate nuisance parameters from the analysis and draw inference on the key quantities of interest, in both Bayesian and non-Bayesian settings, and how to build a reference prior.
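
The abstract does not name a particular likelihood-free algorithm; approximate Bayesian computation (ABC) by rejection is the simplest member of that family and conveys the idea of replacing likelihood evaluation with simulation. The prior, simulator, summary statistic, and tolerance below are illustrative assumptions, not choices from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=50):
    """Stand-in for a model whose likelihood we cannot evaluate:
    here, simply normal data with unknown mean theta."""
    return rng.normal(theta, 1.0, size=n)

def abc_rejection(observed, n_draws=20_000, tol=0.1):
    """Keep prior draws whose simulated summary is close to the observed one."""
    s_obs = observed.mean()                      # summary statistic
    theta = rng.normal(0.0, 5.0, size=n_draws)   # illustrative prior N(0, 25)
    s_sim = np.array([simulator(t).mean() for t in theta])
    return theta[np.abs(s_sim - s_obs) < tol]    # approximate posterior sample

data = rng.normal(1.3, 1.0, size=50)
post = abc_rejection(data)
print(f"accepted {post.size} draws, posterior mean = {post.mean():.2f}")
```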

Robust personalisation of 3D electromechanical cardiac models. Application to heterogeneous and longitudinal clinical databases

Molléro, Roch 19 December 2017 (has links)
Personalised cardiac modeling consists in creating virtual 3D simulations of real clinical cases to help clinicians predict the behaviour of the heart, or better understand some pathologies from the estimated values of biophysical parameters. In this work we first motivate the need for a consistent parameter estimation framework, with a case study where uncertainty in myocardial fibre orientation leads to an uncertainty in the estimated parameters that is extremely large compared to their physiological variability. To build a consistent approach to parameter estimation, we then tackle the computational complexity of 3D models. We introduce an original multiscale 0D/3D approach for cardiac models, based on a multiscale coupling that approximates the outputs of a 3D model with a reduced "0D" version of the same model, and from this coupling we derive an efficient multifidelity optimisation algorithm for the 3D model. In a second step, we build more than 140 personalised 3D simulations in the context of two studies involving longitudinal analysis of cardiac function: on the one hand, the long-term evolution of cardiomyopathies under therapy; on the other hand, short-term cardiovascular changes during digestion. Finally, we present an algorithm to automatically detect and select observable directions in the parameter space from a set of measurements, and to compute consistent population-based prior probabilities in these directions, which can be used to constrain parameter estimation when measurements are missing. This enables consistent parameter estimation in large databases: 811 cases with the 0D model and 137 cases with the 3D model.
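
The 0D/3D coupling itself is specific to the cardiac models used in the thesis. Purely as a generic illustration of the multifidelity idea (fit a cheap reduced model to a few expensive runs, optimise on the cheap model, then check the result with the expensive one), here is a sketch in which both "models" and all numbers are hypothetical stand-ins.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical stand-ins: model_3d plays the expensive simulation,
# model_0d a cheap reduced model of the same output quantity.
def model_3d(theta):
    return np.exp(-0.5 * (theta - 2.0) ** 2) + 0.1 * np.sin(5 * theta)

def model_0d(theta):
    return np.exp(-0.5 * (theta - 2.0) ** 2)

target = 0.9   # measured value the personalised model should reproduce

# 1) Couple the scales: fit an additive correction of the 0D model
#    from a handful of expensive 3D evaluations.
anchors = np.array([0.0, 1.0, 2.0, 3.0])
offset = np.mean([model_3d(t) - model_0d(t) for t in anchors])

# 2) Optimise the corrected 0D model (cheap) instead of the 3D model.
res = minimize_scalar(lambda t: (model_0d(t) + offset - target) ** 2,
                      bounds=(0.0, 4.0), method="bounded")

# 3) Verify the candidate parameter with a single 3D run.
theta_hat = res.x
print(f"theta = {theta_hat:.3f}, 3D output = {model_3d(theta_hat):.3f}")
```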

The Impotency of Post Hoc Power

Sebyhed, Hugo, Gunnarsson, Emma January 2020 (has links)
In this thesis, we hope to dispel some of the confusion regarding so-called post hoc power, i.e., power computed under the assumption that the estimated sample effect size equals the population effect size. Previous research has shown that post hoc power is a function of the p-value, making it redundant as a tool of analysis. We go further and argue that it should never be reported, since it is a source of confusion and of potentially harmful incentives. We also conduct a Monte Carlo simulation to illustrate our points; its results confirm the previous research.
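
A small illustration of the point being made (not the thesis's own code): for a two-sided z-test, the "observed" post hoc power can be written as a function of the p-value alone, so reporting it adds nothing beyond the p-value.

```python
from scipy.stats import norm

def post_hoc_power(p_value, alpha=0.05):
    """'Observed power' of a two-sided z-test, computed by plugging the
    observed effect back in as the true effect. Note that the only input
    is the p-value: post hoc power is a deterministic transform of p."""
    z_obs = norm.isf(p_value / 2)          # |z| implied by the p-value
    z_crit = norm.isf(alpha / 2)
    return norm.sf(z_crit - z_obs) + norm.cdf(-z_crit - z_obs)

for p in [0.001, 0.01, 0.05, 0.20, 0.50]:
    print(f"p = {p:5.3f}  ->  post hoc power = {post_hoc_power(p):.3f}")
# p = 0.05 always maps to power of about 0.50, regardless of the data or sample size.
```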

Bayesian Approach to Survival in Clinical Trials in Rare Cancers

Brard, Caroline 20 November 2018 (has links)
The Bayesian approach augments the information provided by the trial itself by incorporating external information into the analysis. It also allows the results to be expressed directly in terms of the probability of a given treatment effect, which is more informative and interpretable than a p-value and a confidence interval. Moreover, the frequent reduction of an analysis to a binary interpretation of the results (significant versus non-significant) is particularly harmful in rare diseases. In this context, the objective of my work was to explore the feasibility, constraints, and contribution of the Bayesian approach in clinical trials in rare cancers with a censored primary endpoint. First, a review of the literature confirmed that Bayesian methods are still rarely implemented in the analysis of clinical trials with a censored endpoint. In the second part of this work, we developed a Bayesian design integrating historical data, in the setting of a real clinical trial with a survival endpoint in a rare disease (osteosarcoma). The prior incorporated individual historical data on the control arm and aggregate historical data on the relative treatment effect. Through a large simulation study, we evaluated the operating characteristics of the proposed design and calibrated the model, while exploring the issue of commensurability between the historical and current data. Finally, the re-analysis of three published clinical trials illustrates the contribution of the Bayesian approach to the expression of results and how it enriches the frequentist analysis of a trial.
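
To make the "probability of a treatment effect" idea concrete, here is a minimal sketch (not the model developed in the thesis) using a normal approximation on the log hazard ratio: an aggregate historical estimate acts as the prior, the current trial estimate as the likelihood, and the output is the posterior probability that the hazard ratio is below 1. All numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Illustrative numbers only (log hazard ratios and their standard errors).
prior_mean, prior_se = np.log(0.85), 0.30      # aggregate historical effect
trial_mean, trial_se = np.log(0.75), 0.25      # estimate from the current trial

# Conjugate normal update: precision-weighted combination.
w_prior, w_trial = 1 / prior_se**2, 1 / trial_se**2
post_var = 1 / (w_prior + w_trial)
post_mean = post_var * (w_prior * prior_mean + w_trial * trial_mean)

# Probability that the treatment reduces the hazard (HR < 1, i.e. log HR < 0).
p_benefit = norm.cdf(0.0, loc=post_mean, scale=np.sqrt(post_var))
print(f"posterior P(HR < 1) = {p_benefit:.3f}")
print(f"posterior median HR = {np.exp(post_mean):.2f}")
```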

Biologically Inspired Modular Neural Networks

Azam, Farooq 19 June 2000 (has links)
This dissertation explores modular learning in artificial neural networks, driven mainly by inspiration from the neurobiological basis of human learning. The modularization approaches to neural network design and learning presented here draw on engineering, complexity, psychological, and neurobiological considerations. The main theme of the dissertation is to explore the organization and functioning of the brain in order to discover new structural and learning inspirations that can subsequently be used to design artificial neural networks. Artificial neural networks are touted as a neurobiologically inspired paradigm that emulates the functioning of the vertebrate brain, yet the brain is a highly structured entity with localized regions of neurons specialized in performing specific tasks, whereas mainstream monolithic feed-forward neural networks are generally unstructured black boxes, which is their major performance-limiting characteristic. The non-explicit structure and monolithic nature of current mainstream artificial neural networks prevent the systematic incorporation of functional or task-specific a priori knowledge into the design process. The problems caused by these limitations are discussed in detail, and remedial solutions driven by the functioning of the brain and its structural organization are presented. The dissertation also provides an in-depth study of currently available modular neural network architectures, highlights their shortcomings, and investigates new modular artificial neural network models designed to overcome them. The resulting models offer greater accuracy, better generalization, a comprehensible and simplified neural structure, ease of training, and more user confidence; these benefits are readily apparent for certain problems, depending on the availability and use of a priori knowledge about the problem. The modular neural network models presented here exploit the principle of divide and conquer in both design and learning: a complex computational problem is solved by dividing it into simpler sub-problems and then combining the individual solutions into a solution to the original problem. The divisions considered are the automatic decomposition of the mappings to be learned, the decomposition of the networks themselves to minimize harmful interaction during learning, and the explicit decomposition of the application task into sub-tasks that are learned separately. The versatility and capabilities of the proposed modular neural networks are demonstrated by experimental results, and a comparison with current modular neural network design techniques is presented for reference. These results lay a solid foundation for the design and learning of artificial neural networks with a sound neurobiological basis, leading to superior design techniques. Areas of future research are also outlined. / Ph. D.
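
As a simple, self-contained illustration of explicit task decomposition (one instance of the divide-and-conquer strategy discussed above, not one of the dissertation's own architectures), the sketch below trains one small network per binary sub-task of a multi-class problem and combines their outputs.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy 3-class problem, decomposed into one binary sub-task per class.
X, y = make_classification(n_samples=1500, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

modules = []
for c in range(3):
    # Each module is a small expert trained only on "class c vs. the rest".
    m = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    m.fit(X_tr, (y_tr == c).astype(int))
    modules.append(m)

# Combine the sub-solutions: pick the class whose module is most confident.
scores = np.column_stack([m.predict_proba(X_te)[:, 1] for m in modules])
y_pred = scores.argmax(axis=1)
print(f"modular accuracy: {(y_pred == y_te).mean():.3f}")
```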

Error Analysis for Geometric Finite Element Discretizations of a Cosserat Rod Optimization Problem

Bauer, Robert 08 April 2024 (has links)
This thesis develops an a priori theory for geometric finite element discretizations of a Cosserat rod model derived from incompatible elasticity, supported by numerical experiments that validate the convergence behavior of the proposed method. The main result describes the qualitative behavior of the intrinsic H1- and L2-errors in terms of the mesh diameter 0 < h ≪ 1 of the approximation scheme. Geometric finite element functions u_h, with their subclasses of geodesic finite elements and projection-based finite elements, are used as conforming, path-independent, and objective discretizations of Cosserat rod configurations. Existence, regularity, variational bounds, and vector field transport estimates for the Cosserat rod model are derived in order to obtain an intrinsic a priori theory. The second part of the thesis concerns the derivation of the Cosserat rod model from 3D elasticity with prestress, together with numerical experiments for microheterogeneous prestressed materials.
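
The abstract states the error behavior qualitatively and does not quote rates. For orientation only, a priori estimates for conforming discretizations of this kind are typically stated in the following form, where the order p and the constants are assumptions for the sake of illustration rather than results taken from the thesis:

```latex
% Typical shape of an a priori estimate for a discrete minimizer u_h on a
% mesh of diameter h, with polynomial order p (illustrative form only):
\[
  \|u - u_h\|_{H^1} \le C\, h^{p}\, \|u\|_{H^{p+1}},
  \qquad
  \|u - u_h\|_{L^2} \le C\, h^{p+1}\, \|u\|_{H^{p+1}},
  \qquad 0 < h \ll 1,
\]
% with the norms understood in the intrinsic sense appropriate to
% manifold-valued geometric finite element functions.
```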

Analysis of the quasicontinuum method and its application

Wang, Hao January 2013 (has links)
The present thesis is on the error estimates of different energy-based quasicontinuum (QC) methods, which are a class of computational methods for coupling atomistic and continuum models of micro- or nano-scale materials. The thesis consists of two parts. The first part considers the a priori error estimates of three energy-based QC methods; the second part deals with the a posteriori error estimates of a recently developed energy-based QC method. In the first part, we develop a unified framework for the a priori error estimates and present a new and simpler proof based on negative-norm estimates, which essentially extends previous results. In the second part, we establish a posteriori error estimates for the newly developed energy-based QC method in an energy norm and for the total energy. The analysis is based on a posteriori residual and stability estimates, and adaptive mesh refinement algorithms based on these error estimators are formulated. In both parts, numerical experiments are presented to illustrate the results of our analysis and indicate the optimal convergence rates. The thesis is accompanied by a thorough introduction to the development of QC methods and their numerical analysis, as well as an outlook on future work in the conclusion.
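
The estimators analysed in the thesis are specific to the QC method; as a generic picture of the solve-estimate-mark-refine loop that such estimators drive, here is a small 1D sketch in which the "estimator" is a stand-in local indicator and every numerical detail is illustrative.

```python
import numpy as np

f = lambda x: np.tanh(20 * (x - 0.3))     # target with a sharp internal layer

def local_indicator(a, b):
    """Stand-in a posteriori indicator: midpoint error of the linear
    interpolant of f on the element [a, b]."""
    return abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b)))

nodes = np.linspace(0.0, 1.0, 5)          # coarse initial mesh
for sweep in range(8):
    eta = np.array([local_indicator(a, b)
                    for a, b in zip(nodes[:-1], nodes[1:])])
    # Doerfler-style marking: refine the elements carrying ~50% of the error.
    order = np.argsort(eta)[::-1]
    cumulative = np.cumsum(eta[order])
    marked = order[: np.searchsorted(cumulative, 0.5 * eta.sum()) + 1]
    midpoints = 0.5 * (nodes[marked] + nodes[marked + 1])
    nodes = np.sort(np.concatenate([nodes, midpoints]))
    print(f"sweep {sweep}: {nodes.size - 1:3d} elements, "
          f"max indicator {eta.max():.2e}")
```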

Essays on semantic content and context-sensitivity

Yli-Vakkuri, Tuomo Juhani January 2012 (has links)
The thesis comprises three foundational studies on the topics named in its title, together with an introduction. Ch. 1 argues against a popular combination of views in the philosophy of language (call it the Naïve Picture): Propositionality, which says that the semantic values of natural language sentences (relative to contexts) are the propositions they express (in those contexts), and Compositionality, which says that the semantic value of a complex expression of a natural language (in a context) is determined by the semantic values its immediate constituents have (in that same context) together with their syntactic mode of combination. Ch. 1 argues that the Naïve Picture is inconsistent with the presence of variable-binding in natural languages. Ch. 2 criticizes the strategy of using "operator arguments" to establish relativist conclusions, such as that the truth values of propositions vary with time (Time Relativism) or with location (Location Relativism). Operator arguments purport to derive the conclusion that propositions vary in truth value along some parameter P from the premise that there are, in some language, sentential operators that operate on or "shift" the P parameter. I identify two forms of operator argument, offer a reconstruction of each, and argue that both rely on an implausible, coarse-grained conception of propositions. Ch. 3 assesses the prospects for semantic internalism. It argues, first, that to accommodate Putnam's famous Twin Earth examples, an internalist must maintain that narrow semantic content determines different extensions relative to agents and times; and second, that the most thoroughly worked-out version of semantic internalism – the epistemic two-dimensionalism (E2D) of David Chalmers – can accommodate the original Twin Earth thought experiments but is refuted by similar thought experiments involving temporally or spatially symmetric agents.

The time course of spatial frequency use in autism spectrum disorder

Caplette, Laurent 08 1900 (has links)
Our visual system usually samples low spatial frequency (SF) information before higher SF information. The coarse information extracted early can activate hypotheses about the object's identity and guide the subsequent extraction of specific, finer information. In autism spectrum disorder (ASD), however, SF perception is atypical, and individuals with ASD seem to rely less on prior knowledge when perceiving objects. In the present study, we aimed to verify whether the prior according to which visual information is sampled in a coarse-to-fine fashion is present in ASD. We compared the time course of SF sampling in neurotypical and ASD subjects by randomly and exhaustively sampling the SF × time space. Neurotypical subjects sampled low SFs before higher ones, replicating the finding of many previous studies while characterizing it with much greater precision. ASD subjects, for their part, extracted all relevant SFs, low and high, from the very beginning, indicating that they did not possess the coarse-to-fine prior. Thus, individuals with ASD seem to sample visual information in a purely bottom-up fashion, without guidance from hypotheses activated by coarse information.
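
The stimulus-generation details are not given in the abstract; one common way to sample the SF × time space is to filter each frame of the stimulus with a new random spatial-frequency profile via the FFT, as sketched below (the bubble count, bandwidth, and image size are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sf_filter(image, n_bubbles=5, bubble_sd=0.08):
    """Filter an image with a random spatial-frequency sampling profile:
    a sum of Gaussian 'bubbles' placed at random positions on a log-SF axis.
    Applying a new random profile on every frame samples the SF x time space."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx**2 + fy**2)                # SF of each FFT bin
    log_r = np.log2(radius + 1e-6)
    centers = rng.uniform(np.log2(1 / min(h, w)), np.log2(0.5), n_bubbles)
    profile = np.zeros_like(radius)
    for c in centers:                              # random SF bands revealed
        profile += np.exp(-0.5 * ((log_r - c) / bubble_sd) ** 2)
    profile = np.clip(profile, 0.0, 1.0)
    filtered = np.fft.ifft2(np.fft.fft2(image) * profile).real
    return filtered, profile

frame, sampled_profile = random_sf_filter(rng.standard_normal((256, 256)))
print(frame.shape, sampled_profile.max())
```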
