21

Contributions to fuzzy object comparison and applications : similarity measures for fuzzy and heterogeneous data and their applications

Bashon, Yasmina Massoud, January 2013
This thesis makes an original contribution to knowledge in the field of comparing data objects that are described by attributes of fuzzy or heterogeneous (numeric and symbolic) data types. Many real-world database systems and applications require information-management components that support such imperfect and heterogeneous data objects. For example, with new online information made available from various sources, in semi-structured, structured or unstructured representations, new information-usage and search algorithms must allow for data collections that contain objects/records with different types of data (fuzzy, numerical and categorical) for the same attributes. New similarity approaches are presented in this research to support such data comparison. A generalisation of both geometric and set-theoretic similarity models has enabled the proposal of the new similarity measures presented in this thesis, which handle vagueness (the fuzzy data type) within data objects. A framework of new and unified similarity measures for comparing heterogeneous objects described by numerical, categorical and fuzzy attributes is also introduced. Examples are used to illustrate, compare and discuss the applications and efficiency of the proposed approaches to heterogeneous data comparison.
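The abstract names its ingredients (geometric and set-theoretic similarity models; numeric, categorical and fuzzy attributes) without giving the measures themselves. The following is a minimal sketch of how such a unified measure could be assembled, assuming triangular fuzzy numbers, range-normalised distances and uniform attribute weights; the attribute specification and the aggregation scheme are illustrative, not the thesis's actual definitions.

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzy:
    a: float  # left support
    m: float  # peak
    b: float  # right support

def sim_numeric(x: float, y: float, value_range: float) -> float:
    # Geometric-model similarity: distance normalised by the attribute range.
    return 1.0 - abs(x - y) / value_range

def sim_categorical(x: str, y: str) -> float:
    # Set-theoretic overlap degenerates to exact match for plain symbols.
    return 1.0 if x == y else 0.0

def sim_fuzzy(x: TriangularFuzzy, y: TriangularFuzzy, value_range: float) -> float:
    # Compare the three defining points of the membership functions.
    d = (abs(x.a - y.a) + abs(x.m - y.m) + abs(x.b - y.b)) / 3.0
    return 1.0 - d / value_range

def sim_object(obj1: dict, obj2: dict, spec: dict) -> float:
    # spec maps attribute name -> (kind, range); weights are uniform here.
    total = 0.0
    for name, (kind, rng) in spec.items():
        if kind == "numeric":
            total += sim_numeric(obj1[name], obj2[name], rng)
        elif kind == "categorical":
            total += sim_categorical(obj1[name], obj2[name])
        else:  # fuzzy
            total += sim_fuzzy(obj1[name], obj2[name], rng)
    return total / len(spec)

# Two heterogeneous records sharing numeric, categorical and fuzzy attributes.
spec = {"age": ("numeric", 100.0), "blood_type": ("categorical", None),
        "fever": ("fuzzy", 45.0)}
p1 = {"age": 42, "blood_type": "A", "fever": TriangularFuzzy(37.5, 38.0, 38.5)}
p2 = {"age": 47, "blood_type": "A", "fever": TriangularFuzzy(38.0, 38.5, 39.0)}
print(sim_object(p1, p2, spec))  # one score in [0, 1]
```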
22

Décompositions tensorielles et factorisations de calculs intensifs appliquées à l'identification de modèles de comportement non linéaire / Tensor decompositions and factorizations of intensive computing applied to the calibration of nonlinear constitutive material laws

Olivier, Clément, 14 December 2017
This thesis presents a novel non-intrusive methodology for constructing surrogate models of multi-parameter physical models. The proposed methodology makes it possible to approximate in real time, over the entire parameter space, multiple heterogeneous quantities of interest derived from physical models. The surrogate models are based on tensor-train representations built during an intensive offline computational stage. The fundamental idea of the learning stage is to construct all tensor approximations simultaneously, based on a limited number of solutions of the physical model computed on the fly. The parsimonious exploration of the parameter space, coupled with the compact tensor-train representation, alleviates the curse of dimensionality. The approach is particularly well suited to models involving many parameters defined over large domains. Numerical results on nonlinear elasto-viscoplastic laws show that surrogate models that are compact in memory and accurately predict multiple time-dependent mechanical variables can be obtained at low computational cost. The real-time response provided by the surrogate model for any parameter value enables decision-support tools for domain experts in the context of parametric studies, with the aim of improving the calibration procedure for material laws.
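A tensor-train surrogate stores a d-dimensional grid of output values as a chain of small three-dimensional cores, so evaluating one parameter point reduces to a few small matrix products. Below is a minimal NumPy sketch of that online evaluation step, assuming a discretised parameter grid; the random cores stand in for cores that would be fitted during the offline stage described in the abstract.

```python
import numpy as np

def tt_eval(cores, indices):
    """Evaluate a tensor-train at one multi-index.

    cores: list of arrays with shapes (r_{k-1}, n_k, r_k), where r_0 = r_d = 1.
    indices: one grid index per parameter dimension.
    """
    v = np.ones((1,))
    for core, i in zip(cores, indices):
        v = v @ core[:, i, :]  # contract the rank dimension, O(r^2) per core
    return float(v[0])

# Toy example: 4 parameters, 10 grid points per axis, rank 3 -- the cores hold
# a few hundred numbers instead of the 10**4 entries of the full grid.
rng = np.random.default_rng(0)
shapes = [(1, 10, 3), (3, 10, 3), (3, 10, 3), (3, 10, 1)]
cores = [rng.standard_normal(s) for s in shapes]
print(tt_eval(cores, [2, 5, 7, 1]))
```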
23

Modelo baseado em processamento de dados heterogêneos para aplicações de apoio clínico / A model based on heterogeneous data processing for clinical support applications

Rönnau, Rodrigo Freiberger, 06 December 2017
The use of computer systems to aid clinical practice has been widely studied, with the goal of evaluating how they can improve the quality of care provided to patients. Among applications built for this purpose are those that act on medical reports or medical images, extracting, storing and using characteristics acquired by processing these documents. However, the literature reveals a gap in the combined use of the information obtained from each type of processing, even though it points to relevant opportunities for applications that share and integrate this information. Another identified gap concerns the interoperability of data and results between existing systems. To help address these issues, this work proposes a model with a modular and expandable structure that accommodates different input formats in order to provide, in an integrated way, supporting information to the physician or specialist. The extracted data are made available in a structured manner through recognised standards, enabling interoperability between systems and use by different computational applications. Two prototypes were built on the basis of the proposed model. Scenarios demonstrating the model's operation and benefits were described and used in its evaluation. Both the model and the prototypes were presented to 12 health professionals and 35 computing professionals, who completed an evaluation questionnaire. As a result, 97.8% of respondents indicated that the proposed model is useful and 76.6% intend to use and/or disseminate it.
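As a rough illustration of the modular, expandable structure the abstract describes, the sketch below routes a document through a registry of format-specific extractor modules and merges their outputs into one structured record. All class and field names are hypothetical, and the real model's use of recognised health-data standards is not reproduced here.

```python
from abc import ABC, abstractmethod

class Extractor(ABC):
    """One module per input format; the registry keeps the pipeline expandable."""
    @abstractmethod
    def supports(self, document: dict) -> bool: ...
    @abstractmethod
    def extract(self, document: dict) -> dict: ...

class ReportTextExtractor(Extractor):
    def supports(self, document: dict) -> bool:
        return document.get("type") == "report"
    def extract(self, document: dict) -> dict:
        # Placeholder for real text processing of the medical report.
        return {"tokens": document["text"].lower().split()}

REGISTRY = [ReportTextExtractor()]  # new formats: append another module

def process(document: dict) -> dict:
    # Route the document to every module that understands it and merge the
    # results into one structured record for downstream clinical systems.
    record = {}
    for extractor in REGISTRY:
        if extractor.supports(document):
            record.update(extractor.extract(document))
    return record

print(process({"type": "report", "text": "No acute findings"}))
```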
24

Conceptual design of shapes by reusing existing heterogeneous shape data through a multi-layered shape description model and for VR applications / Design conceptuel de formes par exploitation de données hétérogènes au sein d’un modèle de description de forme multi-niveaux et pour des applications de RV

Li, Zongcheng, 28 September 2015
Thanks to great advances in acquisition devices and modeling tools, a huge amount of digital data (e.g. images, videos, 3D models) has become available in various application domains. In particular, virtual environments make use of such digital data to allow more attractive and more effective communication and simulation of real or not (yet) existing environments and objects. Despite these innovations, the design of application-oriented virtual environments still results from a long and tedious iterative modeling and modification process that involves several actors (e.g. experts of the domain, 3D modelers and VR programmers, designers or communication/marketing experts). Depending on the targeted application, the number and profiles of the actors involved may change. Today's limitations and difficulties are mainly due to the fact that no strong relationship exists between the domain experts who have the creative ideas, the digitally skilled actors, and the tools and shape models taking part in the virtual-environment development process. Existing tools mainly focus on the detailed geometric definition of shapes and are not suitable to effectively support creativity and innovation, which are key to successful products and applications. In addition, the huge amount of available digital data is not fully exploited. Clearly, these data could be used as a source of inspiration for new solutions, since innovative ideas frequently come from the (unforeseen) combination of existing elements. Software tools allowing the reuse and combination of such digital data would therefore be an effective support for the conceptual design phase of both single shapes and VR environments. To answer these needs, this thesis proposes a new approach and system for the conceptual design of VRs and associated digital assets that takes existing shape resources and integrates and combines them while keeping their semantic meaning. To support this, a Generic Shape Description Model (GSDM) is introduced. This model allows the combination of multimodal data (e.g. images and 3D meshes) according to three levels: conceptual, intermediate and data. The conceptual level expresses what the different parts of a shape are and how they are combined. Each part of a shape is defined by an Element that can be either a Component or a Group of Components when they share common characteristics (e.g. behavior, meaning). Elements are linked by Relations defined at the conceptual level, where the domain experts act and exchange. Each Component is then further described at the data level by its associated Geometry, Structure and potentially attached Semantics. In the proposed approach, a Component is a part of an image or a part of a 3D mesh. Four types of Relation are proposed (merging, assembly, shaping and location) and decomposed into a set of Constraints which control the relative position, orientation and scaling of the Components within the 3D viewer. Constraints are stored at the intermediate level and act on Key Entities (such as points and lines) lying on the Geometry or Structure of the Components. All these constraints are finally solved while minimizing a physically based energy function. Most of the concepts of GSDM have been implemented and integrated into a user-oriented conceptual design tool developed entirely by the author, and different examples created with this tool demonstrate the potential of the proposed approach.
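One plausible transcription of GSDM's three levels into types, useful for seeing how the pieces relate: the class names follow the abstract, while the field choices are assumptions rather than the thesis's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """Data level: a part of an image or a part of a 3D mesh."""
    geometry: object                     # e.g. mesh patch or image region
    structure: object = None             # skeleton, key entities, ...
    semantics: dict = field(default_factory=dict)

@dataclass
class Group:
    """Components sharing common characteristics (behavior, meaning)."""
    members: list                        # of Component

@dataclass
class Constraint:
    """Intermediate level: acts on key entities (points, lines) of components;
    resolved by minimising a physically based energy function."""
    kind: str                            # 'position' | 'orientation' | 'scale'
    entities: list                       # key entities the constraint touches

@dataclass
class Relation:
    """Conceptual level: one of merging, assembly, shaping, location."""
    kind: str
    elements: list                       # Components and/or Groups
    constraints: list                    # the Constraints it decomposes into

# A toy 'assembly' of two mesh parts, expressed as one relative-position
# constraint between anchor points of each component.
wing = Component(geometry="wing_mesh", semantics={"meaning": "wing"})
body = Component(geometry="body_mesh", semantics={"meaning": "fuselage"})
assembly = Relation("assembly", [wing, body],
                    [Constraint("position", ["wing_root", "body_socket"])])
print(assembly.kind, len(assembly.constraints))
```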
25

Gestion et visualisation de données hétérogènes multidimensionnelles : application PLM à la neuroimagerie / Management and visualisation of heterogeneous multidimensional data: PLM application to neuroimaging

Allanic, Marianne, 17 December 2015
The neuroimaging domain faces difficulties in analysing and reusing the growing amount of heterogeneous data it produces. Data provenance is complex (multi-subject, multi-analysis, multi-temporality) and the data are only partially stored, restricting multimodal and longitudinal studies. In particular, functional brain connectivity is studied to understand how areas of the brain work together. Raw and derived imaging data must be properly managed along several dimensions, such as acquisition time, time between two acquisitions, or subjects and their characteristics. The objective of this thesis is to enable the exploration of complex relationships between heterogeneous data, which breaks down into two parts: (1) how to manage the data and their provenance, and (2) how to visualise multidimensional data structures. The contribution follows a logical sequence of three propositions, presented after a survey of heterogeneous data management and graph visualisation. The BMI-LM (Bio-Medical Imaging – Lifecycle Management) data model organises the management of neuroimaging data according to the phases of a study and accounts for the evolving nature of research through specific classes associated with generic objects. The implementation of this model in a PLM (Product Lifecycle Management) system shows that concepts developed over twenty years for the manufacturing industry can be reused to manage neuroimaging data. GMDs (Dynamic Multidimensional Graphs) are introduced to represent complex relationships between data that evolve along several dimensions, and the JGEX (Json Graph EXchange) format was created to store and exchange GMDs between software applications. The OCL (Overview Constraint Layout) method allows interactive visual exploration of GMDs. It relies on partially preserving the user's mental map and on alternating between complete and reduced views of the data. The OCL method is applied to the study of the resting-state functional brain connectivity of 231 subjects represented as a GMD (the areas of the brain are the nodes and the connectivity measures are the edges) as a function of age, gender and laterality; the GMDs are computed by running processing workflows on MRI acquisitions in the PLM system. The results show two main benefits of the OCL method: (1) the identification of global trends along one or several dimensions, and (2) the highlighting of local changes between GMD states.
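The abstract does not publish the JGEX schema, but the idea of a GMD, a graph whose edge values vary along dimensions such as age or gender, can be conveyed with a small JSON payload. Everything below, field names included, is a hypothetical illustration rather than the actual format.

```python
import json

# Hypothetical GMD payload: brain areas as nodes, one connectivity value per
# combination of dimension states on each edge (layout invented for
# illustration; not the actual JGEX schema).
gmd = {
    "dimensions": {"age": ["20-40", "40-60"], "gender": ["F", "M"]},
    "nodes": [{"id": "precuneus"}, {"id": "hippocampus"}],
    "edges": [{
        "source": "precuneus",
        "target": "hippocampus",
        "weights": {
            "20-40/F": 0.61, "20-40/M": 0.58,
            "40-60/F": 0.47, "40-60/M": 0.45,
        },
    }],
}
print(json.dumps(gmd, indent=2))
```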
27

Abordagem para integração automática de dados estruturados e não estruturados em um contexto Big Data / Approach for automatic integration of structured and unstructured data in a Big Data context

Saes, Keylla Ramos, 22 November 2018
The increasing amount of data available for use has sparked interest in generating knowledge by integrating such data. However, the integration task requires knowledge of the data and of the data models used to represent them. In other words, carrying out data integration requires the participation of computing specialists, which limits the scalability of this kind of task. In a Big Data context, this limitation is reinforced by the presence of a wide variety of sources and heterogeneous data-representation models, such as relational models with structured data and non-relational models with unstructured data; this variety of representations adds complexity to the data-integration process. Handling this scenario requires integration tools that reduce or even eliminate the need for human intervention. As a contribution, this work offers the possibility of integrating diverse data-representation models and heterogeneous data sources through an approach that allows the use of varied techniques, such as algorithms that compare the structural similarity of the data and artificial-intelligence algorithms, and that enables the integration of heterogeneous data through the generation of integrator metadata. This flexibility, which makes it possible to deal with the growing variety of data, is provided by the modularisation of the proposed architecture, which enables automatic data integration in a Big Data context, without human intervention.
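One of the techniques the abstract lists, comparison by structural similarity, can be sketched as schema matching: score field-name pairs across two sources and keep the confident ones as integrator metadata. The scoring function and threshold below are deliberately simplistic placeholders for the thesis's actual algorithms.

```python
from difflib import SequenceMatcher

def field_similarity(a: str, b: str) -> float:
    # Plain string similarity as a stand-in for structural comparison.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def integrator_metadata(schema_a, schema_b, threshold=0.7):
    """Propose field correspondences between two heterogeneous sources."""
    mapping = {}
    for fa in schema_a:
        best = max(schema_b, key=lambda fb: field_similarity(fa, fb))
        if field_similarity(fa, best) >= threshold:
            mapping[fa] = best
    return mapping

# Relational source vs. document-store source.
print(integrator_metadata(["customer_name", "birth_date"],
                          ["CustomerName", "birthDate"]))
# -> {'customer_name': 'CustomerName', 'birth_date': 'birthDate'}
```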
29

Extraction of mobility information through heterogeneous data fusion : a multi-source, multi-scale, and multi-modal problem / Fusion de données hétérogènes pour l'extraction d'informations de mobilité : un problème multi-source, multi-échelle, et multi-modal

Thuillier, Etienne, 11 December 2017
Today it is a fact that we live in a world where ecological, economic and societal issues are increasingly pressing. At the crossroads of the various approaches envisaged to address these problems, a more accurate vision of human mobility is a central axis, with repercussions on all related fields such as transport, social sciences, urban planning, management policies and ecology. It is also in a context of strong budgetary constraints that the main mobility actors in the territories seek to rationalise transport services and the movements of individuals. Human mobility is therefore a strategic challenge both for local communities and for users, and it must be observed, understood and anticipated. The study of mobility is based above all on precise observation of users' movements across territories. Today mobility operators focus mainly on the massive use of user data. The simultaneous use of multi-source, multi-modal, and multi-scale data opens many possibilities, but it presents major technological and scientific challenges. The mobility models presented in the literature are too often restricted to limited experimental areas using calibrated data, and their application in real contexts and on a larger scale is therefore questionable. We thus identify two major issues that address this need for better knowledge of human mobility and for better application of this knowledge. The first issue concerns the extraction of mobility information from heterogeneous data fusion. The second concerns the relevance of this fusion in a real context and on a larger scale. These issues are addressed in this dissertation: the first through two data-fusion models that allow the extraction of mobility information, the second through the application of these fusion models within the ANR Norm-Atis project. In this thesis we follow the development of a whole chain of processes. Starting with a study of human mobility and then of mobility models, we present two data-fusion models and analyse their relevance in a concrete case. The first model extracts 12 typical mobility behaviours. It is based on unsupervised learning of mobile-phone data. We validate our results using official INSEE data and infer from them dynamic behaviours that cannot be observed in traditional mobility data, which is a strong added value of our model. The second model decomposes mobility flows into six mobility purposes. It is based on supervised learning of mobility-survey data and static land-use data. This model is then applied to the aggregated data within the Norm-Atis project. The computation times are low enough for this model to be applied in a real-time context.
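The abstract specifies unsupervised learning of mobile-phone data yielding 12 typical behaviours, but not the algorithm. The sketch below uses k-means on an invented hour-of-day activity profile purely to make the pipeline shape concrete; both the features and the choice of k-means are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per user, 24 columns holding the share
# of mobile-phone events observed in each hour of an average day.
rng = np.random.default_rng(42)
profiles = rng.random((500, 24))
profiles /= profiles.sum(axis=1, keepdims=True)

# The thesis extracts 12 typical behaviours; k-means is one plausible
# unsupervised choice (the abstract does not name the algorithm used).
model = KMeans(n_clusters=12, n_init=10, random_state=0).fit(profiles)
print(np.bincount(model.labels_))  # users per behaviour cluster
```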
30

Fusion d'images de télédétection hétérogènes par méthodes crédibilistes / Fusion of heterogeneous remote sensing images by credibilist methods

Hammami, Imen, 08 December 2017
With the advent of new image-acquisition techniques and the emergence of high-resolution satellite systems, the remote sensing data to be exploited have become increasingly rich and varied. Combining them has thus become essential to improve the process of extracting useful information related to the physical nature of the observed surfaces. However, these data are generally heterogeneous and imperfect, which poses several problems for their joint processing and requires the development of specific methods. This thesis falls within this context and aims to develop a new evidential fusion method dedicated to processing heterogeneous high-resolution remote sensing images. To achieve this objective, we first focus our research on a new approach to estimating belief functions, based on Kohonen maps, in order to simplify the assignment of masses over the large volumes of data occupied by these images. The proposed method models not only the ignorance and imprecision of our information sources, but also their paradox. We then exploit this estimation approach to propose an original fusion technique that addresses the problems caused by the wide variety of knowledge provided by these heterogeneous sensors. Finally, we study how the dependence between these sources can be taken into account in the fusion process by means of copula theory, and a new technique for choosing the most appropriate copula is introduced. The experimental part of this work is devoted to land-use mapping of agricultural areas using SPOT-5 and RADARSAT-2 images. The experimental study carried out demonstrates the robustness and effectiveness of the approaches developed in this thesis.
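The thesis's own contributions (Kohonen-map-based mass estimation, a dedicated fusion technique, copula-based dependence modelling) are beyond a short snippet, but the standard evidential machinery they build on can be shown: Dempster's rule combining two sensors' mass functions over a small frame of discernment. The per-pixel masses below are invented; in the thesis they would come from the Kohonen-based estimation.

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's rule for mass functions over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to incompatible hypotheses
    # Normalise by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

URBAN, VEG = frozenset({"urban"}), frozenset({"vegetation"})
THETA = URBAN | VEG  # the whole frame: total ignorance

# Hypothetical masses for one pixel: optical sensor vs. radar sensor.
m_optical = {URBAN: 0.6, VEG: 0.1, THETA: 0.3}
m_radar   = {URBAN: 0.4, VEG: 0.3, THETA: 0.3}
print(dempster(m_optical, m_radar))  # fused beliefs, urban reinforced
```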
