141

Estimation de la hauteur des arbres à l'échelle régionale : application à la Guyane Française / Canopy height estimation on a regional scale : Application to French Guiana

Fayad, Ibrahim 15 June 2015 (has links)
La télédétection contribue à la cartographie et à la modélisation des paramètres forestiers. Ce sont les systèmes optiques et radars qui sont le plus généralement utilisés pour extraire des informations utiles à la caractérisation de ces paramètres. Ces systèmes ont montré de bons résultats pour estimer la biomasse dans certains biomes. Cependant, ils présentent des limitations importantes pour des forêts ayant un niveau de biomasse élevé. En revanche, la télédétection LiDAR s'est avérée être une bonne technique pour l'estimation des paramètres forestiers tels que la hauteur de la canopée et la biomasse. Alors que les LiDAR aéroportés acquièrent en général des données avec une forte densité de points mais sur de petites zones en raison du coût de leur acquisition, les données LiDAR satellitaires acquises par le système spatial GLAS ont une densité d'acquisition faible mais une couverture géographique mondiale. Il est donc utile d'analyser la pertinence de l'intégration des hauteurs estimées à partir des capteurs LiDAR et des données auxiliaires afin de proposer une carte de la hauteur des arbres avec une bonne précision et une résolution spatiale élevée. En outre, l'estimation de la hauteur des arbres à partir du GLAS est difficile compte tenu de l'interaction complexe entre les formes d'onde LiDAR, le terrain et la végétation, en particulier dans les forêts denses. Par conséquent, la recherche menée dans cette thèse vise à : 1) estimer et valider la hauteur des arbres en utilisant des données acquises par le LiDAR aéroporté et GLAS ; 2) évaluer le potentiel de la fusion des données LiDAR (aéroportées ou satellitaires) et des données auxiliaires pour l'estimation de la hauteur des arbres à une échelle régionale (Guyane française). L'estimation de la hauteur avec le LiDAR aéroporté a montré une EQM sur les estimations de 1,6 m. Ensuite, le potentiel de GLAS pour l'estimation de la hauteur a été évalué en utilisant des modèles de régression linéaire multiple (ML) ou Random Forest (RF) avec des métriques provenant de la forme d'onde et de l'ACP. Les résultats ont montré que les modèles d'estimation des hauteurs avaient des précisions semblables en utilisant soit les métriques de GLAS, soit les composantes principales (PC) obtenues à partir des formes d'onde GLAS (EQM ~ 3,6 m). Toutefois, un modèle de régression (ML ou RF) basé sur les PC est une alternative pour l'estimation de la hauteur, car il ne nécessite pas l'extraction de certaines métriques de GLAS qui sont en général difficiles à dériver dans les forêts denses. Finalement, la hauteur extraite à la fois des données LiDAR aéroportées et GLAS a servi tout d'abord à spatialiser la hauteur en utilisant les données environnementales cartographiées. En utilisant le RF, la spatialisation de la hauteur des arbres a montré une EQM sur les estimations de la hauteur de 6,5 m à partir de GLAS et de 5,8 m à partir du LiDAR aéroporté. Ensuite, afin d'améliorer la précision de la spatialisation de la hauteur, la technique de régression-krigeage (krigeage des résidus de la régression RF) a été utilisée. Les résultats de la régression-krigeage indiquent une diminution de l'erreur quadratique moyenne de 6,5 à 4,2 m pour les cartes de la hauteur de la canopée à partir de GLAS, et de 5,8 à 1,8 m pour les cartes de la hauteur de la canopée à partir des données LiDAR aéroportées.
Enfin, afin d'étudier l'impact de l'échantillonnage spatial des futures missions LiDAR sur la précision des estimations de la hauteur de la canopée, six sous-ensembles ont été extraits de la base LiDAR aéroportée. Ces six sous-ensembles de données LiDAR ont respectivement un espacement des lignes de vol de 5, 10, 20, 30, 40 et 50 km. Finalement, en utilisant la technique de régression-krigeage, l'EQM sur la carte des hauteurs était de 1,8 m pour le sous-ensemble ayant des lignes de vol espacées de 5 km, et a augmenté jusqu'à 4,8 m pour le sous-ensemble ayant des lignes de vol espacées de 50 km. / Remote sensing has facilitated the techniques used for the mapping, modelling and understanding of forest parameters. Remote sensing applications usually rely on information from either passive optical systems or active radar sensors. These systems have shown satisfactory results for estimating, for example, aboveground biomass in some biomes. However, they present significant limitations for ecological applications, as the sensitivity of these sensors has been shown to be limited in forests with medium levels of aboveground biomass. On the other hand, LiDAR remote sensing has been shown to be a good technique for the estimation of forest parameters such as canopy height and aboveground biomass. Whilst airborne LiDAR data are in general very dense but only available over small areas due to the cost of their acquisition, spaceborne LiDAR data acquired from the Geoscience Laser Altimeter System (GLAS) have low acquisition density but global geographical coverage. It is therefore valuable to analyze the relevance of integrating canopy heights estimated from LiDAR sensors with ancillary data (geological, meteorological, slope, vegetation indices, etc.) in order to propose a forest canopy height map with good precision and high spatial resolution. In addition, estimating forest canopy heights from large-footprint satellite LiDAR waveforms is challenging given the complex interaction between LiDAR waveforms, terrain, and vegetation, especially in dense tropical and equatorial forests. Therefore, the research carried out in this thesis aimed to: 1) estimate and validate canopy heights using raw data from airborne LiDAR, and then evaluate the potential of spaceborne LiDAR GLAS data for estimating forest canopy heights; 2) evaluate the fusion potential of LiDAR (using either spaceborne or airborne data) and ancillary data for forest canopy height estimation at very large scales. This research work was carried out over French Guiana. The estimation of canopy heights using the airborne LiDAR data showed an RMSE on the canopy height estimates of 1.6 m. Next, the potential of GLAS for the estimation of canopy heights was assessed using multiple linear (ML) and Random Forest (RF) regressions with waveform metrics and principal component analysis (PCA). Results showed canopy height estimations with similar precisions using either LiDAR metrics or the principal components (PCs) (RMSE ~ 3.6 m). However, a regression model (ML or RF) based on the PCA of waveform samples is an interesting alternative for canopy height estimation, as it does not require the extraction of some metrics from LiDAR waveforms that are in general difficult to derive in dense forests, such as those in French Guiana. Next, canopy heights extracted from both airborne and spaceborne LiDAR were first used to map canopy heights from available mapped environmental data (geological, meteorological, slope, vegetation indices, etc.).
Results showed an RMSE on the canopy height estimates of 6.5 m from the GLAS dataset and of 5.8 m from the airborne LiDAR dataset. Then, in order to improve the precision of the canopy height estimates, regression-kriging (kriging of the random forest regression residuals) was used. Results indicated a decrease in the RMSE from 6.5 to 4.2 m for the regression-kriging maps from the GLAS dataset, and from 5.8 to 1.8 m for the regression-kriging map from the airborne LiDAR dataset. Finally, in order to study the impact of the spatial sampling of future LiDAR missions on the precision of canopy height estimates, six subsets were derived from the airborne LiDAR dataset with flight line spacings of 5, 10, 20, 30, 40 and 50 km (corresponding to 0.29, 0.11, 0.08, 0.05, 0.04, and 0.03 points/km², respectively). Results indicated that, using the regression-kriging approach, the RMSE on the canopy height map was 1.8 m with a flight line spacing of 5 km and increased to 4.8 m for the 50 km flight line spacing.
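To make the regression-kriging step described in this abstract concrete, here is a minimal illustrative Python sketch (not the thesis implementation): a random forest predicts canopy height from environmental covariates, and a Gaussian process over the sample coordinates stands in for ordinary kriging of the residuals. All data, names and hyperparameters below are assumptions chosen for illustration.

```python
# Illustrative regression-kriging sketch (not the thesis code).
# Assumption: a Gaussian process on coordinates stands in for ordinary kriging.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n = 500
coords = rng.uniform(0, 100, size=(n, 2))        # sample positions (km)
covars = rng.normal(size=(n, 4))                  # environmental covariates
height = (25 + 3 * covars[:, 0] - 2 * covars[:, 1]
          + 4 * np.sin(coords[:, 0] / 15) + rng.normal(0, 1.5, n))  # canopy height (m)

# 1) Regression step: random forest on the mapped covariates.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(covars, height)
residuals = height - rf.predict(covars)

# 2) "Kriging" step: spatially interpolate the regression residuals.
gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0),
                              normalize_y=True).fit(coords, residuals)

# 3) Prediction at new locations = regression trend + interpolated residual.
new_coords = rng.uniform(0, 100, size=(5, 2))
new_covars = rng.normal(size=(5, 4))
prediction = rf.predict(new_covars) + gp.predict(new_coords)
print(prediction)
```

The final prediction adds the spatially interpolated residual back onto the regression trend, which is the essence of the regression-kriging idea evaluated in the abstract.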
142

Um modelo de fusão de rankings baseado em análise de preferência / A model of ranking fusion based on preference analysis

Dutra Junior, Elmário Gomes January 2008 (has links)
O crescente volume de informações disponíveis na rede mundial de computadores gera a necessidade do uso de ferramentas que sejam capazes de localizá-las e ordená-las, de forma cada vez mais precisa e que demandem cada vez menos recursos computacionais. Esta necessidade tem motivado pesquisadores a estudar e desenvolver modelos e técnicas que atendam esta demanda. Estudos recentes têm sinalizado que utilizar vários ordenamentos (rankings) previamente montados possibilita o retorno e ordenação de objetos de qualquer natureza com mais eficiência, principalmente pelo fato de haver uma redução no custo da busca pela informação. Este processo, conhecido como fusão de rankings, permite que se obtenha um ordenamento com base na opinião de diversos juízes (critérios), o que possibilita considerar um grande número de fontes, tanto geradas automaticamente como por especialistas. Entretanto, os modelos propostos até então têm apresentado várias limitações na sua aplicação: desde a quantidade de rankings envolvidos até, principalmente, a utilização de rankings parciais. A proposta desta dissertação é apresentar um modelo de fusão de rankings que busca estabelecer um consenso entre as opiniões (rankings) dos diferentes juízes envolvidos, considerando distintos graus de relevância ou importância entre eles. A base desta proposta está na Análise de Preferência, um conjunto de técnicas que permite o tratamento da multidimensionalidade dos dados envolvidos. Ao ser testado em uma aplicação real, o modelo mostrou conseguir suprir algumas limitações apresentadas em outras abordagens, bem como apresentou resultados similares aos das aplicações originais. Esta pesquisa ainda contribui com a especificação de um sistema Web baseado em tecnologias open source, o qual permite que qualquer pessoa possa realizar a fusão de rankings. / The growing volume of information available on the web creates the need for tools capable of retrieving and ordering this information ever more precisely while using fewer computational resources. This need has motivated researchers to study and develop models and techniques that meet this demand. Recent studies have indicated that using multiple previously built rankings makes it possible to retrieve and sort objects of any kind more efficiently, mainly because it reduces the cost of searching for information. This process, called ranking fusion, provides an ordering based on the opinion of several judges (criteria), which makes it possible to consider a large number of sources, generated either automatically or by specialists. However, the models proposed so far have shown several limitations in their application: from the number of rankings involved to, above all, the use of partial rankings. The proposal of this dissertation is to present a ranking fusion model that seeks to establish a consensus among the opinions (rankings) of the different judges involved, considering distinct degrees of relevance or importance among them. The basis of this proposal is Preference Analysis, a set of techniques that allows the multidimensionality of the data involved to be handled. When tested in a real application, the model overcame some limitations presented by other approaches and produced results similar to those of the original applications. This research also contributes the specification of a web system based on open-source technologies, which allows anyone to perform ranking fusion.
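As a purely illustrative sketch of rank fusion with weighted judges and partial rankings — a much simpler scheme than the preference-analysis model described above; the weighted Borda scoring and the example data are assumptions, not the dissertation's formulation:

```python
# Weighted Borda-style rank fusion over partial rankings (illustrative only).
from collections import defaultdict

def fuse_rankings(rankings, weights):
    """rankings: list of (possibly partial) rankings, best first; weights: one per judge."""
    scores = defaultdict(float)
    for ranking, w in zip(rankings, weights):
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += w * (n - pos)   # Borda points, scaled by the judge's weight
    return sorted(scores, key=scores.get, reverse=True)

judges = [["a", "b", "c", "d"],   # judge 1: full ranking
          ["b", "a"],             # judge 2: partial ranking
          ["c", "a", "b"]]        # judge 3: partial ranking
print(fuse_rankings(judges, weights=[1.0, 0.5, 2.0]))
```

Items absent from a judge's partial ranking simply receive no points from that judge, which is one naive way of handling the partial-ranking limitation the abstract mentions.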
143

Perception intelligente pour la navigation rapide de robots mobiles en environnement naturel / Intelligent perception for fast navigation of mobile robots in natural environments

Malartre, Florent 16 June 2011 (has links)
Cette thèse concerne la perception de l'environnement pour le guidage automatique d'un robot mobile. Lorsque l'on souhaite réaliser un système de navigation autonome, plusieurs éléments doivent être abordés. Parmi ceux-ci nous traiterons de la franchissabilité de l'environnement sur la trajectoire du véhicule. Cette franchissabilité dépend notamment de la géométrie et du type de sol mais également de la position du robot par rapport à son environnement (dans un repère local) ainsi que de l'objectif qu'il doit atteindre (dans un repère global). Les travaux de cette thèse traitent donc de la perception de l'environnement d'un robot au sens large du terme en adressant la cartographie de l'environnement et la localisation du véhicule. Pour cela, un système de fusion de données est proposé afin d'estimer ces informations. Ce système de fusion est alimenté par plusieurs capteurs dont une caméra, un télémètre laser et un GPS. L'originalité de ces travaux porte sur la façon de combiner ces informations capteurs. À la base du processus de fusion, nous utilisons un algorithme d'odométrie visuelle basé sur les images de la caméra. Pour accroître la précision et la robustesse, l'initialisation de la position des points sélectionnés se fait grâce à un télémètre laser qui fournit les informations de profondeur. De plus, le positionnement dans un repère global est effectué en combinant cette odométrie visuelle avec les informations GPS. Pour cela, un procédé a été mis en place pour assurer l'intégrité de localisation du véhicule avant de fusionner sa position avec les données GPS. La cartographie de l'environnement est tout aussi importante puisqu'elle va permettre de calculer le chemin qui assurera au véhicule une évolution sans risque de collision ou de renversement. Dans cette optique, le télémètre laser déjà présent dans le processus de localisation est utilisé pour compléter la liste courante de points 3D qui matérialisent le terrain à l'avant du véhicule. En combinant la localisation précise du véhicule avec les informations denses du télémètre, il est possible d'obtenir une cartographie précise, dense et géo-localisée de l'environnement. Tous ces travaux ont été expérimentés sur un simulateur robotique développé pour l'occasion puis sur un véhicule tout-terrain réel évoluant dans un monde naturel. Les résultats de cette approche ont montré la pertinence de ces travaux pour le guidage autonome de robots mobiles. / This thesis addresses the perception of the environment for the automatic guidance of a mobile robot. When one wishes to achieve autonomous navigation, several elements must be addressed. Among them we will discuss the traversability of the environment on the vehicle path. This traversability depends on the geometry and type of the ground, and also on the position of the robot in its environment (in a local coordinate system), taking into account the objective to be reached (in a global coordinate system). The work of this thesis deals with the environment perception of a robot in the broad sense, by addressing the mapping of the environment and the localization of the vehicle. To do this, a data fusion system is proposed to estimate this information. The fusion system is fed by several low-cost sensors, including a camera, a laser rangefinder and a GPS receiver. The originality of this work lies in how the information from these sensors is combined. The basis of the fusion process is a visual odometry algorithm based on camera images.
To increase the accuracy and the robustness, the initialization of the position of the selected points is done with a laser rangefinder that provides the depth information. In addition, the localization in a global reference frame is performed by combining the visual odometry with GPS information. For this, a process has been established to ensure the integrity of the vehicle localization before merging its position with the GPS data. The mapping of the environment is just as important, as it allows the computation of a path that ensures the vehicle can move without risk of collision or overturning. From this perspective, the rangefinder already present in the localization process is used to complete the current list of 3D points that represent the terrain in front of the vehicle. By combining an accurate localization of the vehicle with the dense information from the rangefinder, it is possible to obtain an accurate, dense and geo-located map of the environment. All this work has been tested on a robotic simulator developed for this purpose and on a real all-terrain vehicle moving in a natural environment. The results of this approach have shown the relevance of this work for the autonomous guidance of mobile robots.
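A minimal sketch of the general odometry/GPS combination described above — not the integrity-checked filter developed in the thesis; the scalar-gain correction, noise values and synthetic trajectory are assumptions:

```python
# Minimal 2-D position fusion: dead reckoning from odometry increments,
# corrected by GPS fixes with a scalar Kalman-style gain (illustrative only).
import numpy as np

def fuse(odometry_steps, gps_fixes, sigma_odo=0.05, sigma_gps=2.0):
    pos = np.zeros(2)          # estimated position
    var = 0.0                  # scalar position variance (isotropic assumption)
    track = []
    for step, gps in zip(odometry_steps, gps_fixes):
        pos = pos + step                        # predict with the visual odometry increment
        var = var + sigma_odo ** 2
        if gps is not None:                     # correct only when a GPS fix is available
            gain = var / (var + sigma_gps ** 2)
            pos = pos + gain * (np.asarray(gps) - pos)
            var = (1.0 - gain) * var
        track.append(pos.copy())
    return np.array(track)

steps = [np.array([1.0, 0.2])] * 10                       # odometry increments (m)
gps = [None] * 5 + [np.array([5.3, 1.1])] + [None] * 4    # one GPS fix mid-run
print(fuse(steps, gps)[-1])
```

The gain weights the correction by the relative confidence in the dead-reckoned estimate versus the GPS fix, which is the basic trade-off any such fusion scheme has to manage.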
145

A Bayesian Synthesis Approach to Data Fusion Using Augmented Data-Dependent Priors

January 2017 (has links)
abstract: The process of combining data is one in which information from disjoint datasets sharing at least some common variables is merged. This process is commonly referred to as data fusion, with the main objective of creating a new dataset permitting more flexible analyses than the separate analysis of each individual dataset. Many data fusion methods have been proposed in the literature, although most utilize the frequentist framework. This dissertation investigates a new approach called Bayesian Synthesis, in which information obtained from one dataset acts as the prior for the analysis of the next. This process continues sequentially until a single posterior distribution is created using all available data. These informative, augmented data-dependent priors provide an extra source of information that may improve the accuracy of estimation. To examine the performance of the proposed Bayesian Synthesis approach, results for simulated data with known population values were first examined under a variety of conditions. These results were then compared to those from the traditional maximum likelihood approach to data fusion, as well as to the data fusion approach analyzed via Bayesian estimation. Parameter recovery under the proposed Bayesian Synthesis approach was evaluated using four criteria reflecting raw bias, relative bias, accuracy, and efficiency. Subsequently, empirical analyses with real data were conducted, based on the fusion of data from five longitudinal studies of mathematics ability varying in their assessment of ability and in the timing of measurement occasions. Results from the Bayesian Synthesis and data fusion approaches with combined data using Bayesian and maximum likelihood estimation methods are reported. The results illustrate that Bayesian Synthesis with data-driven priors is a highly effective approach, provided that the sample sizes for the fused data are large enough to provide unbiased estimates. Bayesian Synthesis thus provides another beneficial approach to data fusion that can effectively be used to enhance the validity of conclusions obtained from the merging of data from different studies. / Dissertation/Thesis / Doctoral Dissertation Psychology 2017
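The sequential idea behind Bayesian Synthesis can be illustrated with a minimal conjugate sketch, in which the posterior for a normal mean obtained from one study becomes the prior for the next. The normal model with known variance and the synthetic studies below are assumptions for illustration, not the dissertation's models.

```python
# Sequential conjugate updating: each study's posterior is the next study's prior.
import numpy as np

def sequential_posterior(datasets, sigma=1.0, prior_mu=0.0, prior_var=100.0):
    mu, var = prior_mu, prior_var
    for y in datasets:                        # one pass per study, in sequence
        n = len(y)
        post_var = 1.0 / (1.0 / var + n / sigma ** 2)
        mu = post_var * (mu / var + np.sum(y) / sigma ** 2)
        var = post_var                        # becomes the data-dependent prior next round
    return mu, var

rng = np.random.default_rng(1)
studies = [rng.normal(2.0, 1.0, size=n) for n in (30, 50, 80)]
print(sequential_posterior(studies))          # posterior mean and variance after all studies
```

With a conjugate model the order of the studies does not change the final posterior; the sketch only shows the "posterior becomes prior" mechanism that the abstract describes.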
146

Planification visuelle et interactive d'interventions dans des environnements d'accélérateur de particules émettant des rayonnements ionisants / Interactive visual intervention planning in particle accelerator environments with ionizing radiation

Fabry, Thomas 30 May 2014 (has links)
Les radiations sont omniprésentes. Elles ont de nombreuses applications dans des domaines variés : en médecine, elles permettent de réaliser des diagnostics et de guérir des patients ; en communication, tous les systèmes modernes utilisent des formes de rayonnements électromagnétiques ; et en science, les chercheurs les utilisent pour découvrir la composition et la structure des matériaux, pour n'en nommer que quelques-unes. Concrètement, la radiation est un processus au cours duquel des particules ou des ondes voyagent à travers différents types de matériaux. La radiation peut être très énergétique, et aller jusqu'à casser les atomes de la matière ordinaire. Dans ce cas, on parlera de radiation ionisante. Il est communément admis que la radiation ionisante peut être bien plus nocive pour les êtres vivants que la radiation non ionisante. Dans cette dissertation, nous traiterons de la radiation ionisante. La radioactivité est le processus d'émission des radiations ionisantes. Elle existe sous forme naturelle : elle est présente dans les sols et dans l'air, et notre planète entière est bombardée en permanence de rayonnements cosmiques énergétiques. Depuis le début du XXe siècle, les chercheurs sont capables de créer artificiellement de la matière radioactive. Cette découverte a offert de multiples avancées technologiques, mais a eu également de lourdes conséquences pour l'humanité comme l'ont démontré les évènements de Tchernobyl et de Fukushima ou d'autres accidents dans le monde médical. Cette dangerosité a conduit à l'élaboration d'un système de radioprotection. Dans la pratique, la radioprotection est principalement mise en œuvre en utilisant la méthode ALARA. Cette méthodologie consiste à justifier, optimiser et limiter les doses reçues. Elle est utilisée conjointement avec les limites légales. Le facteur d'optimisation est contraint par le fait que l'exposition volontaire d'un travailleur aux radiations lors d'une opération doit être plus bénéfique que si aucune intervention humaine n'était conduite dans une situation donnée. Dans le monde industriel et scientifique, il existe des infrastructures qui émettent des rayonnements ionisants. La plupart d'entre elles nécessitent des opérations de maintenance. Dans l'esprit du principe ALARA, ces interventions doivent être optimisées pour réduire l'exposition des travailleurs aux rayonnements ionisants. Cette optimisation ne peut pas être réalisée de manière automatique car la faisabilité des interventions nécessite dans tous les cas une évaluation humaine. La planification des interventions peut cependant être facilitée par des moyens techniques et scientifiques comme par exemple un outil informatique. Dans le contexte décrit ci-dessus, cette thèse regroupe des considérations techniques et scientifiques, et présente la méthodologie utilisée pour développer des outils logiciels pour la mise en œuvre de la radioprotection. / Radiation is omnipresent. It has many interesting applications: in medicine, where it allows curing and diagnosing patients; in communication, where modern communication systems make use of electromagnetic radiation; and in science, where it is used to discover the structure of materials; to name a few. Physically, radiation is a process in which particles or waves travel through any kind of material, usually air. Radiation can be very energetic, in which case it can break the atoms of ordinary matter (ionization). If this is the case, radiation is called ionizing.
It is known that ionizing radiation can be far more harmful to living beings than non-ionizing radiation. In this dissertation, we are concerned with ionizing radiation. Naturally occurring ionizing radiation, in the form of radioactivity, is a most natural phenomenon: almost everything is radioactive. There is radiation emerging from the soil, it is in the air, and the whole planet is constantly exposed to streams of energetic cosmic radiation. Since the beginning of the twentieth century, we have also been able to artificially create radioactive matter. This has opened up many interesting technological opportunities, but has also given humanity a tremendous responsibility, as the nuclear accidents in Chernobyl and Fukushima, and various accidents in the medical world, have made clear. This has led to the elaboration of a radiological protection system. In practice, the radiological protection system is mostly implemented using a methodology indicated by the acronym ALARA: As Low As Reasonably Achievable. This methodology consists of justifying, optimizing and limiting the radiation dose received, and is applied in conjunction with the legal limits. The word "reasonably" means that the optimization of radiation exposure has to be seen in context: the optimization is constrained by the requirement that the positive effects of an operation must outweigh the negative effects caused by the radiation. Several industrial and scientific activities give rise to facilities with ionizing radiation, and most technical and scientific facilities also need maintenance operations. In the spirit of ALARA, these interventions need to be optimized in terms of the exposure of the maintenance workers to ionizing radiation. This optimization cannot be automated, since the feasibility of the intervention tasks requires human assessment. The intervention planning can, however, be facilitated by technical-scientific means, e.g. software tools. In the context sketched above, this thesis provides technical-scientific considerations and the development of technical-scientific methodologies and software tools for the implementation of radiation protection. In particular, this thesis addresses the need for an interactive visual intervention planning tool in the context of high-energy particle accelerator facilities.
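As a toy illustration of the kind of dose bookkeeping that ALARA-driven intervention planning relies on — cumulative dose as dose rate times residence time, summed over waypoints — with dose-rate values and waypoint names invented purely for illustration:

```python
# Toy dose-budget comparison of two intervention plans (illustrative only).
def cumulative_dose(waypoints, dose_rate_map, residence_times):
    """dose_rate_map: microSv/h per waypoint id; residence_times: hours per waypoint."""
    return sum(dose_rate_map[w] * t for w, t in zip(waypoints, residence_times))

dose_rate = {"corridor": 2.0, "magnet": 150.0, "shielded_area": 0.5}  # microSv/h (invented)
plan_a = cumulative_dose(["corridor", "magnet", "corridor"], dose_rate, [0.5, 0.4, 0.5])
plan_b = cumulative_dose(["corridor", "shielded_area", "magnet"], dose_rate, [0.5, 1.0, 0.2])
print(f"plan A: {plan_a:.1f} uSv, plan B: {plan_b:.1f} uSv")
```

Comparing such dose budgets for alternative plans is one simple way a planning tool can support the "optimize" step of ALARA while leaving the feasibility judgement to a human.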
147

Multimodal Data Fusion As a Predictor of Missing Information in Social Networks

January 2012 (has links)
abstract: Over 2 billion people are using online social network services, such as Facebook, Twitter, Google+, LinkedIn, and Pinterest. Users update their status, post photos, share information, and chat with others on these social network sites every day; however, not everyone shares the same amount of information. This thesis explores methods of linking publicly available data sources as a means of extrapolating missing information from Facebook. An application named "Visual Friends Income Map" was created on Facebook to collect social network data and explore geodemographic properties in order to link publicly available data, such as US census data. Multiple predictors are implemented to link data sets and extrapolate missing information from Facebook with accurate predictions. The location-based predictor matches Facebook users' locations with census data at the city level for income and demographic predictions. Age- and relationship-based predictors are created to improve the accuracy of the proposed location-based predictor by utilizing social network link information. In the case where a user does not share any location information on their Facebook profile, a kernel density estimation location predictor is created. This predictor utilizes publicly available telephone record information for all people in the US with the same surname as the user to create a likelihood distribution of the user's location. This is combined with the user's IP-level information in order to narrow the probability estimate down to a local regional constraint. / Dissertation/Thesis / M.S. Computer Science 2012
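A small sketch of the kernel-density location idea described in this abstract, with a synthetic surname distribution and an IP-derived bounding box standing in for the regional constraint; all coordinates and values are assumptions for illustration.

```python
# KDE over synthetic surname locations, restricted to an IP-derived region.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
surname_locs = np.vstack([rng.normal(-112.0, 1.5, 300),    # longitudes of surname records
                          rng.normal(33.4, 1.0, 300)])     # latitudes of surname records
kde = gaussian_kde(surname_locs)

# Evaluate the density on a candidate grid, then apply the IP-level bounding box.
lons, lats = np.meshgrid(np.linspace(-115, -109, 60), np.linspace(31, 36, 50))
grid = np.vstack([lons.ravel(), lats.ravel()])
density = kde(grid)
in_region = (grid[0] > -113) & (grid[0] < -111) & (grid[1] > 33) & (grid[1] < 34)
density = np.where(in_region, density, 0.0)                # zero out points outside the region
best = grid[:, np.argmax(density)]
print("most likely location (lon, lat):", best)
```

The regional constraint simply masks the density, so the most likely cell is the surname-density peak that is also consistent with the coarse IP information.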
149

Aplicação de técnicas de fusão de sensores no monitoramento de ambientes / Application of sensor fusion techniques to environment monitoring

Salustiano, Rogerio Esteves, 1978- 16 January 2006 (has links)
Orientador: Carlos Alberto dos Reis Filho / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Resumo: Este trabalho propõe um sistema computacional no qual são aplicadas técnicas de Fusão de Sensores no monitoramento de ambientes. O sistema proposto permite a utilização e incorporação de diversos tipos de dados, incluindo imagens, sons e números em diferentes bases. Dentre os diversos algoritmos pertinentes a um sistema como este, foram implementados os de Sensores em Consenso, que visam a combinação de dados de uma mesma natureza. O sistema proposto é suficientemente flexível, permitindo a inclusão de novos tipos de dados e dos correspondentes algoritmos que os processem. Todo o processo de recebimento dos dados produzidos pelos sensores, configuração e visualização dos resultados é realizado através da Internet / Abstract: This work proposes a computer system in which Sensor Fusion techniques are applied to the monitoring of environments. The proposed system allows the use and incorporation of different data types, including images, sounds and numbers in different bases. Among the various algorithms relevant to such a system, those that aim to combine data of the same nature, known as Consensus Sensors algorithms, were implemented. The proposed system is flexible enough to allow the inclusion of new data types and of the corresponding algorithms to process them. The whole process of receiving the data produced by the sensors, configuring the system and visualizing the results is performed over the Internet / Mestrado / Eletrônica, Microeletrônica e Optoeletrônica / Mestre em Engenharia Elétrica
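A minimal consensus-averaging sketch in the spirit of the "Sensores em Consenso" algorithms mentioned above, for readings of the same physical quantity; the trust matrix and convergence threshold are illustrative assumptions, not the thesis implementation.

```python
# Iterative consensus averaging: each sensor repeatedly replaces its value with a
# weighted average of its neighbours' values until the readings agree (illustrative).
import numpy as np

def consensus(readings, weights, tol=1e-6, max_iter=1000):
    """readings: initial sensor values; weights: row-stochastic trust matrix."""
    x = np.asarray(readings, dtype=float)
    for _ in range(max_iter):
        x_next = weights @ x                   # each sensor averages over its neighbours
        if np.max(np.abs(x_next - x)) < tol:
            return x_next
        x = x_next
    return x

temps = [21.8, 22.1, 22.0, 23.5]               # last sensor is slightly biased
W = np.array([[0.4, 0.3, 0.3, 0.0],            # rows sum to 1 (row-stochastic)
              [0.3, 0.4, 0.3, 0.0],
              [0.2, 0.2, 0.4, 0.2],
              [0.0, 0.0, 0.5, 0.5]])
print(consensus(temps, W))                     # all sensors converge to a common value
```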
150

Détermination de la vitesse limite par fusion de données vision et cartographiques temps-réel embarquées / Speed limit determination by real-time embedded visual and cartographical data fusion

Puthon, Anne-Sophie 02 April 2013 (has links)
Les systèmes d'aide à la conduite sont de plus en plus présents dans nos véhicules et nous garantissent un meilleur confort et plus de sécurité. Dans cette thèse, nous nous sommes particulièrement intéressés aux systèmes d'adaptation automatique de la vitesse limite. Nous avons proposé une approche alliant vision et navigation pour gérer de façon optimale l'environnement routier. Panneaux, panonceaux et marquages sont autant d'informations visuelles utiles au conducteur pour connaître les limitations temporaires en vigueur sur la route. La reconnaissance des premiers a fait l'objet ces dernières années d'un grand nombre d'études et est même commercialisée, contrairement à celle des seconds. Nous avons donc proposé un module de détection et de classification de panonceaux sur des images à niveaux de gris. Un algorithme de reconstruction morphologique associé à une croissance de régions nous a permis de concentrer la segmentation sur les zones fortement contrastées de l'image entourées d'un ensemble de pixels d'intensité similaire. Les rectangles ainsi détectés ont ensuite fait l'objet d'une classification au moyen de descripteurs globaux de type PHOG et d'une structure hiérarchique de SVMs. Afin d'éliminer en dernier lieu les panonceaux ne s'appliquant pas à la voie sur laquelle circule le véhicule, nous avons pris en compte les informations de marquage à l'aide d'une machine d'états. Après avoir élaboré un module de vision intégrant au mieux toutes les informations disponibles, nous avons amélioré le système de navigation. Son objectif est d'extraire d'une base de données embarquée le contexte de conduite lié à la position du véhicule. Ville ou non, classe fonctionnelle et type de la route, vitesse limite sont extraits et modélisés sous forme d'attributs. La fiabilité du capteur est ensuite calculée en fonction du nombre de satellites visibles et de la qualité de numérisation du réseau. La confiance en chaque vitesse limite sera alors fonction de ces deux ensembles. La fusion des deux sources au moyen de Dempster-Shafer a conduit à de très bonnes performances sur nos bases de données et a démontré l'intérêt de tous ces éléments. / ADAS (Advanced Driver Assistance Systems) are more and more integrated into vehicles and provide drivers with greater comfort and safety. In this thesis, we focused on Intelligent Speed Adaptation. We proposed an approach combining vision and navigation in order to optimally manage the driving context information. Road signs, subsigns and markings are visual data used by the driver to determine the current temporary speed limitations. Much research has been conducted in recent years to recognise the former, unlike the latter, and commercialised products are even implemented in vehicles. We thus developed a subsign detection and classification module using greyscale images. A morphological reconstruction combined with region growing helped us to focus the segmentation on highly contrasted pixels surrounded by homogeneous regions. Global descriptors such as PHOGs combined with a hierarchical structure of SVMs were then used to classify the output rectangles. Finally, we eliminated subsigns that are not applicable to the current lane by taking the markings into account. After having developed a vision module integrating all the available information, we improved the navigation system. The objective was to extract from an embedded database the driving context related to the vehicle position. Urban context or not, functional class, road type and speed limit were collected and modelled as criteria.
The sensor reliability was then computed as a function of the satellite configuration and the quality of the network digitisation. The confidence in each speed limit combines these two sets of elements. The fusion of the two sources with the Dempster-Shafer theory led to very good performance on our databases and showed the value of all the information used.
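Dempster's rule of combination, which underlies the vision/navigation fusion described above, can be illustrated on a tiny speed-limit frame of discernment; the mass assignments below are invented for illustration and are not the thesis' confidence model.

```python
# Dempster's rule of combination on a small speed-limit frame (illustrative masses).
from itertools import product

def dempster(m1, m2):
    """m1, m2: dicts mapping frozensets of hypotheses to belief masses (summing to 1)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                       # mass assigned to conflicting pairs
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

theta = frozenset({50, 90, 130})                      # frame of discernment (km/h)
vision = {frozenset({90}): 0.6, theta: 0.4}           # sign recognition, with some doubt
navigation = {frozenset({90}): 0.3, frozenset({50}): 0.2, theta: 0.5}
print(dempster(vision, navigation))
```

Assigning part of each source's mass to the whole frame is how the sensor reliabilities described in the abstract can be expressed: a less reliable source keeps more mass on "unknown" and therefore influences the fused speed limit less.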
