51 |
Modèles de rivières animées pour l'exploration interactive de paysages / Animated river models for interactive landscape exploration
Yu, Qizhi 17 November 2008 (has links) (PDF)
In this thesis we propose a multi-scale model for river animation, with a new model at each scale. At the macro scale, we propose a procedural method that generates a realistic river on the fly. At the meso scale, we improve a phenomenological model based on a vector representation of the shock waves near obstacles, and we propose a method for adaptive reconstruction of the water surface. At the micro scale, we present an adaptive method for texturing surfaces of large extent with scene-independent performance, as well as a texture advection method; both rely on our adaptive sampling scheme. By combining these models we can animate world-scale rivers in real time while keeping them controllable. The performance of our system is independent of the scene, and the procedural velocity together with the screen-space sampling lets the system run on unbounded domains. Users can observe the river from very close up or very far away at any moment, and highly detailed waves can be displayed. The different parts of the river remain continuous in space and time, even while a user explores or edits the river: river beds can be edited and islands added on the fly without interrupting the animation. The river velocity changes as soon as the user edits its features, and the user can also modify its appearance with textures.
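A rough sketch of the micro-scale idea (drifting a texture along a procedural flow using screen-space samples) is given below in Python. The velocity field, time step and regular sample grid are invented for the example; the thesis itself uses an adaptive sampling scheme rather than a fixed grid.

```python
import numpy as np

def velocity(x, y, t):
    """Toy procedural velocity field (a stand-in for the thesis's
    river flow, which is derived from the river bed geometry)."""
    u = 1.0 + 0.3 * np.sin(2.0 * np.pi * y + 0.5 * t)
    v = 0.2 * np.cos(2.0 * np.pi * x)
    return u, v

def advect(texture_coords, t, dt=0.016):
    """One semi-Lagrangian step: trace each sample backward along the
    flow so the texture appears to drift with the water."""
    x, y = texture_coords[..., 0], texture_coords[..., 1]
    u, v = velocity(x, y, t)
    # Backward trace: where did this sample come from one step ago?
    return np.stack([x - dt * u, y - dt * v], axis=-1)

# Regular screen-space sample grid (illustrative; not adaptive).
coords = np.stack(np.meshgrid(np.linspace(0, 1, 8),
                              np.linspace(0, 1, 8), indexing="ij"), axis=-1)
for frame in range(3):
    coords = advect(coords, t=frame * 0.016)
print(coords[0, 0])  # advected texture-lookup coordinate of one sample
```

The backward (semi-Lagrangian) trace is what keeps the lookup stable: each visible sample asks where its color came from, instead of pushing colors forward.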
|
52 |
Modélisation 3D et 3D+t des artères coronaires à partir de séquences rotationnelles de projections rayons X / 3D and 3D+t modeling of coronary arteries from rotational X-ray projection sequences
Blondel, Christophe 29 March 2004 (has links) (PDF)
X-ray angiography is the most widely used medical imaging modality for exploring coronary vessel pathologies. Current clinical routine relies on the raw angiographic images, yet these images suffer from drawbacks such as foreshortening, magnification effects and the superimposition of structures. These weaknesses can distort the diagnosis and the choice of treatment. We propose to exploit a new angiographic acquisition mode, the rotational mode, to produce three-dimensional and dynamic models of the coronary tree; such models would overcome the intrinsic shortcomings of the images. Our work consists of three stages. First, a multi-view 3D reconstruction yields a static model of the coronary artery centerlines, taking respiratory motion into account. Next, a 4D motion of the coronary arteries is determined over the whole cardiac cycle. Finally, knowledge of the respiratory and cardiac motions makes a tomographic reconstruction of the coronary arteries possible. We tested our approach on a database of 22 patients and proposed new clinical tools and applications based on these three-dimensional and dynamic models. These diagnostic tools have been prototyped and will undergo clinical validation.
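The first stage rests on reconstructing centerline points from several calibrated views. As a hedged illustration only, the standard linear (DLT) triangulation of one point from two projections is sketched below in Python; the projection matrices are toy values, and the thesis's actual method additionally compensates for respiratory motion, which this sketch omits.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one centerline point observed in
    two X-ray projections with 3x4 projection matrices P1 and P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value (homogeneous least squares).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two toy views roughly 90 degrees apart on a rotational C-arm.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])
P2 = np.hstack([R, np.array([[0.], [0.], [2.]])])
X_true = np.array([0.3, -0.2, 4.0])
proj = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
print(triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true)))  # ~ X_true
```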
|
53 |
Query processing on low-energy many-core processors
Lehner, Wolfgang, Ungethüm, Annett, Habich, Dirk, Karnagel, Tomas, Asmussen, Nils, Völp, Marcus, Nöthen, Benedikt, Fettweis, Gerhard 12 January 2023 (has links)
Aside from performance, energy efficiency is an increasing challenge in database systems. To tackle both aspects in an integrated fashion, we pursue a hardware/software co-design approach. To fulfill the energy requirement from the hardware perspective, we utilize a low-energy processor design that allows hundreds to millions of chips to be placed on a single board without any thermal restrictions. Furthermore, we address the performance requirement by developing several database-specific instruction set extensions to customize individual cores, whereby no single core carries all extensions. Our hardware foundation is therefore a low-energy processor consisting of a high number of heterogeneous cores. In this paper, we introduce our hardware setup at the system level and present several challenges for query processing. Based on these challenges, we describe two implementation concepts and compare them. Finally, we conclude the paper with some lessons learned and an outlook on our upcoming research directions.
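To make the placement problem concrete, here is a minimal, hypothetical sketch in Python: cores advertise the database-specific extensions they carry, and a greedy scheduler assigns each operator of a query plan to a matching, lightly loaded core. The extension names and the load metric are invented for the example and are not taken from the paper.

```python
# Hypothetical core model: each core offers a subset of
# database-specific ISA extensions; each operator needs exactly one.
cores = {
    0: {"hash", "sort"},
    1: {"scan"},
    2: {"scan", "bitpack"},
    3: {"sort"},
}

plan = ["scan", "hash", "sort"]  # operator pipeline of one query

def place(plan, cores):
    """Greedy placement: run every operator on a core that implements
    the required extension, balancing by number of assigned operators."""
    load = {c: 0 for c in cores}
    placement = []
    for op in plan:
        candidates = [c for c, ext in cores.items() if op in ext]
        if not candidates:
            raise RuntimeError(f"no core offers extension {op!r}")
        best = min(candidates, key=lambda c: load[c])
        load[best] += 1
        placement.append((op, best))
    return placement

print(place(plan, cores))  # e.g. [('scan', 1), ('hash', 0), ('sort', 3)]
```

Heterogeneity shows up exactly here: because no core has all extensions, intermediate results must move between cores, which is one of the query-processing challenges the paper discusses.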
|
54 |
Modeling human behaviors and frailty for a personalized ambient assisted living framework / Modélisation des comportements humains et de la fragilité pour la conception d'une plateforme d'assistance d'intelligence ambiante
Bellmunt Montoya, Joaquim 21 November 2017 (has links)
Ambient assisted living is nowadays necessary to support people with special needs in performing their activities of daily living, but it has yet to address the need to accompany aging and dependent people in their outdoor activities. Moreover, the frameworks developed over the last decade have mainly focused on the engineering dimension, neglecting the impact of human factors and social needs in the design process. New technologies such as cloud computing and the Internet of Things (IoT) can bring new capabilities to this field of research, allowing systems to process the human condition according to usage-oriented models (e.g. frailty) in a non-invasive approach. This thesis proposes a new paradigm in assistive technologies for aging and wellbeing by introducing (i) human frailty metrics and (ii) an urban dimension in an ambient assisted living framework, extending the living space from indoors to outdoors. It proposes a cloud-based framework for seamless communication with connected objects, allowing the integrated system to compute and model different levels of human frailty based on several standardized frailty items, drawn from an extensive review of the literature and of existing frameworks. The framework communicates with heterogeneous, real-time, non-invasive indoor sensors (e.g. motion, contact, fiber optic) and outdoor sources (e.g. BLE beacons, smartphone, wristband). It stores the raw data and processes it through a hybrid reasoning engine combining data-driven (machine learning) and knowledge-driven (semantic reasoning) algorithms to (i) infer activities of daily living (ADL), (ii) detect changes in human behavior and ultimately (iii) calibrate human frailty values. It also includes a mobility classifier that uses the smartphone's internal sensors to identify the type of movement performed by the individual (e.g. walk, cycling, MRT, bus, car). The frailty values make it possible for the system to automatically detect any change of behavior, or any abnormal situation, that might lead to a risk at home or outside. The models and framework were developed in close collaboration with the IPAL and LIRMM research teams and were assessed in real-life conditions involving end users and caregivers at several pilot sites in Singapore and France. The framework is currently deployed in 24 individual homes: 14 in France (5 private rooms in a nursing home and 9 private houses), in collaboration with a nursing home in Argentan (Normandy) and with Montpellier Métropole, and 10 private apartments in Singapore, in collaboration with a Senior Activity Center (a non-profit organization). The long-term ambition is to detect a risk and intervene even before a physician detects it during a consultation; the ultimate goal is to promote a prevention paradigm for health and wellbeing. The resulting data have been analyzed and published in numerous international conferences and journals.
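A minimal sketch of the hybrid reasoning idea is shown below in Python: hand-written rules (the knowledge-driven part) map sensor patterns to ADLs, and a simple deviation score stands in for the data-driven part. The sensor names, rules and baseline values are invented for illustration; the actual engine uses semantic reasoning and trained models, not these toy stand-ins.

```python
from datetime import datetime

# Hypothetical event stream from non-invasive home sensors.
events = [
    {"sensor": "kitchen_motion", "ts": datetime(2017, 5, 2, 7, 40)},
    {"sensor": "fridge_contact", "ts": datetime(2017, 5, 2, 7, 42)},
    {"sensor": "door_contact",   "ts": datetime(2017, 5, 2, 9, 10)},
]

# Knowledge-driven part: rules mapping sensor patterns to ADLs.
RULES = {
    ("kitchen_motion", "fridge_contact"): "meal_preparation",
    ("door_contact",): "going_out",
}

def infer_adls(events):
    names = [e["sensor"] for e in events]
    return [adl for pattern, adl in RULES.items()
            if all(p in names for p in pattern)]

# Data-driven part (stub): deviation of today's activity level from a
# per-person baseline; a z-score stands in for a learned model here.
def behavior_change(today_count, baseline_mean, baseline_std):
    return abs(today_count - baseline_mean) / max(baseline_std, 1e-6)

adls = infer_adls(events)
change = behavior_change(len(adls), baseline_mean=5, baseline_std=1.5)
print(adls, f"deviation={change:.1f}")  # large deviation -> frailty flag
```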
|
55 |
Desenvolvimento e avaliação de modelo computacional para geração de alertas a partir de notificações de casos de meningite meningocócica / Development and evaluation of a computational model for generating alerts from meningococcal meningitis case notifications
Zaparoli, Wagner 24 November 2008 (has links)
INTRODUCTION: this dissertation presents the architecture of a real-time alert system for outbreaks and epidemics, based on electronic notifications of meningococcal meningitis, and discusses the results of the tests and simulations performed. METHODS: the system was developed in four stages: Conception, Analysis, Construction and Test/Simulation. Conception covered requirements elicitation, which defined what the system should do. Analysis involved modeling and specifying the rules that define how the system should work. Construction covered the transformation of the defined and modeled rules into a programming language. The last stage, Test/Simulation, checked the system under known scenarios, comparing the timing of its outputs with the Brazilian notification surveillance framework. RESULTS: several artifacts were produced and several findings emerged. Among the artifacts are the requirements, use cases, class diagram, physical data model, test cases and programs. Among the findings, in the simulations the system fired outbreak alerts two days before the alert issued by the São Paulo State health authorities using their usual procedures. DISCUSSION AND CONCLUSION: the system can be classified as an Early Warning System. In the simulations, on two occasions it revealed outbreak occurrences earlier than the traditional method used by the São Paulo Epidemiological Surveillance Center. Compared with similar systems in production, this system is distinguished by actively issuing outbreak alerts in real time.
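The abstract does not spell out the alert rule itself, so the following Python sketch uses a common aberration-detection heuristic as a stand-in: flag a week whose case count exceeds the historical mean by more than k standard deviations. The case counts and the threshold parameter are illustrative only, not taken from the dissertation.

```python
from statistics import mean, stdev

def outbreak_alert(history, current, k=2.0):
    """Flag an outbreak when the current weekly case count exceeds the
    historical mean by more than k standard deviations."""
    threshold = mean(history) + k * stdev(history)
    return current > threshold, threshold

# Hypothetical weekly meningococcal meningitis notification counts.
weekly_cases = [3, 2, 4, 3, 5, 2, 3, 4]
alert, thr = outbreak_alert(weekly_cases, current=11)
print(f"alert={alert}, threshold={thr:.1f} cases/week")
```

Running the rule on every incoming electronic notification, rather than on periodically compiled reports, is what makes such a system "real-time" and lets it fire days ahead of manual procedures.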
|
56 |
Investigating data quality in question and answer reports
Mohamed Zaki Ali, Mona January 2016 (links)
Data Quality (DQ) has been a long-standing concern for a number of stakeholders in a variety of domains and has become a critically important factor for the effectiveness of organisations and individuals. Previous work on DQ methodologies has mainly focused either on the analysis of structured data or on the business-process level, rather than on analysing the data itself. Question and Answer Reports (QAR) are gaining momentum as a way to collect responses that can be used by data analysts in, for instance, business, education or healthcare. Various stakeholders, such as data brokers and data providers, benefit from QAR; to analyse and identify the common DQ problems in these reports effectively, the perspectives of these stakeholders have to be taken into account, which adds further complexity to the analysis. This thesis investigates DQ in QAR through an in-depth DQ analysis and provides solutions that can highlight potential sources and causes of the problems that result in "low-quality" collected data. The thesis proposes a DQ methodology appropriate for the context of QAR, consisting of three modules: question analysis, medium analysis and answer analysis. In addition, a Question Design Support (QuDeS) framework is introduced to operationalise the proposed methodology through the automatic identification of DQ problems. The framework includes three components: question domain-independent profiling, question domain-dependent profiling and answer profiling. The framework has been instantiated to address one example of DQ issues, namely the Multi-Focal Question (MFQ): a question with multiple requirements, which asks for multiple answers. QuDeS-MFQ, the implemented instance of the QuDeS framework, realises two QuDeS components for MFQ identification: question domain-independent profiling and question domain-dependent profiling. The methodology and framework are designed, implemented and evaluated in the context of the Carbon Disclosure Project (CDP) case study; the experiments show that MFQs can be identified with 90% accuracy. The thesis also demonstrates the challenges involved, including the lack of domain resources for knowledge representation (such as a domain ontology), the complexity and variability of QAR structure, the variability and ambiguity of terminology and language expressions, and the difficulty of understanding stakeholders' or users' needs.
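As a hedged illustration of what MFQ identification might look like at its simplest, the Python heuristic below flags a question as multi-focal when it contains several interrogative clauses or coordinated requests. The rules and the sample question are invented for the example; QuDeS's actual profiling components are considerably richer and partly domain-dependent.

```python
import re

def looks_multi_focal(question: str) -> bool:
    """Heuristic sketch: flag a question as multi-focal when it holds
    several question marks or coordinated interrogative requests."""
    marks = question.count("?")
    # Coordinations that often join separate requirements.
    coords = len(re.findall(r"\b(and|as well as|also)\b", question.lower()))
    wh_words = len(re.findall(r"\b(what|how|why|when|which|who)\b",
                              question.lower()))
    return marks > 1 or (coords >= 1 and wh_words >= 2)

q = ("What are your gross global Scope 1 emissions, and how do you "
     "verify the reported figures?")  # invented CDP-style example
print(looks_multi_focal(q))  # True: two coordinated requirements
```

A respondent facing such a question tends to answer only one of its requirements, which is exactly the "low-quality" answer pattern the thesis traces back to question design.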
|
58 |
High-Performance Analytics (HPA)
Soukup, Petr January 2012 (links)
The aim of this thesis on High-Performance Analytics is to provide a structured overview of high-performance methods for data analysis. The introduction covers definitions of primary and secondary data analysis, and the operational systems that are not suitable for analytical processing. Mobile devices, modern information technologies and other factors have rapidly changed the character of data. The major part of the thesis is devoted to the historical turn towards new approaches to analytical data processing brought about by Big Data, a very frequent term these days. Towards the end of the thesis, the system resources that play a major role in these new approaches, as well as the technological solutions of High-Performance Analytics themselves, are discussed. The second, practical part of the thesis compares the performance of conventional methods for data analysis with one of the High-Performance Analytics methods (specifically, In-Memory Analytics). The individual solutions are compared in an identical High-Performance Analytics server environment, with the methods applied to a data sample whose volume is increased after every round of measurement. The conclusion evaluates the test results and discusses the applicability of the individual High-Performance Analytics methods.
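The measurement protocol (the same aggregation run on a sample that grows every round) can be mimicked with a toy Python benchmark: a CSV parsing pass stands in for a conventional, serialization-bound method, and a plain traversal of resident data stands in for In-Memory Analytics. This is only a sketch of the experimental shape, not the thesis's actual server environment.

```python
import csv, io, random, time

def serialized_sum(rows):
    """Conventional stand-in: write the data out as CSV and aggregate
    while re-parsing it, imitating a disk/serialization-bound pass."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    buf.seek(0)
    return sum(float(r[1]) for r in csv.reader(buf))

def in_memory_sum(rows):
    """In-memory stand-in: aggregate directly over resident data."""
    return sum(r[1] for r in rows)

for n in (10_000, 100_000, 1_000_000):  # sample grows every round
    rows = [(i, random.random()) for i in range(n)]
    t0 = time.perf_counter(); serialized_sum(rows)
    t1 = time.perf_counter(); in_memory_sum(rows)
    t2 = time.perf_counter()
    print(f"n={n:>9,}: serialized {t1 - t0:.3f}s, in-memory {t2 - t1:.3f}s")
```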
|
59 |
Design von Stichproben in analytischen Datenbanken / Sample design in analytical databases
Rösch, Philipp 17 July 2009 (links)
Recent studies have shown the fast and multi-dimensional growth in analytical databases: Over the last four years, the data volume has risen by a factor of 10; the number of users has increased by an average of 25% per year; and the number of queries has been doubling every year since 2004. These queries have increasingly become complex join queries with aggregations; they are often of an explorative nature and interactively submitted to the system.
One option to address the need for interactivity in the context of this strong, multi-dimensional growth is the use of samples and an approximate query processing approach based on those samples. Such a solution offers significantly shorter response times as well as estimates with probabilistic error bounds. Given that joins, groupings and aggregations are the main components of analytical queries, the following requirements for the design of samples in analytical databases arise: 1) The foreign-key integrity between the samples of foreign-key related tables has to be preserved. 2) Any existing groups have to be represented appropriately. 3) Aggregation attributes have to be checked for extreme values.
For each of these sub-problems, this dissertation presents a sampling technique characterized by memory-bounded samples and low estimation errors. In the first of these approaches, a correlated sampling process guarantees referential integrity while using only a minimum of additional memory. The second sampling technique takes the data distribution into account; as a result, any arbitrary grouping is supported and all groups are appropriately represented. In the third approach, multi-column outlier handling leads to low estimation errors for any number of aggregation attributes. For all three approaches, the quality of the resulting samples is discussed and taken into account when computing memory-bounded samples. In order to keep the computation effort, and thus the system load, low, heuristics are provided for each algorithm; these are marked by high efficiency and minimal effects on sampling quality. Furthermore, the dissertation examines all possible combinations of the presented sampling techniques; these combinations further reduce estimation errors while at the same time widening the range of applicability of the resulting samples. With the combination of all three techniques, a sampling technique is introduced that meets all requirements for the design of samples in analytical databases and merges the advantages of the individual techniques. This makes it possible to answer a wide range of queries approximately, yet with high precision.
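A minimal Python sketch of the referential-integrity requirement, under an invented toy star schema: Bernoulli-sample the fact table, then pull in every referenced dimension row, so foreign-key joins on the sample never dangle. The dissertation's correlated scheme is more refined than this (it minimizes the additional memory the dimension rows consume); the fragment below only illustrates the invariant being preserved.

```python
import random

random.seed(7)

# Toy star schema: orders reference customers via a foreign key.
customers = {cid: {"cid": cid, "region": cid % 3} for cid in range(100)}
orders = [{"oid": o, "cid": random.randrange(100),
           "amount": random.random() * 100} for o in range(1_000)]

def referentially_intact_sample(orders, customers, p=0.05):
    """Bernoulli-sample the fact table, then include every referenced
    dimension row so joins over the sample lose no sampled tuples."""
    fact_sample = [o for o in orders if random.random() < p]
    dim_sample = {o["cid"]: customers[o["cid"]] for o in fact_sample}
    return fact_sample, dim_sample

facts, dims = referentially_intact_sample(orders, customers)
assert all(o["cid"] in dims for o in facts)  # no dangling foreign keys
print(len(facts), "orders,", len(dims), "customers in the sample")
```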
|
60 |
Metodika zpracování dat pro vizualizaci FM objektů v GIS / Data processing procedure applicable for visualisation of FM objects in GIS environment
Hájek, Filip January 2011 (links)
This diploma thesis deals with the use of geographic information systems (GIS) in facility management (FM). Its purpose is to show the advantages of this combination, above all the visualization and analytical capabilities of GIS. The first part of the thesis describes the general issues of facility management: what it is, what it brings, and the most important definitions and standards. The thesis then turns to the theory of the connection with GIS and answers the question of why this cooperation is beneficial, continuing with a description of the importance of GIS throughout the whole life cycle of real estate. The chapter concludes with a definition of another powerful tool in FM, CAD/BIM. The fourth chapter discusses one of the important tasks of FM, passportization, and its meaning. The last chapter before the conclusion presents the practical part of the thesis, dealing with the Czech Post campus in Malešice. Keywords: Facility management, GIS, passportization, visualization
|