191 |
Tratamento de eventos em redes elétricas: uma ferramenta. / Treatment of events in electrical networks: a tool. DUARTE, Alexandre Nóbrega. 15 August 2018 (has links)
Previous issue date: 2003-02-25 / This work presents a new tool for the automatic diagnosis of faults in electric networks. The tool uses a hybrid event correlation technique created especially for use in networks whose topology is constantly modified. The hybrid technique combines rule-based reasoning with model-based reasoning to eliminate the main limitations of rule-based reasoning. With the diagnosis tool it was possible to validate the knowledge acquired from electric energy transmission system specialists that is needed for the diagnosis of faults in transmission lines, and to build a rule base for that purpose. The tool was tested on the diagnosis of transmission line faults at one of the five regional centers of the Companhia Hidro Elétrica do São Francisco (CHESF) and presented satisfactory results in terms of performance and precision.
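As a rough, invented illustration of the hybrid idea (not the dissertation's actual system): rule-based reasoning matches event patterns, while a separate topology model, which may change at run time, decides which events belong together, so the rules survive topology changes. All device names, event types, and rule forms below are assumptions.

    # Hybrid event correlation sketch: rules propose diagnoses, a topology
    # model (replaceable at run time) groups the events they apply to.
    topology = {
        "line_A": {"breakers": ["BRK1", "BRK2"], "relays": ["REL1"]},
        "line_B": {"breakers": ["BRK3"], "relays": ["REL2"]},
    }

    # Rule base: an event-pattern test maps to a candidate diagnosis.
    rules = [
        (lambda evs: "relay_trip" in evs and "breaker_open" in evs, "line fault"),
        (lambda evs: "breaker_open" in evs and "relay_trip" not in evs, "manual switching"),
    ]

    def diagnose(events):
        """events: list of (device, event_type) pairs; returns per-line diagnoses."""
        diagnoses = {}
        for line, devs in topology.items():      # model-based step: scope events by line
            related = {etype for dev, etype in events
                       if dev in devs["breakers"] + devs["relays"]}
            for condition, conclusion in rules:  # rule-based step: match patterns
                if related and condition(related):
                    diagnoses[line] = conclusion
                    break
        return diagnoses

    print(diagnose([("REL1", "relay_trip"), ("BRK1", "breaker_open")]))
    # -> {'line_A': 'line fault'}

Because the rules never mention concrete devices, replacing the topology dictionary is enough to track network modifications.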
|
192 |
Descoberta de causa-raiz em ocorrências de sistemas elétricos. / Root cause discovery in occurrences of electrical systems. PIRES, Stéfani Silva. 16 August 2018 (has links)
Previous issue date: 2010-08-19 / This work presents a root cause analysis technique for electric power systems. Root cause analysis helps the operator understand a failure occurrence by interpreting the "cascade" of effects among the elements of the network. The proposed technique uses rule-based reasoning, in which parameterized rules build a propagation model from the diagnoses of a failure occurrence. The technique points out the element that caused the occurrence and details its propagation to the other elements in a cause-and-effect model. The use of parameterized rules brings major benefits to the process, making the technique adaptable to changes in the system topology and contributing to its scalability. A case study was prepared for its evaluation in the context of the Companhia Hidro Elétrica do São Francisco (CHESF): a prototype implementing the technique was developed, and a set of parameterized rules and a set of failure scenarios were built using Simulop, a tool that simulates a real environment. The evaluation also used a set of regressions, historical data stored by CHESF. The regressions were important in the first phase of defining the technique, but they present problems such as missing data and unexpected system behavior; on them, the accuracy of the technique was 74%. For the set of scenarios built with Simulop, the proposed technique successfully carried out the root cause analysis process, identifying the root cause of the occurrence in 100% of the failure scenarios and detailing its propagation to all the other network elements involved in 89% of the scenarios; the error margin consists of scenarios whose propagation was identified only partially, owing to the lack of rules covering them. The proposed technique thus proved to be a viable approach to root cause analysis in electrical systems. The reduced accuracy on the regressions indicates that, for application in a real operational environment, a more comprehensive rule set that can work around these problems needs to be elaborated.
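A toy sketch of the propagation idea (invented element names and a deliberately minimal rule; not the CHESF prototype itself): a parameterized rule instantiates cause-to-effect edges over the current topology, and the root cause is the faulty element that no other fault explains.

    # Root-cause sketch: one parameterized rule ("a fault may propagate to a
    # faulty downstream neighbor") applied over an assumed topology.
    connections = {
        "line_L1": ["bus_B1"],
        "bus_B1": ["transformer_T1", "transformer_T2"],
    }

    def propagation_edges(faulty):
        edges = []
        for cause in faulty:
            for effect in connections.get(cause, []):
                if effect in faulty:
                    edges.append((cause, effect))
        return edges

    def root_cause(faulty):
        edges = propagation_edges(faulty)
        effects = {e for _, e in edges}
        return [x for x in faulty if x not in effects], edges

    roots, edges = root_cause({"line_L1", "bus_B1", "transformer_T1"})
    print(roots)  # ['line_L1']: the element no other fault explains
    print(edges)  # cause->effect detail, e.g. ('line_L1', 'bus_B1'), ('bus_B1', 'transformer_T1')

Because the rule is parameterized by the connections dictionary, a topology change only requires new data, not new rules, which is the adaptability and scalability argument made above.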
|
193 |
Recourse policies in the vehicle routing problem with stochastic demands. Salavati-Khoshghalb, Majid 09 1900 (has links)
No description available.
|
194 |
Consumer liking and sensory attribute prediction for new product development support: applications and enhancements of belief rule-based methodology. Savan, Emanuel-Emil January 2015 (has links)
Methodologies designed to support new product development are receiving increasing interest in recent literature. A significant percentage of new product failures is attributed to a mismatch between designed product features and consumer liking. A variety of methodologies have been proposed and tested for consumer liking or preference prediction, ranging from statistical methodologies, e.g. multiple linear regression (MLR), to non-statistical approaches, e.g. artificial neural networks (ANN), support vector machines (SVM), and belief rule-based (BRB) systems. BRB has previously been tested for consumer preference prediction and target setting in case studies from the beverages industry. Results have indicated a number of technical and conceptual advantages which BRB holds over the aforementioned alternative approaches. This thesis focuses on presenting further advantages and applications of the BRB methodology for consumer liking prediction. The features and advantages are selected in response to challenges raised by the three case studies addressed. The first case study addresses a novel industry for BRB application: the fast-moving consumer goods industry, specifically the personal care sector. A series of challenges are tackled. Firstly, stepwise linear regression, principal component analysis and AutoEncoder are tested for predictor selection and data reduction. Secondly, an investigation is carried out to analyse the impact of employing complete distributions, instead of averages, for sensory attributes. Moreover, the effect of modelling instrumental measurement error is assessed. The second case study addresses a different product from the personal care sector. A bi-objective prescriptive approach for BRB model structure selection and validation is proposed and tested. Genetic Algorithms and Simulated Annealing are benchmarked against complete enumeration for searching the model structures. A novel criterion based on an adjusted Akaike Information Criterion is designed for identifying the optimal model structure from the Pareto frontier based on two objectives: model complexity and model fit. The third case study introduces yet another novel industry for BRB application: the pastry and confectionery specialties industry. A new prescriptive framework, for rule validation and random training set allocation, is designed and tested. In all case studies, the BRB methodology is compared with the most popular alternative approaches: MLR, ANN, and SVM. The results indicate that BRB outperforms these methodologies both conceptually and in terms of prediction accuracy.
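To make the belief-rule-based flavor concrete, the toy sketch below attaches a belief distribution over liking grades to each rule and blends the rules by activation weight. Real BRB systems aggregate with the evidential reasoning (ER) algorithm; the plain weighted averaging, the distance-based matching, and every number here are simplifying assumptions.

    # Minimal BRB-style prediction: rules hold belief distributions over
    # grades; activation weight grows as the input nears the antecedent.
    grades = ["dislike", "neutral", "like"]

    rule_base = [  # (antecedent sweetness level, beliefs over grades)
        (2.0, [0.7, 0.2, 0.1]),
        (5.0, [0.1, 0.6, 0.3]),
        (8.0, [0.0, 0.2, 0.8]),
    ]

    def predict(sweetness):
        weights = [1.0 / (1.0 + abs(sweetness - a)) for a, _ in rule_base]
        total = sum(weights)
        combined = [0.0] * len(grades)
        for w, (_, beliefs) in zip(weights, rule_base):
            for i, b in enumerate(beliefs):
                combined[i] += (w / total) * b
        return dict(zip(grades, (round(c, 3) for c in combined)))

    print(predict(6.5))  # belief distribution leaning toward "like"

The output being a distribution over grades rather than a point estimate is one of the conceptual advantages the thesis attributes to BRB.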
|
195 |
Using Event-Based and Rule-Based Paradigms to Develop Context-Aware Reactive Applications / Programmation événementielle et programmation à base de règles pour le développement d'applications réactives sensibles au contexte. Le, Truong Giang 30 September 2013 (has links)
Context-aware pervasive computing has attracted significant research interest from both academia and industry worldwide. It covers a broad range of applications that support many manufacturing and daily-life activities. For instance, industrial robots detect changes in the working environment of the factory and adapt their operations to the requirements. Automotive control systems may observe other vehicles, detect obstacles, and monitor the fuel level or the air quality in order to warn the drivers in case of emergency. Another example is power-aware embedded systems, which need to work based on the currently available power/energy, since power consumption is an important issue. These kinds of systems can also be considered smart applications.
In practice, successful implementation and deployment of context-aware systems depend on the mechanism used to recognize and react to the variabilities happening in the environment. In other words, we need a well-defined and efficient adaptation approach so that the systems' behavior can be dynamically customized at runtime. Moreover, concurrency should be exploited to improve the performance and responsiveness of the systems. All those requirements, along with the need for safety, dependability, and reliability, pose a big challenge for developers. In this thesis, we propose a novel programming language called INI, which supports both the event-based and the rule-based programming paradigms and is suitable for building concurrent, context-aware reactive applications. In our language, both events and rules can be defined explicitly, in a stand-alone way or in combination. Events in INI run in parallel (synchronously or asynchronously) in order to handle multiple tasks concurrently, and they may trigger the actions defined in rules. Besides, events can interact with the execution environment to adjust their behavior if necessary and respond to unpredictable changes. We apply INI in both an academic and an industrial case study, namely an object tracking program running on the humanoid robot Nao and an M2M gateway. This demonstrates the soundness of our approach as well as INI's capabilities for constructing context-aware systems. Additionally, since context-aware programs are widely applicable and more complex than regular ones, they pose a higher demand for quality assurance. Therefore, we formalize several aspects of INI, including its type system and operational semantics. Furthermore, we develop a tool called INICheck, which can convert a significant subset of INI to Promela, the input modeling language of the model checker SPIN. Hence, SPIN can be applied to verify properties or constraints that need to be satisfied by INI programs. Our tool gives programmers assurance about their code and its behavior.
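INI's own syntax is not reproduced here. As a paradigm illustration only, the sketch below shows the two ingredients the thesis combines: events that update an observed context, and rules whose conditions are re-evaluated on each change (all names and thresholds invented).

    # Event-based + rule-based sketch: events mutate the context, rules fire
    # whenever their condition holds on the updated context.
    context = {"obstacle_distance": None, "speed": 0.0}

    rules = [
        (lambda c: c["obstacle_distance"] is not None and c["obstacle_distance"] < 1.0,
         lambda c: print("rule fired: braking, obstacle at", c["obstacle_distance"], "m")),
        (lambda c: c["speed"] > 10.0,
         lambda c: print("rule fired: speed warning:", c["speed"])),
    ]

    def on_event(name, value):
        context[name] = value
        for condition, action in rules:
            if condition(context):
                action(context)

    on_event("speed", 12.5)             # -> speed warning
    on_event("obstacle_distance", 0.6)  # -> braking (speed warning repeats)

In INI itself, events additionally run in parallel and can be reconfigured at run time, which this sequential sketch does not attempt to show.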
|
196 |
Design, Control, and Validation of a Transient Thermal Management System with Integrated Phase-Change Thermal Energy Storage. Michael Alexander Shanks (14216549) 06 December 2022 (has links)
An emerging technology in the field of transient thermal management is thermal energy storage (TES), which enables temporary, on-demand heat rejection via storage as latent heat in a phase-change material. Latent TES devices have enabled advances in many thermal management applications, including peak-load shifting to reduce the energy demand and cost of HVAC systems and supplemental heat rejection in transient thermal management systems. However, the design of a transient thermal management system with integrated storage presents many challenges that are yet to be solved. For example, design approaches and performance metrics for determining the optimal dimensions of the TES device have only recently been studied. Another area of active research is estimation of the internal temperature state of the device, which can be difficult to measure directly given the transient nature of the thermal storage process. Furthermore, in contrast to the three main functions of a thermal-fluid system (heat addition, thermal transport, and heat rejection), thermal storage introduces the need for active, real-time control and automated decision making to manage the operation of the thermal storage device.
In this thesis, I present the design process for integrating thermal energy storage into a single-phase thermal management system for rejecting transient heat loads, including design of the TES device, state estimation and control algorithm design, and validation in both simulation and experimental environments. Leveraging a reduced-order finite volume simulation model of a plate-fin TES device, I develop a design approach involving a transient simulation-based design optimization to determine the geometric dimensions the device requires to meet transient performance objectives while maximizing power density. The optimized TES device is integrated into a single-phase thermal-fluid testbed for experimental testing. Using the finite volume model and feedback from thermocouples embedded in the device, I design and experimentally validate a state estimator based on the state-dependent Riccati equation approach for determining the internal temperature distribution to a high degree of accuracy. Real-time knowledge of the internal temperature state is critical for making control decisions; to manage the operation of the TES device in the context of a transient thermal management system, I design and test, both in simulation and experimentally, a logic-based control strategy that uses fluid temperature measurements and estimates of the TES state to make real-time control decisions that meet critical thermal management objectives. Together, these advances demonstrate the potential of thermal energy storage technology as a component of thermal management systems and the feasibility of logic-based control strategies for real-time control of thermal management objectives.
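A minimal flavor of such a logic-based control strategy is sketched below; the mode names, thresholds, and transition logic are invented for illustration and are not the thesis's controller.

    # Logic-based TES supervisor: discharge (melt PCM to absorb a transient
    # heat load) when the fluid runs hot, then re-freeze, using a fluid
    # temperature measurement and an estimated state of charge (SOC).
    def tes_controller(mode, fluid_temp_C, soc_estimate):
        T_HIGH, T_OK = 45.0, 35.0        # assumed temperature thresholds (degC)
        SOC_MIN, SOC_FULL = 0.05, 0.95   # assumed SOC limits

        if mode != "discharge" and fluid_temp_C > T_HIGH and soc_estimate > SOC_MIN:
            return "discharge"           # divert flow through the TES
        if mode == "discharge" and (fluid_temp_C < T_OK or soc_estimate <= SOC_MIN):
            return "recharge"            # transient over (or TES spent): re-freeze
        if mode == "recharge" and soc_estimate >= SOC_FULL:
            return "idle"
        return mode

    mode = "idle"
    for temp, soc in [(30, 1.0), (48, 1.0), (47, 0.6), (33, 0.4), (32, 0.96)]:
        mode = tes_controller(mode, temp, soc)
        print(temp, soc, "->", mode)     # idle, discharge, discharge, recharge, idle

The dependence on soc_estimate is why the state estimator matters: the controller cannot act on an internal temperature field it cannot observe directly.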
|
197 |
Génération de données synthétiques pour l'adaptation hors-domaine non-supervisée en réponse aux questions : méthodes basées sur des règles contre réseaux de neurones / Synthetic data generation for unsupervised out-of-domain adaptation in question answering: rule-based methods versus neural networks. Duran, Juan Felipe 02 1900 (has links)
Question answering models have shown impressive results on several question answering datasets and tasks. However, when tested on out-of-domain datasets, their performance decreases. In order to avoid manually annotating training data from the new domain, question-answer pairs can be generated synthetically from unannotated data. In this work, we are interested in the generation of synthetic data, and we test different natural language processing methods for the two steps of dataset creation: question generation and answer generation. We use the generated datasets to train the QA models UnifiedQA and Bert-QA, and we test them on SCIQ, an out-of-domain dataset about physics, chemistry, and biology for the multiple-choice question-answering (MCQA) task, as well as on HotpotQA, TriviaQA, NatQ and SearchQA, four out-of-domain datasets for the question-answering (QA) task. This procedure allows us to evaluate and compare rule-based methods with neural network methods. We show that rule-based methods yield superior results for the multiple-choice question-answering task, but that neural network methods generally produce better results for the question-answering task. However, we also observe that, occasionally, rule-based methods can complement neural network methods and produce competitive results when Bert-QA is trained on synthetic datasets derived from both methods.
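As an invented illustration of the rule-based side of such a pipeline (real systems use far richer rules, e.g. parsers and named-entity recognizers), a single pattern rule can already turn unannotated text into cloze-style question-answer pairs:

    import re

    # Rule-based QA-pair generation sketch: pick a numeric span as the
    # answer and blank it out of the sentence to form the question.
    def make_qa_pairs(text):
        pairs = []
        for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
            m = re.search(r"\b\d[\d.,]*\b", sentence)
            if m:
                answer = m.group(0)
                pairs.append((sentence.replace(answer, "____", 1), answer))
        return pairs

    doc = "Water boils at 100 degrees Celsius at sea level. Photosynthesis occurs in chloroplasts."
    for q, a in make_qa_pairs(doc):
        print("Q:", q)  # Q: Water boils at ____ degrees Celsius at sea level.
        print("A:", a)  # A: 100

Such pairs can then serve as synthetic training data for a reader model in the new domain.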
|
198 |
Rule-Based Software Verification and Correction. Ballis, Demis 07 May 2008 (has links)
The increasing complexity of software systems has led to the development of sophisticated formal methodologies for verifying and correcting data and programs. In general, establishing whether a program behaves correctly w.r.t. the original programmer's intention, or checking the consistency and correctness of a large set of data, is not a trivial task, as witnessed by the many case studies that occur in the literature.
In this dissertation, we face two challenging problems of verification and correction: specifically, the verification and correction of declarative programs, and the verification and correction of Web sites (i.e. large collections of semistructured data).
Firstly, we propose a general correction scheme for automatically correcting declarative, rule-based programs which exploits a combination of bottom-up as well as top-down inductive learning techniques. Our hybrid methodology is able to infer program corrections that are hard, or even impossible, to obtain with a simpler, automatic top-down or bottom-up learner. Moreover, the scheme is also particularized to some well-known declarative programming paradigms: namely, the functional logic and the functional programming paradigms.
Secondly, we formalize a framework for the automated verification of Web sites which can be used to specify integrity conditions for a given Web site, and then automatically check whether these conditions are fulfilled. We provide a rule-based, formal specification language which allows us to define syntactic as well as semantic properties of the Web site. Then, we formalize a verification technique which detects both incorrect/forbidden patterns and lack of information, that is, incomplete/missing Web pages. Useful information is gathered during the verification process which can be used to repair the Web site; so, after a verification phase, one can also semi-automatically infer some possible corrections in order to fix the Web site.
The methodology is based on a novel rewrit / Ballis, D. (2005). Rule-Based Software Verification and Correction [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1948
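A toy rendering of the two error classes the framework detects, incorrect/forbidden patterns and incomplete/missing pages, over a dictionary standing in for a Web site (rule forms and content invented; the actual framework uses a formal rule-based specification language over semistructured documents):

    import re

    site = {  # page name -> (simplified) page content
        "index.html": "<a href='members.html'>Members</a> <a href='pubs.html'>Pubs</a>",
        "members.html": "Contact: alice@example.com",
    }

    # Correctness rule: no raw e-mail address may be published.
    def forbidden_emails(page, content):
        return [f"{page}: forbidden pattern {m}" for m in re.findall(r"\S+@\S+", content)]

    # Completeness rule: every internally linked page must exist.
    def missing_pages(page, content):
        links = re.findall(r"href='([^']+)'", content)
        return [f"{page}: missing linked page {l}" for l in links if l not in site]

    errors = []
    for page, content in site.items():
        errors += forbidden_emails(page, content) + missing_pages(page, content)
    print("\n".join(errors) or "site verified")
    # index.html: missing linked page pubs.html
    # members.html: forbidden pattern alice@example.com

As in the framework above, the error reports themselves suggest the repair: remove the forbidden pattern, or add the missing page.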
|
199 |
Contributions à la co-optimisation contrôle-dimensionnement sur cycle de vie sous contrainte réseau des houlogénérateurs directs / Contribution to the sizing-control co-optimization over life cycle under grid constraint for direct-drive wave energy converters. Kovaltchouk, Thibaut 09 July 2015 (has links)
Marine renewable energies are currently developing very quickly, in upstream research as well as in R&D, and even in the first at-sea demonstrators. Among them, wave energy offers particularly interesting potential: with an average gross annual resource estimated at 40 kW/m off the Atlantic coast, the French coastline is rather well exposed. But large-scale exploitation of this renewable energy will only be feasible and relevant if it is well integrated into the electrical grid (power quality) and if its management and sizing are optimized with respect to life-cycle cost. A first all-electric generation solution for a wave energy converter was initially evaluated in the PhD thesis of Marie RUELLAN, carried out at the Brittany site of the SATIE laboratory (ENS de Cachan). That work highlighted the potential economic viability of this conversion chain, raised the question of sizing the converter-machine assembly, and brought out the problems associated with the quality of the energy produced. A second thesis was then conducted by Judicaël AUBRY in the same research team. It consisted, among other things, of studying a first solution for handling the power fluctuations, based on a supercapacitor storage system. A methodology for sizing the converter-machine assembly and managing the stored energy was also developed, but with the sizing and management of energy production decoupled from those of the storage system. The doctoral student will therefore: 1. Become familiar with previous work in the field of wave energy recovery and with the hydrodynamic and mechanical models produced by our partner, the LHEEA of the Ecole Centrale de Nantes; 2. Solve the problem of the coupling between the sizing/management of the conversion chain and the sizing/management of the storage system; 3. Take part in building a reduced-scale test bench of the electrical chain and experimentally validate the energy models of the storage system and of the associated static converters; 4. Propose a sizing methodology for the electrical chain that integrates the storage and the previously developed control laws; 5. Determine the storage capacity gains obtained by pooling production across a farm of devices and assess the value of centralized storage; 6. Analyze the grid impact of wave-generated production according to the various scenarios, models and tools developed by all the partners within the QUALIPHE project; the example treated will be that of the Ile d'Yeu (in collaboration with the SyDEV). / The work of this PhD thesis deals with the minimization of the per-kWh cost of a direct-drive wave energy converter, which is crucial to the economic feasibility of this technology. Despite the simplicity of such a chain (which should provide better reliability than an indirect chain), the conversion principle relies on an oscillating system (a heaving buoy, for example) that induces significant power fluctuations in the production. Without precautions, such fluctuations can lead to a low global efficiency, accelerated aging of the fragile electrical components, and a failure to respect power quality constraints.
To solve these issues, we first study the optimization of the control of the direct-drive wave energy converter in order to increase the global (wave-to-grid) energy efficiency, considering the conversion losses and the limits imposed by the sizing of the electrical chain (maximum force and power). The results point out the effect of the prediction horizon and of the mechanical energy term in the objective function. Production profiles allow the study of the flicker constraint (due to grid voltage fluctuations), linked notably to the grid characteristics at the connection point. Other models have also been developed to quantify the aging of the most fragile and highly stressed components, namely the energy storage system used for power smoothing (with supercapacitors or Li-ion electrochemical batteries) and the power semiconductors. Finally, these aging models are used to optimize key design parameters using life-cycle analysis. Moreover, the sizing of the storage system is co-optimized with the smoothing management.
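A simplified sketch of the power-smoothing role the storage plays (first-order filtering with invented parameter values; the thesis co-optimizes far more detailed aging and sizing models):

    # Power smoothing sketch: the grid receives a low-pass-filtered power,
    # the supercapacitor absorbs the difference within its energy limits.
    def smooth(production, dt=1.0, tau=30.0, e_max=100.0):
        grid = production[0]
        energy = e_max / 2                         # start half full (kJ)
        out = []
        for p in production:
            grid += (dt / tau) * (p - grid)        # first-order low-pass filter
            energy += (p - grid) * dt              # surplus charges the storage
            energy = min(max(energy, 0.0), e_max)  # saturation simply clipped here
            out.append(round(grid, 2))
        return out

    waves = [0, 40, 5, 60, 10, 80, 15]             # fluctuating wave power (kW)
    print(smooth(waves))                           # much smoother grid profile

A real energy management system would also adjust the grid setpoint when the storage saturates, precisely the kind of coupling between sizing and control that this thesis optimizes jointly.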
|
200 |
GIS-based Episode Reconstruction Using GPS Data for Activity Analysis and Route Choice Modeling. Dalumpines, Ron 26 September 2014 (has links)
Most transportation problems arise from individual travel decisions. In response, transportation researchers have been studying individual travel behavior, a growing trend that requires activity data at the individual level. Global positioning systems (GPS) and geographical information systems (GIS) have been used to capture and process individual activity data, from determining activity locations to mapping routes to these locations. Potential applications of GPS data seem limitless, but our tools and methods to make these data usable lag behind. In response to this need, this dissertation presents a GIS-based toolkit to automatically extract activity episodes from GPS data and derive information related to these episodes from additional data (e.g., road network, land use).
The major emphasis of this dissertation is the development of a toolkit for extracting information associated with movements of individuals from GPS data. To be effective, the toolkit has been developed around three design principles: transferability, modularity, and scalability. Two substantive chapters focus on selected components of the toolkit (map-matching, mode detection); a third covers the entire toolkit. The final substantive chapter demonstrates the toolkit's potential by comparing route choice models of work and shop trips using inputs generated by the toolkit.
There are several tools and methods that capitalize on GPS data, developed within different problem domains. This dissertation contributes to that repository of tools and methods by presenting a suite of tools that can extract all possible information that can be derived from GPS data. Unlike existing tools cited in the transportation literature, the toolkit has been designed to be complete (it covers everything from preprocessing up to extracting route attributes) and can work with GPS data alone or in combination with additional data. Moreover, this dissertation contributes to our understanding of route choice decisions for work and shop trips by looking into the combined effects of route attributes and individual characteristics. / Dissertation / Doctor of Philosophy (PhD)
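An invented illustration of the kind of rule-based processing such a toolkit applies to GPS traces (thresholds assumed; the actual toolkit also exploits GIS layers such as road networks and land use to refine episodes):

    # Mode-detection sketch: map per-episode speed statistics to a travel mode.
    def detect_mode(speeds_kmh):
        avg = sum(speeds_kmh) / len(speeds_kmh)
        peak = max(speeds_kmh)
        if avg < 6 and peak < 12:
            return "walk"
        if avg < 18:
            return "bike"
        return "car"

    episodes = {
        "segment 1": [4, 5, 5, 6],
        "segment 2": [25, 40, 33, 28],
    }
    for name, speeds in episodes.items():
        print(name, ":", detect_mode(speeds))  # walk, then car

Episode outputs like these are what downstream route choice models consume.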
|