About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Business continuity of energy systems : a quantitative framework for dynamic assessment and optimization

Xing, Jinduo 03 December 2019 (has links)
Business continuity management is a comprehensive framework to prevent disruptive events from impacting business operations, to recover business quickly, and to reduce the corresponding potential damage for energy systems such as nuclear power plants (NPPs). This dissertation discusses the following aspects: developing appropriate risk assessment methods to integrate condition monitoring data and inspection data for robust, real-time risk profile updating and prognostics. To account for the uncertainty of condition monitoring data, a hidden Markov Gaussian mixture model is developed to model the condition monitoring data, and a Bayesian network is applied to integrate the two data sources. To improve the applicability of business continuity in practice, time-variant variables related to the business continuity index, e.g. component degradation and time-dependent revenue, are taken into consideration in the business continuity modelling process. Based on the proposed business continuity index, a joint optimization method considering all the safety measures in the event evolution process, including the prevention, mitigation, emergency, and recovery stages, is developed to enhance system business continuity under limited resources. The proposed methodologies are applied to NPPs subject to disruptive events.
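As a rough illustration of the kind of model the abstract names, the sketch below fits a hidden Markov model with Gaussian mixture emissions to a synthetic degradation signal using the hmmlearn library; the signal, the state count, and all parameters are illustrative assumptions, not the thesis's actual configuration.

```python
# Sketch: modeling noisy condition-monitoring data with a hidden Markov
# model whose emissions are Gaussian mixtures (hmmlearn's GMMHMM).
# The synthetic signal and all parameters are illustrative assumptions.
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.default_rng(0)
# Synthetic 1-D degradation signal: three regimes (healthy, degraded, faulty)
healthy = rng.normal(0.0, 0.1, size=(200, 1))
degraded = rng.normal(1.0, 0.3, size=(200, 1))
faulty = rng.normal(2.5, 0.5, size=(100, 1))
X = np.vstack([healthy, degraded, faulty])

# 3 hidden states (degradation levels), 2 mixture components per state
# to absorb multimodal measurement noise.
model = GMMHMM(n_components=3, n_mix=2, covariance_type="diag",
               n_iter=100, random_state=0)
model.fit(X)

states = model.predict(X)  # most likely degradation state per sample
print("inferred state of last sample:", states[-1])
print("log-likelihood:", model.score(X))
```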
12

Using DevOps principles to continuously monitor RDF data quality

Meissner, Roy, Junghanns, Kurt 01 August 2017 (has links)
One approach to continuously achieve a certain data quality level is to use an integration pipeline that continuously checks and monitors the quality of a data set according to defined metrics. This approach is inspired by Continuous Integration pipelines, which were introduced in the area of software development and DevOps to perform continuous source code checks. By investigating possible tools and discussing the specific requirements for RDF data sets, an integration pipeline is derived that joins current approaches from the areas of software development and the semantic web and reuses existing tools. As these tools have not been built explicitly for CI usage, we evaluate their usability and propose possible workarounds and improvements. Furthermore, a real-world usage scenario is discussed, outlining the benefit of using such a pipeline.
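To make the idea concrete, here is a minimal sketch of what one quality-check stage in such a pipeline might look like, using the rdflib library; the file name, the label-coverage metric, and the 95% threshold are illustrative assumptions, not checks taken from the paper.

```python
# Sketch: one quality-check stage of a CI pipeline for an RDF data set.
# The data set path and the "every subject has an rdfs:label" metric
# are assumed for illustration.
import sys
from rdflib import Graph
from rdflib.namespace import RDFS

g = Graph()
g.parse("data.ttl", format="turtle")  # hypothetical data set under test

subjects = set(g.subjects())
unlabeled = [s for s in subjects if (s, RDFS.label, None) not in g]

coverage = 1 - len(unlabeled) / len(subjects) if subjects else 1.0
print(f"label coverage: {coverage:.1%} ({len(unlabeled)} unlabeled subjects)")

# Fail the pipeline stage when the metric drops below a threshold,
# mirroring how a CI job fails on a broken unit test.
sys.exit(0 if coverage >= 0.95 else 1)
```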
13

Patient Record Summarization Through Joint Phenotype Learning and Interactive Visualization

Levy-Fix, Gal January 2020 (has links)
Complex patients are becoming more and more of a challenge to the health care system, given the amount of care they require and the amount of documentation needed to keep track of their state of health and treatment. Record keeping using the EHR makes this easier, but the mounting amount of patient data also means that clinicians are faced with information overload. Information overload has been shown to have deleterious effects on care, with increased safety concerns due to missed information. Patient record summarization has been a promising mitigator for information overload, and a lot of research has been dedicated to record summarization since the introduction of EHRs. In this dissertation we examine whether unsupervised inference methods can derive patient problem-oriented summaries that are robust to different patients. By grounding our experiments in HIV patients, we leverage the data of a group of patients who are similar in that they share one common disease (HIV) but also exhibit complex histories of diverse comorbidities. Using a user-centered, iterative design process, we design an interactive, longitudinal patient record summarization tool that leverages automated inferences about the patient's problems. We find that unsupervised, joint learning of problems using correlated topic models, adapted to handle the multiple data types (structured and unstructured) of the EHR, is successful in identifying the salient problems of complex patients. Utilizing interactive visualization that exposes inference results to users enables them to make sense of a patient's problems over time and to answer questions about a patient more accurately and faster than using the EHR alone.
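For intuition only, the sketch below runs plain LDA, a simpler relative of the correlated topic models the abstract describes, and without the EHR-specific extensions, over a toy set of notes; the documents and parameters are invented for illustration.

```python
# Sketch: unsupervised inference of patient problems from note text.
# Plain LDA stands in for the correlated topic models of the thesis;
# the toy corpus and parameters are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

notes = [
    "hiv viral load suppressed on antiretroviral therapy",
    "type 2 diabetes poorly controlled hemoglobin a1c elevated",
    "hiv cd4 count stable continue antiretroviral regimen",
    "diabetes insulin adjusted blood glucose monitoring",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(notes)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-note mixture over problem topics

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"problem topic {k}: {top}")
```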
14

Integrating environmental data acquisition and low cost Wi-Fi data communication.

Gurung, Sanjaya 12 1900 (has links)
This thesis describes environmental data collection and transmission from the field to a server using Wi-Fi. Also discussed are components, radio wave propagation, received power calculations, and throughput tests. Measured received power was close to calculated and simulated values, and throughput tests gave satisfactory results. The thesis provides detailed, systematic procedures for Wi-Fi radio link setup and techniques to optimize the quality of a radio link.
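As an example of the received-power calculations mentioned, the sketch below applies the standard Friis free-space model; the link parameters (2.4 GHz, 15 dBm transmit power, 8 dBi antennas, 2 km path) are assumed for illustration, not taken from the thesis.

```python
# Sketch: link-budget estimate with the Friis free-space model.
# All link parameters are illustrative assumptions.
import math

def friis_received_power_dbm(pt_dbm, gt_dbi, gr_dbi, f_hz, d_m):
    """Received power over a free-space path (no cable or obstruction losses)."""
    c = 3.0e8  # speed of light, m/s
    fspl_db = 20 * math.log10(4 * math.pi * d_m * f_hz / c)  # free-space path loss
    return pt_dbm + gt_dbi + gr_dbi - fspl_db

pr = friis_received_power_dbm(pt_dbm=15, gt_dbi=8, gr_dbi=8,
                              f_hz=2.4e9, d_m=2000)
print(f"predicted received power: {pr:.1f} dBm")  # about -75 dBm
```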
15

Design and Implementation of Large-Scale Wireless Sensor Networks for Environmental Monitoring Applications

Yang, Jue 05 1900 (has links)
Environmental monitoring represents a major application domain for wireless sensor networks (WSN). However, despite significant advances in recent years, there are still many challenging issues to be addressed to exploit the full potential of the emerging WSN technology. In this dissertation, we introduce the design and implementation of low-power wireless sensor networks for long-term, autonomous, and near-real-time environmental monitoring applications. We have developed an out-of-the-box solution consisting of a suite of software, protocols and algorithms to provide reliable data collection with extremely low power consumption. Two wireless sensor networks based on the proposed solution have been deployed in remote field stations to monitor soil moisture along with other environmental parameters. As part of the ever-growing environmental monitoring cyberinfrastructure, these networks have been integrated into the Texas Environmental Observatory system for long-term operation. Environmental measurement and network performance results are presented to demonstrate the capability, reliability and energy-efficiency of the network.
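To give a feel for the energy budget such designs target, here is a back-of-envelope node-lifetime estimate; every figure (battery capacity, current draws, duty cycle) is an assumed round number, not a measurement from these deployments.

```python
# Sketch: duty-cycle lifetime estimate for a battery-powered sensor node.
# All figures are assumed round numbers, not deployment measurements.
BATTERY_MAH = 2500   # two AA cells, usable capacity
ACTIVE_MA = 20.0     # radio + MCU while sampling and transmitting
SLEEP_MA = 0.01      # deep sleep
DUTY_CYCLE = 0.001   # active 0.1% of the time (roughly 1 s every 17 min)

avg_ma = DUTY_CYCLE * ACTIVE_MA + (1 - DUTY_CYCLE) * SLEEP_MA
lifetime_days = BATTERY_MAH / avg_ma / 24
print(f"average draw: {avg_ma:.3f} mA -> lifetime ~ {lifetime_days:.0f} days")
```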
16

Monitoring and Analysis of Users Using DLP System

Pandoščák, Michal January 2011 (has links)
The purpose of this master's thesis is to study issues of monitoring and analysis of users using a DLP (Data Loss Prevention) system, covering the definition of internal and external attacks, a description of the main parts of a DLP system, policy management, monitoring of user activities, and classification of data content. This paper explains the difference between contextual and content analysis and describes their techniques. It shows the fundamentals of network and endpoint monitoring and describes the processes and user activities which may cause data leakage. Lastly, we have developed an endpoint protection agent that monitors activities at a terminal station.
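As a toy illustration of the content-analysis side, a hypothetical endpoint agent might scan outbound text for payment-card numbers, confirming regex candidates with the Luhn checksum to reduce false positives; this is a generic DLP technique, not the agent actually built in the thesis.

```python
# Sketch: content analysis for data-leak detection. A regex finds
# candidate payment-card numbers and the Luhn checksum confirms them.
# Generic illustration only, not the thesis's agent.
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            yield m.group()

leak = "invoice attached, card 4111 1111 1111 1111, exp 09/27"
print(list(find_card_numbers(leak)))  # ['4111 1111 1111 1111']
```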
17

Control of critical data flows : Automated monitoring of insurance data

Karlsson, Christoffer January 2016 (has links)
EU insurance companies are working on implementing the Solvency II directive, which calls for a stronger focus on data quality and information controls. Information controls are procedures that validate data at rest and data in motion to detect errors and anomalies. In this master's thesis a case study was carried out at AMF, a Swedish pension insurance company, to identify and investigate their critical data flows and the controls performed in the respective flows. One purpose of this project was to help AMF ensure the data quality requirements from the Financial Supervisory Authority that they have to fulfill. The thesis was conducted at AMF between September and December 2015, and included tasks such as carrying out interviews, Enterprise Architecture modeling, analysis, prototyping, product evaluation, and calculation of a business case. A gap analysis was carried out to analyze the needs for change regarding existing information controls at AMF, in which different states of the company are documented and analyzed. The current state corresponds to the present situation at the company, including attributes to be improved, while the future state outlines the target condition that the company wants to achieve. A gap between the current state and the future state is identified, and the elements that make up the gap are presented in the gap description. Possible remedies for bridging the gap between the current and future state are then presented. Furthermore, a prototype of an automated control tool from a company called Infogix has been implemented and analyzed regarding usability, governance, and cost. A benefits evaluation was carried out on the information control tool to see whether an investment would be beneficial for AMF. The benefits evaluation used the PENG method, a Swedish model developed by three senior consultants that has been specially adjusted for the evaluation of IT investments. The evaluation showed that such an investment would become beneficial during the second year after the investment.
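For flavor, the sketch below shows one classic automated information control for data in motion: reconciling record counts, control sums, and a content fingerprint between a source extract and a target load. The file names, column name, and CSV layout are invented; this is neither Infogix's product nor AMF's actual control.

```python
# Sketch: reconciliation control between a source extract and a target
# load. File names and the "premium" column are illustrative assumptions.
import csv
import hashlib

def control_totals(path: str, amount_col: str):
    """Record count, summed amount, and an order-independent content digest."""
    count, total, digest = 0, 0.0, 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            count += 1
            total += float(row[amount_col])
            # XOR of per-row hashes is order-independent, so a resorted
            # but otherwise identical file is not flagged.
            row_hash = hashlib.sha256(repr(sorted(row.items())).encode())
            digest ^= int(row_hash.hexdigest(), 16)
    return count, round(total, 2), digest

src = control_totals("source_extract.csv", "premium")  # hypothetical files
tgt = control_totals("target_load.csv", "premium")
for name, s, t in zip(("count", "sum", "digest"), src, tgt):
    print(f"{name}: source={s} target={t} -> {'OK' if s == t else 'DEVIATION'}")
```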
18

Design of a microcomputer-based open heart surgery patient monitor

Brinkman, Karen L. January 1985 (has links)
A patient monitor device for use during open heart surgery has been designed and constructed. The device uses a VIC 20 microcomputer along with some additional circuitry to monitor 3 separate functions. The first patient variable monitored is the blood flow rate through the extracorporeal blood circuit during surgery. The device also continuously monitors and displays 6 separate temperatures. Finally, 3 individual timers are monitored and displayed with the device. Both the hardware and the software used in the design are fully described. / Master of Science
19

Learning in wireless sensor networks for energy-efficient environmental monitoring

Le Borgne, Yann-Aël 30 April 2009 (has links)
Wireless sensor networks form an emerging class of computing devices capable of observing the world with unprecedented resolution, and promise to provide a revolutionary instrument for environmental monitoring. Such a network is composed of a collection of battery-operated wireless sensors, or sensor nodes, each of which is equipped with sensing, processing and wireless communication capabilities. Thanks to advances in microelectronics and wireless technologies, wireless sensors are small in size and can be deployed at low cost over different kinds of environments in order to monitor, both over space and time, the variations of physical quantities such as temperature, humidity, light, or sound.

In environmental monitoring studies, many applications are expected to run unattended for months or years. Sensor nodes are however constrained by limited resources, particularly in terms of energy. Since communication is one order of magnitude more energy-consuming than processing, the design of data collection schemes that limit the amount of transmitted data is recognized as a central issue for wireless sensor networks.

An efficient way to address this challenge is to approximate, by means of mathematical models, the evolution of the measurements taken by sensors over space and/or time. Indeed, whenever a mathematical model may be used in place of the true measurements, significant gains in communication may be obtained by transmitting only the parameters of the model instead of the set of real measurements. Since in most cases there is little or no a priori information about the variations of the sensor measurements, the models must be identified in an automated manner. This calls for the use of machine learning techniques, which make it possible to model the variations of future measurements on the basis of past measurements.

This thesis brings two main contributions to the use of learning techniques in a sensor network. First, we propose an approach which combines time series prediction and model selection for reducing the amount of communication. The rationale of this approach, called adaptive model selection, is to let the sensors determine in an automated manner a prediction model that not only fits their measurements but also reduces the amount of transmitted data.

The second main contribution is the design of a distributed approach for modeling sensed data, based on principal component analysis (PCA). The proposed method transforms the measurements along a routing tree in such a way that (i) most of the variability in the measurements is retained, and (ii) the network load sustained by sensor nodes is reduced and more evenly distributed, which in turn extends the overall network lifetime. The framework can be seen as a truly distributed approach to principal component analysis, and finds applications not only in approximate data collection tasks, but also in event detection or recognition tasks. / Doctorat en Sciences
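To illustrate the adaptive-model-selection idea in its simplest form, the sketch below uses a constant predictor shared by sensor and sink: the node transmits a reading only when the prediction misses it by more than a tolerance. The signal, the trivial model, and the threshold are illustrative assumptions, not the thesis's model family.

```python
# Sketch: dual-prediction data collection. Sensor and sink run the same
# predictor; the sensor transmits only when the prediction error exceeds
# a tolerance. Signal, predictor, and epsilon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic slowly varying temperature-like signal
t = np.arange(500)
signal = 20 + 3 * np.sin(2 * np.pi * t / 200) + rng.normal(0, 0.05, t.size)

EPS = 0.2          # tolerated prediction error, in signal units
last_sent = signal[0]
transmissions = 1  # the initial reading is always sent

for x in signal[1:]:
    predicted = last_sent          # constant predictor, known to the sink
    if abs(x - predicted) > EPS:   # model no longer fits: send an update
        last_sent = x
        transmissions += 1

print(f"sent {transmissions}/{signal.size} readings "
      f"({1 - transmissions / signal.size:.0%} of radio messages saved)")
```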
