  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Event Mining for System and Service Management

Tang, Liang 18 April 2014 (has links)
Modern IT infrastructures are built from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient, so service providers often seek automatic or semi-automatic methods for detecting and resolving system issues to improve their service quality and efficiency. This dissertation investigates several data-driven approaches for helping service providers achieve this goal. The problems studied fall into three aspects of the service workflow: 1) preprocessing raw textual system logs into structured events; 2) refining monitoring configurations to eliminate false positives and false negatives; and 3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a large amount of domain knowledge about the particular computing systems. The approaches investigated in this dissertation build on event mining algorithms, which can automatically derive part of that knowledge from historical system logs, events and tickets. In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alert, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, this dissertation presents an efficient algorithm for discovering temporal dependencies between system events, along with their corresponding time lags, which can help administrators determine redundancies among deployed monitoring situations and dependencies among system components.
To improve the efficiency of incident ticket resolution, several KNN-based algorithms are investigated that recommend relevant historical tickets, with their resolutions, for incoming tickets. Finally, this dissertation offers a novel algorithm for searching similar textual event segments over large system logs, which helps administrators locate similar system behaviors in the logs. Extensive empirical evaluation on system logs, events and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
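As a rough illustration of the KNN-style ticket recommendation the dissertation investigates, the sketch below ranks resolved historical tickets by bag-of-words cosine similarity to an incoming ticket and returns the top resolutions. The ticket texts, field names and plain word-count weighting are illustrative assumptions, not the dissertation's actual data or features (a real system would likely use TF-IDF or richer features).

```python
from collections import Counter
from math import sqrt

def tokenize(text):
    # Lowercase bag-of-words; a real system would use TF-IDF weighting.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(incoming, historical, k=2):
    # Rank resolved historical tickets by text similarity to the incoming ticket.
    query = tokenize(incoming)
    scored = [(cosine(query, tokenize(t["text"])), t) for t in historical]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [t["resolution"] for score, t in scored[:k] if score > 0]

history = [
    {"text": "disk space low on /var filesystem", "resolution": "purge old logs"},
    {"text": "cpu utilization high on db host", "resolution": "kill runaway query"},
    {"text": "filesystem /var full disk alert", "resolution": "extend volume"},
]
recs = recommend("disk full alert on /var", history, k=2)
print(recs)
```

The most lexically similar resolved tickets surface first, which is the core intuition behind recommending resolutions from historical incident data.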
12

Assessment of drinking water quality using disinfection by-products in a distribution system following a treatment technology upgrade

Bush, Kelly Lynn 05 1900 (has links)
Chlorine is the most widely used disinfectant for drinking water treatment. Chlorine can react with natural organic matter (NOM) in water sources, resulting in the formation of potentially carcinogenic disinfection by-products (DBPs). The most common DBPs measured in chlorinated drinking water distribution systems are trihalomethanes (THMs) and haloacetic acids (HAAs). In 2005, the City of Kamloops, British Columbia, upgraded its drinking water treatment system to ultrafiltration membrane treatment. The objective of this study was to determine the extent to which upgrades to a drinking water treatment system, specifically implementation of an ultrafiltration treatment process, affected DBP formation within a distribution system. The study used a two-phase research approach. Phase I was a distribution system monitoring program that collected water samples and physical and chemical information using data loggers at five sampling sites within the distribution system. Phase II used bench-scale simulations that modeled DBP formation with a flow-through reactor system, the material-specific simulated distribution system (MS-SDS), constructed of pipe material recovered from the City of Kamloops distribution system. Phase I results suggested that implementation of the ultrafiltration treatment process and the accompanying treatment system upgrade was not effective at reducing the concentration of DBPs delivered to consumers. Concentrations of THMs remained relatively constant at sampling sites, while concentrations of HAAs increased following implementation of the ultrafiltration treatment process. The increase in HAA formation was likely due to an increase in the retention time of water within the distribution system following the upgrade, rather than to the treatment process itself.
The results of this study are consistent with previous work on DBP precursors in South Thompson River water, which suggested that the THM and HAA precursors in this source water are small and hydrophilic, and therefore cannot be removed by ultrafiltration processes. Phase II results showed that the MS-SDS was more representative of distribution system conditions than traditional glass bottles for estimating DBP formation. It is recommended that the MS-SDS be used in parallel with a distribution system monitoring program to estimate distribution system retention times from THM and HAA concentrations. / Faculty of Applied Science, Department of Civil Engineering / Graduate
13

Advancements in power system monitoring and inter-operability

Mohan, Vinoth Mohan 11 December 2009 (has links)
In a typical utility control center, there can be hundreds of applications running to handle day-to-day functionality. In many cases, these applications are custom-built by different vendors. With the expectation of high reliability of the electric power grid, many utilities are increasingly moving towards sharing data with each other and with security coordinators. But this data exchange is hampered by incompatible electrical applications built on proprietary data formats and file systems. The Electric Power Research Institute's (EPRI's) Common Information Model (CIM) was envisioned as a one-size-fits-all data model to remove incompatibility between applications. This research work uses CIM models to exchange power system models and measurements between a state estimator application and a sensor web application. The CIM was further extended to include a few devices unique to the shipboard medium-voltage DC power system. Finally, a wide-area monitoring test bed was set up at MSU to perform wide-area monitoring using phasor measurement units (PMUs). The outputs from the Phasor Data Concentrator (PDC) were then converted into CIM/XML documents to make them compatible with the sensor web application. These applications have created advancements in power system monitoring and interoperability.
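To make the idea of wrapping PDC output in CIM/XML concrete, the sketch below serializes a single PMU measurement as a small XML document. The namespace URI and element names (`AnalogValue`, `value`, `timeStamp`, `phase`) are illustrative assumptions for this sketch, not the exact CIM RDF schema used in the thesis.

```python
import xml.etree.ElementTree as ET

# Namespace and element names here are illustrative, not the exact CIM schema.
CIM = "http://iec.ch/TC57/CIM#"

def pmu_to_cim_xml(measurement):
    ET.register_namespace("cim", CIM)
    root = ET.Element(f"{{{CIM}}}AnalogValue")
    # Map each measurement field onto a child element.
    for tag, key in (("value", "magnitude"), ("timeStamp", "timestamp"), ("phase", "angle")):
        child = ET.SubElement(root, f"{{{CIM}}}{tag}")
        child.text = str(measurement[key])
    return ET.tostring(root, encoding="unicode")

xml_doc = pmu_to_cim_xml(
    {"magnitude": 0.998, "angle": -12.4, "timestamp": "2009-12-11T10:00:00Z"}
)
print(xml_doc)
```

Once measurements share a common XML vocabulary like this, downstream applications such as a sensor web client can consume them without vendor-specific parsers, which is the interoperability point the abstract makes.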
14

Hígia: um modelo para cuidado ubíquo de pessoas com depressão (Hígia: a model for ubiquitous care of people with depression)

Petry, Milene Martini 31 March 2016 (has links)
Funding: CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior). / The globalized world is changing people's routines. Increasingly, people lead stressful lives, suffering numerous pressures and developing anxieties. As a result, many of them develop mental disorders such as depression.
Nowadays, several research efforts and applications are available to assist people who suffer or have suffered from depression. This work aims to identify, in the user's daily actions and social interactions across the various devices they use, patterns similar to those of previous crises. By collecting e-mail usage information, social network activity and the user's location, the model seeks to identify whether the user is again developing features of their own depressive profile. The proposed model monitors the interactions of users who have already suffered from the disease and identifies patterns similar to those of previous crises. Whenever such a pattern of actions is identified, a family member or friend designated by the user (here called a helper) and the responsible physician, if any, are notified, helping to avoid the recurrence of a crisis. The model collects this information without user intervention: the user only has to consent to the monitoring, and their interactions are then recorded and evaluated. In most related studies, interactions with users may lead them to withhold data, such as symptoms and facts needed to detect a significant sign of the disease. Hígia proposes no user intervention, so no information is withheld. Furthermore, it analyzes social interaction data, a set of features that has not previously been explored together in the related work surveyed. Hígia aims to build trails of usual behavior to identify possible depressive signs similar to those the user experienced in the past and to warn linked people as soon as possible so that measures can be taken. This is done through constant evaluation of the user's characteristics on social networks, in e-mails and in interactions with their smartphone, computer or other devices, as well as their location. A prototype of the designed solution was implemented.
To evaluate the results, seven people used the prototype for a period of seven days so that the behavioral trails could be formed; the analysis was based on the prototype's behavior, the generated trails and the comparisons performed. Professionals in psychology were also asked their opinion on the usefulness of the generated trails for aiding future diagnoses. The results with users were largely positive: users were confident about using Hígia and understood the need for, and the functionality of, the application. In the feedback from doctors and specialists, all said that the application could bring benefits for treatment and for patients.
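The core mechanism the abstract describes, comparing a user's recent activity trail against a stored profile of a past depressive episode, can be sketched as a simple per-feature comparison. The feature names, numbers and tolerance below are invented for illustration; the actual model draws on e-mail, social-network and location data collected without user intervention.

```python
# Hypothetical per-day averages from a past crisis period (assumed features).
crisis_profile = {"messages_per_day": 2.0, "places_visited": 1.0, "night_activity": 0.8}

def resembles_crisis(recent, profile, tolerance=0.25):
    # Flag when every monitored feature is within `tolerance` (relative) of
    # the stored crisis-period average, i.e. the current trail resembles a
    # previously observed episode.
    return all(
        abs(recent[key] - value) <= tolerance * max(value, 1e-9)
        for key, value in profile.items()
    )

recent_week = {"messages_per_day": 2.1, "places_visited": 1.0, "night_activity": 0.7}
if resembles_crisis(recent_week, crisis_profile):
    print("notify designated helper and physician")
```

A production system would use a learned model rather than fixed thresholds, but the flow is the same: match the current trail against the user's own history, then alert the designated helpers.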
15

Robust and efficient malware analysis and host-based monitoring

Sharif, Monirul Islam 15 November 2010 (has links)
Today, host-based malware detection approaches such as antivirus programs are severely lagging in their defense against malware. The overall effectiveness of malware detection depends on two aspects: successfully extracting information from malware through analysis to generate signatures, and then successfully utilizing those signatures on target hosts with appropriate system monitoring techniques. Today's malware employs a vast array of anti-analysis and anti-monitoring techniques to deter analysis and neutralize antivirus programs, reducing the overall success of malware detection. In this dissertation, we present a set of practical approaches for robust and efficient malware analysis and system monitoring that can make malware detection on hosts more effective. First, we present a framework called Eureka, which efficiently deobfuscates single-pass and multi-pass packed binaries and restores obfuscated API calls, providing a basis for extracting comprehensive information from the malware using further static analysis. Second, we present a formal framework for transparent malware analysis and Ether, a dynamic malware analysis environment based on this framework that provides transparent fine-grained (single instruction) and coarse-grained (system call) tracing. Third, we introduce an input-based obfuscation technique that hides trigger-based behavior from any input-oblivious analyzer. Fourth, we present an approach that automatically reverse-engineers an emulator and extracts the syntax and semantics of its bytecode language, which helps construct control-flow graphs of the bytecode program and enables further analysis of the malicious code. Finally, we present Secure In-VM Monitoring, an approach for efficiently monitoring a target host while remaining robust against unknown malware that may attempt to neutralize security tools.
16

System-Level Observation Framework for Non-Intrusive Runtime Monitoring of Embedded Systems

Lee, Jong Chul January 2014 (has links)
As system complexity continues to increase, the integration of software and hardware subsystems within a system-on-a-chip (SOC) presents significant challenges in post-silicon validation, testing, and in-situ debugging across hardware and software layers. The deep integration of software and hardware components within SOCs often prevents the use of traditional analysis methods to observe and monitor the internal state of these components. The situation is further exacerbated for in-situ debugging and testing, in which physical access to traditional debug and trace interfaces is unavailable, infeasible, or cost prohibitive. In this dissertation, we present a system-level observation framework (SOF) that provides minimally intrusive methods for dynamically monitoring and analyzing deeply integrated hardware and software components within embedded systems. The SOF monitors hardware and software events by inserting additional logic within hardware cores and by listening to processor trace ports, providing visibility into the complex execution behavior of software applications without affecting system execution. The SOF uses a dedicated event-streaming interface that allows efficient observation and analysis of rapidly occurring events at runtime. The event-streaming interface supports three alternatives: (1) an in-order priority-based event stream controller, (2) a round-robin priority-based event stream controller, and (3) a priority-level based event stream controller. The in-order priority-based event stream controller, which uses an efficient pipelined hardware architecture, ensures that events are reported in order based on the time of their occurrence. While it provides high throughput for reporting events, it can incur a significant area requirement.
The round-robin priority-based event stream controller is an area-efficient event stream ordering technique with acceptable tradeoffs in event stream throughput. To further reduce the area requirement, the SOF supports a priority-level based event stream controller that provides in-order delivery with smaller area requirements than the round-robin priority-based event stream controller. Comprehensive experimental results using a complete prototype system implementation quantify the tradeoffs in area, throughput, and latency for the various event streaming interfaces under several execution scenarios.
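The in-order delivery guarantee described above can be sketched in software with a priority queue: buffered events are always emitted in timestamp order, with priority breaking ties. This is only a behavioral model of the idea; the actual SOF controllers are pipelined hardware, and the event names and priority convention below are invented for illustration.

```python
import heapq

class EventStreamController:
    """Software sketch of in-order event streaming: buffered events are
    emitted in timestamp order, with priority breaking ties."""

    def __init__(self):
        self._heap = []

    def report(self, timestamp, priority, payload):
        # Lower priority number = more urgent, mirroring typical arbiter schemes.
        heapq.heappush(self._heap, (timestamp, priority, payload))

    def drain(self):
        # Yield buffered events in (timestamp, priority) order.
        while self._heap:
            yield heapq.heappop(self._heap)

ctrl = EventStreamController()
ctrl.report(30, 1, "sw: task switch")
ctrl.report(10, 2, "hw: bus error")
ctrl.report(10, 1, "hw: irq asserted")
ordered = [payload for _, _, payload in ctrl.drain()]
print(ordered)  # events at t=10 first; priority 1 before priority 2
```

The hardware tradeoff the abstract discusses is essentially how much buffering and comparison logic (area) one spends to approximate this total order at full event rate.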
17

Thevenin Equivalent Circuit Estimation and Application for Power System Monitoring and Protection

Iftakhar, Mohammad M 01 January 2008 (has links)
Estimation of Thevenin equivalent parameters is useful for power system monitoring and protection. We studied a method for estimating Thevenin equivalent circuits and then examined two applications: voltage stability assessment and fault location. The initial part of this thesis reviews the concepts of voltage stability. A six-bus power system model was simulated using MATLAB SIMULINK®, and the Thevenin parameters were calculated. The results were then used for two purposes: to calculate the maximum power that can be delivered and to locate faults.
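One standard way to estimate a Thevenin equivalent as seen from a bus, sketched below, uses two voltage/current phasor measurements taken at different operating points and solves V = E − Z·I for the source E and impedance Z. The per-unit values are synthetic; the thesis's actual estimation method and six-bus model data are not reproduced here.

```python
# Estimate Thevenin voltage E and impedance Z at a bus from two phasor
# measurements (V, I), using V = E - Z*I. Values are complex per-unit
# and purely illustrative.
def thevenin_from_two_points(v1, i1, v2, i2):
    z = (v1 - v2) / (i2 - i1)   # two equations in E, Z; eliminate E
    e = v1 + z * i1
    return e, z

# Synthetic noise-free data generated from a known E and Z:
E, Z = 1.02 + 0j, 0.01 + 0.1j
i1, i2 = 0.5 - 0.1j, 0.8 - 0.2j
v1, v2 = E - Z * i1, E - Z * i2

e_est, z_est = thevenin_from_two_points(v1, i1, v2, i2)
print(e_est, z_est)  # recovers E and Z on noise-free data
```

With E and Z in hand, the maximum power transfer point (load impedance magnitude equal to |Z|) follows directly, which is how Thevenin tracking connects to voltage stability monitoring.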
18

A System for Automatic Information Extraction from Log Files

Chhabra, Anubhav 15 August 2022 (has links)
The development of technology and of data-driven systems and applications is constantly revolutionizing our lives. We are surrounded by digitized systems and solutions that transform our lives and make them easier, and the criticality and complexity behind these systems are immense. To meet user satisfaction and keep up with business needs, these digital systems must offer high availability and minimal downtime and must mitigate cyber attacks. Hence, system monitoring becomes an integral part of the lifecycle of a digital product or system. System monitoring often includes monitoring and analyzing logs output by the systems, which contain information about the events occurring within a system. The first step in log analysis generally involves understanding and segregating the various logical components within a log line, a task termed log parsing. Traditional log parsers use regular expressions and human-defined grammars to extract information from logs. Human experts are required to create, maintain and update the database of these regular expressions and rules, and they must keep up with the pace at which new products, applications and systems are developed and deployed, as each unique application or system has its own logs and logging standards. Logs from new sources tend to break existing systems, as none of the expressions match the signature of the incoming logs. For these reasons, traditional log parsers are time-consuming, hard to maintain, prone to errors, and not scalable. On the other hand, machine learning based methodologies can help us develop solutions that automate the log parsing process without much intervention from human experts. NERLogParser is one such solution, using a Bidirectional Long Short-Term Memory (BiLSTM) architecture to frame the log parsing problem as a Named Entity Recognition (NER) problem.
There have been recent advancements in the Natural Language Processing (NLP) domain with the introduction of architectures like the Transformer and Bidirectional Encoder Representations from Transformers (BERT). However, these techniques have not been applied to the problem of information extraction from log files, which gives us a clear research gap for experimenting with recent advanced deep learning architectures. This thesis extensively compares different machine learning based log parsing approaches that frame the log parsing problem as a NER problem. We compare 14 different approaches, including three traditional word-based methods: Naive Bayes, Perceptron and Stochastic Gradient Descent; a graphical model: Conditional Random Fields (CRF); a pre-trained sequence-to-sequence model for log parsing: NERLogParser; an attention-based sequence-to-sequence model: the Transformer neural network; three neural language models: BERT, RoBERTa and DistilBERT; two traditional ensembles; and three cascading classifiers formed from the individual classifiers mentioned above. We evaluate the NER approaches using an evaluation framework that offers four different evaluation schemes, which not only help compare the NER approaches but also help assess the quality of the extracted information. The primary goal of this research is to evaluate the NER approaches on logs from new and unseen sources. To the best of our knowledge, no study in the literature evaluates NER methodologies in such a context. Evaluating NER approaches on unseen logs helps us understand the robustness and generalization capabilities of the various methodologies. To carry out the experimentation, we use In-Scope and Out-of-Scope datasets, which originate from entirely different sources and are mutually exclusive.
The In-Scope dataset is used for training, validation and testing purposes, whereas the Out-of-Scope dataset is purely used to evaluate the robustness and generalization capability of NER approaches. To better deal with logs from unknown sources, we propose Log Diversification Unit (LoDU), a unit of our system that enables us to carry out log augmentation and enrichment, which helps make the NER approaches more robust towards new and unseen logs. We segregate our final results on a use-case basis where different NER approaches may be suitable for various applications. Overall, traditional ensembles perform the best in parsing the Out-of-Scope log files, but they may not be the best option to consider for real-time applications. On the other hand, if we want to balance the trade-off between performance and throughput, cascading classifiers can be considered the go-to solution.
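Framing log parsing as NER, as this thesis does, means assigning an entity label to each token of a log line and then grouping labeled tokens back into fields. The sketch below shows that framing with hand-assigned labels standing in for a trained model's predictions; the label set and the sample syslog line are illustrative assumptions, not the tag set used by NERLogParser.

```python
# A log line framed as token classification: each whitespace token gets an
# entity label. In the thesis, a trained model predicts these labels; here
# they are hand-assigned to show the representation.
log_line = "Jul 17 22:04:25 host1 sshd[1234]: Failed password for root"
tokens = log_line.split()
labels = ["MONTH", "DAY", "TIME", "HOST", "PROCESS",
          "MESSAGE", "MESSAGE", "MESSAGE", "MESSAGE"]

def extract(tokens, labels):
    # Group tokens that share a label into named fields.
    fields = {}
    for token, label in zip(tokens, labels):
        fields.setdefault(label, []).append(token)
    return {label: " ".join(toks) for label, toks in fields.items()}

parsed = extract(tokens, labels)
print(parsed)
```

Because the model labels tokens rather than matching whole-line regular expressions, a log line from an unseen source can still be parsed as long as its tokens resemble ones seen in training, which is exactly the Out-of-Scope robustness question the thesis studies.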
19

Intelligent placement of meters/sensors for shipboard power system analysis

Sankar, Sandhya 15 December 2007 (has links)
Real-time monitoring of the shipboard power system is a complex task. Unlike a terrestrial power system, the shipboard power system is comparatively small but more complex in terms of its operation, and it must be continuously monitored to detect any fluctuations or disturbances. Planning metering systems for the power system of a ship is challenging, not only because of the dimensionality of the problem but also because of the need to reduce redundancy while improving network observability and ensuring efficient data collection for a reliable state estimation process. This research uses a genetic algorithm for intelligent placement of meters in a shipboard system for real-time power system monitoring, taking into account different system topologies and the critical parameters to be measured. The algorithm predicts the type and location of meters for identifying and collecting measurements from the system, and it has been tested with several system topologies.
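A genetic algorithm for meter placement of the kind described above can be sketched as evolving bit-strings, one bit per bus, under a fitness that rewards observability and penalizes meter count. The toy 6-bus topology, the observability rule (a meter sees its bus and immediate neighbors), the 0.5 per-meter cost and the GA settings are all invented for illustration and are much simpler than the thesis's shipboard model.

```python
import random

random.seed(1)

# Toy 6-bus network as an adjacency map; a meter placed at a bus is assumed
# to observe that bus and its immediate neighbors.
adj = {0: [1], 1: [0, 2, 3], 2: [1, 4], 3: [1, 5], 4: [2], 5: [3]}

def fitness(genome):
    # Reward observed buses, penalize each installed meter.
    observed = set()
    for bus, has_meter in enumerate(genome):
        if has_meter:
            observed.add(bus)
            observed.update(adj[bus])
    return len(observed) - 0.5 * sum(genome)

def evolve(pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in adj] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(adj))       # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(len(adj))] ^= 1    # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The same skeleton scales to the real problem by enriching the fitness function with state-estimation observability checks, meter types and cost models, which is where the thesis's contribution lies.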
20

Power System Disturbance Analysis and Detection Based on Wide-Area Measurements

Dong, Jingyuan 09 January 2009 (has links)
Wide-area measurement systems (WAMS) enable monitoring of entire bulk power systems and provide critical information for understanding and responding to power system disturbances and cascading failures. The North American Frequency Monitoring Network (FNET) takes GPS-synchronized wide-area measurements in a low-cost, easily deployable manner at the 120 V distribution level, which opens more opportunities to study power system dynamics. This work explores power system disturbance analysis and detection using wide-area measurements obtained in distribution networks. Statistical analysis is conducted on the major disturbances detected by the FNET situation awareness system between 2006 and 2008 in the three North American Interconnections: the Eastern Interconnection (EI), the Western Electricity Coordinating Council (WECC), and the Electric Reliability Council of Texas (ERCOT). Typical frequency patterns of generation loss and load loss events are analyzed for each Interconnection. The linear relationship between frequency deviation and the rate of frequency change during generation/load mismatch events is verified by measurements in all three Interconnections, and the relationship between the generation/load mismatch and system frequency is examined using confirmed generation loss events in the EI system. A power mismatch estimator is then developed to improve the current disturbance detection program. Various types of power system disturbances are examined in terms of frequency, voltage and phase angle to obtain event signatures in the measurements. To better understand the propagation of disturbances in the power system, an automated visualization tool is developed that can generate frequency and angle replays of disturbances, as well as image snapshots.
This visualization tool correlates wide-area measurements with geographical information by displaying the measurements over a geographical map, and the work investigates visualization of the angle profile across the wide-area power system to improve situation awareness. The work also explores the viability of relying primarily on distribution-level measurements to detect and identify line outages, a topic not addressed in previous works. Line outage sensitivity at different voltage levels in the Tennessee Valley Authority (TVA) system is examined to analyze the visibility of disturbances from the point of view of wide-area measurements, and a sensor placement strategy is proposed for better observability of line trip disturbances. The characteristics of line outages are studied extensively with simulations and real measurements. Line trip detection algorithms are proposed that employ the information in frequency and phase angle measurements. Despite the limited FDR coverage and few confirmed training cases, an identification algorithm is developed that uses information from real measurements as well as simulation cases to determine the tripped line. / Ph.D.
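The power mismatch estimation mentioned above rests on the swing-equation relationship between the initial rate of change of frequency and the size of the generation/load imbalance. The sketch below shows that back-of-envelope calculation; the aggregate inertia constant and system base are assumed round numbers for illustration, not FNET's calibrated parameters.

```python
# Back-of-envelope mismatch estimator from the swing equation:
#   df/dt = f0 * dP / (2 * H_sys),  with dP and H_sys in per-unit on the
# system base, so dP = 2 * H_sys * (df/dt) / f0.
F0 = 60.0           # Hz, nominal frequency
H_SYS = 4.0         # s, assumed aggregate inertia constant
S_BASE = 100_000.0  # MW, assumed system base

def mismatch_mw(rocof_hz_per_s):
    # Negative ROCOF (falling frequency) implies a generation deficit.
    dp_pu = 2 * H_SYS * rocof_hz_per_s / F0
    return dp_pu * S_BASE

# Under these assumptions, an initial decline of -0.009 Hz/s corresponds
# to roughly a 120 MW generation deficit.
print(round(mismatch_mw(-0.009)))
```

This linear frequency-deviation/mismatch relationship is exactly what the abstract reports verifying against confirmed generation loss events in the EI system.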
