351

Towards a learning system for process and energy industry : Enabling optimal control, diagnostics and decision support

Rahman, Moksadur January 2019 (has links)
Driven by intense competition, increasing operational costs and strict environmental regulations, the modern process and energy industry needs to find the best possible ways to adapt in order to maintain profitability. Optimizing the control and operation of industrial systems is essential to satisfy the conflicting objectives of improving product quality and process efficiency while reducing production cost and plant downtime. Optimization not only improves the control and monitoring of assets but also offers better coordination among different assets. It can thus lead to considerable savings in energy and resource consumption and, consequently, to lower operational costs through better control, diagnostics and decision support. This is one of the main driving forces behind developing new methods, tools and frameworks that can be integrated with existing industrial automation platforms to benefit from optimal control and operation. The main focus of this dissertation is the use of different process models, soft sensors and optimization techniques to improve control, diagnostics and decision support for the process and energy industry. A generic architecture for an optimal control, diagnostics and decision support system, referred to here as a learning system, is proposed. The research centres on an investigation of the different components of the proposed learning system. Two very different case studies within the energy-intensive pulp and paper industry and the promising micro-combined heat and power (CHP) industry are selected to demonstrate the learning system. One of the main challenges in this research arises from the marked differences between the case studies in terms of size, function, and the quantity and structure of the existing automation systems. Typically, only a few pulp digesters are found in a Kraft pulping mill, but there may be hundreds of units in a micro-CHP fleet. The main argument behind the selection of these two case studies is that if the proposed learning system architecture can be adapted for such significantly different cases, it can be adapted for many other energy and process industry cases. Within the scope of this thesis, mathematical modelling, model adaptation, model predictive control and diagnostics methods are studied for continuous pulp digesters, whereas mathematical modelling, model adaptation and diagnostics techniques are explored for the micro-CHP fleet. / FUDIPO – FUture DIrections for Process industry Optimization
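To make the model predictive control component concrete, the following minimal receding-horizon sketch optimizes a control sequence over a short horizon and applies only the first move. The scalar first-order process, horizon and weights are invented for illustration and are not from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Receding-horizon (MPC) sketch for an assumed scalar process
# x[k+1] = a*x[k] + b*u[k]; all parameters are illustrative.
a, b = 0.9, 0.1          # assumed process dynamics
horizon = 10             # prediction horizon
x_ref = 1.0              # setpoint

def cost(u_seq, x0):
    """Quadratic tracking cost over the horizon plus control effort."""
    x, total = x0, 0.0
    for u in u_seq:
        x = a * x + b * u
        total += (x - x_ref) ** 2 + 0.01 * u ** 2
    return total

x = 0.0
for step in range(20):
    res = minimize(cost, np.zeros(horizon), args=(x,))
    u0 = res.x[0]           # apply only the first optimized move
    x = a * x + b * u0      # plant update (here: the model itself)
    print(f"step {step:2d}  u={u0:+.3f}  x={x:.3f}")
```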
352

Hluboké neuronové sítě pro detekci anomálií při kontrole kvality / Deep Neural Networks for Defect Detection

Juřica, Tomáš January 2019 (has links)
The goal of this work is to bring automatic defect detection to the manufacturing process of plastic cards. A card is considered defective when it is contaminated with a dust particle or a hair. The main challenges in accomplishing this task are the very small number of training samples (214 images), the small area of the target defects relative to an entire card (the average defect covers 0.0068% of the card) and the very complex background against which detection is performed. To accomplish the task, I used the Mask R-CNN detection algorithm combined with augmentation techniques such as synthetic dataset generation. I trained the model on a synthetic dataset of 20,000 images and was thereby able to create a model achieving 0.83 AP at 0.1 IoU on the original test set.
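For readers unfamiliar with the reported metric, the sketch below shows how a single detection is scored against ground truth at the loose IoU threshold of 0.1; the box coordinates are invented, and Mask R-CNN itself operates on masks as well as boxes.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as a true positive if it overlaps a ground-truth
# defect with IoU >= 0.1 -- a deliberately loose criterion, plausible
# when defects cover ~0.007% of the card and localization is coarse.
print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143, would count at 0.1
```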
353

Detekce anomálií běhu RTOS aplikace / Detecting RTOS Runtime Anomalies

Arm, Jakub January 2020 (has links)
Due to rising requirements on computational power and the safety, or functional safety, of equipment intended for use in the industrial domain, embedded systems containing a real-time operating system remain an active area of research. This thesis addresses a hardware-assisted control module based on run-time model-based verification of a target application. The subsystem is intended to increase diagnostic coverage, particularly the detection of execution errors. After specification of the architecture, the formal model is defined and implemented in hardware using FPGA technology. The thesis also discusses other aspects and introduces new approaches in the area of embedded flow control, e.g. the integration of design patterns. Using simulation, the created module was tested with scenarios that follow a real program execution record. The results suggest that the error detection time is lower than with standard techniques such as a watchdog.
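The following toy monitor illustrates the general idea of run-time model-based verification: events emitted by the application are checked against an allowed-transition model, so a deviation is flagged the moment it occurs rather than after a watchdog timeout. States and events are invented placeholders; the thesis realizes this in FPGA hardware, not Python.

```python
# Allowed (state, event) -> next-state transitions of the formal model.
ALLOWED = {
    ("idle", "start"): "running",
    ("running", "yield"): "idle",
    ("running", "irq"): "isr",
    ("isr", "ret"): "running",
}

def monitor(trace, state="idle"):
    """Replay an event trace against the model; report the first violation."""
    for i, event in enumerate(trace):
        nxt = ALLOWED.get((state, event))
        if nxt is None:
            return f"anomaly at event {i}: '{event}' illegal in state '{state}'"
        state = nxt
    return "trace conforms to the model"

print(monitor(["start", "irq", "ret", "yield"]))   # conforms
print(monitor(["start", "ret"]))                   # anomaly at event 1
```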
354

Detekce síťových anomálií / Network Anomaly Detection

Pšorn, Daniel January 2012 (has links)
This master thesis deals with methods for detecting anomalies in network traffic. It first analyses the basic concepts of anomaly detection and the technology already in use. Three methods for searching for anomalies, and several types of anomalies, are then described in more detail. The second part of the thesis describes the implementation of all three methods and presents the results of experiments on real data.
355

Développement d'algorithmes d'analyse spectrale en spectrométrie gamma embarquée / Embedded gamma spectrometry : new algorithms for spectral analysis

Martin-Burtart, Nicolas 06 December 2012 (has links)
Jusqu’au début des années 1980, la spectrométrie gamma aéroportée a avant tout été utilisée pour des applications géophysiques et ne concernait que la mesure des concentrations dans les sols des trois radionucléides naturels (K40, U238 et Th232). Durant les quinze dernières années, un grand nombre de dispositifs de mesures a été développé, la plupart après l’accident de Tchernobyl, pour intervenir en cas d’incidents nucléaires ou de surveillance de l’environnement. Les algorithmes développés ont suivi les différentes missions de ces systèmes. La plupart sont dédiés à l’extraction des signaux à moyenne et haute énergie, où les radionucléides naturels (K40, les chaînes U238 et Th232) et les produits de fission (Cs137 et Co60 principalement) sont présents. A plus basse énergie (< 400 keV), ces méthodes peuvent toujours être utilisées mais les particularités du fond de diffusion, très intense, les rendent peu précises. Cette zone énergétique est importante : les SNM émettent à ces énergies. Un algorithme, appelé 2-fenêtres (étendu à 3), a été développé permettant une extraction précise et tenant compte des conditions de vol. La surveillance du trafic de matières radioactives dans le cadre de la sécurité globale a fait son apparition depuis quelques années. Cette utilisation nécessite non plus des méthodes sensibles à un élément particulier mais des critères d’anomalie prenant en compte l’ensemble du spectre enregistré. Il faut être sensible à la fois aux radionucléides médicaux, industriels et nucléaires. Ce travail a permis d’identifier deux familles d’algorithmes permettant de telles utilisations. Enfin, les anomalies détectées doivent être identifiées. La liste des radionucléides nécessitant une surveillance particulière, recommandée par l’AIEA, contient une trentaine d’émetteurs. Un nouvel algorithme d’identification a été entièrement développé, permettant de s’appuyer sur plusieurs raies d’absorption par élément et de lever les conflits d’identification. / Airborne gamma spectrometry was first used for mining prospection, targeting three main families: K40, U238 and Th232. The Chernobyl accident acted as a trigger, and over the last fifteen years many new systems have been developed for intervention in case of nuclear accident or for environmental purposes. Depending on their use, new algorithms were developed, mainly for medium- and high-energy signal extraction. These spectral regions are characteristic of natural emissions (K40 and the U238 and Th232 decay chains) and of fission products (mainly Cs137 and Co60). Below 400 keV, where special nuclear materials emit, these methods can still be used but are very imprecise. A new algorithm, called 2-windows (extended to 3), was developed. It allows accurate extraction, taking the flight altitude into account to minimize false detections. Monitoring traffic in radioactive materials appeared with homeland-security policy a few years ago. This particular use of dedicated sensors requires a new type of algorithm: whereas earlier algorithms were each very efficient for a particular nuclide or spectral region, we now need algorithms able to detect an anomaly wherever it is and whatever it is, industrial, medical or SNM. This work identified two families of methods that work under these circumstances. Finally, detected anomalies have to be identified. The IAEA recommends watching around 30 radionuclides. A brand-new identification algorithm was developed, relying on several absorption lines per element and resolving identification conflicts.
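The sketch below illustrates the general window-subtraction idea behind such extraction algorithms: the continuum under a peak window is estimated from flanking background windows and subtracted. The spectrum and channel ranges are synthetic, and the thesis' 2/3-window algorithm additionally corrects for flight conditions such as altitude.

```python
import numpy as np

# Synthetic 512-channel spectrum: Poisson continuum plus an injected peak.
rng = np.random.default_rng(0)
spectrum = rng.poisson(100, 512).astype(float)
spectrum[200:210] += 400            # synthetic peak of interest

peak = slice(200, 210)
left, right = slice(185, 195), slice(215, 225)  # flanking background windows

# Estimate the continuum per channel from the two side windows, then
# subtract its contribution from the gross counts in the peak window.
continuum = 0.5 * (spectrum[left].mean() + spectrum[right].mean())
net_counts = spectrum[peak].sum() - continuum * (peak.stop - peak.start)
print(f"net peak counts: {net_counts:.0f}")
```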
356

Hypervisor-based cloud anomaly detection using supervised learning techniques

Nwamuo, Onyekachi 23 January 2020 (has links)
Although cloud network flows are similar to conventional network flows in many ways, there are major differences in their statistical characteristics. However, due to the lack of adequate public datasets, the proponents of many existing cloud intrusion detection systems (IDS) have relied on the DARPA dataset, which was obtained by simulating a conventional network environment. In this thesis, we show empirically that the DARPA dataset, which fails to match important statistical characteristics of real-world cloud data-center traffic, is inadequate for evaluating cloud IDS. As an alternative, we analyze a new public dataset, collected through cooperation between our lab and a non-profit cloud service provider, which contains benign data and a wide variety of attack data. Furthermore, we present a new hypervisor-based cloud IDS using an instance-oriented feature model and supervised machine learning techniques. We investigate three different classifiers: Logistic Regression (LR), Random Forest (RF) and Support Vector Machine (SVM). Experimental evaluation on a diversified dataset yields a detection rate of 92.08% and a false-positive rate of 1.49% for the random forest, the best performing of the three classifiers. / Graduate
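As an illustration of the reported metrics, the sketch below trains a random forest on synthetic flow features and derives the detection rate and false-positive rate from a confusion matrix; the toy data merely stands in for the non-public cloud traffic dataset described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in for flow features; label 1 = attack via a toy rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Detection rate is the recall on attacks; FPR is fp / (fp + tn).
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"detection rate (recall): {tp / (tp + fn):.2%}")
print(f"false-positive rate:     {fp / (fp + tn):.2%}")
```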
357

Forecasting anomalies in time series data from online production environments

Sseguya, Raymond January 2020 (has links)
Anomaly detection on time series forecasts can be used by many industries, especially in forewarning systems that can predict anomalies before they happen. Infor (Sweden) AB is a software company that provides Enterprise Resource Planning cloud solutions. Infor is interested in predicting anomalies in its data, which is the motivation for this thesis. The general idea is first to forecast the time series and then to detect and classify anomalies on the forecasted values. In this thesis, the time series forecasting is done using two strategies, namely the recursive strategy and the direct strategy. The recursive strategy includes two methods: AutoRegressive Integrated Moving Average and Neural Network AutoRegression. The direct strategy is implemented with ForecastML-eXtreme Gradient Boosting. The three methods are then compared with respect to forecasting performance. The anomaly detection and classification is done by setting a decision rule based on a threshold. Since the true anomaly thresholds were not previously known, an arbitrary initial anomaly threshold is set using a combination of statistical methods for outlier detection followed by human judgement from the company commissioners. These statistical methods include Seasonal and Trend decomposition using Loess + InterQuartile Range, Twitter + InterQuartile Range and Twitter + GESD (Generalized Extreme Studentized Deviate). After defining what an anomaly threshold is in the usage context of Infor (Sweden) AB, a decision rule is set and used to classify anomalies in time series forecasts. The results from comparing the classifications of the forecasts from the three forecasting methods are inconclusive, and no recommendation is made concerning which model or algorithm Infor (Sweden) AB should use. However, the thesis concludes by recommending other methods that could be tried in future research.
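The thresholding step can be sketched briefly: an anomaly bound is derived from the interquartile range of historical forecast errors and then applied to new forecast/actual pairs. The 1.5x multiplier below is the conventional Tukey fence and merely stands in for the human-adjusted threshold described above; the data is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
residuals = rng.normal(0, 1.0, 500)            # historical forecast errors

# Tukey-style fences from the interquartile range of past errors.
q1, q3 = np.percentile(residuals, [25, 75])
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Decision rule applied to new (forecast - actual) errors.
new_errors = np.array([0.2, -4.0, 1.1, 5.3])
for e in new_errors:
    label = "normal" if lo <= e <= hi else "anomaly"
    print(f"error {e:+.1f} -> {label}")
```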
358

Anomaly Detection in Diagnostics Data with Natural Fluctuations / Anomalidetektering i diagnostikdata med naturliga variationer

Sundberg, Jesper January 2015 (has links)
In this thesis, the red-hot topic of anomaly detection, a subfield of machine learning, is studied. The company, Procera Networks, supports several broadband companies with IT solutions and would like to detect errors in these systems automatically. This thesis investigates and devises methods and algorithms for detecting interesting events in diagnostics data. Events of interest include short-term deviations (a deviating point), long-term deviations (a distinct trend) and other unexpected deviations. Three models are analyzed, namely Linear Predictive Coding, Sparse Linear Prediction and Wavelet Transformation. The final outcome is determined by the gap to certain thresholds, which are customized to fit each model as well as possible. / I den här rapporten studeras det glödheta området anomalidetektering, vilket tillhör ämnet Machine Learning. Företaget där arbetet utfördes heter Procera Networks och jobbar med IT-lösningar inom bredband till andra företag. Procera önskar att kunna upptäcka fel hos kunderna i dessa system automatiskt. I det här projektet genomförs och undersöks olika metoder för att hitta intressanta företeelser i datatrafiken. De mest intressanta företeelserna är framförallt snabba avvikelser (avvikande punkt) och förändringar över tid (trender) men också andra oväntade mönster. Tre modeller har analyserats, nämligen Linear Predictive Coding, Sparse Linear Prediction och Wavelet Transform. Det slutgiltiga resultatet från modellerna är grundat på en speciell tröskel som är skapad för att ge ett så bra resultat som möjligt för den undersökta modellen.
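The linear-prediction idea can be sketched as follows: coefficients are fitted by least squares on a clean window, and samples whose prediction error exceeds a threshold are flagged. The prediction order, threshold and signal below are illustrative, not Procera's.

```python
import numpy as np

# Synthetic signal: a sinusoid with noise and one injected short deviation.
rng = np.random.default_rng(2)
x = np.sin(np.linspace(0, 20 * np.pi, 1000)) + rng.normal(0, 0.05, 1000)
x[700] += 2.0

p = 8                                           # prediction order
# Row j of X holds (x[j], ..., x[j+p-1]), used to predict x[j+p].
X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
a, *_ = np.linalg.lstsq(X[:600], x[p:600 + p], rcond=None)

# Flag samples whose prediction error is far above the in-sample level.
errors = np.abs(X @ a - x[p:])
threshold = 6 * errors[:600].std()
print("anomalous indices:", np.nonzero(errors > threshold)[0] + p)  # near 700
```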
359

Online Anomaly Detection on the Edge / Sekventiell anomalidetektering i nätverkskanten

Jirwe, Marcus January 2021 (has links)
The society of today relies heavily on industry, and the automation of factory tasks is more prevalent than ever before. However, the machines taking on these tasks require maintenance to continue operating. This maintenance is typically given periodically; it can be expensive and sometimes requires expert knowledge. It would therefore be very beneficial if one could predict when a machine needs maintenance and only employ maintenance as necessary. One method to predict when maintenance is necessary is to collect sensor data from a machine and analyse it for anomalies. Anomalies are usually an indicator of unexpected behaviour and can therefore show when a machine needs maintenance. Due to concerns such as privacy and security, the data is often not allowed to leave the local system. Hence it is necessary to perform this kind of anomaly detection in an online manner and in an edge environment, which imposes limitations on hardware and computational ability. In this thesis we consider four machine learning anomaly detection methods that can learn and detect anomalies in this kind of environment. These methods are LoOP, iForestASD, KitNet and xStream. We first evaluate the four anomaly detectors on the Skoltech Anomaly Benchmark using its suggested metrics as well as Receiver Operating Characteristic curves. We also perform further evaluation on two data sets provided by the company Gebhardt. The experimental results are promising and indicate that the considered methods perform well at the task of anomaly detection. We finally propose some avenues for future work, such as implementing a dynamically changing anomaly threshold. / Dagens samhälle är väldigt beroende av industrin och automatiseringen av fabriksuppgifter är mer förekommande än någonsin. Dock kräver maskinerna som tar sig an dessa uppgifter underhåll för att forsätta arbeta. Detta underhåll ges typiskt periodvis och kan vara dyrt och samtidigt kräva expertkunskap. Därför skulle det vara väldigt fördelaktigt om det kunde förutsägas när en maskin behövde underhåll och endast göra detta när det är nödvändigt. En metod för att förutse när underhåll krävs är att samla in sensordata från en maskin och analysera det för att hitta anomalier. Anomalier fungerar ofta som en indikator av oväntat beteende, och kan därför visa att en maskin behöver underhåll. På grund av frågor som integritet och säkerhet är det ofta inte tillåtet att datan lämnar det lokala systemet. Därför är det nödvändigt att denna typ av anomalidetektering genomförs sekventiellt allt eftersom datan samlas in, och att detta sker på nätverkskanten. Miljön som detta sker i påtvingar begränsningar på både hårdvara och beräkningsförmåga. I denna avhandling så överväger vi fyra anomalidetektorer som med användning av maskininlärning lär sig och upptäcker anomalier i denna sorts miljö. Dessa metoder är LoOP, iForestASD, KitNet och xStream. Vi analyserar först de fyra anomalidetektorerna genom Skoltech Anomaly Benchmark där vi använder deras föreslagna mått samt ”Receiver Operating Characteristic”-kurvor. Vi genomför även vidare analys på två dataset som tillhandahållits av företaget Gebhardt. De experimentella resultaten är lovande och indikerar att de övervägda metoderna presterar väl när det kommer till detektering av anomalier. Slutligen föreslår vi några idéer som kan utforskas för framtida arbete, som att implementera en tröskel för anomalidetektering som anpassar sig dynamiskt.
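The dynamically changing threshold proposed as future work could, for instance, track an exponentially weighted mean and variance of the anomaly score; the sketch below is one such scheme, with invented parameters, suited to a memory-constrained edge device.

```python
def ewma_detector(scores, alpha=0.05, k=4.0):
    """Flag scores above an adaptive mean + k*std bound, updated online."""
    mean, var = scores[0], 1.0
    flags = []
    for s in scores:
        std = var ** 0.5
        flags.append(s > mean + k * std)
        if not flags[-1]:                   # adapt only on normal points
            diff = s - mean
            mean += alpha * diff
            var = (1 - alpha) * (var + alpha * diff * diff)
    return flags

scores = [1.0, 1.1, 0.9, 1.2, 6.0, 1.0, 1.1]
print(ewma_detector(scores))  # only the 6.0 is flagged
```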
360

Run-time Anomaly Detection with Process Mining: Methodology and Railway System Compliance Case-Study

Vitale, Francesco January 2021 (has links)
Detecting anomalies in computer-based systems, including Cyber-Physical Systems (CPS), has recently attracted considerable interest. Behavioral anomalies represent deviations from what is regarded as the nominal expected behavior of the system. Both Process science and Data science can yield satisfactory results in detecting behavioral anomalies. Within Process Mining, Conformance Checking addresses data retrieval and the connection of data to behavioral models with the aim of detecting behavioral anomalies. Nowadays, computer-based systems are increasingly complex and require appropriate validation, monitoring, and maintenance techniques. Among complex computer-based systems, the European Rail Traffic Management System/European Train Control System (ERTMS/ETCS) is the specification of a standard railway system integrating heterogeneous hardware and software components, with the aim of providing international interoperability, with trains seamlessly interacting within standardized infrastructures. Compliance with the standard as well as with expected behavior is essential, considering the criticality of the system in terms of performance, availability, and safety. To that aim, a Process Mining Conformance Checking process can be employed to validate the requirements through run-time model-checking techniques against design-time process models. A Process Mining Conformance Checking methodology has been developed and applied with the goal of validating the behavior exposed by an ERTMS/ETCS system during the execution of specific scenarios. The methodology has been tested and demonstrated correct classification of valid behaviors exposed by the ERTMS/ETCS system prototype. Results also showed that the Fitness metric developed in the methodology allows the detection of latent errors in the system before they can generate any failures.
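A toy version of a fitness value conveys the idea: the fraction of consecutive event pairs in a logged trace that are allowed by a design-time successor model, so a partially deviating trace scores between 0 and 1 rather than being rejected outright. The activity names below are invented placeholders; the thesis uses full Process Mining conformance checking rather than this simple successor relation.

```python
# Design-time model as an allowed-successor relation between activities.
MODEL = {
    "start_of_mission": {"level_transition"},
    "level_transition": {"movement_authority"},
    "movement_authority": {"brake", "movement_authority"},
    "brake": {"end_of_mission"},
}

def fitness(trace):
    """Fraction of consecutive event pairs permitted by the model."""
    ok = sum(1 for a, b in zip(trace, trace[1:]) if b in MODEL.get(a, set()))
    return ok / (len(trace) - 1)

good = ["start_of_mission", "level_transition", "movement_authority",
        "brake", "end_of_mission"]
bad = ["start_of_mission", "brake", "movement_authority",
       "brake", "end_of_mission"]
print(fitness(good), fitness(bad))   # 1.0 vs 0.5
```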
