  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Bayesovská optimalizace / Bayesian optimization

Kostovčík, Peter January 2017 (has links)
Optimization is an important part of mathematics and is mostly used in practical applications. Many methods exist for specific types of objective functions, but choosing a method when the objective is unknown and/or expensive to evaluate can be difficult. One answer is Bayesian optimization, which, instead of optimizing the objective directly, builds a probabilistic model and uses it to construct an easily optimizable auxiliary function. It is an iterative method that uses information from previous iterations to choose the next point at which the objective is evaluated, aiming to find the optimum in fewer iterations. This thesis introduces Bayesian optimization, summarizes its different approaches in lower and higher dimensions, and shows when it is suitable to use. An important part of the thesis is my own optimization algorithm, which is applied to different practical problems, e.g. parameter optimization in a machine learning algorithm.
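The loop described above (fit a probabilistic model, optimize a cheap auxiliary acquisition function, then evaluate the objective at its maximizer) can be sketched in a few dozen lines. The sketch below is not the thesis's algorithm: it is a generic, minimal Bayesian optimization with a Gaussian-process surrogate and the expected-improvement acquisition function, and every choice in it (the Forrester test objective, the RBF length scale, the jitter, the candidate grid) is an illustrative assumption.

```python
import numpy as np

def f(x):
    """Forrester test function; stands in for an expensive black box."""
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def rbf(a, b, length=0.15):
    """Squared-exponential covariance between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-4):
    """Posterior mean and standard deviation of a zero-mean GP at x_new."""
    k = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))   # jitter for stability
    k_star = rbf(x_obs, x_new)
    mu = k_star.T @ np.linalg.solve(k, y_obs)
    var = 1.0 - np.sum(k_star * np.linalg.solve(k, k_star), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, y_best):
    """EI for minimization: large where the model predicts low values or is unsure."""
    from math import erf, sqrt
    z = (y_best - mu) / sigma
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    cdf = 0.5 * (1 + np.array([erf(v / sqrt(2)) for v in z]))
    return (y_best - mu) * cdf + sigma * pdf

grid = np.linspace(0, 1, 201)            # candidates: the "easy" inner problem
sampled = np.zeros(len(grid), dtype=bool)
for i in (0, 66, 132, 200):              # four initial design points
    sampled[i] = True
x_obs, y_obs = grid[sampled], f(grid[sampled])

for _ in range(20):                      # each iteration = one expensive evaluation
    m, s = y_obs.mean(), y_obs.std()     # standardize targets for the unit-variance GP
    mu, sigma = gp_posterior(x_obs, (y_obs - m) / s, grid)
    ei = expected_improvement(mu, sigma, (y_obs.min() - m) / s)
    ei[sampled] = -np.inf                # never re-evaluate a known point
    j = int(np.argmax(ei))
    sampled[j] = True
    x_obs = np.append(x_obs, grid[j])
    y_obs = np.append(y_obs, f(grid[j]))

best_x, best_y = x_obs[np.argmin(y_obs)], y_obs.min()
```

With only 24 evaluations of the objective, the sampled points concentrate around the global minimum near x ≈ 0.757, which is the point of the method: far fewer evaluations than a grid search of comparable accuracy.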
42

Improving Situational Awareness in Aviation: Robust Vision-Based Detection of Hazardous Objects

Levin, Alexandra, Vidimlic, Najda January 2020 (has links)
Enhanced vision and object detection could be useful in the aviation domain in situations of bad weather or cluttered environments. In particular, they could improve situational awareness and aid the pilot in interpreting the environment and detecting hazardous objects. The fundamental concept of object detection is to determine which objects are present in an image with the aid of a prediction model or other feature-extraction techniques. If the object detector is to be used in the avionics domain, it is vital to construct a comprehensive data set that describes the operational environment and is robust to weather and lighting conditions, and to evaluate that data set's accuracy and robustness, since erroneous detection (the algorithm failing to detect a potentially hazardous object, or falsely detecting one) is a major safety issue. Bayesian uncertainty estimates were evaluated to examine whether they can be used to detect misclassifications, enabling a Bayesian neural network combined with the object detector to identify erroneous detections. The object detector Faster R-CNN with a ResNet-50-FPN backbone was used within the development framework Detectron2, and the accuracy of the object detection algorithm was evaluated with MS-COCO metrics. The setup achieved an AP@[IoU=.5:.95] score of 50.327 %, which decreased by 18.1 % when the model was exposed to adverse weather and lighting conditions. Augmenting the images of the training set with artificial artefacts and variations of luminance, motion, and weather increased the AP@[IoU=.5:.95] score by 15.6 %; this augmentation provided the robustness necessary to maintain accuracy under varying environmental conditions, resulting in a decrease of only 2.6 % from the initial accuracy.
To fully conclude that the augmentations provide the necessary robustness to variations in environmental conditions, the model needs to be evaluated on actual images of the operational environment under different weather and lighting phenomena. Bayesian uncertainty estimates show great promise in providing additional information for correctly interpreting objects in the operational environment, but further research is needed to conclude whether they can provide the information necessary to detect erroneous predictions.
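The AP@[IoU=.5:.95] metric quoted above averages precision over intersection-over-union (IoU) thresholds from 0.5 to 0.95 in steps of 0.05. As a small illustration of the underlying quantity, here is a generic COCO-style IoU for axis-aligned boxes (not code from the thesis):

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as a true positive at threshold t only if IoU >= t,
# so a box that counts at 0.5 may not count at 0.95; AP@[.5:.95]
# averages the resulting precision over this sweep of thresholds.
thresholds = [0.5 + 0.05 * k for k in range(10)]
```

A loosely localized box therefore drags the averaged score down even when the object is "found", which is why the metric is sensitive to the degraded imagery discussed above.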
43

Implementation of Anomaly Detection on a Time-series Temperature Data set

Novacic, Jelena, Tokhi, Kablai January 2019 (has links)
Today's society has become more aware of its surroundings, and the focus has shifted towards green technology. The need to reduce environmental impact is growing rapidly in all areas, and energy consumption is one of them. A simple way to automatically control the energy consumption of smart homes is through software. With today's IoT technology and machine learning models, the movement towards software-based eco-living is growing. To control the energy consumption of a household, sudden abnormal behaviour must be detected and adjusted to avoid unnecessary consumption. This thesis uses a time-series data set of temperature data to implement anomaly detection.
Four models were implemented and tested: a linear regression model, the Pandas EWM function, an exponentially weighted moving average (EWMA) model, and a probabilistic exponentially weighted moving average (PEWMA) model. Each model was tested on data sets from nine different apartments covering the same time period. Each model was then evaluated in terms of precision, recall, and F-measure, with an additional R^2-score evaluation for linear regression. The results show that, in terms of accuracy, PEWMA outperformed the other models; the EWMA model was slightly better than the linear regression model, followed by the Pandas EWM model.
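Of the four model families, EWMA is the simplest to sketch. The snippet below is a generic illustration, not the thesis's code: a reading is flagged when its deviation from an exponentially weighted mean exceeds a multiple of an exponentially weighted standard deviation, and the smoothing factor `alpha` and the `threshold` are arbitrary demo values.

```python
import numpy as np

def ewma_anomalies(series, alpha=0.3, threshold=4.0):
    """Flag points deviating from the EWMA forecast by > threshold running stds."""
    mean, var = series[0], 0.0
    flags = []
    for x in series[1:]:
        resid = x - mean
        std = var ** 0.5
        flags.append(bool(std > 0 and abs(resid) > threshold * std))
        # update exponentially weighted mean and variance estimates
        var = (1 - alpha) * (var + alpha * resid ** 2)
        mean = (1 - alpha) * mean + alpha * x
    return flags

# Synthetic "apartment temperature" stream: stable readings, then a spike.
rng = np.random.default_rng(0)
series = np.append(rng.normal(20.0, 0.1, 50), 25.0)
flags = ewma_anomalies(series)
```

PEWMA extends this idea by weighting the update with the probability of the observation under the current model, so the estimate adapts less aggressively to outliers.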
44

Sécurité des applications Web : Analyse, modélisation et détection des attaques par apprentissage automatique / Web application security : analysis, modeling and attacks detection using machine learning

Makiou, Abdelhamid 16 December 2016 (has links)
Web applications are the backbone of modern information systems. Exposing these applications on the Internet continually generates new forms of threats that can jeopardize the security of the entire information system. To counter these threats, robust and feature-rich solutions exist, based on well-proven attack-detection models, each with its own advantages and limitations. Our work consists in integrating the functionalities of several models into a single solution in order to increase detection capacity. To achieve this objective, a first contribution defines a classification of threats adapted to the context of Web applications; this classification also helps solve certain problems of scheduling analysis operations during the attack-detection phase. A second contribution proposes a Web application firewall architecture based on two analysis models: a behavioural analysis module and a signature-inspection module. The main challenge raised by this architecture is adapting the behavioural analysis model to the context of Web applications. We address this challenge by modelling malicious behaviour, so that each attack class has its own model of abnormal behaviour. These models are built with supervised machine learning classifiers, which learn the deviant behaviour of each attack class from training data sets. This exposed a second obstacle, the availability of training data, which a final contribution lifts: we defined and designed a platform for the automatic generation of training data sets. The data generated by this platform are normalized and categorized per attack class, and the generation model we developed is able to learn "from its own errors" continuously in order to produce training data sets of ever higher quality.
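The two-stage architecture (a fast signature pass backed by a behavioural model) can be caricatured in a few lines. Everything below is hypothetical: the signatures, the toy features, and the fixed threshold are illustrative stand-ins, not the thesis's rules or classifiers, whose behavioural stage is a trained supervised model rather than a hand-set cutoff.

```python
import re

# Stage 1: signature inspection. Two deliberately tiny example patterns.
SIGNATURES = {
    "sqli": re.compile(r"(?i)\bunion\b.+\bselect\b|'\s*or\s*'1'\s*=\s*'1"),
    "xss": re.compile(r"(?i)<script\b|\bon\w+\s*=\s*['\"]"),
}

def extract_features(query):
    """Toy behavioural features: length and special-character ratio."""
    specials = sum(c in "<>'\";()=%" for c in query)
    return {"length": len(query), "special_ratio": specials / max(len(query), 1)}

def classify(query, special_threshold=0.2):
    for attack_class, pattern in SIGNATURES.items():
        if pattern.search(query):
            return attack_class                 # stage 1: known-signature hit
    if extract_features(query)["special_ratio"] > special_threshold:
        return "anomalous"                      # stage 2: behavioural flag
    return "benign"
```

The division of labour is the point: signatures catch known attack classes cheaply, while the behavioural stage handles inputs that match no signature, which is where the per-class learned models described above come in.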
45

Building High-performing Web Rendering of Large Data Sets

Burwall, William January 2023 (has links)
Interactive visualization is an essential tool for data analysis. Cloud-based data analysis software must handle growing data sets without relying on powerful end-user hardware. This thesis explores and tests various methods to speed up web rendering, primarily of time-series plots of large data sets, for the biotechnology research company Sartorius. To increase rendering speed, I focused on two main approaches: downsampling and hardware acceleration. To find which sampling algorithms suit Sartorius's needs, I implemented multiple alternatives and compared them quantitatively and qualitatively. The results show that downsampling raises or removes limits on data set size, and that test users favored algorithms that maintain local outliers. Combined with hardware acceleration, which substantially increased the number of simultaneously rendered points for more detailed representations, these methods pave the way for efficient visualization of large data sets on the web.
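One family of downsamplers that maintains local outliers, consistent with the preference the test users expressed, is bucketed min-max downsampling: each bucket of consecutive samples contributes its minimum and maximum point, so a narrow spike survives the reduction. This generic sketch is not Sartorius's implementation, and the bucket count is an arbitrary demo value.

```python
def minmax_downsample(xs, ys, n_buckets):
    """Keep the min and max point of each bucket so local outliers survive."""
    out = []
    size = max(1, len(xs) // n_buckets)
    for start in range(0, len(xs), size):
        bucket = range(start, min(start + size, len(xs)))
        lo = min(bucket, key=lambda i: ys[i])      # index of bucket minimum
        hi = max(bucket, key=lambda i: ys[i])      # index of bucket maximum
        for i in sorted({lo, hi}):                 # preserve time order
            out.append((xs[i], ys[i]))
    return out

# A flat signal with one narrow spike: the spike must survive downsampling.
xs = list(range(1000))
ys = [0.0] * 1000
ys[500] = 9.0
points = minmax_downsample(xs, ys, 50)
```

A mean-per-bucket downsampler would smear the spike at index 500 into a small bump; min-max keeps it at full height, at the cost of up to two points per bucket.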
46

A Computational Fluid Dynamics Feature Extraction Method Using Subjective Logic

Mortensen, Clifton H. 08 July 2010 (has links) (PDF)
Computational fluid dynamics simulations are advancing to correctly simulate highly complex fluid flow problems that can require weeks of computation on expensive high-performance clusters. These simulations can generate terabytes of data and pose a severe challenge to a researcher analyzing the data. Presented in this document is a general method to extract computational fluid dynamics flow features, both concurrently with a simulation and as a post-processing step, to drastically reduce researcher post-processing time. This general method uses software agents governed by subjective logic to make decisions about extracted features in converging and converged data sets. The software agents are designed to work inside the Concurrent Agent-enabled Feature Extraction concept and to operate efficiently on massively parallel high-performance computing clusters. Also presented is a specific application of the general feature extraction method to vortex core lines. Each agent's belief tuple is quantified using a pre-defined set of information; the information and the functions used to set each component of an agent's belief tuple are given, along with an explanation of the methods for setting the components. A simulation of a blunt fin shows convergence of the horseshoe vortex core to its final spatial location at 60% of the converged solution. Agents correctly select between two vortex core extraction algorithms and correctly identify the expected probabilities of vortex cores as the solution converges. A simulation of a delta wing shows coherently extracted primary vortex cores as early as 16% of the converged solution. Agents select primary vortex cores extracted by the Sujudi-Haimes algorithm as the most probable primary cores.
These simulations show concurrent feature extraction is possible and that intelligent agents following the general feature extraction method are able to make appropriate decisions about converging and converged features based on pre-defined information.
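In subjective logic, an agent's opinion about a proposition (here, "this extracted line is a real vortex core") can be written as a belief/disbelief/uncertainty tuple that sums to one, and opinions from independent sources can be merged with Jøsang's cumulative fusion operator. The operator below is the standard textbook form; its use here is a self-contained illustration, not the thesis's agent code.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float   # belief + disbelief + uncertainty == 1

def cumulative_fuse(a, b):
    """Jøsang's cumulative fusion of two subjective-logic opinions."""
    k = a.uncertainty + b.uncertainty - a.uncertainty * b.uncertainty
    return Opinion(
        (a.belief * b.uncertainty + b.belief * a.uncertainty) / k,
        (a.disbelief * b.uncertainty + b.disbelief * a.uncertainty) / k,
        (a.uncertainty * b.uncertainty) / k,
    )

# Two agents weigh in on the same candidate vortex core (demo numbers).
a = Opinion(0.7, 0.1, 0.2)
b = Opinion(0.5, 0.2, 0.3)
fused = cumulative_fuse(a, b)
```

Note that fusion always reduces uncertainty: agreeing evidence from two sources yields a more confident combined opinion than either source alone, which is what lets agents accumulate confidence in a feature as the solution converges.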
47

Data-driven Infrastructure Inspection

Bianchi, Eric Loran 18 January 2022 (has links)
Bridge inspection and infrastructure inspection are critical steps in the lifecycle of the built environment. Emerging technologies and data are driving factors that are disrupting the traditional processes for conducting these inspections. Because inspections are mainly conducted visually by human inspectors, this work focuses on improving the visual inspection process with data-driven approaches. Data-driven approaches, however, require significant data, which was sparse in the existing literature. Therefore, this research first examined the present state of the data in the research domain. We reviewed hundreds of image-based visual inspection papers that used machine learning to augment the inspection process; from this review we compiled a comprehensive catalog of over forty available datasets in the literature and identified promising, emerging techniques and trends in the field. Based on our findings, we contributed six significant datasets to target gaps in the field. The six datasets comprised structural material segmentation, corrosion condition state segmentation, crack detection, structural detail detection, and bearing condition state classification. The contributed datasets used novel annotation guidelines and benefitted from a novel semi-automated annotation process for both object detection and pixel-level detection models. Using the data obtained from our collected sources, task-appropriate deep learning models were trained. From these datasets and models, we developed a change detection algorithm to monitor damage evolution between two inspection videos, and trained a GAN-Inversion model which generated hyper-realistic synthetic bridge inspection image data and could forecast a future deterioration state of an existing bridge element.
While the application of machine learning techniques in civil engineering is not yet widespread, this research provides impactful contributions that demonstrate the advantages data-driven sciences can bring to inspecting structures, cataloging deterioration, and forecasting potential outcomes more economically and efficiently.
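The change-detection idea, comparing two aligned inspection images and keeping the pixels that differ, can be illustrated with a deliberately naive per-pixel threshold. The thesis's algorithm works on inspection videos and is far more involved; the threshold and toy frames below are arbitrary demo values.

```python
import numpy as np

def changed_regions(frame_a, frame_b, threshold=0.2):
    """Binary change mask between two aligned grayscale frames in [0, 1]."""
    return np.abs(frame_a - frame_b) > threshold

# Baseline inspection frame, then a later frame with simulated new damage.
baseline = np.zeros((8, 8))
followup = baseline.copy()
followup[2:4, 2:4] = 1.0        # a 2x2 patch of "new" damage
mask = changed_regions(baseline, followup)
```

Real inspection imagery first needs registration (aligning the two frames) and robustness to lighting changes; the per-pixel comparison is only the final, trivial step of the pipeline.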
48

Training a Neural Network using Synthetically Generated Data / Att träna ett neuronnät med syntetisktgenererad data

Diffner, Fredrik, Manjikian, Hovig January 2020 (has links)
A major challenge in training machine learning models is gathering and labeling a sufficiently large training data set. A common solution is to use a synthetically generated data set to expand or replace a real one. This paper examines the performance of a machine learning model trained on a synthetic data set versus the same model trained on real data. The approach was applied to the problem of recognizing characters in images of natural scenes, using a model based on convolutional neural networks. A synthetic data set of 1,240,000 images and two real data sets, Char74K and ICDAR 2003, were used. The model trained on the synthetic data set achieved an accuracy about 50% better than that of the same model trained on the real Char74K data set.
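A toy version of synthetic data generation: hard-coded 3x3 glyph "templates" are randomly placed on a canvas and perturbed with Gaussian noise to mass-produce labelled examples. Real pipelines, presumably including this thesis's, render actual font glyphs with varied fonts, backgrounds, and distortions; the templates, canvas size, and noise level here are purely illustrative.

```python
import numpy as np

# Three made-up letter-like templates standing in for rendered glyphs.
TEMPLATES = {
    "I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
    "L": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]], float),
    "T": np.array([[1, 1, 1], [0, 1, 0], [0, 1, 0]], float),
}

def synthesize(n_per_class, canvas=7, noise=0.1, rng=None):
    """Generate labelled images: jittered placement plus sensor-style noise."""
    rng = rng or np.random.default_rng(0)
    images, labels = [], []
    for label, glyph in TEMPLATES.items():
        for _ in range(n_per_class):
            img = np.zeros((canvas, canvas))
            r, c = rng.integers(0, canvas - 3, size=2)   # imperfect cropping
            img[r:r + 3, c:c + 3] = glyph
            img += rng.normal(0, noise, img.shape)       # additive noise
            images.append(img)
            labels.append(label)
    return np.stack(images), labels

X, y = synthesize(50)
```

The appeal is that labels come for free: every generated image is born with its ground truth, so the data set size is limited only by compute, not by annotation effort.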
49

Comprehensiveness of the RUG-III Grouping Methodology in Addressing the Needs of People with Dementia in Long-term Care

Cadieux, Marie-Andrée 31 July 2012 (has links)
Funding of services to residents in publicly funded long-term care (LTC) facilities has historically rested upon a list of physical needs. However, more than 60% of residents in nursing homes have dementia, a condition in which physical needs are only a part of the overall clinical picture. Since past funding formulas focused primarily on the physical characteristics of residents, the Ontario government has adopted the RUG (Resource Utilization Groups)-III (34 Group) for use in LTC facilities, following the adoption of the Minimum Data Set (MDS) 2.0 assessment instrument. Despite its validation in many countries, some still question whether the newer formula adequately reflects the care needs of residents with dementia. The purpose of this study was to determine the comprehensiveness of the RUG-III (34 Group) in addressing the needs of residents with dementia living in LTC. First, a critical systematic review of the literature was conducted to determine the needs of residents with dementia; numerous electronic databases were searched for articles published between January 2000 and September 2010, and later cross-referenced. Second, the needs identified from the literature were matched to the items of the RUG-III, which are selected variables of the MDS 2.0. Third, the priority of the items in the RUG-III was analysed in accordance with the importance of the identified needs. The documented needs were taken from 68 studies and classified into 19 main categories. The needs most supported by the literature were the management of behavioural problems, social needs, the need for daily individualized activities/care, and emotional needs/personhood. Among the needs identified, activities of daily living (ADLs), cognitive needs, and general overall physical health met the most RUG-III items; these needs were found to be well represented within the system.
Other important needs, such as social needs, are matched to MDS variables but are not thoroughly considered in the grouping methodology. The fact that these needs are not well addressed in the RUG-III raises concerns. Future research is needed to validate the significance of these needs, and consideration should be given to the adequacy of the funding system and the allocation of funding.
50

Поређење скупова података помоћу графова / Poređenje skupova podataka pomoću grafova / Comparing Data Sets Using Graphs

Ivančević Vladimir 02 March 2017 (has links)
In order to support data set comparison, a graph-based approach to comparison was devised. In this approach, two types of graph-based representations were introduced: value representations, which represent a data set, and difference representations, which represent differences between two value representations.
The results of evaluations of the approach on synthetic and real data sets indicate that, by visually exploring difference representations and applying auxiliary procedures, it is possible to discover useful patterns which describe differences between two value representations and, consequently, between the data sets corresponding to those value representations.
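One way to picture the two representation types, purely as a hypothetical sketch (the thesis's graph construction is richer than this): a "value representation" as a weighted graph whose edges record how often pairs of values co-occur in a row, and a "difference representation" as the edge-weight deltas between two such graphs. The function names and the column-pair encoding are invented for illustration.

```python
from collections import Counter

def value_representation(rows, col_a, col_b):
    """Graph as a weighted edge multiset: (value in col_a) -- (value in col_b)."""
    return Counter((row[col_a], row[col_b]) for row in rows)

def difference_representation(rep_a, rep_b):
    """Edge-weight deltas; the non-zero entries are the patterns to explore."""
    return {e: rep_b[e] - rep_a[e]
            for e in set(rep_a) | set(rep_b)
            if rep_b[e] != rep_a[e]}

# Two small "data sets" differing in one row.
rows_a = [{"city": "NS", "lang": "sr"},
          {"city": "NS", "lang": "sr"},
          {"city": "BG", "lang": "sr"}]
rows_b = [{"city": "NS", "lang": "sr"},
          {"city": "BG", "lang": "sr"},
          {"city": "BG", "lang": "en"}]
rep_a = value_representation(rows_a, "city", "lang")
rep_b = value_representation(rows_b, "city", "lang")
diff = difference_representation(rep_a, rep_b)
```

Visualizing `diff` as a graph with edges coloured by the sign of the delta is the kind of "visual exploration of difference representations" the abstract describes: differences between the underlying data sets appear as localized patterns rather than row-by-row diffs.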
