1

Normalized Convolution Network and Dataset Generation for Refining Stereo Disparity Maps

Cranston, Daniel, Skarfelt, Filip January 2019
Finding disparity maps between stereo images is a well-studied topic within computer vision. While both classical and machine learning approaches exist in the literature, they frequently struggle to correctly solve the disparity in regions with low texture, sharp edges or occlusions. Finding approximate solutions to these problem areas is frequently referred to as disparity refinement, and is usually carried out separately after an initial disparity map has been generated. In the recent literature, the use of Normalized Convolution in Convolutional Neural Networks has shown remarkable results when applied to the task of stereo depth completion. This thesis investigates how well this approach performs in the case of disparity refinement. Specifically, we investigate how well such a method can improve the initial disparity maps generated by the stereo matching algorithm developed at Saab Dynamics using a rectified stereo rig. To this end, a dataset of ground truth disparity maps was created using equipment at Saab, namely a structured-light setup and the stereo rig cameras. Because the end goal is a dataset fit for training networks, we investigate an approach that allows for efficient creation of significant quantities of dense ground truth disparities. The method generates several disparity maps for every scene by measuring it from several stereo pairs, and a densified disparity map is then produced by merging the disparity maps from the neighbouring stereo pairs. This resulted in a dataset of 26 scenes and 104 dense and accurate disparity maps. Our evaluation results show that the chosen Normalized Convolution Network based method can be adapted for disparity map refinement, but is dependent on the quality of the input disparity map.
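The core idea behind Normalized Convolution can be sketched as a confidence-weighted filter: pixels with zero confidence (missing or unreliable disparities) are reconstructed from valid neighbours. The sketch below is the minimal classical (non-learned) version with a fixed uniform kernel and SciPy, not the thesis's network, which learns both the filters and the confidence propagation:

```python
import numpy as np
from scipy.ndimage import convolve

def normalized_convolution(disparity, confidence, kernel):
    """Confidence-weighted filtering: pixels with zero confidence
    (missing disparities) are reconstructed from valid neighbours."""
    num = convolve(disparity * confidence, kernel, mode="nearest")
    den = convolve(confidence, kernel, mode="nearest")
    return np.where(den > 0, num / np.maximum(den, 1e-8), 0.0)

# Toy disparity map with one invalid pixel (disparity 0, confidence 0).
d = np.array([[10., 10., 10.],
              [10.,  0., 10.],
              [10., 10., 10.]])
c = (d > 0).astype(float)      # 0 marks the missing measurement
k = np.ones((3, 3)) / 9.0      # uniform applicability function
refined = normalized_convolution(d, c, k)
# The hole is filled in from its eight valid neighbours (value 10).
```

Dividing the filtered signal by the filtered confidence is what distinguishes this from plain smoothing: invalid pixels contribute nothing to either sum, so valid values are not diluted by the holes they fill.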
2

A Methodology of Dataset Generation for Secondary Use of Health Care Big Data

Iwao, Tomohide 23 March 2020
Kyoto University / 0048 / New-system doctoral programme / Doctor of Informatics / Degree No. Kō 22575 / Jōhaku No. 712 / Shinsei||Jō||122 (University Library) / Department of Social Informatics, Graduate School of Informatics, Kyoto University / (Chief examiner) Prof. Tomohiro Kuroda, Prof. Kazuyuki Moriya, Prof. Masatoshi Yoshikawa / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
3

Natural Language Understanding for Multi-Level Distributed Intelligent Virtual Sensors

Papangelis, Angelos, Kyriakou, Georgios January 2021
In our thesis we explore Automatic Question/Answer Generation (AQAG) and the application of Machine Learning (ML) to natural language queries. Initially we create a collection of question/answer tuples conceptually based on processing data received from (virtual) sensors placed in a smart city. Subsequently we train a Gated Recurrent Unit (GRU) model on the generated dataset and evaluate the accuracy we can achieve in answering those questions. This will in turn help address the problem of automatic sensor composition based on natural language queries. To this end, the contribution of this thesis is two-fold: on the one hand we provide an automatic procedure for dataset construction, based on natural language question templates, and on the other hand we apply an ML approach that establishes the correlation between the natural language queries and their virtual sensor representation, via their functional representation. We consider virtual sensors to be entities as described by Mihailescu et al., where they provide an interface constructed with certain properties in mind. We use those sensors for our application domain of a smart city environment, thus constructing our dataset around questions relevant to it.
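Template-based question/answer generation of this kind can be sketched as a product expansion over sensor attributes. The sensor types, districts, and the single template below are illustrative assumptions, not the thesis's actual templates or virtual-sensor schema:

```python
import itertools
import random

# Hypothetical sensor types, districts, and question template -- the
# thesis's actual templates and virtual-sensor schema are not given here.
SENSORS = {"temperature": "C", "humidity": "%", "noise": "dB"}
DISTRICTS = ["centre", "harbour", "airport"]

def generate_qa_pairs(readings):
    """Expand one question template over every sensor/district pair,
    answering each question from the simulated sensor readings."""
    pairs = []
    for sensor, district in itertools.product(SENSORS, DISTRICTS):
        question = f"What is the average {sensor} in the {district} district?"
        answer = f"{readings[(sensor, district)]} {SENSORS[sensor]}"
        pairs.append((question, answer))
    return pairs

# Simulated readings stand in for the virtual sensors' outputs.
readings = {(s, d): round(random.uniform(0.0, 40.0), 1)
            for s in SENSORS for d in DISTRICTS}
dataset = generate_qa_pairs(readings)  # 3 sensors x 3 districts -> 9 tuples
```

Because every question is generated from a known sensor/district pair, each natural language query in the dataset comes pre-labelled with its virtual-sensor grounding, which is exactly what a supervised model like a GRU needs for training.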
4

Real-World Considerations for RFML Applications

Muller, Braeden Phillip Swanson 20 December 2023
Radio Frequency Machine Learning (RFML) is the application of ML techniques to solve problems in the RF domain as an alternative to traditional digital signal processing (DSP) techniques. Notable among these are the tasks of specific emitter identification (SEI), determining the source identity of a received RF signal, and automated modulation classification (AMC), determining the modulation scheme of a received RF transmission. Both tasks have a number of algorithms that are effective on simulated data, but struggle to generalize to data collected in the real world, partially due to the lack of available datasets upon which to train models and understand their limitations. This thesis covers the practical considerations for systems that can create high-quality datasets for RFML tasks, how variances from real-world effects in these datasets affect RFML algorithm performance, and how well models developed from these datasets are able to generalize and adapt across different receiver hardware platforms. Moreover, this thesis presents a proof-of-concept system for large-scale and efficient data generation, proven through the design and implementation of a custom platform capable of coordinating transmissions from nearly a hundred Software-Defined Radios (SDRs). This platform was used to rapidly perform experiments in both RFML performance sensitivity analysis and successful transfer between SDRs of trained models for both SEI and AMC algorithms. / Master of Science / Radio Frequency Machine Learning (RFML) is the application of machine learning techniques to solve problems having to do with radio signals as an alternative to traditional signal processing techniques. Notable among these are the tasks of specific emitter identification (SEI), determining the source identity of a received signal, and automated modulation classification (AMC), determining the data encoding format of a received RF transmission.
Both tasks have practical limitations related to the real-world collection of RF training data. This thesis presents a proof-of-concept for large-scale, efficient data generation and management, as proven through the design and construction of a custom platform capable of coordinating transmissions from nearly a hundred radios. This platform was used to rapidly perform experiments in both RFML performance sensitivity analysis and successful cross-radio transfer of trained behaviors.
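The kind of labelled IQ data an AMC model consumes can be illustrated with a toy generator. The modulation set, SNR model, and array shapes here are illustrative assumptions, not the thesis's SDR platform or its datasets:

```python
import numpy as np

rng = np.random.default_rng(0)

def gen_example(modulation, n_symbols=128, snr_db=20.0):
    """Generate one labelled complex-baseband (IQ) example for a toy
    AMC dataset. Only BPSK/QPSK at a fixed SNR are modelled; a real
    corpus would add more schemes, channels and hardware impairments."""
    if modulation == "bpsk":
        symbols = rng.choice(np.array([1 + 0j, -1 + 0j]), n_symbols)
    elif modulation == "qpsk":
        constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
        symbols = rng.choice(constellation, n_symbols)
    else:
        raise ValueError(f"unknown modulation: {modulation}")
    noise_power = 10.0 ** (-snr_db / 10.0)            # unit signal power
    noise = rng.normal(0.0, np.sqrt(noise_power / 2.0), (n_symbols, 2))
    iq = symbols + noise[:, 0] + 1j * noise[:, 1]     # add complex AWGN
    return iq, modulation

# 10 examples per class, each labelled with its ground-truth scheme.
dataset = [gen_example(m) for m in ("bpsk", "qpsk") for _ in range(10)]
```

Simulated examples like these are exactly the data on which AMC algorithms perform well; the gap the thesis addresses is that real captures add transmitter-specific impairments and channel effects that such clean generators omit.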
5

Generating Datasets Through the Introduction of an Attack Agent in a SCADA Testbed : A methodology of creating datasets for intrusion detection research in a SCADA system using IEC-60870-5-104

Fundin, August January 2021
No description available.
6

Automation and Validation of Big Data Generation via Simulation Pipeline for Flexible Assemblies

Adrian, Alexander F. 26 October 2022
No description available.
7

Dataset Generation in a Simulated Environment Using Real Flight Data for Reliable Runway Detection Capabilities

Tagebrand, Emil, Gustafsson Ek, Emil January 2021
Implementing object detection methods for runway detection during landing approaches is limited in the safety-critical aircraft domain. This limitation is due to the difficulty of verifying the design and of understanding how the object detection behaves during operation. During operation, object detection needs to consider the aircraft's position, environmental factors, different runways and aircraft attitudes. Training such an object detection model requires a comprehensive dataset that covers the features mentioned above. Each feature's impact on the detection capabilities needs to be analysed to ensure the correct distribution of images in the dataset. Gathering real images for these scenarios would be costly and time-consuming, given the aviation industry's safety standards. Synthetic data can limit the cost and time required to create a dataset in which all features occur. By generating datasets in a simulated environment, these features could be applied to the dataset directly. The features could also be implemented separately in different datasets and compared against each other to analyse their impact on the object detection capabilities. By utilising this method for the features mentioned above, the following results could be determined. For object detection to consider most landing cases and different runways, the dataset needs to replicate real flight data and generate additional extreme landing cases. The dataset also needs to consider landings at different altitudes, which can differ between airports. Environmental conditions such as clouds and time of day reduce detection capabilities far from the runway, while attitude and runway appearance reduce them at close range. Runway appearance also affected detection at long ranges, but only for darker runways.
