51 |
Computer vision-based systems for environmental monitoring applicationsPorto Marques, Tunai 12 April 2022 (has links)
Environmental monitoring refers to a host of activities involving the sampling or sensing of diverse properties of an environment in an effort to monitor, study and, ultimately, better understand it. While potentially rich and scientifically valuable, these data often create challenging interpretation tasks because of their volume and complexity. This thesis explores the efficiency of Computer Vision-based frameworks in processing large amounts of visual environmental monitoring data.
While considering every potential type of visual environmental monitoring measurement is not possible, this thesis elects three data streams as representatives of diverse monitoring layouts: visual out-of-water stream, visual underwater stream and active acoustic underwater stream. Detailed structure, objectives, challenges, solutions and insights from each of them are presented and used to assess the feasibility of Computer Vision within the environmental monitoring context. This thesis starts by providing an in-depth analysis of the definition and goals of environmental monitoring, as well as the Computer Vision systems typically used in conjunction with it.
The document continues by studying the visual underwater stream via the design of a novel system employing a contrast-guided approach to the enhancement of low-light underwater images. This enhancement system outperforms multiple state-of-the-art methods, as supported by a group of commonly-employed metrics.
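The thesis's contrast-guided method is not reproduced here, but as a rough, hypothetical illustration of the family of techniques involved, global histogram equalization redistributes intensities so that a dark, low-contrast frame uses the full dynamic range:

```python
def equalize(gray, levels=256):
    """Global histogram equalization for a 2D list of integer intensities.

    A classic baseline for low-light enhancement: remap each intensity
    through the image's cumulative distribution so the output spans
    the full [0, levels) range.
    """
    flat = [p for row in gray for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    def remap(p):
        if n == cdf_min:  # perfectly flat image: nothing to stretch
            return p
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in gray]

# A dark 2x2 frame (intensities 0 and 1) is stretched to the full range.
print(equalize([[0, 0], [1, 1]]))  # [[0, 0], [255, 255]]
```

This is only a baseline sketch; the thesis's actual system guides the enhancement by local contrast rather than a single global remapping.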
A pair of detection frameworks capable of identifying schools of herring, salmon and hake, as well as swarms of krill, is also presented in this document. The inputs used in their development, echograms, are visual representations of acoustic backscatter data from echosounder instruments, thus addressing the active acoustic underwater stream. These detectors use different Deep Learning paradigms to account for the unique challenges presented by each pelagic species. Specifically, the detection of krill and finfish is accomplished with a novel semantic segmentation network (U-MSAA-Net) capable of leveraging local and contextual information from feature maps of multiple scales.
In order to explore the out-of-water visual data stream, we examine a large dataset composed of years' worth of images from a coastal region with heavy marine vessel traffic, which has been associated with significant anthropogenic footprints upon marine environments. A novel system that combines "traditional" Computer Vision and Deep Learning is proposed for the identification of such vessels under diverse visual appearances in this monitoring imagery. Thorough experimentation shows that this system is able to efficiently detect vessels of diverse sizes, shapes, colors and levels of visibility.
The results and reflections presented in this thesis reinforce the hypothesis that Computer Vision offers an extremely powerful set of methods for the automatic, accurate, time- and space-efficient interpretation of large amounts of visual environmental monitoring data, as detailed in the remainder of this work. / Graduate
|
52 |
Indoor 3D Scene Understanding Using Depth SensorsLahoud, Jean 09 1900 (has links)
One of the main goals in computer vision is to achieve a human-like understanding of images. Nevertheless, image understanding has been studied mainly in the 2D image frame, and relating images to the 3D world requires additional information. With the emergence of 3D sensors (e.g. the Microsoft Kinect), which provide depth along with color information, the task of propagating 2D knowledge into 3D becomes more attainable and enables interaction between a machine (e.g. a robot) and its environment. This dissertation focuses on three aspects of indoor 3D scene understanding: (1) 2D-driven 3D object detection for single-frame scenes with inherent 2D information, (2) 3D object instance segmentation for 3D reconstructed scenes, and (3) using room and floor orientation for automatic labeling of indoor scenes, which could be used for self-supervised object segmentation. These methods make it possible to capture the physical extents of 3D objects, such as their sizes and actual locations within a scene.
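The dissertation's detection pipelines are not reproduced here; as a minimal sketch of the geometry that makes depth sensors useful for 3D understanding, a depth pixel can be lifted into a 3D camera-frame point via the pinhole model (the intrinsics below are invented, roughly in the range of a Kinect-class sensor):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift pixel (u, v) with a metric depth reading to a 3D point
    in the camera frame, using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical intrinsics (focal lengths and principal point in pixels).
fx = fy = 525.0
cx, cy = 319.5, 239.5
# A pixel at the principal point maps straight down the optical axis.
print(backproject(319.5, 239.5, 2.0, fx, fy, cx, cy))  # (0.0, 0.0, 2.0)
```

Backprojecting every pixel of a depth frame this way yields the point cloud on which 3D detection and instance segmentation operate.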
|
53 |
Řízení o přestupcích fyzických osob v prvním stupni / First instance hearing of an administrative delict of natural personŠtádler, Jiří January 2019 (has links)
First instance hearing of an administrative delict of natural person Abstract The subject of this thesis is the first-instance hearing of an administrative delict of a natural person. It focuses on the procedural rules provided mainly in Act No. 250/2016 Coll., on Liability for Administrative Delicts and Proceedings on Them. In addition to the act on administrative delicts, this thesis analyses the subsidiary application of Act No. 500/2004 Coll., the Administrative Procedure Code, and Article 6 of the European Convention on Human Rights, with a focus on the legal status of an accused person. This thesis aims to: define proceedings on administrative delicts; define the differences between proceedings on administrative delicts and proceedings on criminal delicts; define the individual subjects of proceedings on administrative delicts and their procedural rights and obligations, with a focus on the rights and obligations of an accused person; and analyse the individual stages of proceedings on administrative delicts, including actions preceding the initiation of proceedings. In accordance with these goals, this thesis characterises proceedings on administrative delicts as a special type of public proceedings in which an administrative body determines the guilt of a particular person. It compares proceedings on administrative delicts and proceedings of...
|
54 |
Integrate Model and Instance Based Machine Learning for Network Intrusion DetectionAra, Lena 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In computer networks, convenient internet access facilitates internet services but at the same time also augments the spread of malicious software, which can represent an attack or unauthorized access. This makes intrusion detection an important area to explore for detecting these unwanted activities. This thesis concentrates on combining model- and instance-based machine learning for detecting intrusions through a series of algorithms, starting from clustering similar hosts.
Similar hosts have been found based on supervised machine learning techniques such as Support Vector Machines, Decision Trees and K-Nearest Neighbors, using our proposed Data Fusion algorithm. Maximal cliques from graph theory have been explored to find the clusters. A recursive way of merging the decision areas of the best features is proposed. The idea is to implement a combination of model- and instance-based machine learning and analyze how it performs compared to a conventional machine learning algorithm such as Random Forest for intrusion detection. The system has been evaluated on three datasets from CTU-13. The results show that our proposed method gives a better detection rate than traditional methods, which might overfit the data.
The research work done in model merging, instance-based learning, random forests, data mining and ensemble learning with regard to intrusion detection has been studied and taken as reference.
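The thesis's Data Fusion algorithm is not reproduced here, but the clique-based clustering idea can be sketched: given a similarity graph over hosts, the maximal cliques are exactly the groups in which every pair of hosts is mutually similar. A minimal Bron-Kerbosch enumeration (without pivoting), run on an invented toy adjacency, looks like:

```python
def maximal_cliques(adj):
    """Enumerate all maximal cliques of an undirected graph.

    adj maps each vertex to the set of its neighbours; this is the
    basic Bron-Kerbosch recursion, adequate for small host-similarity
    graphs.
    """
    cliques = []
    def expand(r, p, x):
        if not p and not x:
            cliques.append(sorted(r))  # r cannot be extended: maximal
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    expand(set(), set(adj), set())
    return sorted(cliques)

# Toy host-similarity graph: hosts a, b, c are pairwise similar; d only to c.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(maximal_cliques(adj))  # [['a', 'b', 'c'], ['c', 'd']]
```

Each maximal clique then serves as a candidate cluster of hosts to be modeled together.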
|
55 |
Using Machine Learning Techniques to Improve Static Code Analysis Tools UsefulnessAlikhashashneh, Enas A. 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / This dissertation proposes an approach that uses Machine Learning (ML) techniques to reduce, as much as possible, the cost of manually inspecting the false positive warnings reported by Static Code Analysis (SCA) tools. The proposed approach neither assumes a particular SCA tool nor depends on the specific programming language used to write the target source code or application. To reduce the number of false positive warnings, we first evaluated a number of SCA tools in terms of software engineering metrics using a synthetic source code suite, the Juliet test suite. From this evaluation, we concluded that the SCA tools report plenty of false positive warnings that need manual inspection. We then generated a number of datasets from source code that forced the SCA tools to generate either true positive, false positive, or false negative warnings. These datasets were then used to train four ML classifiers to classify the warnings collected from the synthetic source code. From the experimental results, we observed that the classifier built using the Random Forests (RF) technique outperformed the rest of the classifiers. Lastly, using this classifier and an instance-based transfer learning technique, we ranked a number of warnings aggregated from various open-source software projects. The experimental results show that the proposed approach to reducing the cost of the manual inspection of false positive warnings outperformed the random ranking algorithm and was highly correlated with the ranked list generated by the optimal ranking algorithm.
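The actual classifier and features are described in the dissertation; purely as an illustration of the final ranking step (with invented warning records and assumed per-tree votes from an already-trained forest), warnings can be ordered so that those least likely to be false positives are inspected first:

```python
def rank_warnings(warnings, votes):
    """Rank SCA warnings for manual inspection.

    votes[w] holds each tree's binary vote (1 = false positive) for
    warning w; the forest's probability estimate is the vote average.
    Warnings most likely to be genuine defects sort to the front.
    """
    def fp_prob(w):
        v = votes[w]
        return sum(v) / len(v)
    return sorted(warnings, key=fp_prob)

# Invented example: three warnings with per-tree votes from a 5-tree forest.
votes = {
    "W1": [1, 1, 1, 0, 1],  # very likely a false positive
    "W2": [0, 0, 0, 0, 1],  # likely a genuine defect
    "W3": [1, 0, 1, 0, 0],  # uncertain
}
print(rank_warnings(["W1", "W2", "W3"], votes))  # ['W2', 'W3', 'W1']
```

Under this scheme a reviewer working down the list spends effort on probable true positives before reaching warnings the model considers noise.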
|
56 |
THE K-MULTIPLE INSTANCE REPRESENTATIONVijayanathasamy Srikanthan, Swetha 28 January 2020 (has links)
No description available.
|
57 |
Evaluation of machine learning models for classifying malicious URLsAbad, Shayan, Gholamy, Hassan January 2023 (has links)
Millions of new websites are created daily, making it challenging to determine which ones are safe. Cybersecurity involves protecting companies and users from cyberattacks. Cybercriminals exploit various methods, including phishing attacks, to trick users into revealing sensitive information. In Australia alone, there were over 74,000 reported phishing attacks in 2022, resulting in a financial loss of over $24 million. Artificial intelligence (AI) and machine learning are effective tools in various domains, such as cancer detection, financial fraud detection, and chatbot development. Machine learning models, such as Random Forest and Support Vector Machines, are commonly used for classification tasks. With the rise of cybercrime, it is crucial to use machine learning to identify both known and new malicious URLs. The purpose of the study is to compare different instance selection methods and machine learning models for classifying malicious URLs. In this study, a dataset containing approximately 650,000 URLs from Kaggle was used. The dataset consisted of four categories: phishing, defacement, malware, and benign URLs. Three datasets, each consisting of around 170,000 URLs, were generated using instance selection methods (DRLSH, BPLSH, and random selection) implemented in MATLAB. Machine learning models, including SVM, DT, KNNs, and RF, were employed. The study applied these instance selection methods to a dataset of malicious URLs, trained the machine learning models on the resulting datasets, and evaluated their performance using 16 features and one output feature. In the process of hyperparameter tuning, the training dataset was used to train four models with different hyperparameter settings. Bayesian optimization was employed to find the best hyperparameters for each model. The classification process was then conducted, and the results were compared. 
The study found that the random instance selection method outperformed the other two methods, BPLSH and DRLSH, in terms of both accuracy and elapsed time for data selection. The lower accuracies achieved by the DRLSH and BPLSH methods may be attributed to the imbalanced dataset, which led to poor sample selection.
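DRLSH and BPLSH are not reproduced here; as a sketch of the baseline that won this comparison, random instance selection simply draws a subset of the data, optionally per class so as not to worsen the imbalance the authors point to (the labels and sizes below are invented):

```python
import random

def random_instance_selection(instances, labels, per_class, seed=0):
    """Randomly keep up to `per_class` instances from each class.

    Plain random selection draws from the pooled data; sampling per
    class instead keeps the reduced training set balanced.
    """
    rng = random.Random(seed)
    by_class = {}
    for inst, lab in zip(instances, labels):
        by_class.setdefault(lab, []).append(inst)
    selected, selected_labels = [], []
    for lab, group in by_class.items():
        chosen = rng.sample(group, min(per_class, len(group)))
        selected.extend(chosen)
        selected_labels.extend([lab] * len(chosen))
    return selected, selected_labels

# Invented toy data: 6 "benign" URLs and 2 "phishing" URLs.
urls = [f"u{i}" for i in range(8)]
labels = ["benign"] * 6 + ["phishing"] * 2
subset, sub_labels = random_instance_selection(urls, labels, per_class=2)
print(sub_labels.count("benign"), sub_labels.count("phishing"))  # 2 2
```

Its low cost is the point: the study found that this cheap baseline matched or beat the LSH-based methods on the URL dataset.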
|
58 |
Using Instance-Level Meta-Information to Facilitate a More Principled Approach to Machine LearningSmith, Michael Reed 01 April 2015 (has links) (PDF)
As the capability for capturing and storing data increases and becomes more ubiquitous, an increasing number of organizations are looking to use machine learning techniques as a means of understanding and leveraging their data. However, the success of applying machine learning techniques depends on which learning algorithm is selected, the hyperparameters that are provided to the selected learning algorithm, and the data that is supplied to the learning algorithm. Even among machine learning experts, selecting an appropriate learning algorithm, setting its associated hyperparameters, and preprocessing the data can be a challenging task and is generally left to the expertise of an experienced practitioner, intuition, trial and error, or another heuristic approach. This dissertation proposes a more principled approach to understanding how the learning algorithm, hyperparameters, and data interact with each other, to facilitate a data-driven approach for applying machine learning techniques. Specifically, this dissertation examines the properties of the training data and proposes techniques to integrate this information into the learning process and into the preprocessing of the training set. It also proposes techniques and tools to address selecting a learning algorithm and setting its hyperparameters. This dissertation is comprised of a collection of papers that address understanding the data used in machine learning and the relationship between the data, the performance of a learning algorithm, and the learning algorithm's associated hyperparameter settings. Contributions of this dissertation include:
* Instance hardness, which examines how difficult an instance is to classify correctly.
* Hardness measures that characterize properties of why an instance may be misclassified.
* Several techniques for integrating instance hardness into the learning process. These techniques demonstrate the importance of considering each instance individually rather than performing a global optimization that considers all instances equally.
* Large-scale examinations of the investigated techniques, including large numbers of examined data sets and learning algorithms. This provides more robust results that are less likely to be affected by noise.
* The Machine Learning Results Repository, a repository for storing the results of machine learning experiments at the instance level (the prediction for each instance is stored). This allows many data set-level measures to be calculated, such as accuracy, precision, or recall. These results can be used to better understand the interaction between the data, learning algorithms, and associated hyperparameters. Further, the repository is designed to be a tool for the community, where data can be downloaded and uploaded to follow the development of machine learning algorithms and applications.
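Smith's precise formulation involves a fixed portfolio of learning algorithms; as a schematic version, instance hardness can be estimated as the fraction of classifiers in some portfolio that misclassify the instance (the toy predictions below are invented):

```python
def instance_hardness(true_label, predictions):
    """Estimate how hard an instance is to classify correctly:
    the fraction of a portfolio of trained classifiers whose
    prediction for the instance disagrees with its true label.
    """
    wrong = sum(1 for p in predictions if p != true_label)
    return wrong / len(predictions)

# Invented predictions for one instance from a portfolio of five classifiers.
print(instance_hardness("cat", ["cat", "cat", "cat", "cat", "cat"]))  # 0.0
print(instance_hardness("cat", ["dog", "cat", "dog", "dog", "cat"]))  # 0.6
```

Instances with high scores under such a measure are candidates for special treatment during training, such as down-weighting or filtering.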
|
59 |
Distribution-Based Adversarial Multiple-Instance LearningChen, Sherry 27 January 2023 (has links)
No description available.
|
60 |
Exploration of performance evaluation metrics with deep-learning-based generic object detection for robot guidance systemsGustafsson, Helena January 2023 (has links)
Robots are often used within industry for automated tasks that are too dangerous, complex, or strenuous for humans, which leads to time and cost benefits. Robots can have an arm and a gripper to manipulate the world, and sensors as eyes to perceive it. Human vision can seem an effortless task, but machine vision requires substantial computation in an attempt to be as effective as human vision. Visual object recognition is a common goal for machine vision, and it is often approached using deep learning and generic object detection. This thesis focuses on robot guidance systems that include a robot with a gripper on its arm, a camera that acquires images of the world, boxes to detect in one or more layers, and the software that applies a generic object detection model to detect the boxes. The performance of robot guidance systems is affected by many variables, such as environmental, camera, object, and robot gripper aspects. A survey was constructed to receive feedback from professionals on what thresholds can be defined for a detection from the model to count as correct, where the detection must refer to an actual object that the robot can pick up. This thesis implements precision, recall, average precision at a specific threshold, average precision over a range of thresholds, the localization-recall-precision error, and a manually constructed score, based on the survey results, for the robot's ability to pick up an object from the information provided by the detection, called the pickability score. The metrics from this thesis are implemented within a tool intended for analyzing different models' performance on varying datasets. The values of all the metrics for the applied dataset are presented in the results. The metrics are discussed with regard to what information they portray in the context of a robot guidance system.
The conclusion is to use each metric for what it is best at on its own: the average precision metrics for the performance evaluation of the models, and the pickability scores with extended features for evaluating robot gripper pickability.
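The pickability criteria come from the thesis's survey and are not reproduced here; the conventional part of the evaluation can be sketched as precision and recall at an intersection-over-union (IoU) threshold over axis-aligned boxes (all boxes below are invented):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(detections, ground_truth, thr=0.5):
    """Greedy one-to-one matching: a detection is a true positive if it
    overlaps a not-yet-matched ground-truth box with IoU >= thr."""
    matched = set()
    tp = 0
    for det in detections:
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou(det, gt) >= thr:
                matched.add(i)
                tp += 1
                break
    fp = len(detections) - tp
    fn = len(ground_truth) - tp
    return tp / (tp + fp), tp / (tp + fn)

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(1, 1, 10, 10), (50, 50, 60, 60)]  # one good hit, one spurious box
print(precision_recall(dets, gts))  # (0.5, 0.5)
```

Averaging precision over recall levels (and over a range of IoU thresholds) yields the average precision variants the thesis evaluates; the pickability score then layers the survey-derived gripper criteria on top of this matching.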
|