101

A Machine Learning Approach for Reconnaissance Detection to Enhance Network Security

Bakaletz, Rachel 01 May 2022 (has links)
Before cyber-crime can happen, attackers must research the targeted organization to collect vital information about the target and pave the way for the subsequent attack phases. This cyber-attack phase is called reconnaissance or enumeration. This malicious phase allows attackers to discover information about a target that can be leveraged and used in an exploit. Information such as the operating system version, installed applications, and open ports can be detected using various tools during the reconnaissance phase. By knowing such information, cyber attackers can exploit vulnerabilities that are often unique to a specific version. In this work, we develop an end-to-end system that uses machine learning techniques to detect reconnaissance attacks on cyber networks. Successful detection of such attacks gives the target time to devise plans to evade or mitigate the cyber-attack phases that follow reconnaissance.
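As a rough illustration of the kind of pipeline such a detector implies (not the thesis' actual end-to-end system), the sketch below trains a classifier on hypothetical per-flow features; the feature names and synthetic data are assumptions made only for the example.

```python
# Illustrative only: a generic flow-feature classifier for scan-like activity.
# Feature names and data are hypothetical placeholders, not the thesis' system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-flow features: distinct destination ports contacted,
# SYN-to-ACK ratio, mean inter-packet gap, and bytes per packet.
X = rng.random((2000, 4))
y = (X[:, 0] > 0.8).astype(int)  # stand-in label: many ports probed ~ reconnaissance

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```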
102

Investigating Genetic Algorithm Optimization Techniques in Video Games

Ambuehl, Nathan 01 August 2017 (has links)
Immersion is essential to player experience in video games. Artificial Intelligence serves as an agent that can generate human-like responses and intelligence to reinforce a player's immersion in their environment. The most common strategy in video game AI is to use decision trees to guide chosen actions. However, decision trees result in repetitive and robotic actions that reflect an unrealistic interaction. This experiment explores selection, crossover, and mutation functions for a genetic algorithm in an isolated Super Mario Bros. pathfinding environment. An optimized pathfinding AI can be created by combining an elitist selection strategy with a uniform distribution crossover and a minimal mutation rate.
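A minimal sketch of that configuration (elitist selection, uniform crossover, small mutation rate) is given below; the bit-string genome and toy fitness function stand in for the Super Mario Bros. environment, which is not reproduced here.

```python
# Minimal GA sketch: elitist selection, uniform crossover, minimal mutation.
# The fitness function is a toy stand-in, not the pathfinding environment.
import random

GENOME_LEN, POP_SIZE, ELITE, MUT_RATE = 32, 50, 4, 0.01

def fitness(genome):
    return sum(genome)  # toy objective: maximize the number of 1s

def uniform_crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]  # each gene drawn uniformly from a parent

def mutate(genome):
    return [g ^ 1 if random.random() < MUT_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    elites = population[:ELITE]                       # elitism: best individuals survive unchanged
    children = []
    while len(children) < POP_SIZE - ELITE:
        p1, p2 = random.sample(population[:POP_SIZE // 2], 2)
        children.append(mutate(uniform_crossover(p1, p2)))
    population = elites + children

print("best fitness:", fitness(max(population, key=fitness)))
```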
103

Simulating Polistes Dominulus Nest-Building Heuristics with Deterministic and Markovian Properties

Pottinger, Benjamin 01 May 2022 (has links)
European Paper Wasps (Polistes dominula) are social insects that build round, symmetrical nests. Current models indicate that these wasps develop colonies by following simple heuristics based on nest stimuli. Computer simulations can model wasp behavior to imitate natural nest building. This research investigated various building heuristics through a novel Markov-based simulation. The simulation used a hexagonal grid and built cells according to the building rule supplied to the agent. Simulated nest data were compared with natural nest data and evaluated through visual inspection. For the rules simulated, larger nests were found to be less compact.
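As an illustration of this kind of rule-driven construction (not the specific heuristics evaluated in the thesis), the sketch below grows a nest on a hexagonal grid where the probability of building at a site increases with the number of already-built neighboring cells; the rule and its parameters are assumptions made for the example.

```python
# Illustrative rule-driven cell building on a hexagonal grid (axial coordinates).
# The stochastic "prefer sites with more built neighbors" rule is a hypothetical
# example, not the exact heuristics from the thesis.
import random

HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def neighbors(cell):
    q, r = cell
    return [(q + dq, r + dr) for dq, dr in HEX_NEIGHBORS]

def build_nest(n_cells=50, seed=0):
    random.seed(seed)
    nest = {(0, 0)}                                    # start from a single founding cell
    while len(nest) < n_cells:
        frontier = list({c for cell in nest for c in neighbors(cell)} - nest)
        # Markov-style rule: build probability grows with the number of built neighbors.
        weights = [sum(n in nest for n in neighbors(c)) for c in frontier]
        site = random.choices(frontier, weights=weights, k=1)[0]
        nest.add(site)
    return nest

nest = build_nest()
print(len(nest), "cells built")
```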
104

Energetic Path Finding Across Massive Terrain Data

Tsui, Andrew N 01 June 2009 (has links)
Before there were airplanes, cars, trains, boats, or bicycles, the primary means of transportation was on foot. Unfortunately, many of the trails used by ancient travelers have long since been abandoned. We present a software tool which can help visualize and predict where these forgotten trails might lie through the use of a human-centered cost metric. By comparing the paths generated by our software with known historical trails, we demonstrate how the tool can indicate likely trails used by ancient travelers. In addition, this new tool provides novel visualizations to better help the user understand alternate paths, the effect of terrain, and nearby areas of interest. Such a tool could be used by archaeologists and historians to better visualize and understand the terrain and paths around sites of historical interest. This thesis is a continuation of previous work, with emphasis on the ability to generate paths which traverse several thousand kilometers. To accomplish this, various graph simplification and path approximation algorithms are explored to construct a real-time path finding algorithm. To this end, we show that it is possible to restrict the search space for a path finding algorithm without sacrificing accuracy. Combined with a multi-threaded variant of Dijkstra's shortest path algorithm, we present a tool capable of traversing the contiguous US, a dataset containing over 19 billion data points, in under three hours on a 2.5 GHz dual core processor. The tool is demonstrated on several examples which show the potential archaeological and historical applicability, and provide avenues for future improvements.
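A minimal single-threaded sketch of grid-based Dijkstra with a walker-centric edge cost is shown below; the slope-penalty cost function, grid spacing, and toy elevation data are assumptions standing in for the thesis' energetic metric, terrain datasets, graph simplification, and multi-threading.

```python
# Sketch: Dijkstra over an elevation grid with a simple slope-penalized edge cost.
import heapq

def energetic_cost(h_from, h_to, step=30.0):
    slope = (h_to - h_from) / step
    return step * (1.0 + 5.0 * abs(slope))             # hypothetical penalty for steep terrain

def dijkstra(heights, start, goal):
    rows, cols = len(heights), len(heights[0])
    dist, pq = {start: 0.0}, [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + energetic_cost(heights[r][c], heights[nr][nc])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

terrain = [[0, 2, 4], [1, 3, 9], [2, 2, 2]]             # toy elevation grid (meters)
print(dijkstra(terrain, (0, 0), (2, 2)))
```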
105

Using GIST Features to Constrain Search in Object Detection

Solmon, Joanna Browne 19 August 2014 (has links)
This thesis investigates the application of GIST features [13] to the problem of object detection in images. Object detection refers to locating instances of a given object category in an image. It is contrasted with object recognition, which simply decides whether an image contains an object, regardless of the object's location in the image. In much of computer vision literature, object detection uses a "sliding window" approach to finding objects in an image. This requires moving various sizes of windows across an image and running a trained classifier on the visual features of each window. This brute force method can be time consuming. I investigate whether global, easily computed GIST features can be used to classify the size and location of objects in the image to help reduce the number of windows searched before the object is found. Using K-means clustering and Support Vector Machines to classify GIST feature vectors, I find that object size and vertical location can be classified with 73–80% accuracy. These classifications can be used to constrain the search location and window sizes explored by object detection methods.
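The classification stage described here might look roughly like the sketch below, which assumes the GIST descriptors have already been extracted (the descriptor computation itself is omitted); the 512-dimensional vectors, synthetic labels, and cluster count are illustrative assumptions.

```python
# Sketch of the classification stage only: cluster precomputed GIST vectors with
# K-means and train an SVM to predict a coarse object-size label.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
gist = rng.random((300, 512))                 # placeholder GIST descriptors, one per image
size_label = rng.integers(0, 3, size=300)     # e.g. small / medium / large object

clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(gist)
features = np.column_stack([gist, clusters])  # append cluster id as an extra feature

svm = SVC(kernel="rbf", C=1.0)
print(cross_val_score(svm, features, size_label, cv=5).mean())
```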
106

Object Detection and Recognition in Natural Settings

Dittmar, George William 04 January 2013 (has links)
Much research as of late has focused on biologically inspired vision models that are based on our understanding of how the visual cortex processes information. One prominent example of such a system is HMAX [17]. HMAX attempts to simulate the biological process for object recognition in cortex based on the model proposed by Hubel & Wiesel [10]. This thesis investigates the ability of an HMAX-like system (GLIMPSE [20]) to perform object detection in cluttered natural scenes. I evaluate these results using the StreetScenes database from MIT [1, 8]. This thesis addresses three questions: (1) Can the GLIMPSE-based object detection system replicate the object detection results reported by Bileschi using HMAX? (2) Which features computed by GLIMPSE lead to the best object detection performance? (3) What effect does elimination of clutter in the training sets have on the performance of our system? As part of this thesis, I built an object detection and recognition system using GLIMPSE [20] and demonstrate that it approximately replicates the results reported in Bileschi's thesis. In addition, I found that extracting and combining features from GLIMPSE using different layers of the HMAX model gives the best overall invariance to position, scale, and translation for recognition tasks, but comes with a much higher computational overhead. Further contributions include the creation of modified training and test sets based on the StreetScenes database, with clutter removed from the training data and the annotations for the detection task extended to cover objects of interest that were not in the original annotations of the database.
107

Tornado outbreak false alarm probabilistic forecasts with machine learning

Snodgrass, Kirsten Reed 12 May 2023 (has links) (PDF)
Tornadic outbreaks occur annually, causing fatalities and millions of dollars in damage. By improving forecasts, the public can be better equipped to act prior to an event. False alarms (FAs) can hinder the public's ability (or willingness) to act. As such, a probabilistic FA forecasting scheme would be beneficial to improving public response to outbreaks. Here, a machine learning approach is employed to predict FA likelihood from Storm Prediction Center (SPC) tornado outbreak forecasts. A database of hit and FA outbreak forecasts spanning 2010–2020 was developed using historical SPC convective outlooks and the SPC Storm Reports database. Weather Research and Forecasting (WRF) model simulations were run for each outbreak to characterize the underlying meteorological environments. Parameters from these simulations were used to train a support vector machine (SVM) to forecast FAs. Results were encouraging and may lead to further applications in severe weather operations.
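A hedged sketch of the core idea, an SVM that outputs a false-alarm probability from outbreak environment parameters, is shown below; the feature names (CAPE, shear, helicity) and synthetic data are assumptions, not the thesis' WRF-derived predictor set.

```python
# Illustrative probabilistic false-alarm classifier on synthetic outbreak features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((200, 3))                       # hypothetical columns: CAPE, 0-6 km shear, SRH
y = rng.integers(0, 2, size=200)               # 1 = false alarm, 0 = hit

model = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
model.fit(X, y)
print("P(false alarm) for a new event:", model.predict_proba(X[:1])[0, 1])
```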
108

Pruning GHSOM to create an explainable intrusion detection system

Kirby, Thomas Michael 12 May 2023 (has links) (PDF)
Intrusion Detection Systems (IDS) that provide high detection rates but are black boxes lead to models that make predictions a security analyst cannot understand. Self-Organizing Maps (SOMs) have been used to predict intrusions on a network while also explaining predictions through visualization and identifying significant features. However, they have not been able to compete with the detection rates of black box models. Growing Hierarchical Self-Organizing Maps (GHSOMs) have been used to obtain high detection rates on the NSL-KDD and CIC-IDS-2017 network traffic datasets, but they neglect creating explanations or visualizations, which results in another black box model. This paper offers a high-accuracy, Explainable Artificial Intelligence (XAI) model based on GHSOMs. One obstacle to creating a white box hierarchical model is the model growing too large and complex to understand. Another contribution this paper makes is a pruning method used to cut down on the size of the GHSOM, which provides a model that can offer insights and explanation while maintaining a high detection rate.
109

Reviving Mozart with Intelligence Duplication

Galajda, Jacob E 01 January 2021 (has links)
Deep learning has been applied to many problems that are too complex to solve through an algorithm. Most of these problems have not required the specific expertise of a certain individual or group; most applied networks learn information that is shared across humans intuitively. Deep learning has encountered very few problems that would require the expertise of a certain individual or group to solve, and there has yet to be a defined class of networks capable of achieving this. Such networks could duplicate the intelligence of a person relative to a specific task, such as their writing style or music composition style. For this thesis research, we propose to investigate Artificial Intelligence in a new direction: Intelligence Duplication (ID). ID encapsulates neural networks that are capable of solving problems that require the intelligence of a specific person or collective group. This concept can be illustrated by learning the way a composer positions their musical segments, as in the Deep Composer neural network. This allows the network to generate songs similar to those of the aforementioned artist. One notable issue that arises is the limited amount of training data available in some cases. For instance, it would be nearly impossible to duplicate the intelligence of a lesser-known artist or an artist who did not live long enough to produce many works. Generating many artificial segments in the artist's style will overcome these limitations. In recent years, Generative Adversarial Networks (GANs) have shown great promise in many similarly related tasks. Generating artificial segments will give the network greater leverage in assembling works similar to the artist's, as there will be an increased overlap in data points within the hashed embedding. Additional review indicates that current Deep Segment Hash Learning (DSHL) network variations have the potential to optimize this process. As there are fewer nodes in the input and output layers, DSHL networks do not need to compute nearly as much information as traditional networks. We indicate that a synthesis of both DSHL and GAN networks will provide the framework necessary for future ID research. The contributions of this work will inspire a new wave of AI research that can be applied to many other ID problems.
110

Vertical federated learning using autoencoders with applications in electrocardiograms

Chorney, Wesley William 08 August 2023 (has links) (PDF)
Federated learning is a framework in machine learning that allows for training a model while maintaining data privacy. Moreover, it allows clients with their own data to collaborate in order to build a stronger, shared model. Federated learning is of particular interest for healthcare data, since it is of the utmost importance to respect patient privacy while still building useful diagnostic tools. However, healthcare data can be complicated — data format might differ across providers, leading to unexpected inputs and incompatibility between different providers. For example, electrocardiograms might differ in sampling rate or number of leads used, meaning that a classifier trained at one hospital might be useless to another. We propose using autoencoders to address this problem, transforming important information contained in electrocardiograms to a uniform input, where federated learning can then be used to train a strong classifier for multiple healthcare providers. Furthermore, we propose using statistically-guided hyperparameter tuning to ensure fast convergence of the model.
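A minimal sketch of the two ideas combined here, mapping differently formatted inputs to a fixed-length representation and then federated averaging of client model weights, is given below; the resampling "encoder", logistic-regression clients, and synthetic ECG data are simplified placeholders rather than the proposed architecture.

```python
# Sketch: fixed-length encoding of variable-length signals + federated averaging.
import numpy as np

rng = np.random.default_rng(0)

def encode(ecg_signal, latent_dim=16):
    """Stand-in encoder: resample an any-length signal to a fixed-length vector."""
    idx = np.linspace(0, len(ecg_signal) - 1, latent_dim)
    return np.interp(idx, np.arange(len(ecg_signal)), ecg_signal)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One client's logistic-regression update on its private encoded data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Clients hold ECGs with different lengths (standing in for format differences).
clients = []
for n_samples, length in [(100, 500), (80, 1000), (120, 750)]:
    X = np.stack([encode(rng.standard_normal(length)) for _ in range(n_samples)])
    y = rng.integers(0, 2, size=n_samples).astype(float)
    clients.append((X, y))

global_w = np.zeros(16)
for round_ in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    global_w = np.average(local_ws, axis=0, weights=sizes)   # FedAvg: size-weighted mean

print("global model norm after 10 rounds:", np.linalg.norm(global_w))
```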
