
Real-time Vision-Based Lane Detection with 1D Haar Wavelet Transform on Raspberry Pi

Sudini, Vikas Reddy 01 May 2017 (has links)
Rapid progress is being made toward the realization of autonomous cars. Since the technology is in its early stages, human intervention is still necessary to ensure hazard-free operation of autonomous driving systems, and substantial research efforts are underway to enhance driver and passenger safety in autonomous cars. Toward that end, GreedyHaarSpiker, a real-time vision-based lane detection algorithm, is proposed for road lane detection in different weather conditions. The algorithm has been implemented in Python 2.7 with OpenCV 3.0 and tested on a Raspberry Pi 3 Model B (ARMv8, 1 GB RAM) coupled to a Raspberry Pi camera board v2. To test the algorithm’s performance, the Raspberry Pi and the camera board were mounted inside a Jeep Wrangler. The algorithm performed best in sunny weather with no snow on the road; its performance deteriorated at night or when the road surface was covered with snow.
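As a rough illustration of the core operation (not the author’s GreedyHaarSpiker implementation, which is not reproduced here), a single-level 1D Haar wavelet transform over a grayscale image row can be computed with NumPy; lane paint produces sharp intensity jumps, which appear as large-magnitude detail coefficients. All values below are invented for illustration:

```python
import numpy as np

def haar_1d(signal):
    """One level of the 1D Haar wavelet transform.

    Returns (approximation, detail) coefficients; large |detail|
    values mark sharp intensity changes such as lane-paint edges.
    """
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                       # pad odd-length rows
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

# Hypothetical usage on one row of a grayscale road image:
row = np.array([90, 92, 91, 230, 228, 95, 93, 94], dtype=float)  # bright stripe
_, detail = haar_1d(row)
edges = np.where(np.abs(detail) > 50)[0] * 2  # map back to pixel positions
print(edges)  # indices near the stripe's left/right edges
```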

Graph-based Latent Embedding, Annotation and Representation Learning in Neural Networks for Semi-supervised and Unsupervised Settings

Kilinc, Ismail Ozsel 30 November 2017 (has links)
Machine learning has been immensely successful in supervised learning, with outstanding examples in major industrial applications such as voice and image recognition. Following these developments, the most recent research has begun to focus primarily on algorithms that can exploit very large sets of unlabeled examples to reduce the amount of manually labeled data required for existing models to perform well. In this dissertation, we propose graph-based latent embedding/annotation/representation learning techniques in neural networks tailored for semi-supervised and unsupervised learning problems. Specifically, we propose a novel regularization technique called Graph-based Activity Regularization (GAR) and a novel output layer modification called Auto-clustering Output Layer (ACOL), which can be used separately or together to develop scalable and efficient learning frameworks for semi-supervised and unsupervised settings.

First, using the GAR technique alone, we develop a framework providing an effective and scalable graph-based solution for semi-supervised settings in which there exists a large number of observations but only a small subset with ground-truth labels. The proposed approach is natural for the classification framework on neural networks, as it requires no additional task of calculating a reconstruction error (as in autoencoder-based methods) or implementing a zero-sum game mechanism (as in adversarial-training-based methods). We demonstrate that GAR effectively and accurately propagates the available labels to unlabeled examples. Our results show performance comparable to state-of-the-art generative approaches for this setting, using an easier-to-train framework.

Second, we explore a different type of semi-supervised setting in which a coarse level of labeling is available for all observations, but the model has to learn a finer, deeper level of latent annotations for each one. Problems in this setting are likely to be encountered in many domains, such as text categorization, protein function prediction, and image classification, as well as in exploratory scientific studies such as medical and genomics research. We consider this setting as simultaneously performed supervised classification (per the available coarse labels) and unsupervised clustering (within each coarse label) and propose a novel framework combining GAR with ACOL, which enables the network to perform concurrent classification and clustering. We demonstrate how the coarse label supervision impacts performance and how the classification task helps propagate useful clustering information between sub-classes. Comparative tests on the most popular image datasets rigorously demonstrate the effectiveness and competitiveness of the proposed approach.

The third and final setup builds on the prior framework to unlock fully unsupervised learning, where we propose to substitute real, yet unavailable, parent-class information with pseudo class labels. In this novel unsupervised clustering approach, the network can exploit hidden information indirectly introduced through a pseudo classification objective. We train an ACOL network through this pseudo supervision together with an unsupervised objective based on GAR and ultimately obtain a k-means-friendly latent representation. Furthermore, we demonstrate how the chosen transformation type impacts performance and helps propagate the latent information that is useful in revealing unknown clusters. Our results show state-of-the-art performance for unsupervised clustering tasks on the MNIST, SVHN, and USPS datasets, with the highest accuracies reported to date in the literature.
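The abstract includes no code, but the ACOL idea can be sketched as follows: give the network k sub-class outputs per parent class and pool them to obtain parent predictions, so that supervised training on coarse labels leaves the sub-class activations free to specialize into clusters. Below is a minimal PyTorch sketch under that reading; the class name, pooling choice, and shapes are illustrative assumptions, not the dissertation’s code:

```python
import torch
import torch.nn as nn

class ACOLHead(nn.Module):
    """Sketch of an auto-clustering output layer: the network predicts
    n_parents * k sub-class logits, and parent-class probabilities are
    obtained by pooling each parent's k cluster activations."""

    def __init__(self, in_features, n_parents, k):
        super().__init__()
        self.n_parents, self.k = n_parents, k
        self.fc = nn.Linear(in_features, n_parents * k)

    def forward(self, x):
        sub = torch.softmax(self.fc(x), dim=1)       # (B, n_parents * k)
        sub = sub.view(-1, self.n_parents, self.k)   # (B, n_parents, k)
        parent = sub.sum(dim=2)                      # pool clusters -> parent probs
        # Train on parent labels (e.g., log-loss on `parent`); the sub-class
        # activations are then inspected to reveal latent clusters.
        return parent, sub
```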

Semantic Description of Activities in Videos

Dias Moreira De Souza, Fillipe 07 April 2017 (has links)
Description of human activities in videos results not only in detection of actions and objects but also in identification of their active semantic relationships in the scene. Towards this broader goal, we present a combinatorial approach that assumes the availability of algorithms for detecting and labeling objects and actions, albeit with some errors. Given these uncertain labels and detected objects, we link them into interpretative structures using domain knowledge encoded with concepts of Grenander’s general pattern theory. Here, a semantic video description is built using basic units, termed generators, that represent labels of objects or actions. These generators have multiple out-bonds, each associated with either a type of domain semantics, spatial constraints, temporal constraints, or image/video evidence. Generators combine with each other, according to a set of pre-defined combination rules that capture domain semantics, to form larger connected structures known as configurations, which here are used to represent video descriptions. This framework offers a powerful representational scheme owing to its flexibility in spanning a space of interpretative structures (configurations) of varying sizes and structural complexity. We impose a probability distribution on the configuration space, with inferences generated using a Markov Chain Monte Carlo-based simulated annealing algorithm. The primary advantage of the approach is that it handles known computer vision challenges (appearance variability, errors in object label annotation, object clutter, simultaneous events, temporal dependency encoding, etc.) without the need for an exponentially large (labeled) training data set.
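For readers unfamiliar with the inference step, a generic Markov Chain Monte Carlo simulated annealing loop looks like the sketch below. The energy and proposal functions are placeholders standing in for the pattern-theory posterior over configurations and for local moves (e.g., swapping a generator’s label or re-wiring a bond); none of this is the paper’s actual implementation:

```python
import math
import random

def simulated_annealing(init_config, energy, propose, n_iters=10000, t0=1.0):
    """Generic simulated annealing: `energy` stands in for the negative
    log-probability of a configuration and `propose` for a local move."""
    config, e = init_config, energy(init_config)
    best, best_e = config, e
    for i in range(1, n_iters + 1):
        temp = t0 / math.log(i + 1)              # logarithmic cooling schedule
        cand = propose(config)
        e_cand = energy(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability
        if e_cand < e or random.random() < math.exp((e - e_cand) / temp):
            config, e = cand, e_cand
            if e < best_e:
                best, best_e = config, e
    return best, best_e
```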

Categorizing Blog Spam

Bevans, Brandon 01 June 2016 (has links)
The internet has matured into the focal point of our era. Its ecosystem is vast, complex, and in many regards unaccounted for. One of the most prevalent aspects of the internet is spam. Like the rest of the internet, spam has evolved from simply meaning ‘unwanted email’ to a blanket term encompassing any unsolicited or illegitimate content that appears in the wide range of media that exists on the internet. Many forms of spam permeate the internet, and spam architects continue to develop tools and methods to avoid detection. On the other side, cyber security engineers continue to develop more sophisticated detection tools to curb the harmful effects that come with spam. This virtual arms race has no end in sight. Most efforts thus far have gone toward accurately distinguishing spam from ham, and rightfully so, since initial detection is essential. However, research is lacking in understanding the current ecosystem of spam, spam campaigns, and the behavior of the botnets that drive the majority of spam traffic. This thesis focuses on characterizing spam, particularly the spam that appears in forums, where it is delivered by bots posing as legitimate users. Forum spam is used primarily to push advertisements or to boost other websites’ perceived popularity by including HTTP links in the content of the post. We conduct an experiment to collect a sample of the blog posts and network activity of spambots in the wild, and we present the resulting corpus for analysis before proceeding with our own. We cluster associated groups of users and IP addresses into entities, which we accept as a model of the underlying botnets that interact with our honeypots. We use Natural Language Processing (NLP) and Machine Learning (ML) to determine that semantic-based models of botnets are sufficient for distinguishing them from one another. We also find that the syntactic structure of posts varies little from botnet to botnet. Finally, we confirm that, to a large degree, botnet behavior and content hold across different domains.
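A minimal sketch of the semantic-modeling idea, using TF-IDF features and k-means from scikit-learn; the thesis’s actual pipeline also draws on network activity and honeypot metadata, and the posts and URLs below are fabricated for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical forum-spam posts collected by a honeypot
posts = [
    "Buy cheap watches at http://example-a.test",
    "Discount watches here http://example-a.test/shop",
    "Earn money fast working from home http://example-b.test",
    "Work from home and earn cash http://example-b.test/jobs",
]

# Semantic features: TF-IDF over post text
X = TfidfVectorizer(stop_words="english").fit_transform(posts)

# Cluster posts; clusters approximate distinct campaigns/botnet entities
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g., [0 0 1 1] - the two campaigns separate cleanly
```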

DiSH: Democracy in State Houses

Russo, Nicholas A 01 February 2019 (has links)
In our current political climate, state-level legislators have become increasingly important. Due to cuts in funding and growing focus at the national level, public oversight of these legislators has drastically decreased. This makes it difficult for citizens and activists to understand the relationships and commonalities between legislators. This thesis provides three contributions to address this issue. First, we created a data set containing over 1200 features focused on a legislator’s activity on bills. Second, we created embeddings that represent a legislator’s level of activity and engagement for a given bill, using a custom model called Democracy2Vec. Third, we provided a case study focused on the 2015-2016 California State Legislature and had our results verified by a political expert. Our results show that our embeddings can explain relationships between legislators and how they will likely act during the legislative process.
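Democracy2Vec itself is a custom model not reproduced here, but once legislators are embedded as vectors, relationships between them can be compared with standard cosine similarity, as in this hypothetical sketch (the feature values are invented):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two legislator embedding vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical activity-based embeddings (stand-ins for Democracy2Vec output)
leg_a = np.array([0.8, 0.1, 0.4])  # e.g., votes, amendments, co-sponsorships
leg_b = np.array([0.7, 0.2, 0.5])
leg_c = np.array([0.1, 0.9, 0.0])

print(cosine_sim(leg_a, leg_b))  # high: similar legislative behavior
print(cosine_sim(leg_a, leg_c))  # low: different behavior
```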

Intelligent Cinematic Camera Control for Real-Time Graphics Applications

Meeder, Ian Harris 01 January 2020 (has links)
E-sports is currently estimated to be a billion-dollar industry that is only growing in size from year to year. However, the cinematography of spectated games leaves much to be desired. In most cases, the spectator either controls their own freely-moving camera or sees the view that a specific player sees. This thesis presents a system for generating cinematically pleasing views for spectating real-time graphics applications. A custom real-time engine has been built to demonstrate the effect of this system on several different game modes with varying visual cinematic constraints, such as the rule of thirds. To create the cinematic views, we encode cinematic rules as cost functions that are fed into a non-linear least squares solver. These cost functions rely on the geometry of the scene, minimizing residuals based on the 3D positions and 2D reprojections of the geometry. The final cinematic view is found by altering camera position and angle until a local minimum is reached. The system was evaluated by comparing video output from a traditional rigidly constrained camera with the results of our algorithm’s optimally solved views. User surveys were then used to qualitatively evaluate the system; the results of these surveys do not find a statistically significant preference between the cinematic views and the rigidly constrained views. In addition, we present performance and timing considerations, reporting that the system can operate within modern latency expectations when enough constraints are placed on the non-linear least squares solver.
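As a hedged sketch of the cost-function approach, the rule of thirds can be encoded as a residual between a subject’s 2D reprojection and a thirds intersection, then handed to a non-linear least squares solver such as SciPy’s. The pinhole model and small-angle pan/tilt offsets below are simplifying assumptions, not the thesis’s engine code:

```python
import numpy as np
from scipy.optimize import least_squares

# Image size and a rule-of-thirds target point (left third, upper third)
W, H = 1920, 1080
target = np.array([W / 3, H / 3])

subject = np.array([2.0, 1.0, 10.0])  # subject position in camera space (x, y, z)
f = 1000.0                            # focal length in pixels

def project(point, pan_tilt):
    """Simplified pinhole projection with small pan/tilt offsets (radians)."""
    pan, tilt = pan_tilt
    x, y, z = point
    u = f * (x / z - pan) + W / 2     # pan shifts the image horizontally
    v = f * (y / z - tilt) + H / 2    # tilt shifts the image vertically
    return np.array([u, v])

def residuals(pan_tilt):
    # Rule-of-thirds cost: distance between the subject's 2D reprojection
    # and the chosen thirds intersection
    return project(subject, pan_tilt) - target

sol = least_squares(residuals, x0=np.zeros(2))
print(sol.x, project(subject, sol.x))  # pan/tilt that frame the subject
```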

Applying Deep Learning to the Ice Cream Vendor Problem: An Extension of the Newsvendor Problem

Solihu, Gaffar 01 August 2021 (has links)
The newsvendor problem is a classical supply chain problem used to develop strategies for inventory optimization. Its goal is to predict the optimal order quantity of a product to meet an uncertain future demand, given that the demand distribution itself is known. The ice cream vendor problem extends the classical newsvendor problem to an uncertain demand with unknown distribution, albeit a distribution that is known to depend on exogenous features. The goal is thus to estimate the order quantity that minimizes the total cost when demand does not follow any known statistical distribution. The problem is formulated as a mathematical programming problem and solved using a deep neural network approach. The feature-dependent demand data used to train and test the deep neural network is produced by a discrete-event simulation based on actual daily temperature data, among other features.
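A common way to train such a model, plausibly close in spirit to the approach described, is to minimize the asymmetric newsvendor cost directly as the network’s loss. This PyTorch sketch uses invented unit costs and synthetic temperature-driven demand in place of the thesis’s simulation data:

```python
import torch
import torch.nn as nn

c_under, c_over = 5.0, 1.0  # assumed unit costs of under-/over-stocking

def newsvendor_loss(q, d):
    """Asymmetric newsvendor cost: lost sales vs. leftover inventory."""
    return (c_under * torch.relu(d - q) + c_over * torch.relu(q - d)).mean()

# Tiny network: exogenous feature (temperature) -> order quantity
model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.01)

# Synthetic demand that rises with temperature (stand-in for simulated data)
temps = torch.rand(256, 1) * 30
demand = 2.0 * temps + torch.randn(256, 1) * 5

for _ in range(500):
    opt.zero_grad()
    loss = newsvendor_loss(model(temps), demand)
    loss.backward()
    opt.step()
```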

RISK Gameplay Analysis Using Stochastic Beam Search

Gillenwater, Jacob 01 May 2022 (has links)
Hasbro’s RISK, first published in 1959, is a complex multiplayer strategy game that has received little attention from the scientific community. Training artificial intelligence (AI) agents using stochastic beam search gives insight into effective strategy when playing RISK. A comprehensive analysis of the systems of play challenges preconceptions about good strategy in some areas of the game while reinforcing those preconceptions in others. This study applies stochastic beam search to discover optimal strategies in RISK. Results of the search show both support for and challenges to traditionally held positions about RISK gameplay. While stochastic beam search competently investigates gameplay on a turn-by-turn basis, the search cannot create contingencies that allow for effective strategy across multiple turns. Future work would investigate additional algorithms that eliminate this limitation to provide further insights into optimal gameplay strategies.
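For context, a generic stochastic beam search keeps a fixed-width beam per ply but samples successors in proportion to their heuristic score instead of deterministically taking the top k. The sketch below (sampling with replacement, softmax-style weighting) is one standard formulation, not necessarily the study’s exact variant:

```python
import math
import random

def stochastic_beam_search(start, successors, score, width=5, depth=10, temp=1.0):
    """Keep `width` states per ply, sampling successors with probability
    proportional to exp(score / temp) rather than a deterministic top-k."""
    beam = [start]
    for _ in range(depth):
        candidates = [s for state in beam for s in successors(state)]
        if not candidates:
            break
        weights = [math.exp(score(c) / temp) for c in candidates]
        beam = random.choices(candidates, weights=weights,
                              k=min(width, len(candidates)))
    return max(beam, key=score)
```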

Leveraging Artificial Intelligence to Improve Provider Documentation in Patient Medical Records

Ozurigbo, Evangeline C 01 January 2018 (has links)
Clinical documentation is at the center of a patient’s medical record; this record contains all the information applicable to the care a patient receives in the hospital. The practice problem addressed in this project was the lack of clear, consistent, accurate, and complete patient medical records in a pediatric hospital. Although incomplete medical records had been a known issue for the project hospital, the issue was further intensified following the implementation of the 10th revision of the International Classification of Diseases (ICD-10) documentation standard, which exposed gaps in provider documentation that needed to be filled. Based on this, the researcher recommended a quality improvement project and worked with a multidisciplinary team from the hospital to develop an evidence-based documentation guideline that incorporated the ICD-10 standard for documenting pediatric diagnoses. Using data generated from the guideline, an artificial intelligence (AI) solution was developed in the form of best-practice advisory alerts to engage providers at the point of documentation and to augment provider efforts. Rosswurm and Larrabee’s conceptual framework and Kotter’s 8-step change model were used to develop the guideline and design the project. A descriptive data analysis using t-test significance indicated that financial reimbursement decreased by 25%, while case denials increased by 28%, after ICD-10 implementation. This project promotes positive social change by improving safety, quality, and accountability at the project hospital.

Automatic Extraction of Narrative Structure from Long Form Text

Eisenberg, Joshua Daniel 02 November 2018 (has links)
Automatic understanding of stories is a long-standing goal of the artificial intelligence and natural language processing research communities. Stories literally explain the human experience. Understanding our stories promotes the understanding of both individuals and groups of people: cultures, societies, families, organizations, governments, and corporations, to name a few. People use stories to share information. Stories are told, by narrators, in linguistic bundles of words called narratives. My work has given computers awareness of narrative structure: specifically, where the boundaries of a narrative lie in a text. This is the task of determining where a narrative begins and ends, a non-trivial task because people rarely tell one story at a time. People don’t explicitly announce when they are starting or stopping their stories: we interrupt each other; we tell stories within stories. Before my work, computers had no awareness of narrative boundaries. My programs can extract narrative boundaries from novels and short stories with an F1 of 0.65. Before this, I worked on teaching computers to identify which paragraphs of text have story content, with an F1 of 0.75 (the state of the art). Additionally, I have taught computers to identify the narrative point of view (POV; how the narrator identifies themselves) and diegesis (how involved the narrator is in the story’s action), with an F1 of over 0.90 for both narrative characteristics. For the narrative POV, diegesis, and narrative-level extractors, I ran annotation studies, with high agreement, that allowed me to teach computational models to identify structural elements of narrative through supervised machine learning. My work has given computers the ability to find where stories begin and end in raw text. This allows for further automatic analysis, such as extraction of plot, intent, event causality, and event coreference; these tasks are impossible when the computer can’t distinguish which stories are told in which spans of text. There are two key contributions in my work: 1) the identification of features that accurately extract elements of narrative structure, and 2) the gold-standard data and reports generated from the annotation studies on identifying narrative structure.
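In the spirit of the paragraph-level story-content detector (though not the dissertation’s actual feature set or data), a minimal supervised baseline with scikit-learn looks like this; the paragraphs and labels are invented:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Hypothetical paragraphs annotated for story content (1) or not (0)
paragraphs = [
    "Once upon a time a knight rode out at dawn.",
    "The quarterly figures are summarized in Table 2.",
    "She opened the door and the storm rushed in.",
    "Section 3 describes the experimental setup.",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a linear classifier
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(paragraphs, labels)

pred = clf.predict(paragraphs)
print(f1_score(labels, pred))  # the dissertation reports F1 = 0.75 on real data
```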
