171

Automatic Extraction of Narrative Structure from Long Form Text

Eisenberg, Joshua Daniel 02 November 2018 (has links)
Automatic understanding of stories is a long-standing goal of the artificial intelligence and natural language processing research communities. Stories explain the human experience. Understanding our stories promotes the understanding of both individuals and groups of people: cultures, societies, families, organizations, governments, and corporations, to name a few. People use stories to share information. Stories are told, by narrators, in linguistic bundles of words called narratives. My work has given computers awareness of narrative structure, specifically of where the boundaries of a narrative lie in a text. This is the task of determining where a narrative begins and ends, and it is non-trivial because people rarely tell one story at a time. People do not explicitly announce when they are starting or stopping their stories: we interrupt each other, and we tell stories within stories. Before my work, computers had no awareness of narrative boundaries, essentially where stories begin and end. My programs can extract narrative boundaries from novels and short stories with an F1 of 0.65. Before this I worked on teaching computers to identify which paragraphs of text have story content, with an F1 of 0.75 (which is state of the art). Additionally, I have taught computers to identify the narrative point of view (POV; how the narrator identifies themselves) and diegesis (how involved the narrator is in the story's action) with an F1 of over 0.90 for both narrative characteristics. For the narrative POV, diegesis, and narrative level extractors I ran annotation studies, with high agreement, that allowed me to teach computational models to identify structural elements of narrative through supervised machine learning. My work has given computers the ability to find where stories begin and end in raw text. This allows for further automatic analysis, such as extraction of plot, intent, event causality, and event coreference. These tasks are impossible when the computer cannot determine which stories are told in which spans of text. There are two key contributions in my work: 1) the identification of features that accurately extract elements of narrative structure, and 2) the gold-standard data and reports generated from the annotation studies on identifying narrative structure.
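
For context, a minimal sketch of the kind of supervised paragraph-level classifier the abstract describes (story content vs. non-story, evaluated with F1) is shown below. The bag-of-words features and toy paragraphs are placeholders, not the feature set or data used in the thesis.

```python
# Sketch: supervised story/non-story paragraph classification, scored with F1.
# Features and data are placeholders, not the thesis's actual feature set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

paragraphs = [
    "Once upon a time a traveler lost her way in the woods.",   # story
    "The annual report lists revenue by quarter and region.",   # non-story
    "He opened the door, and the dog bolted into the rain.",    # story
    "Submit the form in triplicate to the records office.",     # non-story
]
labels = [1, 0, 1, 0]  # 1 = contains story content

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(paragraphs, labels)

predictions = clf.predict(paragraphs)
print("F1:", f1_score(labels, predictions))
```
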
172

Early Stopping of a Neural Network via the Receiver Operating Curve.

Yu, Daoping 13 August 2010 (has links) (PDF)
This thesis presents the area under the ROC (Receiver Operating Characteristic) curve, abbreviated AUC, as an alternative measure for evaluating the predictive performance of ANN (Artificial Neural Network) classifiers. Conventionally, neural networks are trained until the total error converges to zero, which may give rise to over-fitting. To ensure that they do not over-fit the training data and then fail to generalize to new data, it appears effective to stop training as early as possible once the AUC is sufficiently large, which is achieved by integrating ROC/AUC analysis into the training process. To reduce the learning cost on imbalanced data sets with uneven class distributions, random sampling and k-means clustering are used to draw a smaller subset of representatives from the original training data. Finally, a confidence interval for the AUC is estimated with a non-parametric approach.
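
A minimal sketch of AUC-based early stopping as described here: the network is trained in small increments and training stops once the validation AUC is sufficiently large. The threshold, model size, and synthetic imbalanced data are assumptions for illustration, not the thesis's setup.

```python
# Sketch: train an ANN incrementally and stop once validation AUC is large enough.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Imbalanced toy data standing in for the uneven class distribution discussed.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), random_state=0)
auc_target = 0.95  # assumed stopping threshold, for illustration only

for epoch in range(200):
    net.partial_fit(X_train, y_train, classes=np.unique(y))
    auc = roc_auc_score(y_val, net.predict_proba(X_val)[:, 1])
    if auc >= auc_target:
        print(f"stopping at epoch {epoch}, validation AUC = {auc:.3f}")
        break
```
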
173

Classification models for 2,4-D formulations in damaged Enlist crops through the application of FTIR spectroscopy and machine learning algorithms

Blackburn, Benjamin 09 August 2022 (has links) (PDF)
With new 2,4-Dichlorophenoxyacetic acid (2,4-D) tolerant crops, increases in off-target movement events are expected. New formulations may mitigate these events, but standard laboratory techniques are ineffective at identifying these 2,4-D formulations. Using Fourier-transform infrared (FTIR) spectroscopy and machine learning algorithms, research was conducted to classify 2,4-D formulations in treated herbicide-tolerant soybean and cotton and to observe the influence of leaf treatment status and collection timing on classification accuracy. Pooled classification models using k-nearest neighbors classified 2,4-D formulations with over 65% accuracy in cotton and soybean. Tissue collected 14 days after treatment (DAT) in cotton and 21 DAT in soybean produced higher accuracies than the pooled model, as did tissue directly treated with 2,4-D. Lastly, models using timing and treatment status as factors resulted in higher accuracies, with the cotton 14 DAT New Growth and Treated models and the soybean 28 DAT and 21 DAT Treated models achieving the best accuracies.
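
A minimal sketch of the kind of classification setup described: a k-nearest-neighbors model applied to FTIR-like spectra. The synthetic spectra and the formulation class names are placeholders, not the study's actual data or labels.

```python
# Sketch: k-nearest-neighbors classification of FTIR-like spectra into
# hypothetical 2,4-D formulation classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_per_class, n_wavenumbers = 40, 300
formulations = ["choline salt", "dimethylamine salt", "ester"]  # assumed classes

spectra, labels = [], []
for i, name in enumerate(formulations):
    base = np.sin(np.linspace(0, 3 + i, n_wavenumbers))  # class "signature"
    spectra.append(base + 0.3 * rng.standard_normal((n_per_class, n_wavenumbers)))
    labels += [name] * n_per_class
X = np.vstack(spectra)

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0, stratify=labels)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("accuracy:", knn.score(X_te, y_te))
```
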
174

Predicting High-Cap Tech Stock Polarity: A Combined Approach using Support Vector Machines and Bidirectional Encoders from Transformers

Grisham, Ian L 01 May 2023 (has links) (PDF)
The abundance, accessibility, and scale of data have engendered an era in which machine learning can quickly and accurately solve complex problems, identify complicated patterns, and uncover intricate trends. One research area where these techniques are widely applied is the stock market. Yet financial domains are influenced by many factors and are notoriously difficult to predict due to their volatile and multivariate behavior. However, the literature indicates that public sentiment data may have significant predictive value and improve a model's ability to capture intricate trends. In this study, the classification accuracy of a momentum SVM was compared between datasets that did and did not contain sentiment-analysis features. The results indicated that the sentiment-containing datasets were typically better predictors, yielding improved model accuracy. However, the results did not match the improvements reported in similar research, and further work is required to determine the nature of the relationship between sentiment and higher model performance.
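
A minimal sketch of the comparison described: an SVM evaluated on feature sets with and without sentiment columns. The synthetic data and feature names are assumptions for illustration only.

```python
# Sketch: compare SVM accuracy with and without added sentiment features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 500
price_features = rng.standard_normal((n, 4))  # e.g. returns, momentum, volume, volatility
sentiment = rng.standard_normal((n, 1))        # e.g. daily mean tweet polarity
# Toy polarity label that partly depends on sentiment, so the comparison has signal.
y = (0.8 * sentiment[:, 0] + price_features[:, 1]
     + 0.5 * rng.standard_normal(n) > 0).astype(int)

for name, X in [("price only", price_features),
                ("price + sentiment", np.hstack([price_features, sentiment]))]:
    acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```
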
175

Online Path Planning And Control Solution For A Coordinated Attack Of Multiple Unmanned Aerial Vehicles In A Dynamic Environment

Vega-Nevarez, Juan 01 January 2012 (has links)
The role of the unmanned aerial vehicle (UAV) has expanded significantly in the military sector over recent decades, mainly due to its cost effectiveness and its ability to remove risk to human life. Current UAV technology supports a variety of missions, and extensive research and development is being performed to further expand its capabilities. One particular field of interest is the low-cost expendable UAV, since its small price tag makes it an attractive solution for target suppression. A swarm of these low-cost UAVs can be used as guided munitions, or kamikaze UAVs, to attack multiple targets simultaneously. The focus of this thesis is the development of a cooperative online path planning algorithm that coordinates the trajectories of these UAVs to achieve simultaneous arrival at their dynamic targets. A nonlinear autopilot design based on the dynamic inversion technique is also presented, which stabilizes the dynamics of the UAV across its entire operating envelope. A high-fidelity nonlinear six-degree-of-freedom model of a fixed-wing aircraft was developed as well and served as the main test platform to verify the performance of the presented algorithms.
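
The coordination goal, simultaneous arrival, can be illustrated with a toy calculation: choose a common arrival time from the slowest feasible vehicle and back out each UAV's commanded speed. This is only a sketch of the idea; the thesis's online planner and dynamic-inversion autopilot are far more involved, and the distances and speed envelope below are assumed values.

```python
# Sketch of simultaneous arrival: pick one arrival time all UAVs can meet,
# then derive each vehicle's commanded speed (a real planner would instead
# lengthen short paths rather than saturate at minimum speed).
distances = [4200.0, 3600.0, 5100.0]   # meters to each assumed target
v_min, v_max = 20.0, 45.0              # m/s, assumed speed envelope

# Earliest time at which every UAV can feasibly arrive together.
t_arrival = max(d / v_max for d in distances)

commanded = [min(max(d / t_arrival, v_min), v_max) for d in distances]
for i, (d, v) in enumerate(zip(distances, commanded)):
    print(f"UAV {i}: distance {d:.0f} m -> commanded speed {v:.1f} m/s")
```
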
176

Adapting Single-View View Synthesis with Multiplane Images for 3D Video Chat

Uppuluri, Anurag Venkata 01 December 2021 (has links) (PDF)
Activities like one-on-one video chatting and video conferencing with multiple participants are more prevalent than ever today as we continue to tackle the pandemic. Bringing a 3D feel to video chat has long been a hot topic in the vision and graphics communities. In this thesis, we employ novel view synthesis in attempting to turn one-on-one video chatting into 3D. We tuned the learning pipeline of Tucker and Snavely's single-view view synthesis paper, retraining it on the MannequinChallenge dataset, to better predict a layered representation of the scene viewed by either video chat participant at any given time. This intermediate representation of the local light field, called a Multiplane Image (MPI), may then be used to re-render the scene at an arbitrary viewpoint which, in our case, would match the head pose of the watcher in the opposite, concurrent video frame. We discuss how our pipeline, when implemented in real time, would allow both video chat participants to unravel occluded scene content and "peer into" each other's dynamic video scenes to a certain extent, enabling full parallax up to the baselines of small head rotations and/or translations. This is similar to a VR headset's ability to determine the position and orientation of the wearer's head in 3D space and render any scene in alignment with this estimated head pose. We attempted to improve the performance of the retrained model by extending MannequinChallenge with the much larger RealEstate10K dataset. We present a quantitative and qualitative comparison of the model variants and describe our impactful dataset curation process, among other aspects.
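
A minimal sketch of how a multiplane image is turned into a rendered view: the RGBA planes are blended back to front with the standard "over" operator (for a novel viewpoint, each plane would first be warped by a homography before compositing). The random planes below stand in for the layers the network would predict.

```python
# Sketch: back-to-front "over" compositing of MPI planes into a single view.
import numpy as np

rng = np.random.default_rng(0)
num_planes, H, W = 8, 4, 4
rgb = rng.random((num_planes, H, W, 3))     # per-plane color
alpha = rng.random((num_planes, H, W, 1))   # per-plane opacity

# Composite from the farthest plane (index 0) to the nearest.
image = np.zeros((H, W, 3))
for i in range(num_planes):
    image = rgb[i] * alpha[i] + image * (1.0 - alpha[i])

print(image.shape)  # (4, 4, 3) rendered view
```
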
177

A Neural Network Approach to Border Gateway Protocol Peer Failure Detection and Prediction

White, Cory B. 01 December 2009 (has links) (PDF)
The size and speed of computer networks continue to expand at a rapid pace, as do the corresponding errors, failures, and faults inherent in such extensive networks. This thesis introduces a novel approach that interfaces Border Gateway Protocol (BGP) computer networks with neural networks to learn the precursor connectivity patterns that emerge prior to a node failure. Details of the design and construction of a framework that uses neural networks to learn and monitor BGP connection states, as a means of detecting and predicting BGP peer node failure, are presented. Moreover, this framework is used to monitor a BGP network, and a suite of tests is conducted to establish this neural network approach as a viable strategy for predicting BGP peer node failure. In all experiments performed, both of the proposed neural network architectures succeed in memorizing and utilizing the network connectivity patterns. Lastly, a discussion of the framework's generic design is presented to show how other types of networks and alternative machine learning techniques can be accommodated with relative ease.
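
A minimal sketch of the framing described: slide a window over a sequence of BGP peer-session state samples and label each window by whether the peer fails shortly afterward, producing examples a neural network can learn from. The state encoding, window length, and horizon are assumptions for illustration, not the thesis's design.

```python
# Sketch: turn a BGP peer-state time series into windowed training examples
# labeled by whether a failure follows within a short horizon.
import numpy as np

# Toy per-minute session states: 1 = Established, 0 = down/other FSM state.
states = np.array([1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1])
window, horizon = 5, 2

X, y = [], []
for t in range(len(states) - window - horizon):
    X.append(states[t:t + window])
    # Positive example if the peer leaves Established within the horizon.
    y.append(int((states[t + window:t + window + horizon] == 0).any()))

X, y = np.array(X), np.array(y)
print(X.shape, y.sum(), "failure-preceding windows")
# X and y could then be fed to a small feed-forward network, as in the thesis.
```
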
178

Automatic Music Transcription with Convolutional Neural Networks Using Intuitive Filter Shapes

Sleep, Jonathan 01 October 2017 (has links) (PDF)
This thesis explores the challenge of automatic music transcription with a combination of digital signal processing and machine learning methods. Automatic music transcription is important for musicians who cannot transcribe music themselves or who find it tedious. We start with an existing model, designed by Sigtia, Benetos, and Dixon, and develop it in a number of original ways. We find that using convolutional neural networks with filter shapes more tailored to spectrogram data yields better and faster transcription results when the new model is evaluated on a dataset of classical piano music. We also find that employing better practices improves results further. Finally, we open-source our test bed for pre-processing, training, and testing the models to assist future research.
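
A minimal sketch of the core idea, filters shaped for spectrogram data (tall along the frequency axis, narrow in time), is given below; the layer sizes are illustrative and not the thesis's exact architecture.

```python
# Sketch: a convolutional layer whose filters span many frequency bins but
# only a few time frames, suited to spectrogram input.
import torch
import torch.nn as nn

n_bins, n_frames = 229, 64             # e.g. mel/CQT bins x time frames
spectrogram = torch.randn(1, 1, n_bins, n_frames)

# Filter covering a wide band of frequencies but only 3 frames of time.
freq_conv = nn.Conv2d(in_channels=1, out_channels=16,
                      kernel_size=(25, 3), padding=(12, 1))

features = freq_conv(spectrogram)
print(features.shape)                   # torch.Size([1, 16, 229, 64])
```
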
179

SPOONS: Netflix Outage Detection Using Microtext Classification

Augustine, Eriq A 01 March 2013 (has links) (PDF)
Every week there are over a billion new posts to Twitter, and many of those messages contain feedback to companies about their services. One company that recognizes this unused source of information is Netflix. That is why Netflix initiated the development of a system that lets them respond to the millions of Twitter and Netflix users who act as sensors, reporting all types of user-visible outages. This system enhances the feedback loop between Netflix and its customers by increasing the amount of customer feedback that Netflix receives and reducing the time it takes for Netflix to receive the reports and respond to them. The goal of the SPOONS (Swift Perceptions of Online Negative Situations) system is to use Twitter posts to determine when Netflix users are reporting a problem with any of the Netflix services. This work covers the architecture of the SPOONS system and framework, as well as outage detection using tweet classification.
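
A minimal sketch of the two stages this implies: classify individual tweets as outage reports, then flag a monitoring window when report volume exceeds a baseline. The toy tweets, classifier choice, and threshold are placeholders, not SPOONS's actual models or data.

```python
# Sketch: tweet classification followed by a simple volume-based outage flag.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_tweets = [
    "netflix is down again, can't stream anything",
    "netflix keeps buffering and erroring out",
    "just finished a great netflix documentary",
    "netflix and chill tonight",
]
train_labels = [1, 1, 0, 0]  # 1 = reports a problem

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(train_tweets, train_labels)

# Tweets arriving during one monitoring window.
window = ["is netflix down for anyone else?", "loving this new netflix show",
          "netflix error on my tv", "netflix won't load on my console"]
outage_reports = int(clf.predict(window).sum())

baseline = 1  # assumed normal report volume per window
if outage_reports > baseline:
    print(f"possible outage: {outage_reports} reports this window")
```
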
180

Autonomous Satellite Operations for CubeSat Satellites

Anderson, Jason Lionel 01 March 2010 (has links) (PDF)
In the world of educational satellites, student teams manually conduct operations daily, sending commands and collecting downlinked data. Educational satellites typically travel in Low Earth Orbit, allowing line-of-sight communication for approximately thirty minutes each day. This is manageable for student teams, as the required manpower is minimal. The international Global Educational Network for Satellite Operations (GENSO), however, promises satellite contact upwards of sixteen hours per day by connecting earth stations all over the world through the Internet. With this dramatic increase in satellite communication time, it is unreasonable for student teams to conduct manual operations, and alternatives must be explored. This thesis first introduces a framework for developing different artificial intelligence agents to conduct autonomous satellite operations for CubeSat satellites. Three different implementations are then compared using Cal Poly's CP6 CubeSat and the University of Tokyo's XI-IV CubeSat to determine which method is most effective.
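
A minimal sketch of what an autonomous-operations agent might look like: a simple rule-based planner deciding what to do during a ground-station pass. The command names, thresholds, and timing budget are assumptions, and this is not one of the thesis's three implementations.

```python
# Sketch: a rule-based pass planner standing in for an autonomous-operations AI.
def plan_pass(telemetry, pending_downlinks, pass_seconds):
    commands = []
    if telemetry.get("battery_v", 0.0) < 3.5:
        return ["NOOP"]                      # conserve power, skip this pass
    if telemetry.get("beacon_stale", False):
        commands.append("REQUEST_BEACON")
    budget = pass_seconds - 30               # assumed per-pass overhead
    for packet in pending_downlinks:
        if budget <= 0:
            break
        commands.append(f"DOWNLINK {packet}")
        budget -= 10                          # assumed seconds per packet
    return commands or ["REQUEST_BEACON"]

print(plan_pass({"battery_v": 3.9, "beacon_stale": True},
                ["img_0042", "img_0043", "log_0007"], pass_seconds=600))
```
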
