About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Unsupervised Learning Trojan

Geigel, Arturo 04 November 2014 (has links)
This work presents a proof of concept of an Unsupervised Learning Trojan. The Unsupervised Learning Trojan presents new challenges over previous work on the Neural Network Trojan, since the attacker does not control most of the environment. The current work analyzes how the attack can succeed by proposing new assumptions under which it becomes viable. A general analysis of how the compromise can be theoretically supported is presented, providing enough background for practical implementation development. The analysis was carried out using three selected algorithms that cover a wide variety of unsupervised learning circumstances. Four encoding schemes on four datasets were chosen to represent actual scenarios under which the Trojan compromise might be targeted. A detailed procedure demonstrates the attack's viability under the assumed circumstances. Two hypothesis tests concerning the experimental setup were carried out, both yielding acceptance of the null hypothesis. The work concludes with a discussion of practical implementation issues and real-world scenarios in which this attack might be attempted.
192

Alternative Approaches to Correction of Malapropisms in AIML Based Conversational Agents

Brock, Walter A. 26 November 2014 (has links)
The use of Conversational Agents (CAs) utilizing Artificial Intelligence Markup Language (AIML) has been studied in a number of disciplines. Previous research has shown a great deal of promise. It has also documented significant limitations in the abilities of these CAs. Many of these limitations are related specifically to the method employed by AIML to resolve ambiguities in the meaning and context of words. While methods exist to detect and correct common errors in spelling and grammar of sentences and queries submitted by a user, one class of input error that is particularly difficult to detect and correct is the malapropism. In this research a malapropism is defined as a "verbal blunder in which one word is replaced by another similar in sound but different in meaning" ("malapropism," 2013). This research explored the use of alternative methods of correcting malapropisms in sentences input to AIML CAs using measures of Semantic Distance and tri-gram probabilities. Results of these alternative methods were compared against AIML CAs using only the Symbolic Reductions built into AIML. This research found that the two methodologies studied here did indeed lead to a small but measurable improvement in the performance of the CA in terms of the appropriateness of its responses as classified by human judges. However, it was also noted that in a large number of cases, the CA simply ignored the existence of a malapropism altogether in formulating its responses. In most of these cases, the interpretation and response to the user's input was of such a general nature that one might question the overall efficacy of the AIML engine. The answer to this question is a matter for further study.
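As a toy illustration of the tri-gram idea described above (not the author's implementation — the corpus and words here are invented), a word whose tri-gram probability in context is low relative to a similar-sounding alternative can be flagged as a likely malapropism:

```python
from collections import Counter

# Invented toy corpus; a real system would estimate tri-gram counts
# from a large reference corpus.
corpus = ("she is the very pinnacle of politeness "
          "he is the very pinnacle of courtesy "
          "it is the very pinnacle of style "
          "he is the very pineapple of politeness").split()

trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
total = sum(trigrams.values())

def trigram_prob(w1, w2, w3):
    # Add-one smoothing so unseen tri-grams get a small nonzero probability.
    return (trigrams[(w1, w2, w3)] + 1) / (total + len(trigrams))

# "pineapple" is a malapropism for the similar-sounding "pinnacle":
# the corrected tri-gram is more probable in this corpus.
p_mal = trigram_prob("the", "very", "pineapple")
p_fix = trigram_prob("the", "very", "pinnacle")
print(p_fix > p_mal)  # True
```

A full corrector would generate similar-sounding candidates (e.g. by phonetic edit distance) and propose the one with the highest tri-gram probability.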
193

An Exploratory Analysis of Twitter Keyword-Hashtag Networks and Knowledge Discovery Applications

Hamed, Ahmed A 01 January 2014 (has links)
The emergence of social media has impacted the way people think, communicate, behave, learn, and conduct research. In recent years, a large number of studies have analyzed and modeled this social phenomenon. Driven by commercial and social interests, social media has become an attractive subject for researchers. Accordingly, new models, algorithms, and applications that address specific domains and solve distinct problems have emerged. In this thesis, we propose a novel network model and a path mining algorithm called HashnetMiner to discover implicit knowledge that is not easily exposed using other network models. Our experiments using HashnetMiner have demonstrated anecdotal evidence of drug-drug interactions when applied to a drug reaction context. The proposed research comprises three parts built upon the common theme of utilizing hashtags in tweets. (1) Digital Recruitment on Twitter: we build an expert system shell for two different studies, a nicotine patch study where the system reads streams of tweets in real time and decides whether to recruit the senders to participate, and an environmental health study where the system identifies individuals who can participate in a survey using Twitter. (2) Does Social Media Big Data Make the World Smaller? This part provides an exploratory analysis of large-scale keyword-hashtag (K-H) networks generated from Twitter, using two measures: the number of vertices that connect any two keywords, and the eccentricity of keyword vertices, a well-known centrality and shortest-path measure. Our analysis shows that K-H networks conform to the phenomenon of the shrinking world and expose hidden paths among concepts. (3) We pose the following biomedical web science question: can patterns identified in Twitter hashtags provide clinicians with a powerful tool to extrapolate new medical therapies and/or drugs? We present a systematic network mining method, HashnetMiner, that operates on networks of medical concepts and hashtags. To the best of our knowledge, this is the first effort to present Biomedical Web Science models and algorithms that address such a question by means of data mining and knowledge discovery using hashtag-based networks.
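The eccentricity measure used in part (2) above can be sketched on a tiny invented keyword-hashtag graph (the edges are illustrative, not the thesis dataset): a vertex's eccentricity is its longest shortest-path distance to any other vertex, computed here by breadth-first search.

```python
from collections import deque

# Invented bipartite keyword-hashtag edges: keywords link to the
# hashtags they co-occur with in tweets.
edges = [("aspirin", "#heartburn"), ("ibuprofen", "#heartburn"),
         ("ibuprofen", "#headache"), ("caffeine", "#headache"),
         ("caffeine", "#insomnia")]

graph = {}
for u, v in edges:
    graph.setdefault(u, set()).add(v)
    graph.setdefault(v, set()).add(u)

def eccentricity(graph, source):
    """Longest shortest-path distance from `source`, via BFS over an
    unweighted graph (assumes the graph is connected)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return max(dist.values())

# "aspirin" reaches "#insomnia" only through the chain
# #heartburn -> ibuprofen -> #headache -> caffeine -> #insomnia.
print(eccentricity(graph, "aspirin"))  # 5
```

A shrinking-world analysis would track how such eccentricities fall as the network grows; paths like aspirin → caffeine are the "hidden paths among concepts" the abstract refers to.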
194

The Use of Automated Speech Recognition in Electronic Health Records in Rural Health Care Systems

Gargett, Ross 01 May 2016 (has links)
Since the HITECH (Health Information Technology for Economic and Clinical Health) Act was enacted, healthcare providers are required to achieve “Meaningful Use.” CPOE (Clinical Provider Order Entry) is one such requirement. Many providers prefer to dictate their orders rather than type them. Medical vocabulary is laden with its own terminology and department-specific acronyms, and many ASR (Automated Speech Recognition) systems are not trained to interpret this language. The purpose of this thesis research was to investigate the use and effectiveness of ASR in the healthcare industry. Multiple hospitals and multiple clinicians agreed to be followed through their use of an ASR system to enter patient data into the record. As a result of this research, the effectiveness and use of the ASR system were examined, and multiple issues with the use and accuracy of the system were uncovered.
195

AUTOMATED TREE-LEVEL FOREST QUANTIFICATION USING AIRBORNE LIDAR

Hamraz, Hamid 01 January 2018 (has links)
Traditional forest management relies on a small field sample and interpretation of aerial photography, which not only are costly to execute but also yield inaccurate estimates of the entire forest in question. Airborne light detection and ranging (LiDAR) is a remote sensing technology that records point clouds representing the 3D structure of a forest canopy and the terrain underneath. We present a method for segmenting individual trees from the LiDAR point clouds without making prior assumptions about tree crown shapes and sizes. We then present a method that vertically stratifies the point cloud into an overstory and multiple understory tree canopy layers. Using the stratification method, we modeled the occlusion of higher canopy layers with respect to point density. We also present a distributed computing approach that enables processing the massive data of an arbitrarily large forest. Lastly, we investigated using deep learning for coniferous/deciduous classification of point cloud segments representing individual tree crowns. We applied the developed methods to the University of Kentucky Robinson Forest, a natural, predominantly deciduous, closed-canopy forest. 90% of overstory and 47% of understory trees were detected with false positive rates of 14% and 2% respectively. Vertical stratification improved the detection rate of understory trees to 67% at the cost of increasing their false positive rate to 12%. According to our occlusion model, a point density of about 170 pt/m² is needed to segment understory trees located in the third layer as accurately as overstory trees. Using our distributed processing method, we segmented about two million trees within a 7400-ha forest in 2.5 hours using 192 processing cores, showing a speedup of ~170. Our deep learning experiments showed high classification accuracies (~82% coniferous and ~90% deciduous) without the need to manually assemble the features.
In conclusion, the methods developed are steps forward to remote, accurate quantification of large natural forests at the individual tree level.
196

Recurrent Neural Networks and Their Applications to RNA Secondary Structure Inference

Willmott, Devin 01 January 2018 (has links)
Recurrent neural networks (RNNs) are state-of-the-art sequential machine learning tools, but have difficulty learning sequences with long-range dependencies due to the exponential growth or decay of gradients backpropagated through the RNN. Some methods overcome this problem by modifying the standard RNN architecture to force the recurrent weight matrix W to remain orthogonal throughout training. The first half of this thesis presents a novel orthogonal RNN architecture that enforces orthogonality of W by parametrizing it with a skew-symmetric matrix via the Cayley transform. We present rules for backpropagation through the Cayley transform, show how to deal with the Cayley transform's singularity, and compare its performance on benchmark tasks to other orthogonal RNN architectures. The second half explores two deep learning approaches to problems in RNA secondary structure inference and compares them to a standard structure inference tool, the nearest neighbor thermodynamic model (NNTM). The first uses RNNs to detect paired or unpaired nucleotides in the RNA structure, which are then converted into synthetic auxiliary data that direct NNTM structure predictions. The second method uses recurrent and convolutional networks to directly infer RNA base pairs. In many cases, these approaches improve over NNTM structure predictions by 20-30 percentage points.
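The Cayley parametrization mentioned in the abstract can be sketched in a few lines of NumPy (illustrative only, not the thesis code): for any skew-symmetric A, the Cayley transform produces an orthogonal W, and I + A is always invertible because the eigenvalues of a skew-symmetric matrix are purely imaginary.

```python
import numpy as np

def cayley(A):
    """Map a skew-symmetric matrix A to an orthogonal matrix W via the
    Cayley transform W = (I + A)^{-1} (I - A)."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I + A, I - A)

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M - M.T              # skew-symmetric: A.T == -A
W = cayley(A)

# W is orthogonal up to floating-point error: W.T @ W == I.
print(np.allclose(W.T @ W, np.eye(4)))  # True
```

Training over the unconstrained skew-symmetric A (n(n-1)/2 free entries) keeps W exactly orthogonal at every step; the transform's singularity concerns orthogonal matrices with -1 as an eigenvalue, which this map cannot reach without special handling.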
197

Modeling and Mapping Location-Dependent Human Appearance

Bessinger, Zachary 01 January 2018 (has links)
Human appearance is highly variable and depends on individual preferences, such as fashion, facial expression, and makeup. These preferences depend on many factors including a person's sense of style, what they are doing, and the weather. These factors, in turn, are dependent upon geographic location and time. In our work, we build computational models to learn the relationship between human appearance, geographic location, and time. The primary contributions are a framework for collecting and processing geotagged imagery of people, a large dataset collected by our framework, and several generative and discriminative models that use our dataset to learn the relationship between human appearance, location, and time. Additionally, we build interactive maps that allow for inspection and demonstration of what our models have learned.
198

Learning to Map the Visual and Auditory World

Salem, Tawfiq 01 January 2019 (has links)
The appearance of the world varies dramatically not only from place to place but also from hour to hour and month to month. Billions of images that capture this complex relationship are uploaded to social-media websites every day and often are associated with precise time and location metadata. This rich source of data can be beneficial to improve our understanding of the globe. In this work, we propose a general framework that uses these publicly available images for constructing dense maps of different ground-level attributes from overhead imagery. In particular, we use well-defined probabilistic models and a weakly-supervised, multi-task training strategy to provide an estimate of the expected visual and auditory ground-level attributes consisting of the type of scenes, objects, and sounds a person can experience at a location. Through a large-scale evaluation on real data, we show that our learned models can be used for applications including mapping, image localization, image retrieval, and metadata verification.
199

Designing 2D Interfaces For 3D Gesture Retrieval Utilizing Deep Learning

Southard, Spencer 01 January 2017 (has links)
Gesture retrieval can be defined as the process of retrieving the correct meaning of a hand movement from a pre-assembled gesture dataset. The purpose of the research discussed here is to design and implement a gesture interface system that facilitates retrieval for an American Sign Language gesture set using a mobile device. The principal challenge discussed here is the normalization of 2D gestures generated from the mobile device interface and the 3D gestures captured from video samples into a common data structure that can be utilized by deep learning networks. This thesis covers the convolutional neural networks and autoencoders used to transform 2D gestures into the correct form before classification by a convolutional neural network. The architecture and implementation of the front-end and back-end systems and their respective responsibilities are discussed. Lastly, this thesis covers the results of the experiment, breaks down the final classification accuracy of 83%, and discusses how this work could be further improved by using depth-based videos for the 3D data.
200

Automated Species Classification Methods for Passive Acoustic Monitoring of Beaked Whales

LeBien, John 20 December 2017 (has links)
The Littoral Acoustic Demonstration Center has collected passive acoustic monitoring data in the northern Gulf of Mexico since 2001. Recordings were made in 2007 near the Deepwater Horizon oil spill that provide a baseline for an extensive study of regional marine mammal populations in response to the disaster. Animal density estimates can be derived from detections of echolocation signals in the acoustic data. Beaked whales are of particular interest as they remain one of the least understood groups of marine mammals, and relatively few abundance estimates exist. Efficient methods for classifying detected echolocation transients are essential for mining long-term passive acoustic data. In this study, three data clustering routines using k-means, self-organizing maps, and spectral clustering were tested with various features of detected echolocation transients. Several methods effectively isolated the echolocation signals of regional beaked whales at the species level. Feedforward neural network classifiers were also evaluated, and performed with high accuracy under various noise conditions. The waveform fractal dimension was tested as a feature for marine biosonar classification and improved the accuracy of the classifiers. [This research was made possible by a grant from The Gulf of Mexico Research Initiative. Data are publicly available through the Gulf of Mexico Research Initiative Information & Data Cooperative (GRIIDC) at https://data.gulfresearchinitiative.org.] [DOIs: 10.7266/N7W094CG, 10.7266/N7QF8R9K]
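A minimal sketch of one of the clustering routines named above, k-means, applied to invented 2-D click features (illustrative values, not the study's data — real features would come from detected echolocation transients):

```python
def kmeans(points, k, iters=10):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute each centroid as its cluster mean (naive seeding with the
    first k points)."""
    centroids = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                     else centroids[i] for i, cl in enumerate(clusters)]
    return centroids, clusters

# Invented 2-D click features (peak frequency in kHz, duration in µs)
# for two species-like groups, interleaved so the seeds span both.
clicks = [(38.0, 270.0), (24.0, 175.0), (39.5, 260.0),
          (25.5, 180.0), (40.1, 275.0), (23.8, 170.0)]
centroids, clusters = kmeans(clicks, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

In the study's setting, each point would instead be a feature vector for a detected transient, and clusters that isolate one species' clicks indicate the features carry species-level information.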
