  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
871

A Machine Learning approach to Febrile Classification

Kostopouls, Theodore P 25 April 2018 (has links)
General health screening is needed to decrease the risk of pandemic in high-volume areas. Thermal characterization via infrared imaging is an effective technique for fever detection; however, strict use requirements combined with highly controlled environmental conditions compromise the practicality of such a system. Applying advanced processing techniques to thermograms of individuals can remove some of these requirements, allowing for more flexible classification algorithms. The purpose of this research was to identify individuals with febrile status using modern thermal imaging and machine learning techniques in a minimally controlled setting. Two methods were evaluated on data that contained environmental and acclimation noise due to the data-gathering technique. The first, a pretrained VGG16 convolutional neural network, achieved an F1 score of 0.77 (accuracy of 76%) on a balanced dataset. The second used VGG16 as a feature extractor whose outputs feed a principal components analysis, with a support vector machine for classification. This technique obtained an F1 score of 0.84 (accuracy of 85%) on balanced datasets. These results demonstrate that machine learning is a viable technique for classifying febrile status independent of the noise in the data.
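The second pipeline the abstract describes — high-dimensional CNN features reduced by principal components analysis, then classified by a support vector machine — can be sketched as follows. The "thermogram features" here are synthetic stand-ins with illustrative dimensions, not the thesis's data or exact configuration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-ins for CNN feature vectors of thermograms (synthetic, well-separated classes).
febrile = rng.normal(1.0, 1.0, size=(40, 128))
afebrile = rng.normal(-1.0, 1.0, size=(40, 128))
X = np.vstack([febrile, afebrile])
y = np.array([1] * 40 + [0] * 40)

# Reduce the high-dimensional features with PCA, then classify with an SVM.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```

In practice the features would come from the fully connected layers of a pretrained VGG16, and performance would be measured on a held-out split rather than the training set.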
872

A Comprehensive Comparative Performance Evaluation of Signal Processing Features in Detecting Alcohol Consumption from Gait Data

Qi, Muxi 24 April 2016 (has links)
Excessive alcohol consumption is the third leading lifestyle-related cause of death in the United States. Alcohol intoxication has a significant effect on how the human body operates and is especially harmful to the human brain and heart. To help individuals monitor their alcohol intoxication, several methods have been proposed to detect alcohol consumption levels, including direct Blood Alcohol Concentration (BAC) measurement by breathalyzers and various wearable sensor devices. More recently, Arnold et al. proposed a machine-learning-based method of passively inferring intoxication levels from gait data by classifying smartphone accelerometer readings. Their work utilized 11 smartphone accelerometer features in the time and frequency domains, achieving a classification accuracy of 57%. This thesis extends the work of Arnold et al. by extracting and comparing the efficacy of a more comprehensive list of 27 signal processing features in the time, frequency, wavelet, statistical, and information theory domains, evaluating how much using them improves the accuracy of supervised BAC classification of accelerometer gait data. Correlation-based Feature Selection (CFS) is used to identify and rank the features most correlated with alcohol-induced gait changes. 22 of the 27 features investigated showed statistically significant correlations with BAC levels. The most correlated features were then used to classify labeled samples of intoxicated gait data in order to test their detection accuracy. Statistical features had the best classification accuracy of 83.89%, followed by time domain and frequency domain features with accuracies of 83.22% and 82.21%, respectively. Classification using all 22 statistically significant signal processing features yielded an accuracy of 84.9% for the Random Forest classifier.
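The filter-then-classify pattern described above — rank features by correlation with the label, then train a Random Forest on the top-ranked ones — can be sketched on toy data. The "gait features" and labels are invented for illustration; CFS proper also accounts for inter-feature redundancy, which this simple correlation filter omits:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 200
bac = rng.integers(0, 2, size=n)          # 0 = sober, 1 = intoxicated (toy labels)
# Toy gait features: two informative (sway, stride variance) and one pure noise.
sway = bac + rng.normal(0, 0.5, n)
stride_var = 2 * bac + rng.normal(0, 1.0, n)
noise = rng.normal(0, 1.0, n)
X = np.column_stack([sway, stride_var, noise])

# Rank features by absolute Pearson correlation with the label (a CFS-style filter).
corrs = [abs(np.corrcoef(X[:, j], bac)[0, 1]) for j in range(X.shape[1])]
ranked = np.argsort(corrs)[::-1]
print("feature ranking:", ranked)

# Train a Random Forest on the top-ranked features.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:, ranked[:2]], bac)
print("training accuracy:", clf.score(X[:, ranked[:2]], bac))
```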
873

Why did they cite that?

Lovering, Charles 26 April 2018 (has links)
We explore a machine learning task, evidence recommendation (ER): extracting evidence from a source document to support an external claim. This task is an instance of question answering. We apply ER to academic publications because they cite other papers for the claims they make. Reading cited papers to corroborate claims is time-consuming, and an automated ER tool could expedite it. We therefore propose a methodology for collecting a dataset of academic papers and their references. We explore deep learning models for ER, achieving 77% accuracy with pairwise models and 75% pairwise accuracy with document-wise models.
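A crude non-neural baseline for the pairwise idea is to score each candidate sentence against the claim with TF-IDF cosine similarity and recommend the best match. This is only a sketch of the scoring setup with made-up sentences, not the thesis's deep learning models:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

claim = "dropout reduces overfitting in deep neural networks"
candidates = [
    "We show that dropout acts as a regularizer and reduces overfitting.",
    "The corpus was tokenized with a standard whitespace tokenizer.",
    "Training ran for ten epochs on a single GPU.",
]

# Score each candidate sentence against the claim; recommend the top-scoring one.
vec = TfidfVectorizer().fit([claim] + candidates)
scores = cosine_similarity(vec.transform([claim]), vec.transform(candidates))[0]
best = int(scores.argmax())
print(best, candidates[best])
```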
874

Applying Causal Models to Dynamic Difficulty Adjustment in Video Games

Moffett, Jeffrey P 26 April 2010 (has links)
We have developed a causal model of how various aspects of a computer game influence how much a player enjoys the experience, as well as how long the player will play. This model is organized into three layers: a generic layer that applies to any game, a refinement layer for a particular game genre, and an instantiation layer for a specific game. Two experiments using different games were performed to validate the model. The model was used to design and implement a system and API for Dynamic Difficulty Adjustment (DDA). This DDA system and API uses machine learning techniques to make changes to a game in real time, with the goal of improving the user's experience and keeping them playing longer. A final experiment is presented that shows the effectiveness of the designed system.
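The real-time adjustment loop at the heart of any DDA system can be sketched as follows, assuming a scalar difficulty in [0, 1] and binary win/loss outcomes. The function and its parameters are hypothetical illustrations, not the thesis's API:

```python
def adjust_difficulty(difficulty, recent_outcomes, target=0.5, step=0.1):
    """Nudge difficulty toward a target player success rate (toy sketch)."""
    success_rate = sum(recent_outcomes) / len(recent_outcomes)
    if success_rate > target + 0.1:      # player winning too often: make it harder
        difficulty += step
    elif success_rate < target - 0.1:    # player losing too often: make it easier
        difficulty -= step
    return max(0.0, min(1.0, difficulty))

d = 0.5
d = adjust_difficulty(d, [1, 1, 1, 1, 0])   # 80% wins -> raise difficulty
print(d)
```

A model-driven system would replace the fixed threshold rule with predictions from the learned causal model of enjoyment and play time.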
875

Automating endoscopic camera motion for teleoperated minimally invasive surgery using inverse reinforcement learning

Agrawal, Ankur S 13 December 2018 (has links)
During a laparoscopic surgery, an endoscopic camera provides visual feedback of the surgery to the surgeon and is controlled by a skilled assisting surgeon or a nurse. However, in robot-assisted teleoperated systems such as the da Vinci surgical system, this control lies with the operating surgeon. This results in the added task of constantly changing the viewpoint of the endoscope, which can be disruptive and can also increase the cognitive load on the surgeon. The work presented in this thesis aims to provide an approach to intelligent camera control for such systems using machine learning algorithms. A pick-and-place task was selected to demonstrate this approach. To add a layer of intelligence to the endoscope, the task was divided into subtasks representing the intent of the user. Neural networks with long short-term memory (LSTM) cells were trained to classify the motion of the instruments into subtasks, and a policy was calculated for each subtask using inverse reinforcement learning (IRL). Since current surgical robots do not allow the camera and instruments to move simultaneously, no expert dataset was available for training the models. Hence, a user study was conducted in which participants were asked to complete the task of picking and placing a ring on a peg in a 3-D immersive simulation environment created using the CHAI libraries. A virtual reality headset, the Oculus Rift, was used during the study to track the head movements of the users and obtain their viewpoints while they performed the task. This was treated as expert data and was used to train the algorithm to automate the endoscope motion.
A 71.3% accuracy was obtained for the classification of the task into 4 subtasks, and the inverse reinforcement learning produced an automated endoscope trajectory that was 94.7% similar to the collected human trajectories, demonstrating that the approach presented in this thesis can be used to automate endoscope motion in the manner of a skilled assisting surgeon.
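A core quantity in many IRL formulations is the discounted feature expectation of the expert demonstrations, which the learned policy is then matched against. A minimal sketch of that computation on a toy trajectory; the camera-state features here are hypothetical, not the study's actual representation:

```python
import numpy as np

def feature_expectations(trajectories, gamma=0.9):
    """Average discounted feature counts over demonstrations (a core IRL quantity)."""
    mu = np.zeros(trajectories[0].shape[1])
    for traj in trajectories:
        discounts = gamma ** np.arange(len(traj))
        mu += discounts @ traj
    return mu / len(trajectories)

# Toy demonstration: each row is a per-timestep feature vector of the camera state
# (e.g. normalized distances of the instruments from the image center) -- invented.
demo = [np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])]
print(feature_expectations(demo))
```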
876

Topic modeling in marketing: recent advances and research opportunities

Reisenbichler, Martin, Reutterer, Thomas 04 1900 (has links) (PDF)
Using a probabilistic approach for exploring latent patterns in high-dimensional co-occurrence data, topic models offer researchers a flexible and open framework for soft-clustering large data sets. In recent years, there has been growing interest among marketing scholars and practitioners in adopting topic models in various marketing application domains. However, to date there is no comprehensive overview of this rapidly evolving field. By analyzing a set of 61 published papers along with conceptual contributions, we systematically review this highly heterogeneous area of research. In doing so, we characterize extant contributions employing topic models in marketing along the dimensions of data structures and retrieval of input data, implementation and extensions of basic topic models, and model performance evaluation. Our findings confirm that considerable progress has been made in various marketing sub-areas. However, there is still scope for promising future research, in particular with respect to integrating multiple, dynamic data sources, including time-varying covariates, and combining exploratory topic models with powerful predictive marketing models.
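The basic workflow the review surveys — fit Latent Dirichlet Allocation (the canonical topic model) to a term-count matrix and read off soft document-topic distributions — looks roughly like this on an invented four-document "marketing" corpus:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "brand loyalty customer satisfaction retail",
    "customer retail brand purchase loyalty",
    "neural network training gradient descent",
    "gradient network training neural model",
]

# Fit a 2-topic LDA model; transform() yields one topic distribution per document.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)
print(doc_topics.shape)
```

Each row of `doc_topics` is a probability distribution over topics, which is what makes the clustering "soft."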
877

A random forest approach to segmenting and classifying gestures

Joshi, Ajjen Das 12 March 2016 (has links)
This thesis investigates a gesture segmentation and recognition scheme that employs a random forest classification model. A complete gesture recognition system should localize and classify each gesture from a given gesture vocabulary, within a continuous video stream. Thus, the system must determine the start and end points of each gesture in time, as well as accurately recognize the class label of each gesture. We propose a unified approach that performs the tasks of temporal segmentation and classification simultaneously. Our method trains a random forest classification model to recognize gestures from a given vocabulary, as presented in a training dataset of video plus 3D body joint locations, as well as out-of-vocabulary (non-gesture) instances. Given an input video stream, our trained model is applied to candidate gestures using sliding windows at multiple temporal scales. The class label with the highest classifier confidence is selected, and its corresponding scale is used to determine the segmentation boundaries in time. We evaluated our formulation in segmenting and recognizing gestures from two different benchmark datasets: the NATOPS dataset of 9,600 gesture instances from a vocabulary of 24 aircraft handling signals, and the CHALEARN dataset of 7,754 gesture instances from a vocabulary of 20 Italian communication gestures. The performance of our method compares favorably with state-of-the-art methods that employ Hidden Markov Models or Hidden Conditional Random Fields on the NATOPS dataset. We conclude with a discussion of the advantages of using our model.
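The multi-scale sliding-window scheme described above — score candidate windows at several temporal scales and keep the most confident label together with its window boundaries — can be sketched as follows, with a toy confidence function standing in for the trained random forest:

```python
import numpy as np

def best_window(scores_fn, stream_len, scales=(10, 20, 30), stride=5):
    """Slide windows at several temporal scales; keep the most confident one."""
    best = (-np.inf, None)
    for w in scales:
        for start in range(0, stream_len - w + 1, stride):
            conf, label = scores_fn(start, start + w)
            if conf > best[0]:
                best = (conf, (label, start, start + w))
    return best[1]

# Toy confidence function: a "gesture" occupies frames 20-40 (invented example).
def toy_scores(s, e):
    overlap = max(0, min(e, 40) - max(s, 20)) / (e - s)
    return overlap, "wave"

print(best_window(toy_scores, 60))
```

The winning window's endpoints give the temporal segmentation, so classification and segmentation fall out of a single pass, as in the unified approach above.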
878

Understanding Human Activities at Large Scale

Caba Heilbron, Fabian David 03 1900 (has links)
With the growth of online media, surveillance, and mobile cameras, the amount and size of video databases are increasing at an incredible pace. For example, YouTube reported that over 400 hours of video are uploaded every minute to their servers. Arguably, people are the most important and interesting subjects of such videos. The computer vision community has embraced this observation to validate the crucial role that human action recognition plays in building smarter surveillance systems, semantically aware video indexes, and more natural human-computer interfaces. However, despite the explosion of video data, the ability to automatically recognize and understand human activities is still somewhat limited. In this work, I address four different challenges in scaling up action understanding. First, I tackle existing dataset limitations with a flexible framework that allows continuous acquisition, crowdsourced annotation, and segmentation of online videos, culminating in a large-scale, rich, and easy-to-use activity dataset known as ActivityNet. Second, I develop an action proposal model that takes a video and directly generates temporal segments that are likely to contain human actions. The model has two appealing properties: (a) it retrieves temporal locations of activities with high recall, and (b) it produces these proposals quickly. Third, I introduce a model that exploits action-object and action-scene relationships to improve the localization quality of a fast generic action proposal method and to quickly prune out irrelevant activities in a cascade fashion. These two features lead to an efficient and accurate cascade pipeline for temporal activity localization. Lastly, I introduce a novel active learning framework for temporal localization that aims to mitigate the data dependency issue of contemporary action detectors.
By creating a large-scale video benchmark, designing efficient action scanning methods, enriching approaches with high-level semantics for activity localization, and developing an effective strategy for building action detectors with limited data, this thesis takes a step closer toward general video understanding.
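Temporal action proposal pipelines of this kind typically finish with greedy non-maximum suppression over scored segments, keeping high-scoring proposals and discarding heavily overlapping ones. A generic sketch of that step (not the thesis's specific model):

```python
def temporal_nms(proposals, iou_thresh=0.5):
    """Greedy NMS over scored temporal segments given as (start, end, score)."""
    def iou(a, b):
        inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union

    kept = []
    # Visit proposals from highest to lowest score; keep those that do not
    # overlap an already-kept segment beyond the IoU threshold.
    for s, e, score in sorted(proposals, key=lambda p: p[2], reverse=True):
        if all(iou((s, e), (ks, ke)) < iou_thresh for ks, ke, _ in kept):
            kept.append((s, e, score))
    return kept

props = [(0, 10, 0.9), (1, 11, 0.8), (20, 30, 0.7)]
print(temporal_nms(props))
```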
879

Mitochondrial dynamics: regulation of insulin secretion and novel quantification methods

Miller, Nathanael A. 12 June 2018 (has links)
The recent surge in Type 2 Diabetes (T2D) has renewed interest in the study of cellular metabolism – which mitochondria tightly control. Previous work has shown mitochondrial dysfunction plays a critical role in the development of metabolic diseases, such as T2D. The pancreatic β-cell synthesizes and secretes insulin in vivo in response to diverse fuel signals such as glucose, fatty acids, and amino acids; failure or loss of β-cell mass is a hallmark of T2D. Pancreatic β-cell mitochondria are dynamic organelles living a life of fusion, fission, and movement collectively called mitochondrial dynamics. Mitochondrial fusion is impaired in obesity and models of obesity, while basal secretion of insulin is elevated. Previous studies demonstrate that hyperinsulinemia alone is sufficient to induce insulin resistance, yet the relationship between mitochondrial morphology and basal insulin secretion has not yet been studied. Here, we investigated the link between loss of mitochondrial fusion and insulin secretion at basal glucose concentrations by reducing the expression of mitofusin 2 (Mfn2), which controls mitochondrial morphology and metabolism. We found that forced mitochondrial fragmentation caused increased insulin secretion at basal glucose concentrations. In addition, fragmentation of mitochondria enhanced the secretory response of islets to palmitate at nonstimulatory glucose concentrations and increased fatty acid uptake and oxidation in a cell model of pancreatic β-cells. We developed unique solutions to challenges posed by the measurement of mitochondrial dynamics via confocal microscopy by using novel image analysis techniques, including a novel method of mitochondrial segmentation. This technique also revealed novel biology of brown adipose tissue mitochondria dependent on their localization within the cell. Our findings demonstrate that changes to mitochondrial dynamics in the β-cell can lead to increased insulin secretion at basal glucose concentrations. 
These data support the possibility that hyperinsulinemia, and the downstream outcome of insulin resistance, can be initiated by altered mitochondrial function in the β-cell independently of other tissues. By uncovering a new process that governs basal insulin secretion, we provide novel targets for regulation, such as mitochondrial morphology or fatty acid-induced insulin secretion, that may present new approaches to the treatment of diabetes.
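The segmentation-and-counting step underlying this kind of image analysis — threshold a fluorescence image, then label connected components to measure object number and size — can be sketched with scipy on a toy image. This is a generic illustration, not the study's segmentation method or data:

```python
import numpy as np
from scipy import ndimage

# Toy fluorescence image: two bright "mitochondria" on a dark background.
img = np.zeros((20, 20))
img[2:5, 2:8] = 1.0      # elongated (fused-like) object, 3 x 6 pixels
img[12:14, 12:14] = 1.0  # small (fragmented-like) object, 2 x 2 pixels

# Threshold, then label connected components to count and size the objects.
mask = img > 0.5
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, range(1, n + 1))
print(n, sorted(sizes))
```

Distributions of per-object size and shape from such labeling are one common way to quantify fusion versus fragmentation.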
880

Bidirectional long short-term memory network for proto-object representation

Zhou, Quan 09 October 2018 (has links)
Researchers have developed many visual saliency models in order to advance the technology in computer vision. Neural networks, Convolutional Neural Networks (CNNs) in particular, have successfully differentiated objects in images through feature extraction. Meanwhile, Cummings et al. proposed a proto-object image saliency (POIS) model showing that perceptual objects or shapes can be modelled through a bottom-up saliency algorithm. Inspired by their work, this research aims to explore the embedded features in the proto-object representations and to utilize artificial neural networks (ANNs) to capture and predict the saliency output of POIS. A combination of a CNN and a bidirectional long short-term memory (BLSTM) neural network is proposed for this saliency model as a machine learning alternative to the border ownership and grouping mechanisms in POIS. As ANNs become more efficient at performing visual saliency tasks, the result of this work would extend their application in computer vision through a successful implementation for proto-object-based saliency.
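For context on what "bottom-up saliency" means, a classical center-surround saliency map can be sketched with Gaussian blurs at two scales: regions that differ from their surround light up. This is a generic textbook-style illustration, not the POIS algorithm or the proposed CNN-BLSTM model:

```python
import numpy as np
from scipy import ndimage

# Toy image: a bright proto-object on a uniform background.
img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0

# Center-surround difference: fine-scale blur minus coarse-scale blur.
center = ndimage.gaussian_filter(img, sigma=1)
surround = ndimage.gaussian_filter(img, sigma=4)
saliency = np.abs(center - surround)

# The most salient location should fall on or near the object.
r, c = np.unravel_index(saliency.argmax(), saliency.shape)
print(r, c)
```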
