101 |
Visual face tracking and its applications / Tu, Jilin, January 2007
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2007. / Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1092. Adviser: Thomas Huang. Includes bibliographical references (leaves 159-172). Available on microfilm from ProQuest Information and Learning.
|
102 |
US Knowledge Worker Augmentation versus Replacement with AI Software: Impact on Organizational Returns, Innovation, and Resistance / Boggan, Chad M., 29 November 2018
This praxis studies the effects on organizations of replacing US knowledge workers with artificial intelligence software (automation) and enhancing US knowledge workers with artificial intelligence software (augmentation). The effects on organizational innovation, resistance, and return on investment (ROI) are studied.

The main purpose of this study is to confirm the relationships between automation/augmentation, innovation, resistance, and ROI. This study is also meant to aid researchers, policy makers, executives, and others that have influence over automation and augmentation decisions. The implications of these decisions will reverberate through the multi-billion-dollar US job market in the coming years.

Quantitative methods were used to look at researched examples of both automation and augmentation. Data from 1993 to 2018 was gathered and assessed on innovation, resistance, and ROI from a number of different industries and a number of different types of firms based on size and ownership structure (public or private). Statistical methods were then used to compare the effects of automation and augmentation on organizations.

Research data was gathered to study the relationship between innovation and ROI, as well as the relationship between resistance and ROI. These relationships were used to combine ROI, innovation, and resistance using Monte Carlo simulations. This combination of ROI, innovation, and resistance was then used to compare the combined effects of automation and augmentation on organizations over time.
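A minimal sketch of how such a Monte Carlo combination might look. The distributions, weights, and additive combination below are illustrative assumptions, not the praxis's estimated model:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 100_000  # number of Monte Carlo draws

def simulate_combined_return(roi_mean, roi_sd, innovation_gain, resistance_loss):
    """Draw ROI, innovation, and resistance effects and combine them.

    All parameter values and the additive combination are illustrative
    assumptions; the study estimated its relationships from 1993-2018 data.
    """
    roi = rng.normal(roi_mean, roi_sd, N)               # baseline return on investment
    innovation = rng.normal(innovation_gain, 0.02, N)   # uplift attributed to innovation
    resistance = rng.normal(resistance_loss, 0.02, N)   # drag attributed to worker resistance
    return roi + innovation - resistance

automation = simulate_combined_return(roi_mean=0.12, roi_sd=0.05,
                                      innovation_gain=0.01, resistance_loss=0.04)
augmentation = simulate_combined_return(roi_mean=0.10, roi_sd=0.05,
                                        innovation_gain=0.03, resistance_loss=0.01)

print(f"automation   mean combined return = {automation.mean():.3f}")
print(f"augmentation mean combined return = {augmentation.mean():.3f}")
print(f"P(augmentation beats automation)  = {(augmentation > automation).mean():.3f}")
```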
|
103 |
Computer Vision-based Estimation of Body Mass Distribution, Center of Mass, and Body Types: Design and Comparative Study / Gautam, Kumar, 30 August 2018
Body mass distribution and center of mass (CoM) are important topics in the field of human biomechanics and the healthcare industry. Increasing global obesity has led researchers to measure body parameters. This project focuses on developing an automatic computer vision approach to calculate the body mass distribution and CoM, as well as identify body types, with a minimum setup cost.

In this project, a 3-D calibrated experimental setup was devised to take images of four male subjects in three views: front view, left side view, and right side view. First, a method was devised to separate the human subject from the background. Second, a novel approach was developed to find the CoM, percentage body mass distribution, and body types using two models: the Simulated Skeleton Model (SSM) and the Simulated Skeleton Matrix (SSMA). The CoM using this method was 94.36% of the CoM calculated with a reaction board experiment. Total body mass using this method was 96.6% of the total body mass calculated with the weighing balance. This project has three components: (1) finding the body mass distribution and comparing the results with the weighing balance, (2) finding the CoM and comparing the results with the reaction board experiment, and (3) offering new ways to conceptualize the three body types (ectomorph, endomorph, and mesomorph) with ratings in the range of 0 to 5.
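A minimal sketch of the general idea of estimating a center of mass from a segmented silhouette and per-segment mass fractions. The segment list, mass fractions, and function names are illustrative assumptions, not the SSM or SSMA models themselves:

```python
import numpy as np

# Illustrative per-segment mass fractions (assumed values, loosely following
# standard anthropometric tables; the thesis derives its own distribution).
SEGMENT_MASS_FRACTION = {
    "head": 0.081, "trunk": 0.497,
    "left_arm": 0.050, "right_arm": 0.050,
    "left_leg": 0.161, "right_leg": 0.161,
}

def center_of_mass(segment_centroids):
    """Mass-weighted average of segment centroids (x, y in image coordinates).

    segment_centroids: dict mapping segment name -> (x, y) centroid obtained
    from the segmented silhouette in a calibrated view.
    """
    total = sum(SEGMENT_MASS_FRACTION.values())
    com = np.zeros(2)
    for name, centroid in segment_centroids.items():
        com += SEGMENT_MASS_FRACTION[name] * np.asarray(centroid, dtype=float)
    return com / total

# Example with made-up pixel centroids from a front-view segmentation.
centroids = {
    "head": (160, 40), "trunk": (160, 150),
    "left_arm": (110, 140), "right_arm": (210, 140),
    "left_leg": (140, 280), "right_leg": (180, 280),
}
print("estimated CoM (x, y):", center_of_mass(centroids))
```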
|
104 |
AIRtouch: An Intelligent Virtual Keyboard / Luvani, Kuldeep D., 02 February 2018
Computers have come a long way to become part and parcel of people's everyday lives. There is a rising need to provide innovative solutions that meet the ever-increasing demand and offer new features that make work easier for people. This thesis gives a detailed insight into one such new feature, which may soon see the light of day. The idea is to create the AIRtouch keyboard, which takes typing technology one step further by doing away with the need for a physical keyboard. Gloves worn by the user make it possible to type directly on a surface. This thesis first introduces the background and explains the idea, followed by a detailed explanation of the algorithm used and how it works. The description is divided into two broad categories: the hardware side and the software side. The hardware part focuses mainly on the sensors and the motherboard used, followed by the glove construction details and their implementation. The software part gives a detailed explanation of the algorithm used and the code that implements it.
|
105 |
A Novel Computer Vision-Based Framework for Supervised Classification of Energy Outbreak Phenomena / Abusaleh, Sumaya, 06 March 2018
Today, there is a need for a properly designed surveillance system that detects and categorizes explosion phenomena in order to identify the explosion risk and reduce its impact through mitigation and preparedness. This dissertation introduces state-of-the-art classification of explosion phenomena through pattern recognition techniques on color images. Consequently, we present a novel taxonomy for explosion phenomena. In particular, we demonstrate different aspects of volcanic eruptions and nuclear explosions within the proposed taxonomy, including scientific formation, real examples, existing monitoring methodologies, and their limitations. In addition, we propose a novel framework designed to categorize explosion phenomena against non-explosion phenomena. Moreover, a new dataset, Volcanic and Nuclear Explosions (VNEX), was collected. VNEX totals 10,654 samples and includes the following patterns: pyroclastic density currents, lava fountains, lava and tephra fallout, nuclear explosions, wildfires, fireworks, and sky clouds.

In order to achieve high reliability in the proposed explosion classification framework, we employ various feature extraction approaches. We calculate the intensity levels to extract the texture features. Moreover, we utilize the YCbCr color model to calculate the amplitude features. We also employ the Radix-2 Fast Fourier Transform to compute the frequency features. Furthermore, we use the uniform local binary patterns technique to compute the histogram features. These discriminative features are combined into a single input vector that provides valuable insight into the images and is then fed into the following classification techniques: Euclidean distance, correlation, k-nearest neighbors, one-against-one multiclass support vector machines with different kernels, and the multilayer perceptron model. Evaluation results show that the design of the proposed framework is effective and robust. Furthermore, a trade-off between the computation time and the classification rate was achieved.
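A minimal sketch of the feature-fusion idea described above: texture, color-amplitude, frequency, and local-binary-pattern features are concatenated into one vector and fed to a k-nearest-neighbors classifier. The helper names, bin counts, and parameter choices are illustrative assumptions using scikit-image and scikit-learn, not the dissertation's exact pipeline:

```python
import numpy as np
from skimage.color import rgb2ycbcr, rgb2gray
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def extract_features(rgb_image):
    """Concatenate simple texture, color, frequency, and LBP-histogram features."""
    gray = rgb2gray(rgb_image)                                       # intensity image in [0, 1]
    ycbcr = rgb2ycbcr(rgb_image)

    texture = np.histogram(gray, bins=32, range=(0, 1))[0]          # intensity-level histogram
    amplitude = ycbcr.reshape(-1, 3).mean(axis=0)                   # mean Y, Cb, Cr amplitudes
    frequency = np.abs(np.fft.fft2(gray))[:8, :8].ravel()           # low-frequency FFT magnitudes
    lbp = local_binary_pattern((gray * 255).astype(np.uint8),
                               P=8, R=1, method="uniform")
    lbp_hist = np.histogram(lbp, bins=10, range=(0, 10))[0]         # uniform LBP histogram

    return np.concatenate([texture, amplitude, frequency, lbp_hist]).astype(float)

demo = np.random.rand(64, 64, 3)   # stand-in for a labelled VNEX-style color image
print("feature vector length:", extract_features(demo).shape[0])

# With labelled images, the combined vectors would train a classifier, e.g.:
# X = np.stack([extract_features(img) for img in train_images])
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, train_labels)
```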
|
106 |
Deep Learning for Information Extraction / Nguyen, Thien Huu, 18 April 2018
The explosion of data has made it crucial to analyze the data and distill important information effectively and efficiently. A significant part of such data is presented in unstructured and free-text documents. This has prompted the development of techniques for information extraction that allow computers to automatically extract structured information from natural free-text data. Information extraction is a branch of natural language processing in artificial intelligence with a wide range of applications, including question answering, knowledge base population, information retrieval, etc. The traditional approach for information extraction has mainly involved hand-designing large feature sets (feature engineering) for different information extraction problems, i.e., entity mention detection, relation extraction, coreference resolution, event extraction, and entity linking. This approach is limited by the laborious and expensive effort required for feature engineering across different domains, and suffers from the unseen word/feature problem of natural languages.

This dissertation explores a different approach for information extraction that uses deep learning to automate the representation learning process and generate more effective features. Deep learning is a subfield of machine learning that uses multiple layers of connections to reveal the underlying representations of data. I develop the fundamental deep learning models for information extraction problems and demonstrate their benefits through systematic experiments.

First, I examine word embeddings, a general word representation that is produced by training a deep learning model on a large unlabelled dataset. I introduce methods to use word embeddings to obtain new features that generalize well across domains for relation extraction. This is done for both the feature-based method and the kernel-based method of relation extraction.

Second, I investigate deep learning models for different problems, including entity mention detection, relation extraction and event detection. I develop new mechanisms and network architectures that allow deep learning to model the structures of information extraction problems more effectively. Extensive experiments are conducted in the domain adaptation and transfer learning settings to highlight the generalization advantage of the deep learning models for information extraction.

Finally, I investigate joint frameworks to simultaneously solve several information extraction problems and benefit from the inter-dependencies among these problems. I design a novel memory augmented network for deep learning to properly exploit such inter-dependencies. I demonstrate the effectiveness of this network on two important problems of information extraction, i.e., event extraction and entity linking.
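A minimal sketch of the first idea, using word embeddings as features for a relation-extraction classifier. The toy embedding table, feature design, and logistic-regression classifier are illustrative assumptions, not the dissertation's exact setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy embedding table standing in for embeddings trained on a large unlabelled
# corpus (e.g. word2vec/GloVe); real vectors would have 100-300 dimensions.
EMB = {
    "acquired": np.array([0.9, 0.1, 0.0]),
    "founded":  np.array([0.8, 0.2, 0.1]),
    "visited":  np.array([0.1, 0.9, 0.3]),
}
DIM = 3

def embed(word):
    return EMB.get(word, np.zeros(DIM))  # unseen words back off to a zero vector

def features(tokens, head_idx, tail_idx):
    """Average the embeddings of the two entity mentions and the words between them."""
    span = tokens[head_idx:tail_idx + 1]
    return np.mean([embed(w) for w in span], axis=0)

# Tiny synthetic examples: does the sentence express an "ownership" relation
# between the two marked entity mentions?
X = np.stack([
    features(["Google", "acquired", "DeepMind"], 0, 2),
    features(["Jobs", "founded", "Apple"], 0, 2),
    features(["Alice", "visited", "Paris"], 0, 2),
])
y = [1, 1, 0]

clf = LogisticRegression().fit(X, y)
print(clf.predict([features(["Acme", "acquired", "Widgets"], 0, 2)]))
```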
|
107 |
Comparison and Fine-Grained Analysis of Sequence Encoders for Natural Language Processing / Keller, Thomas Anderson, 08 September 2017
Most machine learning algorithms require a fixed-length input to be able to perform commonly desired tasks such as classification, clustering, and regression. For natural language processing, the inherently unbounded and recursive nature of the input poses a unique challenge when deriving such fixed-length representations. Although today there is a general consensus on how to generate fixed-length representations of individual words which preserve their meaning, the same cannot be said for sequences of words in sentences, paragraphs, or documents. In this work, we study the encoders commonly used to generate fixed-length representations of natural language sequences, and analyze their effectiveness across a variety of high- and low-level tasks including sentence classification and question answering. Additionally, we propose novel improvements to the existing Skip-Thought and End-to-End Memory Network architectures and study their performance on both the original and auxiliary tasks. Ultimately, we show that the setting in which the encoders are trained, and the corpus used for training, have a greater influence on the final learned representation than the underlying sequence encoders themselves.
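A minimal sketch of the core problem studied here, mapping a variable-length token sequence to a fixed-length vector, using a simple GRU encoder in PyTorch. The architecture and sizes are illustrative assumptions, not the Skip-Thought or memory-network variants examined in the thesis:

```python
import torch
import torch.nn as nn

class GRUSentenceEncoder(nn.Module):
    """Encode a variable-length token sequence into a fixed-length vector."""

    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> fixed-length output: (batch, hidden_dim)
        embedded = self.embedding(token_ids)
        _, final_hidden = self.gru(embedded)
        return final_hidden.squeeze(0)

encoder = GRUSentenceEncoder()
short_sentence = torch.randint(0, 1000, (1, 5))    # 5 tokens
long_sentence = torch.randint(0, 1000, (1, 40))    # 40 tokens
# Both inputs map to the same fixed-length representation size.
print(encoder(short_sentence).shape, encoder(long_sentence).shape)
```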
|
108 |
An expert system for the synthesis of solid-liquid-liquid separations / Giannelos, Nikolaos Fotios, 01 January 1997
The synthesis of separation systems involving solid-liquid-liquid mixtures is an important problem that has not received appreciable attention in the literature of process synthesis in the past. The main objective of this research is the development of a complete design methodology for solid-liquid-liquid separations in the context of total flowsheet synthesis, along with its computer implementation, including the initial synthesis of flowsheet structures, the generation of process alternatives, and a preliminary cost analysis. The implementation part of this work is stressed as a means of testing heuristics and formalizing the design activity. The proposed synthesis approach is heuristic in nature. Separation methods are selected and the interconnections among equipment types are deduced based on expert knowledge in the form of rules. Short-cut calculations are employed for equipment design and cost estimations. The final product of this research has been implemented in a prototype expert system, facilitating the screening of alternative separation schemes and the invention of preliminary flowsheets.
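A minimal sketch of the rule-based flavor of such an expert system. The property names, thresholds, and recommendations below are invented for illustration; the actual knowledge base encodes expert heuristics for solid-liquid-liquid separations and short-cut equipment design:

```python
# Each rule inspects mixture properties and, if it fires, proposes a separation step.
# Property names and threshold values here are purely illustrative assumptions.
RULES = [
    (lambda m: m["solid_fraction"] > 0.15 and m["particle_size_um"] > 50,
     "Remove the bulk solid phase first with a filter or centrifuge."),
    (lambda m: m["density_difference"] > 100,  # kg/m^3 between the two liquid phases
     "Split the liquid-liquid phases by gravity settling (decanter)."),
    (lambda m: m["density_difference"] <= 100,
     "Use a liquid-liquid centrifuge; gravity settling would be too slow."),
]

def propose_flowsheet(mixture):
    """Return the ordered list of separation steps whose rules fire."""
    return [advice for condition, advice in RULES if condition(mixture)]

mixture = {"solid_fraction": 0.2, "particle_size_um": 80, "density_difference": 60}
for step in propose_flowsheet(mixture):
    print("-", step)
```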
|
109 |
Decision tree algorithms for handwritten digit recognition / Wilder, Kenneth Joseph, 01 January 1998
We present an original algorithm for recognizing handwritten digits. We begin by introducing a virtually infinite collection of binary geometric features. The features are queries that ask if a particular geometric arrangement of local topographic codes is present in an image. The codes, which we call "tags", are too coarse and common to be informative by themselves, but the presence of geometric arrangements of tags ("tag arrangements") can provide substantial information about the shape of an image. Tag arrangements are features that are well-suited for handwritten digit recognition, as their presence in an image is unaffected by a large number of transformations that do not affect the class of the image. It is impossible to calculate all of the features in an image. We therefore use decision trees to simultaneously determine a small collection of informative features and construct a classifier. By only considering a small random sample of queries at each node, we are able to generate multiple, randomized trees that determine a more varied and informative collection of features than is possible with a single tree. The trees, which provide posterior estimates of the class probabilities, are aggregated to produce a stable and robust classifier. We analyze the performance of this method and propose several means of improving it. Most notably, we introduce a nearest neighbor final test that reduces the already low error rate by an additional 20-30%. Testing was done on a subset of a National Institute of Standards and Technology database, and we report a classification rate of 99.6%, comparable to the top results reported elsewhere.
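A minimal sketch of the multiple-randomized-trees idea: each tree considers only a random subset of candidate queries at each node, and the trees' posterior class estimates are averaged. Scikit-learn trees and synthetic binary features stand in for the tag-arrangement queries of the dissertation; all data and parameter values are assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in: 500 "images" described by 200 binary queries, 10 digit classes.
X = rng.integers(0, 2, size=(500, 200))
y = rng.integers(0, 10, size=500)

def train_randomized_trees(X, y, n_trees=25, features_per_split=10):
    """Each tree considers only a small random sample of queries at each node."""
    return [
        DecisionTreeClassifier(max_features=features_per_split, random_state=t).fit(X, y)
        for t in range(n_trees)
    ]

def aggregate_posterior(trees, x):
    """Average the trees' class-probability estimates into one posterior."""
    return np.mean([t.predict_proba(x.reshape(1, -1))[0] for t in trees], axis=0)

trees = train_randomized_trees(X, y)
posterior = aggregate_posterior(trees, X[0])
print("predicted class:", posterior.argmax())
print("posterior:", np.round(posterior, 3))
```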
|
110 |
Increasing scalability in algorithms for centralized and decentralized partially observable Markov decision processes: Efficient decision-making and coordination in uncertain environments / Amato, Christopher, 01 January 2010
As agents are built for ever more complex environments, methods that consider the uncertainty in the system have strong advantages. This uncertainty is common in domains such as robot navigation, medical diagnosis and treatment, inventory management, sensor networks and e-commerce. When a single decision maker is present, the partially observable Markov decision process (POMDP) model is a popular and powerful choice. When choices are made in a decentralized manner by a set of decision makers, the problem can be modeled as a decentralized partially observable Markov decision process (DEC-POMDP). While POMDPs and DEC-POMDPs offer rich frameworks for sequential decision making under uncertainty, the computational complexity of each model presents an important research challenge. As a way to address this high complexity, this thesis develops several solution methods based on utilizing domain structure, memory-bounded representations and sampling. These approaches address some of the major bottlenecks for decision-making in real-world uncertain systems. The methods include a more efficient optimal algorithm for DEC-POMDPs as well as scalable approximate algorithms for POMDPs and DEC-POMDPs. Key contributions include optimizing compact representations as well as automatic structure extraction and exploitation. These approaches increase the scalability of algorithms, while also increasing their solution quality.
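For readers unfamiliar with the POMDP model itself, a minimal sketch of the standard Bayesian belief-state update that underlies these algorithms. The two-state, tiger-style toy problem below is an illustrative assumption, not one of the thesis's benchmark domains:

```python
import numpy as np

# Two hidden states, one "listen" action, two observations (tiger-style toy problem).
T = np.array([[1.0, 0.0],      # P(s' | s, listen): listening does not move the state
              [0.0, 1.0]])
O = np.array([[0.85, 0.15],    # P(o | s', listen): observation is correct 85% of the time
              [0.15, 0.85]])

def belief_update(belief, observation):
    """Standard Bayesian belief update: b'(s') is proportional to O[s', o] * sum_s T[s, s'] * b(s)."""
    predicted = T.T @ belief                 # predict the next hidden-state distribution
    updated = O[:, observation] * predicted  # weight by the observation likelihood
    return updated / updated.sum()           # normalize to a probability distribution

b = np.array([0.5, 0.5])                     # start uncertain about the hidden state
for obs in [0, 0, 0]:                        # receive the same observation three times
    b = belief_update(b, obs)
    print(np.round(b, 3))                    # belief concentrates on state 0
```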
|