31 |
Combining prior knowledge and data: beyond the Bayesian framework / Epshteyn, Arkady. January 2007 (has links)
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2007. / Source: Dissertation Abstracts International, Volume: 68-07, Section: B, page: 4588. Adviser: Gerald DeJong. Includes bibliographical references (leaves 106-110). Available on microfilm from ProQuest Information and Learning.
|
32 |
Visual human tracking and group activity analysis: a video mining system for retail marketing / Leykin, Alex. January 2007 (has links)
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2007. / Title from dissertation home page (viewed Sept. 26, 2008). Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1108. Adviser: Mihran Tuceryan.
|
33 |
Unified discriminative subspace learning for multimodality image analysis / Fu, Yun. January 2008 (has links)
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2008. / Source: Dissertation Abstracts International, Volume: 69-11, Section: B, page: 6913. Adviser: Thomas S. Huang. Includes bibliographical references (leaves 176-188). Available on microfilm from ProQuest Information and Learning.
|
34 |
Universal transfer learning / Mahmud, M. M. Hassan. January 2008 (has links)
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2008. / Source: Dissertation Abstracts International, Volume: 69-11, Section: B, page: 6927. Adviser: Gerald F. DeJong. Includes bibliographical references (leaves 96-102). Available on microfilm from ProQuest Information and Learning.
|
35 |
Fourier-based invariant image descriptors / Mavandadi, Sam. January 2008 (has links)
Thesis (Ph. D.)--University of Toronto, 2008. / Includes bibliographical references.
|
36 |
Lower Bound Resource Requirements for Machine Intelligence / Gilmanov, Timur. 06 December 2018 (has links)
<p> Recent advancements in technology and the field of artificial intelligence provide a platform for new applications in a wide range of areas, including healthcare, engineering, vision, and natural language processing, that would have been considered unattainable one or two decades ago. With an expected compound annual growth rate of 50% during 2017&ndash;2021, the global artificial intelligence field is set to see increases in computational complexity and in the amount of sensor data processed. </p><p> In spite of these advancements, truly intelligent machine behavior operating in real time remains an unachieved milestone. First, in order to quantify such behavior, a definition of machine intelligence would be required, and the community at large has not agreed upon one. Second, delivering full machine intelligence, as defined in this work, is beyond the scope of today's cutting-edge high-performance computing machines. </p><p> One important aspect of machine intelligent systems is their resource requirements and the limitations that today's and future machines could impose on such systems. The goal of this research effort is to provide an estimate of the lower bound resource requirements for machine intelligence. A working definition of machine intelligence for the purposes of this research is provided, along with definitions of an abstract architecture, workflow, and performance model. Combined, these tools allow an estimate of the resource requirements for problems of machine intelligence, both now and in the future.</p><p>
|
37 |
Real-Time Individual Thermal Preferences Prediction Using Visual Sensors / Cosma, Andrei Claudiu. 19 December 2018 (has links)
<p> The thermal comfort of a building’s occupants is an important aspect of building design. Providing an increased level of thermal comfort is critical given that humans spend the majority of the day indoors, and that their well-being, productivity, and comfort depend on the quality of these environments. In today’s world, Heating, Ventilation, and Air Conditioning (HVAC) systems deliver heated or cooled air based on a fixed operating point or target temperature; individuals or building managers are able to adjust this operating point through human communication of dissatisfaction. Currently, there is a lack of automatic detection of an individual’s thermal preferences in real time, and of integration of such measurements into an HVAC system controller. </p><p> To address this, a non-invasive approach to automatically predict personal thermal comfort and the mean time to discomfort in real time is proposed and studied in this thesis. The goal of this research is to explore the consequences of human body thermoregulation on skin temperature and tone as a means to predict thermal comfort. For this reason, the temperature information extracted from multiple local body parts, and the skin tone information extracted from the face, will be investigated as a means to model individual thermal preferences. </p><p> In a first study, we proposed a real-time system for individual thermal preferences prediction in transient conditions using temperature values from multiple local body parts. The proposed solution consists of a novel visual sensing platform, which we called RGB-DT, that fused information from three sensors: a color camera, a depth sensor, and a thermographic camera. This platform was used to extract skin and clothing temperature from multiple local body parts in real time. Using this method, personal thermal comfort was predicted with more than 80% accuracy, while mean time to warm discomfort was predicted with more than 85% accuracy. 
</p><p> In a second study, we introduced a new visual sensing platform and method that uses a single thermal image of the occupant to predict personal thermal comfort. We focused on close-up images of the occupant’s face to extract fine-grained details of the skin temperature. We extracted manually selected features, as well as a set of automated features. Results showed that the automated features outperformed the manual features in all the tests that were run, and that these features predicted personal thermal comfort with more than 76% accuracy. </p><p> The last proposed study analyzed the thermoregulation activity at the face level to predict skin temperature in the context of thermal comfort assessment. This solution uses a single color camera to model thermoregulation based on the side effects of the vasodilatation and vasoconstriction. To achieve this, new methods to isolate skin tone response to an individual’s thermal regulation were explored. The relation between the extracted skin tone measurement and the skin temperature was analyzed using a regression model. </p><p> Our experiments showed that a thermal model generated using noninvasive and contactless visual sensors could be used to accurately predict individual thermal preferences in real-time. Therefore, instantaneous feedback with respect to the occupants' thermal comfort can be provided to the HVAC system controller to adjust the room temperature. </p><p>
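The kind of real-time preference prediction described in this abstract can be illustrated with a minimal nearest-centroid sketch over skin temperatures from two local body parts. All feature names, temperature values, and labels below are invented for illustration; the thesis's actual models (RGB-DT sensor fusion, automated facial features) are assumed to be far richer.

```python
# Hypothetical sketch: predict an occupant's thermal preference from skin
# temperatures at local body parts using a nearest-centroid classifier.
# Feature layout and values are illustrative, not taken from the thesis.
from statistics import mean

def train_centroids(samples):
    """samples: list of (feature_vector, label); returns {label: centroid}."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {
        label: [mean(col) for col in zip(*rows)]
        for label, rows in by_label.items()
    }

def predict(centroids, features):
    """Return the label whose centroid is nearest in squared distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl], features))

# Toy training data: [face_temp_C, hand_temp_C] -> stated preference.
samples = [
    ([34.9, 33.0], "prefers cooler"),
    ([35.1, 33.4], "prefers cooler"),
    ([33.8, 31.5], "neutral"),
    ([33.6, 31.2], "neutral"),
    ([32.4, 29.8], "prefers warmer"),
    ([32.2, 29.5], "prefers warmer"),
]
centroids = train_centroids(samples)
print(predict(centroids, [35.0, 33.3]))  # -> prefers cooler
```

In a real system the predicted label would feed back into the HVAC controller's operating point, which is exactly the closed loop the abstract motivates.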
|
38 |
The Impact of Cost on Feature Selection for Classifiers / McCrae, Richard. 20 December 2018 (has links)
<p> Supervised machine learning models are increasingly being used for medical diagnosis. The diagnostic problem is formulated as a binary classification task in which trained classifiers make predictions based on a set of input features. In diagnosis, these features are typically procedures or tests with associated costs. The cost of applying a trained classifier for diagnosis may be estimated as the total cost of obtaining values for the features that serve as inputs for the classifier. Obtaining classifiers based on a low cost set of input features with acceptable classification accuracy is of interest to practitioners and researchers. What makes this problem even more challenging is that costs associated with features vary with patients and service providers and change over time. </p><p> This dissertation aims to address this problem by proposing a method for obtaining low cost classifiers that meet specified accuracy requirements under dynamically changing costs. Given a set of relevant input features and accuracy requirements, the goal is to identify all qualifying classifiers based on subsets of the feature set. Then, for any arbitrary costs associated with the features, the cost of the classifiers may be computed and candidate classifiers selected based on cost-accuracy tradeoff. Since the number of relevant input features k tends to be large for typical diagnosis problems, training and testing classifiers based on all 2<i><sup>k</sup></i> – 1 possible non-empty subsets of features is computationally prohibitive. Under the reasonable assumption that the accuracy of a classifier is no lower than that of any classifier based on a subset of its input features, this dissertation aims to develop an efficient method to identify all qualifying classifiers. </p><p> This study used two types of classifiers—artificial neural networks and classification trees—that have proved promising for numerous problems as documented in the literature. 
The approach was to measure the accuracy obtained with the classifiers when all features were used. Then, reduced accuracy thresholds were established that could be satisfied with subsets of the complete feature set. Threshold values for three measures—true positive rate, true negative rate, and overall classification accuracy—were considered for the classifiers. Two cost functions were used for the features: one used unit costs and the other random costs. Additional manipulation of costs was also performed. </p><p> The order in which features were removed was found to have a material impact on the effort required (removing the most important features first was most efficient; removing the least important features first was least efficient). The accuracy and cost measures were combined to produce a Pareto-optimal frontier, which consistently contained few elements: at most 15 subsets, even when there were hundreds of thousands of acceptable feature sets. Most of the computational time is spent training and testing the models. Given costs, models on the Pareto-optimal frontier can be efficiently identified and presented to decision makers. The neural networks and the decision trees performed comparably, suggesting that either type of classifier could be employed.</p><p>
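The cost-accuracy selection step this abstract describes can be illustrated with a small Pareto-frontier computation: keep only those candidate classifiers for which no other candidate is both cheaper and at least as accurate. The (cost, accuracy) pairs below are invented; in the dissertation, costs come from feature prices and accuracies from trained classifiers.

```python
# Illustrative sketch of selecting Pareto-optimal (cost, accuracy) candidates.
def pareto_frontier(candidates):
    """candidates: list of (cost, accuracy) tuples.
    Returns the non-dominated subset, sorted by increasing cost."""
    frontier = []
    best_acc = float("-inf")
    # Sort by cost ascending; for equal cost, higher accuracy first.
    for cost, acc in sorted(candidates, key=lambda c: (c[0], -c[1])):
        if acc > best_acc:  # strictly more accurate than every cheaper candidate
            frontier.append((cost, acc))
            best_acc = acc
    return frontier

# Toy candidate classifiers: (total feature cost, test accuracy).
candidates = [(10, 0.91), (4, 0.85), (7, 0.90), (4, 0.80), (12, 0.91), (6, 0.85)]
print(pareto_frontier(candidates))  # -> [(4, 0.85), (7, 0.90), (10, 0.91)]
```

Because the frontier depends only on the final (cost, accuracy) pairs, it can be recomputed cheaply whenever feature costs change, without retraining any classifier, which matches the dynamic-cost motivation in the abstract.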
|
39 |
Towards Fast and Efficient Representation Learning / Li, Hao. 05 October 2018 (has links)
<p> The success of deep learning and convolutional neural networks in many fields is accompanied by a significant increase in computation cost. With increasing model complexity and the pervasive use of deep neural networks, there is a surge of interest in fast and efficient model training and inference on both cloud and embedded devices. Meanwhile, understanding the reasons for trainability and generalization is fundamental for further development. This dissertation explores approaches for fast and efficient representation learning with a better understanding of trainability and generalization. In particular, we ask the following questions and provide our solutions: 1) How can the computation cost be reduced for fast inference? 2) How can low-precision models be trained on resource-constrained devices? 3) What does the loss surface look like for neural nets, and how does it affect generalization?</p><p> To reduce the computation cost for fast inference, we propose to prune filters from CNNs that are identified as having a small effect on the prediction accuracy. By removing filters with small norms together with their connected feature maps, the computation cost can be reduced accordingly without using special software or hardware. We show that a simple filter pruning approach can reduce the inference cost while regaining close to the original accuracy by retraining the networks.</p><p> To further reduce the inference cost, quantizing model parameters with low-precision representations has shown significant speedups, especially for edge devices that have limited computing resources, memory capacity, and power budgets. To enable on-device learning on lower-power systems, removing the dependency on a full-precision model during training is the key challenge. We study various quantized training methods with the goal of understanding the differences in behavior, and the reasons for success or failure. 
We address the issue of why algorithms that maintain floating-point representations work so well, while fully quantized training methods stall before training is complete. We show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training using low-precision arithmetic.</p><p> Finally, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. We introduce a simple filter normalization method that helps us visualize loss function curvature, and make meaningful side-by-side comparisons between loss functions. The sharpness of minimizers correlates well with generalization error when this visualization is used. Then, using a variety of visualizations, we explore how training hyper-parameters affect the shape of minimizers, and how network architecture affects the loss landscape.</p><p>
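The norm-based filter pruning idea summarized in this abstract can be sketched as a simple rank-and-remove step. The weights below are random toy data; in practice pruning is applied layer by layer to a real CNN and followed by retraining to recover accuracy, so this only illustrates the ranking criterion.

```python
# Hedged sketch of norm-based filter pruning: rank each convolutional filter
# by the L1 norm of its weights and drop the smallest-norm fraction.
import random

def l1_norm(filter_weights):
    """Sum of absolute weight values for one (flattened) filter."""
    return sum(abs(w) for w in filter_weights)

def prune_filters(filters, prune_ratio):
    """filters: list of flat weight lists. Returns indices of kept filters."""
    ranked = sorted(range(len(filters)), key=lambda i: l1_norm(filters[i]))
    n_prune = int(len(filters) * prune_ratio)
    pruned = set(ranked[:n_prune])  # smallest-norm filters are removed first
    return [i for i in range(len(filters)) if i not in pruned]

random.seed(0)
# Eight toy "filters", each with 27 weights (e.g. a 3x3x3 kernel, flattened).
filters = [[random.gauss(0, 1) for _ in range(27)] for _ in range(8)]
kept = prune_filters(filters, prune_ratio=0.25)  # remove 2 of 8 filters
print(len(kept))  # -> 6
```

Removing a filter also removes its output feature map and the corresponding input channels of the next layer, which is why the cost reduction needs no special sparse-computation support.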
|
40 |
Deep Networks for Forward Prediction and Planning / Henaff, Mikael. 17 November 2018 (links)
<p> Learning to predict how an environment will evolve and the consequences of one’s actions is an important ability for autonomous agents, and can enable planning with relatively few interactions with the environment, which may be slow or costly. However, learning an accurate forward model is often difficult in practice due to several features often present in complex environments. First, many environments exhibit long-term dependencies which require the system to learn to record and maintain relevant information in its memory over long timescales. Second, the environment may only be partially observed, and the aspects of the environment which are observed may depend on parts of the environment which are hidden. Third, many observed processes contain some form of apparent or inherent stochasticity, which makes the task of predicting future states ill-defined. </p><p> In this thesis, we propose approaches to tackle and better understand these different challenges associated with learning predictive models of the environment and using them for planning. We first provide an analysis of recurrent neural network (RNN) memory, which sheds light on the mechanisms by which RNNs are able to store different types of information in their memory over long timescales through the analysis of two synthetic benchmark tasks. We then introduce a new neural network architecture which keeps an estimate of the state of the environment in its memory, and can deal with partial observability by reasoning based on what is observed. We next present a new method for performing planning using a learned model of the environment with both discrete and continuous actions. Finally, we propose an approach for model-based planning in the presence of both environment uncertainty and model uncertainty, and evaluate it on a new real-world dataset and environment with applications to autonomous driving.</p><p>
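The planning-with-a-forward-model idea in this abstract can be illustrated with a random-shooting sketch: sample candidate action sequences, roll each through the forward model, and keep the best. The toy one-dimensional dynamics below stand in for the learned neural models the thesis develops; all names and values are illustrative assumptions.

```python
# Hypothetical sketch of model-based planning by random shooting.
import random

def forward_model(state, action):
    """Toy 1-D dynamics: the state drifts by the chosen action.
    A learned neural network would replace this function."""
    return state + action

def rollout_cost(state, actions, goal):
    """Roll an action sequence through the model; cost = final distance to goal."""
    for a in actions:
        state = forward_model(state, a)
    return abs(state - goal)

def plan(state, goal, horizon=5, n_samples=200, seed=1):
    """Random-shooting planner: best of n_samples random action sequences."""
    rng = random.Random(seed)
    candidates = [
        [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        for _ in range(n_samples)
    ]
    return min(candidates, key=lambda acts: rollout_cost(state, acts, goal))

best = plan(state=0.0, goal=2.0)
print(round(rollout_cost(0.0, best, 2.0), 3))  # small residual distance to goal
```

Stochastic or partially observed environments break this naive scheme, since one rollout per sequence no longer estimates its true cost, which is the gap the thesis's uncertainty-aware planning addresses.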
|