  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Incremental activity and plan recognition for human teams

Masato, Daniele January 2012 (has links)
Anticipating human subjects' intentions and information needs is considered one of the ultimate goals of Artificial Intelligence. Activity and plan recognition contribute to this goal by studying how low-level observations about subjects and the environment in which they act can be linked to a high-level plan representation. This task is challenging in a dynamic and uncertain environment; the environment may change while the subjects are reasoning about it, and the effects of the subjects' interactions cannot be predicted with certainty. Humans generally struggle to enact plans and maintain situation awareness in such circumstances, even when they work in teams towards a common objective. Intelligent software assistants can support human teams by monitoring their activities and plan progress, thus relieving them of some of the cognitive burden they experience. The assistants' design needs to take into account that teams can form and disband quickly in response to environmental changes, and that the course of action may change during plan execution. It is also crucial to process a stream of observations efficiently and incrementally in order to enable online prediction of those intentions and information needs. In this thesis we propose an incremental approach for team composition and activity recognition based on probabilistic graphical models. We show that this model can successfully learn team formations and behaviours in highly dynamic domains, and that classification can be performed in polynomial time. We evaluate our model within a simulated scenario provided by an open-source computer game. In addition, we discuss an incremental approach to plan recognition that exploits the results yielded by activity recognition to assess a team's course of action. We show how this model can account for incomplete or inconsistent knowledge about recognised activities, and how it can be integrated into an existing mechanism for plan recognition.
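The abstract's claim of polynomial-time classification over an observation stream can be illustrated with the standard HMM forward recursion, which scores a growing observation sequence in O(T·S²) time for T observations and S hidden states. This is a generic sketch of the idea only, not the thesis's actual graphical model:

```python
def forward(obs, init, trans, emit):
    """Standard HMM forward algorithm.

    obs   -- sequence of observation indices
    init  -- initial state distribution, init[s]
    trans -- transition matrix, trans[prev][next]
    emit  -- emission matrix, emit[state][observation]
    Returns P(obs) in O(T * S^2) time.
    """
    states = range(len(init))
    # alpha[s] = P(observations so far, current state = s)
    alpha = [init[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states]
    return sum(alpha)
```

Because each update depends only on the previous `alpha` vector, new observations can be folded in online without reprocessing the whole stream, which is the "incremental" property the thesis emphasizes.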
132

Integrating Exponential Dispersion Models to Latent Structures

Basbug, Mehmet Emin 08 February 2017 (has links)
Latent variable models have two basic components: a latent structure encoding a hypothesized complex pattern and an observation model capturing the data distribution. With the advancements in machine learning and increasing availability of resources, we are able to perform inference in deeper and more sophisticated latent variable models. In most cases, these models are designed with a particular application in mind; hence, they tend to have restrictive observation models. The challenge, surfaced with the increasing diversity of data sets, is to generalize these latent models to work with different data types. We aim to address this problem by utilizing exponential dispersion models (EDMs) and proposing mechanisms for incorporating them into latent structures. (Abstract shortened by ProQuest.)
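For concreteness, the exponential dispersion family referred to here has a standard natural-parameter form, log p(x) = λ(θx − A(θ)) + log h(x, λ). A minimal sketch of that form (the function names are ours, and the Gaussian instance assumes unit dispersion):

```python
import math

def edm_logpdf(x, theta, A, logh, lam=1.0):
    """Log-density of an exponential dispersion model in natural form:
    log p(x) = lam * (theta * x - A(theta)) + log h(x, lam)."""
    return lam * (theta * x - A(theta)) + logh(x, lam)

# Unit-variance Gaussian as an EDM instance: theta is the mean,
# A(theta) = theta^2 / 2, base measure h(x) = exp(-x^2/2) / sqrt(2*pi).
gaussian_A = lambda t: t * t / 2
gaussian_logh = lambda x, lam: -x * x / 2 - 0.5 * math.log(2 * math.pi)
```

Swapping in a different log-partition function A and base measure h yields Poisson, gamma, and other members of the family, which is what makes EDMs attractive as interchangeable observation models for a fixed latent structure.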
133

Intelligent Maintenance Aid (IMA)

Shockley, Keith J. 06 1900 (has links)
Technological complexities of current ground combat systems require advanced maintenance methods to keep the fleet in a state of operational readiness. Currently, maintenance personnel use paper Technical Manuals (TM) that are cumbersome and not easily transportable or updated in the field. This thesis proposes using the latest technology to support maintainers in the field or depot by integrating the TMs with the onboard diagnostics Built-In-Test (BIT) and Fault Isolation Test (FIT) of the vehicle, to provide the maintainer with an improved diagnostics tool to expedite troubleshooting analysis. This will be accomplished by connecting the vehicle, using the vehicle's 1553 multiplex bus, with the Graphical User Interface (GUI) of an Intelligent Maintenance Aid (IMA). The IMA will use Troubleshooting Procedure (TP) codes generated during BIT and FIT testing. Using the information provided by these TP codes, through the IMA GUI, information from the technical manuals will be displayed to aid the maintainers in their diagnostic work. The results of this thesis will serve as a baseline for further research and will be presented to the program management office for combat systems (PM-CS) for further consideration and development. / US Army RDECOM-TACOM author (civilian).
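The TP-code-to-manual linkage the thesis describes can be sketched as a simple index lookup from codes emitted by BIT/FIT tests to manual content the GUI would display. The codes and section titles below are invented placeholders, not real TACOM data:

```python
# Hypothetical mapping from Troubleshooting Procedure (TP) codes emitted
# by BIT/FIT tests to technical-manual sections the IMA GUI would display.
TM_INDEX = {
    "TP-0101": "TM section 4-2: turret drive fault isolation",
    "TP-0217": "TM section 6-1: fire control power-up test",
}

def lookup_procedure(tp_code):
    """Return the manual section for a TP code, or a fallback message."""
    return TM_INDEX.get(tp_code, "No procedure on file; consult full TM")
```

In the architecture described above, the codes would arrive over the vehicle's 1553 multiplex bus rather than being passed in directly.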
134

Human behavior representation of military teamwork

Martin, Michael W. 06 1900 (has links)
This work presents a conceptual structure for the behaviors of artificial intelligence agents, with emphasis on creating teamwork through individual behaviors. The goal is to set up a framework which enables teams of simulation agents to behave more realistically. Better team behavior can lend a higher fidelity of human behavior representation in a simulation, as well as provide opportunities to experiment with the factors that create teamwork. The framework divides agent behaviors into three categories: leadership, individual, and team-enabling. Leadership behaviors consist of planning, decision-making, and delegating. Individual behaviors consist of moving, shooting, environment-monitoring, and self-monitoring. Team-enabling behaviors consist of communicating, synchronizing actions, and team member monitoring. These team-enabling behaviors augment the leadership and individual behaviors at all phases of an agent's thought process, and create aggregate team behavior that is a hybrid of emergent and hierarchical teamwork. The net effect creates, for each agent, options and courses of action which are sub-optimal from the individual agent's standpoint, but which leverage the power of the team to accomplish objectives. The individual behaviors synergistically combine to create teamwork, allowing a group of agents to act in such a manner that their overall effectiveness is greater than the sum of their individual contributions. / US Army (USA) author.
135

Discovering credible events in near real time from social media streams

Buntain, Cody 26 January 2017 (has links)
Recent reliance on social media platforms as major sources of news and information, both for journalists and the larger population and especially during times of crisis, motivates the need for better methods of identifying and tracking high-impact events in these social media streams. Social media's volume, velocity, and democratization of information (leading to limited quality controls) complicate rapid discovery of these events and one's ability to trust the content posted about these events. This dissertation addresses these complications in four stages, using Twitter as a model social platform. The first stage analyzes Twitter's response to major crises, specifically terrorist attacks in Western countries, showing these high-impact events do not significantly impact message or user volume. Instead, these events drive changes in Twitter's topic distribution, with conversation, retweets, and hashtags relevant to these events experiencing significant, rapid, and short-lived bursts in frequency. Furthermore, conversation participants tend to prefer information from local authorities/organizations/media over national or international sources, with accounts for local police or local newspapers often emerging as central in the networks of interaction. Building on these results, the second stage in this dissertation presents and evaluates a set of features that capture these topical bursts associated with crises by modeling bursts in frequency for individual tokens in the Twitter stream. The resulting streaming algorithm is capable of discovering notable moments across a series of major sports competitions using Twitter's public stream without relying on domain- or language-specific information or models. Furthermore, results demonstrate models trained on sporting competition data perform well when transferred to earthquake identification.
This streaming algorithm is then extended in this dissertation's third stage to support real-time event tracking and summarization. This real-time algorithm leverages new distributed processing technology to operate at scale and is evaluated against a collection of other community-developed information retrieval systems, where it performs comparably. Further experiments also show this real-time burst detection algorithm can be integrated with these other information retrieval systems to increase overall performance. The final stage then investigates automated methods for evaluating credibility in social media streams by leveraging two existing data sets. These two data sets measure different types of credibility (veracity versus perception), and results show veracity is negatively correlated with the amount of disagreement in and length of a conversation, while perceptions of credibility are influenced by the number of links to other pages, shared media about the event, and the number of verified users participating in the discussion. Contributions made across these four stages are then usable in the relatively new fields of computational journalism and crisis informatics, which seek to improve news gathering and crisis response by leveraging new technologies and data sources like machine learning and social media.
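The token-burst idea underlying the second stage can be sketched as a streaming z-score test: a token is "bursting" when its count in the current window greatly exceeds its recent history. The window size, threshold, and variance floor below are invented parameters, and this is a simplified stand-in for the dissertation's features, not its actual algorithm:

```python
from collections import deque

class BurstDetector:
    """Flags a token's current window count as a burst when it exceeds
    mean + k * stddev of the token's recent per-window history."""

    def __init__(self, history=5, k=2.0):
        self.k = k
        self.counts = deque(maxlen=history)  # sliding window of past counts

    def update(self, count):
        """Feed the next window's count; return True if it is a burst."""
        burst = False
        if len(self.counts) == self.counts.maxlen:  # need a full history first
            mean = sum(self.counts) / len(self.counts)
            var = sum((c - mean) ** 2 for c in self.counts) / len(self.counts)
            std = var ** 0.5
            # floor the deviation at 1.0 so near-constant histories
            # don't flag trivial fluctuations
            burst = count > mean + self.k * max(std, 1.0)
        self.counts.append(count)
        return burst
```

One detector per token keeps the method domain- and language-agnostic, matching the abstract's claim that no domain-specific models are required.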
136

Adaptive estimation techniques for resident space object characterization

LaPointe, Jamie 26 January 2017 (has links)
This thesis investigates using adaptive estimation techniques to determine unknown model parameters such as size and surface material reflectivity, while estimating position, velocity, attitude, and attitude rates of a resident space object. This work focuses on the application of these methods to the space situational awareness problem.
This thesis proposes a unique method of implementing a top-level gating network in a dual-layer hierarchical mixture of experts. In addition, it proposes a decaying learning parameter for use in both the single-layer mixture of experts and the dual-layer hierarchical mixture of experts. Both a single-layer mixture of experts and a dual-layer hierarchical mixture of experts are compared to multiple model adaptive estimation in estimating resident space object parameters such as size and reflectivity. The hierarchical mixture of experts consists of macromodes, each of which can estimate a different parameter in parallel. Each macromode is a single-layer mixture of experts with unscented Kalman filters used as the experts. A gating network in each macromode determines a gating weight, which is used as a hypothesis tester; the macromode gating weights then feed a top-level gating network that determines which macromode contains the most probable model. The measurements consist of astrometric and photometric data from non-resolved observations of the target gathered via a telescope with a charge-coupled device camera. Each filter receives the same measurement sequence. The apparent magnitude measurement model consists of the Ashikhmin-Shirley bidirectional reflectance distribution function. The measurements, process models, and the additional shape, mass, and inertia characteristics allow the algorithm to predict the state and select the most probable fit to the size and reflectance characteristics based on the statistics of the measurement residuals and innovation covariance.
A simulation code is developed to test these adaptive estimation techniques, and the feasibility of these methods is demonstrated in this thesis.
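The gating-weight update described above — weighting each filter by how well it explains the measurements — can be sketched in the classical multiple-model style: each expert's weight is its residual likelihood times its prior probability, renormalized. A scalar-residual toy sketch, not the thesis's unscented-filter bank:

```python
import math

def gating_weights(residuals, variances, prior):
    """Update model probabilities from scalar measurement residuals.

    Assumes each filter's residual is zero-mean Gaussian with that
    filter's innovation variance; the filter whose model best matches
    the truth tends to accumulate the largest weight.
    """
    likes = [math.exp(-r * r / (2 * v)) / math.sqrt(2 * math.pi * v)
             for r, v in zip(residuals, variances)]
    unnorm = [like * p for like, p in zip(likes, prior)]
    total = sum(unnorm)
    return [u / total for u in unnorm]
```

Feeding the returned weights back in as the next step's prior gives the recursive hypothesis test the abstract describes, with the top-level layer repeating the same computation over macromode weights.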
137

Data-driven computer vision for science and the humanities

Lee, Stefan 05 November 2016 (has links)
The rate at which humanity is producing visual data from both large-scale scientific imaging and consumer photography has been greatly accelerating in the past decade. This thesis is motivated by the hypothesis that this trend will necessarily change the face of observational science and the humanities, requiring the development of automated methods capable of distilling vast image collections to produce meaningful analyses. Such methods are needed to empower novel science both by improving throughput in traditionally quantitative disciplines and by developing new techniques to study culture through large-scale image datasets.
When computer vision or machine learning in general is leveraged to aid academic inquiry, it is important to consider the impact of erroneous solutions produced by implicit ambiguity or model approximations. To that end, we argue for the importance of algorithms that are capable of generating multiple solutions and producing measures of confidence. In addition to providing solutions to a number of multi-disciplinary problems, this thesis develops techniques to address these overarching themes of confidence estimation and solution diversity.
This thesis investigates a diverse set of problems across a broad range of studies including glaciology, developmental psychology, architectural history, and demography to develop and adapt computer vision algorithms to solve these domain-specific applications. We begin by proposing vision techniques for automatically analyzing aerial radar imagery of polar ice sheets while simultaneously providing glaciologists with point-wise estimates of solution confidence. We then move to psychology, introducing novel recognition techniques to produce robust hand localizations and segmentations in egocentric video to empower psychologists studying child development with automated annotations of grasping behaviors integral to learning. We then investigate novel large-scale analysis for architectural history, leveraging tens of thousands of publicly available images to identify and track distinctive architectural elements. Finally, we show how rich estimates of demographic and geographic properties can be predicted from a single photograph.
138

Design and implementation of an English to Arabic machine translation (MEANA MT)

Alneami, Ahmed H. January 2001 (has links)
A new system for Arabic Machine Translation (called MEANA MT) has been built. This system is capable of the analysis of English as a source language and can convert the given sentences into Arabic. The designed system contains three sets of grammar rules governing the PARSING, TRANSFORMATION AND GENERATION PHASES. In the system, word sense ambiguity and some pragmatic patterns were resolved. A new two-way (Analysis/Generation) computational lexicon system dealing with the morphological analysis of the Arabic language has been created. The designed lexicon contains a set of rules governing the morphological inflection and derivation of Arabic nouns, verbs, the verb "to be", the verb "not to be" and pronouns. The lexicon generates Arabic word forms and their inflectional affixes such as plural and gender morphemes as well as attached pronouns, each according to its rules. It cannot parse or generate unacceptable word inflections. This computational system is capable of dealing with vowelized Arabic words by parsing the vowel marks which are attached to the letters. Semantic value pairs were developed to show the word sense and other issues in morphology, e.g. genders, numbers and tenses. The system can parse and generate some pragmatic sentences and phrases like proper names, titles, acknowledgements, dates, telephone numbers and addresses. A Lexical Functional Grammar (LFG) formalism is used to combine the syntactic, morphological and semantic features. The grammar rules of this system were implemented and compiled in COMMON LISP based on Tomita's Generalised LR parsing algorithm, augmented by Pseudo and Full Unification packages. After parsing, the constituents of the English sentence are represented as Feature Structures (F-Structures). These take part in the transfer and generation process, which uses transformation grammar rules to change the English F-Structure into an Arabic F-Structure.
These Arabic F-Structure features will be suitable for the Arabic generation grammar to build the required Arabic sentence. This system has been tested on three domains (sentences and phrases): the first is a selected children's story, the second consists of semantic sentences, and the third consists of pragmatic sentences. This research could be considered a complete solution for a personal MT system for small messages and sublanguage domains.
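The unification packages mentioned above operate on feature structures like the F-Structures described here. A naive sketch of attribute-value unification, with recursive dicts standing in for F-Structures (a real LFG unifier also handles reentrancy and disjunction, which this omits):

```python
def unify(fs1, fs2):
    """Unify two feature structures represented as nested dicts.

    Shared attributes must unify recursively; atomic values must match
    exactly. Returns the merged structure, or None on a feature clash.
    """
    result = dict(fs1)
    for key, val in fs2.items():
        if key not in result:
            result[key] = val
        elif isinstance(result[key], dict) and isinstance(val, dict):
            sub = unify(result[key], val)
            if sub is None:
                return None            # clash inside a substructure
            result[key] = sub
        elif result[key] != val:
            return None                # atomic feature clash
    return result
```

In an LFG-style pipeline, clashes like conflicting number or gender features are exactly what blocks the generation of unacceptable word inflections.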
139

Semi-Supervised Learning for Electronic Phenotyping in Support of Precision Medicine

Halpern, Yonatan 15 December 2016 (has links)
Medical informatics plays an important role in precision medicine, delivering the right information to the right person, at the right time. With the introduction and widespread adoption of electronic medical records, in the United States and worldwide, there is now a tremendous amount of health data available for analysis.
Electronic record phenotyping refers to the task of determining, from an electronic medical record entry, a concise descriptor of the patient, comprising their medical history, current problems, presentation, etc. In inferring such a phenotype descriptor from the record, a computer, in a sense, "understands" the relevant parts of the record. These phenotypes can then be used in downstream applications such as cohort selection for retrospective studies, real-time clinical decision support, contextual displays, intelligent search, and precise alerting mechanisms.
We are faced with three main challenges:
First, the unstructured and incomplete nature of the data recorded in electronic medical records requires special attention. Relevant information can be missing or written in an obscure way that the computer does not understand.
Second, the scale of the data makes it important to develop efficient methods at all steps of the machine learning pipeline, including data collection and labeling, model learning, and inference.
Third, large parts of medicine are well understood by health professionals. How do we combine the expert knowledge of specialists with the statistical insights from the electronic medical record?
Probabilistic graphical models such as Bayesian networks provide a useful abstraction for quantifying uncertainty and describing complex dependencies in data. Although significant progress has been made over the last decade on approximate inference algorithms and structure learning from complete data, learning models with incomplete data remains one of machine learning's most challenging problems. How can we model the effects of latent variables that are not directly observed?
The first part of the thesis presents two different structural conditions under which learning with latent variables is computationally tractable. The first is the "anchored" condition, where every latent variable has at least one child that is not shared by any other parent. The second is the "singly-coupled" condition, where every latent variable is connected to at least three children that satisfy conditional independence (possibly after transforming the data).
Variables that satisfy these conditions can be specified by an expert without requiring that the entire structure or its parameters be specified, allowing for effective use of human expertise and making room for statistical learning to do some of the heavy lifting. For both the anchored and singly-coupled conditions, practical algorithms are presented.
The second part of the thesis describes real-life applications using the anchored condition for electronic phenotyping. A human-in-the-loop learning system and a functioning emergency informatics system for real-time extraction of important clinical variables are described and evaluated.
The algorithms and discussion presented here were developed for the purpose of improving healthcare, but are much more widely applicable, dealing with the very basic questions of identifiability and learning models with latent variables - a problem that lies at the very heart of the natural and social sciences.
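The "anchored" condition can be made concrete with a toy estimator: if an anchor finding is generated only by one latent phenotype, then records where the anchor fires serve as noisy positive examples of that phenotype, from which conditionals over other findings can be estimated. The variable names below are invented placeholders, and this is a deliberate simplification of the thesis's method:

```python
def anchor_conditional(records, anchor, finding):
    """Estimate P(finding = 1 | latent phenotype active) using an anchor
    as a surrogate label.

    records -- iterable of dicts mapping binary finding names to 0/1
    anchor  -- a finding assumed to fire only when the phenotype is active
    Returns a Laplace-smoothed conditional probability.
    """
    positives = [r for r in records if r.get(anchor)]
    hits = sum(1 for r in positives if r.get(finding))
    return (hits + 1) / (len(positives) + 2)  # add-one smoothing
```

The appeal of the anchored condition is that an expert need only name the anchor; the rest of the phenotype's signature can then be learned statistically from the records.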
140

An evolutionary method for training autoencoders for deep learning networks

Lander, Sean 18 November 2016 (has links)
Introduced in 2006, Deep Learning has made large strides in both supervised and unsupervised learning. The abilities of Deep Learning have been shown to beat both generic and highly specialized classification and clustering techniques with little change to the underlying concept of a multi-layer perceptron. Though this has caused a resurgence of interest in neural networks, many of the drawbacks and pitfalls of such systems have yet to be addressed after nearly 30 years: speed of training, local minima, and manual tuning of hyper-parameters. In this thesis we propose using an evolutionary technique in order to work toward solving these issues and increase the overall quality and abilities of Deep Learning Networks. In the evolution of a population of autoencoders for input reconstruction, we are able to abstract multiple features for each autoencoder in the form of hidden nodes, score the autoencoders based on their ability to reconstruct their input, and finally select autoencoders for crossover and mutation with hidden nodes as the chromosome. In this way we are able not only to find optimal abstracted feature sets quickly but also to optimize the structure of the autoencoder to match the features being selected. This also allows us to experiment with different training methods with respect to data partitioning and selection, reducing overall training time drastically for large and complex datasets. The proposed method allows even large datasets to be trained quickly and efficiently with little manual parameter choice required of the user, leading to faster, more accurate creation of Deep Learning Networks.
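The selection loop described in this abstract — score autoencoders by reconstruction error, keep the best, and produce children by crossover and mutation over hidden-node chromosomes — can be sketched as a generic elitist GA. The fitness function is a stand-in supplied by the caller; actually training an autoencoder per candidate is omitted:

```python
import random

def evolve(population, fitness, generations=10, elite=2, seed=0):
    """Elitist genetic algorithm over list-valued chromosomes.

    population -- list of chromosomes (e.g. lists of hidden-node sizes)
    fitness    -- callable returning lower-is-better reconstruction error
    Returns the best chromosome found.
    """
    rng = random.Random(seed)
    for _ in range(generations):
        ranked = sorted(population, key=fitness)
        parents = ranked[:max(elite, 2)]       # elitism: best survive as-is
        children = []
        while len(children) < len(population) - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, min(len(a), len(b)))
            child = a[:cut] + b[cut:]          # single-point crossover
            if rng.random() < 0.1:             # mutation: perturb one gene
                i = rng.randrange(len(child))
                child[i] = child[i] + rng.choice([-1, 1])
            children.append(child)
        population = parents + children
    return min(population, key=fitness)
```

Because the elite parents are carried forward unchanged, the best fitness seen never degrades across generations, which makes the loop safe to stop early.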
