  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

The Georgia Information Sharing and Analysis Center : a model for state and local governments' role in the intelligence community

English, Charles D. 06 1900 (has links)
CHDS State/Local / Approved for public release; distribution is unlimited / Since 9/11 there have been many demands for robust intelligence efforts and information sharing in the context of Homeland Security. This thesis focuses on the critical need to include local and state intelligence collection efforts in the broader intelligence community and describes a model for states to follow when creating a statewide Information Sharing and Analysis Center (ISAC). Key organizational and relationship principles are examined. Establishing state ISACs and including them as partners in the fight against terrorism benefits all levels of government at the strategic and tactical intelligence levels. Requirements for successful state-level ISACs are identified through numerous case studies focusing on the Georgia Information Sharing and Analysis Center. / Director of Operations, Georgia Emergency Management Agency author.
232

Intelligent Maintenance Aid (IMA)

Shockley, Keith J. 06 1900 (has links)
Technological complexities of current ground combat systems require advanced maintenance methods to keep the fleet in a state of operational readiness. Currently, maintenance personnel use paper Technical Manuals (TMs) that are cumbersome and not easily transported or updated in the field. This thesis proposes using the latest technology to support maintainers in the field or depot by integrating the TMs with the vehicle's onboard Built-In Test (BIT) and Fault Isolation Test (FIT) diagnostics, providing the maintainer with an improved diagnostic tool to expedite troubleshooting. This is accomplished by connecting the vehicle, via its 1553 multiplex bus, to the Graphical User Interface (GUI) of an Intelligent Maintenance Aid (IMA). The IMA uses Troubleshooting Procedure (TP) codes generated during BIT and FIT testing; based on these codes, the IMA GUI displays the relevant technical manual content to aid maintainers in their diagnostic work. The results of this thesis will serve as a baseline for further research and will be presented to the program management office for combat systems (PM-CS) for further consideration and development. / US Army RDECOM-TACOM author (civilian).
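
To make the TP-code lookup flow concrete, the Python sketch below shows one way a maintenance aid could map Troubleshooting Procedure codes reported by BIT/FIT to technical-manual content for display. The TP codes, manual references, and function names are illustrative assumptions, not details taken from the thesis.

    # Hypothetical lookup flow: TP codes reported over the vehicle's 1553 bus are
    # mapped to technical-manual troubleshooting procedures for display in a GUI.
    # All codes, manual references, and names below are invented for illustration.

    TM_PROCEDURES = {
        "TP-0041": ("TM 9-XXXX-XXX-20", "Fire control power supply fault isolation"),
        "TP-0107": ("TM 9-XXXX-XXX-20", "Turret drive BIT failure follow-up"),
    }

    def read_tp_codes_from_bus() -> list:
        """Stand-in for reading BIT/FIT results over the 1553 multiplex bus."""
        return ["TP-0041"]  # placeholder data

    def procedures_for(codes: list):
        """Resolve each TP code to the manual procedure the GUI would display."""
        for code in codes:
            manual, title = TM_PROCEDURES.get(code, ("unknown", "no procedure on file"))
            yield code, manual, title

    for code, manual, title in procedures_for(read_tp_codes_from_bus()):
        print(f"{code}: see {manual} - {title}")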
233

Human behavior representation of military teamwork

Martin, Michael W. 06 1900 (has links)
This work presents a conceptual structure for the behaviors of artificial intelligence agents, with emphasis on creating teamwork through individual behaviors. The goal is to set up a framework which enables teams of simulation agents to behave more realistically. Better team behavior can lend a higher fidelity of human behavior representation in a simulation, as well as provide opportunities to experiment with the factors that create teamwork. The framework divides agent behaviors into three categories: leadership, individual, and team-enabling. Leadership behaviors consist of planning, decision-making, and delegating. Individual behaviors consist of moving, shooting, environment-monitoring, and self-monitoring. Team-enabling behaviors consist of communicating, synchronizing actions, and team member monitoring. These team-enabling behaviors augment the leadership and individual behaviors at all phases of an agent's thought process, and create aggregate team behavior that is a hybrid of emergent and hierarchical teamwork. The net effect creates, for each agent, options and courses of action which are sub-optimal from the individual agent's standpoint, but which leverage the power of the team to accomplish objectives. The individual behaviors synergistically combine to create teamwork, allowing a group of agents to act in such a manner that their overall effectiveness is greater than the sum of their individual contributions. / US Army (USA) author.
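
As a rough illustration of the three-way taxonomy above, the Python sketch below groups hypothetical behaviors into leadership, individual, and team-enabling categories and shows how the team-enabling set augments the others; the specific behavior names are assumptions for illustration only.

    # Minimal sketch of the behavior taxonomy: team-enabling behaviors are always
    # added to the candidate set, whether the agent is acting as a leader or not.
    from dataclasses import dataclass, field

    @dataclass
    class AgentBehaviors:
        leadership: list = field(default_factory=lambda: [
            "plan", "decide", "delegate"])
        individual: list = field(default_factory=lambda: [
            "move", "shoot", "monitor_environment", "monitor_self"])
        team_enabling: list = field(default_factory=lambda: [
            "communicate", "synchronize_actions", "monitor_team_members"])

        def candidate_actions(self, is_leader: bool) -> list:
            base = self.leadership if is_leader else self.individual
            return base + self.team_enabling

    print(AgentBehaviors().candidate_actions(is_leader=False))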
234

The cost and economic corruption of the Iraq war

Spiers, Scott A. 12 1900 (has links)
This research analyzes the cost of the current war in Iraq and the corruption that is siphoning funds away from the war effort, both by Iraqis and by United States citizens and American corporations, most notably Halliburton and its subsidiary Kellogg, Brown and Root. To help limit corruption and support economic growth from Iraq's own resources, many economists, including Robert Looney, have proposed the creation of an oil fund similar to the one the state of Alaska uses, in which citizens receive a direct distribution of funds from the state government. This analysis adds to that idea by examining the types of corruption currently ongoing and the cultural and psychological reasons why Iraqis are joining terrorist and insurgent organizations. In doing so, the United States may be better able to address the key center of gravity of any insurgency: the population. / US Air Force (USAF) author.
235

Politicization and the Intelligence-Policymaker Relationship: A Comparison of the Kennedy and Trump Administrations

Orehek, Matt 01 January 2017 (has links)
The American public's well-being rests on the ability of policymakers to enact informed policy. In order for policymakers to be productive in forging policy, they must be presented with unbiased intelligence analysis. Thus, policymakers must maintain a healthy relationship with the intelligence community in order to receive accurate intelligence reports. Avoiding politicization is paramount to maintaining a healthy intelligence-policymaker relationship. Throughout the past half-century, American politicians and members of the U.S. intelligence community have sought to minimize their own political opinions when dealing with matters of national security. This thesis explores and describes the relationship between intelligence and policymaking, and examines closely how politicization of national security matters strains that relationship. It focuses on two case studies: the first concerning the Kennedy administration and the second the Trump administration. I address hostile intra-administration relations within the Kennedy administration and relate those complications to the current tensions between Trump and his intelligence services. It is concluded that, for executives, the use of confidants to conduct foreign policy negotiations and to deliberate on national security matters generates resentment and distrust from intelligence agencies. Associating with the Russian government is also a major factor leading to rifts in this relationship. For the intelligence community, biased analysis, leaks, and undermining policy positions all contribute to decreases in policymakers' confidence in their work. These forms of politicization hamper healthy intelligence-policymaker relations and lead to ineffective policy initiatives. President Trump must work with his intelligence community to curb these forms of politicization if he is to have a successful and productive presidency.
236

Discovering credible events in near real time from social media streams

Buntain, Cody 26 January 2017 (has links)
Recent reliance on social media platforms as major sources of news and information, both for journalists and the larger population and especially during times of crisis, motivates the need for better methods of identifying and tracking high-impact events in these social media streams. Social media's volume, velocity, and democratization of information (leading to limited quality controls) complicate rapid discovery of these events and one's ability to trust the content posted about these events. This dissertation addresses these complications in four stages, using Twitter as a model social platform. The first stage analyzes Twitter's response to major crises, specifically terrorist attacks in Western countries, showing these high-impact events do not significantly impact message or user volume. Instead, these events drive changes in Twitter's topic distribution, with conversation, retweets, and hashtags relevant to these events experiencing significant, rapid, and short-lived bursts in frequency. Furthermore, conversation participants tend to prefer information from local authorities/organizations/media over national or international sources, with accounts for local police or local newspapers often emerging as central in the networks of interaction. Building on these results, the second stage in this dissertation presents and evaluates a set of features that capture these topical bursts associated with crises by modeling bursts in frequency for individual tokens in the Twitter stream. The resulting streaming algorithm is capable of discovering notable moments across a series of major sports competitions using Twitter's public stream without relying on domain- or language-specific information or models. Furthermore, results demonstrate models trained on sporting competition data perform well when transferred to earthquake identification. This streaming algorithm is then extended in this dissertation's third stage to support real-time event tracking and summarization. This real-time algorithm leverages new distributed processing technology to operate at scale and is evaluated against a collection of other community-developed information retrieval systems, where it performs comparably. Further experiments also show this real-time burst detection algorithm can be integrated with these other information retrieval systems to increase overall performance. The final stage then investigates automated methods for evaluating credibility in social media streams by leveraging two existing data sets. These two data sets measure different types of credibility (veracity versus perception), and results show veracity is negatively correlated with the amount of disagreement in and length of a conversation, and perceptions of credibility are influenced by the amount of links to other pages, shared media about the event, and the number of verified users participating in the discussion. Contributions made across these four stages are then usable in the relatively new fields of computational journalism and crisis informatics, which seek to improve news gathering and crisis response by leveraging new technologies and data sources like machine learning and social media.
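
As a rough sketch of the token-level burst detection idea described above, the following Python snippet flags tokens whose frequency in the current window spikes well above their recent history. The window count, threshold, and token examples are assumptions for illustration, not the dissertation's actual model.

    # Illustrative windowed burst detector: a token is "bursty" when its count in
    # the current window exceeds its recent mean by several standard deviations.
    from collections import Counter, deque
    from statistics import mean, pstdev

    HISTORY_WINDOWS = 10   # past windows retained per token (assumed value)
    BURST_SIGMA = 3.0      # burst threshold in standard deviations (assumed value)

    history: dict = {}

    def detect_bursts(window_tokens: list) -> list:
        """Return tokens whose count in this window is anomalously high."""
        bursty = []
        for token, count in Counter(window_tokens).items():
            past = history.setdefault(token, deque(maxlen=HISTORY_WINDOWS))
            if len(past) >= 3:
                mu, sigma = mean(past), pstdev(past) or 1.0
                if count > mu + BURST_SIGMA * sigma:
                    bursty.append(token)
            past.append(count)
        return bursty

    # Feed one window of tokens per call, e.g. one minute of the public stream.
    print(detect_bursts(["goal", "goal", "goal", "match", "weather"]))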
237

Adaptive estimation techniques for resident space object characterization

LaPointe, Jamie 26 January 2017 (has links)
This thesis investigates using adaptive estimation techniques to determine unknown model parameters, such as size and surface material reflectivity, while estimating the position, velocity, attitude, and attitude rates of a resident space object. This work focuses on the application of these methods to the space situational awareness problem. This thesis proposes a unique method of implementing a top-level gating network in a dual-layer hierarchical mixture of experts. In addition, it proposes a decaying learning parameter for use in both the single-layer mixture of experts and the dual-layer hierarchical mixture of experts. Both are compared to multiple model adaptive estimation in estimating resident space object parameters such as size and reflectivity. The hierarchical mixture of experts consists of macromodes, each of which can estimate a different parameter in parallel. Each macromode is a single-layer mixture of experts with unscented Kalman filters used as the experts. A gating network in each macromode determines a gating weight which is used as a hypothesis tester; the macromode gating weights then feed a top-level gating weight that determines which macromode contains the most probable model. The measurements consist of astrometric and photometric data from non-resolved observations of the target gathered via a telescope with a charge-coupled device camera. Each filter receives the same measurement sequence. The apparent magnitude measurement model uses the Ashikhmin-Shirley bidirectional reflectance distribution function. The measurements, process models, and the additional shape, mass, and inertia characteristics allow the algorithm to predict the state and select the most probable fit to the size and reflectance characteristics based on the statistics of the measurement residuals and innovation covariance. A simulation code is developed to test these adaptive estimation techniques, and the feasibility of these methods is demonstrated in this thesis.
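
The gating-weight update at the heart of this kind of hypothesis testing can be illustrated with a simplified scalar example: each candidate model's weight is reweighted by the likelihood of its measurement residual and then normalized. The numbers and function below are a stand-in sketch, not the thesis's unscented-Kalman-filter implementation.

    # Simplified scalar sketch of Bayes-style gating-weight updates: models whose
    # residuals are small relative to their innovation variance gain weight.
    import numpy as np

    def update_gating_weights(weights, residuals, innovation_vars):
        """Reweight model hypotheses using Gaussian residual likelihoods."""
        likelihoods = np.exp(-0.5 * residuals**2 / innovation_vars) / np.sqrt(
            2.0 * np.pi * innovation_vars)
        posterior = weights * likelihoods
        return posterior / posterior.sum()

    # Two candidate reflectivity models (hypothetical values) within one macromode.
    weights = np.array([0.5, 0.5])
    residuals = np.array([0.2, 1.5])          # measurement minus prediction
    innovation_vars = np.array([0.25, 0.25])  # innovation variances (scalar case)
    print(update_gating_weights(weights, residuals, innovation_vars))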
238

Data-driven computer vision for science and the humanities

Lee, Stefan 05 November 2016 (has links)
The rate at which humanity is producing visual data from both large-scale scientific imaging and consumer photography has been greatly accelerating in the past decade. This thesis is motivated by the hypothesis that this trend will necessarily change the face of observational science and the humanities, requiring the development of automated methods capable of distilling vast image collections to produce meaningful analyses. Such methods are needed to empower novel science both by improving throughput in traditionally quantitative disciplines and by developing new techniques to study culture through large scale image datasets.
When computer vision or machine learning in general is leveraged to aid academic inquiry, it is important to consider the impact of erroneous solutions produced by implicit ambiguity or model approximations. To that end, we argue for the importance of algorithms that are capable of generating multiple solutions and producing measures of confidence. In addition to providing solutions to a number of multi-disciplinary problems, this thesis develops techniques to address these overarching themes of confidence estimation and solution diversity.
This thesis investigates a diverse set of problems across a broad range of studies including glaciology, developmental psychology, architectural history, and demography to develop and adapt computer vision algorithms to solve these domain-specific applications. We begin by proposing vision techniques for automatically analyzing aerial radar imagery of polar ice sheets while simultaneously providing glaciologists with point-wise estimates of solution confidence. We then move to psychology, introducing novel recognition techniques to produce robust hand localizations and segmentations in egocentric video to empower psychologists studying child development with automated annotations of grasping behaviors integral to learning. We then investigate novel large-scale analysis for architectural history, leveraging tens of thousands of publicly available images to identify and track distinctive architectural elements. Finally, we show how rich estimates of demographic and geographic properties can be predicted from a single photograph.
239

Design and implementation of an English to Arabic machine translation (MEANA MT)

Alneami, Ahmed H. January 2001 (has links)
A new system for Arabic Machine Translation (called MEANA MT) has been built. This system is capable of the analysis of the English language as a source and can convert the given sentences into Arabic. The designed system contains three sets of grammar rules governing the PARSING, TRANSFORMATION AND GENERATION PHASES. In the system, word sense ambiguity and some pragmatic patterns were resolved. A new two-way (Analysis/Generation) computational lexicon system dealing with the morphological analysis of the Arabic language has been created. The designed lexicon contains a set of rules governing the morphological inflection and derivation of Arabic nouns, verbs, the verb "to be", the verb "not to be" and pronouns. The lexicon generates Arabic word forms and their inflectional affixes, such as plural and gender morphemes as well as attached pronouns, each according to its rules; it cannot parse or generate unacceptable word inflections. This computational system is capable of dealing with vowelized Arabic words by parsing the vowel marks which are attached to the letters. Semantic value pairs were developed to show the word sense and other issues in morphology, e.g. genders, numbers and tenses. The system can parse and generate some pragmatic sentences and phrases such as proper names, titles, acknowledgements, dates, telephone numbers and addresses. A Lexical Functional Grammar (LFG) formalism is used to combine the syntactic, morphological and semantic features. The grammar rules of this system were implemented and compiled in COMMON LISP, based on Tomita's Generalised LR parsing algorithm, augmented by Pseudo and Full Unification packages. After parsing, the constituents of the English sentence are represented as Feature Structures (F-Structures). These take part in the transfer and generation process, which uses transformation grammar rules to change the English F-Structure into an Arabic F-Structure. These Arabic F-Structure features are then suitable for the Arabic generation grammar to build the required Arabic sentence. This system has been tested on three domains (sentences and phrases): the first is a selected children's story, the second semantic sentences, and the third pragmatic sentences. This research could be considered a complete solution for a personal MT system for small messages and sublanguage domains.
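
To illustrate the transfer step, the toy Python sketch below maps a simplified English F-Structure to an Arabic-oriented one by substituting predicates and adding agreement features. The feature names, transliterations, and rules are invented for illustration and are not the MEANA MT rule set.

    # Toy transfer step: recursively map English predicates to Arabic ones and add
    # verb agreement features copied from the subject. Purely illustrative.
    english_fstructure = {
        "pred": "read",
        "subj": {"pred": "boy", "num": "sg", "def": True},
        "obj":  {"pred": "book", "num": "sg", "def": True},
        "tense": "past",
    }

    TRANSFER_LEXICON = {"read": "qara'a", "boy": "walad", "book": "kitab"}

    def transfer(fs: dict) -> dict:
        """Produce an Arabic-oriented F-Structure from an English one."""
        out = {}
        for key, value in fs.items():
            if isinstance(value, dict):
                out[key] = transfer(value)
            elif key == "pred":
                out[key] = TRANSFER_LEXICON.get(value, value)
            else:
                out[key] = value
        if "subj" in out:
            # Agreement: copy number from the subject; gender defaulted here.
            out["agr"] = {"num": out["subj"].get("num", "sg"), "gend": "masc"}
        return out

    print(transfer(english_fstructure))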
240

Semi-Supervised Learning for Electronic Phenotyping in Support of Precision Medicine

Halpern, Yonatan 15 December 2016 (has links)
Medical informatics plays an important role in precision medicine, delivering the right information to the right person at the right time. With the introduction and widespread adoption of electronic medical records, in the United States and worldwide, there is now a tremendous amount of health data available for analysis.
Electronic record phenotyping refers to the task of determining, from an electronic medical record entry, a concise descriptor of the patient, comprising their medical history, current problems, presentation, etc. In inferring such a phenotype descriptor from the record, a computer, in a sense, "understands" the relevant parts of the record. These phenotypes can then be used in downstream applications such as cohort selection for retrospective studies, real-time clinical decision support, contextual displays, intelligent search, and precise alerting mechanisms.
We are faced with three main challenges. First, the unstructured and incomplete nature of the data recorded in electronic medical records requires special attention: relevant information can be missing or written in an obscure way that the computer does not understand. Second, the scale of the data makes it important to develop efficient methods at all steps of the machine learning pipeline, including data collection and labeling, model learning, and inference. Third, large parts of medicine are well understood by health professionals; how do we combine the expert knowledge of specialists with the statistical insights from the electronic medical record?
Probabilistic graphical models such as Bayesian networks provide a useful abstraction for quantifying uncertainty and describing complex dependencies in data. Although significant progress has been made over the last decade on approximate inference algorithms and structure learning from complete data, learning models with incomplete data remains one of machine learning's most challenging problems. How can we model the effects of latent variables that are not directly observed?
The first part of the thesis presents two different structural conditions under which learning with latent variables is computationally tractable. The first is the "anchored" condition, where every latent variable has at least one child that is not shared by any other parent. The second is the "singly-coupled" condition, where every latent variable is connected to at least three children that satisfy conditional independence (possibly after transforming the data). Variables that satisfy these conditions can be specified by an expert without requiring that the entire structure or its parameters be specified, allowing for effective use of human expertise and making room for statistical learning to do some of the heavy lifting. For both the anchored and singly-coupled conditions, practical algorithms are presented.
The second part of the thesis describes real-life applications using the anchored condition for electronic phenotyping. A human-in-the-loop learning system and a functioning emergency informatics system for real-time extraction of important clinical variables are described and evaluated.
The algorithms and discussion presented here were developed for the purpose of improving healthcare, but are much more widely applicable, dealing with the very basic questions of identifiability and learning models with latent variables, a problem that lies at the very heart of the natural and social sciences.
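
A small sketch of the anchored condition described above: every latent variable must have at least one observed child that no other latent variable points to. The latent variables and children in the Python snippet below are hypothetical examples, not the thesis's clinical models.

    # Check the anchored condition on a toy latent-variable structure.
    # latent variable -> set of observed children (e.g., findings in a record)
    structure = {
        "pneumonia": {"infiltrate_on_xray", "cough", "fever"},
        "influenza": {"positive_flu_swab", "cough", "fever"},
    }

    def find_anchors(parents: dict) -> dict:
        """Return, for each latent variable, the children unique to it."""
        anchors = {}
        for latent, children in parents.items():
            others = set().union(*(c for l, c in parents.items() if l != latent))
            anchors[latent] = children - others
        return anchors

    anchors = find_anchors(structure)
    print(anchors)                # unique (anchor) children per latent variable
    print(all(anchors.values()))  # True iff every latent variable has an anchor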
