  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Link discovery in very large graphs by constructive induction using genetic programming

Weninger, Timothy Edwards January 1900
Master of Science / Department of Computing and Information Sciences / William H. Hsu / This thesis discusses the background and methodologies necessary for constructing features in order to discover hidden links in relational data. Specifically, we consider the problems of predicting, classifying and annotating friends relations in friends networks, based upon features constructed from network structure and user profile data. I first document a data model for the blog service LiveJournal, and define a set of machine learning problems such as predicting existing links and estimating inter-pair distance. Next, I explain how the problem of classifying a user pair in a social network as directly connected or not poses the problem of selecting and constructing relevant features. In order to construct these features, a genetic programming approach is used to construct multiple symbol trees with base features as their leaves; in this manner, the genetic program selects and constructs features that may not have been considered, but possess better predictive properties than the base features. In order to extract certain graph features from the relatively large social network, a new shortest path search algorithm is presented which computes and operates on a Euclidean embedding of the network. Finally, I present classification results and discuss the properties of the frequently constructed features in order to gain insight into hidden relations that exist in this domain.
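The symbol-tree construction described in this abstract can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: the operator set and the base features (`common_neighbors`, `profile_overlap`, `in_degree`) are hypothetical stand-ins for network-structure and profile features.

```python
import random

# Minimal sketch of constructed features as symbol trees with base features
# as leaves. Operator set and feature names are illustrative assumptions.
OPS = {"+": lambda a, b: a + b,
       "*": lambda a, b: a * b,
       "max": max}

def random_tree(features, depth=2):
    """Grow a random expression tree whose leaves are base-feature names."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(features)  # leaf
    op = random.choice(list(OPS))
    return (op,
            random_tree(features, depth - 1),
            random_tree(features, depth - 1))

def evaluate(tree, values):
    """Evaluate a symbol tree against a dict of base-feature values."""
    if isinstance(tree, str):
        return values[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, values), evaluate(right, values))

# A hand-built constructed feature over hypothetical base features:
tree = ("+", "common_neighbors", ("*", "profile_overlap", "in_degree"))
print(evaluate(tree, {"common_neighbors": 3,
                      "profile_overlap": 0.5,
                      "in_degree": 4}))  # 5.0
```

A genetic program would maintain a population of such trees, applying crossover and mutation and keeping the trees whose outputs best predict the friends relation.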
2

Modeling humans as peers and supervisors in computing systems through runtime models

Zhong, Christopher January 1900
Doctor of Philosophy / Department of Computing and Information Sciences / Scott A. DeLoach / There is a growing demand for more effective integration of humans and computing systems, specifically in multiagent and multirobot systems. There are two aspects to consider in human integration: (1) the ability to control an arbitrary number of robots (particularly heterogeneous robots) and (2) integrating humans as peers in computing systems instead of just users or supervisors. With traditional supervisory control of multirobot systems, the number of robots that a human can manage effectively is between four and six [17]. A limitation of traditional supervisory control is that the human must interact individually with each robot, which limits the upper bound on the number of robots that a human can control effectively. In this work, I define the concept of "organizational control" together with an autonomous mechanism that can perform task allocation and other low-level housekeeping duties, which significantly reduces the need for the human to interact with individual robots. Humans are very versatile and robust in the types of tasks they can accomplish. Failures in computing systems, however, are common, so redundancies are included to mitigate the chance of failure. When all redundancies have failed, the computing system will be unable to accomplish its tasks. One way to further reduce the chance of such a system failure is to integrate humans as peer "agents" in the computing system. As part of the system, humans can be assigned tasks that would otherwise have been impossible to complete due to failures.
3

Predicting the behavior of robotic swarms in discrete simulation

Lancaster, Joseph Paul, Jr January 1900
Doctor of Philosophy / Department of Computing and Information Sciences / David Gustafson / We use probabilistic graphs to predict the location of swarms over 100 steps in simulations in grid worlds. One graph can be used to make predictions for worlds of different dimensions. The worlds are constructed from a single 5x5 square pattern, each square of which may be either unoccupied or occupied by an obstacle or a target. Simulated robots move through the worlds avoiding the obstacles and tagging the targets. The interactions among the robots, and between the robots and the environment, lead to behavior that, even in deterministic simulations, can be difficult to anticipate. The graphs capture the local rate and direction of swarm movement through the pattern. The graphs are used to create a transition matrix, which, along with an occupancy matrix, can be used to predict the occupancy of the patterns over the 100 steps using 100 matrix multiplications. In the future, the graphs could be used to predict the movement of physical swarms through patterned environments such as city blocks in applications such as disaster response search and rescue. The predictions could assist in the design and deployment of such swarms and help rule out undesirable behavior.
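The prediction step admits a compact numerical sketch. The 3-cell world and transition probabilities below are invented for illustration only; the thesis derives its transition matrix from the local rate and direction of swarm movement through the 5x5 pattern.

```python
import numpy as np

# Row-stochastic transition matrix: T[i, j] = P(cell j at t+1 | cell i at t).
# The values are illustrative assumptions, not measured swarm rates.
T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

occupancy = np.array([1.0, 0.0, 0.0])  # the whole swarm starts in cell 0

# 100 prediction steps = 100 vector-matrix multiplications.
for _ in range(100):
    occupancy = occupancy @ T

print(occupancy.round(3))  # predicted occupancy distribution after 100 steps
```

For this toy chain the occupancy has essentially converged to the stationary distribution by step 100, which is the kind of long-run behavior such predictions are meant to expose.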
4

MASSPEC: multiagent system specification through policy exploration and checking

Harmon, Scott J. January 1900
Doctor of Philosophy / Department of Computing and Information Sciences / Scott A. DeLoach / Multiagent systems have been proposed as a way to create reliable, adaptable, and efficient systems. As these systems grow in complexity, their configuration, tuning, and design can become as complex as the problems they claim to solve. As researchers in multiagent systems engineering, we must create the next generation of theories and tools to help tame this growing complexity and take some of the burden off the systems engineer. In this thesis, I propose guidance policies as a way to do just that. I also give a framework for multiagent system design that uses the concept of guidance policies to automatically generate a set of constraints from a set of multiagent system models, and I provide an implementation for generating code that conforms to these constraints. Presenting a formal definition of guidance policies, I show how they can be used in a machine learning context to improve the performance of a system and avoid failures. I also give a practical demonstration of converting abstract requirements to concrete system requirements (with respect to a given set of design models).
5

Event recognition in epizootic domains

Bujuru, Swathi January 1900
Master of Science / Department of Computing and Information Sciences / William H. Hsu / In addition to named entities such as persons, locations, organizations, and quantities, which convey factual information, there are other entities and attributes that relate identifiable objects in the text and can provide valuable additional information. In the field of epizootics, these include specific properties of diseases such as their name, location, species affected, and current confirmation status. These are important for compiling the spatial and temporal statistics and other information needed to track diseases, leading to applications such as the detection and prevention of bioterrorism. Toward this objective, we present a system (Rule Based Event Extraction System in Epizootic Domains) that automatically extracts infectious disease outbreaks from unstructured data using pattern matching. In addition to extracting events, the components of this system can provide structured and summarized data that can be used to differentiate confirmed events from suspected ones, answer questions regarding when and where a disease was prevalent, develop a model for predicting future disease outbreaks, and support visualization through interfaces such as Google Maps. In developing this system, we consider research issues that include document relevance classification, entity extraction, recognition of outbreak events in the disease domain, and support for event visualization. We present a sentence-based approach for extracting outbreak events from the epizootic domain, with two tasks: extracting event fields such as the disease name, location, species, confirmation status, and date; and classifying events into two confirmation-status categories, confirmed or suspected.
The approach shows how confirmation status matters when extracting disease events from unstructured data, and a pyramid approach using reference summaries is used to evaluate the extracted events.
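A minimal sketch of rule-based extraction by pattern matching; the rule, field names, and sample sentence below are illustrative assumptions rather than the system's actual rules.

```python
import re

# One hypothetical extraction rule covering the fields named in the abstract:
# confirmation status, disease, species, location, and date.
PATTERN = re.compile(
    r"(?P<status>Confirmed|Suspected) outbreak of (?P<disease>[\w ]+?)"
    r" in (?P<species>[\w ]+?) reported in (?P<location>[\w ,]+?)"
    r" on (?P<date>\d{4}-\d{2}-\d{2})"
)

def extract_event(sentence):
    """Return a structured event dict, or None if no rule matches."""
    m = PATTERN.search(sentence)
    return m.groupdict() if m else None

event = extract_event(
    "Confirmed outbreak of avian influenza in poultry "
    "reported in Salina, Kansas on 2007-03-14"
)
print(event)
```

A real system would hold many such rules and sit downstream of the document relevance classification and entity extraction stages the abstract describes.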
6

Engineering complex systems with multigroup agents

Case, Denise Marie January 1900
Doctor of Philosophy / Computing and Information Sciences / Scott A. DeLoach / As sensor prices drop and computing devices continue to become more compact and powerful, computing capabilities are being embedded throughout our physical environment. Connecting these devices in cyber-physical systems (CPS) enables applications with significant societal impact and economic benefit. However, engineering CPS poses modeling, architecture, and engineering challenges, many of which must be addressed before the desired benefits can be fully realized. For the cyber parts of CPS, two decades of work in the design of autonomous agents and multiagent systems (MAS) offers design principles for distributed intelligent systems and formalizations for agent-oriented software engineering (AOSE). MAS foundations offer a natural fit for enabling distributed interacting devices. In some cases, complex control structures such as holarchies can be advantageous. These can motivate complex organizational strategies when implementing such systems with a MAS, and some designs may require agents to act in multiple groups simultaneously. Such agents must be able to manage their multiple associations and assignments in a consistent and unambiguous way. This thesis shows how designing agents as systems of intelligent subagents offers a reusable and practical approach to designing complex systems. It presents a set of flexible, reusable components developed for OBAA++, an organization-based architecture for single-group MAS, and shows how these components were used to develop the Adaptive Architecture for Systems of Intelligent Systems (AASIS) to enable multigroup agents suitable for complex, multigroup MAS.
This work illustrates the reusability and flexibility of the approach by using AASIS to simulate a CPS for an intelligent power distribution system (IPDS) operating two multigroup MAS concurrently: one providing continuous voltage control and a second conducting discrete power auctions near sources of distributed generation.
7

On improving natural language processing through phrase-based and one-to-one syntactic algorithms

Meyer, Christopher Henry January 1900
Master of Science / Department of Computing and Information Sciences / William H. Hsu / Machine Translation (MT) is the practice of using computational methods to convert words from one natural language to another. Several approaches have been created since MT’s inception in the 1950s and, with the vast increase in computational resources since then, have continued to evolve and improve. In this thesis I summarize several branches of MT theory and introduce several newly developed software applications, several parsing techniques to improve Japanese-to-English text translation, and a new key algorithm to correct translation errors when converting from Japanese kanji to English. The overall translation improvement is measured using the BLEU metric (an objective, numerical standard in Machine Translation quality analysis). The baseline translation system was built by combining Giza++, the Thot Phrase-Based SMT toolkit, the SRILM toolkit, and the Pharaoh decoder. The input and output parsing applications were created as intermediaries to improve the baseline MT system while avoiding artificially high improvement metrics. This baseline was measured with and without the additional parsing provided by the thesis software applications, and also with and without the thesis kanji correction utility. The new algorithm corrected many of the contextual definition mistakes that are common when converting from Japanese to English text. By training the new kanji correction utility on an existing dictionary, identifying source text in Japanese with a high number of possible translations, and checking the baseline translation against other translation possibilities, I was able to increase the translation performance of the baseline system from minimum normalized BLEU scores of .0273 to maximum normalized scores of .081.
The preliminary phase of making improvements to Japanese-to-English translation focused on correcting segmentation mistakes that occur when attempting to parse Japanese text into meaningful tokens. The initial increase is not indicative of future potential and is artificially high because the baseline score was so low to begin with, but it was needed to establish a reasonable baseline. The final results of the tests confirmed that a significant, measurable improvement had been achieved by improving the initial segmentation of the Japanese text through parsing the input corpora and by correcting kanji translations after the Pharaoh decoding process had completed.
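The BLEU metric used for evaluation can be sketched as modified n-gram precision combined with a brevity penalty. This single-reference, unsmoothed version is illustrative only; the thesis's normalized scores come from standard tooling rather than a sketch like this.

```python
import math
from collections import Counter

# Minimal single-reference BLEU sketch: modified n-gram precision with a
# brevity penalty. Illustrative only; real evaluations use standard tooling.
def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(count, ref[g]) for g, count in cand.items())  # clipped
        if overlap == 0:
            return 0.0
        precisions.append(overlap / sum(cand.values()))
    brevity = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

cand = "the cat sat on the mat".split()
ref = "the cat sat on a mat".split()
print(round(bleu(cand, ref), 3))  # 0.707
```

Clipping each candidate n-gram count by its count in the reference is what makes the precision "modified": a candidate cannot earn credit by repeating a reference word.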
8

A multi-objective GP-PSO hybrid algorithm for gene regulatory network modeling

Cai, Xinye January 1900
Doctor of Philosophy / Department of Electrical and Computer Engineering / Sanjoy Das / Stochastic algorithms are widely used in various modeling and optimization problems. Evolutionary algorithms are one class of population-based stochastic approaches, inspired by Darwinian evolutionary theory. A population of candidate solutions is initialized at the first generation of the algorithm. Two variation operators, crossover and mutation, which mimic the real-world evolutionary process, are applied to the population to produce new solutions from old ones. Selection based on the concept of survival of the fittest is used to preserve parent solutions for the next generation. Examples of such algorithms include the genetic algorithm (GA) and genetic programming (GP). Other stochastic algorithms are inspired by animal behavior, such as particle swarm optimization (PSO), which imitates the cooperation of a flock of birds. In addition, stochastic algorithms are able to address multi-objective optimization problems by using the concept of dominance. Accordingly, a set of solutions that do not dominate each other is obtained, instead of just one best solution. This thesis proposes a multi-objective GP-PSO hybrid algorithm to recover gene regulatory network models that take environmental data as stimulus input. The algorithm infers a model based on both phenotypic and gene expression data. The proposed approach is able to simultaneously infer network structures and estimate their associated parameters, instead of doing one or the other iteratively as other algorithms must. In addition, a non-dominated sorting approach and an adaptive histogram method based on the hypergrid strategy are adopted to address ‘convergence’ and ‘diversity’ issues in multi-objective optimization.
Gene network models obtained from the proposed algorithm are compared, visually and numerically, to a synthetic network that mimics key features of the Arabidopsis flowering control system. Data predicted by the model are compared to synthetic data to verify that they closely approximate the available phenotypic and gene expression data. At the end of this thesis, a novel breeding strategy, termed network assisted selection (NAS), is proposed as an extension of our hybrid approach and an application of the obtained models to plant breeding. Breeding simulations based on network assisted selection are compared to a common breeding strategy, marker assisted selection. The results show that NAS is better in terms of both breeding speed and final phenotypic level.
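The non-dominated sorting step rests on Pareto dominance, which can be sketched as follows; the two-objective population is invented for illustration, and both objectives are assumed to be minimized.

```python
# Minimal sketch of Pareto dominance and first-front extraction, the core of
# non-dominated sorting. Objective vectors are illustrative; both minimized.
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(population):
    """Return the solutions no other solution dominates (first Pareto front)."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

pop = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
print(non_dominated(pop))  # [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
```

Full non-dominated sorting repeats this: remove the first front from the population, extract the next front from the remainder, and so on until every solution is ranked.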
9

Intelligent adaptive environments: proposal for inclusive, interactive design enabling the creation of an interconnected public open space on the Iron Horse trestle interurban-railroad-subway [St. Louis, Missouri]

Anterola, Jeremy K. January 1900
Master of Landscape Architecture / Department of Landscape Architecture/Regional and Community Planning / Stephanie A. Rolley / Economically insecure times require reduction of energy and land consumption, enhancement of socio-economic and environmental quality of life, and reutilization of neglected existing structures and sites. Traditional planning and design dictates through top-down policy and ordered master planning. In contrast, interactive smart technology simulating human cognitive reactions offers an alternative design framework – an intelligent, adaptive environment – capable of redefining contemporary public open space design. Traversing through the neglected Fifth Ward north of downtown St. Louis, the adaptive reutilization of the abandoned Iron Horse Trestle interurban elevated railroad and subway applies the Sense Respond Adapt Mutate Emerge conceptual framework (the S.R.A.M.E. Strategy) by utilizing existing resources to create an interconnected, emergent open space network. Ten unique sites along the Iron Horse Trestle are initially embedded with sensory devices capable of gathering and synthesizing learned information. The real-time actions translate into physical structural responses. The site-specific reactions extend outwards as structural adaptations to indeterminate changes from trail users. The evolving structural form connects and mutates the existing structure. Similar to a Choose your own adventure gamebook, the Trestle’s open-ended and reactive programmatic strategies emerge as a series of potential options for future inclusionary, interactive designs. By selectively enhancing, creating, or enabling an open space system reacting to real-time actual user needs over time directly along the Trestle line, the S.R.A.M.E.
Strategy offers a potential alternative framework for the indirect revitalization of neglected infrastructural and economic conditions, a residential rejuvenation catalyst, and future socio-economic and ecological sustainable living patterns education tool. The Trestle’s revitalization serves as an education tool critiquing contemporary landscape architecture and general design practice - the static, dictated, and consumptive. Intelligent adaptive environments offer an alternative framework enabling interactive design decision making capabilities to the users as options evolving over time.
10

Continuous-time infinite dynamic topic models

Elshamy, Wesam Samy January 1900
Doctor of Philosophy / Department of Computing and Information Sciences / William Henry Hsu / Topic models are probabilistic models for discovering topical themes in collections of documents. In real-world applications, these models provide us with the means of organizing what would otherwise be unstructured collections. They can help us cluster a huge collection into different topics or find a subset of the collection that resembles the topical theme found in an article at hand. The first wave of topic models developed were able to discover the prevailing topics in a big collection of documents spanning a period of time. It was later realized that these time-invariant models were not capable of modeling 1) the time-varying number of topics they discover and 2) the time-changing structure of these topics. A few models were developed to address these two deficiencies. The online-hierarchical Dirichlet process models the documents with a time-varying number of topics, and it varies the structure of the topics over time as well; however, it relies on document order, not timestamps, to evolve the model over time. The continuous-time dynamic topic model evolves topic structure in continuous time, but it uses a fixed number of topics over time. In this dissertation, I present a model, the continuous-time infinite dynamic topic model, that combines the advantages of these two models: 1) the online-hierarchical Dirichlet process and 2) the continuous-time dynamic topic model. More specifically, the model I present is a probabilistic topic model that 1) changes the number of topics over continuous time and 2) changes the topic structure over continuous time. I compared the model I developed with the two other models under different settings. The results obtained were favorable to my model and showed the need for a model that has a continuous-time-varying number of topics and topic structure.
