141. Itemset size-sensitive interestingness measures for association rule mining and link prediction

Aljandal, Waleed A. January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / William H. Hsu / Association rule learning is a data mining technique that can capture relationships between pairs of entities in different domains. The goal of this research is to discover factors from data that can improve the precision, recall, and accuracy of association rules found using interestingness measures and frequent itemset mining. Such factors can be calibrated using validation data and applied to rank candidate rules in domain-dependent tasks such as link existence prediction. In addition, I use interestingness measures themselves as numerical features to improve link existence prediction. The focus of this dissertation is on developing and testing an analytical framework for association rule interestingness measures, to make them sensitive to the relative size of itemsets. I survey existing interestingness measures and then introduce adaptive parametric models for normalizing and optimizing these measures, based on the size of itemsets containing a candidate pair of co-occurring entities. The central thesis of this work is that in certain domains, the link strength between entities is related to the rarity of their shared memberships (i.e., the size of itemsets in which they co-occur), and that a data-driven approach can capture such properties by normalizing the quantitative measures used to rank associations. To test this hypothesis under different levels of variability in itemset size, I develop several test bed domains, each containing an association rule mining task and a link existence prediction task. The definitions of itemset membership and link existence in each domain depend on its local semantics. 
My primary goals are: to capture quantitative aspects of these local semantics in normalization factors for association rule interestingness measures; to represent these factors as quantitative features for link existence prediction; to apply them to significantly improve precision and recall in several real-world domains; and to build an experimental framework for measuring this improvement, using information theory and classification-based validation.
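The core idea of making a measure sensitive to itemset size can be sketched as follows. This is a minimal illustration under assumptions of my own: the inverse-size weighting scheme and the parameter `alpha` are hypothetical stand-ins for the dissertation's adaptive parametric models, which are calibrated on validation data.

```python
def size_weighted_support(itemsets, pair, alpha=1.0):
    """Support of a co-occurring pair, with each itemset's contribution
    down-weighted by its size: weight = 1 / len(itemset)**alpha.
    With alpha = 0 this reduces to plain (unweighted) support.
    The weighting scheme is illustrative, not the dissertation's model."""
    a, b = pair
    total = 0.0
    hits = 0.0
    for s in itemsets:
        w = 1.0 / (len(s) ** alpha)
        total += w
        if a in s and b in s:
            hits += w
    return hits / total if total else 0.0

# Toy data: co-occurrence inside small (rare) itemsets counts for more
# than co-occurrence inside one large "catch-all" itemset.
itemsets = [{"a", "b"}, {"a", "b", "c"}, {"a"}, {"a", "b", "c", "d", "e"}]
print(round(size_weighted_support(itemsets, ("a", "b"), alpha=1.0), 4))  # 0.5082
print(size_weighted_support(itemsets, ("a", "b"), alpha=0.0))            # 0.75
```

Note how the size-weighted value falls below the plain support: the pair's co-occurrence in the five-item set is discounted relative to its co-occurrence in the two-item set.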
142. Graphical product-line configuration of nesC-based sensor network applications using feature models

Niederhausen, Matthias January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / John M. Hatcliff / Developing a wireless sensor network application involves a variety of tasks, such as coding the implementation, designing the architecture, and assessing the availability of hardware components that provide the necessary capabilities. Before compiling an application, the developer has to configure the selection of hardware components and set up the required parameters. One has to choose from among a variety of communication parameters, such as frequency, channel, subnet identifier, and transmission power. This configuration step also includes setting up parameters for the selection of hardware components, such as a specific hardware platform, which sensor boards and programmer boards to use, and whether to use optional services. Reasoning about a proper selection of configuration parameters is often difficult, since there are many dependencies among these parameters, some of which rule out other options. The developer has to know all of these constraints in order to pick a valid configuration. Unfortunately, the existing makefile approach that comes with nesC is poorly organized and does not capture important compatibility constraints. The configuration of a particular nesC application is distributed across multiple makefiles, so a developer has to look at several files to make sure all necessary parameters are set up correctly for compiling a specific application. Furthermore, without analyzing all the makefiles, it is unclear what the total configurability of a nesC application is and what options and parameters are provided (e.g., whether there is a parameter for enabling secure communication). In addition, the makefile approach tends to be error-prone, since the developer has to type in variable names and values manually that must match the existing implementation.
However, the existing configuration system does not capture important compatibility constraints, such as the capabilities of selected hardware components. This thesis proposes the use of feature models to configure nesC-based sensor network applications. We provide a tool-supported framework to model valid configurations and a generator that translates this model into a makefile compatible with the existing nesC infrastructure. The framework automatically rules out selections of incompatible features using a built-in constraint language. Since all variables are defined in the model, misspellings of variable names are reduced, and the domains of variables are clearly defined because most variables come with all of their possible options. A developer simply chooses among them by enabling certain features; the problem of cardinality is also handled by the model. We present a detailed analysis of nesC's variability domain and show how to use feature models to cover the exact behavior of nesC's makefile approach. In a subsequent chapter we simplify our feature model and include the selection of specific hardware components, their capabilities, and their dependencies. The feature model and the makefile generator offer a convenient way to configure nesC applications that is faster, easier to understand and handle, and more transparent, and the configuration tool can be adopted into an existing development environment.
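The kind of constraint checking a feature-model configurator performs can be sketched with simple excludes/requires rules. This is a generic illustration, not the thesis's constraint language, and the feature names below are hypothetical rather than nesC's actual options.

```python
def valid(selection, excludes, requires):
    """Check a feature selection against pairwise constraints, the way a
    feature-model configurator rules out invalid configurations before
    a makefile is ever generated. Constraints are (a, b) pairs:
    excludes means a and b cannot both be selected; requires means
    selecting a demands that b also be selected."""
    for a, b in excludes:
        if a in selection and b in selection:
            return False
    for a, b in requires:
        if a in selection and b not in selection:
            return False
    return True

# Hypothetical feature names for illustration.
excludes = [("micaz_platform", "telosb_platform")]   # at most one platform
requires = [("secure_comm", "crypto_service")]       # security needs crypto

print(valid({"micaz_platform", "secure_comm", "crypto_service"},
            excludes, requires))                      # True
print(valid({"micaz_platform", "telosb_platform"},
            excludes, requires))                      # False
```

A real feature model adds cardinality and hierarchy on top of such constraints, but the benefit over hand-edited makefiles is the same: invalid combinations are rejected mechanically instead of failing at compile or run time.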
143. Robust communication for location-aware mobile robots using motes

Mulanda, Brian Wise January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / David A. Gustafson / The best mode of communication for a team of mobile robots deployed to cooperatively perform a particular task is the exchange of messages. To facilitate such exchange, a communication network is required. When successful execution of the task hinges on communication, the network needs to be robust: sufficiently reliable and secure. The absence of a fixed network infrastructure rules out traditional wire-based communication strategies, as well as an 802.11-based wireless network, which would require an access point. In such a case, only an ad hoc wireless network is practical. This thesis presents a robust wireless communication solution for mobile robots using motes. Motes, sometimes referred to as smart dust, are small, low-cost, low-power computing devices equipped with radio frequency (RF) wireless communication capability. Motes have been applied widely in wireless sensor networks, where they are typically connected to sensors and used to gather information about their environment. Communication in a mote network is inherently unreliable due to message loss, is exposed to attacks, and supports only very low bandwidth. Additional mechanisms are therefore required in order to achieve robust communication. Multi-hop routing must be used to overcome the short signal transmission range. The ability of a mobile robot to determine its present location can be exploited in building an appropriate routing protocol; when available, information about a mobile robot's future location can further aid the routing process. To guarantee message delivery, a transport protocol is necessary. Optimal packet sizes should be chosen for the best network throughput. To protect the wireless network from attacks, an efficient security protocol can be used.
This thesis describes the hardware setup, software configuration, and a network protocol for a team of mobile robots that use motes for robust wireless communication. The thesis also presents results of experiments performed.
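One common way to exploit robot location information for multi-hop routing is greedy geographic forwarding: each node hands the packet to the neighbor closest to the destination. The sketch below illustrates that general idea only; it is not this thesis's actual protocol, and the positions are hypothetical.

```python
import math

def greedy_geographic_next_hop(current, neighbors, dest):
    """Pick the neighbor geographically closest to the destination.
    Returns None when no neighbor is closer than the current node
    (a local minimum, where greedy forwarding alone gets stuck)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    best = min(neighbors, key=lambda n: dist(n, dest), default=None)
    if best is None or dist(best, dest) >= dist(current, dest):
        return None
    return best

# Hypothetical robot positions in meters; destination at (5, 0).
print(greedy_geographic_next_hop((0, 0), [(1, 1), (2, 0), (0, 3)], (5, 0)))  # (2, 0)
```

The local-minimum case (returning `None`) is exactly where knowledge of a robot's future location, as the abstract suggests, could help the routing process recover.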
144. Ontology engineering and feature construction for predicting friendship links and users interests in the Live Journal social network

Bahirwani, Vikas January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / William H. Hsu / An ontology can be seen as an explicit description of the concepts and relationships that exist in a domain. In this thesis, we address the problem of building an interests ontology and using it to construct features for predicting both potential friendship relations between users in the social network Live Journal and users' interests. Previous work has shown that the accuracy of predicting friendship links in this network is very low if only the interests common to two users are used as features and no network graph features are considered. Thus, our goal is to organize users' interests into an ontology (specifically, a concept hierarchy) and to use the semantics captured by this ontology to improve the performance of learning algorithms at the task of predicting whether two users can be friends. To achieve this goal, we have designed and implemented a hybrid clustering algorithm, which combines the hierarchical agglomerative and divisive clustering paradigms and automatically builds the interests ontology. We have explored the use of this ontology to construct interest-based features and shown that the resulting features improve the performance of various classifiers for predicting friendships in the Live Journal social network. We have also shown that, using the interests ontology, one can address the problem of predicting the interests of Live Journal users, a task that is not feasible in the absence of the ontology, given the overwhelming number of interests.
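The agglomerative half of such a clustering scheme can be sketched in a few lines. This is a deliberately crude illustration: the token-overlap similarity and the threshold are assumptions for the example, not the thesis's hybrid algorithm or its actual interest similarity measure.

```python
def jaccard(a, b):
    """Similarity between two interests, crudely approximated here by
    token overlap of their names (the thesis uses richer information)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

def agglomerate(interests, threshold=0.3):
    """Single-linkage agglomerative clustering: repeatedly merge the two
    most similar clusters until no pair reaches the threshold. Each
    resulting cluster plays the role of a higher-level concept."""
    clusters = [[i] for i in interests]
    while True:
        best, pair = threshold, None
        for x in range(len(clusters)):
            for y in range(x + 1, len(clusters)):
                sim = max(jaccard(a, b)
                          for a in clusters[x] for b in clusters[y])
                if sim >= best:
                    best, pair = sim, (x, y)
        if pair is None:
            return clusters
        x, y = pair
        clusters[x] += clusters[y]
        del clusters[y]

interests = ["rock music", "jazz music", "mountain climbing", "ice climbing"]
print(agglomerate(interests))
```

On this toy input the music interests and the climbing interests end up in separate clusters; a full hierarchy is obtained by recording the merge order rather than just the final partition.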
145. Consistency checking in multiple UML state diagrams using super state analysis

Alanazi, Mohammad N. January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / David A. Gustafson / The Unified Modeling Language (UML) has been designed to be a full standard notation for object-oriented modeling. UML 2.0 consists of thirteen types of diagrams: class, composite structure, component, deployment, object, package, activity, use case, state, sequence, communication, interaction overview, and timing. Each is dedicated to a different design aspect. This variety of diagrams, which overlap with respect to the information each depicts, can leave the overall system design specification in an inconsistent state. This dissertation presents Super State Analysis (SSA) for analyzing multiple UML state and sequence diagrams to detect such inconsistencies. The SSA model uses a transition set that captures relationship information not specifiable in UML diagrams; the transition set links the transitions of multiple state diagrams together. The analysis automatically generates three different sets, which are compared to the provided sets to detect inconsistencies. Because Super State Analysis considers multiple UML state diagrams, it discovers inconsistencies that cannot be discovered when considering only a single UML state diagram. Super State Analysis identifies five types of inconsistencies: valid super states, invalid super states, valid single-step transitions, invalid single-step transitions, and invalid sequences.
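The notion of a super state, a tuple of per-diagram states, can be illustrated by enumerating the combinations reachable through single-step transitions. This is a simplified stand-in for SSA's generated sets: the interleaving semantics and the toy diagrams below are assumptions, and the transition set that synchronizes transitions across diagrams is omitted.

```python
def reachable_super_states(machines, start):
    """Enumerate super states (tuples of per-diagram states) reachable
    by stepping one diagram at a time. Each machine is a dict mapping a
    state to the set of its successor states."""
    seen, frontier = {start}, [start]
    while frontier:
        current = frontier.pop()
        for i, machine in enumerate(machines):
            for nxt in machine.get(current[i], ()):
                succ = current[:i] + (nxt,) + current[i + 1:]
                if succ not in seen:
                    seen.add(succ)
                    frontier.append(succ)
    return seen

# Two toy state diagrams (hypothetical names, purely illustrative).
m1 = {"idle": {"busy"}, "busy": {"idle"}}
m2 = {"off": {"on"}, "on": set()}
print(sorted(reachable_super_states([m1, m2], ("idle", "off"))))
```

Comparing such a generated set against a provided set of intended super states is what exposes the invalid super states and invalid transitions the dissertation classifies.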
146. JForlan tool

Uppu, Srinivasa Aditya January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Alley Stoughton / Forlan is a computer toolset for experimenting with formal languages. Forlan is implemented as a set of Standard ML (a functional programming language) modules. Forlan presently includes a tool named JFA (Java Finite Automata Editor), a Java GUI tool for creating and editing finite automata, and a tool named JTR (Java Trees Graphical Editor), which is used for creating and editing parse trees and regular expression trees. The JForlan tool is an attempt to unify the JFA and JTR tools into one single tool, so as to make it more robust, efficient, and easy to use. Apart from integrating the tools, considerable functionality that was not present in JFA and JTR has been added, such as creating and editing regular expression finite automata and program trees (special kinds of Forlan trees used to represent programs in Forlan). As automata and trees are closely related concepts, it is highly beneficial to combine the tools into a single tool. JForlan can be invoked either from Forlan or run as a standalone application. Due to the integration of the JFA and JTR tools, the user can now view a regular expression that was entered as a transition label in the automata mode as a tree structure in the tree mode. Another important feature added to the tool is that, during the creation of trees, the user need not follow a strictly top-down approach (i.e., first creating the root and then adding children to it) but can create the nodes of the tree in any order. After drawing the desired automaton or tree, the user can save it directly as an image or a JForlan project, or can choose to save it in Forlan syntax, which translates the figures drawn into Forlan code automatically.
The main purpose of developing this tool is to provide a user-friendly, easy-to-use piece of educational software, ideal for students, that helps them learn and understand the basic concepts of automata and tree structures.
147. Exploring knowledge bases for engineering a user interests hierarchy for social network applications

Haridas, Mandar January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / Gurdip Singh / In recent years, social networks have become an integral part of our lives. Their growth has resulted in opportunities for interesting data mining problems, such as interest or friendship recommendations. A global ontology over the interests specified by the users of a social network is essential for accurate recommendations. The focus of this work is on engineering such an interest ontology. In particular, given that the resulting ontology is meant to be used for data mining applications on social network problems, we explore only hierarchical ontologies. We propose, evaluate, and compare three approaches to engineering an interest hierarchy. The proposed approaches make use of two popular knowledge bases, Wikipedia and Directory Mozilla, to extract interest definitions and/or relationships between interests. More precisely, the first approach uses Wikipedia to find interest definitions, the latent semantic analysis technique to measure the similarity between interests based on their definitions, and an agglomerative clustering algorithm to group similar interests into higher-level concepts. The second approach uses the Wikipedia category graph to extract relationships between interests. Similarly, the third approach uses Directory Mozilla to extract relationships between interests. Our results indicate that the third approach, although the simplest, is the most effective for building an ontology over user interests. We use the ontology produced by the third approach to construct interest-based features, which are further used to learn classifiers for the friendship prediction task. The results show the usefulness of the ontology, compared to the results obtained in its absence.
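The first approach's definition-based similarity can be illustrated with plain bag-of-words cosine similarity. This is a simplification under stated assumptions: the one-line "definitions" below are invented stand-ins for Wikipedia articles, and the thesis applies latent semantic analysis rather than raw term vectors.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors (Counters)."""
    num = sum(a[t] * b[t] for t in a if t in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

# Hypothetical one-line definitions standing in for full articles.
defs = {
    "guitar": "a stringed musical instrument played by plucking",
    "violin": "a bowed stringed musical instrument",
    "soccer": "team sport played by kicking a ball toward goals",
}
vecs = {k: Counter(v.split()) for k, v in defs.items()}
print(round(cosine(vecs["guitar"], vecs["violin"]), 3))  # 0.676
print(round(cosine(vecs["guitar"], vecs["soccer"]), 3))  # 0.378
```

Even this crude measure ranks guitar closer to violin than to soccer; LSA improves on it by projecting the term vectors into a latent space where synonymous terms reinforce each other.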
148. Revitalizing eXene

Hoag, Matthew January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Alley Stoughton / This thesis covers the process leading up to the release of eXene 2.0, a User Interface Management System (UIMS) toolkit. Since its inception, eXene has provided a unique way to create meaningful graphical user interfaces (GUIs) for Standard ML applications. Additionally, it has gone through several quality revisions which have both enhanced the toolkit and corrected many deficiencies that were present. Even with these improvements, however, the full potential of eXene has become increasingly difficult for developers to utilize. That is, in spite of the natural innovation that eXene brings to GUI construction, its current lack of extensibility, usability, and functionality has caused Standard ML developers to choose simpler, more familiar UIMS toolkits, despite their limitations, for the creation of their applications. In light of this fact, eXene needs an internal and cosmetic overhaul to extend its usage and appeal. First, to improve its extensibility, formerly weakened by organic growth, eXene requires some restructuring of its architecture. Second, to improve its overall usability, previously stifled by sparse documentation, eXene requires the implementation of an interactive electronic document for its API. Finally, to improve its functionality, several new multi-purpose widgets need to be introduced. It is the author's hypothesis that the revised structure, improved documentation, and additional multi-purpose widgets detailed in this thesis sufficiently elevate eXene's extensibility, usability, and functionality such that eXene can be considered a fully featured UIMS toolkit. With these changes and the release of eXene 2.0, eXene is more likely to be adopted as the primary UIMS toolkit for Standard ML developers.
149. Automated pavement condition analysis based on AASHTO guidelines

Radhakrishnan, Anirudh January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Balasubramaniam Natarajan / In this thesis, we present an automated system for the detection and classification of cracks, based on the new standard proposed by the American Association of State Highway and Transportation Officials (AASHTO). The AASHTO standard is a draft standard that attempts to overcome the limitations of current crack quantification and classification methods. In the current standard, crack classification relies heavily on the judgment of an expert, so the results are susceptible to human error. The effect of human error is especially severe when the amount of data collected is large, leading to inconsistencies even when a single standard is followed. The new AASHTO guidelines attempt to develop a method for the consistent measurement of pavement condition. Grayscale images of the road are captured by an image-capture vehicle and stored in a database. Through steps of thresholding, line detection, and scanning, each grayscale image is converted to a binary image, with zeros representing cracked pixels. Principal component analysis (PCA), followed by closing and filtering operations, is carried out on the grayscale image to identify cracked sub-images. The output from the filtering operation is then replaced with its binary counterpart. In the final step, the crack parameters are calculated. The region around the crack is divided into 32x32 blocks to approximate and calculate the crack parameters with ease. The width of the crack is approximated by the average width of the crack in each block. The orientation of the crack is calculated from the angle between the direction of travel and the line joining the ends of the crack. The length of the crack is the displacement between the ends of the crack, and the position of the crack is calculated from the midpoint of the line joining the end points.
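The crack parameters described above can be sketched directly from their definitions. The endpoint coordinates and per-block width estimates below are hypothetical, and the direction of travel is assumed here to be the y-axis; the actual pipeline derives these values from the processed image.

```python
import math

def crack_parameters(p1, p2, block_widths):
    """Crack descriptors per the abstract: length is the displacement
    between the endpoints, orientation is the angle between the
    direction of travel (assumed y-axis) and the line joining the
    endpoints, and width is the average of the per-block estimates."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(dx, dy)
    orientation = math.degrees(math.atan2(abs(dx), abs(dy)))  # 0 = along travel
    width = sum(block_widths) / len(block_widths)
    midpoint = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)  # crack position
    return length, orientation, width, midpoint

# Endpoints in pixels; the width list holds one estimate per 32x32 block.
length, angle, width, pos = crack_parameters((0, 0), (30, 40), [3.0, 4.0, 5.0])
print(round(length, 1), round(angle, 1), width, pos)  # 50.0 36.9 4.0 (15.0, 20.0)
```

Orientation near 0 degrees would indicate a longitudinal crack (along the direction of travel) and near 90 degrees a transverse one, which is the distinction crack classification schemes rely on.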
150. Visualization of sensor network applications in simulated environments

Kummary, Samuel Benny January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / Distributed applications that operate on networks of sensors to gather data are important in the real world. TinyOS is an operating system designed to support wireless sensor networks. It has interfaces and components that provide functionality for sensing parameters in the environment, packet communication, and computation. These sensors serve multiple purposes, such as gathering different kinds of data, and can be deployed in distributed networks to gather important information. NesC is the language used to write sensor applications for TinyOS, which are then deployed on the sensors. TinyViz is an application that simulates NesC applications on a computer, so that applications can first be tested in the simulation environment before being tested on the sensors and deployed. However, TinyViz by default represents a static, closed environment in which the simulated conditions may not be realistic. This project aims at providing real-world scenarios on the TinyViz platform by communicating with TinyViz using Tython, a scripting language designed for this purpose. In terms of sensor network applications, events are classified into categories, which can be mapped to tangible parameters. This project takes real-world parameters, supplied by the developer of the NesC application in the form of a configuration file, and converts them into threads that run in parallel with TinyViz and keep sending instructions to TinyViz, which then simulates a real-world environment. Thus, it helps simulate NesC applications in a realistic environment even before real deployment. The project is packaged as an Eclipse plug-in for portability and ease of use; developers of NesC applications can supply an input configuration and obtain the files required for simulation. The implementation is done in Java, using Tython.