81

Motion tracking using feature point clusters

Foster, Robert L. Jr. January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / David A. Gustafson / William Hsu / In this study, we identify a new method of tracking motion over a sequence of images using feature point clusters. We identify and implement a system that takes as input a sequence of images and generates clusters of SIFT features using the K-Means clustering algorithm. Every time the system processes an image, it compares each new cluster to the clusters of previous images, which it stores in a local cache. When at least 25% of the SIFT features that compose a cluster match a cluster in the local cache, the system uses the centroids of both clusters to determine the direction of travel. To establish a direction of travel, we calculate the slope of the line connecting the two centroids, based on their Cartesian coordinates in the second image. In an experiment using a P3-AT mobile robotic agent equipped with a digital camera, the system receives and processes a sequence of eight images. Experimental results show that the system is able to identify and track the motion of objects using SIFT feature clusters more efficiently when spatial outlier detection is applied before generating clusters.
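As a rough sketch of the matching and direction-finding steps described above, the following Python fragment clusters feature locations with scikit-learn's KMeans and derives a heading from two matched centroids. It is illustrative only: the SIFT extraction step is omitted, and the descriptor-distance cutoff and cluster count are invented parameters (only the 25% match threshold comes from the abstract).

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_features(points_xy, n_clusters=5, seed=0):
    """Group SIFT keypoint locations into clusters; return labels and centroids."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(points_xy)
    return km.labels_, km.cluster_centers_

def match_fraction(desc_a, desc_b, cutoff=200.0):
    """Fraction of descriptors in cluster A with a near match in cluster B."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < cutoff))

def direction_of_travel(centroid_prev, centroid_curr):
    """Slope and heading angle of the line joining two matched centroids."""
    dx, dy = centroid_curr - centroid_prev
    slope = float("inf") if dx == 0 else dy / dx
    return slope, np.degrees(np.arctan2(dy, dx))

# Two clusters are treated as the same object when match_fraction(...) >= 0.25,
# after which direction_of_travel(...) gives the motion estimate.
```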
82

Domain-specific environment generation for modular software model checking

Tkachuk, Oksana January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / Matthew Dwyer / John M. Hatcliff / To analyze an open system, one needs to close it with a definition of its environment, i.e., its execution context. Environment modeling is a significant challenge: environment models should be general enough to permit analysis of large portions of a system's possible behaviors, yet sufficiently precise to enable cost-effective reasoning. This thesis presents the Bandera Environment Generator (BEG), a toolset that automates the generation of environment models to provide a restricted form of modular model checking of Java programs, where the module's source code is analyzed together with an abstract model of the environment's behavior. Since the most general environments do not allow for tractable model checking, BEG supports restricting the environment's behavior based on domain-specific knowledge and assumptions, which can be acquired from a variety of sources. When the environment code is not available, developers can encode their assumptions as an explicit formal specification. When the environment code is available, BEG employs static analyses to extract environment assumptions. Both the specifications and the static analyses can be tuned to reflect domain-specific knowledge, i.e., to describe domain-specific aspects of the environment's behavior. Initially, BEG was implemented to handle general Java applications; later, it was extended to handle two specific domains: graphical user interfaces (GUIs) implemented using the Swing/AWT libraries, and web applications implemented using the J2EE framework. BEG was evaluated on several non-trivial case studies, including industrial applications from NASA, Sun, and Fujitsu. This thesis presents the domain-specific environment generation for GUI and web applications and describes BEG, its extensible architecture, its usage, and how it can be extended to handle new domains.
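To make the idea of "closing" an open system concrete, here is a toy Python sketch; it is not BEG (which targets Java programs and real model checkers), and the Account class and its overdraft property are invented for illustration. A maximally general environment drives the module's public interface nondeterministically; the point of generated environment models is to restrict those choices using the stated assumptions.

```python
import random

class Account:
    """Hypothetical open module under analysis."""
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        assert self.balance >= amount, "overdraft"  # property being checked
        self.balance -= amount

def run_environment(module, steps=10, seed=None):
    """A crude universal environment: call the module's interface in an
    arbitrary order. A generated environment model would constrain this
    action set and its ordering using domain-specific assumptions."""
    rng = random.Random(seed)
    actions = [lambda: module.deposit(rng.randint(1, 5)),
               lambda: module.withdraw(rng.randint(1, 5))]
    for _ in range(steps):
        rng.choice(actions)()

# Running this long enough trips the overdraft assertion: the analogue of a
# model checker finding a property violation under an unconstrained environment.
```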
83

Verification of FlexRay membership protocol using UPPAAL

Mudaliar, Vinodkumar Sekar January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell L. Neilsen / Safety-critical systems embedded in avionics and automotive systems are becoming increasingly complex. Components with different requirements typically share a common distributed platform for communication. To accommodate these varied requirements, many of these distributed real-time systems use the FlexRay communication network, which supports both time-triggered and event-triggered communication. In such systems, it is vital to establish a consistent view of all the associated processes in order to handle fault tolerance. This task can be accomplished through the use of a process group membership protocol, which must provide a high level of assurance that it operates correctly. In this thesis, we verify one such protocol using model checking. Through this verification, we found that the protocol may remove nodes from the group of operational nodes in the communicating network at a fast rate, which can lead the protocol to exhaust system resources and hamper system performance. We determine allowable rates of failure that do not hamper system performance.
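The verification itself was performed on a UPPAAL timed-automata model, which cannot be reproduced in a few lines here; the toy Python simulation below only illustrates the removal-rate concern. All parameters (per-cycle miss probability, consecutive-miss threshold) are invented, not taken from the thesis.

```python
import random

def removal_rate(n_nodes=8, cycles=100_000, p_miss=0.01,
                 misses_to_remove=2, seed=1):
    """Toy model: a node misses its communication slot with probability
    p_miss each cycle; after `misses_to_remove` consecutive misses the
    membership protocol drops it (the node then rejoins). Returns the
    number of removals per 1,000 cycles."""
    rng = random.Random(seed)
    misses = [0] * n_nodes
    removals = 0
    for _ in range(cycles):
        for i in range(n_nodes):
            if rng.random() < p_miss:
                misses[i] += 1
                if misses[i] >= misses_to_remove:
                    removals += 1
                    misses[i] = 0
            else:
                misses[i] = 0
    return 1000 * removals / cycles
```

Sweeping p_miss in such a model shows how quickly removals, and the rejoin overhead they trigger, grow with the failure rate; bounding that growth is the kind of allowable-rate question the thesis answers formally.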
84

A Markov model for web request prediction

Kurian, Habel January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen / Growing web content and Internet traffic are making web prediction models popular. A web prediction model predicts user requests ahead of time, making web servers more responsive: pages can be cached on the server side, or responses pre-sent to the client, to reduce web latency. Several prediction techniques have been tried in the past, Markov-based prediction models being the most popular; among these, the All-Kth-order Markov model has been found to be the most effective. In this project, a Markov tree is designed which is a fourth-order model but behaves like an All-Kth-order Markov model, because it can represent models of different orders at different depths of the tree. It thus combines good applicability with good predictive accuracy. The Markov tree gives a complete description of how often each state occurs and how often the path through that state is used to reach its child nodes. Further, the model can be pruned to eliminate states that contribute very little to the accuracy of the model. In this work, an evolutionary model is designed that makes use of a fitness function, a weighted sum of precision and the extent of coverage the model offers; this helps to generate a model with reduced complexity. Results indicate that the model performs consistently with good predictive accuracy across different log files. The evolutionary approach helps to train the model to make predictions commensurate with current web browsing patterns.
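A minimal Python sketch of the tree idea follows; the thesis's pruning step and evolutionary fitness tuning are omitted, and the context lookup is done naively for brevity. A single depth-4 structure stores counts for every request context of length 1 to 4, so it can answer lower-order prediction queries like an All-Kth-order model.

```python
from collections import defaultdict

class MarkovTree:
    """Depth-limited Markov tree, stored as context-tuple counts. The path
    root -> r1 -> ... -> rk records how often the request sequence (r1..rk)
    occurred, so one structure serves all orders up to depth - 1."""
    def __init__(self, depth=4):
        self.depth = depth
        self.count = defaultdict(int)

    def train(self, session):
        for i in range(len(session)):
            for k in range(1, self.depth + 1):
                if i + k <= len(session):
                    self.count[tuple(session[i:i + k])] += 1

    def predict(self, recent):
        """Back off from the longest matching context to shorter ones."""
        for k in range(min(self.depth - 1, len(recent)), 0, -1):
            ctx = tuple(recent[-k:])
            nxt = {c[-1]: n for c, n in self.count.items()
                   if len(c) == k + 1 and c[:-1] == ctx}
            if nxt:
                return max(nxt, key=nxt.get)
        return None

# Example: after tree.train(...) on sessions of page requests,
# tree.predict(["/home", "/catalog"]) returns the most likely next page.
```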
85

Graph-based protein-protein interaction prediction in Saccharomyces cerevisiae

Paradesi, Martin Samuel Rao January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / William H. Hsu / The term 'protein-protein interaction' (PPI) refers to associations between proteins as manifested through biochemical processes such as the formation of structures, signal transduction, transport, and phosphorylation. PPIs play an important role in the study of biological processes. Many PPIs have been discovered over the years, and several databases have been created to store information about these interactions. von Mering (2002) states that about 80,000 interactions between yeast proteins are currently available from various high-throughput interaction detection methods. Determining PPIs using high-throughput methods is not only expensive and time-consuming, but also generates a high number of false positives and false negatives. Therefore, there is a need for computational approaches that can help in identifying real protein interactions. Several methods have been designed to address the task of predicting protein-protein interactions using machine learning. Most of them use features extracted from protein sequences (e.g., amino acid composition) or associated with protein sequences directly (e.g., GO annotations). Others use relational and structural features extracted from the PPI network, along with features related to the protein sequence. When using the PPI network to design features, several node and topological features can be extracted directly from the associated graph. In this thesis, important graph features of a protein interaction network that help in predicting protein interactions are identified. Two previously published datasets are used in this study; a third dataset has been created by combining three PPI databases. Several classifiers are applied to the graph attributes extracted from the protein interaction networks of these three datasets. A detailed study has been performed to determine whether graph attributes extracted from a protein interaction network are more predictive than biological features of protein interactions. The results indicate that performance criteria (such as sensitivity, specificity, and AUC score) improve when graph features are combined with biological features.
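The kind of pipeline the abstract describes can be sketched in Python using networkx and scikit-learn; the specific features and classifier below are typical choices for this task, not necessarily the exact set studied in the thesis.

```python
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def pair_features(g, u, v):
    """Topological features for a candidate interacting pair (u, v)."""
    common = len(list(nx.common_neighbors(g, u, v)))
    return [g.degree(u), g.degree(v),                  # node degrees
            nx.clustering(g, u), nx.clustering(g, v),  # local clustering
            common]                                    # shared partners

def train_classifier(g, positive_pairs, negative_pairs):
    """Fit a classifier on graph features of known interacting and
    non-interacting protein pairs drawn from the PPI network g."""
    X = [pair_features(g, u, v) for u, v in positive_pairs + negative_pairs]
    y = [1] * len(positive_pairs) + [0] * len(negative_pairs)
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

To test the combined model the abstract reports on, biological features (sequence composition, GO annotations) would simply be concatenated onto the same feature vectors.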
86

Simulation of power distribution management system using OMACS metamodel

Manghat, Jaidev January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Scott A. DeLoach / Designing and implementing large, complex, distributed systems using semi-autonomous agents that can reorganize and adapt by cooperating with one another represents the future of software systems. This project concentrates on analyzing, designing, and simulating such a system using the Organization Model for Adaptive Computational Systems (OMACS) metamodel. OMACS provides a framework for developing multiagent systems that can adapt to changes in the environment, and its design ensures that the resulting system is highly robust and adaptive. In this project, we implement a simulator that models the adaptability of agents in a Power Distribution Management (PDM) system. The project takes a top-down approach to break down the goals of the PDM system and to design the functional role of each agent involved. It defines the different roles in the organization and the various capabilities possessed by the agents; all assignments in the PDM system are based on these factors. The project presents two different approaches for assigning agents to the goals they are capable of achieving, and it analyzes the time complexity and efficiency of agent assignments in various scenarios to understand the effectiveness of agent reorganization.
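In OMACS, an agent can be assigned to a goal only if it possesses the capabilities the corresponding role requires, and candidate assignments are scored by how well it possesses them. The Python sketch below shows one simple greedy assignment strategy; the scoring rule, data shapes, and example names are invented for illustration and are not the project's actual algorithms.

```python
def score(agent_caps, required):
    """0 if the agent lacks any required capability; otherwise the mean
    quality (0..1) of the capabilities the role requires."""
    if not required <= agent_caps.keys():
        return 0.0
    return sum(agent_caps[c] for c in required) / len(required)

def greedy_assign(goals, agents):
    """goals: [(goal_name, required_capability_set)];
    agents: {agent_name: {capability: quality}}. One agent per goal."""
    free, plan = dict(agents), {}
    for goal, required in goals:
        best = max(free, key=lambda a: score(free[a], required), default=None)
        if best is not None and score(free[best], required) > 0:
            plan[goal] = best
            del free[best]
    return plan

# e.g. greedy_assign([("restore_feeder", {"switch", "sense"})],
#                    {"a1": {"switch": 0.9, "sense": 0.7}, "a2": {"sense": 1.0}})
# -> {"restore_feeder": "a1"}   (a2 lacks the "switch" capability)
```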
87

Capturing semantics using a link analysis based concept extractor approach

Kulkarni, Swarnim January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / The web contains a massive amount of information and continues to grow every day, so extracting information that is relevant to a user is an uphill task. Search engines such as Google™ and Yahoo!™ have made the task a lot easier and have indeed made people much "smarter". However, most existing search engines still rely on traditional keyword-based search techniques, i.e., returning documents that contain the keywords in the query; they do not take the associated semantics into consideration. To incorporate semantics into search, one could proceed in at least two ways. First, we could plunge into the world of the "Semantic Web", where information is represented in formal formats such as RDF and N3, which can effectively capture the semantics of documents. Second, we could try to explore a new semantic world within the existing structure of the World Wide Web (WWW). While the first approach can be very effective when semantic information is available in RDF/N3 formats, for many web pages such information is not readily available, which is why we take the second approach in this work. We attempt to capture the semantics associated with a query by first extracting the concepts relevant to the query. For this purpose, we propose a novel Link Analysis based Concept Extractor (LACE) that extracts the concepts associated with a query by exploiting the metadata of web pages. Next, we propose a method to determine relationships between a query and its extracted concepts. Finally, we show how LACE can be used to compute a statistical measure of semantic similarity between concepts. At each step, we evaluate our approach by comparison with existing techniques (on benchmark datasets, when available) and show that our results are competitive with the state of the art or even outperform it.
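The abstract does not say which statistical similarity measure is used; pointwise mutual information (PMI) over occurrence counts is one common choice, sketched below in Python purely for concreteness.

```python
import math

def pmi(count_a, count_b, count_ab, total_docs):
    """Pointwise mutual information between two concepts, computed from the
    number of documents mentioning each concept and both together. Positive
    values mean the concepts co-occur more often than chance would predict."""
    p_a, p_b = count_a / total_docs, count_b / total_docs
    p_ab = count_ab / total_docs
    return float("-inf") if p_ab == 0 else math.log(p_ab / (p_a * p_b))

# e.g. pmi(1200, 900, 400, 100_000) > 0 suggests the two concepts are related.
```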
88

Distributed parallel symbolic execution

King, Andrew January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Robby / Software defects cost our economy a significant amount of money, so techniques that can detect defects before the software begins its operational life-cycle are highly valuable. Unfortunately, as software becomes more ubiquitous, it also becomes more complex. Static analysis of software can be computationally intensive, and as software becomes more complex, the computational demands of any analysis applied to it increase as well. While increasingly complex software entails more computationally demanding analysis, the computational capability provided by computers has increased exponentially over the last half century. Historically, this increase came from raising the clock speed of the computer's central processing unit (CPU). In the last several years, engineering limitations have made it increasingly difficult to build CPUs with progressively higher clock speeds; instead, processor manufacturers now provide increased capability in the form of 'multi-core' CPUs, where each processor package contains two or more processing units, enabling it to execute more than one task concurrently. This thesis describes the design and implementation of a parallel version of symbolic execution which can take advantage of modern multi-core and multi-processor systems to complete the analysis of software units in a reduced amount of time.
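One standard way to parallelize symbolic execution is to split the symbolic execution tree by branch-decision prefixes and hand each subtree to a separate worker; whether the thesis uses this static partitioning or a dynamic scheme is not stated in the abstract. A toy Python sketch of the partitioning, with the actual constraint solving replaced by a placeholder:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def explore(prefix, depth):
    """Stand-in for symbolically executing one subtree: commit to the branch
    decisions in `prefix`, then enumerate the remaining paths. A real engine
    would build path conditions and query an SMT solver at each branch."""
    paths = 0
    for _suffix in product([True, False], repeat=depth - len(prefix)):
        paths += 1          # here: check feasibility, report defects, etc.
    return paths

def parallel_explore(depth=20, split=3):
    """Statically partition a binary execution tree among workers using all
    2**split branch-decision prefixes of length `split`."""
    prefixes = list(product([True, False], repeat=split))
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(explore, prefixes, [depth] * len(prefixes)))

if __name__ == "__main__":
    print(parallel_explore())   # 2**20 paths, explored 8 subtrees at a time
```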
89

Online bill payment system

Konreddy, Venkata Sri Vatsav Reddy January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen / Keeping track of paper bills is difficult, and there is always a chance of missing a payment date. The Online Bill Payment application is an interactive, effective, and secure website designed for customers to manage all their bills. Its main objective is to help customers receive, view, and pay all their bills from one personalized, secure website, thereby eliminating the need for paper bills. Once customers register on the website, they can add various company accounts; the information is verified with each company before the account is added. After customers add company accounts, they can receive notifications about new bills, payments, and payment reminders. All sensitive data is passed over a Secure Sockets Layer (SSL) connection. The website follows the MVC architecture, and Struts is used to develop the application. Well-established and proven design patterns such as Business Delegate, Data Access Object, and Transfer Object are used to simplify maintenance of the application. Web services handle the communication between the website and the companies: Apache Axis2 serves as the web services container, and Apache Rampart secures the information flow between the web services. Tiles, JSP, HTML, CSS, and JavaScript provide a rich user interface. Apart from these, JavaMail is used to send emails, and one-way hashing, certificates, key stores, and encryption are implemented for security. The overall system is tested using unit, manual, and performance testing techniques. Automated test cases are written whenever possible to ensure the correctness of functions, and manual testing further ensures that the application works as expected. The system is subjected to different loads and the corresponding behavior is observed. Unit and manual testing revealed that each module behaves as expected for both valid and invalid inputs; performance testing revealed that the website works well even when the server is subjected to heavy loads.
90

MyBookStore: e-shopping for books

Chitturi, Sushma Reddy January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen / The Web is a shopper's paradise boasting every kind of product imaginable, plus many more that are almost unimaginable. People find it easy and secure to shop online these days, saving time and giving them more options to choose from at their fingertips. Out of this comes MyBookStore, a neat web application designed exclusively to cater to students' needs when purchasing books online. The primary focus of this application is to make it easy for the user to search for a particular book and to navigate within the website. A sophisticated search engine has been designed which filters products based on various user criteria; searching for and viewing the details of a book are both supported. The application also has an administrator side, through which the administrator can update the website with new products, remove available products, add new categories, subcategories, and products, and update the shipping status of placed orders. This section is chiefly responsible for user account maintenance, product maintenance, and order maintenance. The major emphasis of this application is on building interactive search techniques that simplify the user's task and deliver the specific products the user requires.
