271

Graph-based protein-protein interaction prediction in Saccharomyces cerevisiae

Paradesi, Martin Samuel Rao January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / William H. Hsu / The term 'protein-protein interaction' (PPI) refers to associations between proteins, as manifested through biochemical processes such as the formation of structures, signal transduction, transport, and phosphorylation. PPIs play an important role in the study of biological processes. Many PPIs have been discovered over the years, and several databases have been created to store information about these interactions. von Mering (2002) states that about 80,000 interactions between yeast proteins are currently available from various high-throughput interaction detection methods. Determining PPIs using high-throughput methods is not only expensive and time-consuming, but also produces a high number of false positives and false negatives. Therefore, there is a need for computational approaches that can help identify real protein interactions. Several methods have been designed to address the task of predicting protein-protein interactions using machine learning. Most of them use features extracted from protein sequences (e.g., amino acid composition) or associated directly with protein sequences (e.g., GO annotations). Others use relational and structural features extracted from the PPI network, along with features related to the protein sequence. When using the PPI network to design features, several node and topological features can be extracted directly from the associated graph. In this thesis, important graph features of a protein interaction network that help in predicting protein interactions are identified. Two previously published datasets are used in this study; a third dataset has been created by combining three PPI databases. Several classifiers are applied to the graph attributes extracted from the protein interaction networks of these three datasets. A detailed study is performed to determine whether graph attributes extracted from a protein interaction network are more predictive than biological features of protein interactions. The results indicate that performance measures (such as sensitivity, specificity, and AUC score) improve when graph features are combined with biological features.
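
A rough sketch of the general pipeline the abstract describes, extracting node and topological attributes from a PPI graph and training a classifier on them, might look like this (the proteins, labels, and feature set here are hypothetical, not the thesis's own):

```python
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

# Toy PPI network; real data would come from PPI databases.
g = nx.Graph([("P1", "P2"), ("P2", "P3"), ("P1", "P3"), ("P3", "P4"), ("P4", "P5")])

def pair_features(graph, u, v):
    """Graph attributes for a candidate interaction (u, v)."""
    common = len(list(nx.common_neighbors(graph, u, v)))
    return [
        graph.degree(u),            # node degree of each endpoint
        graph.degree(v),
        common,                     # shared interaction partners
        nx.clustering(graph, u),    # local clustering coefficients
        nx.clustering(graph, v),
    ]

# Candidate pairs labeled 1 (interacting) or 0 (non-interacting); hypothetical.
pairs = [("P1", "P2", 1), ("P3", "P4", 1), ("P1", "P5", 0), ("P2", "P5", 0)]
X = [pair_features(g, u, v) for u, v, _ in pairs]
y = [label for _, _, label in pairs]

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict([pair_features(g, "P2", "P4")]))
```
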
272

Simulation of power distribution management system using OMACS metamodel

Manghat, Jaidev January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Scott A. DeLoach / Designing and implementing large, complex, distributed systems using semi-autonomous agents that can reorganize and adapt by cooperating with one another represents the future of software systems. This project concentrates on analyzing, designing, and simulating such a system using the Organization Model for Adaptive Computational Systems (OMACS) metamodel. OMACS provides a framework for developing multiagent systems that can adapt to changes in their environment, and its design helps make such systems highly robust and adaptive. In this project, we implement a simulator that models the adaptability of agents in a Power Distribution Management (PDM) system. The project follows a top-down approach to break down the goals of the PDM system and to design the functional role of each agent involved in the system. It defines the different roles in the organization and the various capabilities possessed by the agents; all assignments in the PDM system are based on these factors. The project presents two different approaches for assigning agents to the goals they are capable of achieving. It also analyzes the time complexity and efficiency of agent assignments in various scenarios to understand the effectiveness of agent reorganization.
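
The report does not give its assignment code; as a minimal sketch of the general idea, matching agents to goals by scoring their capabilities against each goal's requirements, one greedy strategy might look like the following (all agent, goal, and score names are hypothetical):

```python
# Hypothetical capability scores: how well each agent's capabilities
# support the role that achieves a given goal (0 = incapable).
capability = {
    ("meter_agent",  "monitor_load"):  0.9,
    ("switch_agent", "isolate_fault"): 0.8,
    ("meter_agent",  "isolate_fault"): 0.3,
    ("switch_agent", "monitor_load"):  0.0,
}

def greedy_assign(agents, goals, capability):
    """Assign each goal to the most capable still-free agent (greedy heuristic)."""
    assignments, free = {}, set(agents)
    for goal in goals:
        best = max(free, key=lambda a: capability.get((a, goal), 0.0), default=None)
        if best is not None and capability.get((best, goal), 0.0) > 0.0:
            assignments[goal] = best
            free.discard(best)
    return assignments

print(greedy_assign(["meter_agent", "switch_agent"],
                    ["isolate_fault", "monitor_load"], capability))
```
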
273

Capturing semantics using a link analysis based concept extractor approach

Kulkarni, Swarnim January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / The web contains a massive amount of information and continues to grow every day. Extracting information that is relevant to a user is an uphill task. Search engines such as Google and Yahoo! have made the task a lot easier and have indeed made people much "smarter". However, most existing search engines still rely on traditional keyword-based search techniques, i.e., returning documents that contain the keywords in the query; they do not take the associated semantics into consideration. To incorporate semantics into search, one could proceed in at least two ways. First, we could plunge into the world of the 'Semantic Web', where information is represented in formal formats such as RDF and N3, which can effectively capture the semantics associated with documents. Second, we could try to explore a new semantic world within the existing structure of the World Wide Web (WWW). While the first approach can be very effective when semantic information is available in RDF/N3 formats, for many web pages such information is not readily available, which is why we take the second approach in this work. We attempt to capture the semantics associated with a query by first extracting the concepts relevant to the query. For this purpose, we propose a novel Link Analysis based Concept Extractor (LACE) that extracts the concepts associated with a query by exploiting the metadata of a web page. Next, we propose a method to determine relationships between a query and its extracted concepts. Finally, we show how LACE can be used to compute a statistical measure of semantic similarity between concepts. At each step, we evaluate our approach by comparison with other existing techniques (on benchmark data sets, when available) and show that our results are competitive with, or even outperform, existing state-of-the-art results.
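
The abstract does not define LACE's similarity measure; one well-known statistical measure of this general kind, the Normalized Google Distance computed from page hit counts, can serve as an illustration (the counts below are hypothetical):

```python
import math

def ngd(hits_x, hits_y, hits_xy, total_pages):
    """Normalized Google Distance: 0 = terms always co-occur, large = unrelated.
    hits_* are hypothetical page counts reported by a search engine."""
    fx, fy, fxy = math.log(hits_x), math.log(hits_y), math.log(hits_xy)
    n = math.log(total_pages)
    return (max(fx, fy) - fxy) / (n - min(fx, fy))

def similarity(hits_x, hits_y, hits_xy, total_pages):
    # Map the distance to a (0, 1] similarity score.
    return math.exp(-ngd(hits_x, hits_y, hits_xy, total_pages))

# Hypothetical counts: "horse" and "rider" co-occur often, so similarity is high.
print(similarity(hits_x=46_700_000, hits_y=12_200_000,
                 hits_xy=2_630_000, total_pages=8_000_000_000))
```
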
274

Distributed parallel symbolic execution

King, Andrew January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Robby / Software defects cost our economy a significant amount of money. Techniques that can detect software defects before the software begins its operational life-cycle are therefore highly valuable. Unfortunately, as software becomes more ubiquitous, it also becomes more complex. Static analysis of software can be computationally intensive, and as software grows more complex, so do the computational demands of any analysis applied to it. Fortunately, the computational capabilities provided by computers have increased exponentially over the last half century of computing. Historically, this increase has come from raising the clock speed of the computer's central processing unit (CPU). In the last several years, however, engineering limitations have made it increasingly difficult to build CPUs with progressively higher clock speeds. Instead, processor manufacturers now provide increased capability in the form of 'multi-core' CPUs, in which each processor package contains two or more processing units, enabling the processor to execute more than one task concurrently. This thesis describes the design and implementation of a parallel version of symbolic execution that can take advantage of modern multi-core and multi-processor systems to complete the analysis of software units in a reduced amount of time.
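
The thesis implements a full distributed symbolic executor; as a highly simplified sketch of just the parallelization pattern, worker processes repeatedly expanding a shared frontier of unexplored path prefixes, consider the following (the branching is faked with fixed-depth boolean forks in place of real solver-backed path conditions):

```python
from concurrent.futures import ProcessPoolExecutor

MAX_DEPTH = 3

def explore(prefix):
    """Explore one path prefix; return completed paths plus new work.
    A real executor would fork on symbolic branch conditions via an SMT solver."""
    if len(prefix) == MAX_DEPTH:
        return [prefix], []                    # a fully explored path
    # Each branch point forks the path condition into true/false successors.
    return [], [prefix + (True,), prefix + (False,)]

def parallel_symbolic_execution(workers=4):
    paths, frontier = [], [()]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        while frontier:
            # Distribute the current frontier across worker processes.
            results = pool.map(explore, frontier)
            frontier = []
            for done, new_work in results:
                paths.extend(done)
                frontier.extend(new_work)
    return paths

if __name__ == "__main__":
    print(len(parallel_symbolic_execution()))  # 2**MAX_DEPTH = 8 paths
```
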
275

Online bill payment system

Konreddy, Venkata Sri Vatsav Reddy January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen / Keeping track of paper bills is difficult, and there is always a chance of missing a bill's payment date. The Online Bill Payment application is an interactive, effective, and secure website designed for customers to manage all their bills. Its main objective is to help customers receive, view, and pay all their bills from one personalized, secure website, thereby eliminating the need for paper bills. Once customers register on the website, they can add various company accounts; the information is verified with each company before the account is added. Customers then receive notifications about new bills, payments, and payment reminders. All sensitive data is passed over a Secure Sockets Layer connection for security. The website follows the MVC architecture, and Struts is used to develop the application. Well-established, proven design patterns such as Business Delegate, Data Access Object, and Transfer Object are used to simplify maintenance of the application. Web services handle the communication between the website and the companies: Apache Axis2 serves as the web services container, and Apache Rampart secures the information flow between the web services. Tiles, JSP, HTML, CSS, and JavaScript provide a rich user interface. Apart from these, Java Mail is used to send emails, and concepts such as one-way hashing, certificates, keystores, and encryption are implemented for security. The overall system is tested using unit, manual, and performance testing techniques. Automated test cases are written whenever possible to ensure the correctness of the functions, and manual testing further ensures that the application works as expected. The system is subjected to different loads and the corresponding behavior is observed. Unit and manual testing revealed that each module behaves as expected for both valid and invalid inputs; performance testing revealed that the website works well even when the server is subjected to heavy loads.
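
The report names its security mechanisms without showing them; as an illustration of just one of them, salted one-way hashing of a stored credential, a minimal sketch might look like this (shown in Python for brevity, though the project itself is built in Java):

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    """Store only the salt and a one-way hash, never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False
```
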
276

MyBookStore-eshopping for books

Chitturi, Sushma Reddy January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen / The Web is a shopper's paradise boasting every kind of product imaginable, plus many more that are almost unimaginable. People find it easy and secure to shop online these days, saving time and gaining more options to choose from at their fingertips. From this comes MyBookStore, a neat web application designed to cater exclusively to students' needs when purchasing books online. The primary focus of this application is to make it easy for users to search for a particular book and to navigate within the website. A sophisticated search engine filters the products based on various user criteria, and users can search for books and view their details. The application also has an administrator side, through which the administrator can update the website with new products, remove available products, add new categories, subcategories, and products, and update the shipping status of placed orders. This section is primarily responsible for maintaining user accounts, products, and orders. The major emphasis of this application is on interactive search techniques that simplify finding the specific products a user requires.
277

Using Ambient Radio Environment to Support Practical Pervasive Computing

Varshavsky, Alexander 26 February 2009 (has links)
Mobile applications can benefit from increased awareness of the device's context. Unfortunately, existing solutions for inferring context require special-purpose sensors or beacons on the mobile devices or in the physical environment, a requirement that significantly limits their deployment. In this thesis, I argue that mobile devices can infer a substantial amount of their context by leveraging their existing wireless interfaces to monitor ambient radio sources, such as GSM cell towers or WiFi access points. I focus on two important problems in context-aware computing: localization of mobile devices, and detecting proximity between mobile devices for authentication purposes. Specifically, I present an accurate localization system based on fingerprinting of GSM signals. I show that the key to more accurate GSM localization is the use of wide signal-strength fingerprints that include readings from a large number of base stations. Next, I present a method that addresses the key drawback of fingerprint-based localization systems: the need to collect extensive measurements to train the system in every target environment. Finally, I show how radio environment sensing can be used to secure the communication of devices that come within close proximity. Removing the need for additional hardware on the mobile devices and in the physical environment makes the approach I present amenable to widespread deployment.
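
The thesis's algorithms are more sophisticated, but the core of fingerprint-based localization, matching an observed signal-strength vector against fingerprints recorded at known positions, can be sketched as a k-nearest-neighbor average (all readings and positions below are hypothetical):

```python
import numpy as np

# Training fingerprints: signal strengths (dBm) from several GSM base
# stations, recorded at known (x, y) positions; values are hypothetical.
fingerprints = np.array([[-60, -75, -90],
                         [-65, -70, -95],
                         [-80, -60, -70],
                         [-85, -55, -65]])
positions = np.array([[0.0, 0.0], [0.0, 5.0], [10.0, 0.0], [10.0, 5.0]])

def localize(observed, k=2):
    """Estimate position as the mean of the k closest training fingerprints."""
    distances = np.linalg.norm(fingerprints - observed, axis=1)
    nearest = np.argsort(distances)[:k]
    return positions[nearest].mean(axis=0)

print(localize(np.array([-70, -68, -93])))  # lands near the first two points
```
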
278

Effective Search Techniques for Non-classical Planning via Reformulation

Baier, Jorge A. 15 April 2010 (has links)
Automated planning is a branch of AI that addresses the problem of generating a course of action to achieve a specified objective, given an initial state of the world. It is an area that is central to the development of intelligent agents and autonomous robots. In the last decade, automated planning has seen significant progress in terms of scalability, much of it achieved by the development of heuristic search approaches. Many of these advances, however, are only immediately applicable to so-called classical planning tasks, while there are compelling applications of planning that are non-classical. An example is the problem of web service composition, in which the objective is to automatically compose web artifacts to achieve the objective of a human user. In doing so, one must consider not only the user's hard goals but also their preferences, which are usually not considered in the classical model. In this thesis we show that many of the most successful advances in classical planning can be leveraged for solving compelling non-classical problems. In particular, we focus on the following non-classical planning problems: planning with temporally extended goals; planning with rich, temporally extended preferences; planning with procedural control; and planning with procedural programs that can sense the environment. We show that these problems can be solved efficiently using a common approach: reformulation. For each of these planning tasks, we propose a reformulation algorithm that generates another, arguably simpler instance; then, if necessary, we adapt existing techniques to make the reformulated instance efficiently solvable. In particular, we show that both planning with temporally extended goals and planning with procedural control can be mapped into classical planning. Planning with rich user preferences, even after reformulation, cannot be mapped into classical planning, so we develop specialized heuristics, based on existing ones, together with a branch-and-bound algorithm. Finally, for the problem of planning with programs that sense, we show that under certain conditions such programs can be reduced to simple operators, enabling the use of a variety of existing planners. In all cases, we show experimentally that the reformulated problems can be solved effectively by either existing planners or our adapted planners, outperforming previous approaches.
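
The thesis's compilations are defined over full planning languages; as a toy illustration of the underlying idea, tracking a temporally extended goal such as 'eventually p, then eventually q' with a monitor automaton whose state a compilation would encode as extra fluents of a classical task, consider:

```python
# Monitor automaton for the goal "eventually p, then eventually q":
# state 0 (nothing seen) -> state 1 (p seen) -> state 2 (accepting).
def advance(automaton_state, world_state):
    if automaton_state == 0 and "p" in world_state:
        automaton_state = 1
    if automaton_state == 1 and "q" in world_state:
        automaton_state = 2
    return automaton_state

def satisfies_te_goal(plan_states):
    """A state sequence satisfies the goal iff the monitor reaches state 2.
    A compilation adds the monitor state as fluents of a classical task, so
    the classical goal becomes simply 'the monitor is in state 2'."""
    q = 0
    for s in plan_states:
        q = advance(q, s)
    return q == 2

trace = [{"r"}, {"p"}, {"r"}, {"q"}]
print(satisfies_te_goal(trace))  # True: p occurs, and q occurs later
```
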
279

Locomotion Synthesis Methods for Humanoid Characters

Wang, Jack 16 March 2011 (has links)
This thesis introduces locomotion synthesis methods for humanoid characters. Motion synthesis is an under-constrained problem that requires additional constraints beyond user inputs; the two main approaches to introducing such constraints are physics-based and data-driven. Despite significant progress over the past 20 years, major difficulties still exist for both approaches, and building animation systems that are flexible to user requirements while keeping the synthesized motions plausible remains a challenging task. The methods introduced in this thesis, presented in two parts, aim to make animation systems more flexible to user demands without radically violating the constraints that are important for maintaining plausibility. In the first part of the thesis, we address an important subproblem in physics-based animation: controller synthesis for humanoid characters. We describe a method for optimizing the parameters of a physics-based controller for full-body, 3D walking. The objective function includes terms for power minimization, angular momentum minimization, and minimal head motion, among others. Together these terms produce a number of important features of natural walking, including active toe-off, near-passive knee swing, and leg extension during swing. We then extend the algorithm to optimize for robustness to uncertainty. Many unknown factors, such as external forces, control torques, and user control inputs, cannot be known in advance and must be treated as uncertain. Controller optimization then entails optimizing the expected value of the objective function, which is computed by Monte Carlo methods. We demonstrate examples with a variety of sources of uncertainty and task constraints. The second part of this thesis deals with the data-driven approach and the problem of motion modeling. Defining suitable models for human motion data is non-trivial: simple linear models are not expressive enough, while more complex models require setting many parameters and are difficult to learn from limited data. Using Bayesian methods, we demonstrate how the Gaussian process prior can be used to derive a kernelized version of multilinear models. The result is a locomotion model that takes advantage of training data addressed by multiple indices to improve generalization to unseen motions.
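
The walking controller itself is far beyond a short example, but the optimization pattern the abstract describes, minimizing a Monte Carlo estimate of the expected objective under sampled disturbances, can be illustrated generically (a toy quadratic cost stands in for the physics simulation):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Common random numbers: fix the sampled disturbances so the Monte Carlo
# estimate of the expected cost is a deterministic function of the parameters.
perturbations = rng.normal(0.0, 0.5, size=(64, 3))

def simulate_cost(params, perturbation):
    """Stand-in for one simulation rollout of controller `params` under one
    sampled disturbance; the thesis's real objective sums terms such as
    power use, angular momentum, and head motion."""
    return np.sum((params - perturbation) ** 2)

def expected_cost(params):
    # Monte Carlo estimate of E[cost] over the sampled disturbances.
    return np.mean([simulate_cost(params, p) for p in perturbations])

result = minimize(expected_cost, x0=np.zeros(3), method="Nelder-Mead")
print(result.x)  # robust parameters minimize the *expected* objective
```
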
280

Large-scale Peer-to-peer Streaming: Modeling, Measurements, and Optimizing Solutions

Wu, Chuan 26 February 2009 (has links)
Peer-to-peer streaming has emerged as a killer application in today's Internet, delivering a large variety of live multimedia content to millions of users at any given time with low server cost. Although such systems have been successfully deployed, the efficiency and optimality of current peer-to-peer streaming protocols are still less than satisfactory. In this thesis, we investigate optimizing solutions to enhance the performance of state-of-the-art mesh-based peer-to-peer streaming systems, drawing on both theoretical performance modeling and extensive real-world measurements. First, we model peer-to-peer streaming applications in both the single-overlay and multi-overlay scenarios, on the solid foundation of optimization and game theory. Using these models, we design efficient and fully decentralized solutions to achieve performance optimization in peer-to-peer streaming. Then, based on a large volume of live measurements from a commercial large-scale peer-to-peer streaming application, we extensively study the real-world performance of peer-to-peer streaming over a long period of time. Highlights of our measurement study include the topological characterization of large-scale streaming meshes, the statistical characterization of inter-peer bandwidth availability, and an investigation of server capacity utilization in peer-to-peer streaming. Utilizing in-depth insights from our measurements, we design practical algorithms that advance the performance of key protocols in peer-to-peer live streaming. We show that our optimizing solutions fulfill their design objectives in various realistic scenarios, using extensive simulations and experiments.
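
The thesis's models are much richer, but the flavor of such optimization-based protocol design, for instance allocating a fixed upload capacity across peers to maximize an aggregate utility, can be sketched with a projected-gradient iteration (the capacity and weights below are hypothetical):

```python
import numpy as np

capacity = 10.0                            # total upload bandwidth (Mbps)
weights = np.array([1.0, 2.0, 1.0, 0.5])   # peers' relative utility weights
rates = np.full(4, capacity / 4)           # start from an equal split

def project_to_simplex(v, total):
    """Euclidean projection onto {x >= 0, sum(x) = total} (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    cumsum = np.cumsum(u)
    rho = np.nonzero(u - (cumsum - total) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (cumsum[rho] - total) / (rho + 1)
    return np.maximum(v - theta, 0.0)

# Projected gradient ascent on the concave utility sum_i w_i * log(r_i).
for _ in range(500):
    rates = project_to_simplex(rates + 0.05 * weights / rates, capacity)

print(rates)  # converges to capacity * weights / weights.sum()
```
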
