161

TrustVoucher: automating trust in websites

Dean, Kevin January 1900 (has links)
Master of Science / Department of Computing and Information Science / Eugene Vasserman / Since the early 2000s, Internet users have continuously fallen prey to the perils of identity theft and malware. A number of tools have been proposed and implemented to foster trust in deserving websites and alert users to undeserving ones, including P3P and trust seals. Each of these has fallen short, with studies showing that users simply do not use them. TrustVoucher is a prototype system to forge bonds of trust between users and websites by automatically determining whether a website is backed by a trusted third party. It takes inspiration from the real-life way of trusting businesses, in which one aggregates recommendations from friends. TrustVoucher protects users who are attentive to its messages by informing them of sites that have put forth the effort to be endorsed by a trusted third party. An experimental study of the effectiveness of the chosen interface determined that users did not consistently trust TrustVoucher's recommendations, so future work will explore options for gathering the trust of users to distribute among websites.
162

Study of Facebook’s application architecture

Sundar, Nataraj January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Xinming (Simon) Ou / Facebook is a social networking service launched in February 2004, currently with 600 million active users. Users can create a personal profile, add friends, and exchange messages and notifications when they change their profiles. Facebook has the highest usage among all social networks worldwide. Its most valuable asset is access to the personal data of all its users, making the security of such data a primary concern. Users' data can be accessed by Facebook and by third parties through Applications (web applications loaded in the context of Facebook; building an application on Facebook allows integration with many aspects such as the user's profile information, news feed, and notifications). "On profile" advertisement in Facebook is a classic example of how Facebook tailors the advertisements a user sees based on the information in his profile. Because Facebook has prioritized user friendliness and ease of use of Applications over the security of user data, serious questions about privacy are raised. We provide here an in-depth view of Facebook's Application Authentication and Authorization architecture. We include what are, in our opinion, its positives and negatives, and we suggest improvements. This document takes on the roles of the User, the Application, and the Facebook server at appropriate points.
163

Purchase order system

Battula, Tejaswi January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell L. Neilsen / Maintaining paper bills is a tedious job, and there is always a chance of losing purchase orders or missing payment dates. An online purchase order system makes it easier to enter and maintain correct information. The main objective of this application is to keep track of all purchase orders made by faculty or staff members of the department for their students or for their research work. Purchase orders can be placed with one click using the online purchase order system. A user registers with the website, letting the admin know whether he is an authorized user of the department. Once the admin grants the required permission, the user can create a new purchase order from a desired vendor registered with the department. He can also make the payment by providing the funding source through which the order will be paid. Finalized orders are provided with an invoice in PDF format, which can also be printed. Additionally, an Admin user can manage and view users and edit their POs according to the information provided by the user; the admin also has permission to add new vendors and manage them. The website is developed using the PHP scripting language, one of the major technologies used today to build websites, along with HTML, jQuery, and CSS for a better design. The major emphasis of this application is on user-interactive techniques that simplify user needs and provide the specific products required by the user.
164

Predicting sentiment-mention associations in product reviews

Vaswani, Vishwas January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / With the rising trend in social networking, more people express their opinions on the web. As a consequence, there has been an increase in the number of blogs where people write reviews about the products they buy or services they experience. These reviews can be very helpful to other potential customers who want to know the pros and cons of a product, and also to manufacturers who want to get feedback from customers about their products. Sentiment analysis of online data (such as review blogs) is a rapidly growing field of research in Machine Learning, which can leverage online reviews and quickly extract the sentiment of a whole blog. The accuracy of a sentiment analyzer relies heavily on correctly identifying associations between a sentiment (opinion) word and the targeted mention (token or object) in blog sentences. In this work, we focus on the task of automatically identifying sentiment-mention associations; in other words, we identify the target mention that is associated with a sentiment word in a sentence. Support Vector Machines (SVM), a supervised machine learning algorithm, was used to learn classifiers for this task, with syntactic and semantic features extracted from sentences as input. The dataset used in this work contains reviews from the car and camera domains. The work is divided into two phases. In the first phase, we learned domain-specific classifiers for the car and camera domains, respectively. To further improve the predictions of the domain-specific classifiers, we investigated the use of transfer learning techniques in the second phase. More precisely, the goal was to use knowledge from a source domain to improve predictions for a target domain. We considered two transfer learning approaches: a feature-level fusion approach and a classifier-level fusion approach.
Experimental results show that transfer learning can help to improve the predictions made using the domain specific classifier approach. While both the feature level and classifier level fusion approaches were shown to improve the prediction accuracy, the classifier level fusion approach gave better results.
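The two fusion strategies named in this abstract can be illustrated with a minimal sketch. This is an assumed, simplified rendering (function names, the score blend, and the 0.5 decision threshold are illustrative, not taken from the thesis): feature-level fusion combines the domains' feature representations before training, while classifier-level fusion combines the decision scores of two already-trained domain-specific classifiers.

```python
# Hypothetical sketch of the two transfer-learning fusion strategies.
# All names and parameters here are illustrative assumptions.

def feature_level_fusion(source_feats, target_feats):
    """Feature-level fusion: concatenate source- and target-domain
    feature vectors so one classifier trains on the combined view."""
    return source_feats + target_feats

def classifier_level_fusion(score_source, score_target, alpha=0.5):
    """Classifier-level fusion: blend the decision scores of two
    domain-specific classifiers; alpha weights the target domain."""
    return alpha * score_target + (1 - alpha) * score_source

def predict(score, threshold=0.5):
    """Map a fused score to a sentiment-mention association label."""
    return "associated" if score >= threshold else "not-associated"

# Example: a weak source-domain score and a strong target-domain score
fused = classifier_level_fusion(0.2, 0.9, alpha=0.7)  # 0.69
```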
165

Classroom quiz app

Konganda, Hemala January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / David A. Gustafson / As part of enhancing teaching and learning in class, this project implements a Student Response System (SRS) that provides interactive in-class quizzes between students and instructors through a smart device. The scope of the project is limited to Android devices. The application lets the instructor enter quiz questions and answers; students receive the questions instantly and choose their answers to the best of their knowledge. The answers are then validated and visualized, and based on the results the instructor applies the concept of Talk to Your Partner (TTYP). The instructor has the option of pairing students either randomly or based on their results on the previous question.
166

A flexible framework for leveraging verification tools to enhance the verification technologies available for policy enforcement

Larkin, James Unknown Date (has links)
Program verification is vital as more and more users are creating, downloading and executing foreign computer programs. Software verification tools provide a means for determining if a program adheres to a user’s security requirements, or security policy. There are many verification tools that exist for checking different types of policies on different types of programs. Currently, however, there is no verification tool capable of determining if all types of programs satisfy all types of policies. This thesis describes a framework for supporting multiple verification tools to determine whether a program satisfies a policy. A user’s security requirements are represented at multiple levels of abstraction as Intermediate Execution Environments. Using a sequence of configurations, a user’s security requirements are transformed from the abstract level to the tool level, possibly for multiple verification tools. Using a number of case studies, the validity of the framework is shown.
167

Domain Specialisation and Applications of Model-Based Testing

Pari-Salas, Percy Antonio Unknown Date (has links)
Software testing, one of the most important methods for quality assurance, has become too expensive and error prone for complex modern software systems. Test automation aims to reduce the costs of software testing and to improve its reliability. Despite advances in test automation, there are some domains for which automation seems to be difficult: for example, testing software to reveal the presence of security vulnerabilities, testing for conformance to security properties that traverse several functionalities of an application (such as privacy policies), and testing asynchronous concurrent systems. Although there are research works that aim to solve the problems of test automation for these domains, there is still a gap between the practice and the state of the art. These works describe specific approaches that deal with particular problems, generally under restricted conditions; individually, they have not made a noticeable impact on the practice of test automation for these domains. Therefore, there is a need for an integrated framework that binds specific approaches together in order to provide more complete solutions. It is also important for this framework to show how current test automation efforts, tools and frameworks, can be reused. This thesis addresses this need by describing a general model-based testing framework and its specialisation for the testing domains of security vulnerabilities, privacy policies and asynchronous systems.
168

Toward More Efficient Motion Planning with Differential Constraints

Kalisiak, Maciej 31 July 2008 (has links)
Agents with differential constraints, although common in the real world, pose a particular difficulty for motion planning algorithms. Methods for solving such problems are still relatively slow and inefficient. In particular, current motion planners generally can neither "see" the world around them nor generalize from experience. That is, their reliance on collision tests as the only means of sensing the environment yields a tactile, myopic perception of the world. Such short-sightedness greatly limits any potential for detection, learning, or reasoning about frequently encountered situations. As a result, these methods solve each problem in exactly the same way, whether the first or the hundredth time they attempt it, each time none the wiser. The key component of this thesis proposes a general approach for motion planning in which local sensory information, in conjunction with prior accumulated experience, is exploited to improve planner performance. The approach relies on learning viability models for the agent's "perceptual space", and the use thereof to direct planning effort. In addition, a method is presented for improving runtimes of the RRT motion planning algorithm in heavily constrained search spaces, a common feature for agents with differential constraints. Finally, the thesis explores the use of viability models for maintaining safe operation of user-controlled agents, a related application which could be harnessed to yield additional, more "natural" experience data for further improving motion planning.
169

Clause Learning, Resolution Space, and Pebbling

Hertel, Philipp 19 January 2009 (has links)
Currently, the most effective complete SAT solvers are based on the DPLL algorithm augmented by Clause Learning. These solvers can handle many real-world problems from application areas like verification, diagnosis, planning, and design. Clause Learning works by storing previously computed, intermediate results and then reusing them to prune the future search tree. Without Clause Learning, however, DPLL loses most of its effectiveness on real-world problems. Recently there has been some work on obtaining a deeper understanding of the technique of Clause Learning. In this thesis, we contribute to the understanding of Clause Learning, and the Resolution proof system that underlies it, in a number of ways. We characterize Clause Learning as a new, intuitive Resolution refinement which we call CL. We then show that CL proofs can effectively p-simulate general Resolution. Furthermore, this result holds even for the more restrictive class of greedy, unit propagating CL proofs, which more accurately characterize Clause Learning as it is used in practice. This result is surprising and indicates that Clause Learning is significantly more powerful than was previously known. Since Clause Learning makes use of previously derived clauses, it motivates the study of Resolution space. We contribute to this area of study by proving that determining the variable space of a Resolution derivation is PSPACE-complete. The reduction also yields a surprising exponential size/space trade-off for Resolution in which an increase of just 3 units of variable space results in an exponential decrease in proof size. This result runs counter to the intuitions of many in the SAT-solving community, who have generally believed that proof size should decrease smoothly as available space increases. In order to prove these Resolution results, we need to make use of some intuition regarding the relationship between Black-White Pebbling and Resolution.
In fact, determining the complexity of Resolution variable space required us to first prove that Black-White Pebbling is PSPACE-complete. The complexity of the Black-White Pebbling Game has remained an open problem for 30 years and resisted numerous attempts to solve it. Its solution is the primary contribution of this thesis.
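The unit propagation step that drives DPLL, and which the abstract's "greedy, unit propagating CL proofs" refine, can be sketched compactly. This is a generic textbook illustration, not the thesis's construction; clauses are lists of nonzero integers in the usual DIMACS-style convention (negative means negated variable). A learned clause is simply appended to the clause list, after which propagation prunes any branch that falsifies it.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign the sole literal of any unit clause.

    `clauses` is a list of clauses, each a list of nonzero ints
    (negative = negated variable). Returns the extended assignment
    {var: bool}, or None if some clause is falsified (a conflict)."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif (lit > 0) == val:
                    satisfied = True  # clause already true
                    break
            if satisfied:
                continue
            if not unassigned:
                return None  # conflict: every literal is false
            if len(unassigned) == 1:
                lit = unassigned[0]  # unit clause: forced assignment
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment
```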
170

Nogood Processing in CSPs

Katsirelos, George 19 January 2009 (has links)
The constraint satisfaction problem is an NP-complete problem that provides a convenient framework for expressing many computationally hard problems. In addition, domain knowledge can be efficiently integrated into CSPs, providing a potentially exponential speedup in some cases. The CSP is closely related to the satisfiability problem, and many of the techniques developed for one have been transferred to the other. However, the recent dramatic improvements in SAT solvers that result from learning clauses during search have not been transferred successfully to CSP solvers. In this thesis we propose that this failure is due to a fundamental restriction of nogood learning, which is intended to be the analogue of clause learning in CSPs. This restriction means that nogood learning can exhibit a superpolynomial slowdown compared to clause learning in some cases. We show that the restriction can be lifted, delivering promising results. Integration of nogood learning in a CSP solver, however, presents an additional challenge, as a large body of domain knowledge is typically encoded in the form of domain-specific propagation algorithms called global constraints. Global constraints often completely eliminate the advantages of nogood learning. We demonstrate generic methods that partially alleviate the problem irrespective of the type of global constraint. We also show that more efficient methods can be integrated into specific global constraints and demonstrate the feasibility of this approach on several widely used global constraints.
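The basic idea of nogood recording, the CSP analogue of clause learning discussed above, can be sketched with a toy backtracking solver. This is an assumed, deliberately naive illustration (storing whole partial assignments as nogoods) and not the thesis's generalized scheme, which lifts precisely the restrictions such simple recording imposes:

```python
def solve(variables, domains, conflicts):
    """Tiny backtracking CSP solver with naive nogood recording.

    `conflicts(assignment)` returns True if the partial assignment
    already violates a constraint. Whenever a partial assignment is
    proven unextendable, it is stored as a nogood and pruned if it
    is ever re-encountered along another branch."""
    nogoods = set()

    def search(assignment):
        key = frozenset(assignment.items())
        if key in nogoods:
            return None  # pruned by a previously recorded nogood
        if conflicts(assignment):
            nogoods.add(key)
            return None
        if len(assignment) == len(variables):
            return dict(assignment)  # complete, consistent assignment
        var = next(v for v in variables if v not in assignment)
        for val in domains[var]:
            assignment[var] = val
            result = search(assignment)
            if result is not None:
                return result
            del assignment[var]
        nogoods.add(key)  # every extension failed: record the nogood
        return None

    return search({})
```

Graph 2-coloring is a convenient test case: a 3-node path is colorable with two colors, while a triangle is not.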
