1

Data Security Enhancement for Web Applications Using Cryptographic Back-end Store

Lin, Wenghui 01 January 2009 (has links)
Conventional storage technologies do not always give sufficient guarantees of security for critical information. Databases and file servers are regularly compromised, with consequent theft of identities and unauthorized use of sensitive information. Cryptographic technologies can strengthen these guarantees, but they rely on a key, and key secrecy and key maintenance are difficult problems. Meanwhile, there is an accelerating trend of moving data from local storage to Internet storage. As a result, automatic security for critical information without the need for key management promises to be an important technology for web applications. This thesis presents such a solution for Internet data storage based on a secret sharing scheme: the shared secrets are packaged as JSON objects and delivered to various endpoints using HTTP semantics. A shopping website is developed to demonstrate the solution.
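The abstract does not name the sharing scheme, so as an illustrative sketch (not the thesis's implementation), here is a minimal n-of-n XOR split in Python, with each share packaged as a JSON object destined for a separate endpoint; a threshold scheme such as Shamir's would be the likelier choice in practice:

```python
import base64
import json
import secrets

def split_secret(data: bytes, n: int) -> list[dict]:
    """Split data into n XOR shares; all n are required to reconstruct."""
    shares = [bytearray(secrets.token_bytes(len(data))) for _ in range(n - 1)]
    last = bytearray(data)
    for share in shares:               # last share = data XOR all random shares
        for i, b in enumerate(share):
            last[i] ^= b
    shares.append(last)
    # Each share becomes a JSON object, ready to POST to a separate endpoint.
    return [
        {"share_id": i, "total": n,
         "payload": base64.b64encode(bytes(s)).decode("ascii")}
        for i, s in enumerate(shares)
    ]

def recover_secret(share_objs: list[dict]) -> bytes:
    blobs = [base64.b64decode(o["payload"]) for o in share_objs]
    out = bytearray(len(blobs[0]))
    for blob in blobs:
        for i, b in enumerate(blob):
            out[i] ^= b
    return bytes(out)

secret = b"4111-1111-1111-1111"
shares = split_secret(secret, 3)
print(json.dumps(shares[0]))             # deliver each share to a different store
assert recover_secret(shares) == secret  # no single store can recover the data
```

Because every share is uniformly random on its own, a compromise of any single endpoint reveals nothing, which is what removes the key-management burden the abstract describes.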
2

Technology tools for improving online learning environments

Gonzales, Kimberly Sharon 16 April 2013 (has links)
For my master's project, I worked on technical web applications for two different research groups. The first application was Adventure Learning Water Expeditions, an online learning environment for K-12 students in Idaho. The second was Project Engage, a computer science principles course for high school students. In this paper, I describe each web application, covering its technical details, the challenges I faced, and how my work connects with relevant research in the field of Learning Technologies.
3

Configuration of semantic web applications using lightweight reasoning

Taylor, Stuart January 2014 (has links)
The web of data has continued to expand thanks to the principles of Linked Data outlined by Tim Berners-Lee, increasing its impact on the semantic web in both the depth and range of its data sources. Meanwhile, traditional web applications and technologies with a strong focus on user interaction, such as blogs, wikis, folksonomy-based systems, and content management systems, have become an integral part of the World Wide Web. However, the semantic web has not yet managed to fully harness these technologies, resulting in a lack of linked data coming from user-generated content. The high-level aim of this thesis is to answer the question of whether semantic web applications can be configured to use existing technologies that encourage user-generated content on the Web. This thesis proposes an approach to reusing user-generated content from folksonomy-based systems in semantic web applications, allowing these applications to be configured to make use of the structure and associated reasoning power of the semantic web while reusing the vast amount of data already existing in these folksonomy-based systems. It proposes two new methods of semantic web application development: (i) a reusable infrastructure for building semantic mashup applications that can be configured to make use of the proposed approach; and (ii) an approach to configuring traditional web content management systems (CMS) to maintain repositories of Linked Data. The proposed approach allows semantic web applications to make use of tagged resources, while also addressing some limitations of the folksonomy approach by using ontology reasoning to exploit the structured information held in domain ontologies. The reusable infrastructure provides a set of components that allow semantic web applications to be configured to reuse content from folksonomy-based systems, while also allowing the users of these systems to contribute to the semantic web indirectly via the proposed approach. The proposed Linked Data CMS approach provides configurable tools for semantic web application developers to build an entire website based on linked data, while allowing ordinary web users to contribute directly to the semantic web using familiar CMS tools. The approaches proposed in this thesis make use of lightweight ontology reasoning, which is both efficient and scalable, to provide a basis for the development of practical semantic web applications. The research presented in this thesis shows how the semantic web can reuse both folksonomies and content management systems from Web 2.0 to help narrow the gap between these two key areas of the web.
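As an illustration of the kind of lightweight reasoning described above (not code from the thesis; all tag and class names are hypothetical), this Python sketch maps folksonomy tags onto a toy ontology and expands them through the subClassOf hierarchy:

```python
# Toy domain ontology: child -> parent (rdfs:subClassOf edges).
SUBCLASS_OF = {
    "Jaguar": "BigCat",
    "BigCat": "Mammal",
    "Mammal": "Animal",
}

# Folksonomy tags mapped onto ontology concepts (the fuzzy, human step).
TAG_TO_CONCEPT = {"jaguar": "Jaguar", "big-cat": "BigCat"}

def superclasses(concept: str) -> set[str]:
    """Transitive closure over subClassOf: the 'lightweight reasoning' step."""
    out = set()
    while concept in SUBCLASS_OF:
        concept = SUBCLASS_OF[concept]
        out.add(concept)
    return out

def concepts_for_resource(tags: list[str]) -> set[str]:
    concepts = {TAG_TO_CONCEPT[t] for t in tags if t in TAG_TO_CONCEPT}
    for c in list(concepts):
        concepts |= superclasses(c)
    return concepts

# A photo tagged 'jaguar' now also answers queries for Mammal or Animal;
# unknown tags like 'cute' are simply ignored.
print(concepts_for_resource(["jaguar", "cute"]))
```

This closure is linear in the depth of the hierarchy, which is what makes such reasoning efficient and scalable enough for interactive web applications.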
4

Automating Reuse in Web Application Development

Maras, Josip January 2014 (has links)
Web applications are one of the fastest growing types of software systems today. Structurally, they are composed of two parts: the server side, used for data access and business logic, and the client side, used as the user interface. In recent years, thanks to fast, modern web browsers and advanced scripting techniques, developers are building complex interfaces, and the client side is playing an increasingly important role. From the user's perspective, the client side offers a number of features. A feature is an abstract notion representing a distinguishable part of the system behavior. Similar features are often used in a large number of web applications, and facilitating their reuse would offer considerable benefits. However, the client-side technology stack does not offer any widely used structured reuse method, and code responsible for a feature is usually copy-pasted into the new application. Copy-paste reuse can be complex and error-prone: it is usually hard to identify exactly the code responsible for a certain feature and introduce it into the new application without errors. The primary focus of the research described in this PhD thesis is to provide methods and tools for automating reuse in client-side web application development. This overarching problem leads to a number of sub-problems: i) how to identify the code responsible for a particular feature; ii) how to include the code that implements a feature into an already existing application without breaking either the feature's code or the application's; and iii) how to automatically generate sequences of user actions that accurately capture the behavior of a feature. In order to tackle these problems we have made the following contributions: i) a client-side dependency graph capable of capturing the dependencies that exist in client-side web applications; ii) a method capable of identifying the exact code and resources that implement a particular feature; iii) a method that can introduce code from one application into another without introducing errors; and iv) a method for generating usage scenarios that cause the manifestation of a feature. Each contribution was evaluated on a suite of web applications, and the evaluations have shown that each method is capable of performing its intended purpose.
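To illustrate contribution (i) in miniature (the graph data and node names are hypothetical, not the thesis's implementation), a client-side dependency graph reduces feature identification to reachability from the events a usage scenario triggers:

```python
# Hypothetical dependency graph: node -> nodes it depends on
# (event handlers, functions, DOM elements, CSS resources ...).
DEPENDS_ON = {
    "click:#gallery-next": ["fn:showNext"],
    "fn:showNext": ["fn:preload", "dom:#gallery", "css:gallery.css"],
    "fn:preload": ["fn:urlFor"],
    "fn:checkout": ["fn:cartTotal"],   # belongs to an unrelated feature
}

def feature_slice(entry_points: list[str]) -> set[str]:
    """Everything reachable from the events a feature's usage scenario fires."""
    needed, stack = set(), list(entry_points)
    while stack:
        node = stack.pop()
        if node in needed:
            continue
        needed.add(node)
        stack.extend(DEPENDS_ON.get(node, []))
    return needed

# The code and resources required to reuse 'gallery next' -- and nothing else;
# fn:checkout and fn:cartTotal are correctly excluded.
print(feature_slice(["click:#gallery-next"]))
```

The hard part in a real engine is building DEPENDS_ON in the first place, since JavaScript's dynamism means the edges only become visible by observing execution.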
5

Creating a Testing Framework and Workflow for Developers New to Web Application Engineering

Ashby, Tag G 01 June 2014 (has links) (PDF)
Web applications are quickly replacing standalone applications for everyday tasks. These web applications need to be tested to ensure proper functionality and reliability. There have been substantial efforts to create tools that assist with the testing of web applications, but there is no standard set of tools or recommended workflow to ensure both speed of development and strength of application. We have used and outlined the merits of a number of existing testing tools and brought together the best among them to create what we believe is a fully-featured, easy-to-use testing framework and workflow for web application development. We then took an existing web application, PolyXpress, and augmented its development process to include our workflow suggestions in order to incorporate testing at all levels. PolyXpress is a web application that “allows you to create location-based stories, build eTours, or create restaurant guides. It is the tool that will bring people to locations in order to entertain, educate, or provide amazing deals.”[10] After incorporating our testing procedures, we immediately detected previously unknown bugs in the software. In addition, there is now a workflow in place for future developers to use, which will expedite their testing and development.
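As a rough sketch of the layered testing such a workflow encourages (the helper function, URL, and endpoint are hypothetical, not taken from PolyXpress), a unit test exercises pure logic while an integration test checks the running application over HTTP:

```python
import unittest
import requests  # pip install requests

BASE_URL = "http://localhost:8080"  # a hypothetical local dev server

def story_slug(title: str) -> str:
    """Example unit under test: URL slug for a story title."""
    return "-".join(title.lower().split())

class UnitTests(unittest.TestCase):
    def test_slug(self):
        self.assertEqual(story_slug("My eTour"), "my-etour")

class IntegrationTests(unittest.TestCase):
    def test_homepage_up(self):
        # HTTP-level check; requires the dev server to be running.
        resp = requests.get(BASE_URL + "/", timeout=5)
        self.assertEqual(resp.status_code, 200)

if __name__ == "__main__":
    unittest.main()
```

Running the fast unit layer on every save and the slower HTTP layer before each commit is one common division of labor; a browser-automation layer (e.g., Selenium) would sit above both.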
6

Byzantine fault tolerant web applications using the UpRight library

Rebello, Rohan Francis August 2009 (has links)
Web applications are widely used for email, online sales, auctions, collaboration, and more. Most of today’s highly available web applications implement fault-tolerant protocols in order to tolerate crash faults. However, recent system-wide failures have been caused by arbitrary or Byzantine faults, which these applications are not capable of handling. Despite the abundance of research on adding Byzantine fault tolerance (BFT) to a system, BFT systems have found little use outside the research community. The reasons typically cited are the difficulty of implementing such systems and the performance overhead associated with them. While most research focuses on improving the performance or lowering the replication cost of BFT protocols, little has been done to make them easy to implement. The goal of this thesis is to evaluate the viability of BFT web applications and show that, given the right abstraction, it is viable to build a Byzantine fault tolerant web application without extensive reimplementation of the web application. To achieve this goal, it demonstrates a BFT implementation of the Apache Tomcat servlet container and the VQWiki web application using the UpRight BFT library. The UpRight library provides abstractions that make it easy to develop BFT applications, and we leverage this abstraction to reduce the implementation cost of our system. Our results are encouraging: less than 2% of the original system needs to be modified while still retaining all of its functionality. Given the design trade-offs that we make in implementing the system, we also get comparable performance, indicating that implementing BFT is a viable option to explore for highly available web applications.
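The abstract does not show UpRight's interface, so as a hedged illustration of the core BFT abstraction it builds on (accepting a reply only once f+1 of 3f+1 replicas agree), here is a minimal client-side quorum sketch in Python; the replica addresses are hypothetical and this is not UpRight's actual API:

```python
from collections import Counter
import requests  # pip install requests

F = 1  # tolerated Byzantine faults; BFT needs 3f + 1 replicas
REPLICAS = [f"http://replica{i}:8080" for i in range(3 * F + 1)]

def bft_get(path: str) -> str:
    """Accept a reply only once f+1 replicas agree on it byte-for-byte."""
    votes = Counter()
    for base in REPLICAS:
        try:
            body = requests.get(base + path, timeout=5).text
        except requests.RequestException:
            continue          # a crashed or slow replica simply loses its vote
        votes[body] += 1
        if votes[body] >= F + 1:
            return body       # f+1 matching replies => at least one is honest
    raise RuntimeError("no reply matched by f+1 replicas")
```

With at most f faulty replicas, any set of f+1 identical replies must contain a correct one, which is why a lying or corrupted Tomcat instance cannot single-handedly serve bad data.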
7

Design and Implementation of Thread-Level Speculation in JavaScript Engines

Martinsen, Jan Kasper January 2014 (has links)
Two important trends in computer systems are that applications are moving to the Internet as web applications, and that computer systems are getting an increasing number of cores to increase performance. It has been shown that JavaScript in web applications has a large potential for parallel execution, despite the fact that JavaScript is a sequential language. In this thesis, we show that JavaScript execution in web applications and in benchmarks is fundamentally different, and that one effect of this is that Just-in-time compilation often does not improve the execution time of JavaScript in web applications, but rather increases it. Since there is a significant potential for parallel computation in JavaScript for web applications, we show that Thread-Level Speculation can be used to take advantage of this in a manner completely transparent to the programmer. The Thread-Level Speculation technique is very suitable for improving the performance of JavaScript execution in web applications; however, we observe that the memory overhead can be substantial. Therefore, we propose several techniques for adaptive speculation as well as for memory reduction. In the last part of this thesis we show that Just-in-time compilation and Thread-Level Speculation are complementary techniques: the execution characteristics of JavaScript in web applications are very suitable for combining the two. Finally, we show that Thread-Level Speculation and Just-in-time compilation can be combined to reduce power usage on embedded devices.
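To make the speculate/commit/squash cycle concrete, here is a sequential toy model in Python; a real JavaScript engine performs this across threads inside the runtime with far more bookkeeping, so this is illustrative only:

```python
STATE = {"x": 1, "y": 2}  # shared state a real engine would guard

def speculate(fn):
    """Run fn against a private write buffer; commit only if its reads held."""
    reads, writes = set(), {}

    def get(k):                       # record every read; see own writes first
        reads.add(k)
        return writes.get(k, STATE[k])

    def put(k, v):                    # buffer writes instead of applying them
        writes[k] = v

    snapshot = dict(STATE)
    fn(get, put)
    if any(snapshot[k] != STATE[k] for k in reads):
        return False                  # conflict: a read was invalidated -> squash
    STATE.update(writes)              # no conflict: commit the buffered writes
    return True

ok = speculate(lambda get, put: put("y", get("x") + 10))
print(ok, STATE)                      # True {'x': 1, 'y': 11}
```

The memory overhead the abstract mentions comes precisely from these read sets and write buffers, which must be kept live for every speculation in flight.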
8

Data Recovery For Web Applications

Akkus, Istemi Ekin 14 December 2009 (has links)
Web applications store their data at the server. Despite several benefits, this design raises a serious problem: a bug or misconfiguration causing data loss or corruption can affect a large number of users. We describe the design of a generic recovery system for web applications. Our system tracks application requests and reuses the undo logs already kept by databases to selectively recover from corrupting requests and their effects. The main challenge is to correlate requests across the multiple tiers of the application to determine the correct recovery actions. We explore using dependencies both within and across requests at three layers (database, application, and client) to help identify data corruption accurately. We evaluate our system using known bugs and misconfigurations in popular web applications, including WordPress, Drupal, and Gallery2. Our results show that our system enables recovery from data corruption without loss of critical data, while incurring little overhead for request tracking.
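A minimal sketch of the selective-undo idea, assuming an in-memory store and a per-request undo log (the real system reuses the database's own undo logs and also tracks cross-request dependencies):

```python
UNDO_LOG = []              # append-only: (request_id, undo action) per write
DB = {"post:1": "hello"}   # stand-in for the application database

def restore(key, old):
    if old is None:
        DB.pop(key, None)  # the write created the key; undo removes it
    else:
        DB[key] = old

def db_put(request_id: str, key: str, value: str):
    old = DB.get(key)
    # Record how to reverse this write before applying it.
    UNDO_LOG.append((request_id, lambda k=key, v=old: restore(k, v)))
    DB[key] = value

def recover(bad_requests: set[str]):
    """Undo, in reverse order, every write made by the corrupting requests."""
    for req, undo in reversed(UNDO_LOG):
        if req in bad_requests:
            undo()

db_put("req-1", "post:1", "edited")
db_put("req-2", "post:2", "spam from buggy plugin")
recover({"req-2"})
print(DB)   # {'post:1': 'edited'} -- req-1's legitimate edit survives
```

The correlation problem the abstract highlights is deciding which request IDs belong in bad_requests once later requests have read the corrupted data; this sketch assumes that set is already known.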
9

MT-WAVE: Profiling multi-tier web applications

June 2015 (has links)
The web is evolving: what was once primarily used for sharing static content has now become a platform for rich client-side applications. These applications do not run exclusively on the client; while the client is responsible for presentation and some processing, a significant amount of processing and persistence happens server-side. This has advantages and disadvantages. The biggest advantage is that the user’s data is accessible from anywhere: no matter which device you sign into a web application from, everything you’ve been working on is instantly accessible. The biggest disadvantage is that large numbers of servers are required to support a growing user base; unlike traditional client applications, an organization offering a web application needs to provision compute and storage resources for each expected user. This infrastructure is designed in tiers that are responsible for different aspects of the application, and these tiers may not even be run by the same organization. As these systems grow in complexity, it becomes progressively more challenging to identify and solve performance problems. While there are many measures of software system performance, web application users only care about response latency. This “fingertip-to-eyeball performance” is the only metric that users directly perceive: when a button is clicked in a web application, how long does it take for the desired action to complete? MT-WAVE is a system for solving fingertip-to-eyeball performance problems in web applications. The system is designed for multi-tier tracing: each piece of the application is instrumented, execution traces are collected, and the system merges these traces into a single coherent snapshot of system latency at every tier. To ensure that user-perceived latency is accurately captured, tracing begins in the web browser. The application developer then uses the MT-WAVE Visualization System to explore the execution traces, first identifying which tier contributes the most latency and then zooming in on the specific function calls in that tier to find optimization candidates. After fixing an identified problem, the system is used to verify that the changes had the intended effect. This optimization methodology and toolset are explained through a series of case studies that identify and solve performance problems in open-source and commercial applications. These case studies demonstrate both the utility of the MT-WAVE system and the unintuitive nature of system optimization.
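To illustrate the multi-tier tracing idea (a sketch under assumptions, not MT-WAVE's implementation; the header name and tier labels are hypothetical), each tier records timed spans tagged with a trace ID propagated from the browser, and merging a request is just grouping spans by that ID:

```python
import time
from contextlib import contextmanager

SPANS = []   # (trace_id, tier, name, start, duration) collected from every tier

@contextmanager
def span(trace_id: str, tier: str, name: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append((trace_id, tier, name, start, time.perf_counter() - start))

def worst_offenders(trace_id: str):
    """Merge the spans of one end-to-end request, slowest first."""
    mine = [s for s in SPANS if s[0] == trace_id]
    return sorted(mine, key=lambda s: s[4], reverse=True)

# The trace id would arrive from the browser via an HTTP header
# (e.g. a hypothetical 'X-Trace-Id'), tying all tiers to one user click.
tid = "req-42"
with span(tid, "server", "handle /checkout"):
    with span(tid, "db", "SELECT cart items"):
        time.sleep(0.02)      # stand-in for real work
    time.sleep(0.005)

for _, tier, name, _, dur in worst_offenders(tid):
    print(f"{tier:8s} {name:24s} {dur * 1000:6.1f} ms")
```

Starting the timeline in the browser is what turns this from server profiling into the fingertip-to-eyeball measurement the thesis targets, since network and rendering time appear as spans too.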
