101

Querying, Exploring and Mining the Extended Document

Sarkas, Nikolaos 31 August 2011 (has links)
The evolution of the Web into an interactive medium that encourages active user engagement has ignited a huge increase in the amount, complexity and diversity of available textual data. This evolution forces us to re-evaluate our view of documents as simple pieces of text and of document collections as immutable and isolated. Extended documents published in the context of blogs, micro-blogs, on-line social networks, and customer feedback portals can be associated with a wealth of meta-data in addition to their textual component: tags, links, sentiment, entities mentioned in text, etc. Collections of user-generated documents grow, evolve, co-exist and interact: they are dynamic and integrated. These unique characteristics of modern documents and document collections present us with exciting opportunities for improving the way we interact with them. At the same time, this additional complexity, combined with the vast amounts of available textual data, presents us with formidable computational challenges. In this context, we introduce, study and extensively evaluate an array of effective and efficient solutions for querying, exploring and mining extended documents and dynamic, integrated document collections. For collections of socially annotated extended documents, we present an improved probabilistic search and ranking approach based on our growing understanding of the dynamics of the social annotation process. For extended documents, such as blog posts, associated with entities extracted from text and categorical attributes, we enable their interactive exploration through the efficient computation of strong entity associations. Associated entities are computed for all possible attribute-value restrictions of the document collection. For extended documents, such as user reviews, annotated with a numerical rating, we introduce a keyword-query refinement approach. The solution enables the interactive navigation and exploration of large result sets. We extend the skyline query to document streams, such as news articles, associated with categorical attributes and partially ordered domains. The technique incrementally maintains a small set of recent, uniquely interesting extended documents from the stream. Finally, we introduce a solution for the scalable integration of structured data sources into Web search. Queries are analysed in order to determine what structured data, if any, should be used to augment Web search results.
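The streaming skyline idea can be suggested with a minimal sketch, assuming each document carries a few categorical attributes whose values are compared under explicitly supplied partial orders. The attribute names and the `PartialOrder` encoding are assumptions for illustration; the sketch shows only the dominance test and the incremental update, and omits the thesis's handling of recency.

```python
from typing import Dict, List, Set

# Hypothetical partial orders per attribute: each value maps to the set of values it is
# preferred over (transitively closed). Flat categorical attributes use empty sets.
PartialOrder = Dict[str, Set[str]]


def dominates(a: Dict[str, str], b: Dict[str, str],
              orders: Dict[str, PartialOrder]) -> bool:
    """True if document a is at least as preferred as b on every attribute
    and strictly more preferred on at least one."""
    strictly_better = False
    for attr, order in orders.items():
        va, vb = a[attr], b[attr]
        if va == vb:
            continue
        if vb in order.get(va, set()):
            strictly_better = True          # a strictly preferred on this attribute
        else:
            return False                    # incomparable or worse: no dominance
    return strictly_better


def update_skyline(skyline: List[Dict[str, str]], new_doc: Dict[str, str],
                   orders: Dict[str, PartialOrder]) -> List[Dict[str, str]]:
    """Incrementally maintain the skyline as a new document arrives from the stream."""
    if any(dominates(s, new_doc, orders) for s in skyline):
        return skyline                      # new document is dominated: discard it
    # Drop existing skyline documents that the new one dominates, then add it.
    skyline = [s for s in skyline if not dominates(new_doc, s, orders)]
    skyline.append(new_doc)
    return skyline
```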
102

Flexible Distributed Business Process Management

Muthusamy, Vinod 11 January 2012 (has links)
Many large business processes are inherently distributed, spanning multiple organizations, administrative domains, and geographic locations. To support such applications, this thesis develops a flexible and distributed platform to develop, execute, and monitor business processes. The solutions utilize a distributed content-based publish/subscribe overlay that is extended with support for mobile clients and client interest churn. Over this layer, a distributed execution engine uses events to coordinate the execution of the process, and dynamically redeploys activities in the process in order to minimize a user-specified cost function and preserve service level agreements (SLAs). Finally, a management layer allows users to find and automatically compose services available across a distributed set of service registries, and monitor processes for SLA violations. Evaluations show that the distributed execution engine can scale better than alternative architectures, exhibiting over 60% improvements in execution time in one experiment. The system can also dynamically redeploy processes to reflect changing workload conditions and SLAs, saving up to 90% of the process messaging overhead of a static deployment.
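To make the event-based coordination style concrete, here is a minimal, single-process stand-in for a content-based publish/subscribe broker, where subscriptions are predicates over event attributes rather than topic names. It only illustrates the interaction pattern; the thesis's overlay is distributed and extended with mobility and churn support, none of which is modeled here, and the event fields in the example are hypothetical.

```python
from typing import Any, Callable, Dict, List, Tuple

Event = Dict[str, Any]
Predicate = Callable[[Event], bool]


class ContentBasedBroker:
    """Single-node stand-in for a content-based publish/subscribe broker."""

    def __init__(self) -> None:
        self._subs: List[Tuple[Predicate, Callable[[Event], None]]] = []

    def subscribe(self, predicate: Predicate, handler: Callable[[Event], None]) -> None:
        self._subs.append((predicate, handler))

    def publish(self, event: Event) -> None:
        # Deliver the event to every subscriber whose predicate matches its content.
        for predicate, handler in self._subs:
            if predicate(event):
                handler(event)


# Example: a shipping activity is triggered only by completed payment events.
broker = ContentBasedBroker()
broker.subscribe(
    lambda e: e.get("type") == "payment" and e.get("status") == "completed",
    lambda e: print(f"start shipping activity for order {e['order_id']}"),
)
broker.publish({"type": "payment", "status": "completed", "order_id": 42})
```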
103

Path Graphs and PR-trees

Chaplick, Steven 20 August 2012 (has links)
The PR-tree data structure is introduced to characterize the sets of path-tree models of path graphs. We further characterize the sets of directed path-tree models of directed path graphs with a slightly restricted form of the PR-tree called the Strong PR-tree. Additionally, via PR-trees and Strong PR-trees, we characterize path graphs and directed path graphs by their Split Decompositions. Two distinct approaches (Split Decomposition and Reduction) are presented to construct a PR-tree that captures the path-tree models of a given graph G = (V, E) with n = |V| and m = |E|. An implementation of the split decomposition approach is presented which runs in O(nm) time. Similarly, an implementation of the reduction approach is presented which runs in O(A(n + m)nm) time (where A(s) is the inverse of Ackermann’s function arising from Union-Find [40]). Also, from a PR-tree, an algorithm to construct a corresponding Strong PR-tree is given which runs in O(n + m) time. The sizes of the PR-trees and Strong PR-trees produced by these approaches are O(n + m) with respect to the given graph. Furthermore, we demonstrate that an implicit form of the PR-tree and Strong PR-tree can be represented in O(n) space.
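The inverse-Ackermann factor in the reduction approach comes from disjoint-set (Union-Find) operations. As a point of reference only, and not the thesis's PR-tree code, a standard disjoint-set forest with union by rank and path compression looks like this:

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression; a sequence of
    m operations on n elements runs in O(m * alpha(n)) amortized time, where alpha
    is the inverse Ackermann function mentioned in the abstract."""

    def __init__(self, n: int) -> None:
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x: int) -> int:
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression (halving)
            x = self.parent[x]
        return x

    def union(self, x: int, y: int) -> bool:
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False                     # already in the same set
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx                  # attach the shorter tree under the taller
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```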
104

Software Evolution: A Requirements Engineering Perspective

Ernst, Neil 21 August 2012 (has links)
This thesis examines the issue of software evolution from a Requirements Engineering perspective. This perspective is founded on the premise that software evolution is best managed with reference to the requirements of a given software system. In particular, I follow the Requirements Problem approach to software development: the problem of developing software can be characterized as finding a specification that satisfies user requirements, subject to domain constraints. To enable this, I propose a shift from treating requirements as artifacts to treating requirements as design knowledge, embedded in knowledge bases. Most requirements today, when they exist in tangible form at all, are static objects. Such artifacts are quickly out of date and difficult to update. Instead, I propose that requirements be maintained in a knowledge base which supports knowledge-level operations for asserting new knowledge and updating existing knowledge. Consistency checks and the entailment of new specifications are performed automatically by answering simple queries. Maintaining a requirements knowledge base in parallel with running code means that changes precipitated by evolution are always addressed relative to the ultimate purpose of the system. This thesis begins with empirical studies which establish the nature of the requirements evolution problem. I use an extended case study of payment cards to motivate the following discussion. I begin at an abstract level, by introducing a requirements engineering knowledge base (REKB) using a functional specification. Since it is functional, the specifics of the implementation are left open. I then describe one implementation, using a reason-maintenance system, and show how this implementation can a) solve static requirements problems; b) help stakeholders bring requirements and implementation back into alignment following a change in the requirements problem; and c) support paraconsistent reasoning to tolerate inconsistency in the REKB. The end result of my work on the REKB is a tool and approach which can guide software developers and software maintainers in design and decision-making in the context of software evolution.
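The knowledge-level TELL/ASK flavor of such a knowledge base can be suggested with a small propositional sketch, assuming domain knowledge is expressed as Horn rules over atomic propositions and that a query asks whether a candidate set of implemented tasks entails the stakeholder goals. The thesis's REKB is specified functionally and implemented with a reason-maintenance system; this toy, with made-up proposition names loosely echoing the payment-card case study, only illustrates the operation style.

```python
from typing import List, Set, Tuple


class SimpleREKB:
    """Toy requirements knowledge base: Horn rules (antecedents imply consequent)
    over atomic propositions, queried for entailment by forward chaining."""

    def __init__(self) -> None:
        self.rules: List[Tuple[Set[str], str]] = []

    def tell_rule(self, antecedents: Set[str], consequent: str) -> None:
        self.rules.append((antecedents, consequent))

    def entails(self, tasks: Set[str], goals: Set[str]) -> bool:
        derived = set(tasks)
        changed = True
        while changed:                      # forward chaining to a fixpoint
            changed = False
            for body, head in self.rules:
                if head not in derived and body <= derived:
                    derived.add(head)
                    changed = True
        return goals <= derived


# Example: two implemented tasks jointly satisfy a stakeholder goal.
rekb = SimpleREKB()
rekb.tell_rule({"encrypt_card_data", "log_access"}, "protect_cardholder_data")
print(rekb.entails({"encrypt_card_data", "log_access"}, {"protect_cardholder_data"}))  # True
```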
105

The Challenge of Web Design Guidelines: Investigating Issues of Awareness, Interpretation, and Efficacy

Szigeti, Stephen James 31 August 2012 (has links)
Guidelines focusing on web interface design allow for the dissemination of complex and multidisciplinary research to communities of practice. Motivated by the desire to better understand how research evidence can be shared with the web design community, this dissertation investigates the role guidelines play in the design process, the attitudes designers hold regarding guidelines, and whether evidence-based guidelines can be consistently interpreted by designers. Guidelines are a potential means to address the knowledge gap between research and practice, yet we do not have a clear understanding of the relationship between research evidence, guideline sets and web design practitioners. In order to better understand how design guidelines are used by designers in the practice of web interface design, four sequential studies were designed: the application of a guideline subset to a design project by 16 students, the assessment of ten health information websites by eight designers using a guideline subset, a web-based survey of 116 designers, and interviews with 20 designers. The studies reveal that guideline use is dependent on the perceived trustworthiness of the guideline, its source and the alignment between guideline advice and designer experience. The first two studies found that guidelines are inconsistently interpreted. One third of the guidelines used in the second study were interpreted differently by participants, an inconsistency which represents a critical problem in guideline use. Findings showed no difference in the characteristics of guidelines which were consistently interpreted and those for which interpretation was the most inconsistent. Further, research evidence was not a factor in guideline use, less than half the designers were aware of evidence-based guideline sets, and guidelines are predominantly used as memory aids. Ultimately, alternatives to guidelines, such as checklists or pattern libraries, may yield the best results in our efforts to share research knowledge with communities of practice.
106

Cost-aware Dynamic Provisioning for Performance and Power Management

Ghanbari, Saeed 30 July 2008 (has links)
Dynamic provisioning of server boxes to applications entails an inherent performance-power trade-off for the service provider, a trade-off that has not been studied in detail. The optimal number of replicas to be dynamically provisioned to an application is ultimately the configuration that results in the highest revenue. The service provider should thus dynamically provision resources for an application only as long as the resulting reward from hosting more clients exceeds its operational costs for power and cooling. We introduce a novel cost-aware dynamic provisioning approach for the database tier of a dynamic content site. Our approach employs Support Vector Machine regression for learning a dynamically adaptive system model. We leverage this lightweight on-line learning approach for two cost-aware dynamic provisioning techniques. The first is a temperature-aware scheme which avoids temperature hot-spots within the set of provisioned machines, and hence reduces cooling costs. The second is a more general cost-aware provisioning technique using a utility function expressing monetary costs for both performance and power.
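A minimal sketch of the modelling step, assuming scikit-learn's SVR as the regression learner: predicted throughput is learned as a function of replica count and offered load, and a utility function (revenue minus power cost, with invented constants) picks the replica count. The features, training data, and utility below are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical training data: (number of replicas, offered load in req/s) -> throughput.
X_train = np.array([[1, 200], [2, 200], [2, 400], [3, 400], [4, 400], [4, 800]])
y_train = np.array([180.0, 200.0, 350.0, 390.0, 400.0, 620.0])

model = SVR(kernel="rbf", C=100.0)           # lightweight model that can be retrained online
model.fit(X_train, y_train)

REVENUE_PER_REQ = 0.002                      # assumed revenue earned per served request
POWER_COST_PER_REPLICA = 0.30                # assumed power/cooling cost per replica per interval


def best_replica_count(load: float, max_replicas: int = 8) -> int:
    """Pick the replica count that maximizes predicted revenue minus power cost."""
    def utility(k: int) -> float:
        predicted_throughput = float(model.predict([[k, load]])[0])
        served = min(predicted_throughput, load)
        return served * REVENUE_PER_REQ - k * POWER_COST_PER_REPLICA

    return max(range(1, max_replicas + 1), key=utility)


print(best_replica_count(load=500.0))
```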
107

Pressure-sensitive Pen Interactions

Ramos, Gonzalo 28 July 2008 (has links)
Pen-based computers bring the promise of tapping into people’s expressiveness with pen and paper and producing a platform that feels familiar while providing new functionalities only possible within an electronic medium. To this day, pen computers’ success is marginal because their interfaces mainly replicate keyboard and mouse ones. Maximizing the potential of pen computers requires redesigning their interfaces so that they are sensitive to the pen’s input modalities and expressiveness. In particular, pressure is an important and expressive, yet underutilized, pen input modality. This dissertation advances our knowledge about pressure-aware, pen-based interactions and how people use these techniques. We systematically explore their design by first investigating how pressure can affect pen interactions. We propose novel techniques that take advantage of the pressure modality of a pen to control, link, and annotate digital video. We then study people’s performance using pressure to navigate through a set of elements and find that they can discriminate a minimum of six different pressure regions. We introduce the concept of Pressure Widgets and suggest visual and interaction properties for their design. We later explore pressure’s use to enhance the adjustment of continuous parameters and propose Zliding, a technique in which users vary pressure to adjust the scale of the parameter space, while sliding their pen to perform parameter manipulations. We study Zliding and find it a viable technique, which is capable of enabling arbitrarily precise parameter adjustments. We finally present a novel interaction technique defined by the concurrent variation in pressure applied while dragging a pen. We study these pressure marks and find that they are a compact, orientation-independent, full interaction phrase that can be 30% faster than a state-of-the-art selection-action interaction phrase. This dissertation also makes a number of key contributions throughout the design and study of novel interaction techniques:
- It identifies important design issues for the development of pressure-sensitive, pen-operated widgets and interactions,
- It provides design guidelines for interaction techniques and interface elements utilizing pressure-enabled input devices,
- It presents empirical data on people’s ability to control pressure, and
- It charts a visual design space of pressure-sensitive, pen-based interactions.
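The six-region discriminability result suggests quantizing the pen's normalized pressure into discrete levels. The sketch below does so with a small hysteresis band so the active region does not flicker at boundaries; the hysteresis detail and thresholds are assumptions for illustration, not taken from the dissertation.

```python
class PressureRegionFilter:
    """Quantize normalized pen pressure in [0, 1] into a fixed number of discrete
    regions, with a small hysteresis band to avoid flicker near region boundaries."""

    def __init__(self, regions: int = 6, hysteresis: float = 0.02) -> None:
        self.regions = regions
        self.hysteresis = hysteresis
        self.current = 0

    def update(self, pressure: float) -> int:
        p = min(max(pressure, 0.0), 1.0)    # clamp noisy sensor readings
        width = 1.0 / self.regions
        candidate = min(int(p // width), self.regions - 1)
        if candidate != self.current:
            # Switch only once the reading clears the boundary by the hysteresis margin.
            boundary = max(candidate, self.current) * width
            if abs(p - boundary) > self.hysteresis:
                self.current = candidate
        return self.current


pw = PressureRegionFilter()
print([pw.update(p) for p in (0.10, 0.48, 0.505, 0.53, 0.49)])  # -> [0, 2, 2, 3, 3]
```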
108

Interaction with Volumetric Displays

Grossman, Tovi 19 January 2009 (has links)
For almost 50 years, researchers have been exploring the use of stereoscopic displays for visualizing and interacting with three-dimensional (3D) data. Unfortunately, a number of unfavorable qualitative properties have impeded the widespread adoption of traditional 3D displays. The volumetric display, a more recent class of 3D display to emerge, possesses unique features which potentially make it more suitable for integration into workplace, classroom, and even home environments. In this dissertation we investigate volumetric displays as an interactive platform for 3D applications. We identify the inherent affordances unique to volumetric displays, such as their true 3D display volume, 360° viewing angle, and enclosing surface. Identifying these properties exposes human factors issues which we investigate and interaction issues which we address. First, we evaluate the user’s ability to perceive imagery displayed by a volumetric display. In a formal experiment, we show that depth perception can be improved in comparison to more traditional platforms. We then perform an experiment which evaluates users’ ability to read text under 3D rotations, and present a new algorithm which optimizes text rotation when viewed by multiple users. Next, we investigate the user’s ability to select 3D imagery within the display. Results show that the dimension defining the depth of the object can constrain user performance as much as or more than the other two dimensions of the target. This leads us to explore alternative methods of selection which are less constraining to the user. We define a suite of new selection techniques, of which several are found to have significant benefits in comparison to techniques traditionally used in 3D user interfaces. Next, we describe our development of the first working interactive application where a volumetric display is the sole device for input and display. The application presents a first glance at what the equivalent of today’s graphical user interface might be on a volumetric display. We then develop a prototype application which allows multiple users to simultaneously interact with the volumetric display. We discuss and address the core issues related to providing such a collaborative user interface, and report feedback obtained from usage sessions and expert interviews.
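One way to picture the multi-viewer text-rotation problem is as a minimax choice over orientations: pick the angle that minimizes the worst-case angular offset to any viewer standing around the 360° display. The brute-force criterion below is an illustrative assumption, not the dissertation's algorithm.

```python
import math
from typing import List


def angular_distance(a: float, b: float) -> float:
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)


def best_text_rotation(viewer_angles: List[float], step: float = 1.0) -> float:
    """Choose a text orientation (degrees around the display) minimizing the
    worst-case angular offset to any viewer (brute force over candidate angles)."""
    candidates = [i * step for i in range(int(360.0 / step))]
    return min(candidates,
               key=lambda c: max(angular_distance(c, v) for v in viewer_angles))


# Two viewers at 0° and 90° around the display: text facing 45° splits the difference.
print(best_text_rotation([0.0, 90.0]))
```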
109

Acquiring and Reasoning about Variability in Goal Models

Liaskos, Sotirios 19 January 2009 (has links)
One of the most essential parts of any software requirements analysis effort is the exploration of alternative ways by which stakeholder problems can be solved. Systematic modeling and analysis of requirements variability allows better decision making during the early requirements phase and substantiates design choices pertaining to the configurability aspect of the system-to-be. This thesis proposes the use of goal models for capturing and reasoning about requirements variability. The goal models we adopt consist of AND/OR decompositions of stakeholder goals and express alternative ways by which stakeholders may wish to achieve them. By capturing goal variability using such models, we propose a shift of focus from variability of the software design to variability of the problem that the design is intended to solve. This way, we ensure that every important variation of the problem is identified and analyzed before variations of the solution are specified. The thesis exploits opportunities that arise from this new viewpoint. Firstly, a variability-intensive goal decomposition process is proposed. The process is based on associating each high-level goal to a set of variability concerns that must be addressed through decomposition. We introduce a universal categorization of such concerns and also show how domain-specific variability concerns can be identified by annotating domain corpora. Concern-driven decomposition offers a structured way of thinking about problem variability, while systematizing its identification process. Further, an expressive LTL-based preference language is introduced to support leveraging large spaces of goal alternatives. The language allows the expression of preferences over behavioral and qualitative properties of solutions and a reasoning tool allows the identification of alternatives that satisfy these preferences. This way, individual stakeholders can get the solution that exactly fits their needs in a particular situation, through simply specifying desired high-level characteristics of these solutions. Finally, a framework for connecting alternatives at the goal level to alternative configurations of common desktop applications is presented. The framework shows how a vast number of configurations of a software application can be evaluated and ranked with respect to a small number of quality goals that are more intuitive to and comprehensible by end users.
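The AND/OR structure of such goal models can be illustrated with a small enumerator that lists the sets of leaf tasks achieving a root goal. The representation and the meeting-scheduling example are hypothetical, and the thesis's reasoning additionally filters these alternatives against LTL preferences, which is not modeled here.

```python
from typing import Dict, List, Set, Tuple

# A toy AND/OR goal model: each decomposed goal maps to ("AND" | "OR", [subgoals]);
# names absent from the model are leaf tasks.
GoalModel = Dict[str, Tuple[str, List[str]]]


def alternatives(goal: str, model: GoalModel) -> List[Set[str]]:
    """Enumerate the sets of leaf tasks that achieve `goal` under AND/OR decomposition."""
    if goal not in model:                   # leaf task: achieved only by performing it
        return [{goal}]
    kind, children = model[goal]
    if kind == "OR":                        # any child alternative achieves the goal
        return [alt for child in children for alt in alternatives(child, model)]
    # AND: combine one alternative from every child (cartesian product).
    combos: List[Set[str]] = [set()]
    for child in children:
        combos = [c | alt for c in combos for alt in alternatives(child, model)]
    return combos


# Hypothetical example: schedule a meeting by collecting timetables by email or by phone.
model: GoalModel = {
    "ScheduleMeeting": ("AND", ["CollectTimetables", "BookRoom"]),
    "CollectTimetables": ("OR", ["ByEmail", "ByPhone"]),
}
print(alternatives("ScheduleMeeting", model))
```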
110

Learning Probabilistic Models for Visual Motion

Ross, David A. 26 February 2009 (has links)
A fundamental goal of computer vision is the ability to analyze motion. This can range from the simple task of locating or tracking a single rigid object as it moves across an image plane, to recovering the full pose parameters of a collection of nonrigid objects interacting in a scene. The current state of computer vision research, as with the preponderance of challenges that comprise "artificial intelligence", is that the abilities of humans can only be matched in very narrow domains by carefully and specifically engineered systems. The key to broadening the applicability of these successful systems is to imbue them with the flexibility to handle new inputs, and to adapt automatically without the manual intervention of human engineers. In this research we attempt to address this challenge by proposing solutions to motion analysis tasks that are based on machine learning. We begin by addressing the challenge of tracking a rigid object in video, presenting two complementary approaches. First we explore the problem of learning a particular choice of appearance model---principal components analysis (PCA)---from a very limited set of training data. However, PCA is far from the only appearance model available. This raises the question: given a new tracking task, how should one select the most-appropriate models of appearance and dynamics? Our second approach proposes a data-driven solution to this problem, allowing the choice of models, along with their parameters, to be learned from a labelled video sequence. Next we consider motion analysis at a higher-level of organization. Given a set of trajectories obtained by tracking various feature points, how can we discover the underlying non-rigid structure of the object or objects? We propose a solution that models the observed sequence in terms of probabilistic "stick figures", under the assumption that the relative joint angles between sticks can change over time, but their lengths and connectivities are fixed. We demonstrate the ability to recover the invariant structure and the pose of articulated objects from a number of challenging datasets.
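The PCA appearance-model idea can be sketched as follows: learn a low-dimensional subspace from vectorized image patches and score candidate windows by their reconstruction error, with the tracker preferring the lowest-error candidate. This batch sketch is only illustrative and does not address the limited-data aspect the thesis focuses on.

```python
import numpy as np


def fit_pca(patches: np.ndarray, k: int = 8):
    """Fit a k-dimensional PCA appearance model from vectorized image patches
    (one patch per row). Returns the mean patch and the top-k principal directions."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Economy SVD: rows of vt are principal directions in pixel space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]


def reconstruction_error(patch: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> float:
    """Distance of a candidate patch from the learned appearance subspace."""
    centered = patch - mean
    projection = basis.T @ (basis @ centered)
    return float(np.linalg.norm(centered - projection))


# Toy usage with random 20x20 patches flattened to 400-dimensional vectors.
rng = np.random.default_rng(0)
training = rng.normal(size=(30, 400))
mean, basis = fit_pca(training, k=5)
print(reconstruction_error(rng.normal(size=400), mean, basis))
```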
