31

Improving Dependability for Internet-scale Services

Gill, Phillipa, 11 December 2012
The past 20 years have seen the Internet evolve from a network connecting academics into a critical part of our daily lives. The Internet now supports extremely popular services, such as online social networks and user-generated content, in addition to critical services such as electronic medical records and power grid control. With so many users depending on the Internet, ensuring that data is delivered dependably is paramount. That dependability is threatened by the dual challenges of ensuring that (1) data in transit cannot be intercepted or dropped by a malicious entity and (2) services are not impacted by the unreliability of network components. This thesis takes an end-to-end approach, addressing these challenges at both the core and the edge of the network, and makes two contributions.

A strategy for securing the Internet's routing system. First, we consider the challenge of improving the security of interdomain routing. In the core of the network, a key difficulty is enticing multiple competing organizations to agree on, and adopt, new protocols. To address this, we present a three-step strategy that creates economic incentives for deploying a secure routing protocol (S*BGP). The cornerstone of the strategy is S*BGP's impact on network traffic, which we harness to drive revenue-generating traffic toward ISPs that deploy S*BGP, thus creating incentives for deployment.

An empirical study of data center network reliability. Second, we consider the complementary challenge of improving network reliability in data centers hosting popular content. The scale at which these networks are deployed makes building reliable networks difficult; however, because they are administered by a single organization, they also offer an opportunity to innovate. We take a first step toward designing more reliable network infrastructure by characterizing failures in a data center network comprising tens of data centers and thousands of devices.

Through dialogue with relevant stakeholders on the Internet (e.g., standardization bodies and large content providers), these contributions have had real-world impact, including the creation of an FCC working group and improved root cause analysis in a large content provider network.
32

Transitioning to Agile: A Framework for Pre-adoption Analysis using Empirical Knowledge and Strategic Modeling

Chiniforooshan Esfahani, Hesam, 11 December 2012
Transitioning to the Agile style of software development has become an increasingly common phenomenon among software companies. The commonly perceived advantages of Agile, such as shortened time to market, improved efficiency, and reduced development waste, are among the key motivations driving organizations toward it. Each year a considerable number of empirical studies are published, reporting on successful or unfavorable outcomes of enacting Agile in various organizations. Reusing this body of knowledge, and turning it into a concise and accessible source of information on Agile practices, can help the many software organizations on the verge of transitioning to Agile deal with the uncertainties of such a decision. One of the early steps of transitioning to Agile (or any other process model) is to confirm that the new process fits the organization. Various Agile adoption frameworks have proposed different checklists to test the readiness of an organization for becoming Agile, or to identify the required adaptation criteria. Transitioning to Agile, as a significant organizational initiative, is a strategic decision, which should be made with respect to the key objectives of the target organization. A reliable anticipation of how a new process model will affect strategic objectives helps managers choose the process model that brings the greatest advantage to the organization. This thesis introduces a framework for evaluating new Agile practices (components of Agile methods) prior to their adoption in an organization. The framework has two distinguishing characteristics: first, it acts strategically, placing the strategic model of the organization at the center of the many decisions that must be made during Agile adoption; and second, it is based on a repository of Agile practices that lets the framework draw on empirical knowledge of Agile methods to improve the reliability of its outcomes. This repository was populated through an extensive literature review of empirical studies on Agile methods. The framework was put into practice in an industrial case at one of Ericsson's R&D units in Italy. The target R&D unit was presented with a number of proposed Agile practices, and applying the framework helped the unit's managers decide strategically on the new process proposal by giving them a better understanding of its strategic shortcomings and strengths. A key portion of the framework's analysis results was evaluated one year after the R&D unit transitioned to Agile, showing that over 75% of the pre-adoption predictions held after the new process was enacted.
33

PScout: Analyzing the Android Permission Specification

Au, Kathy Wain Yee, 18 March 2013
Modern smartphone operating systems (OSs) have been developed with a greater emphasis on security and protecting privacy. One of the security mechanisms these systems use is a permission system. We perform an analysis of the Android permission system in an attempt to begin answering some of the questions that have arisen about its design and implementation. We developed PScout, a tool that extracts the permission specification from the Android OS source code using static analysis, and analyzed five versions of Android, spanning version 2.2 up to the recently released 4.1. Our main findings are that while there is little redundancy in the permission specification, if applications could be constrained to use only documented APIs, then about 18-26% of the non-system permissions could be hidden. Finally, we find that a trade-off exists between enabling least-privilege security with fine-grained permissions and maintaining the stability of the permission specification as the Android OS evolves.
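To make the "hideable permissions" idea concrete, here is a minimal sketch of the kind of set computation involved, assuming a hypothetical API-to-permission mapping of the sort a tool like PScout extracts; the API names, permissions, and numbers below are invented for illustration and are not taken from the thesis:

```python
# Sketch only: given an extracted API -> permission mapping, find permissions
# that are reachable solely through undocumented APIs (and so could be hidden
# if apps were restricted to documented APIs). All data here is hypothetical.
PERMISSION_MAP = {
    "TelephonyManager.getDeviceId":         {"READ_PHONE_STATE"},
    "LocationManager.getLastKnownLocation": {"ACCESS_FINE_LOCATION"},
    "WifiManager.setWifiEnabled":           {"CHANGE_WIFI_STATE"},
    "InternalService.secretCall":           {"HIDDEN_PERMISSION"},  # undocumented API
}
DOCUMENTED_APIS = {
    "TelephonyManager.getDeviceId",
    "LocationManager.getLastKnownLocation",
    "WifiManager.setWifiEnabled",
}

all_perms = set().union(*PERMISSION_MAP.values())
documented_perms = set().union(
    *(perms for api, perms in PERMISSION_MAP.items() if api in DOCUMENTED_APIS)
)
hideable = all_perms - documented_perms  # only reachable via undocumented APIs
print(f"{len(hideable)}/{len(all_perms)} permissions could be hidden: {hideable}")
```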
34

Feature-based Control of Physics-based Character Animation

de Lasa, Martin, 31 August 2011
Creating controllers for physics-based characters is a long-standing open problem in animation and robotics. Such controllers would have numerous applications while potentially yielding insight into human motion. Creating them remains difficult: current approaches are either constrained to track motion capture data, are not robust, or provide limited control over style. This thesis presents an approach to controlling physics-based characters based on high-level features of human movement, such as center of mass, angular momentum, and end-effector motion. Objective terms are used to control each feature and are combined via optimization. We show how locomotion can be expressed in terms of a small number of features that control balance and end-effectors. This approach is used to build controllers for biped balancing, jumping, walking, and jogging. These controllers provide numerous benefits: human-like qualities such as arm swing, heel-off, and hip-shoulder counter-rotation emerge automatically during walking; controllers are robust to changes in body parameters; control parameters correspond to intuitive properties; and controllers may be mapped onto entirely new bipeds with different topology and mass distribution, without modification. Transitions between multiple types of gaits, including walking, jumping, and jogging, emerge automatically. Controllers can traverse challenging terrain while following high-level user commands at interactive rates. The approach uses no motion capture or off-line optimization. Although we focus on the challenging case of bipedal locomotion, many other types of controllers stand to benefit from our approach.
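As a rough illustration of combining per-feature objective terms via optimization, the sketch below solves a weighted least-squares stand-in over generalized accelerations; the Jacobians, targets, and weights are invented, and the thesis's actual formulation is richer than this simple weighted sum:

```python
import numpy as np

# Toy dimensions: 10 generalized accelerations, hypothetical task Jacobians.
rng = np.random.default_rng(0)
n = 10
tasks = {  # feature name -> (Jacobian J, desired feature acceleration, weight)
    "center_of_mass":   (rng.standard_normal((3, n)), np.array([0.0, 0.0, -1.0]), 10.0),
    "angular_momentum": (rng.standard_normal((3, n)), np.zeros(3),                 5.0),
    "end_effector":     (rng.standard_normal((3, n)), np.array([1.0, 0.0, 0.0]),   1.0),
}

# Stack sqrt(w) * (J qdd - xdd_desired) into one least-squares problem and
# solve for the generalized accelerations qdd that balance all objectives.
A = np.vstack([np.sqrt(w) * J for J, _, w in tasks.values()])
b = np.concatenate([np.sqrt(w) * x for _, x, w in tasks.values()])
qdd, *_ = np.linalg.lstsq(A, b, rcond=None)
print("generalized accelerations:", np.round(qdd, 3))
```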
35

Expressive Motion Editing Using Motion Extrema

Coleman, Patrick, 21 August 2012
When animating characters, a key goal is the creation of a believable, expressive performance that gives a character a unique personality with a distinct style of movement. While animators are skilled at creating expressive, personalized performances, it remains challenging to change performance-related aspects of movement in existing motion data. In recent years, motion data reuse has become increasingly important as recorded motion capture data has come into widespread use. This thesis investigates the use of a sparse set of pose-centric editing controls for editing existing motion data, using techniques similar to those keyframe animators use when creating new motion. To do this, the thesis proposes the use of motion extrema, the poses a character passes through when there is a significant change in movement, as a means for choosing effective pose-centric editing controls. First, I present algorithms for identifying motion extrema, which can be associated with individual joints or with the full body of the character; these approaches include detecting extrema of differential measures and explicitly searching for times at which the body or a joint is in a spatially extreme configuration. I then present three motion editing applications that use motion extrema as a foundation for applying edits. The first, pose-centric editing, allows users to interactively change poses in a motion while the system modifies the motion to respect existing ground contact. The second, staggered poses, introduces a model of character pose that explicitly encodes how timing varies among motion extrema on different parts of the body; this timing variation is commonly used by animators to model overlapping action. By introducing an algorithm for finding timing variation among motion extrema in existing motion, this system enables users to make high-level changes to timing patterns to alter overlap effects. Finally, I present a procedural motion editing application, called spatial exaggeration, that targets a specific aspect of motion style by changing the geometric relationships among extreme poses; such edits cause movement to appear more or less energetic. Together, these applications demonstrate that performance-related aspects of existing motion can be edited using a sparse set of controls in the form of motion extrema.
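For intuition, here is a minimal sketch of one extremum detector in the spirit described above, flagging frames where a joint's speed reaches a local minimum (a simple extremum of a differential measure); the trajectory and sampling rate are hypothetical, not from the thesis:

```python
import numpy as np

def motion_extrema(trajectory, dt=1.0 / 30.0):
    """Return frame indices where a joint trajectory's speed has a local
    minimum, one simple differential-measure extremum detector.
    `trajectory` is a (frames, 3) array of positions."""
    vel = np.gradient(trajectory, dt, axis=0)      # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)
    # Local minima of speed: slower than the previous and next frames.
    interior = (speed[1:-1] < speed[:-2]) & (speed[1:-1] <= speed[2:])
    return np.flatnonzero(interior) + 1

# Hypothetical wrist trajectory: an oscillating arc sampled at 30 fps.
t = np.linspace(0, 2 * np.pi, 120)
wrist = np.stack([np.cos(t), np.sin(2 * t), np.zeros_like(t)], axis=1)
print("extrema frames:", motion_extrema(wrist))
```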
36

Mechanism Design with Partial Revelation

Hyafil, Nathanael, 28 July 2008
With the emergence of the Internet as a global structure for communication and interaction, many “business to consumer” and “business to business” applications have migrated online, increasing the need for software agents that can act on behalf of people, institutions, or companies with private and often conflicting interests. The design of such agents, and of the protocols (i.e., mechanisms) through which they interact, has therefore naturally become an important research theme. Classical mechanism design techniques from the economics literature do not account for the costs entailed by the full revelation of preferences that they require. The aim of this thesis is to investigate how to design mechanisms that require the revelation of only partial preference information and are applicable in any mechanism design context. We call this partial revelation mechanism design; reducing revelation costs is our main concern. With only partial revelation, the designer retains some uncertainty over the agents’ types even after the mechanism has been executed, so in general the outcome chosen will not be optimal with respect to the designer’s objective function. This alone raises interesting questions about which part of the information should be elicited in order to minimize the degree of sub-optimality incurred by the mechanism. But this sub-optimality of the mechanism’s outcome choice function has a further important consequence: most results in classical mechanism design that guarantee agents will reveal their types truthfully rely on the fact that the optimal outcome is chosen. We must therefore also investigate whether, and how, appropriate incentives can be maintained in partial revelation mechanisms. We start by presenting our model of partial revelation mechanism design. Our second contribution is a negative one regarding the quasi-impossibility of implementing partial revelation mechanisms with exact incentive properties. The rest of the thesis shows how this negative result can be bypassed in various settings, depending on the designer's objective (e.g., social welfare or revenue) and the interaction type (sequential or one-shot). Finally, we study how the approximation of the incentive properties can be further improved when necessary and, in the process, introduce and prove the existence of a new equilibrium concept.
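As a hedged sketch of the setting (the notation is mine, and worst-case regret is one natural reading of "degree of sub-optimality", not necessarily the thesis's exact criterion): each agent reports only a cell P_i of a partition of its type space, and the mechanism picks the outcome minimizing the worst-case loss over the residual type uncertainty.

```latex
% Sketch only (requires amsmath): v_i is agent i's valuation, X the outcome
% space, and each report P_i is the set of types consistent with agent i's
% partial revelation.
\[
  x^*(P_1,\dots,P_n) \;\in\; \operatorname*{arg\,min}_{x \in X}\;
  \max_{\theta \in P_1 \times \cdots \times P_n}
  \Bigl( \max_{x' \in X} \sum_i v_i(x',\theta_i) \;-\; \sum_i v_i(x,\theta_i) \Bigr)
\]
```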
37

Algorithms in 3D Shape Segmentation

Simari, Patricio Dario, 23 February 2010
Surfaces in 3D are often represented by polygonal meshes and point clouds obtained from 3D modeling tools or acquisition processes such as laser range scanning. While these formats are very flexible and allow the representation of a wide variety of shapes, they are rarely appropriate in their raw form for the range of applications that benefit from their use. Their decomposition into simpler constituent parts is referred to as shape segmentation, and its automation remains a challenging area within computer science. We present and analyze different aspects of shape segmentation. We begin by looking at useful segmentation criteria and present a categorization of current methods according to the type of criteria they address, dividing them into affinity-based, model-fitting, and property-based approaches. We then present two algorithmic contributions: a model-based and a property-based segmentation approach. Both aim to automatically find redundancy in a shape and propose shape representations that leverage this redundancy to achieve descriptive compactness. The first is a method for segmenting a surface into piecewise-ellipsoidal parts, motivated by the fact that most organic objects and many manufactured objects have large curved areas. The second is an algorithm for robustly detecting global and local planar-reflective symmetry, together with a hierarchical segmentation approach based on this detection method. Noting how segmentations vary under different criteria, we propose a way to generalize the segmentation problem to heterogeneous criteria. We introduce a framework and relevant algorithms for multi-objective segmentation of 3D shapes, which allow domain-specific knowledge to be incorporated through multiple objectives, each referring to one or more segmentation labels; objectives can assert properties of an individual part or refer to part interrelations. We thus cast segmentation as an optimization minimizing an aggregate objective function that combines all objectives as a weighted sum, as written out below. We conclude with a summary and discussion of the contributions presented, lessons learned, and a look at the remaining open questions and potential avenues of continued research.
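In symbols, the weighted-sum formulation mentioned above has the following shape, where L assigns a segment label to each face or point and each E_k scores one criterion; the notation is mine, added only as a reading aid:

```latex
\[
  L^* \;\in\; \operatorname*{arg\,min}_{L}\; E(L), \qquad
  E(L) \;=\; \sum_{k} w_k \, E_k(L)
\]
```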
38

On Generalizations of Gowers Norms

Hatami, Hamed, 1 March 2010
Inspired by the definition of Gowers norms, we study integrals of products of multi-variate functions. The $L_p$ norms, certain trace norms, and the Gowers norms are all defined by taking the proper root of one of these integrals. These integrals are important from a combinatorial point of view, as inequalities between them are useful in understanding the relation between various subgraph densities. Lovász asked the following questions: (1) Which integrals correspond to norm functions? (2) What are the common properties of the corresponding normed spaces? We address these two questions. We show that such a formula is a norm if and only if it satisfies a Hölder-type inequality. This condition turns out to be very useful: first we apply it to prove various necessary conditions on the structure of the integrals that correspond to norm functions. We also apply the condition to an important conjecture of Erdős, Simonovits, and Sidorenko. Roughly speaking, the conjecture says that among all graphs with the same edge density, random graphs contain the fewest copies of every bipartite graph. It had previously been verified for trees, the 3-dimensional cube, and a few other families of bipartite graphs. The special case of the conjecture for paths, one of the simplest families of bipartite graphs, is equivalent to the Blakley-Roy theorem in linear algebra. Our results verify the conjecture for certain graphs including all hypercubes, one of the important classes of bipartite graphs, and thus generalize a result of Erdős and Simonovits; in fact, for hypercubes we can prove statements that are surprisingly stronger than the assertion of the conjecture. To address the second question of Lovász, we study these normed spaces from a geometric point of view and determine their moduli of smoothness and convexity, two of the most important invariants in Banach space theory. Our result in particular determines the moduli of smoothness and convexity of the Gowers norms. In some cases we are able to prove the Hanner inequality, one of the strongest inequalities related to smoothness and convexity. We also prove a complex interpolation theorem for these normed spaces, and use it together with the Hanner inequality to obtain various optimal results in terms of the constants involved in the definition of the moduli of smoothness and convexity.
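For orientation, the integrals in question can be sketched as follows (the notation is mine): to a graph or hypergraph H with edge set E(H) one associates the functional below, and the question is when it defines a norm; for two-variable functions, the 4-cycle C_4 gives the Gowers U^2-type norm.

```latex
% Sketch: x_e denotes the coordinates of x indexed by the vertices of edge e.
\[
  \|f\|_{H} \;=\; \Bigl| \int \prod_{e \in E(H)} f(x_e)\, d\mu \Bigr|^{1/|E(H)|},
  \qquad \text{e.g.} \quad
  \|f\|_{C_4}^{4} \;=\; \int f(x_1,y_1)\,f(x_1,y_2)\,f(x_2,y_1)\,f(x_2,y_2)\;
  dx_1\,dx_2\,dy_1\,dy_2 .
\]
```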
39

Queries, Data, and Statistics: Pick Two

Mishra, Chaitanya, 21 April 2010
The query processor of a relational database system executes declarative queries on relational data using query evaluation plans. The cost of a query evaluation plan depends on various statistics determined by the query and the data, including intermediate and base table sizes and data distributions on columns. In addition to being an important factor in query optimization, such statistics also influence various runtime properties of the query evaluation plan. This thesis explores the interactions between queries, data, and statistics in the query processor of a relational database system. Specifically, we consider problems where any two of the three (queries, data, and statistics) are provided, with the objective of instantiating the missing element of the triple such that the query, when executed on the data, satisfies the statistics on the associated subexpressions. We present multiple query processing problems that can be abstractly formulated in this manner. The first contribution of this thesis is a monitoring framework for collecting and estimating statistics during query execution. We apply this framework to the problems of monitoring the progress of query execution and adaptively reoptimizing query execution plans. The framework has low overhead while significantly reducing query execution times, demonstrating the feasibility and utility of overlaying statistics estimators on query evaluation plans. Our next contribution is a framework for testing the performance of a query processor by generating targeted test queries and databases. We present techniques for data-aware query generation and query-aware data generation that satisfy test cases specifying statistical constraints. We formally analyze the hardness of the problems considered and present systems that support best-effort semantics for targeted query and data generation. The final contribution of this thesis is a set of techniques for designing queries for business intelligence applications that specify cardinality constraints on the result. We present an interactive query refinement framework that explicitly incorporates user feedback into query design, refining queries that return too many or too few answers. Each of these contributions is accompanied by a formal analysis of the problem and a detailed experimental evaluation of an associated system.
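As a toy illustration of overlaying a statistics estimator on a running plan (my construction, not the thesis's system): a scan-filter operator can track its observed selectivity and refine its output-cardinality and progress estimates as it executes.

```python
# Sketch only: monitor a scan-filter pipeline, revising the output-size
# estimate from the selectivity observed so far rather than a fixed guess.
def run_with_progress(rows, predicate, estimated_input, report_every=1000):
    seen = emitted = 0
    for row in rows:
        seen += 1
        emitted += predicate(row)                   # True counts as 1
        if seen % report_every == 0:
            selectivity = emitted / seen            # observed, not assumed
            est_output = selectivity * estimated_input
            progress = seen / estimated_input
            print(f"{progress:5.1%} done, revised output estimate ~{est_output:.0f}")
    return emitted

matched = run_with_progress(range(10_000), lambda r: r % 7 == 0, estimated_input=10_000)
print("rows matched:", matched)
```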
40

Summarizing Spoken Documents Through Utterance Selection

Zhu, Xiaodan, 2 September 2010
The inherently linear and sequential nature of speech raises the need for better ways to navigate spoken documents. The navigation strategy I focus on in this thesis is summarization, which aims to identify important excerpts in spoken documents. A basic characteristic that distinguishes speech summarization from traditional text summarization is the availability and utilization of speech-related features. Most previous research, however, has approached these features from the perspective of descriptive linguistics, considering only the prosodic features that appear in that literature. The experiments in this dissertation suggest that incorporating prosody does help, but that its usefulness is very limited, much less than some previous research has suggested. We reassess the role of prosodic features versus features arising from speech recognition transcripts, as well as baseline selection, in error-prone and disfluency-filled spontaneous speech; these problems interact with each other, and isolated observations have hampered a comprehensive understanding to date. The effectiveness of prosodic features is limited largely by their difficulty in predicting content relevance and redundancy. Nevertheless, untranscribed audio contains more information than prosody alone. This dissertation shows that collecting statistics from far more complex acoustic patterns allows state-of-the-art summarization models to be estimated directly. To this end, we propose an acoustics-based summarization model estimated directly on acoustic patterns, and we empirically determine the extent to which it can replace ASR-based models. The extent to which written sources have benefited speech summarization has also been limited, namely to noisy speech recognition transcripts. Predicting the salience of utterances can benefit from more sources than raw audio alone: since speaking and writing are two basic, closely related modes of communication, speech is in many situations accompanied by relevant written text, whose richer semantics provide information beyond the speech itself. This thesis utilizes such information in content selection to help identify salient utterances in the corresponding spoken documents. We also employ this richer content to find the structure of spoken documents, i.e., subtopic boundaries, which may in turn help summarization.
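The relevance-versus-redundancy trade-off mentioned above is often handled by greedy maximal-marginal-relevance (MMR) selection; the sketch below is that standard baseline as an illustrative stand-in, not the thesis's acoustics-based model, and all data is invented:

```python
# Greedy MMR: pick utterances that score high on relevance but low on
# similarity to what has already been selected.
def mmr_select(utterances, relevance, similarity, k=2, lam=0.7):
    selected, candidates = [], list(range(len(utterances)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((similarity(i, j) for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return [utterances[i] for i in selected]

# Hypothetical toy data: word-overlap (Jaccard) similarity over three utterances.
utts = ["the budget passed today", "the budget vote passed", "weather was sunny"]
rel = [0.9, 0.8, 0.3]
def sim(i, j):
    a, b = set(utts[i].split()), set(utts[j].split())
    return len(a & b) / len(a | b)

print(mmr_select(utts, rel, sim))  # picks one budget utterance, then the other topic
```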
