The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

TCP Adaptation Framework in Data Centers

Ghobadi, Monia 09 January 2014
Congestion control has been extensively studied for many years. Today, the Transmission Control Protocol (TCP) is used in a wide range of networks (LAN, WAN, data center, campus network, enterprise network, etc.) as the de facto congestion control mechanism. Despite its common usage, TCP operates in these networks with little knowledge of the underlying network or traffic characteristics. As a result, it is doomed to continuously increase or decrease its congestion window size in order to handle changes in the network or traffic conditions. Thus, TCP frequently overshoots or undershoots the ideal rate, making it a "Jack of all trades, master of none" congestion control protocol. In light of the emerging popularity of centrally controlled Software-Defined Networks (SDNs), we ask whether we can take advantage of the information available at the central controller to improve TCP. Specifically, in this thesis, we examine the design and implementation of OpenTCP, a dynamic and programmable TCP adaptation framework for SDN-enabled data centers. OpenTCP gathers global information about the status of the network and traffic conditions through the SDN controller, and uses this information to adapt TCP. OpenTCP periodically sends updates to end-hosts, which, in turn, update their behaviour using a simple kernel module. In this thesis, we first present two real-world TCP adaptation experiments in depth: (1) using TCP pacing in inter-data center communications with shallow buffers, and (2) using Trickle to rate-limit TCP video streaming. We explain the design, implementation, limitations, and benefits of each TCP adaptation to highlight the potential power of having a TCP adaptation framework in today's networks. We then discuss the architectural design of OpenTCP, as well as its implementation and deployment at SciNet, Canada's largest supercomputer center. Furthermore, we study use cases of OpenTCP using the ns-2 network simulator. We conclude that OpenTCP-based congestion control simplifies the process of adapting TCP to network conditions, leads to improvements in TCP performance, and is practical in real-world settings.
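To make the control loop concrete, here is a minimal Python sketch of the kind of adaptation cycle the abstract describes: a controller aggregates network state, derives TCP tuning parameters, and pushes them to end-hosts. All names, fields, and thresholds below are illustrative assumptions, not OpenTCP's actual interface.

```python
# Hypothetical sketch of an OpenTCP-style adaptation loop; the fields
# and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class NetworkStats:
    avg_link_utilization: float   # 0.0-1.0, aggregated by the SDN controller
    shallow_buffers: bool         # e.g., an inter-data-center path with small switch buffers

def compute_tcp_policy(stats: NetworkStats) -> dict:
    """Map global network state to per-host TCP tuning knobs."""
    policy = {}
    if stats.shallow_buffers:
        # Pacing smooths bursts that would overflow shallow switch buffers.
        policy["pacing"] = True
    # Lightly loaded network: open the congestion window more aggressively.
    policy["init_cwnd"] = 16 if stats.avg_link_utilization < 0.3 else 10
    return policy

def broadcast_to_end_hosts(policy: dict) -> None:
    # In a real deployment, a kernel module on each host would apply this
    # update; here we just print it.
    print(f"congestion-control update -> {policy}")

if __name__ == "__main__":
    stats = NetworkStats(avg_link_utilization=0.2, shallow_buffers=True)
    broadcast_to_end_hosts(compute_tcp_policy(stats))
```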
52

Bone Graphs: Medial Abstraction for Shape Parsing and Object Recognition

Macrini, Diego 31 August 2010
The recognition of 3-D objects from their silhouettes demands a shape representation which is invariant to minor changes in viewpoint and articulation. This invariance can be achieved by parsing a silhouette into parts and relationships that are stable across similar object views. Medial descriptions, such as skeletons and shock graphs, attempt to decompose a shape into parts, but suffer from instabilities that lead to similar shapes being represented by dissimilar part sets. We propose a novel shape parsing approach based on identifying and regularizing the ligature structure of a given medial axis. The result of this process is a bone graph, a new medial shape abstraction that captures a more intuitive notion of an object's parts than a skeleton or a shock graph, and offers improved stability and within-class deformation invariance over the shock graph. The bone graph, unlike the shock graph, has attributed edges that specify how and where two medial parts meet. We propose a novel shape matching framework that exploits this relational information by formulating the problem as inexact directed acyclic graph matching, and by extending a leading bipartite graph-based matching framework introduced for matching shock graphs. In addition to accommodating the relational information, our new framework is better able to enforce hierarchical and sibling constraints between nodes, resulting in a more general and more powerful matching framework. We evaluate our matching framework against a competing shock graph matching framework, and show that for the task of view-based object categorization, our framework applied to bone graphs outperforms the competing framework. Moreover, our matching framework applied to shock graphs also outperforms the competing shock graph matching algorithm, demonstrating the generality and improved performance of our matching algorithm.
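As a rough illustration of the matching step, the sketch below casts node correspondence between two part graphs as an assignment problem over a node-similarity matrix. It is a deliberate simplification: the thesis's framework additionally enforces hierarchical and sibling constraints on directed acyclic graphs, which plain bipartite assignment does not capture. The similarity values are made up.

```python
# Bipartite node matching via the assignment problem (a simplified
# building block of graph-based shape matching).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_nodes(sim: np.ndarray):
    """Given sim[i, j], the similarity between part i of the query shape
    and part j of the model shape, return the one-to-one correspondence
    maximizing total similarity."""
    # linear_sum_assignment minimizes cost, so negate similarity.
    rows, cols = linear_sum_assignment(-sim)
    pairs = [(int(i), int(j)) for i, j in zip(rows, cols)]
    return pairs, sim[rows, cols].sum()

# Toy example: 3 parts in the query shape vs. 3 parts in the model shape.
sim = np.array([[0.9, 0.1, 0.2],
                [0.2, 0.8, 0.1],
                [0.1, 0.3, 0.7]])
pairs, score = match_nodes(sim)
print(pairs, round(score, 2))  # [(0, 0), (1, 1), (2, 2)] 2.4
```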
53

Effective Heuristic-based Test Generation Techniques for Concurrent Software

Razavi, Niloofar 22 August 2014
With the increasing dependency on software systems, we require them to be reliable and correct. Software testing is the predominant approach in industry for finding software errors. There have been great advances in testing sequential programs throughout the past decades. Several techniques have been introduced with the aim of automatically generating input values such that the executions of the program with those inputs provide meaningful coverage guarantees for the program. Today, multi-threaded (concurrent) programs are becoming pervasive in the era of multiprocessor systems. The behaviour of a concurrent program depends not only on the input values but also on the way the executions of threads are interleaved. Testing concurrent programs is notoriously hard because there is often an exponentially large number of thread interleavings that has to be explored. In this thesis, we propose an array of heuristic-based testing techniques for concurrent programs to prioritize a subset of interleavings and test as many of them as possible. To that end, we develop: (A) a sound and scalable technique that, based on the events of an observed execution, predicts runs that might contain null-pointer dereferences. This technique explores the interleaving space (based on the observed execution) while keeping the input values fixed, and can be adapted to predict other types of bugs. (B) a test generation technique that uses a set of program executions as a program under-approximation to explore both the input and interleaving spaces. This technique generates tests that increase branch coverage in concurrent programs based on their approximation models. (C) a new heuristic, called bounded-interference, for input/interleaving exploration. It is defined based on the notion of data flow between threads and is parameterized by the number of interferences among threads. Testing techniques that employ this heuristic are able to provide coverage guarantees for concurrent programs (modulo the interference bound). (D) a testing technique that adapts sequential concolic testing to concurrent programs by incorporating the bounded-interference heuristic. The technique provides branch coverage guarantees for concurrent programs. Based on the above techniques, we have developed tools and used them to successfully find bugs in several traditional concurrency benchmarks.
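The following toy sketch conveys the flavor of bounded exploration: it enumerates interleavings of two threads' event lists while budgeting context switches, a crude stand-in for the thesis's bound on inter-thread data-flow interferences. The event names and the switch-counting proxy are illustrative assumptions.

```python
# Enumerate interleavings of two threads, pruning schedules that exceed
# a context-switch budget (a simplified proxy for bounded interference).
def interleavings(a, b, bound, last=None):
    """Yield all interleavings of event lists a and b that use at most
    `bound` context switches between the two threads."""
    if not a and not b:
        yield []
        return
    if a:
        cost = 1 if last == "b" else 0  # switching back to thread A
        if bound - cost >= 0:
            for rest in interleavings(a[1:], b, bound - cost, "a"):
                yield [a[0]] + rest
    if b:
        cost = 1 if last == "a" else 0  # switching over to thread B
        if bound - cost >= 0:
            for rest in interleavings(a, b[1:], bound - cost, "b"):
                yield [b[0]] + rest

t1, t2 = ["a1", "a2"], ["b1", "b2"]
for k in (1, 2, 3):
    print(k, sum(1 for _ in interleavings(t1, t2, k)))
# Prints 2, 4, 6 schedules: raising the budget admits more of the
# schedule space, so low budgets prioritize a small, cheap subset.
```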
54

Generation and Verification of Plans with Loops

Hu, Yuxiao 22 August 2012
This thesis studies planning problems whose solution plans are program-like structures that contain branches and loops. Such problems are a generalization of classical and conditional planning, and usually involve infinitely many cases to be handled by a single plan. This form of planning is useful in a number of applications, but at the same time challenging to analyze and solve. As a result, it is drawing increasing interest in the AI community. In this thesis, I will give a formal definition of planning with loops in the situation calculus framework, and propose a corresponding plan representation in the form of finite-state automata. It turns out that this definition is more general than a previous formalization that uses restricted programming structures for plans. For the verification of plans with loops, we study a property of planning problems called finite verifiability. Such problems have the property that, for any candidate plan, only a finite number of cases need to be checked in order to conclude whether the plan is correct for all the infinitely many cases. I will identify several forms of finitely-verifiable classes of planning problems, including the so-called one-dimensional problems, where an unknown and unbounded number of objects need independent processing. I will also show that this property is not universal, in that finite verifiability of less restricted problems would mean a solution to the Halting problem or an unresolved mathematical conjecture. For the generation of plans with loops, I will present a novel nondeterministic algorithm which essentially searches in the space of the AND/OR execution trees of an incremental partial plan on a finite set of example instances of the planning problem. Two different implementations of the algorithm are explored for search efficiency, namely heuristic search and randomized search with restarts. In both cases, I will show that the resulting planner generates compact plans for a dozen benchmark problems, some of which are not solved by other existing approaches, to the best of our knowledge. Finally, I will present generalizations and applications of the framework proposed in this thesis that enable the analysis and solution of related planning problems recently proposed in the literature, namely controller synthesis, service composition, and planning programs. Notably, the latter two require possibly non-terminating execution in a dynamic environment to provide services to incoming requests. I will show a generic definition and planner whose instantiations accommodate and solve all three example applications. Interestingly, the instantiations are competitive with, and sometimes even outperform, the original tailored approaches proposed in the literature.
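To ground the plan representation, here is a minimal sketch of a finite-state controller executed against a toy "one-dimensional" problem with an unbounded number of objects. The domain, observations, and transition table are invented for illustration.

```python
# A plan represented as a finite-state automaton: in each state, the
# sensed observation selects an action and a successor state. This
# two-state plan empties a bag of items of unknown size, so the same
# finite plan is correct for every instance size.
PLAN = {
    ("q0", "item_left"): ("process", "q0"),   # loop while items remain
    ("q0", "empty"):     ("stop",    None),   # terminate
}

def execute(plan, world):
    state = "q0"
    while state is not None:
        obs = "item_left" if world["items"] > 0 else "empty"
        action, state = plan[(state, obs)]
        if action == "process":
            world["items"] -= 1
        print(action)

execute(PLAN, {"items": 3})  # process, process, process, stop
```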
55

Tree Spanners of Simple Graphs

Papoutsakis, Ioannis 09 August 2013
A tree $t$-spanner $T$ of a simple graph $G$ is a spanning tree of $G$ such that, for every pair of vertices of $G$, their distance in $T$ is at most $t$ times their distance in $G$, where $t$ is called a stretch factor of $T$ in $G$. It has been shown that there is a linear-time algorithm to find a tree 2-spanner in a graph; it has also been proved that, for each $t>3$, determining whether a graph admits a tree $t$-spanner is an NP-complete problem. This thesis studies tree $t$-spanners from both theoretical and algorithmic perspectives. In particular, it is proved that a nontree graph admits a unique tree $t$-spanner for at most one value of stretch factor $t$. As a corollary, a nontree bipartite graph cannot admit a unique tree $t$-spanner for any $t$. But, for each $t$, there are infinitely many nontree graphs that admit exactly one tree $t$-spanner. Furthermore, for each $t$, let $U(t)$ be the set of graphs that are the union of two tree $t$-spanners of a graph. Although graphs in $U(2)$ do not have cycles of length greater than 4, graphs in $U(3)$ may contain cycles of arbitrary length. It turns out that any even cycle is an induced subgraph of a graph in $U(3)$, while no graph in $U(3)$ contains an induced odd cycle other than a triangle; graphs in $U(3)$ are shown to be perfect. Also, properties of induced even cycles of graphs in $U(3)$ are presented. For each $t>3$, though, graphs in $U(t)$ may contain induced odd cycles of any length. Moreover, there is an efficient algorithm to recognize graphs that admit a tree 3-spanner of diameter at most 4, while it is proved that, for each $t>3$, determining whether a graph admits a tree $t$-spanner of diameter at most $t+1$ is an NP-complete problem. It is not known whether it is hard to recognize graphs that admit a tree 3-spanner of general diameter; however, integer programming is employed to provide certificates of tree 3-spanner inadmissibility for a family of graphs.
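For concreteness, the brute-force check below verifies the defining property of a tree $t$-spanner directly from the definition. It assumes the networkx library and is only a verification sketch for small graphs, not one of the thesis's recognition algorithms.

```python
# Verify whether a spanning tree T is a tree t-spanner of G by comparing
# all pairwise distances (exhaustive; fine for small examples).
import networkx as nx

def is_tree_t_spanner(G: nx.Graph, T: nx.Graph, t: float) -> bool:
    dG = dict(nx.all_pairs_shortest_path_length(G))
    dT = dict(nx.all_pairs_shortest_path_length(T))
    return all(dT[u][v] <= t * dG[u][v]
               for u in G for v in G if u != v)

# Example: in the 4-cycle, removing one edge leaves a spanning path.
G = nx.cycle_graph(4)
T = nx.path_graph(4)               # spanning tree 0-1-2-3 of the cycle
print(is_tree_t_spanner(G, T, 3))  # True: vertices 0 and 3 are at
                                   # distance 3 in T vs. 1 in G
print(is_tree_t_spanner(G, T, 2))  # False: the stretch of that pair is 3
```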
56

Customizable Services for Application-layer Overlay Networks

Zhao, Yu 17 July 2013
Application-layer overlay networks have emerged as a powerful paradigm for providing network services. While most approaches focus on providing a pre-defined set of network services, we provide a mechanism for network applications to deploy customizable data delivery services. We present the design, implementation, and evaluation of application-defined data delivery services that are executed at overlay nodes by transmitting messages marked with service identifiers. In our approach, a data delivery service is specified as an XML document that defines a finite-state machine, which responds to network events and performs a set of network primitives. We implemented a mechanism to execute these XML specifications in the HyperCast overlay middleware, and have evaluated this mechanism quantitatively on an Emulab testbed. The experiments show that our approach is effective in realizing a variety of data delivery services without incurring unreasonable performance overhead.
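The sketch below illustrates the general idea of driving an overlay node's behaviour from an XML finite-state machine. The schema and the service shown are invented for illustration and are not HyperCast's actual specification format.

```python
# Interpret a toy XML finite-state machine at an overlay node.
import xml.etree.ElementTree as ET

SERVICE_SPEC = """
<service name="duplicate-suppress">
  <state name="idle">
    <on event="MessageArrived" action="ForwardToChildren" next="idle"/>
    <on event="DuplicateSeen"  action="Drop"              next="idle"/>
  </state>
</service>
"""

def load_fsm(xml_text):
    """Build a (state, event) -> (action, next_state) transition table."""
    root = ET.fromstring(xml_text)
    fsm = {}
    for state in root.findall("state"):
        for rule in state.findall("on"):
            key = (state.get("name"), rule.get("event"))
            fsm[key] = (rule.get("action"), rule.get("next"))
    return fsm

fsm = load_fsm(SERVICE_SPEC)
state = "idle"
for event in ["MessageArrived", "DuplicateSeen"]:
    action, state = fsm[(state, event)]
    print(event, "->", action)  # a real node would execute the matching
                                # network primitive here
```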
58

Transitioning to Agile: A Framework for Pre-adoption Analysis using Empirical Knowledge and Strategic Modeling

Chiniforooshan Esfahani, Hesam 11 December 2012
Transitioning to the Agile style of software development has become an increasingly common phenomenon among software companies. The commonly perceived advantages of Agile, such as shortened time to market, improved efficiency, and reduced development waste, are among the key motivations driving organizations toward Agile. Each year a considerable number of empirical studies are published, reporting on successful or unfavorable outcomes of enacting Agile in various organizations. Reusing this body of knowledge and turning it into a concise, accessible source of information on Agile practices can help the many software organizations on the verge of transitioning to Agile deal with the uncertainties of such a decision. One of the early steps of transitioning to Agile (or any other process model) is to confirm the compatibility of the new process with the current organization. Various Agile adoption frameworks have proposed different checklists to test the readiness of an organization for becoming Agile, or to identify the required adaptation criteria. Transitioning to Agile, as a significant organizational initiative, is a strategic decision that should be made with respect to the key objectives of the target organization. Reliably anticipating how a new process model will impact strategic objectives helps organizational managers choose the process model that brings optimal advantage to the organization. This thesis introduces a framework for evaluating new Agile practices (components of Agile methods) prior to their adoption in an organization. The framework has two distinguishing characteristics: first, it acts strategically, putting the strategic model of the organization at the center of the many decisions that must be made during Agile adoption; and second, it is based on a repository of Agile practices that allows the framework to benefit from the empirical knowledge of Agile methods, in order to improve the reliability of its outcomes. This repository has been populated through an extensive literature review of empirical studies on Agile methods. The framework was put into practice in an industrial case at one of Ericsson's R&D units in Italy, where a number of Agile practices had been proposed. Applying the framework helped the unit's managers decide strategically on the new process proposal, with a better understanding of its strategic strengths and shortcomings. A key portion of the framework's analysis results was evaluated one year after the R&D unit made the transition to Agile, showing that over 75% of the pre-adoption analysis results materialized after the new process was enacted in the organization.
59

Polymorphism and Genome Assembly

Donmez, Nilgun 11 December 2012
When Darwin introduced natural selection in 1859 as a key mechanism of evolution, little was known about the underlying cause of variation within a species. Today we know that this variation is caused by the acquired genomic differences between individuals. Polymorphism, defined as the existence of multiple alleles or forms at a genomic locus, is the technical term for such genetic variation. Polymorphism, along with reproduction and inheritance of genetic traits, is a necessary condition for natural selection and is crucial to understanding how species evolve and adapt. Many questions regarding polymorphism, such as why certain species are more polymorphic than others or how different organisms tend to favor some types of polymorphism over others, have the potential, when solved, to shed light on important problems in human medicine and disease research. Some of these studies require more diverse species and/or individuals to be sequenced. Of particular interest are species with the highest rates of polymorphism. For instance, the sequencing of the sea squirt genome led to exciting studies that would not have been possible on species with lower levels of polymorphism. Such studies form the motivation of this thesis. Sequencing of genomes is, nonetheless, a research subject in its own right. Recent advances in DNA sequencing technology have enabled researchers to undertake an unprecedented number of sequencing projects. These improvements in the cost and abundance of sequencing revived interest in advancing the algorithms and tools used for genome assembly. A majority of these tools, however, have little or no support for highly polymorphic genomes, which, we believe, require specialized methods. In this thesis, we examine the challenges polymorphism imposes on genome assembly and, via an overview of current and past methods, develop methods for polymorphic genome assembly. Though we borrow fundamental ideas from the literature, we introduce several novel concepts that can be useful not only for the assembly of highly polymorphic genomes but also for genome assembly and analysis in general.
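One concrete challenge the abstract alludes to: in a de Bruijn assembly graph, two alleles of the same locus appear as a "bubble" of parallel paths that diverge and reconverge. The toy sketch below builds such a graph from two haplotype reads and finds the bubble; real assemblers also weigh coverage, path length, and sequencing error, so this is only an illustration.

```python
# Build a k-mer de Bruijn graph and detect simple SNP bubbles.
from collections import defaultdict

def debruijn(reads, k):
    g = defaultdict(set)
    for r in reads:
        for i in range(len(r) - k):
            g[r[i:i + k]].add(r[i + 1:i + 1 + k])  # edge: k-mer -> next k-mer
    return g

def walk(g, start, limit):
    """Follow a chain of unary nodes from start, collecting the path."""
    path = [start]
    while len(path) < limit and len(g[path[-1]]) == 1:
        path.append(next(iter(g[path[-1]])))
    return path

def find_bubbles(g, limit=20):
    bubbles = []
    for u, succs in list(g.items()):
        if len(succs) == 2:                       # branch point: candidate bubble
            x, y = sorted(succs)
            px, py = walk(g, x, limit), walk(g, y, limit)
            meet = next((n for n in px if n in py), None)
            if meet:                              # the two paths reconverge
                bubbles.append((u, px[:px.index(meet)],
                                   py[:py.index(meet)], meet))
    return bubbles

# Two haplotypes differing by one SNP (C vs. G at the middle position).
reads = ["ATCCAGT", "ATCGAGT"]
print(find_bubbles(debruijn(reads, 3)))
# [('ATC', ['TCC', 'CCA', 'CAG'], ['TCG', 'CGA', 'GAG'], 'AGT')]
```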
60

Improving Dependability for Internet-scale Services

Gill, Phillipa 11 December 2012
The past 20 years have seen the Internet evolve from a network connecting academics to a critical part of our daily lives. The Internet now supports extremely popular services, such as online social networks and user-generated content, in addition to critical services such as electronic medical records and power grid control. With so many users depending on the Internet, ensuring that data is delivered dependably is paramount. However, the dependability of the Internet is threatened by the dual challenges of ensuring that (1) data in transit cannot be intercepted or dropped by a malicious entity and (2) services are not impacted by the unreliability of network components. This thesis takes an end-to-end approach and addresses these challenges at both the core and the edge of the network. We make the following two contributions. A strategy for securing the Internet's routing system. First, we consider the challenge of improving the security of interdomain routing. In the core of the network, a key challenge is enticing multiple competing organizations to agree on, and adopt, new protocols. To address this challenge we present our three-step strategy that creates economic incentives for deploying a secure routing protocol (S*BGP). The cornerstone of our strategy is S*BGP's impact on network traffic, which we harness to drive revenue-generating traffic toward ISPs that deploy S*BGP, thus creating incentives for deployment. An empirical study of data center network reliability. Second, we consider the dual challenge of improving network reliability in data centers hosting popular content. The scale at which these networks are deployed presents challenges to building reliable networks; however, since they are administered by a single organization, they also provide an opportunity to innovate. We take a first step towards designing a more reliable network infrastructure by characterizing failures in a data center network comprising tens of data centers and thousands of devices. Through dialogue with relevant stakeholders on the Internet (e.g., standardization bodies and large content providers), these contributions have resulted in real-world impact, including the creation of an FCC working group and improved root-cause analysis in a large content provider network.
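As an illustration of the style of failure characterization described, the sketch below aggregates a hypothetical event log by device type to estimate per-device failure counts and repair times. The device names, population sizes, and numbers are invented and are not the study's findings.

```python
# Aggregate failure events by device type to characterize reliability.
from collections import defaultdict
from statistics import median

events = [  # (device_type, downtime_hours) -- illustrative log entries
    ("ToR_switch", 0.5), ("ToR_switch", 2.0), ("load_balancer", 12.0),
    ("aggregation_switch", 1.0), ("load_balancer", 8.0), ("ToR_switch", 0.2),
]
population = {"ToR_switch": 1000, "aggregation_switch": 50, "load_balancer": 30}

by_type = defaultdict(list)
for dev, hours in events:
    by_type[dev].append(hours)

for dev, downtimes in by_type.items():
    rate = len(downtimes) / population[dev]  # failures per device over the log window
    print(f"{dev}: failure rate={rate:.3f}, median repair={median(downtimes):.1f}h")
```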
