1

Fault replication as a method of coding information

Barclay, Nicola January 1998
No description available.
2

Combining data structure repair and program repair

Malik, Muhammad Zubair 19 September 2014
Bugs in code continue to pose a fundamental problem for software reliability and cause expensive failures. The process of removing known bugs is termed debugging, a classic methodology commonly performed before code is deployed. Traditionally, debugging is tedious, often requiring much manual effort. A more recent technique that complements debugging is data structure repair, which handles bugs that make it into deployed systems and lead to erroneous behavior at runtime by modifying erroneous program states to recover from errors. While data structure repair presents a promising basis for dealing with bugs at runtime, it remains computationally expensive. Our thesis is that debugging and data structure repair can be integrated to provide the basis of an effective approach for removing bugs before code is deployed and handling them after it is deployed. We present a bi-directional integration where ideas at the basis of data structure repair assist in automating debugging and vice versa. Our key insight is two-fold: (1) a repair action performed to mutate an erroneous object field value can be abstracted into a program statement that performs that update correctly; and (2) repair actions that are performed repeatedly to fix the same error can be memoized and re-used.

We design, develop, and evaluate two techniques that embody this insight. First, we present an automated debugging technique that leverages a systematic constraint-based data structure repair technique developed in previous work and provides suggestions on how to fix a faulty program. Second, we present repair abstractions that are based on the same central ideas as our automated debugging technique and memoize how an erroneous state was repaired, which enables prioritizing and re-using repair actions when the same error occurs again.

The focus of our work is programs that operate on structurally complex data, e.g., heap-allocated data structures with complex structural integrity constraints, such as acyclicity. Checking such constraints plays a central role in the techniques that lie at the foundation of our work. These techniques require the user to provide the constraints, which places a burden on the user. To facilitate the use of constraint-based techniques, we present a third technique that checks constraint violations at runtime using graph spectra, which have been studied extensively by mathematicians to capture properties of graphs. We view the heap of an object-oriented program as an edge-labeled graph, which allows us to apply results from graph spectra theory. Experimental results show the effectiveness of graph spectra as a basis for capturing structural properties of a class of commonly used data structures.
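The abstract's two-fold insight lends itself to a small illustration. Below is a minimal Python sketch, not the thesis's implementation, assuming a singly linked list with an acyclicity constraint; all names (`Node`, `repair_acyclicity`, `repair_memo`) are hypothetical.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def find_cycle_entry(head):
    """Return the node whose .next field re-enters the list (violating
    acyclicity), or None if the list is already acyclic."""
    seen = {id(head)}
    node = head
    while node is not None and node.next is not None:
        if id(node.next) in seen:
            return node
        seen.add(id(node.next))
        node = node.next
    return None

# (2) repairs of the same error are memoized and re-used on recurrence
repair_memo = {}  # (constraint, field) -> repair action

def repair_acyclicity(head):
    offender = find_cycle_entry(head)
    if offender is None:
        return None                      # state already satisfies the constraint
    action = repair_memo.setdefault(
        ("acyclic", "next"),
        lambda node: setattr(node, "next", None))
    action(offender)                     # mutate the erroneous field value
    # (1) the same mutation, lifted to a statement, becomes a fix
    # suggestion for the faulty program, e.g. "last.next = None".
    return 'suggestion: assign None to the offending "next" field'

# Example: a 3-node list whose tail erroneously points back to the head.
a, b, c = Node(1), Node(2), Node(3)
a.next, b.next, c.next = b, c, a
print(repair_acyclicity(a))              # repairs c.next and suggests a fix
```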
3

OSIDEM: a demonstration of the transmission of open systems interconnection high level protocols

Azizi, Davood January 1992
No description available.
4

Lempel-Ziv Factorization Using Less Time and Space

Chen, Gang 08 1900
For 30 years the Lempel-Ziv factorization LZx of a string x = x[1..n] has been a fundamental data structure of string processing, especially valuable for string compression and for computing all the repetitions (runs) in x. With the rise of the Internet, demand for Lempel-Ziv factorization grew enormously; it now underlies basic, efficient data transmission formats on the Internet.

Traditionally the standard method for computing LZx was based on O(n)-time processing of the suffix tree STx of x. Ukkonen's algorithm constructs the suffix tree online and so permits LZ to be built from subtrees of ST; this gives it an advantage, at least in terms of space, over the fast and compact version of McCreight's STCA [37] due to Kurtz [24]. In 2000 Abouelhoda, Kurtz & Ohlebusch proposed an O(n)-time Lempel-Ziv factorization algorithm based on an "enhanced" suffix array - that is, a suffix array SAx together with other supporting data structures.

In this thesis we first examine some previous algorithms for computing the Lempel-Ziv factorization. We then analyze the rationale of their development and introduce a collection of new algorithms for computing the LZ factorization. Through theoretical proof and experimental comparison of running time and storage usage, we show that our new algorithms are superior to those previously proposed, in their theoretical behavior, in practice, or both. In the last chapter conclusions about our new algorithms are given, and some open problems are pointed out for our future research.
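As a concrete illustration of the output these algorithms compute, here is a brute-force O(n²) Python sketch of the LZ factorization itself; the thesis's contribution is computing this efficiently via suffix structures, so this naive version only defines the target.

```python
def lz_factorize(x):
    """Brute-force O(n^2) LZ factorization: each factor is either the longest
    prefix of the remaining suffix that also occurs at an earlier position
    (self-overlap allowed), or a single fresh letter."""
    n, i, factors = len(x), 0, []
    while i < n:
        longest = 0
        for j in range(i):                  # try every earlier start position
            l = 0
            while i + l < n and x[j + l] == x[i + l]:
                l += 1
            longest = max(longest, l)
        if longest == 0:
            factors.append(x[i])            # fresh letter: one-character factor
            i += 1
        else:
            factors.append(x[i:i + longest])
            i += longest
    return factors

print(lz_factorize("abaababa"))             # ['a', 'b', 'a', 'aba', 'ba']
```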
5

Contract-based data structure repair using Alloy

Nokhbeh Zaeem, Razieh 25 October 2010
Contracts and specifications have long been used in object-oriented design, programming and testing to enhance reliability before software deployment. However, the use of specifications in deployed software is commonly limited to runtime checking, where assertions form a basis for detecting incorrect program states and terminating erroneous executions. We present a contract-based approach for data structure repair, which repairs erroneous states in deployed software. The key novelty is the support for rich behavioral specifications, such as those that relate pre-states with post-states of a method, to accurately specify expected behavior and enable precise repair. The approach is based on the view of a specification as a non-deterministic implementation. The key insight is to use any correct state mutations by an otherwise erroneous execution to prune non-determinism in the specification, thereby transmuting the specification into an implementation that does not incur a prohibitively high performance penalty. While invariants, pre-conditions and post-conditions could be provided in different modeling languages, we leverage the Alloy tool-set, specifically the Alloy language and the Alloy Analyzer, for systematically repairing erroneous states. Four different algorithms are presented and implemented in our data structure repair framework. These algorithms can repair a medium-sized erroneous data structure in a few seconds. We introduce user-defined repair guide annotations to improve the accuracy and performance of the repair mechanism. Experiments using complex specifications show the approach holds much promise for increasing software reliability.
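The view of a specification as a non-deterministic implementation, with correct mutations from the erroneous execution pruning the search, can be sketched in a few lines. The following toy Python example is illustrative only and does not use Alloy; the class, the bug, and the contract are hypothetical.

```python
# Toy contract-based repair: the post-condition relates the pre-state and
# post-state of push(); the repair keeps correct mutations and re-assigns
# only the field needed to satisfy the contract.

class Stack:
    def __init__(self):
        self.items = []
        self.size = 0

    def buggy_push(self, v):
        self.items.append(v)      # correct mutation: kept by the repair
        # bug: forgot to update self.size

def post_push(pre_size, stack):
    """Post-condition of push: one more element, size field consistent."""
    return stack.size == pre_size + 1 == len(stack.items)

def run_push_with_repair(stack, v):
    pre_size = stack.size
    stack.buggy_push(v)
    if not post_push(pre_size, stack):
        # Minimal mutation satisfying the contract; the correct update to
        # items has already pruned the specification's non-determinism.
        stack.size = pre_size + 1

s = Stack()
run_push_with_repair(s, 42)
assert post_push(0, s)            # erroneous execution repaired on the fly
```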
6

Data Structuring Problems in the Bit Probe Model

Rahman, Mohammad Ziaur January 2007
We study two data structuring problems under the bit probe model: the dynamic predecessor problem, and integer representation in a manner supporting basic updates in as few bit operations as possible. In the bit probe model, the complexity measure counts only the bitwise accesses to the data structure and ignores the cost of computation. As a result, the bit probe complexity of a data structuring problem can be considered a fundamental measure of the problem: lower bounds derived in this model are valid for any realistic, sequential model of computation. Furthermore, some problems are especially suitable for study in this model because they can be solved using fewer than $w$ bit probes, where $w$ is the size of a computer word.

The predecessor problem is one of the fundamental problems in computer science, with numerous applications, and has been studied for several decades. We study the colored predecessor problem, a variation in which each element is associated with a symbol from a finite alphabet, or color. The problem is to store a subset $S$ of size $n$ from a finite universe $U$ so as to support efficient insertion, deletion, and queries that determine the color of the largest value in $S$ not larger than $x$, for a given $x \in U$. We present a data structure for the problem that requires $O(k \sqrt[k]{\frac{\log U}{\log \log U}})$ bit probes for the query and $O(k^2 \frac{\log U}{\log \log U})$ bit probes for the update operations, where $U$ is the universe size and $k$ is a positive constant. We also show that the results on the colored predecessor problem can be used to solve related problems such as existential range queries, dynamic prefix sums, segment representatives, and connectivity problems.

The second structure considered is for integer representation. We examine the problem of representing integers in a nearly minimal number of bits so that increment and decrement (and indeed addition and subtraction) can be performed using few bit inspections and fewer bit changes. In particular, we prove a new lower bound of $\Omega(\sqrt{n})$ for the increment and decrement operations, where $n$ is the minimum number of bits required to represent the number. We present several efficient data structures that represent integers using a logarithmic number of bit inspections and a constant number of bit changes per operation.
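A classic, well-known instance of the inspection/change trade-off the second part studies is the Gray-code counter: a standard binary counter can flip up to n bits on a single increment (carry propagation), whereas a Gray-code counter flips exactly one bit per increment, at the price of inspecting bits to choose which one. The Python sketch below is this textbook construction, not the thesis's data structure.

```python
def gray_increment(g, n_bits):
    """Advance an n_bits-wide Gray code by one, flipping exactly one bit."""
    if bin(g).count("1") % 2 == 0:
        return g ^ 1                     # even parity: flip the lowest bit
    lowest = g & -g                      # isolate the rightmost set bit
    flip = lowest << 1                   # flip the bit just left of it
    if flip >= 1 << n_bits:              # would leave the width: wrap to 0
        flip = 1 << (n_bits - 1)
    return g ^ flip

# All 2^3 codewords in sequence, each differing from the last in one bit:
g = 0
for _ in range(8):
    print(format(g, "03b"))              # 000 001 011 010 110 111 101 100
    g = gray_increment(g, 3)
```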
7

Realization Methods for the Quadtree Morphological Filter with Their Applications

Chen, Yung-lin 07 September 2011
In this paper, the quadtree algorithm and morphological image processing are combined, and a new method is proposed to make the earlier pattern mapping method faster. The earlier method stores the tree pattern in string form, a pointerless data structure. In the proposed method the tree pattern is stored in a pointer-based data structure, so the pointer tree can be applied to the quadtree immediately, without the transformation time the earlier method required. The pointerless quadtree is thus converted into a pointer-based quadtree to reduce processing time. The modified algorithm is applied to circuit detection, image restoration, image segmentation and cell counting.
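For orientation, here is a minimal pointer-based quadtree over a square binary image, the representation the abstract contrasts with the string-encoded (pointerless) form; this sketch is illustrative and is not the paper's implementation.

```python
class QuadNode:
    def __init__(self, value=None, children=None):
        self.value = value          # 0 or 1 for a uniform leaf, else None
        self.children = children    # [NW, NE, SW, SE] for an internal node

def build_quadtree(img, r, c, size):
    """Build the quadtree of the size x size block with top-left corner (r, c)."""
    first = img[r][c]
    if all(img[i][j] == first
           for i in range(r, r + size)
           for j in range(c, c + size)):
        return QuadNode(value=first)          # uniform block -> leaf
    h = size // 2
    return QuadNode(children=[
        build_quadtree(img, r,     c,     h), # NW
        build_quadtree(img, r,     c + h, h), # NE
        build_quadtree(img, r + h, c,     h), # SW
        build_quadtree(img, r + h, c + h, h), # SE
    ])

img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
root = build_quadtree(img, 0, 0, 4)
print(root.children[0].value)                 # NW quadrant is a uniform 1-leaf
```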
8

Sequential and Parallel Algorithms for the Generalized Maximum Subarray Problem

Bae, Sung Eun January 2007
The maximum subarray problem (MSP) is to select the segment of consecutive array elements with the largest sum among all segments of a given array. Efficient algorithms for the MSP and related problems are expected to contribute to applications in genomic sequence analysis, data mining, and computer vision. The MSP is a conceptually simple problem, and several linear-time optimal algorithms are known for the 1D version. For the 2D version, the currently known upper bounds are cubic or near-cubic time. For wider applications, it is of interest to compute multiple maximum subarrays instead of just one, which motivates the work in the first half of the thesis.

The generalized problem of the K-maximum subarray involves finding K segments of the largest sum in sorted order. Two subcategories of the problem can be defined: the K-overlapping maximum subarray problem (K-OMSP) and the K-disjoint maximum subarray problem (K-DMSP). The K-OMSP had not been studied previously, so the thesis explores various techniques to speed up the computation and presents several new algorithms. The first algorithm for the 1D problem runs in O(Kn) time, and increasingly efficient algorithms of O(K² + n log K) time, O((n+K) log K) time and O(n + K log min(K, n)) time are presented. Considerations on extending these results to higher dimensions contribute to establishing O(n³) time for the 2D version of the problem when K is bounded by a certain range.

Ruzzo and Tompa studied the problem of all maximal scoring subsequences, whose definition is almost identical to that of the K-DMSP, with a few subtle differences. Despite the slight differences, their linear-time algorithm readily computes the 1D K-DMSP, but it is not easily extended to higher dimensions. This observation motivates a new algorithm based on the tournament data structure, with O(n + K log min(K, n)) worst-case time. The extended version of the new algorithm can process a 2D problem in O(n³ + min(K, n) · n² log min(K, n)) time, that is, O(n³) for K ≤ n/log n.

For the 2D MSP, the cubic-time sequential computation is still expensive for practical purposes, considering potential applications in computer vision and data mining. The second half of the thesis investigates a speed-up option through parallel computation. Previous parallel algorithms for the 2D MSP demand huge hardware resources, or their target parallel computation models are purely theoretical. A nice compromise between speed and cost can be realized through a mesh topology. Two mesh algorithms for the 2D MSP with O(n) running time on a network of size O(n²) are designed and analyzed, and various techniques are considered to maximize their practicality.
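For reference, the 1D baseline the thesis generalizes is solvable in linear time by Kadane's classic algorithm; the sketch below is that textbook algorithm, not one of the thesis's new K-maximum-subarray or mesh algorithms.

```python
def max_subarray(a):
    """Kadane's O(n) algorithm: return (sum, start, end) of the maximum
    segment a[start..end] over all non-empty segments."""
    best_sum, best_start, best_end = a[0], 0, 0
    cur_sum, cur_start = a[0], 0
    for i in range(1, len(a)):
        if cur_sum < 0:
            cur_sum, cur_start = a[i], i    # a negative prefix never helps
        else:
            cur_sum += a[i]
        if cur_sum > best_sum:
            best_sum, best_start, best_end = cur_sum, cur_start, i
    return best_sum, best_start, best_end

print(max_subarray([3, -5, 4, -1, 2, -3, 4]))   # (6, 2, 6): segment 4,-1,2,-3,4
```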
9

Survival Techniques for Computer Programs

Rinard, Martin C. 01 1900
Programs developed with standard techniques often fail when they encounter any of a variety of internal errors. We present a set of techniques that prevent programs from failing and instead enable them to continue to execute even after they encounter otherwise fatal internal errors. Our results indicate that even though the techniques may take a program outside of its anticipated execution envelope, the continued execution often enables the program to provide acceptable results to its users. These techniques may therefore play an important role in making software systems more resilient and reliable in the face of errors.
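A toy sketch of the flavor of such survival techniques, in the spirit of failure-oblivious computing: rather than letting an out-of-bounds read terminate the program, manufacture a benign value and continue. The function names and recovery policy below are illustrative, not the paper's code.

```python
def oblivious_get(buf, i, default=0):
    """Read buf[i]; on an out-of-range index, warn and keep executing."""
    if 0 <= i < len(buf):
        return buf[i]
    print(f"warning: index {i} out of range ({len(buf)} elements); "
          f"continuing with {default}")
    return default

def average(samples, indices):
    vals = [oblivious_get(samples, i) for i in indices]
    return sum(vals) / len(vals)

# A stray index no longer kills the run; the result stays acceptable.
print(average([10, 20, 30], [0, 1, 2, 7]))     # warns, then prints 15.0
```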
