  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Fault replication as a method of coding information

Barclay, Nicola January 1998 (has links)
No description available.
2

Combining data structure repair and program repair

Malik, Muhammad Zubair 19 September 2014 (has links)
Bugs in code continue to pose a fundamental problem for software reliability and cause expensive failures. The process of removing known bugs is termed debugging, a classic methodology commonly performed before code is deployed. Traditionally, debugging is tedious, often requiring much manual effort. A more recent technique that complements debugging is data structure repair, which handles bugs that make it to deployed systems and lead to erroneous behavior at runtime by modifying erroneous program states to recover from errors. While data structure repair presents a promising basis for dealing with bugs at runtime, it remains computationally expensive. Our thesis is that debugging and data structure repair can be integrated to provide the basis of an effective approach for removing bugs before code is deployed and handling them after it is deployed. We present a bi-directional integration where ideas at the basis of data structure repair assist in automating debugging and vice versa. Our key insight is two-fold: (1) a repair action performed to mutate an erroneous object field value to repair it can be abstracted into a program statement that performs that update correctly; and (2) repair actions that are performed repeatedly to fix the same error can be memoized and re-used. We design, develop, and evaluate two techniques that embody our insight. One, we present an automated debugging technique that leverages a systematic constraint-based data structure repair technique developed in previous work and provides suggestions on how to fix a faulty program. Two, we present repair abstractions that are based on the same central ideas as our automated debugging technique and memoize how an erroneous state was repaired, which enables prioritizing and re-using repair actions when the same error occurs again.
The focus of our work is programs that operate on structurally complex data, e.g., heap-allocated data structures that have complex structural integrity constraints, such as acyclicity. Checking such constraints plays a central role in the techniques that lie at the foundation of our work. These techniques require the user to provide the constraints, which poses a burden on the user. To facilitate the use of constraint-based techniques, we present a third technique to check constraint violations at runtime using graph spectra, which have been studied extensively by mathematicians to capture properties of graphs. We view the heap of an object-oriented program as an edge-labeled graph, which allows us to apply results from graph spectra theory. Experimental results show the effectiveness of using graph spectra as a basis for capturing structural properties of a class of commonly used data structures.
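The repair actions the abstract describes can be caricatured in a few lines. This is a hypothetical sketch, not the thesis's constraint-based system: it checks an acyclicity constraint on a singly linked list and, on violation, performs a repair action (nulling the offending pointer) that is returned in an abstracted form suitable for memoization. The `Node` class and the action encoding are illustrative assumptions.

```python
class Node:
    """Minimal heap node for a singly linked list (illustrative only)."""
    def __init__(self, value):
        self.value = value
        self.next = None

def find_cycle_edge(head):
    """Return the node whose 'next' pointer violates acyclicity, or None."""
    seen = set()
    node = head
    while node is not None:
        seen.add(id(node))
        if node.next is not None and id(node.next) in seen:
            return node  # this node's back edge closes a cycle
        node = node.next
    return None

def repair_acyclicity(head):
    """Mutate the erroneous field and return an abstracted repair action."""
    offender = find_cycle_edge(head)
    if offender is not None:
        offender.next = None  # repair action: truncate the back edge
        return ("set_next_to_null", offender.value)
    return None
```

The returned tuple stands in for the abstraction step: the same action can later be turned into a suggested program statement or re-used when the same corruption recurs.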
3

OSIDEM: a demonstration of the transmission of Open Systems Interconnection high-level protocols

Azizi, Davood January 1992 (has links)
No description available.
4

Lempel-Ziv Factorization Using Less Time and Space

Chen, Gang 08 1900 (has links)
For 30 years the Lempel-Ziv factorization LZx of a string x = x[1..n] has been a fundamental data structure of string processing, especially valuable for string compression and for computing all the repetitions (runs) in x. The rise of the Internet created a huge need for Lempel-Ziv factorization, which now underlies basic, efficient data-transmission formats.

Traditionally the standard method for computing LZx was based on O(n)-time processing of the suffix tree STx of x. Ukkonen's algorithm constructs the suffix tree online and so permits LZ to be built from subtrees of ST; this gives it an advantage, at least in terms of space, over the fast and compact version of McCreight's STCA [37] due to Kurtz [24]. In 2000 Abouelhoda, Kurtz & Ohlebusch proposed an O(n)-time Lempel-Ziv factorization algorithm based on an "enhanced" suffix array - that is, a suffix array SAx together with other supporting data structures.

In this thesis we first examine some previous algorithms for computing the Lempel-Ziv factorization. We then analyze the rationale behind their development and introduce a collection of new algorithms for computing the LZ-factorization. By theoretical proof and by experimental comparison of running time and storage usage, we show that our new algorithms appear to be superior to those previously proposed, in their theoretical behavior, in practice, or both. In the last chapter, conclusions about our new algorithms are given, and some open problems are pointed out for future research. / Thesis / Master of Science (MSc)
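To make the object of study concrete, here is a naive O(n²) sketch of the factorization itself, not one of the suffix-tree or suffix-array algorithms the thesis develops: each factor is the longest prefix of the remaining suffix that also occurs starting at an earlier position (self-overlap allowed), or a single fresh character.

```python
def lz_factorize(s):
    """Naive Lempel-Ziv factorization of s (quadratic-time illustration)."""
    factors = []
    i, n = 0, len(s)
    while i < n:
        length = 0
        # Extend the factor while s[i : i+length+1] occurs starting before i.
        # Restricting the search to s[0 : i+length] forces the earlier
        # occurrence to start at a position < i, while still allowing it
        # to overlap position i.
        while i + length < n and s.find(s[i:i + length + 1], 0, i + length) != -1:
            length += 1
        if length == 0:
            factors.append(s[i])          # brand-new character
            i += 1
        else:
            factors.append(s[i:i + length])
            i += length
    return factors
```

For example, `lz_factorize("abaabaa")` yields the factors `a | b | a | abaa`; the last factor overlaps its earlier occurrence, as permitted in the classic definition.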
5

Realization Methods for the Quadtree Morphological Filter with Their Applications

Chen, Yung-lin 07 September 2011 (has links)
The proposed method combines the quadtree algorithm with morphological image processing. A new method is proposed that improves on the previous pattern-mapping method for faster processing. The previous pattern-mapping method stores the tree pattern in string form, which is a pointerless data structure. In the proposed method the tree pattern is stored in a pointer-based data structure, so the pointer tree can be applied to the quadtree immediately, without the transformation time required by the previous pattern-mapping method. In this work, the pointerless quadtree is thus converted to a pointer quadtree to reduce processing time. The modified algorithm is applied to circuit detection, image restoration, image segmentation, and cell counting.
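A minimal sketch of the pointer-based quadtree idea follows; it is a hypothetical illustration, not the paper's implementation. A square binary image is split recursively until each block is uniform, and internal nodes hold direct references to their four children rather than an encoded string, so traversal needs no decoding step.

```python
def build_quadtree(img, x, y, size):
    """Pointer-based quadtree over a size-by-size square of a binary image.

    A node is either a leaf value (0 or 1) for a uniform block, or a tuple
    of four child nodes in (NW, NE, SW, SE) order.
    """
    vals = {img[r][c] for r in range(y, y + size) for c in range(x, x + size)}
    if len(vals) == 1:
        return vals.pop()  # uniform block -> leaf
    h = size // 2
    return (build_quadtree(img, x, y, h),          # NW
            build_quadtree(img, x + h, y, h),      # NE
            build_quadtree(img, x, y + h, h),      # SW
            build_quadtree(img, x + h, y + h, h))  # SE
```

Because children are ordinary references, a morphological pass can walk the tree directly, which is the gain over the string-encoded (pointerless) form that first has to be transformed.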
6

Sequential and Parallel Algorithms for the Generalized Maximum Subarray Problem

Bae, Sung Eun January 2007 (has links)
The maximum subarray problem (MSP) involves selecting the segment of consecutive array elements whose sum is largest over all segments in a given array. Efficient algorithms for the MSP and related problems are expected to contribute to applications in genomic sequence analysis, data mining, computer vision, and elsewhere. The MSP is a conceptually simple problem, and several linear-time optimal algorithms for the 1D version of the problem are already known. For the 2D version, the currently known upper bounds are cubic or near-cubic time. For wider applications, it is interesting to compute multiple maximum subarrays instead of just one, which motivates the work in the first half of the thesis. The generalized problem of K-maximum subarrays involves finding K segments of the largest sums in sorted order. Two subcategories of the problem can be defined: the K-overlapping maximum subarray problem (K-OMSP) and the K-disjoint maximum subarray problem (K-DMSP). Studies on the K-OMSP had not been undertaken previously, hence the thesis explores various techniques to speed up the computation, along with several new algorithms. The first algorithm for the 1D problem runs in O(Kn) time, and increasingly efficient algorithms of O(K² + n log K) time, O((n + K) log K) time, and O(n + K log min(K, n)) time are presented. Considerations on extending these results to higher dimensions are made, which contributes to establishing O(n³) time for the 2D version of the problem when K is bounded by a certain range. Ruzzo and Tompa studied the problem of all maximal scoring subsequences, whose definition is almost identical to that of the K-DMSP apart from a few subtle differences. Their linear-time algorithm is readily capable of computing the 1D K-DMSP, but it is not easily extended to higher dimensions. This observation motivates a new algorithm based on the tournament data structure, which runs in O(n + K log min(K, n)) worst-case time.
The extended version of the new algorithm can process a 2D problem in O(n³ + min(K, n) · n² log min(K, n)) time, that is, O(n³) for K ≤ n/log n. For the 2D MSP, the cubic-time sequential computation is still expensive for practical purposes, considering potential applications in computer vision and data mining. The second half of the thesis investigates a speed-up option through parallel computation. Previous parallel algorithms for the 2D MSP either have huge demands for hardware resources, or their target parallel computation models are purely theoretical. A nice compromise between speed and cost can be realized through a mesh topology. Two mesh algorithms for the 2D MSP with O(n) running time on a network of size O(n²) are designed and analyzed, and various techniques are considered to maximize their practicality.
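For the 1D base case the abstract builds on, the standard linear-time solution is Kadane's algorithm. The sketch below is that classic algorithm, not one of the thesis's K-maximum variants; it returns the best sum together with the segment's boundaries.

```python
def max_subarray(a):
    """Kadane's algorithm: O(n) maximum subarray sum with its (start, end) indices."""
    best_sum, best_range = a[0], (0, 0)
    cur_sum, cur_start = a[0], 0
    for i in range(1, len(a)):
        # A negative running sum can never help a segment ending at i,
        # so restart the candidate segment here.
        if cur_sum < 0:
            cur_sum, cur_start = a[i], i
        else:
            cur_sum += a[i]
        if cur_sum > best_sum:
            best_sum, best_range = cur_sum, (cur_start, i)
    return best_sum, best_range
```

A naive way to obtain the K overlapping maxima is to rank the sums of all O(n²) segments, which is exactly the quadratic baseline the thesis's O(n + K log min(K, n)) algorithms improve upon.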
7

Survival Techniques for Computer Programs

Rinard, Martin C. 01 1900 (has links)
Programs developed with standard techniques often fail when they encounter any of a variety of internal errors. We present a set of techniques that prevent programs from failing and instead enable them to continue to execute even after they encounter otherwise fatal internal errors. Our results indicate that even though the techniques may take the program outside of its anticipated execution envelope, the continued execution often enables the program to provide acceptable results to its users. These techniques may therefore play an important role in making software systems more resilient and reliable in the face of errors. / Singapore-MIT Alliance (SMA)
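The idea of continuing past otherwise-fatal internal errors can be caricatured in a few lines. This is a hypothetical sketch, far simpler than the compiler- and runtime-based techniques the abstract refers to: intercept an internal error, log it, and substitute a default value so execution proceeds, possibly outside the anticipated envelope.

```python
import functools
import logging

def survive(default):
    """Decorator: swallow internal errors in the wrapped function and
    return a caller-supplied default instead of propagating the failure."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                # Log so the error is visible, but keep executing.
                logging.warning("survived %s in %s", exc, fn.__name__)
                return default
        return inner
    return wrap
```

Whether the substituted value yields acceptable results is exactly the empirical question the abstract reports on; the sketch only shows the control-flow shape of the approach.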
8

Le parrainage sportif en PME-PMI : de l'émergence d'un processus d'identification à l'évolution de la satisfaction au travail et l'implication organisationnelle des employés / Sport sponsorship in SMEs: from the emergence of an identification process to the evolution of job satisfaction and organizational commitment among the sponsors' employees

Jouny, Julien 17 November 2014 (has links)
Over the last 40 years, research on sponsorship has been growing, and so have investments in this communication instrument. Today, worldwide sponsorship investments exceed 55 billion US dollars (Kantar, 2014), while investments in France total nearly two billion euros (FPI, 2014). Roughly two thirds of these investments concern the area of sport, making sport sponsorship one of the most dynamic fields of marketing communication. In the past, research has focused mainly on the impact of sponsorship on the external targets of the sponsor (spectators, consumers, etc.), typically analyzing large multinational companies supporting large-scale events or entities with high marketing potential (Olympic Games, world championships, professional clubs, etc.). Very few studies have examined the use of sponsorship by SMEs, and there is a lack of research on the internal consequences of this practice.
Our study focuses on the effects of sport sponsorship on the internal public of SMEs by addressing the following question: how do employees of SMEs perceive the low-profile sponsorship activities of their employers, and how are they affected by them? This work is structured in three parts. First, a literature review on sponsorship and sport sponsorship highlights the theoretical and managerial interest of the research and proposes a definition of sport sponsorship. A qualitative study of 18 directors of sponsoring SMEs confirms the managerial interest of the subject and sheds light on the potential impact of sport sponsorship on the internal public of this kind of company. Second, based on a qualitative study of 16 employees of sponsoring SMEs, a data structure is developed that captures an organizational identification process resulting from sponsorship activities, along with notable effects on job satisfaction and organizational commitment. Third, the existence of these effects is confirmed through a quantitative survey of 421 employees of 41 different sponsoring SMEs. Overall, the results show that these organizations remain largely unaware of, and under-exploit, the potential of sport sponsorship. From a managerial perspective, our research highlights the optimal conditions required to make effective use of sport sponsorship with minor marketing and sales potential as an internal communication tool within SMEs.
9

Scalable Parameter Management using Case-Based Reasoning for Cognitive Radio Applications

Ali, Daniel Ray 30 May 2012 (has links)
Cognitive radios have applied various forms of artificial intelligence (AI) to wireless systems in order to solve the complex problems presented by proper link management, network traffic balance, and system efficiency. Case-based reasoning (CBR) has seen attention as a prospective avenue for storing and organizing past information so that the cognitive engine can learn from previous experience. CBR uses past information and observed outcomes to form empirical relationships that may be difficult to model a priori. As wireless systems become more complex and more tightly time-constrained, scalability becomes a pressing concern when storing large amounts of information over multiple dimensions. This thesis presents a renewed look at an abstract application of CBR to cognitive radio. By appropriately designing a case structure containing information useful both to the cognitive entity and to the underlying similarity relationships between cases, an accurate problem description can be developed and indexed. By separating the components of a case from the parameters that are meaningful to similarity, the situation can be quickly identified and queried, given proper design. A data structure is presented that orders cases by their general placement in Euclidean space but does not require explicitly calculating the distance between the query case and every stored case. By grouping possible similarity-dimension values into distinct partitions called "similarity buckets", a data structure is developed with constant (O(1)) access time, an improvement of several orders of magnitude over traditional linear approaches (O(n)). / Master of Science
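The "similarity bucket" idea can be sketched as follows; this is a hypothetical illustration under the assumption of fixed-width partitions per dimension, not the thesis's implementation. Each similarity dimension is quantized into ranges, so a query case maps straight to a bucket key in O(1) time instead of computing a distance to every stored case.

```python
import math

class BucketIndex:
    """O(1)-access case store: each similarity dimension is partitioned
    into fixed-width ranges, and a case's bucket key is the tuple of the
    partitions its dimension values fall into."""

    def __init__(self, widths):
        self.widths = widths   # bucket width per dimension, e.g. {"snr": 5.0}
        self.buckets = {}      # bucket key -> list of (case, outcome)

    def _key(self, case):
        return tuple(math.floor(case[d] / w) for d, w in self.widths.items())

    def add(self, case, outcome):
        self.buckets.setdefault(self._key(case), []).append((case, outcome))

    def query(self, case):
        """Return stored cases in the same bucket as the query case."""
        return self.buckets.get(self._key(case), [])
```

The dictionary lookup replaces the linear scan: similar cases land in the same key, so retrieval cost is independent of the number of stored cases. (Cases near a partition boundary may miss close neighbors in adjacent buckets, one of the design trade-offs such a structure has to manage.)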
10

Studying the Properties of a Distributed Decentralized B+ Tree with Weak Consistency

Ben Hafaiedh, Khaled 18 January 2012 (has links)
Distributed computing is very popular in computer science and is widely used in web applications. In such systems, tasks and resources are partitioned among several computers so that the workload can be shared across the network, in contrast to systems using a single server. Distributed designs are used for many practical reasons and are often more scalable, more robust, and suitable for many applications. The aim of this thesis is to study the properties of a distributed tree data structure that allows searches, insertions, and deletions of data elements. In particular, the B-tree structure [13] is considered, which is a generalization of a binary search tree. The study analyzes the effect of distributing such a tree among several computers and investigates the behavior of the structure over a long period of time as the network of computers supporting the tree grows, while the state of the structure is continually updated as insertion and deletion operations are performed. It also attempts to validate the necessary and sufficient invariants of the B-tree structure that guarantee the correctness of the search operations. A simulation study is conducted to verify the validity of this distributed data structure and the performance of the algorithm that implements it. Finally, a discussion at the end of the thesis compares the performance of the system design with other distributed tree structure designs.
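The kind of invariant such a validation targets can be shown with a small sketch; the dictionary-based node layout and the fan-out bound here are hypothetical, not the thesis's distributed structure. A correct B-tree search relies on keys being sorted within each node and on every subtree's keys falling strictly between the separator keys around it.

```python
def check_btree_invariants(node, lo=float("-inf"), hi=float("inf"), order=4):
    """Recursively check the invariants a B-tree search depends on:
    keys sorted and confined to [lo, hi), fan-out within the order bound,
    and one more child than there are separator keys."""
    keys, children = node["keys"], node.get("children", [])
    if keys != sorted(keys) or not all(lo <= k < hi for k in keys):
        return False
    if len(keys) >= order or (children and len(children) != len(keys) + 1):
        return False
    # Child i must hold keys strictly between separators i-1 and i.
    bounds = [lo] + keys + [hi]
    return all(check_btree_invariants(c, bounds[i], bounds[i + 1], order)
               for i, c in enumerate(children))
```

In a distributed, weakly consistent setting the interesting question is exactly when these local checks can be trusted, since different machines may hold temporarily inconsistent views of neighboring nodes.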
