51

How civilized!: discourses in modernity and postmodernity in the computer strategy game

Caldwell, Nicholas Peter. January 2001
Thesis (M.Phil.)--University of Queensland, 2002. Includes bibliographical references.
52

Random testing of open source C compilers

Yang, Xuejun 23 June 2015
Compilers are indispensable tools for developers, and we expect them to be correct. However, compiler correctness is very hard to reason about, which can be partly explained by the daunting complexity of compilers.

In this dissertation, I explain how we constructed a random program generator, Csmith, and used it to find hundreds of bugs in widely used open source compilers such as the GNU Compiler Collection (GCC) and the LLVM Compiler Infrastructure (LLVM). The success of Csmith depends on its ability to be expressive and unambiguous at the same time. Csmith is composed of a code generator and a GTAV (Generation-Time Analysis and Validation) engine, which work in concert to produce expressive yet unambiguous random programs. The expressiveness of Csmith comes from the code generator, while the unambiguity is ensured by GTAV, which efficiently performs program analyses, such as points-to analysis and effect analysis, to avoid ambiguities caused by undefined or unspecified behaviors.

During our 4.25 years of testing, Csmith found over 450 bugs in GCC and LLVM. We analyzed the bugs by putting them into categories, studying their root causes, locating them in the compilers' source code, and evaluating their importance. We believe these analysis results are useful to future random testers as well as compiler writers and users.
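The workflow this abstract describes is differential testing: the same random program is compiled by several compilers (or one compiler at several optimization levels), and any disagreement in the printed checksum signals a wrong-code bug. Below is a minimal sketch of such a harness, assuming the `csmith` command-line generator is installed and that, as Csmith's programs do, each generated program prints a checksum; the runtime include path is an assumption that varies by installation.

```python
import os
import subprocess
import tempfile

COMPILERS = ["gcc", "clang"]          # compilers to cross-check (assumed installed)
OPT_LEVELS = ["-O0", "-O1", "-O2", "-O3"]
CSMITH_INCLUDE = "/usr/include/csmith"  # assumption: varies by installation

def run(cmd, timeout=30):
    return subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)

def test_once(workdir):
    src = os.path.join(workdir, "test.c")
    # Generate one random, closed C program; Csmith writes it to stdout
    # and the program prints a checksum of its global state when run.
    with open(src, "w") as f:
        f.write(run(["csmith"]).stdout)
    checksums = {}
    for cc in COMPILERS:
        for opt in OPT_LEVELS:
            exe = os.path.join(workdir, f"{cc}{opt}")
            if run([cc, opt, f"-I{CSMITH_INCLUDE}", src, "-o", exe]).returncode != 0:
                return f"compile failure: {cc} {opt}"   # possible compiler crash bug
            try:
                checksums[(cc, opt)] = run([exe], timeout=10).stdout
            except subprocess.TimeoutExpired:
                # A timeout is a verdict, not an error: a miscompiled
                # program can loop forever.
                return f"timeout: {cc} {opt}"
    if len(set(checksums.values())) > 1:
        return f"checksum mismatch: {checksums}"        # wrong-code bug somewhere
    return None

if __name__ == "__main__":
    for i in range(1000):
        with tempfile.TemporaryDirectory() as d:
            verdict = test_once(d)
            if verdict:
                print(f"iteration {i}: {verdict}")
```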
53

Opportunities for near data computing in MapReduce workloads

Pugsley, Seth Hintze 25 June 2015
In-memory big data applications are growing in popularity, including in-memory versions of the MapReduce framework. The move away from disk-based datasets shifts the performance bottleneck from slow disk accesses to memory bandwidth. MapReduce is a data-parallel application, and is therefore amenable to being executed on as many parallel processors as possible, with each processor requiring a large amount of memory bandwidth. We propose using Near Data Computing (NDC) as a means to develop systems that are optimized for in-memory MapReduce workloads, offering high compute parallelism and even higher memory bandwidth. This dissertation explores three different implementations and styles of NDC to improve MapReduce execution. First, we use 3D-stacked memory+logic devices to process the Map phase on compute elements in close proximity to database splits. Second, we attempt to replicate the performance characteristics of 3D-stacked NDC using only commodity memory and inexpensive processors, to improve the performance of both the Map and Reduce phases. Finally, we incorporate fixed-function hardware accelerators to improve sorting performance within the Map phase. This dissertation shows that it is possible to improve in-memory MapReduce performance by potentially two orders of magnitude by designing system and memory architectures specifically tailored to that end.
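For reference, the Map and Reduce phases the abstract targets look as follows in a minimal in-memory word count sketch (an illustration of the programming model only, not of the proposed hardware); the sort inside the grouping step is the operation the fixed-function accelerators in the final contribution are aimed at.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(split):
    # Map: emit (key, value) pairs from one input split.
    return [(word, 1) for word in split.split()]

def shuffle(pairs):
    # Group values by key. This sort is the memory-bandwidth-hungry step
    # that a near-data sorting accelerator would target.
    pairs.sort(key=itemgetter(0))
    return {k: [v for _, v in g] for k, g in groupby(pairs, key=itemgetter(0))}

def reduce_phase(key, values):
    # Reduce: combine all values emitted for one key.
    return key, sum(values)

splits = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = [p for s in splits for p in map_phase(s)]  # Map runs per split, in parallel in practice
counts = dict(reduce_phase(k, vs) for k, vs in shuffle(pairs).items())
print(counts)  # {'brown': 1, 'dog': 1, 'fox': 2, 'lazy': 1, 'quick': 1, 'the': 3}
```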
54

Medial axis simplification based on global geodesic slope and accumulated hyperbolic distance

Wang, Rui, 王睿 January 2012
The medial axis is an important shape representation, and computing it is a fundamental research problem in computer graphics. In practice, the medial axis is widely used across computer graphics, for example in shape analysis, image segmentation, skeleton extraction, and mesh generation. However, its applications have been limited by its sensitivity to boundary perturbations, which can produce numerous spurious branches and increase the complexity of the medial axis. To address this sensitivity, it is critical to simplify the medial axis. This thesis first investigates algorithms for computing medial axes of different input shapes. Several algorithms for filtering medial axes are then reviewed, such as local importance measurement algorithms, boundary smoothness algorithms, and global algorithms. Two novel simplification algorithms are proposed to generate a stable, simplified medial axis as well as its reconstructed boundary. The Global Geodesic Slope (GGS) algorithm is based on the global geodesic slope defined in this thesis, which combines the advantages of the global and local algorithms. The GGS algorithm prunes the medial axis according to local features as well as the relative size of the shape. It is less sensitive to boundary noise than the local algorithms, and it can maintain the features of the shape in highly concave regions, which the global algorithms may not. The other proposed simplification algorithm is the Accumulated Hyperbolic Distance (AHD) algorithm. It directly uses the error criterion, the accumulated hyperbolic distance defined in this thesis, as the pruning measurement in the filtration process, and it guarantees that the error between the reconstructed shape and the original one stays within the defined threshold. The AHD algorithm avoids sudden changes of the reconstructed shape as the threshold changes.
M.Phil. thesis, Computer Science.
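Both GGS and AHD are, at heart, threshold-driven pruning of medial-axis branches. As a structural illustration only, here is a minimal sketch of that pattern on a medial axis stored as a graph with a radius per node; the importance function is a placeholder standing in for the thesis's actual GGS or AHD measures.

```python
import networkx as nx  # assumption: the medial axis is stored as an undirected graph

def importance(ma: nx.Graph, leaf, parent) -> float:
    # Placeholder measure: stands in for the global geodesic slope (GGS)
    # or accumulated hyperbolic distance (AHD) defined in the thesis.
    return abs(ma.nodes[leaf]["radius"] - ma.nodes[parent]["radius"])

def prune_medial_axis(ma: nx.Graph, threshold: float) -> nx.Graph:
    """Iteratively remove leaf branches whose importance falls below threshold."""
    changed = True
    while changed:
        changed = False
        for leaf in [n for n in ma.nodes if ma.degree(n) == 1]:
            if ma.degree(leaf) != 1:        # degree may have changed during this pass
                continue
            parent = next(iter(ma.neighbors(leaf)))
            if importance(ma, leaf, parent) < threshold:
                ma.remove_node(leaf)        # peel one node off the spurious branch
                changed = True
    return ma
```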
55

New results on online job scheduling

Zhu, Jianqiao, 朱剑桥. January 2013
This thesis presents several new results on online job scheduling. Job scheduling is a basic requirement of many practical computer systems, and scheduling behavior directly affects a system's performance. On the theoretical side, scheduling scenarios are abstracted into scheduling models, which are studied mathematically. In this thesis, we look into a variety of scheduling models under active research, relate them to one another, and organize them into a general picture.

We first study non-clairvoyant scheduling to minimize weighted flow time on two different multi-processor models. In the first model, processors are all identical and jobs can possibly be sped up by running on several processors in parallel. Under the non-clairvoyant model, the online scheduler has no information about the actual job size or the degree of speed-up due to parallelism during the execution of a job, yet it must decide dynamically when, and on how many processors, to run each job. The literature contains several O(1)-competitive algorithms for this problem under the unit-weight multi-processor setting [13, 14] as well as the weighted single-processor setting [5]. This thesis shows the first O(1)-competitive algorithm for weighted flow time in the multi-processor setting. In the second model, we consider processors with different functionalities, where only processors of the same functionality can work on the same job in parallel to achieve some degree of speed-up. Here a job is modeled as a sequence of non-clairvoyant demands for different functionalities. This model derives naturally from classical job shop scheduling, but as far as we know, there is no previous work on scheduling to minimize flow time under this multi-processor model. In this thesis we take a first step toward studying non-clairvoyant scheduling on this model. Motivated by the literature on 2-machine job shop scheduling, we focus on the special case where processors are divided into two types of functionalities, and we show a non-clairvoyant algorithm that is O(1)-competitive for weighted flow time.

This thesis also initiates the study of online scheduling with rejection penalty in the non-clairvoyant setting. In the rejection penalty model, jobs can be rejected with a penalty, and the user cost of a job is defined as the weighted flow time of the job plus the penalty if it is rejected before completion. Previous work on minimizing total user cost focused on the clairvoyant single-processor setting [3, 10] and produced an O(1)-competitive online algorithm for jobs with arbitrary weights and penalties. This thesis gives the first non-clairvoyant algorithms that are O(1)-competitive for minimizing total user cost on a single processor and on multi-processors, when using slightly faster (i.e., (1+ε)-speed for any ε > 0) processors. Note that if no extra speed is allowed, no online algorithm can be O(1)-competitive even for minimizing (unweighted) flow time alone.

The above results assume a processor running at a fixed speed. This thesis extends the above study to the dynamic speed scaling model, where the processor can vary its speed dynamically and the rate of energy consumption is an arbitrary increasing function of speed. A scheduling algorithm has to decide job rejection and determine the order and speed of job execution, and it is interesting to study the tradeoff between the above-mentioned user cost and energy. This thesis gives two O(1)-competitive non-clairvoyant algorithms for minimizing user cost plus energy, on a single processor and on multi-processors, respectively.
M.Phil. thesis, Computer Science.
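For readers outside online algorithms, the O(1)-competitive and (1+ε)-speed claims above use the standard definitions from competitive analysis, restated here for reference.

```latex
% An online algorithm ALG is c-competitive if, for every input
% sequence \sigma and some fixed constant b,
\[
  \mathrm{ALG}(\sigma) \;\le\; c \cdot \mathrm{OPT}(\sigma) + b,
\]
% where OPT is the optimal offline cost. Under resource augmentation,
% ALG is given (1+\epsilon)-speed processors while OPT runs at speed 1;
% "(1+\epsilon)-speed c-competitive" asserts the bound above in that
% setting.
```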
56

New competitive algorithms for online job scheduling

Li, Rongbin, 李榕滨 January 2014
Job scheduling, which greatly impacts system performance, is a fundamental problem in computer science. In this thesis, we study three kinds of scheduling problems: deadline scheduling, due date scheduling, and flow time scheduling. Traditionally, the major concern in scheduling is system performance, i.e., the "Quality of Service" (QoS). Different scheduling problems use different QoS measures: for deadline scheduling, the most common objective is throughput; for due date scheduling, it is the total quoted lead time; and for flow time scheduling, it is the total (weighted) flow time. Recently, energy efficiency has become more and more important, and many modern processors adopt technologies such as dynamic speed scaling and sleep management to reduce energy usage. Much work has been done on energy-efficient scheduling, and in this thesis we study this topic for all three kinds of scheduling mentioned above. We also revisit the traditional flow time scheduling problem to optimize the QoS, but in a more realistic model that makes the problem much more challenging. The problems studied in the thesis are summarized below.

First, we consider the tradeoff between energy and throughput for deadline scheduling. Specifically, each job is associated with a value (or importance) and a deadline. A scheduling algorithm is allowed to discard some of the jobs, and the objective is to minimize total energy usage plus the total value of discarded jobs. When the processor's maximum speed is unbounded, we propose an O(1)-competitive algorithm. When the processor's maximum speed is bounded, we show a strong lower bound and give an algorithm whose competitive ratio is close to that lower bound.

Second, we study energy-efficient due date scheduling. Jobs arrive online with different sizes and weights. An algorithm must assign a due date to each job when it arrives, and complete the job by that due date. The quoted lead time of a job equals its due date minus its arrival time, multiplied by its weight. We propose a competitive algorithm for minimizing the sum of the total quoted lead time and energy usage.

Next, we consider flow time scheduling with power management on multiple machines. Jobs with arbitrary sizes and weights arrive online. Each machine consumes a different amount of energy when processing a job, idling, or sleeping. A scheduler has to maintain a good balance of machine states to avoid energy wastage while guaranteeing high QoS. Our result is an O(1)-competitive algorithm to minimize total weighted flow time plus energy usage.

Finally, we consider traditional preemptive scheduling to minimize total flow time. Previous theoretical results often assume preemption is free, which is not true for most systems. We investigate the complexity of the problem when a processor must perform a certain amount of overhead before it resumes the execution of a previously preempted job. We first show an Ω(n^(1/4)) lower bound, and then propose a (1+ε)-speed (1+1/ε)-competitive algorithm in the resource augmentation model.
Ph.D. thesis, Computer Science.
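The "plus energy" objectives above leave the power function arbitrary except that it increases with speed; a common convention in the speed scaling literature (an assumption here, not something the abstract fixes) is the polynomial model below.

```latex
% Speed scaling: a processor running at speed s(t) draws power
% P(s(t)); a standard modeling choice is P(s) = s^\alpha for some
% \alpha > 1. The flow-time-plus-energy objective then reads
\[
  \min \; \sum_j w_j F_j \;+\; \int_0^{\infty} P\bigl(s(t)\bigr)\,dt,
\]
% where F_j is the flow time (completion time minus arrival time)
% of job j and w_j its weight.
```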
57

Compact representation of medial axis transform

Zhu, Yanshu, 朱妍姝 January 2014
Shape representation is a fundamental topic in geometric modeling, which is ubiquitous in computer graphics. Compared with explicit and implicit shape representations, the medial representation possesses many advantages: it provides a comprehensive understanding of a shape, since it gives direct access to both the boundary and the interior. Although many medial axis computation algorithms can filter the noise in the medial axis introduced by boundary perturbations and generate stable medial axis transforms of the input shapes, the medial axis transforms are usually represented redundantly, with numerous primitives; this reduces the flexibility of the medial axis transform and hinders its adoption in geometric applications. In this thesis, we propose compact representations of the medial axis transforms of 2D and 3D shapes. The first part of this thesis proposes a full pipeline for computing the medial axis transform of an arbitrary 2D shape. The instability of the medial axis transform is overcome by a pruning algorithm guided by a user-defined Hausdorff distance threshold. The stable medial axis transform is then approximated by spline curves in 3D space to produce a smooth and compact representation. These spline curves are computed by minimizing the approximation error between the input shape and the shape represented by the medial axis transform. The second part of this thesis discusses improvements on existing medial axis computation algorithms, and represents the medial axis transform of a 3D shape in a compact way. A CVT (centroidal Voronoi tessellation) remeshing framework is applied to an initial medial axis transform to improve the mesh quality of the medial axis. The simplified medial axis transform is then optimized by minimizing the approximation error between the shape reconstructed from the medial axis transform and the original 3D shape. Our results on various 2D and 3D shapes suggest that our method is practical and effective, and yields faithful and compact representations of medial axis transforms of 2D and 3D shapes.
Ph.D. thesis, Computer Science.
58

CORDIC-based high-speed direct digital frequency synthesis

Kang, Chang Yong 28 August 2008
Abstract not available.
59

Efficient model checking for timing diagrams

Amla, Nina 28 August 2008
Abstract not available.
60

Randomness extractors for independent sources and applications

Rao, Anup, 1980- 28 August 2008
The use of randomized algorithms and protocols is ubiquitous in computer science. Randomized solutions are typically faster and simpler than deterministic ones for the same problem. In addition, many computational problems (for example in cryptography and distributed computing) are impossible to solve without access to randomness. In computer science, access to randomness is usually modeled as access to a string of uncorrelated, uniformly random bits. Although it is widely believed that many physical phenomena are inherently unpredictable, there is a gap between the computer science model of randomness and what is actually available: it is not clear where one could find such a source of uniformly distributed bits. In practice, computers generate random bits in ad hoc ways, with no guarantees on the quality of their distribution. One aim of this thesis is to close this gap and identify the weakest assumption on the source of randomness that would still permit the use of randomized algorithms and protocols. This is achieved by building randomness extractors ... Such an algorithm would allow us to use a compromised source of randomness to obtain truly random bits, which we could then use in our original application.

Randomness extractors are interesting in their own right as combinatorial objects that look random in strong ways. They fall into the class of objects whose existence is easy to check using the probabilistic method (i.e., almost all functions are good randomness extractors), yet finding explicit examples of a single such object is non-trivial. Expander graphs, error-correcting codes, hard functions, epsilon-biased sets, and Ramsey graphs are just a few examples of other such objects. Finding explicit examples of extractors is part of the bigger project, in the area of derandomization, of constructing such objects to reduce the dependence of computer science solutions on randomness; these objects are often used as basic building blocks to solve problems in computer science.

The main results of this thesis are:

Extractors for Independent Sources: The central model that we study is the model of independent sources. Here the only assumption we make (beyond the necessary one that the source of randomness has some entropy/unpredictability) is that the source can be broken up into two or more independent parts. We show how to deterministically extract true randomness from such sources as long as a constant number of sources (as few as 3) is available, each with a small amount of entropy.

Extractors for Small Space Sources: In this model we assume that the source is generated by a computationally bounded process -- a bounded-width branching program or an algorithm that uses small memory. This seems like a plausible model for sources of randomness produced by a defective physical device. We build on our work on extractors for independent sources to obtain extractors for such sources.

Extractors for Low Weight Affine Sources: In this model, we assume that the source gives a random point from some unknown low-dimensional affine subspace with a low-weight basis. This model generalizes the well-studied model of bit-fixing sources. We give new extractors for this model that have exponentially small error, a parameter that is important for an application in cryptography. The techniques that go into solving this problem are inspired by those behind our extractors for independent sources.

Ramsey Graphs: A Ramsey graph is a graph that has no large clique or independent set. We show how to use our extractors, and many other ideas, to construct new explicit Ramsey graphs that avoid cliques and independent sets of the smallest size to date.

Distributed Computing with Weak Randomness: Finally, we give an application of extractors for independent sources to distributed computing. We give new protocols for Byzantine Agreement and Leader Election that work when the players involved only have access to defective sources of randomness, even in the presence of completely adversarial behavior at many players and limited adversarial behavior at every player. In fact, we show how to simulate any distributed computing protocol that assumes each player has access to private truly random bits, with the aid of defective sources of randomness.
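A concrete, classical example of deterministic extraction from two independent sources is the inner product over GF(2); the sketch below illustrates the concept on a toy biased source and is not one of the constructions from the thesis.

```python
import random

def inner_product_extractor(x: int, y: int, n: int) -> int:
    # Inner product over GF(2): parity of the bitwise AND. A classical
    # two-source extractor: if X and Y are independent sources on n bits,
    # each with min-entropy rate above 1/2, the output bit is nearly uniform.
    return bin(x & y & ((1 << n) - 1)).count("1") % 2

n = 64
def biased_source() -> int:
    # Toy weak source: each bit is 1 with probability 0.9 -- heavily
    # biased, but with independent bits (a much easier case than the
    # general min-entropy sources handled in the thesis).
    return sum(int(random.random() < 0.9) << i for i in range(n))

trials = 20_000
ones = sum(inner_product_extractor(biased_source(), biased_source(), n)
           for _ in range(trials))
print(f"fraction of 1s: {ones / trials:.3f}")  # empirically close to 0.5
```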
