41

The limits of network transparency in a distributed programming language

Collet, Raphaël 19 December 2007 (has links)
This dissertation presents a study on the extent and limits of network transparency in distributed programming languages. This property states that the result of a distributed program is the same as if it were executed on a single computer, provided that no failure occurs. The programming language may also be network aware if it allows the programmer to control how a program is distributed and how it behaves on the network. Both properties aim at simplifying distributed programming by making non-functional aspects of a program more modular. We show that network transparency is not only possible, but also practical: it can be efficient, and smoothly extended in the case of partial failure. We give a proof of concept with the programming language Oz and the system Mozart, of which we have reimplemented the distribution support on top of the Distribution Subsystem (DSS). We have extended the language to control which distribution algorithms are used in a program and to reflect partial failures in the language. Both extensions allow the programmer to handle non-functional aspects of a program without breaking the property of network transparency.
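The sketch below is a minimal, hypothetical illustration in Python (not the Oz/Mozart or DSS API) of the proxy idea on which network transparency rests: a remote reference exposes the same interface as a local object, so the calling code is unchanged whether the object is local or remote. All names (InProcessTransport, RemoteRef, Counter) are invented for the example.

```python
class InProcessTransport:
    """Stub transport: a real system would marshal the call over a network."""
    def __init__(self):
        self.registry = {}  # object_id -> local object standing in for a remote one

    def send_request(self, object_id, method, args, kwargs):
        target = self.registry[object_id]
        return getattr(target, method)(*args, **kwargs)


class RemoteRef:
    """Proxy exposing the same interface as the object it stands for."""
    def __init__(self, object_id, transport):
        self._object_id, self._transport = object_id, transport

    def __getattr__(self, method):
        # Forward any method call through the transport instead of running it locally.
        return lambda *args, **kwargs: self._transport.send_request(
            self._object_id, method, args, kwargs)


class Counter:
    def __init__(self):
        self.n = 0

    def incr(self):
        self.n += 1
        return self.n


transport = InProcessTransport()
transport.registry["counter-1"] = Counter()
ref = RemoteRef("counter-1", transport)
# The call site is identical for a local Counter and for the proxy:
assert ref.incr() == 1
assert ref.incr() == 2
```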
42

Lifting Transformations

McAllester, David, Siskind, Jeffrey 01 December 1991 (has links)
Lifting is a well-known technique in resolution theorem proving, logic programming, and term rewriting. In this paper we formulate lifting as an efficiency-motivated program transformation applicable to a wide variety of nondeterministic procedures. This formulation allows the immediate lifting of complex procedures, such as the Davis-Putnam algorithm, which are otherwise difficult to lift. We treat both classical lifting, which is based on unification, and various closely related program transformations which we also call lifting transformations. These nonclassical lifting transformations are closely related to constraint techniques in logic programming, resolution, and term rewriting.
43

A Review of Freely Available Quantum Computer Simulation Software

Brandhorst-Satzkorn, Johan January 2012 (has links)
A study has been made of several freely available quantum computer simulators, all of which are available online on their respective websites. A number of tests have been performed to compare the simulators against each other. Some untested simulators written in various programming languages are included to show the diversity of quantum computer simulator applications. The conclusion of the review is that LibQuantum is the best of the simulators tested because of its ease of coding, its large set of pre-defined function implementations, and its support for decoherence simulation, among other reasons.
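As a hedged illustration of what such simulators compute at their core (not LibQuantum, which is a C library), the Python/NumPy sketch below maintains a state vector for one qubit, applies a Hadamard gate as a unitary matrix, and samples a measurement outcome.

```python
import numpy as np

# Hadamard gate as a 2x2 unitary matrix.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

state = np.array([1.0, 0.0])          # amplitudes of |0> and |1>, starting in |0>
state = H @ state                      # apply the gate: equal superposition

probabilities = np.abs(state) ** 2     # Born rule: approximately [0.5, 0.5]
outcome = np.random.choice([0, 1], p=probabilities)
print(f"P(0) = {probabilities[0]:.2f}, measured {outcome}")
```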
44

Programming Language Evolution and Source Code Rejuvenation

Pirkelbauer, Peter Mathias December 2010 (has links)
Programmers rely on programming idioms, design patterns, and workaround techniques to express fundamental design not directly supported by the language. Evolving languages often address frequently encountered problems by adding language and library support to subsequent releases. By using new features, programmers can express their intent more directly. As new concerns, such as parallelism or security, arise, early idioms and language facilities can become serious liabilities. Modern code sometimes benefits from optimization techniques not feasible for code that uses less expressive constructs. Manual source code migration is expensive, time-consuming, and prone to errors. This dissertation discusses the introduction of new language features and libraries, exemplified by open-methods and a non-blocking growable array library. We describe the relationship of open-methods to various alternative implementation techniques. The benefits of open-methods materialize in simpler code, better performance, and a similar memory footprint when compared to alternative implementation techniques. Based on these findings, we develop the notion of source code rejuvenation, the automated migration of legacy code. Source code rejuvenation leverages enhanced programming language and library facilities by finding and replacing coding patterns that can be expressed through higher-level software abstractions. Raising the level of abstraction improves code quality by lowering software entropy. In conjunction with extensions to programming languages, source code rejuvenation offers an evolutionary trajectory towards more reliable, more secure, and better performing code. We describe the tools that allow us to implement code rejuvenations efficiently. The Pivot source-to-source translation infrastructure and its traversal mechanism form the core of our machinery. In order to free programmers from representation details, we use a light-weight pattern matching generator that turns a C-like input language into pattern matching code. The generated code integrates seamlessly with the rest of the analysis framework. We utilize the framework to build analysis systems that find common workaround techniques for designated language extensions of C++0x (e.g., initializer lists). Moreover, we describe a novel system (TACE, Template Analysis and Concept Extraction) for the analysis of uninstantiated template code. Our tool automatically extracts requirements from the body of template functions. TACE helps programmers understand the requirements that their code de facto imposes on arguments and compare those de facto requirements to formal and informal specifications.
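As a rough illustration of the open-method idea, sketched here in Python rather than the C++ extension the dissertation studies, the snippet below dispatches on the dynamic types of both arguments and lets new overriders be registered outside the classes involved. All names are hypothetical, and a real open-method would also resolve overriders through the class hierarchy rather than by exact type.

```python
_overriders = {}

def overrider(*arg_types):
    """Register a function as the implementation for this combination of types."""
    def register(fn):
        _overriders[arg_types] = fn
        return fn
    return register

def intersect(a, b):
    # Open-method style call: select the overrider from the dynamic types of
    # BOTH arguments (exact-type match only in this toy version).
    return _overriders[(type(a), type(b))](a, b)

class Circle: pass
class Polygon: pass

@overrider(Circle, Circle)
def _(c1, c2):
    return "circle/circle intersection"

@overrider(Circle, Polygon)
def _(c, p):
    return "circle/polygon intersection"

# New overriders can be added here, outside the classes, without editing them.
print(intersect(Circle(), Polygon()))   # -> circle/polygon intersection
```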
45

Shelang: An Implementation of Probabilistic Programming Language and its Applications

Gu, Tianyu January 2015 (has links)
Nowadays, probabilistic models play a significant role in various areas including machine learning, artificial intelligence, and cognitive science. However, as these models become more and more complex, the corresponding programs become hard to maintain and reuse. Meanwhile, current tools do not make probabilistic modeling and machine learning accessible to the working programmer, who has sufficient domain expertise but perhaps not enough expertise in probability theory or machine learning. Probabilistic programming is one possible way to solve this. Indeed, probabilistic programming languages are powerful tools for specifying probabilistic models directly as computer programs. Programmers write ordinary procedures, which are automatically interpreted as statistical distributions, and users can then perform inference over them. This project aims at exploring and implementing a probabilistic programming language, which we name Shelang. We use Scheme, a dialect of Lisp with roots in the λ-calculus, to implement an embedded probabilistic programming language. This paper discusses the design, algorithms, and implementation details of Shelang, presents several usages, and ends with a conclusion.
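The toy Python sketch below (not Shelang itself, which is embedded in Scheme) illustrates the core idea the abstract describes: an ordinary procedure that calls random primitives implicitly defines a distribution, and inference — here, crude rejection sampling — runs over that program. All names and probabilities are made up for the example.

```python
import random

def flip(p=0.5):
    """Random primitive: returns True with probability p."""
    return random.random() < p

def model():
    """An ordinary procedure that implicitly defines a joint distribution."""
    rain = flip(0.2)
    sprinkler = flip(0.1)
    wet_grass = rain or sprinkler or flip(0.01)
    return rain, wet_grass

def rejection_query(model, condition, samples=20_000):
    """Crude inference: keep only runs whose observation satisfies the condition."""
    kept = [rain for rain, wet in (model() for _ in range(samples)) if condition(wet)]
    return sum(kept) / len(kept)

# Estimate P(rain | grass is wet) by running the program many times:
print(rejection_query(model, condition=lambda wet: wet))
```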
46

Lifting the Abstraction Level of Compiler Transformations

Tang, Xiaolong 16 December 2013 (has links)
Production compilers implement optimizing transformation rules for built-in types. What justifies applying these optimizing rules are the axioms that hold for built-in types and the built-in operations supported by these types. Similar axioms also hold for user-defined types and the operations defined on them, and therefore justify a set of optimization rules that may apply to user-defined types. Production compilers, however, do not attempt to construct and apply these optimization rules to user-defined types. Built-in types, together with the axioms that apply to them, are instances of more general algebraic structures. So are user-defined types and their associated axioms. We use the technique of generic programming, a programming paradigm for designing efficient, reusable software libraries, to identify the commonality of classes of types, whether built-in or user-defined, convey the semantics of the classes of types to compilers, design scalable and effective program analyses for them, and eventually apply optimizing rules to the operations on them. In generic programming, algorithms and data structures are defined in terms of such algebraic structures. The same definitions are reused for many types, both built-in and user-defined. This dissertation applies generic programming to compiler analyses and transformations. Analyses and transformations are specified for general algebraic structures, and they apply to all types, both built-in and user-defined.
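A toy version of this idea, far simpler than the dissertation's compiler machinery, is sketched below in Python: an algebraic structure (a monoid) is declared once, and the same rewrite rule "x op identity == x" is then licensed for any type that models it, built-in or user-defined. The types, operations, and costs are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Monoid:
    op: str           # name of the binary operation
    identity: object  # its identity element

# The same algebraic structure is declared for a built-in type (int, +) and a
# user-defined one (Matrix, *); "I" stands in for the identity matrix.
MONOIDS = {
    ("int", "+"): Monoid("+", 0),
    ("Matrix", "*"): Monoid("*", "I"),
}

def simplify(type_name, op, lhs, rhs):
    """Rewrite 'lhs op rhs' to 'lhs' when rhs is the identity of a declared monoid."""
    m = MONOIDS.get((type_name, op))
    if m is not None and rhs == m.identity:
        return lhs
    return (lhs, op, rhs)

print(simplify("int", "+", "x", 0))       # -> 'x'
print(simplify("Matrix", "*", "A", "I"))  # -> 'A': same rule, user-defined type
```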
47

The Semantics, Formal Correctness and Implementation of History Variables in an Imperative Programming Language.

Mallon, Ryan Peter Kingsley January 2006 (has links)
Storing the history of objects in a program is a common task. Web browsers remember which websites we have visited, drawing programs maintain a list of the images we have modified recently, and the undo button in a word processor allows us to go back to a previous state of a document. Maintaining the history of an object in a program has traditionally required programmers either to write specific code for handling the historical data, or to use a library which supports history logging. We propose that maintaining the history of objects in a program could be simplified by providing support at the language level for storing and manipulating the past versions of objects. History variables are variables in a programming language which store not only their current value, but also the values they have contained in the past. Some existing languages do provide support for history variables. However, these languages typically place many limits and restrictions on the use of history variables. In this thesis we discuss a complete implementation of history variables in an imperative programming language. We discuss the semantics of history variables for scalar types, arrays, pointers, strings, and user-defined types. We also introduce an additional construct called an 'atomic block' which allows us to temporarily suspend the logging of a history variable. Using the mathematical system of Hoare logic we formally prove the correctness of our informal semantics for atomic blocks and each of the history variable types we introduce. Finally, we develop an experimental language and compiler with support for history variables. The language and compiler allow us to investigate the practical aspects of implementing history variables and to compare the performance of history variables with their non-history counterparts.
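The following is a minimal Python sketch, not the thesis's language or semantics, of the two constructs described above: a variable that logs its past values, and an atomic block that suspends logging so a whole region of updates appears as a single step in the history.

```python
from contextlib import contextmanager

class HistoryVar:
    """Illustrative history variable: remembers past values."""
    def __init__(self, value):
        self._history = [value]
        self._logging = True

    @property
    def value(self):
        return self._history[-1]

    @value.setter
    def value(self, new):
        if self._logging:
            self._history.append(new)    # keep the old value as history
        else:
            self._history[-1] = new      # inside an atomic block: overwrite

    def past(self, steps_back):
        """Value the variable held `steps_back` assignments ago."""
        return self._history[-1 - steps_back]

    @contextmanager
    def atomic(self):
        """Suspend logging: the whole block appears as a single update."""
        self._history.append(self.value)
        self._logging = False
        try:
            yield self
        finally:
            self._logging = True

x = HistoryVar(1)
x.value = 2
with x.atomic():
    x.value = 99   # intermediate value, not recorded
    x.value = 3
print(x.value, x.past(1), x.past(2))   # 3 2 1
```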
48

Static Timing Analysis of Parallel Systems Using Abstract Execution

Gustavsson, Andreas January 2014 (has links)
The Power Wall has stopped the past trend of increasing processor throughput by increasing the clock frequency and the instruction-level parallelism. Therefore, the current trend in computer hardware design is to expose explicit parallelism to the software level. This is most often done using multiple processing cores situated on a single processor chip. The cores usually share some resources on the chip, such as some level of cache memory (which means that they also share the interconnect, e.g. a bus, to that memory and also all higher levels of memory), and to fully exploit this type of parallel processor chip, programs running on it will have to be concurrent. Since multi-core processors are the new standard, even embedded real-time systems will (and some already do) incorporate this kind of processor and concurrent code. A real-time system is any system whose correctness depends on both its functional and its temporal output. For some real-time systems, a failure to meet the temporal requirements can have catastrophic consequences. Therefore, it is of utmost importance that methods to analyze and derive safe estimations of the timing properties of parallel computer systems are developed. This thesis presents an analysis that derives safe (lower and upper) bounds on the execution time of a given parallel system. The interface to the analysis is a small concurrent programming language, based on communicating and synchronizing threads, that is formally (syntactically and semantically) defined in the thesis. The analysis is based on abstract execution, which is itself based on abstract interpretation techniques commonly used within the field of timing analysis of single-core computer systems, to derive safe timing bounds in an efficient (although over-approximative) way. Basically, abstract execution simulates several possible real executions of the analyzed program in one go. The thesis also proves the soundness of the presented analysis (i.e. that the estimated timing bounds are indeed safe) and includes some examples, each showing different features or characteristics of the analysis.
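A much-simplified illustration of the flavor of such an analysis (not the thesis's method, and with invented instruction costs) is sketched below in Python: values the analysis does not know exactly, such as a loop bound, are abstracted to intervals, and the accumulated execution time is itself an interval giving safe lower and upper bounds.

```python
def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    products = [x * y for x in a for y in b]
    return (min(products), max(products))

# Hypothetical per-construct costs in cycles: (best case, worst case).
COST_ASSIGN = (1, 1)
COST_LOOP_BODY = (3, 5)

def analyze(loop_bound):
    """Safe time bounds for: one assignment, then a loop body repeated n times,
    where the analysis only knows that n lies in the interval `loop_bound`."""
    time = COST_ASSIGN
    time = interval_add(time, interval_mul(loop_bound, COST_LOOP_BODY))
    return time

print(analyze((2, 10)))   # -> (7, 51): safe lower and upper execution-time bounds
```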
49

Computational tool support of open-building design

Guz, Yunus O. January 2006 (has links)
The thesis explores the possible use of parametric object definitions during capacity analysis to support Open Building design processes. The study proposes that design criteria regarding the possible size, position, and relation of design elements can be formulated and modeled parametrically. The developed parametric data can then be used as library objects during the exploration of dwelling unit layout alternatives. Parametric models, holding explicit design information, can be shared, modified, and re-used in different design cases. The process and criteria used in the study are based on the S.A.R. (Stichting Architecten Research) methods described in "Variations - The Systematic Design of Supports", which focuses particularly on residential building types. Parallel to the S.A.R. methods, the study focuses on the spatial capacity analysis between a floor plate and a number of alternative dwelling unit layout arrangements. Other capacity analyses, such as structural, daylight, or thermal performance, can be formulated and studied in a similar way, but are not included in this study. GDL (Geometric Description Language), a programming medium for ArchiCAD software, is used for the production of the parametric models. The Keyenburg housing project designed by Dutch architect Frans Van Der Werf is taken as a base-building model to demonstrate the development and use of the parametric models. Keywords: Open Building, capacity analysis, parametric objects, design constraints, GDL (Geometric Description Language)
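A small Python analogue of a parametric library object (not GDL, and with made-up dimensional rules) is sketched below: the design constraints travel with the object, and a capacity analysis simply instantiates it against a given floor plate.

```python
from dataclasses import dataclass

@dataclass
class DwellingUnit:
    """Parametric library object: the design rules travel with the object."""
    bedrooms: int
    width: float   # metres along the facade

    @property
    def depth(self):
        # Hypothetical rule relating unit depth to the number of bedrooms.
        return 6.0 + 1.5 * self.bedrooms

    def fits(self, plate_width, plate_depth):
        return self.width <= plate_width and self.depth <= plate_depth

def capacity(plate_width, plate_depth, unit):
    """How many units of this parametric type fit side by side on the floor plate."""
    if not unit.fits(plate_width, plate_depth):
        return 0
    return int(plate_width // unit.width)

print(capacity(30.0, 12.0, DwellingUnit(bedrooms=2, width=7.2)))   # -> 4
```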
50

SCOPE: Scalable Clustered Objects with Portable Events

Matthews, Christopher 27 September 2006 (has links)
Writing truly concurrent software is hard; scaling software to fully utilize hardware is one of the reasons why. One abstraction for increasing the scalability of systems software is clustered objects, a proven method of increasing scalability. This thesis explores a user-level abstraction based on clustered objects which increases hardware utilization without requiring any customization of the underlying system. We detail the design, implementation, and testing of Scalable Clustered Objects with Portable Events (SCOPE), a user-level system inspired by an implementation of the clustered objects model from IBM Research’s K42 operating system. To aid the portability of the new system, we introduce the idea of a clustered object event, which is responsible for maintaining the runtime environment of the clustered objects. We show that SCOPE can increase scalability on a simple microbenchmark, and provide most of the benefits that the kernel-level implementation provided.
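The Python sketch below is a rough, user-level illustration of the clustered-object idea (hypothetical, not the SCOPE or K42 implementation): one logical object is internally split into per-thread representatives so that the hot path touches only local state, and a rare global operation combines the representatives.

```python
import threading

class ClusteredCounter:
    """One logical counter, one representative per thread (illustrative sketch)."""
    def __init__(self):
        self._local = threading.local()
        self._reps = []
        self._reps_lock = threading.Lock()

    def _rep(self):
        # Lazily create this thread's representative on first access.
        if not hasattr(self._local, "count"):
            self._local.count = [0]
            with self._reps_lock:
                self._reps.append(self._local.count)
        return self._local.count

    def incr(self):
        self._rep()[0] += 1          # hot path touches only thread-local state

    def value(self):
        with self._reps_lock:        # rare global read combines all representatives
            return sum(rep[0] for rep in self._reps)

counter = ClusteredCounter()
threads = [threading.Thread(target=lambda: [counter.incr() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value())               # -> 4000
```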
