1 |
Reprocessing and model analysis for linear and integer programming models. Amhemad, Abdella Zidan. January 1997.
No description available.
|
2 |
The parallel reduction of lambda calculus expressions. Watson, Paul. January 1986.
No description available.
|
3 |
A common model for ubiquitous computing. Blackstock, Michael Anthony. 11 1900.
Ubiquitous computing (ubicomp) is a compelling vision for how people will interact with multiple computer systems in the course of their daily lives. To date, practitioners have created a variety of infrastructures, middleware and toolkits to provide the flexibility, ease of programming and the necessary coordination of distributed software and hardware components in physical spaces.
However, no one approach has yet been adopted as a default or de facto standard. Consequently, the field risks losing momentum as fragmentation occurs. In particular, the goal of ubiquitous deployments may stall as groups deploy and trial incompatible point solutions in specific locations. In their defense, researchers in the field argue that it is too early to standardize and that room is needed to explore specialized, domain-specific solutions.
In the absence of an agreed-upon set of standards, we argue that the community must consider a methodology that allows systems to evolve and specialize, while at the same time allowing the development of portable applications and integrated deployments that work between sites.
To address this we studied the programming models of many commercial and research ubicomp systems. Through this survey we gained an understanding of the shared abstractions required in a core programming model suitable for both application portability and systems integration.
Based on this study we designed an extensible core model called the Ubicomp Common Model (UCM) to describe a representative sample of ubiquitous systems to date. The UCM is instantiated in a flexible and extensible platform called the Ubicomp Integration Framework (UIF) to adapt ubicomp systems to this model.
Through application development and integration experience with a composite campus environment, we provide strong evidence that this model is adequate for application development and that the complexity of developing adapters to several representative systems is not onerous. The performance overhead introduced by placing the centralized UIF between applications and an integrated system is reasonable. Through careful analysis and the use of well-understood approaches to integration, this thesis demonstrates the value of our methodology, which directly leverages the significant contributions of past research in our quest for ubicomp application and systems interoperability.
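The adapter-based integration this abstract describes can be pictured with a minimal sketch: each native ubicomp system is wrapped behind one common interface so that portable applications only ever see the core model. The interface and system names below (UbicompAdapter, EntityState, RoomServerAdapter, TagTrackerAdapter) are hypothetical illustrations, not the actual UCM or UIF API.

```cpp
// Hypothetical sketch of an adapter-based integration layer (not the actual UIF API).
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Core-model view of an entity's state, shared by all adapted systems.
struct EntityState {
    std::string entityId;
    std::string location;
};

// Each underlying ubicomp system is wrapped behind the same core-model interface.
class UbicompAdapter {
public:
    virtual ~UbicompAdapter() = default;
    virtual EntityState queryEntity(const std::string& entityId) = 0;
};

// Adapter for one hypothetical legacy system; a real adapter would translate
// the core-model query into that system's native protocol.
class RoomServerAdapter : public UbicompAdapter {
public:
    EntityState queryEntity(const std::string& entityId) override {
        return {entityId, "room-101 (via RoomServer protocol)"};
    }
};

class TagTrackerAdapter : public UbicompAdapter {
public:
    EntityState queryEntity(const std::string& entityId) override {
        return {entityId, "corridor-B (via TagTracker protocol)"};
    }
};

// A portable application sees only the common model, never the native systems.
int main() {
    std::vector<std::unique_ptr<UbicompAdapter>> systems;
    systems.push_back(std::make_unique<RoomServerAdapter>());
    systems.push_back(std::make_unique<TagTrackerAdapter>());

    for (const auto& sys : systems) {
        EntityState s = sys->queryEntity("alice");
        std::cout << s.entityId << " is at " << s.location << "\n";
    }
    return 0;
}
```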
|
4 |
Algorithms acceleration of pattern-matching in multi-core architectures. Rodenas Pico, David. 08 July 2011.
The aim of this thesis is to create or adapt programming models in order to make multi-core processors accessible to almost every programmer. This objective includes the ability to reuse existing codes and algorithms, debuggability, and the capacity to introduce changes incrementally. We validate the proposed solutions on several kinds of multi-cores, including homogeneous and heterogeneous systems, and shared-memory and distributed-memory systems. We also contribute by presenting real algorithms and programs and by showing how some of them can be used for near-real-time applications.
|
5 |
High performance communication support for sockets-based applications over high-speed networks. Balaji, Pavan. 19 September 2006.
No description available.
|
6 |
Capsules: expressing composable computations in a parallel programming model. Mandviwala, Hasnain A. 01 October 2008.
A well-known problem in designing high-level parallel programming models and languages is the "granularity problem": executing parallel tasks that are too fine-grained incurs large overheads in the parallel runtime and adversely affects the speed-up that can be achieved by parallel execution. On the other hand, tasks that are too coarse-grained create load imbalance and do not adequately utilize the parallel machine. In this work we address the issue of granularity with the concept of expressing "composable computations" within a parallel programming model called "Capsules".
In Capsules, we provide a unifying framework that allows composition and adjustment of granularity for both data and computation over iteration space and computation space.
The Capsules model allows the user to express not only the granularity of execution, but also the granularity of garbage collection (and therefore the aggressiveness of the GC optimization), and other features that may be supported by the programming model. We argue that this adaptability of execution granularity leads to efficient parallel execution by matching the available application concurrency to the available hardware concurrency, thereby reducing parallelization overhead. By matching, we refer to creating coarse-grained Computation Capsules that encompass multiple fine-grained computation instances. In effect, creating coarse-grained computations reduces overhead simply by reducing the number of parallel computations. Reducing the number of parallel computation instances in turn leads to: (1) reduced synchronization cost, such as that required to access and search shared data structures; (2) reduced distribution and scheduling cost for parallel computation instances; and (3) reduced book-keeping cost, such as maintaining data structures like blocked lists for unfulfilled data requests.
Capsules builds on our prior work, TStreams, a data-flow-oriented parallel programming framework. Our results on a CMP/SMP machine using real vision applications, such as the Cascade Face Detector and the Stereo Vision Depth application, as well as synthetic applications, show benefits in application performance. We use profiling to help determine the optimal coarse-grained serial execution granularity, and provide empirical evidence that adjusting execution granularity reduces parallelization overhead to yield maximum application performance.
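A minimal sketch of the granularity idea described above, written against plain C++ threads rather than the Capsules or TStreams API: many fine-grained computation instances are grouped into one coarse-grained task, so the runtime schedules, synchronizes and book-keeps far fewer units. The names process_item and coarse_capsule, and the block size chosen, are illustrative assumptions.

```cpp
// Illustrative only: grouping fine-grained work into coarse-grained tasks
// reduces the number of scheduled units, mirroring the Capsules idea of
// composing computation over the iteration space (not the Capsules API).
#include <algorithm>
#include <functional>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// One fine-grained computation instance.
static long process_item(long x) { return x * x; }

// A coarse-grained "capsule": one schedulable task covering a block of items.
static void coarse_capsule(const std::vector<long>& in, std::vector<long>& out,
                           size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i) out[i] = process_item(in[i]);
}

int main() {
    const size_t n = 1 << 20;
    const size_t grain = 1 << 18;            // tunable execution granularity
    std::vector<long> in(n, 3), out(n, 0);

    std::vector<std::thread> workers;
    for (size_t begin = 0; begin < n; begin += grain) {
        size_t end = std::min(begin + grain, n);
        // One thread per capsule instead of one per item: far fewer units to
        // schedule, synchronize and book-keep.
        workers.emplace_back(coarse_capsule, std::cref(in), std::ref(out), begin, end);
    }
    for (auto& w : workers) w.join();

    std::cout << "checksum: " << std::accumulate(out.begin(), out.end(), 0L) << "\n";
    return 0;
}
```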
|
7 |
HW-SW components for parallel embedded computing on NoC-based MPSoCs. Joven Murillo, Jaume. 15 March 2010.
Recently, in the on-chip and embedded domain, we are witnessing the growth of the Multi-Processor System-on-Chip (MPSoC) era. Networks-on-chip (NoCs) have been proposed as a viable, efficient, scalable, predictable and flexible solution to interconnect IP blocks on a chip, or full-featured bus-based systems, in order to create highly complex systems. Thus, high-performance embedded computing is arriving through high hardware parallelism and concurrent software stacks that achieve maximum system platform composability and flexibility using pre-designed IP cores. These are the emerging NoC-based MPSoC architectures. However, as the number of IP cores on a single chip increases exponentially, many new challenges arise.
The first challenge is the design of a suitable hardware interconnect that provides adequate Quality of Service (QoS), ensuring certain bandwidth and latency bounds for inter-block communication at minimal power and area cost. Because the NoC design space is huge, simulation and verification environments must be put in place to explore, validate and optimize many different NoC architectures. The second target, nowadays a hot topic, is to provide efficient and flexible parallel programming models for the new generation of highly parallel NoC-based MPSoCs. This makes it mandatory to use lightweight software libraries that can exploit the hardware features present on the execution platform. Using these software stacks and their associated APIs according to a specific parallel programming model lets software application designers reuse and program parallel applications effortlessly at higher levels of abstraction. Finally, to obtain efficient overall system behaviour, a key research challenge is the design of suitable HW/SW interfaces. This is especially crucial in heterogeneous multiprocessor systems, where parallel programming models and middleware functions must abstract the communication resources during high-level specification of software applications.
Thus, the main goal of this dissertation is to enrich the emerging NoC-based MPSoCs by exploring, and contributing engineering and scientific solutions to, the new challenges that have appeared in recent years. This dissertation addresses all of the above points: it describes an experimental environment for designing NoC-based systems, xENoC, and a NoC design space exploration tool named NoCMaker, a framework that enables rapid prototyping and validation of NoC-based MPSoCs; it extends Network Interfaces (NIs) to handle heterogeneous traffic from different bus-based standards (e.g. AMBA, OCP-IP) in order to reuse and interconnect a great variety of off-the-shelf IP cores and software stacks in a way that is transparent to the user; it provides runtime QoS features (best-effort and guaranteed services) through NoC-level hardware components and software middleware routines; it explores HW/SW interfaces and resource sharing when a Floating Point Unit (FPU) co-processor is attached to a NoC-based MPSoC; it ports parallel programming models, such as shared-memory and message-passing models, onto NoC-based MPSoCs, presenting an efficient lightweight parallel programming model based on the Message Passing Interface (MPI), called on-chip Message Passing Interface (ocMPI), which enables parallel distributed computing at task or function level using explicit parallelism and synchronization between the cores integrated on the chip; and it provides runtime application-to-packet QoS support on top of the OpenMP runtime library targeted at shared-memory MPSoCs in order to boost or balance critical applications or threads during execution. The key challenges explored in this dissertation are formalized in a HW-SW communication-centric platform-based design methodology. Thus, the outcome of this work is a robust cluster-on-chip platform for high-performance embedded computing, in which hardware and software components can be reused at multiple levels of design abstraction.
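For readers unfamiliar with the message-passing style that ocMPI adopts, the fragment below uses the standard MPI C API to show the explicit task-level send/receive pattern such an on-chip library exposes between cores. It is plain MPI, not ocMPI's actual interface, and the work-distribution scheme is an illustrative assumption.

```cpp
// Standard MPI illustration of the explicit message-passing style that an
// on-chip MPI library exposes between cores (plain MPI, not ocMPI).
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // which task/core am I?
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // how many tasks in total?

    if (rank == 0) {
        // Task 0 distributes one work descriptor to every other task.
        for (int dst = 1; dst < size; ++dst) {
            int work = dst * 100;
            MPI_Send(&work, 1, MPI_INT, dst, /*tag=*/0, MPI_COMM_WORLD);
        }
    } else {
        int work = 0;
        MPI_Recv(&work, 1, MPI_INT, /*src=*/0, /*tag=*/0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        std::printf("task %d received work item %d\n", rank, work);
    }

    MPI_Finalize();
    return 0;
}
```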
|
8 |
Shared Memory Abstractions for Heterogeneous Multicore Processors. Schneider, Scott. 21 January 2011.
We are now seeing diminishing returns from classic single-core processor designs, yet the number of transistors available for a processor is still increasing. Processor architects are therefore experimenting with a variety of multicore processor designs. Heterogeneous multicore processors with Explicitly Managed Memory (EMM) hierarchies are one such experimental design, offering the potential for high performance but at the cost of great programmer effort. EMM processors have cores that are divorced from the normal memory hierarchy, so the onus is on the programmer to manage locality and parallelism. This dissertation presents the Cellgen source-to-source compiler, which moves some of this complexity back into the compiler. Cellgen offers a directive-based programming model with semantics similar to OpenMP for the Cell Broadband Engine, a general-purpose processor with EMM. The compiler implicitly handles locality and parallelism, schedules memory transfers for data-parallel regions of code, and provides performance predictions which can be leveraged to make scheduling decisions. We compare this approach to using a software cache, to a different programming model which is task based with explicit data transfers, and to programming the Cell directly using the native SDK. We also present a case study which uses the Cellgen compiler in a comparison across multiple kinds of multicore architectures: heterogeneous, homogeneous and radically data-parallel graphics processors. / Ph. D.
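Because Cellgen's directives are described as having OpenMP-like semantics, a standard OpenMP data-parallel region conveys the programming style: the programmer marks the loop, and the compiler and runtime handle parallelism (on an EMM processor like the Cell, Cellgen would additionally schedule the memory transfers into each core's local store). The pragma below is standard OpenMP, not Cellgen's own directive syntax, and the vector-add kernel is an illustrative assumption.

```cpp
// Standard OpenMP data-parallel region, illustrating the directive style
// Cellgen adopts; Cellgen itself would also generate the DMA transfers
// needed to stream a[] and b[] through local store on the Cell's SPEs.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // The programmer marks the data-parallel loop; locality and parallelism
    // are handled implicitly by the compiler and runtime.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        c[i] = a[i] + 2.0f * b[i];
    }

    std::printf("c[0] = %.1f, c[n-1] = %.1f\n", c[0], c[n - 1]);
    return 0;
}
```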
|