61 |
Technogeopolitics of militarization and security in cyberspace. Yannakogeorgos, Panayotis, January 2009 (has links)
Thesis (Ph. D.)--Rutgers University, 2009. / "Graduate Program in Global Affairs." Includes bibliographical references (p. 225-248).
|
62 |
Efficient fault tolerance for pipelined structures and its application to superscalar and dataflow machines. Mizan, Elias, 1976-, 10 October 2012 (has links)
Silicon reliability has reemerged as a very important problem in digital system design. As voltage and device dimensions shrink, combinational logic is becoming sensitive to temporary errors caused by single event upsets, transistor and interconnect aging, and circuit variability. In particular, computational functional units are very challenging to protect because current redundant execution techniques have a high power and area overhead, cannot guarantee detection of some errors, and cause a substantial performance degradation. As traditional worst-case design rules that guarantee error avoidance become too conservative to be practical, new microarchitectures need to be investigated to address this problem. To this end, this dissertation introduces Self-Imposed Temporal Redundancy (SITR), a speculative microarchitectural temporal redundancy technique suitable for pipelined computational functional units. SITR is able to detect most temporary errors, is area- and energy-efficient, and can be easily incorporated in an out-of-order microprocessor. SITR can also be used as a throttling mechanism against thermal viruses and, in some cases, allows designers to build very aggressive bypass networks capable of achieving high instruction throughput by tolerating timing violations. To address the performance degradation caused by redundant execution, this dissertation proposes using a tiled-dataflow model of computation because it enables the design of scalable, resource-rich computational substrates. Starting with the WaveScalar tiled-dataflow architecture, we enhance the reliability of its datapath, including the computational logic, interconnection network and storage structures. Computations are performed speculatively using SITR, while traditional information redundancy techniques are used to protect data transmission and storage. Once a value has been verified, confirmation messages are transmitted to consumer instructions. Upon error detection, nullification messages are sent to the instructions affected by the error. Our experimental results demonstrate that the slowdown due to redundant computation and error recovery on the tiled-dataflow machine is consistently smaller than on a superscalar von Neumann architecture. However, the number of additional messages required to support SITR execution is substantial, increasing power consumption. To reduce this overhead without significantly affecting performance, we introduce wave-based speculation, a mechanism targeted at dataflow architectures that enables speculation only when it is likely to benefit performance.
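A minimal Python sketch of the confirm/nullify idea described above, for illustration only (it is not taken from the dissertation, and all names such as Token, execute_speculatively and verify are invented): a result is forwarded speculatively, and a later redundant check either confirms the value or nullifies the computations that consumed it.

    from dataclasses import dataclass, field

    @dataclass
    class Token:
        value: int
        verified: bool = False
        consumers: list = field(default_factory=list)  # downstream tokens that used this value

    def execute_speculatively(op, a, b):
        # Forward the result immediately; verification happens later (SITR-style).
        return Token(value=op(a, b))

    def verify(token, recomputed_value):
        # Redundant re-execution: confirm on match, nullify dependents on mismatch.
        if token.value == recomputed_value:
            token.verified = True              # "confirmation" message to consumers
            return True
        for dependent in token.consumers:      # "nullification" messages downstream
            dependent.verified = False
        token.value = recomputed_value         # recover from the transient error
        return False

    t = execute_speculatively(lambda x, y: x + y, 2, 3)
    print(verify(t, 2 + 3))                    # True: speculation confirmed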
|
63 |
Vartotojo sąsajos modeliavimas duomenų srautų specifikacijos pagrindu / User Interface Design Based on the Data Flow Specification. Eidukynaitė, Vilma, 29 May 2006 (has links)
The user interface is the direct mediator between the user and the system. It is one of the main factors that determine how smoothly, and with what time and resources, a system can be integrated into a business process and how quickly its deployment can be performed. The user interface is one of the most important concerns in software design, because it affects the quality and pace of project implementation. Software design methodologies based on the Unified Modeling Language (UML) and Oracle CASE, as well as approaches introduced by C. Finkelstein, D. J. Anderson, V. Balasubramanian, A. Granlund, D. Lafreniere and D. Carr, are analyzed in this paper. A user interface modeling method based on a data flow specification is presented in this work, and a software prototype for modeling user interfaces with this method is implemented.
|
64 |
A spectral method for mapping dataflow graphs. Elling, Volker Wilhelm, January 1998 (has links)
No description available.
|
65 |
Erbium: Reconciling languages, runtimes, compilation and optimizations for streaming applications. Miranda, Cupertino, 11 February 2013 (has links) (PDF)
As transistor scaling and power limitations struck the computer industry, hardware parallelism arose as the solution, bringing old, forgotten problems back into the equation in order to overcome the limitations of current parallel technologies. Compilers regain focus as the most relevant piece of the puzzle in the quest for the computer performance improvements predicted by Moore's law, which are no longer possible without parallelism. Research on parallelism focuses mainly on language or architectural aspects and does not give the needed attention to compiler problems; this is the reason for the weak compiler support of many parallel languages and architectures, which prevents their performance from being fully exploited. This thesis addresses these problems by presenting: Erbium, a low-level streaming data-flow language supporting communication among multiple producer and consumer tasks; a very efficient runtime implementation for x86 architectures that also addresses other types of architectures; an integration of the language as an intermediate representation in GCC; and a study of the dependencies of the language primitives, which allows compilers to further optimise Erbium code not only through parallelism-specific optimisations but also through traditional compiler optimisations such as partial redundancy elimination and dead code elimination.
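The abstract does not show Erbium's concrete syntax, so the following Python sketch only illustrates, under that caveat, the multiple-producer/multiple-consumer streaming pattern the language is said to support, using a bounded queue as the communication channel between tasks.

    import queue, threading

    stream = queue.Queue(maxsize=8)          # bounded channel shared by all tasks
    SENTINEL = object()

    def producer(pid, count):
        for i in range(count):
            stream.put((pid, i))              # push a record into the stream

    def consumer(results):
        while True:
            item = stream.get()
            if item is SENTINEL:
                break
            results.append(item)

    results = []
    producers = [threading.Thread(target=producer, args=(p, 4)) for p in range(2)]
    consumers = [threading.Thread(target=consumer, args=(results,)) for _ in range(2)]
    for t in producers + consumers:
        t.start()
    for t in producers:
        t.join()
    for _ in consumers:
        stream.put(SENTINEL)                  # one sentinel per consumer to shut down
    for t in consumers:
        t.join()
    print(len(results))                       # 8 records consumed in total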
|
66 |
Capsules: expressing composable computations in a parallel programming model. Mandviwala, Hasnain A., 01 October 2008 (links)
A well-known problem in designing high-level parallel programming models and languages is the "granularity problem": parallel tasks that are too fine-grain incur large overheads in the parallel runtime and adversely affect the speed-up that can be achieved by parallel execution. On the other hand, tasks that are too coarse-grain create load imbalance and do not adequately utilize the parallel machine. In this work we attempt to address the issue of granularity with the concept of expressing "composable computations" within a parallel programming model called "Capsules".
In Capsules, we provide a unifying framework that allows composition and adjustment of granularity for both data and computation over iteration space and computation space.
The Capsules model allows the user to express not only the decision on granularity of execution, but also the decision on the granularity of garbage collection (and therefore the aggressiveness of the GC optimization), and other features that may be supported by the programming model. We argue that this adaptability of execution granularity leads to efficient parallel execution by matching the available application concurrency to the available hardware concurrency, thereby reducing parallelization overhead. By matching, we refer to creating coarse-grain Computation Capsules that encompass multiple fine-grain computation instances. In effect, creating coarse-grain computations reduces overhead by simply reducing the number of parallel computations. Reducing parallel computation instances in turn leads to: (1) reduced synchronization cost, such as that required to access and search shared data structures; (2) reduced distribution and scheduling cost for parallel computation instances; and (3) reduced book-keeping costs, consisting of maintaining data structures such as blocked lists for unfulfilled data requests.
Capsules builds on our prior work, TStreams, a data-flow oriented parallel programming framework. Our results on a CMP/SMP machine using real vision applications, such as the Cascade Face Detector and the Stereo Vision Depth application, as well as other synthetic applications, show benefits in application performance. We use profiling to help determine the optimal coarse-grain serial execution granularity, and provide empirical evidence that adjusting execution granularity reduces parallelization overhead to yield maximum application performance.
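A hypothetical Python sketch of the granularity idea described above (not the Capsules API, which the abstract does not give): many fine-grain computation instances are grouped into coarse-grain units so that scheduling and synchronization costs are paid once per group rather than once per instance.

    from concurrent.futures import ThreadPoolExecutor

    def fine_grain(x):
        return x * x

    def capsule(chunk):
        # one schedulable unit that runs many fine-grain instances serially
        return [fine_grain(x) for x in chunk]

    def run(data, grain):
        chunks = [data[i:i + grain] for i in range(0, len(data), grain)]
        with ThreadPoolExecutor() as pool:
            out = []
            for part in pool.map(capsule, chunks):   # map preserves chunk order
                out.extend(part)
            return out

    data = list(range(1000))
    # Same results either way, but grain=64 issues ~16 parallel tasks instead of
    # 1000, so per-task scheduling and bookkeeping overhead is paid far fewer times.
    assert run(data, grain=1) == run(data, grain=64)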
|
67 |
A micro data flow (MDF): a data flow approach to self-timed VLSI system design for DSP. Merani, Lalit T., 24 August 1993 (has links)
Synchronization is one of the important issues in digital system design. While other approaches have been intriguing, up until now a globally clocked timing discipline has been the dominant design philosophy. However, with advances in technology, we have reached the point where other options should be given serious consideration. VLSI promises great processing power at low cost. This increase in computation power has been obtained by scaling the digital IC process. But as this scaling continues, it is doubtful that the advantages of faster devices can be fully exploited, because clock periods are becoming much smaller in relation to interconnect propagation delays, even within a single chip and certainly at the board and backplane level.
In this thesis, some alternative approaches to synchronization in digital system design are described and developed. We owe these techniques to a long history of effort in both digital computational system design and digital communication system design. The latter field is relevant because large propagation delays have always been a dominant consideration in its design methods.
Asynchronous design gives better performance than comparable synchronous design in situations where global synchronization with a high-speed clock becomes a constraint on system throughput. Asynchronous circuits with unbounded gate delays, or self-timed digital circuits, can be designed by employing either of two request-acknowledge protocols: 4-cycle and 2-cycle.
We also present an alternative approach to the problem of mapping computation algorithms directly into asynchronous circuits. A data flow graph or language is used to describe the computation algorithms. The data flow primitives have been designed using both the 2-cycle and 4-cycle signaling schemes, which are compared in terms of performance and transistor count. The 2-cycle implementations prove to be better than their 4-cycle counterparts.
A promising application of self-timed design is in high-performance DSP systems. Since there is no global constraint of clock distribution, localized forward-only connections allow computation to be extended and sped up using pipelining. A decimation filter was designed and simulated to check the system-level performance of the two protocols. Simulations were carried out using VHDL for high-level definition of the design. The simulation results demonstrate not only the efficacy of our synthesis procedure but also the improved efficiency of the 2-cycle scheme over the 4-cycle scheme. / Graduation date: 1994
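The difference between the two request-acknowledge protocols can be illustrated with a small Python sketch (an assumption-laden simulation, not the thesis's VHDL models): 4-cycle signaling returns both wires to zero after every transfer, while 2-cycle signaling treats every transition as an event, roughly halving the number of signal events per transfer.

    def four_cycle(n_transfers):
        """Return-to-zero handshake: req and ack each rise and fall per transfer."""
        req = ack = 0
        events = []
        for _ in range(n_transfers):
            req = 1; events.append(("req", req))   # sender asserts request (data valid)
            ack = 1; events.append(("ack", ack))   # receiver latches data, acknowledges
            req = 0; events.append(("req", req))   # sender withdraws request
            ack = 0; events.append(("ack", ack))   # receiver returns to idle
        return events

    def two_cycle(n_transfers):
        """Transition signaling: any toggle of req means new data, any toggle of ack means consumed."""
        req = ack = 0
        events = []
        for _ in range(n_transfers):
            req ^= 1; events.append(("req", req))
            ack ^= 1; events.append(("ack", ack))
        return events

    print(len(four_cycle(10)), len(two_cycle(10)))  # 40 vs 20 signal events for 10 transfers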
|
68 |
Automatic relative debugging. Searle, Aaron James, January 2006 (has links)
Relative Debugging is a paradigm that assists users to locate errors in programs that have been corrected or enhanced. In particular, the contents of key data structures in the development version are compared with the contents of the corresponding data structures, in an existing version, as the two programs execute. If the values of two corresponding data structures differ at points where they should not, an error may exist and the user is notified. Relative Debugging requires users to identify the corresponding data structures within the two programs, and the locations at which the comparisons should be performed. To quickly and effectively identify useful data structures and comparison points requires that users have a detailed knowledge of the two programs under consideration. Without a detailed knowledge of the two programs, the task of locating useful data structures and comparison points can quickly become a difficult and time consuming process. Prior to the research detailed in this thesis, the Relative Debugging paradigm did not provide any assistance that allowed users to quickly and effectively identify suitable data structures and program points that will help discover the source of an error. Our research efforts have been directed at enhancing the Relative Debugging paradigm. The outcome of this research is the discovery of techniques that empower Relative Debugging users to become more productive and allow the Relative Debugging paradigm to be significantly enhanced. Specifically, the research has resulted in the following three contributions: 1. A Systematic Approach to Relative Debugging. 2. Data Flow Browsing for Relative Debugging. 3. Automatic Relative Debugging. These contributions have enhanced the Relative Debugging paradigm and allow errors to be localized with little human interaction. Minimizing the user's involvement reduces the cost of debugging programs that have been corrected or enhanced, and has a significant impact on current debugging practices.
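A minimal Python sketch of the Relative Debugging idea (all function names are invented; this is not the thesis's tool): the contents of corresponding data structures from a reference version and a development version are compared at a chosen comparison point, and any divergence is reported to the user.

    def reference_version(data):
        return [x * 2 for x in data]

    def development_version(data):
        # "enhanced" version that should compute the same values,
        # but with a subtle type bug on the negative branch
        return [x << 1 if x >= 0 else x * 2.0 for x in data]

    def relative_debug(data, tolerance=0.0):
        ref = reference_version(data)
        dev = development_version(data)
        # comparison point: the two result lists should match element-wise
        for i, (r, d) in enumerate(zip(ref, dev)):
            if abs(r - d) > tolerance or type(r) is not type(d):
                print(f"divergence at index {i}: reference={r!r} development={d!r}")

    relative_debug([-2, -1, 0, 1, 2])   # reports the two indices where the versions differ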
|
69 |
Flow grammars: a methodology for automatically constructing static analyzers. Uhl, James S., 12 June 2018 (has links)
A new control flow model called flow grammars is introduced which unifies the treatment of intraprocedural and interprocedural control flow. This model provides excellent support for the rapid prototyping of flow analyzers. Flow grammars are an easily understood, easily constructed and flexible representation of control flow, forming an effective bridge between the usual control flow graph model of traditional compilers and the continuation-passing style of denotational semantics. A flow grammar semantics is given and is shown to conservatively summarize the effects of all possible executions generated by a flow grammar. Various interpretations of flow grammars for data flow analysis are explored, including a novel bidirectional interprocedural variant. Several algorithms for solving the equations arising from these interpretations, based on a similar technique called grammar flow analysis, are given. Flow grammars were developed as the basis for FACT (Flow Analysis Compiler Tool), a compiler construction tool for the automatic construction of flow analyzers. Several important analyses from the literature are cast in the flow grammar framework and their implementation in a FACT prototype is discussed. / Graduate
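A small Python sketch, under the assumption that a flow grammar can be represented as productions over program fragments (this is an illustration, not the FACT tool): a simple may-analysis ("which statements can execute?") is solved by fixpoint iteration over the grammar, in the spirit of grammar flow analysis.

    # Nonterminals are program fragments; terminals are atomic statements.
    # Each production lists one possible way a fragment can unfold.
    grammar = {
        "Main": [["x=1", "Loop", "print"]],
        "Loop": [[], ["x=x+1", "Call", "Loop"]],     # zero or more iterations
        "Call": [["enter_f", "Body", "exit_f"]],     # interprocedural edge
        "Body": [["y=x"]],
    }

    def reachable_statements(grammar, start="Main"):
        """Least fixpoint: the set of terminals each nonterminal may execute."""
        facts = {nt: set() for nt in grammar}
        changed = True
        while changed:
            changed = False
            for nt, productions in grammar.items():
                for prod in productions:
                    for sym in prod:
                        new = facts[sym] if sym in grammar else {sym}
                        if not new <= facts[nt]:
                            facts[nt] |= new
                            changed = True
        return facts[start]

    print(sorted(reachable_statements(grammar)))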
|
70 |
Diretrizes para projetos de edifícios de escritórios. / Office buildings design guidelines. Ana Wansul Liu, 05 April 2010 (has links)
The complexity of office building design development is related to difficulties in reconciling the interests of all the players involved (owners, designers, contractors and end-users) and to the increasing diversity of specialist designers. Clarity about the key decisions, and about who should make them, during the conceptual design phase is imperative for the technical, constructive and commercial feasibility of the project, and design management must have complete control of these aspects. The aim is to investigate which critical pieces of information from the several design disciplines should be defined during this conceptual phase, and their correct insertion sequence in the design process. The research is based on the case study method, and the studied object has distinctive conditions: the design team contractor is a real estate company that fully understands office building market needs, holds an experienced technical team able to evaluate and select constructive solutions, and is also a facility manager, operating the completed building. As a result, its design decisions actually focus on the project's entire life cycle, which is not common in the Brazilian market. In conclusion, the development of an information flow is proposed for the conceptual design phase, indicating where each piece of information should enter the design process; this helps to clarify the correct role of each player involved and constitutes a useful tool for design management.
|