1. Fine-Grained Specification and Control of Data Flows in Web-based User Interfaces. Book, Matthias; Gruhn, Volker; Richter, Jan. 04 December 2018
When building process-intensive web applications, developers typically spend considerable effort on the exchange of specific data entities between specific web pages and operations under specific conditions, as called for by business requirements. Since the WWW infrastructure provides only very coarse data exchange mechanisms, we introduce a notation for the design of fine-grained conditional data flows between user interface components. These specifications can be interpreted by a data flow controller that automatically provides the data entities to the specified receivers at run-time, relieving developers of the need to implement user interface data flows manually.
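As an illustrative aside (not the notation defined in the thesis), a minimal TypeScript sketch of such a controller might look as follows; the names FlowSpec, DataFlowController, and the page/operation identifiers are assumptions made for the example.

```typescript
// Hypothetical sketch: conditional data-flow specifications between UI components,
// interpreted by a controller that forwards entities to receivers at run-time.
// All names here are illustrative assumptions, not the thesis notation.

type Entity = Record<string, unknown>;

interface FlowSpec {
  source: string;                      // emitting page or operation
  target: string;                      // receiving page or operation
  entity: string;                      // name of the data entity to forward
  condition: (e: Entity) => boolean;   // flow only when this predicate holds
}

class DataFlowController {
  private specs: FlowSpec[] = [];
  private receivers = new Map<string, (entity: string, data: Entity) => void>();

  register(spec: FlowSpec): void {
    this.specs.push(spec);
  }

  subscribe(target: string, handler: (entity: string, data: Entity) => void): void {
    this.receivers.set(target, handler);
  }

  // Called when a source component produces a data entity; the controller
  // forwards it to every receiver whose spec matches and whose condition holds.
  emit(source: string, entity: string, data: Entity): void {
    for (const spec of this.specs) {
      if (spec.source === source && spec.entity === entity && spec.condition(data)) {
        this.receivers.get(spec.target)?.(entity, data);
      }
    }
  }
}

// Usage: forward an "order" entity from a checkout page to a payment operation
// only when the order total is positive (hypothetical page/operation names).
const controller = new DataFlowController();
controller.register({
  source: "checkoutPage",
  target: "paymentOperation",
  entity: "order",
  condition: (order) => (order.total as number) > 0,
});
controller.subscribe("paymentOperation", (entity, data) =>
  console.log(`received ${entity}`, data));
controller.emit("checkoutPage", "order", { id: 42, total: 99.5 });
```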
2. Multicommodity and generalized flow algorithms: theory and practice. Oldham, Jeffrey David. January 1900
Thesis (Ph. D.)--Stanford University, 1999. / Title from metadata (viewed May 9, 2002). "August 1999." "Adminitrivia V1/Prg/19990823"--Metadata.
3. Specification and solution of multisource data flow problems. Fiskio-Lasseter, John Howard Eli. January 2006
Thesis (Ph. D.)--University of Oregon, 2006. / Typescript. Includes vita and abstract. Includes bibliographical references (leaves 150-162). Also available for download via the World Wide Web; free to University of Oregon users.
4. Micro data flow processor design. Chang, Chih-ming. 24 September 1993
Computers have evolved rapidly during the past several decades in terms of their implementation technology; their architecture, however, has not changed dramatically since the von Neumann (control-flow) computer model emerged in the 1940s. One main reason is that the performance of this kind of computer was able to satisfy the requirements of most users. Another reason may be that the engineers who designed them were more familiar with this model. However, recent solutions to the problem of parallelizing inherently sequential instructions on a von Neumann machine complicate both the compiler and the controller design. Therefore, another computer model, namely the data flow model, has regained attention, since this model of computation naturally exposes the parallelism inherent in a program.
In terms of implementation methodology, we currently use synchronous sequential logic, which relies on a clock for synchronization within circuits. This design philosophy becomes hard to follow because clock skew worsens as clock frequencies increase. One way to eliminate these clock-related problems is to use a self-timed (asynchronous) implementation methodology, which offers advantages such as freedom from clock skew, low power consumption, composability, and so forth.
Since the data flow (data-driven) computation model executes instructions asynchronously, it is natural to implement a data flow processor using self-timed circuits. In this thesis, micropipelines, one of the self-timed implementation methodologies, are used to implement a preliminary version of a general-purpose static data flow processor. Some interesting observations are also addressed in this thesis. An example program implementing a general recursive difference equation is given to test the correctness and performance of this processor. We hope to gain more insight into how to design and implement self-timed systems in the future. / Graduation date: 1994
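As an illustrative aside, the static data-flow firing rule that such a processor realizes in hardware can be sketched in software. The TypeScript below, the node names, and the first-order difference equation y[n] = a*y[n-1] + b*x[n] are assumptions standing in for the "general difference recursive equation" mentioned above, not the thesis design.

```typescript
// Hypothetical software sketch of the static data-flow firing rule:
// a node fires as soon as a token is present on each of its input arcs,
// so parallelism is exposed by data availability alone (no program counter).
// Names and the equation y[n] = a*y[n-1] + b*x[n] are illustrative assumptions.

interface DfNode {
  inputs: string[];                        // arcs this node consumes tokens from
  output: string;                          // arc this node places its result token on
  op: (args: number[]) => number;
}

function runDataflow(nodes: DfNode[], tokens: Map<string, number>): Map<string, number> {
  let fired = true;
  while (fired) {                          // keep sweeping until no node can fire
    fired = false;
    for (const node of nodes) {
      if (node.inputs.every((arc) => tokens.has(arc))) {
        const args = node.inputs.map((arc) => tokens.get(arc)!);
        node.inputs.forEach((arc) => tokens.delete(arc));  // static rule: consume input tokens
        tokens.set(node.output, node.op(args));
        fired = true;
      }
    }
  }
  return tokens;
}

// One step of y[n] = a*y[n-1] + b*x[n] as three dataflow nodes; the two
// multiplications are independent and could fire concurrently.
const graph: DfNode[] = [
  { inputs: ["a", "yPrev"], output: "t1", op: ([a, y]) => a * y },
  { inputs: ["b", "x"], output: "t2", op: ([b, x]) => b * x },
  { inputs: ["t1", "t2"], output: "y", op: ([p, q]) => p + q },
];
const result = runDataflow(
  graph,
  new Map<string, number>([["a", 0.5], ["yPrev", 2], ["b", 1.5], ["x", 4]]),
);
console.log(result.get("y")); // 0.5*2 + 1.5*4 = 7
```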
5. Removing unimportant computations in interprocedural program analysis. Tok, Teck Bok, 1973-. 29 August 2008
Not available
6. Removing unimportant computations in interprocedural program analysis. Tok, Teck Bok. January 1900
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references and index.
7. Dataflow analysis on game narratives. Zhang, Peng. January 1900
Written for the School of Computer Science. Title from title page of PDF (viewed 2009/013/09). Includes bibliographical references.
8. Data-flow vs control-flow for extreme level computing. Evripidou, P.; Kyriacou, Costas. January 2013
This paper challenges the current thinking for building High Performance Computing (HPC) systems, which is based on sequential computing, also known as the von Neumann model, by proposing novel systems based on the dynamic data-flow model of computation. The switch to multi-core chips has brought parallel processing into the mainstream. The computing industry and research community were forced to make this switch because they hit the power and memory walls. Will the same happen with HPC? The United States, through its DARPA agency, commissioned a study in 2007 to determine what kind of technologies would be needed to build an exaflop computer. The head of the study was very pessimistic about the possibility of having an exaflop computer in the foreseeable future. We believe that many of the findings that caused the pessimistic outlook were due to the limitations of the sequential model. A paradigm shift might be needed in order to achieve affordable exascale-class supercomputers.
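As an illustrative aside, dynamic data-flow execution is commonly realized by tagged-token matching, where operands carry a context tag so that several instances of the same code can be in flight at once. The TypeScript sketch below, including the Token and MatchingStore names and the tag scheme, is an assumption for illustration, not the authors' system.

```typescript
// Hypothetical sketch of tagged-token (dynamic) data-flow matching:
// an instruction fires when all operands carrying the SAME tag have arrived,
// so independent iterations/contexts proceed concurrently, driven only by data.
// Names (Token, MatchingStore) are illustrative assumptions.

interface Token {
  instr: string;      // destination instruction
  port: number;       // operand slot (0 or 1)
  tag: number;        // iteration / context identifier
  value: number;
}

class MatchingStore {
  private pending = new Map<string, Token>();

  // Returns the matched operand pair when the second token arrives, else null.
  accept(t: Token): [number, number] | null {
    const key = `${t.instr}:${t.tag}`;
    const partner = this.pending.get(key);
    if (partner === undefined) {
      this.pending.set(key, t);
      return null;
    }
    this.pending.delete(key);
    return t.port === 0 ? [t.value, partner.value] : [partner.value, t.value];
  }
}

// Two iterations (tags 0 and 1) of a multiply instruction proceed independently;
// whichever operand pair completes first fires first, regardless of arrival order.
const store = new MatchingStore();
const arrivals: Token[] = [
  { instr: "mul", port: 0, tag: 1, value: 7 },
  { instr: "mul", port: 0, tag: 0, value: 3 },
  { instr: "mul", port: 1, tag: 1, value: 2 },
  { instr: "mul", port: 1, tag: 0, value: 5 },
];
for (const t of arrivals) {
  const ops = store.accept(t);
  if (ops) console.log(`mul fires for tag ${t.tag}: ${ops[0] * ops[1]}`);
}
// Tag 1 fires (14) before tag 0 (15), purely by data availability.
```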
9. An Investigation of Data Flow Patterns Impact on Maintainability When Implementing Additional Functionality. Magnusson, Erik; Grenmyr, David. January 2016
JavaScript is breaking ground with the wave of new client-side frameworks. However, there are some key differences between them, and one major distinction is the data flow pattern they apply. As of now, there are two predominant patterns used in client-side frameworks: the two-way data flow pattern and the unidirectional data flow pattern. In this research, an empirical experiment was conducted to test the data flow patterns' impact on maintainability. The scope of maintainability in this research is defined by a set of metrics: the number of lines of code, the number of files, and the number of dependencies. By analyzing the results, a conclusion could not be made that the data flow pattern affects maintainability using this research method.
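As an illustrative aside, the two patterns compared can be sketched as follows in TypeScript; the Store, reducer, and TwoWayModel names are assumptions for the example and do not reproduce the frameworks used in the experiment.

```typescript
// Hypothetical sketch contrasting the two data flow patterns.
// Names (Store, reducer, TwoWayModel) are illustrative assumptions.

// Unidirectional: changes flow one way, view -> action -> reducer -> new state -> view.
interface State { count: number }
type Action = { type: "increment" } | { type: "set"; value: number };

function reducer(state: State, action: Action): State {
  if (action.type === "increment") return { count: state.count + 1 };
  return { count: action.value };
}

class Store {
  constructor(private state: State, private render: (s: State) => void) {}
  dispatch(action: Action): void {
    this.state = reducer(this.state, action);   // single place where state changes
    this.render(this.state);                    // view is re-derived from state
  }
}

// Two-way: the view writes the model directly and the model writes the view back.
class TwoWayModel {
  private listeners: ((v: number) => void)[] = [];
  private _count = 0;
  get count(): number { return this._count; }
  set count(v: number) {                        // any binding can mutate the model...
    this._count = v;
    this.listeners.forEach((l) => l(v));        // ...and every bound view is updated
  }
  bind(listener: (v: number) => void): void { this.listeners.push(listener); }
}

const store = new Store({ count: 0 }, (s) => console.log("view shows", s.count));
store.dispatch({ type: "increment" });          // view shows 1

const model = new TwoWayModel();
model.bind((v) => console.log("bound input shows", v));
model.count = 5;                                // bound input shows 5
```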
10. A Programming Parallel Real-Time Process Data Flow Telemetry System. Da-qing, Huang. 10 1900
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada / In this paper, a programming parallel real-time process data flow telemetry system is presented. What we developed recently is an advanced telemetry system which can process multiple data flows from multiple targets for multiple users at the same time. It can be used in RPVs, missiles, and other applications. Its main characteristics are as follows. The input radio frequency is in the S band (multiple spot frequencies). In the telemetry front end, microprocessor chips are used for demodulation and decoding. The telemetry preprocessor consists of parallel, distributed microprocessor modules linked by a bus. The system provides menu-style man-computer dialogue, graphics display, intelligent display, and intelligent self-diagnosis. So far, we have developed a data compression module, a floating-point arithmetic module, a derivative calculation module, a signal processing module, and others. The main computer is a VAX-11.