21.
Dynamická analýza pro hledání chyb endianity / Dynamic Analysis for Finding Endianity Bugs (Kápl, Roman, January 2018)
When two computer systems communicate, for example over a network, they must agree on the ordering of bytes within numbers. This ordering is called endianness. Often one of the systems has to swap the order of bytes to match the agreed standard. The results of this work help programmers find the places in their programs where they forgot to swap the bytes. We have developed a dynamic data-flow analysis built upon the popular Valgrind tool. Compared to the static analysis currently used by the Linux kernel developers, our approach does not require annotating variables with their endianness; typically only a few places in the program source code need to be annotated. The analysis can also detect potential bugs that would only manifest if the program were run on a computer with the opposite endianness. Our approach has been validated on an existing program known to contain as-yet-unfixed endianness problems (the RadeonSI OpenGL driver). It identified all endianness-related bugs and provided useful diagnostic messages together with their locations.
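The bug class is easiest to see in miniature. A hedged Python sketch of the byte swap the abstract refers to (the thesis instruments binaries through Valgrind; this snippet only shows what data looks like when the swap is forgotten):

```python
import struct

def swap32(value: int) -> int:
    # Reinterpret a 32-bit integer with the opposite byte order:
    # pack it big-endian, unpack it little-endian.
    return struct.unpack("<I", struct.pack(">I", value))[0]

# A value written by a big-endian peer and read unswapped on a
# little-endian host arrives scrambled; a dynamic analysis can tag such
# "foreign-endian" data and report where it is used without a swap.
print(hex(swap32(0x12345678)))  # 0x78563412
```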
22.
Flight Test: In Search of Boring Data (Hoaglund, Catharine M.; Gardner, Lee S., October 1996)
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California

The challenge being faced today in the Department of Defense is to find ways to improve the systems acquisition process. One area needing improvement is eliminating the surprises of unexpected test data, which add cost and time to developing a system; this amounts to eliminating errors in all phases of a system's lifecycle. In a perfect world, the ideal systems acquisition process would produce a perfect system, and flawless testing of a perfect system would match the predicted test results 100% of the time. Such close fidelity between predicted and real behavior has never occurred, however, and until this ideal level of boredom in testing is reached, testing will remain a critical part of the acquisition process. Given the indispensability of testing, the goal of reducing the cost of flight tests is well worth pursuing. Reducing test cost means reducing open-air test hours, our most costly budget item. It also means planning, implementing, and controlling test cycles more efficiently. We are working on methods to set up test missions faster and to analyze, evaluate, and report on test data, including unexpected results, more quickly. This paper explores the moving focus concept, one method that shows promise in our pursuit of reduced test costs. The moving focus concept permits testers to change the data they collect and view during a test, interactively, in real time. Testers receiving unexpected results can thus change measurement subsets and explore the problem or pursue other test scenarios.
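A minimal sketch of the moving focus idea (the measurement names and this interface are hypothetical; the paper concerns a real-time telemetry ground system, not this code):

```python
class MovingFocus:
    """Swap the subset of telemetry measurements being viewed, mid-test."""
    def __init__(self, available):
        self.available = available   # all measurements in the downlink
        self.focus = set()           # the subset currently displayed
    def refocus(self, names):
        # Operator reacts to unexpected data by choosing a new subset.
        self.focus = {n for n in names if n in self.available}
    def display(self, frame):
        return {n: v for n, v in frame.items() if n in self.focus}

mf = MovingFocus({"altitude", "airspeed", "egt", "vibration"})
mf.refocus({"altitude", "airspeed"})
print(mf.display({"altitude": 30000, "airspeed": 450, "egt": 700}))
mf.refocus({"egt", "vibration"})  # anomaly spotted: move the focus
print(mf.display({"altitude": 30010, "airspeed": 452, "egt": 905}))
```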
23.
Scaling scope bounded checking using incremental approaches (Gopinath, Divya, 28 October 2010)
Bounded verification is an effective technique for finding subtle bugs in object-oriented programs. Given a program, its correctness specification, and bounds on the input domain size, scope-bounded checking translates bounded code segments into formulas in boolean logic and uses off-the-shelf satisfiability solvers to search for correctness violations. However, scalability is a key issue for the technique: for non-trivial programs, the formulas are often complex and can choke the solvers. This thesis describes approaches that aim to scale scope-bounded checking by utilizing syntactic and semantic information from the code to split a program into sub-programs which can be checked incrementally. It presents a thorough evaluation of the approaches and compares their performance with existing bounded verification techniques. Novel ideas for future work, specifically a specification-slicing-driven splitting approach, are proposed to further improve the scalability of bounded verification.
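To make the setting concrete, here is a toy sketch of scope-bounded checking by explicit enumeration (the program and specification are ours; tools of the kind the thesis scales translate the bounded code into a boolean formula for a SAT solver rather than enumerating):

```python
from itertools import product

def bounded_check(prog, spec, scope, arity=2):
    # Search the whole input domain within the scope bound and report
    # the first violation of the correctness specification.
    domain = range(-scope, scope + 1)
    for inputs in product(domain, repeat=arity):
        if not spec(*inputs, prog(*inputs)):
            return inputs  # counterexample within the scope
    return None            # no violation within this scope

def max_buggy(x, y):           # hypothetical buggy program: the else
    return x if x > y else x   # branch should have returned y

spec = lambda x, y, r: r >= x and r >= y and r in (x, y)
print(bounded_check(max_buggy, spec, scope=3))  # (-3, -2)
```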
24.
Détection de comportements à risque dans les applications en utilisant l'analyse statique / Detection of risky behavior in smartphone applications using static analysis (Martinez, Denis, 2 February 2016)
The mobile device world allows users to install applications on their personal devices, but it typically falls short in terms of security: users have no way to judge whether an application will be dangerous, and there is no way to limit the harmfulness of a program after it is installed. We explore static analysis as a tool for risk assessment and detection of malware behavior. Our method is characterized by a rule-driven, partial-program approach: one of our goals is to provide a convenient, expressive domain-specific language in which to express an abstract domain and to associate behaviors with the library functions of the system. Expressivity is an important asset, and it is obtained by means of abstraction. Mobile technologies evolve fast, and new ways of developing programs appear frequently; a real-world static analysis solution must react quickly to the arrival of new technologies in order not to fall into obsolescence. We show how it is possible to develop static analyses and then reuse them across multiple smartphone platforms.
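A toy sketch of the rule-driven, partial-program idea (the rule table, names, and findings format are illustrative inventions, not the thesis's DSL):

```python
# Library entry points mapped to abstract behaviors. In a partial-program
# analysis the library code itself is unavailable; rules stand in for it.
RULES = {
    "sendTextMessage": {"risk": "costs-money", "role": "sink"},
    "getDeviceId":     {"risk": "privacy",     "role": "source"},
    "openConnection":  {"risk": "network",     "role": "sink"},
}

def scan_calls(call_sites):
    # call_sites: (location, callee-name) pairs recovered from app code.
    findings = []
    for site, callee in call_sites:
        rule = RULES.get(callee)
        if rule:
            findings.append((site, callee, rule["risk"]))
    return findings

calls = [("Main.java:42", "getDeviceId"), ("Main.java:57", "sendTextMessage")]
for site, callee, risk in scan_calls(calls):
    print(f"{site}: {callee} -> {risk}")
```

Because the platform knowledge lives in the rule table rather than in the analyzer itself, retargeting to a new smartphone platform is, to a first approximation, a matter of writing a new table.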
25.
Constraint extension to dataflow network (Tsang Wing Yee, January 2004)
Thesis (M.Phil.), Chinese University of Hong Kong, 2004. Includes bibliographical references (leaves 90-93). Abstracts in English and Chinese. Table of contents:

1 Introduction (p.1)
2 Preliminaries (p.4)
  2.1 Constraint Satisfaction Problems (p.4)
  2.2 Dataflow Networks (p.5)
  2.3 The Lucid Programming Language (p.9)
    2.3.1 Daton Domain (p.10)
    2.3.2 Constants (p.10)
    2.3.3 Variables (p.10)
    2.3.4 Dataflow Operators (p.11)
    2.3.5 Functions (p.16)
    2.3.6 Expression and Statement (p.17)
    2.3.7 Examples (p.17)
    2.3.8 Implementation (p.19)
3 Extended Dataflow Network (p.25)
  3.1 Assertion Arcs (p.25)
  3.2 Selection Operators (p.27)
    3.2.1 The Discrete Choice Operator (p.27)
    3.2.2 The Discrete Committed Choice Operator (p.29)
    3.2.3 The Range Choice Operators (p.29)
    3.2.4 The Range Committed Choice Operators (p.32)
  3.3 Examples (p.33)
  3.4 E-Lucid (p.39)
    3.4.1 Modified Four Cockroaches Problem (p.42)
    3.4.2 Traffic Light Problem (p.45)
    3.4.3 Old Maid Problem (p.48)
4 Implementation of E-Lucid (p.54)
  4.1 Overview (p.54)
  4.2 Definition of Terms (p.56)
  4.3 Function ELUCIDinterpreter (p.57)
  4.4 Function Edemand (p.58)
  4.5 Function transformD (p.59)
    4.5.1 Labelling Datastreams of Selection Operators (p.59)
    4.5.2 Removing Committed Choice Operators (p.62)
    4.5.3 Removing asa, wvr, and upon (p.62)
    4.5.4 Labelling Output Datastreams of if-then-else-fi (p.63)
    4.5.5 Transforming Statements to Daton Statements (p.63)
    4.5.6 Transforming Daton Expressions Recursively (p.65)
    4.5.7 An Example (p.65)
  4.6 Functions constructCSP, findC, and transformC (p.68)
  4.7 An Example (p.75)
  4.8 Function backtrack (p.77)
5 Related Works (p.83)
6 Conclusion (p.87)
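The Lucid operators the table of contents lists (fby, wvr, asa, upon) have a compact reading as operations on datastreams. A hedged Python sketch of three of them, using generators as streams (our illustration, not the E-Lucid implementation described in Chapter 4):

```python
import itertools

def fby(head, rest):
    # "followed by": the first daton is head, then the stream rest.
    yield head
    yield from rest

def wvr(xs, bs):
    # "whenever": keep the datons of xs wherever bs carries True.
    return (x for x, b in zip(xs, bs) if b)

def asa(xs, bs):
    # "as soon as": the constant stream of xs's value at the first True in bs.
    for x, b in zip(xs, bs):
        if b:
            return itertools.repeat(x)
    return iter(())

evens = wvr(itertools.count(0), itertools.cycle([True, False]))
print(list(itertools.islice(fby(-1, evens), 5)))                # [-1, 0, 2, 4, 6]
print(next(asa(itertools.count(0), (n > 3 for n in itertools.count(0)))))  # 4
```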
26.
Weakest Pre-Condition and Data Flow Testing (McClellan, Griffin David, 5 July 1995)
Current data flow testing criteria cannot be applied to test array elements, for two reasons:

1. The criteria are defined in terms of graph theory, which is insufficiently expressive to investigate array elements.
2. Identifying input data which test a specified array element is an unsolvable problem.

We solve the first problem by redefining the criteria without graph theory. We address the second problem with the invention of the wp_du method, which is based on Dijkstra's weakest pre-condition formalism. This method accomplishes the following: given a program, a def-use pair, and a variable (which can be an array element), the method computes a logical expression which characterizes all the input data which test that def-use pair with respect to that variable. Further, for any data flow criterion, this method can be used to construct a logical expression which characterizes all test sets which satisfy that data flow criterion. Although the wp_du method cannot avoid unsolvability, it does confine the presence of unsolvability to the final step in constructing a test set.
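As a hedged illustration of the weakest pre-condition calculus that wp_du builds on (the two-line program below is ours, not the thesis's):

```latex
% Dijkstra's assignment rule, extended to array stores, where
% a<i:e> agrees with a everywhere except that index i holds e:
\[
  wp(x := e,\; Q) = Q[x := e], \qquad
  wp(a[i] := e,\; Q) = Q[a := a\langle i{:}e\rangle]
\]
% Toy def-use pair on an array element:
%   S1: a[i] := x + 1   (definition)
%   S2: y := a[j]       (use)
\[
  wp(S_1;\, S_2,\; y = x + 1)
  \;=\; wp(S_1,\; a[j] = x + 1)
  \;=\; (i = j) \,\lor\, (a[j] = x + 1)
\]
% Inputs with i = j are exactly those on which the use reads the defined
% cell, i.e. those that exercise this def-use pair for the array element.
```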
27.
Interactive Design and Debugging of GPU-based Volume Visualizations (Meyer-Spradow, Jennis; Ropinski, Timo; Mensmann, Jörg; Hinrichs, Klaus, January 2010)
There is a growing need for custom visualization applications to deal with the rising amounts of volume data to be analyzed in fields like medicine, seismology, and meteorology. Visual programming techniques have been used in visualization and other fields to analyze and visualize data in an intuitive manner. However, this additional layer of abstraction often incurs a performance penalty during the actual rendering. Preventing this impact requires a careful modularization of the processing steps, one that provides flexibility and good performance at the same time. In this paper we describe the technical foundations as well as the possible applications of such a modularization for GPU-based volume ray-casting, which can be considered the state-of-the-art technique for interactive volume rendering. Based on the proposed modularization on a functional level, we show how to integrate GPU-based volume ray-casting into a visual programming environment in such a way that a high degree of flexibility is achieved without any performance impact.
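A minimal sketch of the modularization idea: small processors with demand-driven evaluation, wired into a network (the class and processor names are illustrative, not the authors' actual framework API):

```python
class Processor:
    """One functional unit of a rendering network."""
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)

    def evaluate(self, cache):
        # Demand-driven: evaluate inputs first; memoize so a processor
        # shared by several consumers runs only once per frame.
        if self.name not in cache:
            args = [p.evaluate(cache) for p in self.inputs]
            cache[self.name] = self.fn(*args)
        return cache[self.name]

volume    = Processor("volume source", lambda: "CT dataset")
entryexit = Processor("entry/exit points",
                      lambda v: f"ray segments for {v}", [volume])
raycaster = Processor("raycaster",
                      lambda rays: f"image from {rays}", [entryexit])
print(raycaster.evaluate({}))  # one frame, pulled through the network
```

In a visual programming environment each such unit becomes a node the user can rewire interactively, which is the kind of flexibility/performance balance the paper targets.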
28.
Program allocation for hypercube based dataflow systems (Freytag, Vincent R., 18 March 1993)
The dataflow model of computation differs from the traditional control-flow model in that it does not use a program counter to sequence the instructions in a program. Instead, the execution of instructions is based solely on the availability of their operands: an instruction is executed in a dataflow computer when all of its operands are available. This asynchronous nature of the dataflow model allows the exploitation of the fine-grain parallelism inherent in programs.

Although the dataflow model exploits parallelism, the problem of optimally allocating a program to processors belongs to the class of NP-complete problems. One of the major issues facing designers of dataflow multiprocessors is therefore the proper allocation of programs to processors.

The problem of program allocation lies in maximizing parallelism while minimizing interprocessor communication costs. This research culminates in a proposed method, the Balanced Layered Allocation Scheme, that uses heuristic rules to strike a balance between computation time and communication costs in dataflow multiprocessors. Specifically, the proposed scheme uses Critical Path and Longest Directed Path heuristics when allocating instructions to processors. Simulation studies indicate that the scheme is effective in reducing the overall execution time of a program by considering the effects of communication costs on computation times.

Graduation date: 1993
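A toy sketch of a critical-path-style allocation (unit execution times and a much simpler placement rule than the Balanced Layered Allocation Scheme; the details here are illustrative only):

```python
def critical_path_lengths(graph):
    # graph: node -> list of successors; unit execution time per node.
    memo = {}
    def length(n):
        if n not in memo:
            memo[n] = 1 + max((length(s) for s in graph[n]), default=0)
        return memo[n]
    return {n: length(n) for n in graph}

def allocate(graph, nprocs):
    # Longest remaining path decides priority; each instruction goes to
    # the processor that frees up first (no communication costs modeled).
    priority = critical_path_lengths(graph)
    ready = {p: 0 for p in range(nprocs)}  # next free time per processor
    placement = {}
    for node in sorted(graph, key=priority.get, reverse=True):
        proc = min(ready, key=ready.get)
        placement[node] = proc
        ready[proc] += 1
    return placement

dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(allocate(dag, nprocs=2))  # e.g. {'a': 0, 'b': 1, 'c': 0, 'd': 1}
```

A real scheme must also charge for interprocessor communication, which is exactly the balance the thesis's heuristics aim at.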
29.
Mapping a Dataflow Programming Model onto Heterogeneous Architectures (Sbirlea, Alina, 6 September 2012)
This thesis describes and evaluates how extending Intel's Concurrent Collections (CnC) programming model can address the problem of hybrid programming with high performance and low energy consumption, while retaining the ease of use of data-flow programming.
The CnC model is a declarative, dynamic, lightweight, task-based parallel programming model, and it is implicitly deterministic because it enforces the single-assignment rule; these properties ensure that problems are modelled in an intuitive way.
CnC offers a separation of concerns by allowing algorithms to be expressed as a two-stage process: first by decomposing a problem into components and specifying how the components interact with each other, and second by providing an implementation for each component.
By facilitating the separation between a domain expert, who can provide an accurate problem specification at a high level, and a tuning expert, who can tune the individual components for better performance, we ensure that tuning and future development, such as replacement of a subcomponent with a more efficient algorithm, become straightforward.
A recent trend in mainstream desktop systems is the use of graphics processing units (GPUs) to obtain order-of-magnitude performance improvements relative to general-purpose CPUs. In addition, the use of FPGAs has increased significantly for applications that can take advantage of such dedicated hardware. Computing is thus evolving from many-core CPUs toward "co-processing" on the CPU, GPU, and FPGA; however, hybrid programming models that support the interaction between multiple heterogeneous components are not widely accessible to the mainstream programmers and domain experts who have a real need for such resources.
We propose a C-based implementation of the CnC model for enabling parallelism across heterogeneous processor components in a flexible way, with high resource utilization and high programmability. We use the task-parallel HabaneroC (HC) language as the platform for implementing CnC-HabaneroC (CnC-HC); HC is also used to implement the computation steps in CnC-HC and to interact with GPU or FPGA steps, and it offers the desired flexibility and extensibility of interacting with any other C-based language.
First, we extend the CnC model with tag functions and ranges to enable automatic code generation of high-level operations for inter-task communication. This improves programmability and also makes the code more analysable, opening the door for future optimizations.
Second, we introduce a way to specify steps that are data-parallel, and thus fit to execute on the GPU, and the notion of task affinity, a tuning annotation in the specification language. Affinity is used by the runtime during scheduling and can be fine-tuned based on application needs to achieve better (faster, lower-power, etc.) results.
Third, we introduce and develop a novel, data-driven runtime for the CnC model, using HabaneroC (HC) as a base language. We also implement the previous runtime approach and conduct a study comparing the performance of the two.
Next, we expand the HabaneroC dynamic work-stealing runtime to allow cross-device stealing based on task affinity. Cross-device dynamic work-stealing is used to achieve load balancing across heterogeneous platforms for improved performance.
Finally, we implement and use a series of benchmarks to test the model in different scenarios, and we show that our proposed approach can yield significant performance benefits and low power usage under hybrid execution.
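A toy sketch of the coordination style CnC prescribes: steps fired by tags, communicating only through single-assignment item collections (the class and step names are ours; this is not the CnC-HC runtime API):

```python
class ItemCollection(dict):
    """Single-assignment key-value store; re-puts are errors by design."""
    def put(self, key, value):
        assert key not in self, "single-assignment rule violated"
        self[key] = value

items, results = ItemCollection(), ItemCollection()

def square_step(tag):
    # A computation step: reads items[tag], writes results[tag]. Purity
    # plus single assignment is what makes CnC implicitly deterministic.
    results.put(tag, items[tag] ** 2)

for t in range(4):      # the tag collection prescribes step instances
    items.put(t, t + 10)
for t in range(4):
    square_step(t)      # a real runtime schedules these in parallel,
                        # possibly on CPU, GPU, or FPGA per task affinity
print(dict(results))    # {0: 100, 1: 121, 2: 144, 3: 169}
```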
30.
Σύγχρονοι αλγόριθμοι ομαδοποίησης για ροές δεδομένων / Modern Clustering Algorithms for Data Streams (Χατζημιχαήλ, Σπύρος [Chatzimichail, Spyros], 3 August 2009)
This thesis studies the problem of data clustering, and more specifically online clustering over data streams.
It first presents the simple offline version of the problem, in which all the data to be clustered are known in advance. The most basic algorithms are presented, together with elementary applications showing that finding efficient algorithms can open up new settings in which clustering is the computational core.
The data stream model is then introduced, in which the algorithm's knowledge of the nature of the data is acquired gradually, as new elements arrive. The limited available memory and the need for efficient algorithms lead to the construction of approximate heuristics. Open problems posed in the literature are presented, along with various applications arising from data that form streams.
An extensive survey of the recent literature follows, presenting the most representative algorithms of each basic approach, such as density-based clustering, clustering via linear regression, and two-phase clustering, among others. A new algorithm is also presented that combines preprocessing of the stream data with an online clustering algorithm and produces the final clustering with a variant of LocalSearch.
Finally, experimental results obtained with these representative algorithms are reported and the algorithms are compared against each other. We observe that the new schemes derived from the LocalSearch algorithm achieve much better final results than the Birch algorithm.
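For flavor, a toy sketch of one-pass clustering under the streaming constraints described above: O(k) memory, each point seen once (a simplification; it is not one of the surveyed algorithms, nor the thesis's LocalSearch-based scheme):

```python
import random

def online_kmeans(stream, k):
    # Sequential (online) k-means: each arriving point pulls its nearest
    # center toward it by a shrinking step (a running mean per center).
    centers, counts = [], []
    for x in stream:
        if len(centers) < k:               # first k points seed centers
            centers.append(x); counts.append(1); continue
        i = min(range(k), key=lambda j: (centers[j] - x) ** 2)
        counts[i] += 1
        centers[i] += (x - centers[i]) / counts[i]
    return centers

random.seed(0)
stream = (random.gauss(mu, 0.5) for _ in range(500) for mu in (0.0, 5.0, 9.0))
print([round(c, 2) for c in online_kmeans(stream, k=3)])  # near 0, 5, 9
```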