91 |
Scalable data-flow testing / Teste de fluxo de dados escalável. Araujo, Roberto Paulo Andrioli de, 15 September 2014
Data-flow (DF) testing was introduced more than thirty years ago with the aim of verifying a program by extensively exploring its structure. It requires tests that traverse paths in which the assignment of a value to a variable (a definition) and its subsequent reference (a use) are exercised. This relationship is called a definition-use association (dua). While control-flow (CF) testing tools have been able to tackle systems composed of large and long-running programs, DF testing tools have failed to do so. This situation is in part due to the cost of tracking duas at run time. Recently, an algorithm called the Bitwise Algorithm (BA), which uses bit vectors and bitwise operations to track intra-procedural duas at run time, was proposed. This research presents an implementation of BA for programs compiled into Java bytecode. Previous DF approaches could only handle small to medium-sized programs, with high penalties in execution time and memory. Our experimental results show that with BA we are able to tackle large systems with more than 250 KLOC and 300K required duas. Furthermore, for several programs the execution penalty was comparable to that imposed by a popular CF testing tool.
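The abstract only names the Bitwise Algorithm; the following is a minimal sketch, under assumptions, of how bit-vector tracking of duas at run time can work: one bit per required dua, with definition probes updating an "alive" vector and use probes folding it into a "covered" vector. The DuaTracker class, its gen/kill/use tables and the probe names are illustrative, not the instrumentation described in the thesis.

```java
import java.util.BitSet;

/**
 * Minimal sketch of bit-vector tracking of definition-use associations (duas).
 * All names are illustrative; this is not the thesis's instrumentation API.
 */
public class DuaTracker {
    private final BitSet alive = new BitSet();    // duas whose definition is currently live
    private final BitSet covered = new BitSet();  // duas already exercised by the test run

    // Precomputed per statement: which dua bits a definition turns on (gen),
    // which dua bits on the same variable it kills (kill),
    // and which dua bits have their use at a given statement (use).
    private final BitSet[] gen, kill, use;

    public DuaTracker(BitSet[] gen, BitSet[] kill, BitSet[] use) {
        this.gen = gen; this.kill = kill; this.use = use;
    }

    /** Probe inserted after a definition at statement defSite. */
    public void onDefinition(int defSite) {
        alive.andNot(kill[defSite]);  // the new definition kills older ones of the same variable
        alive.or(gen[defSite]);       // ...and makes its own duas reachable
    }

    /** Probe inserted before a use at statement useSite. */
    public void onUse(int useSite) {
        BitSet reached = (BitSet) alive.clone();
        reached.and(use[useSite]);    // duas whose definition is live AND whose use is here
        covered.or(reached);          // mark them as covered
    }

    public int coveredDuas() { return covered.cardinality(); }

    public static void main(String[] args) {
        // One variable x, one dua (bit 0): defined at statement 0, used at statement 1.
        BitSet[] gen  = { bits(0), new BitSet() };
        BitSet[] kill = { bits(0), new BitSet() };
        BitSet[] use  = { new BitSet(), bits(0) };
        DuaTracker t = new DuaTracker(gen, kill, use);
        t.onDefinition(0);
        t.onUse(1);
        System.out.println("covered duas: " + t.coveredDuas());  // prints 1
    }

    private static BitSet bits(int... indices) {
        BitSet b = new BitSet();
        for (int i : indices) b.set(i);
        return b;
    }
}
```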
|
93 |
Declarative approach for long-term sensor data storage / Approche déclarative pour le stockage à long terme de données capteurs. Charfi, Manel, 21 September 2017
Nowadays, sensors are cheap, easy to deploy and immediate to integrate into applications. These thousands of sensors are increasingly invasive and constantly generate enormous amounts of data that must be stored and managed for the proper functioning of the applications that depend on them. Sensor data, besides being of major interest to real-time applications (e.g. building control or health supervision), are also important for long-term applications (e.g. reporting, statistics or research data). Whenever a sensor produces data, two dimensions are of particular interest: the temporal dimension, to stamp the produced value at a particular time, and the spatial dimension, to identify the location of the sensor. Both dimensions have different granularities that can be organized into hierarchies specific to the application context. In this PhD thesis, we focus on applications that require long-term storage of sensor data issued from sensor data streams. Since huge amounts of sensor data can be generated, our main goal is to select only relevant data to be saved for further usage, in particular long-term query facilities. More precisely, our aim is to develop an approach that controls the storage of sensor data by keeping only the data considered relevant according to the spatial and temporal granularities representative of the application requirements. In such cases, approximating data in order to reduce the quantity of stored values enhances the efficiency of those queries. Our key idea is to borrow the declarative approach developed in the seventies for database design from constraints, and to extend functional dependencies with spatial and temporal components in order to revisit the classical database schema normalization process. Given sensor data streams, we consider both spatio-temporal granularity hierarchies and Spatio-Temporal Functional Dependencies (STFDs) as first-class citizens for designing sensor databases on top of any RDBMS. We propose a specific axiomatisation of STFDs and the associated attribute closure algorithm, leading to a new normalization algorithm. We have implemented a prototype of this architecture that handles both database design and data loading. We conducted experiments with synthetic and real-life data streams from intelligent buildings, compared our solution with the baseline, and obtained promising results in terms of query performance and memory usage. We also studied the trade-off between data reduction and data approximation.
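The normalization process mentioned above rests on an attribute-closure computation; as a point of reference, a plain (non-spatio-temporal) closure is sketched below over an invented sensor schema. The STFD-specific extension, in which attributes additionally carry spatial and temporal granularities, is not reproduced here, since its details are not given in the abstract.

```java
import java.util.*;

/**
 * Classical attribute-closure computation that the STFD axiomatisation revisits.
 * Purely illustrative: the attribute names and dependencies are hypothetical.
 * In the thesis, attributes would also carry spatial/temporal granularity levels
 * (e.g. room vs. floor, second vs. hour); that extension is not modelled here.
 */
public class AttributeClosure {
    /** A functional dependency lhs -> rhs over attribute names. */
    record FD(Set<String> lhs, Set<String> rhs) {}

    /** Returns X+, the set of attributes determined by X under the given FDs. */
    static Set<String> closure(Set<String> x, List<FD> fds) {
        Set<String> result = new HashSet<>(x);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (FD fd : fds) {
                // An FD fires when its left-hand side is already in the closure.
                if (result.containsAll(fd.lhs()) && result.addAll(fd.rhs())) {
                    changed = true;  // new attributes were added, keep iterating
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Illustrative sensor schema: (sensor, timestamp) determines the reading,
        // and the sensor determines the room it is installed in.
        List<FD> fds = List.of(
            new FD(Set.of("sensor", "timestamp"), Set.of("value")),
            new FD(Set.of("sensor"), Set.of("room")));
        System.out.println(closure(Set.of("sensor", "timestamp"), fds));
        // -> [sensor, timestamp, value, room] (iteration order may vary)
    }
}
```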
|
94 |
Comprehensive Path-sensitive Data-flow Analysis. Thakur, Aditya, 07 1900
Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We first perform data-flow analysis to determine the control-flow merges that cause the loss of precision. The control-flow graph of the program is then restructured so that performing data-flow analysis on the restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward or backward data-flow analysis. Apart from proving that our restructuring process is correct, we show that restructuring is effective in that it necessarily leads to more optimization opportunities.
Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and we propose and evaluate a greedy heuristic.
The framework has been implemented in the Scale research compiler and instantiated for the specific problems of constant propagation and liveness analysis. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running time over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach to constant propagation. For liveness analysis, we see an average speedup of 0.8% in running time over the baseline implementation.
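The precision loss targeted by the restructuring can be illustrated with a textbook constant-propagation lattice, assuming the usual meet operation at control-flow merges; the sketch below is not the Scale compiler's data-flow engine, only a small illustration of the effect.

```java
/**
 * Textbook constant-propagation lattice illustrating why a control-flow merge
 * loses precision, and why duplicating the merge's successor per incoming path
 * recovers it. The unreachable (bottom) value is omitted for brevity.
 */
public class MergePrecision {
    // Lattice values: a known constant, or null meaning TOP ("not a constant").

    /** Meet of two lattice values at a control-flow merge. */
    static Integer meet(Integer a, Integer b) {
        if (a != null && a.equals(b)) return a;  // same constant on both paths
        return null;                             // conflicting constants: precision lost
    }

    public static void main(String[] args) {
        Integer xThen = 1;   // x = 1 on the then-branch
        Integer xElse = 2;   // x = 2 on the else-branch

        // Ordinary analysis: both paths meet before the use of x, so x is TOP there.
        Integer merged = meet(xThen, xElse);
        System.out.println("after merge: " + (merged == null ? "not a constant" : merged));

        // After restructuring, the block using x is duplicated per incoming path,
        // so each copy sees a single constant and can be folded.
        System.out.println("then copy: " + xThen + ", else copy: " + xElse);
    }
}
```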
|
95 |
Specification and Verification of Confidentiality in Software Architectures. Ulu, Cemil, 01 March 2004
This dissertation addresses the confidentiality aspect of the information security problem from the viewpoint of the software architecture. It presents a new approach to secure system design in which the desired security properties of the system, in particular confidentiality, are proven to hold at the architectural level. The architecture description language Wright is extended so that confidentiality authorizations can be specified. An architectural description in Wright/c, the extended language, assigns clearances to the ports of the components and treats security labels as part of the data type information. The security labels are declared along with the clearance assignments in an access control lattice model, also expressed in Wright/c. This enables static analysis of data flow over the architecture, subject to confidentiality requirements, as per the Bell-LaPadula principles. An algorithm takes the Wright/c description and the lattice model as inputs and checks whether there is a potential violation of the Bell-LaPadula principles. The algorithm also detects excess privileges. A software tool, which features an XML-based front-end to the algorithm, is constructed. Finally, the algorithm is analyzed for its soundness, completeness and computational complexity.
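The static check described here amounts to comparing the security label of the data flowing over each connector against the clearance of the receiving port; a toy version over a three-level linear lattice is sketched below. Wright/c itself is not reproduced, and the component, port and label names are invented for the example.

```java
import java.util.List;

/**
 * Toy Bell-LaPadula style check over architectural data flows: information may
 * only flow upward in the lattice, i.e. a port may only receive data whose label
 * its clearance dominates (the combined effect of the simple-security and
 * *-properties). A linear lattice stands in for the access control lattice that
 * an actual Wright/c description would declare.
 */
public class ConfidentialityCheck {
    enum Label { UNCLASSIFIED, CONFIDENTIAL, SECRET }  // ordered lattice levels

    /** One data flow over a connector: from a source port to a sink port. */
    record Flow(String from, Label dataLabel, String to, Label sinkClearance) {}

    /** Violation if the sink's clearance does not dominate the data's label. */
    static boolean violates(Flow f) {
        return f.sinkClearance().ordinal() < f.dataLabel().ordinal();
    }

    public static void main(String[] args) {
        List<Flow> flows = List.of(
            new Flow("sensor.out", Label.CONFIDENTIAL, "logger.in", Label.SECRET),
            new Flow("db.out", Label.SECRET, "publicUI.in", Label.UNCLASSIFIED));
        for (Flow f : flows) {
            if (violates(f)) {
                System.out.println("Potential Bell-LaPadula violation: "
                        + f.from() + " -> " + f.to());
            }
        }
    }
}
```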
|
96 |
Cut off cross-border data flow and international investment law: A legal analysis of a restriction with an effect equivalent of a ban on cross-border data flow and the fair and equitable treatment standard found in bilateral investment treaties. Magnusson, Victor, January 2021
In the world we live in today, international trade and the economy are becoming more and more dependent on data that can be transferred across borders, and over the last couple of years there is an observable trend that cross-border data flows are increasing. This increase is a result of the vast boom in global digitalization. Businesses and enterprises can use the accessible data in multiple ways: to follow and keep control of production chains, to follow consumer demand, and to alter products in response to consumer requests. This improves the efficiency and productivity of businesses. The free flow of data across borders does not only benefit businesses; from a larger perspective it also contributes to the welfare of countries and provides new possibilities and opportunities. Despite these effects on both businesses and the welfare of states, states are imposing restrictions on cross-border data flows. The restrictions in place are of different kinds: some make it mandatory to store or process data, while others are harsher and amount to a ban or cut-off of cross-border data flow. In international investment law, the fair and equitable treatment standard is found in bilateral and multilateral treaties and protects foreign investors. If a state enforces a restriction with an effect equivalent to a ban on cross-border data flow, what is the relation of that restriction to the fair and equitable treatment standard?
|
97 |
Analýza datových toků ve databázových systémech / Analyzing Data Lineage in Database Frameworks. Eliáš, Richard, January 2019
Large information systems are typically implemented using frameworks and libraries. An important property of such systems is data lineage: the flow of data loaded from one system (e.g. a database), through the program code, and back to another system. We implemented the Java Resolver tool for data lineage analysis of Java programs, based on the Symbolic analysis library for computing data lineage of simple Java applications. The library supports only the JDBC and I/O APIs to identify the sources and sinks of data flow. We proposed some architecture changes to the library to make it easily extensible by plugins that can add support for new data processing frameworks. We implemented such plugins for a few frameworks with different approaches to accessing data, including Spring JDBC, MyBatis and Kafka. Our tests show that this approach works and can be usable in practice.
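A plausible shape for the plugin extension point described above is sketched below: each plugin tells the analysis which framework calls act as data-flow sources and sinks. The interface, its method signatures and the JDBC-style heuristics are assumptions made for illustration, not the actual API of the Java Resolver tool or the Symbolic analysis library.

```java
/**
 * Hypothetical plugin extension point for a data-lineage analysis: each plugin
 * tells the core analysis which method calls bring data into the program
 * (sources) and which send it back out (sinks). None of these names come from
 * the Java Resolver tool; they only illustrate the kind of interface described.
 */
public class LineagePluginSketch {

    /** A plugin recognises framework calls that act as data-flow sources or sinks. */
    interface LineagePlugin {
        boolean isSource(String declaringClass, String methodName);
        boolean isSink(String declaringClass, String methodName);
    }

    /** Example plugin for a JDBC-template-style API (illustrative heuristics only). */
    static class JdbcStylePlugin implements LineagePlugin {
        @Override
        public boolean isSource(String declaringClass, String methodName) {
            // Read operations such as query(...) pull data out of the database.
            return declaringClass.endsWith("JdbcTemplate") && methodName.startsWith("query");
        }

        @Override
        public boolean isSink(String declaringClass, String methodName) {
            // Write operations such as update(...) push data back into the database.
            return declaringClass.endsWith("JdbcTemplate") && methodName.equals("update");
        }
    }

    public static void main(String[] args) {
        LineagePlugin plugin = new JdbcStylePlugin();
        System.out.println(plugin.isSource("org.springframework.jdbc.core.JdbcTemplate", "queryForObject")); // true
        System.out.println(plugin.isSink("org.springframework.jdbc.core.JdbcTemplate", "update"));           // true
    }
}
```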
|
98 |
Verifying Data-Oriented Gadgets in Binary Programs to Build Data-Only Exploits. Sisco, Zachary David, 08 August 2018
No description available.
|
99 |
Improving the information architecture at Boliden Group. Nyberg, Kristina, January 2018
Architects design physical structures to allow visitors to perform certain actions, and the same idea can be applied to the digital landscape [18]. Physical architectural design of spaces for employees in the metals industry influences and improves productivity [14], whereas a suboptimal intranet design can cause decreased productivity in a digital landscape [20]. During an IT conference in June 2018, it was established that 75% of enterprise cloud-based system users consider the complexity of implementation or operations to be top obstacles to usage. Productivity, efficiency, and consideration for people are strategic values at Boliden Group AB [22], yet the internal Document Management System (DMS) indicated the opposite to be true. The DMS was therefore analyzed using the Information Architecture (IA) principles of context, content, and users [3], through a case study including three stakeholder interviews and one expert interview, an internal questionnaire with 631 respondents, an external State-of-the-Art (SoA) analysis including a sketch and four interviews, and a final survey with 35 respondents. Results indicated inconclusive policies concerning education in Information Systems (IS); a problematic DMS user interface (UI) and branding; a restriction of IS to Microsoft (MS) Office365; and a desire to use Sharepoint increasingly. The overall impression indicated a desire to follow the strategy at Boliden Group AB through well-defined methods and goals, and a focus on simplifying processes. The State-of-the-Art analysis identified the success factors organization, email, publishing, tools, and findability. The recommendations propose leveraging these impressions, particularly regarding strategy, UI and training. Improvements to the IA should follow complementary user-focused research in two steps: search-log analysis followed by contextual inquiry. Applied improvements from the State-of-the-Art analysis were visualized in a sketch that can, together with future user insights, be used to inspire a more developed Digital Workplace with a Sharepoint UI and more intuitive document management for the common user, in order to provide simplifying recommendations for improved IA and productivity.
|
100 |
DATAFLÖDEN OCH DIGITALA VERKTYG I PRODUKTIONSSKEDET: En granskning av interna dataflöden och digitala verktyg för Skanska Stora Projekt / DATA FLOWS AND DIGITAL TOOLS IN THE PRODUCTION STAGE: A review of internal data flows and digital tools for Skanska Large Projects. Eriksson, Ida; Skoog, Joachim, January 2022
Digitalization in the construction industry is progressing rapidly. During the corona pandemic, Svensk Byggtjänst states that development has accelerated and that the use of industry-specific digital tools has increased by more than 60%. This rapid development takes place at the same time as large ongoing projects, which means that the tools are implemented in parallel with production. It is not without problems to redirect routines and working methods on an ongoing project, especially not in information-dense construction projects. In the production phase of a construction project, large amounts of documentation are handled; creating an efficient flow with a minimal number of interruptions and without the risk of losing information along the way is important, not least from a quality perspective. Quality documentation is used to ensure that the right product is delivered and that the requirements for the building component are met, and it is also included in the final documentation produced before a final inspection. The purpose of this thesis is to investigate Skanska's data flows for quality documentation in the production phase. To investigate what the data flow looks like in the project, a review of the control documentation for a construction part in Slussen has been carried out. To get a clearer picture of the data flow today and how efficient it is, this study maps an actual quality-related flow and interviews digital leaders and production engineers. The purpose is to identify inefficiencies and risks in the form of interruptions in the flow and lost quality assurance. The study shows that there is some resistance in production when it comes to the implementation of digital tools, that the degree of use of the digital tools is lower than desired, and that it is affected by personal preferences when it comes to quality work in production. The results report consequences that can be linked to inefficient data flows and what is needed to optimize the data flow.
|