101 |
DATAFLÖDEN OCH DIGITALA VERKTYG I PRODUKTIONSSKEDET : En granskning av interna dataflöden och digitala verktyg för Skanska Stora Projekt / DATA FLOWS AND DIGITAL TOOLS IN THE PRODUCTION STAGE : A review of internal data flows and digital tools for Skanska Large Projects. Eriksson, Ida; Skoog, Joachim. January 2022 (has links)
Digitalization in the construction industry is progressing rapidly. During the corona pandemic, "Svensk Byggtjänst" reported that development accelerated and that the use of industry-specific digital tools increased by more than 60%. This rapid development takes place at the same time as large ongoing projects, which means that the tools are implemented in parallel with production. Redirecting routines and working methods on an ongoing project is not without problems, especially in information-dense construction projects.

In the production phase of a construction project, large amounts of documentation are handled; creating an efficient flow with a minimal number of interruptions, and without the risk of losing information along the way, is important, not least from a quality perspective. Quality documentation is used to ensure that the right product is delivered and that the requirements for the building component are met; the quality documentation is also included in the final documentation produced before a final inspection.

The purpose of this thesis is to investigate Skanska's data flows for quality documentation in the production phase. To investigate what the data flow looks like in the project, a review of the control documentation for a construction part in Slussen has been carried out. To get a clearer picture of the data flow today and how efficient it is, this study maps an actual quality-related flow and interviews digital leaders and production engineers. The aim is to identify inefficiencies and risks in the form of interruptions in the flow and lost quality assurance.

The study shows that there is some resistance in production when it comes to the implementation of digital tools, that the degree of use of the digital tools is lower than desired, and that it is affected by personal preferences when it comes to quality work in production. The results report consequences that can be linked to inefficient data flows and what is needed to optimize the data flow.
|
102 |
Software Synthesis of Synchronous Data Flow Models Using ForSyDe IO / Mjukvarusyntesen av Synkront dataflöde Med ForSyDe IO. Zhao, Yihang. January 2022 (has links)
The implementation of embedded software applications is a complex process. The complexity arises from intense time-to-market pressure together with power and memory constraints. One way to deal with this complexity is to construct the application automatically from a high-level abstract model. Synchronous data flow (SDF) is a high-level model of computation used to model embedded applications. Formal System Design (ForSyDe), developed by the ForSyDe group at KTH Royal Institute of Technology, is a methodology for modeling and designing heterogeneous systems-on-chip. ForSyDe aims to automatically generate a detailed software or hardware implementation from a high-level system specification; it starts from that specification and expresses the system model in the Haskell language. Synchronous data flow is supported by ForSyDe, and ForSyDe IO is an intermediate representation of the high-level system specification. This master thesis focuses on the software synthesis of synchronous data flow models specified in ForSyDe IO, and aims to produce an automatic code generator that can generate software applications in C code for different platforms based on ForSyDe IO. In this project, a software synthesis method for ForSyDe IO was proposed. Based on this method, a code generator written in Java and Xtend was designed and tested on two examples. The experimental results show that synchronous data flow models specified in ForSyDe IO are successfully synthesized into C code. The code is in the GitHub repository https://github.com/Rojods/CInTSyDe.git under the MIT license.
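A minimal sketch, in C, of the kind of code such a generator might emit is shown below. It is not the actual CInTSyDe output; the two actors, their token rates, and the channel capacity are assumptions chosen for the example. It runs a static schedule for a producer/consumer SDF graph in which the producer must fire twice for every consumer firing.

```c
#include <stdio.h>

/* Hypothetical two-actor SDF graph: the producer emits 1 token per firing,
 * the consumer reads 2 tokens per firing, so the static schedule for one
 * graph iteration is (producer, producer, consumer). */
#define BUF_CAP 4

static int buf[BUF_CAP];          /* FIFO channel between the two actors */
static int head = 0, count = 0;

static void producer_fire(int *next_sample) {
    buf[(head + count) % BUF_CAP] = (*next_sample)++;   /* push 1 token */
    count++;
}

static void consumer_fire(void) {
    int a = buf[head];                                  /* pop 2 tokens */
    int b = buf[(head + 1) % BUF_CAP];
    head = (head + 2) % BUF_CAP;
    count -= 2;
    printf("consumer fired: %d\n", a + b);              /* actor computation */
}

int main(void) {
    int sample = 0;
    for (int it = 0; it < 3; it++) {   /* three iterations of the static schedule */
        producer_fire(&sample);
        producer_fire(&sample);
        consumer_fire();
    }
    return 0;
}
```

Because SDF token rates are fixed per firing, the schedule and the channel capacity can be computed once at compile time, which is what makes software synthesis from SDF models attractive.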
|
103 |
FuzzGauge – A method to automatically determine reasons for a fuzz blocker. Trivikram Anandakumar Thirukkonda (20832620). 05 March 2025 (has links)
Fuzz testing is well known for its ability to catch unforeseen bugs in complex programs. Its highly automated nature makes it attractive, since it can execute and test large parts of a program with just a few starting inputs from the verification team. To allow a general-purpose fuzzing engine like AFL++ to work with any program, a fuzzing harness serves as the interface between the fuzzing engine (which provides the mutated string of bytes) and the entry points of the program being tested (which expects inputs in a well-formatted way). However, due to poorly written harnesses, a state-of-the-art fuzzer may spend a lot of computation resources and still explore only a small portion of the codebase. These unexecuted "fuzz blockers" are a well-known reason for the disparity between fuzzing's prowess in academic research and its performance in real-world applications.

Google's OSS-Fuzz initiative helps open-source developers fuzz their programs and provides some insight into their fuzzing results, but it is up to the developer to manually analyze why certain sections of code are fuzz blockers. This thesis provides a tool, called FuzzGauge, with which a significant portion of this analysis can be automated. The tool especially focuses on locating causes of blockers that may be due to bad harnesses. It uses the results provided by Fuzz-Introspector and builds an analysis pipeline on top of them to explain why a fuzz blocker occurs. It tells whether a blocker could eventually be fuzzed, given enough time. If it cannot, it points out any hardcoded values in the code or harness that cause the blocker. Using these results, we believe a developer can quickly improve their project's fuzzability.
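The sketch below is a hedged illustration of such a harness, not code from FuzzGauge or OSS-Fuzz. It uses the standard libFuzzer/AFL++ entry point and contains a hardcoded magic-value check of the kind the tool is meant to flag: unless the mutated input starts with those exact four bytes, everything behind the branch stays an unreached fuzz blocker. The parse_record function is a hypothetical target.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical function under test. */
static int parse_record(const uint8_t *data, size_t size) {
    /* ... real parsing logic would live here ... */
    return size > 0 ? (int)data[0] : 0;
}

/* Standard entry point used by libFuzzer and AFL++; the fuzzing engine
 * links its own driver against this harness and calls it with mutated bytes. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    /* Hardcoded magic value in the harness: without comparison-logging help,
     * random mutation is unlikely to produce these exact bytes, so everything
     * behind the branch is a fuzz blocker that a tool like FuzzGauge should
     * attribute to the harness rather than to the code under test. */
    if (size < 4 || memcmp(data, "RIFF", 4) != 0)
        return 0;
    parse_record(data + 4, size - 4);
    return 0;
}
```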
|
104 |
Data Flow and Remote Control in the Telemetry Network System. Laird, Daniel T.; Morgan, Jon. 10 1900 (has links)
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / The Central Test and Evaluation Investment Program (CTEIP) Integrated Network Enhanced Telemetry (iNET) program is currently developing new standards for wired-wireless local area networking (LAN-WLAN) using the Internet Protocol (IP), for use in telemetry (TM) channels, under the umbrella of the Telemetry Network System (TmNS). Some advantages of TmNS are real-time command and control of instrumentation, quick-look acquisition, data retransmission and recovery ('gapless TM' or 'PCM backfill'), data segmentation, etc. The iNET team is developing and evaluating prototypes, based on commercial 802.x and other technologies, in conjunction with Range Commanders Council (RCC) Inter-Range Instrumentation Group (IRIG) standards and standards developed under the iNET program.
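As a hedged illustration, not taken from the iNET or IRIG standards, the sketch below shows the basic idea behind 'gapless TM' or 'PCM backfill': the receiving side tracks sequence numbers on incoming packets and requests retransmission of any missing range once a gap is detected. The packet layout and the request function are assumptions for the example.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed minimal packet header: a monotonically increasing sequence number. */
typedef struct {
    uint32_t seq;
    /* payload omitted */
} tm_packet_t;

/* Placeholder for a retransmission request sent back to the onboard recorder. */
static void request_backfill(uint32_t first_missing, uint32_t last_missing) {
    printf("request backfill of seq %u..%u\n", first_missing, last_missing);
}

/* Called for each packet received over the wireless TM link. */
static void on_packet(const tm_packet_t *pkt, uint32_t *expected_seq) {
    if (pkt->seq > *expected_seq)               /* gap: some packets were lost */
        request_backfill(*expected_seq, pkt->seq - 1);
    if (pkt->seq >= *expected_seq)
        *expected_seq = pkt->seq + 1;           /* ignore duplicates/reordering */
}

int main(void) {
    uint32_t expected = 0;
    tm_packet_t rx[] = { {0}, {1}, {4}, {5} };  /* packets 2 and 3 were dropped */
    for (size_t i = 0; i < sizeof rx / sizeof rx[0]; i++)
        on_packet(&rx[i], &expected);
    return 0;
}
```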
|
105 |
Application of local semantic analysis in fault prediction and detection. Shao, Danhua. 06 October 2010
To improve quality of software systems, change-based fault prediction and scope-bounded checking have been used to predict or detect faults during software development. In fault prediction, changes to program source code, such as added lines or deleted lines, are used to predict potential faults. In fault detection, scope-bounded checking of programs is an effective technique for finding subtle faults. The central idea is to check all program executions up to a given bound. The technique takes two basic forms: scope-bounded static checking, where all bounded executions of a program are transformed into a formula that represents the violation of a correctness property and any solution to the formula represents a counterexample; or scope-bounded testing where a program is tested against all (small) inputs up to a given bound on the input size.
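A hedged sketch of the scope-bounded testing idea, not code from the thesis, is given below: a small function is checked against every input array of length at most 3 with element values bounded by 2, i.e., all executions up to a given bound on the input size. The function under test and the property are assumptions for the example.

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical function under test: index of the maximum element. */
static int arg_max(const int *a, int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (a[i] > a[best]) best = i;
    return best;
}

/* Property: the returned index points at an element no smaller than any other. */
static void check(const int *a, int n) {
    int k = arg_max(a, n);
    for (int i = 0; i < n; i++)
        assert(a[k] >= a[i]);
}

int main(void) {
    int a[3];
    long tested = 0;
    for (int n = 1; n <= 3; n++) {          /* bound on input size */
        for (int i = 0; i < n; i++) a[i] = 0;
        for (;;) {
            check(a, n);
            tested++;
            /* advance to the next assignment: a base-3 odometer over n digits */
            int i = 0;
            while (i < n) {
                if (++a[i] <= 2) break;      /* bound on element values */
                a[i++] = 0;
            }
            if (i == n) break;               /* wrapped around: all inputs done */
        }
    }
    printf("checked %ld bounded inputs\n", tested);
    return 0;
}
```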
Although the accuracies of change-based fault prediction and scope-bounded checking have been evaluated with experiments, both of them have effectiveness and efficiency limitations. Previous change-based fault predictions only consider the code modified by a change while ignoring the code impacted by a change. Scope-bounded testing only concerns the correctness specifications, and the internal structure of a program is ignored. Although scope-bounded static checking considers the internal structure of programs, formulae translated from structurally complex programs might choke the backend analyzer and fail to give a result within a reasonable time.
To improve effectiveness and efficiency of these approaches, we introduce local semantic analysis into change-based fault prediction and scope-bounded checking. We use data-flow analysis to disclose internal dependencies within a program. Based on these dependencies, we identify code segments impacted by a change and apply fault prediction metrics on impacted code. Empirical studies with real data showed that semantic analysis is effective and efficient in predicting faults in large-size changes or short-interval changes. While generating inputs for scope-bounded testing, we use control-flow to guide test generation so that code coverage can be achieved with minimal tests. To increase the scalability of scope-bounded checking, we split a bounded program into smaller sub-programs according to data-flow and control-flow analysis. Thus the problem of scope-bounded checking for the given program reduces to several sub-problems, where each sub-problem requires the constraint solver to check a less complex formula, thereby likely reducing the solver’s overall workload. Experimental results show that our approach provides significant speed-ups over the traditional approach.
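As a hedged sketch of the data-flow step, not the thesis implementation, the fragment below propagates the impact of a changed statement forward along def-use edges of a small hard-coded dependence graph using a worklist; fault-prediction metrics would then be applied to the impacted statements rather than only the modified ones. The graph and statement numbering are assumptions for the example.

```c
#include <stdio.h>
#include <stdbool.h>

#define N 6  /* statements s0..s5 in a small program fragment */

/* dep[i][j] = true if statement j uses a value defined by statement i. */
static const bool dep[N][N] = {
    /* s0 */ {0,1,0,0,0,0},
    /* s1 */ {0,0,1,1,0,0},
    /* s2 */ {0,0,0,0,1,0},
    /* s3 */ {0,0,0,0,0,1},
    /* s4 */ {0,0,0,0,0,0},
    /* s5 */ {0,0,0,0,0,0},
};

int main(void) {
    bool impacted[N] = {false};
    int worklist[N], top = 0;

    int changed = 1;                 /* suppose a change modified statement s1 */
    impacted[changed] = true;
    worklist[top++] = changed;

    while (top > 0) {                /* forward closure over def-use edges */
        int i = worklist[--top];
        for (int j = 0; j < N; j++)
            if (dep[i][j] && !impacted[j]) {
                impacted[j] = true;
                worklist[top++] = j;
            }
    }

    for (int j = 0; j < N; j++)
        if (impacted[j]) printf("s%d is impacted by the change\n", j);
    return 0;
}
```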
|
106 |
Contributions à la conception de systèmes à hautes performances, programmables et sûrs: principes, interfaces, algorithmes et outils / Contributions to the design of high-performance, programmable and safe systems: principles, interfaces, algorithms and tools. Cohen, Albert. 23 March 2007 (has links) (PDF)
Moore's law for semiconductors is approaching its end. The evolution of the von Neumann architecture over the microprocessor's forty-year history has led to circuits of unsustainable complexity, very low computational efficiency per transistor, and high energy consumption. On the other hand, the world of parallel computing does not bear comparison with the levels of portability, accessibility, productivity, and reliability of sequential software engineering. This dangerous gap translates into exciting challenges for research in compilation and programming languages for high-performance computing, whether general-purpose or embedded. This thesis motivates our approach to meeting these challenges, introduces our main lines of work, and lays out research perspectives.
|
107 |
Portable Tools for Interoperable Grids : Modular Architectures and Software for Job and Workflow Management. Tordsson, Johan. January 2009 (has links)
The emergence of Grid computing infrastructures enables researchers to share resources and collaborate in more efficient ways than before, despite belonging to different organizations and being geographically distributed. While the Grid computing paradigm offers new opportunities, it also gives rise to new difficulties. This thesis investigates methods, architectures, and algorithms for a range of topics in the area of Grid resource management. One studied topic is how to automate and improve resource selection, despite heterogeneity in Grid hardware, software, availability, ownership, and usage policies. Algorithmic difficulties here include, e.g., characterization of jobs and resources, prediction of resource performance, and data placement considerations. Investigated Quality of Service aspects of resource selection include how to guarantee job start and/or completion times as well as how to synchronize multiple resources for coordinated use through coallocation. Another explored research topic is architectural considerations for frameworks that simplify and automate submission, monitoring, and fault handling for large amounts of jobs. This thesis also investigates suitable Grid interaction patterns for scientific workflows, studies programming models that enable data parallelism for such workflows, and analyzes how workflow composition tools should be designed to increase flexibility and expressiveness.

We today have the somewhat paradoxical situation where Grids, originally aimed to federate resources and overcome interoperability problems between different computing platforms, themselves struggle with interoperability problems caused by the wide range of interfaces, protocols, and data formats that are used in different environments. This thesis demonstrates how proof-of-concept software tools for Grid resource management can, by using (proposed) standard formats and protocols as well as leveraging state-of-the-art principles from service-oriented architectures, be made independent of current Grid infrastructures. Further interoperability contributions include an in-depth study that surveys issues related to the use of Grid resources in scientific workflows. This study improves our understanding of interoperability among scientific workflow systems by viewing the topic from three different perspectives: model of computation, workflow language, and execution environment.

A final contribution in this thesis is the investigation of how the design of Grid middleware tools can adopt principles and concepts from software engineering in order to improve, e.g., adaptability and interoperability.
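As a hedged illustration of the resource-selection problem, not code from the thesis or any Grid middleware, the sketch below ranks candidate resources by a predicted completion time that combines estimated queue wait, execution time scaled by relative machine speed, and input-data staging time. All numbers and field names are assumptions for the example.

```c
#include <stdio.h>

typedef struct {
    const char *name;
    double queue_wait_s;     /* predicted batch-queue wait              */
    double rel_speed;        /* relative speed vs. a reference machine  */
    double staging_s;        /* time to stage input data to the site    */
} resource_t;

/* Predicted completion time for a job with a given reference runtime. */
static double predict(const resource_t *r, double ref_runtime_s) {
    return r->queue_wait_s + ref_runtime_s / r->rel_speed + r->staging_s;
}

int main(void) {
    resource_t candidates[] = {
        {"cluster-a", 600.0, 1.0, 120.0},
        {"cluster-b",  60.0, 0.5, 300.0},
        {"cluster-c",  30.0, 2.0, 900.0},
    };
    double ref_runtime = 1800.0;      /* 30 minutes on the reference machine */
    int best = 0;
    for (int i = 1; i < 3; i++)
        if (predict(&candidates[i], ref_runtime) <
            predict(&candidates[best], ref_runtime))
            best = i;
    printf("selected %s (predicted %.0f s)\n",
           candidates[best].name, predict(&candidates[best], ref_runtime));
    return 0;
}
```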
|
108 |
Demand-Driven Type Inference with Subgoal Pruning. Spoon, Steven Alexander. 29 August 2005
Highly dynamic languages like Smalltalk do not have much static type
information immediately available before the program runs. Static
types can still be inferred by analysis tools, but historically, such
analysis is only effective on smaller programs of at most a few tens
of thousands of lines of code.
This dissertation presents a new type inference algorithm, DDP,
that is effective on larger programs with hundreds of thousands
of lines of code. The approach of the algorithm borrows from the
field of knowledge-based systems: it is a demand-driven algorithm that
sometimes prunes subgoals. The algorithm is formally described,
proven correct, and implemented. Experimental results show that the
inferred types are usefully precise. A complete program understanding
application, Chuck, has been developed that uses DDP type
inferences.
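The sketch below is a hedged illustration, in C, of the two ideas named above, demand-driven analysis and subgoal pruning; it is not the DDP algorithm itself, which works on Smalltalk with a much richer goal structure and type lattice. A type is computed only when demanded, subgoals are spawned recursively, and once a subgoal budget is exhausted the remaining subgoals are pruned to the sound but imprecise answer "anything". The lattice, data-flow graph, and budget are assumptions for the example.

```c
#include <stdio.h>

/* Toy type lattice: Bottom < Int, Str < Top ("could be anything"). */
typedef enum { T_BOTTOM, T_INT, T_STR, T_TOP } type_t;

static type_t join(type_t a, type_t b) {
    if (a == b) return a;
    if (a == T_BOTTOM) return b;
    if (b == T_BOTTOM) return a;
    return T_TOP;
}

/* Toy data-flow graph: a variable is either a literal or the join of two
 * other variables (e.g. assigned in both branches of a conditional). */
typedef struct { int is_literal; type_t lit; int src1, src2; } var_t;

static var_t vars[] = {
    {1, T_INT, -1, -1},   /* v0 := 42        */
    {1, T_INT, -1, -1},   /* v1 := 7         */
    {0, 0,      0,  1},   /* v2 := v0 or v1  */
};

static int budget;        /* remaining subgoals before pruning kicks in */

/* Demand-driven: a type is computed only when some goal asks for it.
 * When the budget is exhausted, the subgoal is pruned to the safe answer Top. */
static type_t infer(int v) {
    if (budget-- <= 0) return T_TOP;                   /* subgoal pruning */
    if (vars[v].is_literal) return vars[v].lit;
    return join(infer(vars[v].src1), infer(vars[v].src2));
}

int main(void) {
    const char *names[] = {"Bottom", "Int", "Str", "Top"};
    budget = 8;                                        /* ample budget: precise */
    printf("type of v2, ample budget: %s\n", names[infer(2)]);
    budget = 2;                                        /* tight budget: pruned  */
    printf("type of v2, tight budget: %s\n", names[infer(2)]);
    return 0;
}
```

Pruning trades precision for a predictable amount of work per goal, which is the property that lets a demand-driven analysis scale to programs with hundreds of thousands of lines.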
This work contributes the DDP algorithm itself, the most thorough
semantics of Smalltalk to date, a new general approach for analysis
algorithms, and experimental analysis of DDP including
determination of useful parameter settings. It also contributes
an implementation of DDP, a general analysis framework for
Smalltalk, and a complete end-user application that uses DDP.
|
109 |
Capteur d'images événementiel, asynchrone à échantillonnage non-uniforme / Asynchronous Event-driven Image Sensor. Darwish, Amani. 27 June 2016
In order to overcome the challenges associated with the design of high-resolution image sensors, we propose, through this thesis, an innovative asynchronous event-driven image sensor based on non-uniform sampling. The proposed image sensor aims to reduce the data flow and its associated data processing by limiting the activity of the image sensor to newly captured information. The proposed asynchronous image sensor is based on event-driven pixels that incorporate non-uniform level-crossing sampling. Unlike conventional imagers, where the pixels are read systematically at each frame, the proposed event-driven pixels are read only when they hold new and relevant information. This induces a reduced, scene-dependent data flow. In this thesis, we introduce a complete pixel reading sequence. Besides the event-driven pixel, the proposed reading system is designed using asynchronous logic and adapted to control and manage the flow of data from the event pixels. This digital reading system overcomes the traditional difficulties encountered in the management of simultaneous requests from event pixels without degrading the resolution and fill factor of the image sensor. In addition, the proposed reading circuit significantly reduces the spatial redundancy in an image, which further reduces the data flow. Finally, by combining level-crossing sampling with the proposed reading technique, we replaced the conventional analog-to-digital conversion of the pixel processing chain with a time-to-digital conversion (TDC). In other words, the pixel information is coded in time. This results in a further reduction in the power consumption of the vision system, the analog-to-digital converter being one of the most power-consuming components of the readout system in conventional image sensors.
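As a hedged software model of the level-crossing idea, not the sensor circuit itself, the sketch below watches a sampled pixel signal and emits a timestamped event each time the signal crosses one of a set of uniformly spaced levels, so the information carried by the pixel is a time rather than an amplitude. The signal shape, level spacing, and event format are assumptions for the example.

```c
#include <stdio.h>
#include <math.h>

#define LEVEL_STEP 0.25   /* spacing between the quantization levels */

int main(void) {
    /* Toy pixel signal: a slow ramp with a short bright flash in the middle. */
    double t_end = 2.0, dt = 0.001;
    int last_level = 0;

    for (double t = 0.0; t < t_end; t += dt) {
        double x = 0.4 * t + ((t > 1.0 && t < 1.2) ? 0.6 : 0.0);
        int level = (int)floor(x / LEVEL_STEP);
        if (level != last_level) {
            /* An event is produced only on a level crossing: the information
             * sent off-pixel is the crossing time and direction, not an
             * amplitude, which is what a time-to-digital converter encodes. */
            printf("event: t=%.3f s, level=%d, dir=%+d\n",
                   t, level, level > last_level ? +1 : -1);
            last_level = level;
        }
        /* Between crossings the pixel stays silent and produces no data. */
    }
    return 0;
}
```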
|
110 |
A reutilização de modelos de requisitos de sistemas por analogia : experimentação e conclusões / Systems requirements reuse by analogy: examination and conclusions. Zirbes, Sergio Felipe. January 1995 (has links)
Systems engineering, like any other product-oriented activity, starts with a clear definition of the product to be obtained. This initial activity is called requirements analysis, and the resulting product is a system specification. Requirements analysis is divided into two separate phases: elicitation and modeling. An appropriate system development effort relies on a complete and consistent specification. However, many problems have been faced by system analysts in performing this task, as a result of the complexity and diversity of requirements, human limitations, and the communication gap between users and developers. If we think of a system's life cycle, we find that the main activity performed by software engineers consists in transforming a given portion of the users' environment into a set of models. This modeling activity starts with a descriptive model of the relevant part of reality, from which the requirements model is derived and then turned into the system's conceptual model. The last phase of this chain of transformations is the programmed model (software), which constitutes the required automated system.

In spite of the recognized importance of requirements analysis and of representing those requirements in models, very little had been innovated in this area until the late 1980s. When the concept of software reuse evolved into the reuse of specifications, or requirements models, not just a new method but a new paradigm emerged: the systematic reuse, whenever possible, of models belonging to specifications of systems similar to the one being developed. Much research has been devoted to making the steps of this new process simpler and more efficient. However, for model reuse to take its place as a generally accepted methodology, it remains to be shown that it in fact produces better and more reliable software in a more productive way.

The research described in this work investigates one of the aspects involved in that question. The experiment made it possible to compare models of problems built with reuse, starting from models of similar problems previously constructed and made available to the analysts, with models of the same problems built without any reuse. Under the conditions of the study, the comparison shows that the models built with reuse were more complete and correct than those built without reuse. Recording the time spent by the analysts in each modeling phase allowed considerations about the effort required by the two kinds of modeling. The experimental protocol and the research strategy also allowed measurements over two series of models whose main difference was the degree of similarity between the reused problem's models and the target problem's models. The variation in quality and completeness of the two sets of models, as well as in the effort needed to produce them, highlighted a fundamental issue of the process: reuse only has genuinely productive effects when applied to applications belonging to specific, well-defined domains that share data and procedures to a high degree.

Following the research guidelines, the reuse of requirements models was investigated in two development methodologies: in the structured methodology, modeling was performed with Data Flow Diagrams (DFDs), and in the object-oriented methodology with Object Diagrams. The research was carried out with the cooperation of 114 students/analysts, producing 175 sets of models with data flow diagrams and 23 models with object diagrams. Appropriate statistical analyses were conducted on these samples in order to answer a considerable number of open questions on the subject. The final results show a series of benefits in requirements analysis with modeling based on the reuse of analogous models; the research as a whole also shows the restrictions and precautions needed for these benefits to actually occur.
|