101

Software Synthesis of Synchronous Data Flow Models Using ForSyDe IO / Mjukvarusyntesen av Synkront dataflöde Med ForSyDe IO

Zhao, Yihang January 2022 (has links)
The implementation of embedded software applications is a complex process. The complexity arises from intense time-to-market pressure together with power and memory constraints. One way to deal with this complexity is to construct applications automatically from a high-level abstract model. Synchronous data flow (SDF) is a high-level model of computation that is used to model embedded applications. Formal System Design (ForSyDe), developed by the ForSyDe group at KTH Royal Institute of Technology, is a methodology for modeling and designing heterogeneous systems-on-chip. The aim of ForSyDe is to automatically generate a detailed software or hardware implementation from a high-level system specification. ForSyDe starts from the high-level system specification and expresses the system model in the Haskell language; synchronous data flow is one of the supported models of computation. ForSyDe IO is an intermediate representation of the high-level system specification. This master thesis focuses on the software synthesis of synchronous data flow models specified in ForSyDe IO, and aims to produce an automatic code generator that can generate software applications in C code for different platforms from ForSyDe IO. In this project, a software synthesis method for ForSyDe IO was proposed. Based on this method, a code generator written in Java and Xtend was designed. The resulting code generator was tested on two examples, and the experimental results show that the synchronous data flow models specified in ForSyDe IO are successfully synthesized into C code. The code is available in the GitHub repository https://github.com/Rojods/CInTSyDe.git under the MIT license.
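
To make the scheduling step behind SDF software synthesis concrete, the following sketch (an illustration under assumed rates, not code from the thesis or the CInTSyDe generator) solves the SDF balance equations for a small, invented three-actor graph to obtain the repetition vector, then simulates token counts to derive one periodic static schedule — the information a synthesizer needs before it can emit a sequential execution loop in C.

    from fractions import Fraction
    from math import lcm

    # A tiny, invented SDF graph: edges as (producer, prod_rate, consumer, cons_rate).
    edges = [("src", 2, "filt", 3), ("filt", 1, "sink", 2)]

    def repetition_vector(edges):
        """Solve the balance equations q[a]*prod = q[b]*cons with rational arithmetic."""
        q = {edges[0][0]: Fraction(1)}
        changed = True
        while changed:
            changed = False
            for a, p, b, c in edges:
                if a in q and b not in q:
                    q[b] = q[a] * p / c
                    changed = True
                elif b in q and a not in q:
                    q[a] = q[b] * c / p
                    changed = True
        scale = lcm(*(f.denominator for f in q.values()))
        return {actor: int(f * scale) for actor, f in q.items()}

    def static_schedule(edges, q):
        """Simulate token counts to build one periodic admissible sequential schedule."""
        tokens = {(a, b): 0 for a, _, b, _ in edges}
        remaining = dict(q)
        schedule = []
        while any(remaining.values()):
            for actor in q:
                enough = all(tokens[(a, b)] >= c for a, _, b, c in edges if b == actor)
                if remaining[actor] and enough:
                    for a, p, b, c in edges:
                        if a == actor:
                            tokens[(a, b)] += p
                        if b == actor:
                            tokens[(a, b)] -= c
                    remaining[actor] -= 1
                    schedule.append(actor)
        return schedule

    q = repetition_vector(edges)
    print(q)                          # {'src': 3, 'filt': 2, 'sink': 1}
    print(static_schedule(edges, q))  # ['src', 'src', 'filt', 'src', 'filt', 'sink']

In an actual synthesizer the schedule would then drive code emission and buffer sizing; those steps are omitted here.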
102

Data Flow and Remote Control in the Telemetry Network System

Laird, Daniel T., Morgan, Jon October 2009 (has links)
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / The Central Test and Evaluation Investment Program (CTEIP) Integrated Network Enhanced Telemetry (iNET) program is currently developing new standards for wired and wireless local area networking (LAN-WLAN) using the Internet Protocol (IP) in telemetry (TM) channels, under the umbrella of the Telemetry Network System (TmNS). Advantages of the TmNS include real-time command and control of instrumentation, quick-look acquisition, data retransmission and recovery ('gapless TM' or 'PCM backfill'), data segmentation, etc. The iNET team is developing and evaluating prototypes based on commercial 802.x and other technologies, in conjunction with Range Commanders Council (RCC) Inter-Range Instrumentation Group (IRIG) standards and standards developed under the iNET program.
103

Application of local semantic analysis in fault prediction and detection

Shao, Danhua 06 October 2010 (has links)
To improve the quality of software systems, change-based fault prediction and scope-bounded checking have been used to predict or detect faults during software development. In fault prediction, changes to program source code, such as added or deleted lines, are used to predict potential faults. In fault detection, scope-bounded checking of programs is an effective technique for finding subtle faults. The central idea is to check all program executions up to a given bound. The technique takes two basic forms: scope-bounded static checking, where all bounded executions of a program are transformed into a formula that represents the violation of a correctness property and any solution to the formula represents a counterexample; and scope-bounded testing, where a program is tested against all (small) inputs up to a given bound on the input size. Although the accuracy of change-based fault prediction and scope-bounded checking has been evaluated experimentally, both have effectiveness and efficiency limitations. Previous change-based fault prediction approaches consider only the code modified by a change and ignore the code impacted by it. Scope-bounded testing considers only the correctness specifications and ignores the internal structure of the program. Scope-bounded static checking does consider the internal structure of programs, but formulae translated from structurally complex programs can overwhelm the back-end analyzer and fail to produce a result within a reasonable time. To improve the effectiveness and efficiency of these approaches, we introduce local semantic analysis into change-based fault prediction and scope-bounded checking. We use data-flow analysis to disclose internal dependencies within a program. Based on these dependencies, we identify code segments impacted by a change and apply fault prediction metrics to the impacted code. Empirical studies with real data showed that semantic analysis is effective and efficient in predicting faults in large changes or short-interval changes. When generating inputs for scope-bounded testing, we use control flow to guide test generation so that code coverage can be achieved with minimal tests. To increase the scalability of scope-bounded checking, we split a bounded program into smaller sub-programs according to data-flow and control-flow analysis. The problem of scope-bounded checking for the given program thus reduces to several sub-problems, where each sub-problem requires the constraint solver to check a less complex formula, thereby likely reducing the solver's overall workload. Experimental results show that our approach provides significant speed-ups over the traditional approach.
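
As a rough illustration of using data-flow dependencies to expand a change to the code it impacts (a simplified, flow-insensitive sketch with an invented def/use table, not the dissertation's actual analysis):

    # Toy forward closure over def-use dependencies: starting from changed
    # statements, follow "defines a variable that another statement uses".
    defs = {"s1": {"x"}, "s2": {"y"}, "s3": {"z"}, "s4": {"out"}}
    uses = {"s1": set(), "s2": {"x"}, "s3": {"x", "y"}, "s4": {"z"}}

    def impacted(changed, defs, uses):
        """Return the changed statements plus all statements reachable via def->use edges."""
        result = set(changed)
        frontier = set(changed)
        while frontier:
            nxt = {t for s in frontier for t in uses
                   if t not in result and defs[s] & uses[t]}
            result |= nxt
            frontier = nxt
        return result

    # Changing s1 (which defines x) impacts s2 and s3 (they use x) and,
    # transitively, s4 (it uses z, which is defined by the impacted s3).
    print(sorted(impacted({"s1"}, defs, uses)))   # ['s1', 's2', 's3', 's4']

Fault-prediction metrics would then be computed over this impacted set rather than over the textually modified lines alone.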
104

Contributions à la conception de systèmes à hautes performances, programmables et sûrs: principes, interfaces, algorithmes et outils / Contributions to the Design of High-Performance, Programmable, and Safe Systems: Principles, Interfaces, Algorithms, and Tools

Cohen, Albert 23 March 2007 (has links) (PDF)
Moore's law for semiconductors is approaching its end. The evolution of the von Neumann architecture over the forty-year history of the microprocessor has led to circuits of unsustainable complexity, very low computational efficiency per transistor, and high energy consumption. At the same time, the world of parallel computing does not bear comparison with the levels of portability, accessibility, productivity, and reliability achieved by sequential software engineering. This dangerous gap translates into exciting challenges for research in compilation and programming languages for high-performance computing, whether general-purpose or embedded. This thesis motivates our approach to tackling these challenges, introduces our main lines of work, and lays out research perspectives.
105

Portable Tools for Interoperable Grids : Modular Architectures and Software for Job and Workflow Management

Tordsson, Johan January 2009 (has links)
The emergence of Grid computing infrastructures enables researchers to share resources and collaborate in more efficient ways than before, despite belonging to different organizations and being geographically distributed. While the Grid computing paradigm offers new opportunities, it also gives rise to new difficulties. This thesis investigates methods, architectures, and algorithms for a range of topics in the area of Grid resource management. One studied topic is how to automate and improve resource selection, despite heterogeneity in Grid hardware, software, availability, ownership, and usage policies. Algorithmic difficulties here include, e.g., characterization of jobs and resources, prediction of resource performance, and data placement considerations. Investigated Quality of Service aspects of resource selection include how to guarantee job start and/or completion times as well as how to synchronize multiple resources for coordinated use through coallocation. Another explored research topic is architectural considerations for frameworks that simplify and automate submission, monitoring, and fault handling for large amounts of jobs. This thesis also investigates suitable Grid interaction patterns for scientific workflows, studies programming models that enable data parallelism for such workflows, and analyzes how workflow composition tools should be designed to increase flexibility and expressiveness. We today have the somewhat paradoxical situation where Grids, originally aimed to federate resources and overcome interoperability problems between different computing platforms, themselves struggle with interoperability problems caused by the wide range of interfaces, protocols, and data formats used in different environments. This thesis demonstrates how proof-of-concept software tools for Grid resource management can, by using (proposed) standard formats and protocols as well as leveraging state-of-the-art principles from service-oriented architectures, be made independent of current Grid infrastructures. Further interoperability contributions include an in-depth study that surveys issues related to the use of Grid resources in scientific workflows. This study improves our understanding of interoperability among scientific workflow systems by viewing the topic from three different perspectives: model of computation, workflow language, and execution environment. A final contribution of this thesis is an investigation of how the design of Grid middleware tools can adopt principles and concepts from software engineering in order to improve, e.g., adaptability and interoperability.
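
The resource-selection problem described above can be pictured with a very small, hedged sketch (the cost model, the resource names, and the numbers are invented for illustration and are not Tordsson's algorithms): candidate resources are ranked by an estimated completion time and filtered against a requested deadline.

    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        est_wait_s: float     # predicted queue waiting time, seconds
        speed_factor: float   # >1 means faster than the reference machine

    def select(resources, ref_runtime_s, deadline_s):
        """Return names of resources meeting the deadline, best estimated completion first."""
        scored = []
        for r in resources:
            completion = r.est_wait_s + ref_runtime_s / r.speed_factor
            if completion <= deadline_s:
                scored.append((completion, r.name))
        return [name for _, name in sorted(scored)]

    pool = [Resource("clusterA", 600, 1.5),
            Resource("clusterB", 60, 0.8),
            Resource("clusterC", 30, 2.0)]
    print(select(pool, ref_runtime_s=3600, deadline_s=4000))   # ['clusterC', 'clusterA']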
106

Demand-Driven Type Inference with Subgoal Pruning

Spoon, Steven Alexander 29 August 2005 (has links)
Highly dynamic languages like Smalltalk do not have much static type information immediately available before the program runs. Static types can still be inferred by analysis tools, but historically, such analysis is only effective on smaller programs of at most a few tens of thousands of lines of code. This dissertation presents a new type inference algorithm, DDP, that is effective on larger programs with hundreds of thousands of lines of code. The approach of the algorithm borrows from the field of knowledge-based systems: it is a demand-driven algorithm that sometimes prunes subgoals. The algorithm is formally described, proven correct, and implemented. Experimental results show that the inferred types are usefully precise. A complete program understanding application, Chuck, has been developed that uses DDP type inferences. This work contributes the DDP algorithm itself, the most thorough semantics of Smalltalk to date, a new general approach for analysis algorithms, and experimental analysis of DDP including determination of useful parameter settings. It also contributes an implementation of DDP, a general analysis framework for Smalltalk, and a complete end-user application that uses DDP.
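
The "demand-driven with subgoal pruning" idea can be illustrated with a deliberately tiny model (the toy bindings and the budget mechanism below are invented for illustration and are not the DDP algorithm itself): each type goal spawns only the subgoals it actually needs, and when a budget is exhausted a goal is pruned to the conservative answer "Anything" rather than explored further.

    ANYTHING = "Anything"

    # Invented toy program: each variable is either a literal type or a union of others.
    bindings = {
        "a": ("lit", "Integer"),
        "b": ("lit", "Float"),
        "c": ("union", ["a", "b"]),
        "d": ("union", ["c", "a"]),
    }

    def infer(var, budget):
        """Return (type set or ANYTHING, remaining budget)."""
        if budget <= 0:
            return ANYTHING, budget          # prune this subgoal
        kind, payload = bindings[var]
        budget -= 1
        if kind == "lit":
            return {payload}, budget
        result = set()
        for dep in payload:                  # demand only the subgoals we need
            sub, budget = infer(dep, budget)
            if sub is ANYTHING:
                return ANYTHING, budget      # pruning propagates conservatively
            result |= sub
        return result, budget

    print(infer("d", budget=10)[0])  # {'Integer', 'Float'} (set order may vary)
    print(infer("d", budget=2)[0])   # Anything (pruned)

In spirit, pruning bounds the work per query at the cost of occasionally coarser, still conservative answers, which is what makes the demand-driven approach scale to larger programs.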
107

Capteur d'images événementiel, asynchrone à échantillonnage non-uniforme / Asynchronous Event-driven Image Sensor

Darwish, Amani 27 June 2016 (has links)
In order to overcome the challenges associated with the design of high-resolution image sensors — limiting power consumption, the growing data flow, and the associated data processing — this thesis proposes an innovative asynchronous, event-driven image sensor based on non-uniform sampling. The proposed image sensor aims to reduce the data flow and the associated processing by limiting the sensor's activity to newly captured information. The sensor is built from event-driven pixels that implement non-uniform, level-crossing sampling. Unlike conventional imagers, where the pixels are read systematically at each frame, the proposed event-driven pixels are read only when they hold new, relevant information. This yields a reduced, scene-dependent data flow. To complete the pixel processing chain, we also present a dedicated digital readout architecture, designed in asynchronous logic, that controls and manages the flow of data from the event-driven pixels. This readout circuit overcomes the classic difficulties encountered when handling simultaneous requests from event-driven pixels, without degrading the resolution or the fill factor of the image sensor. In addition, the proposed readout circuit significantly reduces spatial redundancy in an image, which further reduces the data flow. Finally, by combining level-crossing sampling with the proposed readout technique, the conventional analog-to-digital conversion of the pixel processing chain is replaced by a time-to-digital conversion (TDC); in other words, the pixel information is encoded in time. This further reduces the power consumption of the vision system, the analog-to-digital converter being one of the most power-hungry components of the readout system in conventional image sensors.
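
A small numerical sketch of level-crossing sampling (the signal, the level spacing, and the rates are made up; this is not the sensor design itself) shows why a slowly varying pixel produces far fewer events than uniform frame-based sampling:

    import math

    def level_crossing_events(samples, dt, delta):
        """Return (time, +1/-1) events for each crossing of a level grid of step delta."""
        events = []
        ref = samples[0]
        for i, v in enumerate(samples[1:], start=1):
            while v >= ref + delta:          # upward crossing(s)
                ref += delta
                events.append((i * dt, +1))
            while v <= ref - delta:          # downward crossing(s)
                ref -= delta
                events.append((i * dt, -1))
        return events

    # A slowly varying "pixel intensity": mostly static content generates few events.
    signal = [0.5 * (1 + math.sin(2 * math.pi * t / 100)) for t in range(400)]
    events = level_crossing_events(signal, dt=1e-3, delta=0.2)
    print(len(signal), "uniform samples vs", len(events), "events")

Each event is just a timestamp and a direction, which is consistent with the time-coding (TDC) idea described in the abstract.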
108

A reutilização de modelos de requisitos de sistemas por analogia : experimentação e conclusões / Systems requirements reuse by analogy: examination and conclusions

Zirbes, Sergio Felipe January 1995 (has links)
Software engineering, like any other activity aimed at producing a product, necessarily begins with an initial phase in which what is to be produced must be defined. Requirements analysis is that initial phase, and its product is the specification of the system to be built. The two basic activities during requirements analysis are elicitation (the search for, or discovery of, the system's characteristics) and modeling. A complete and consistent specification is an indispensable condition for the proper development of a system; however, analysts face many problems in carrying out this task. The variety and complexity of the requirements, human limitations, and the communication gap between users and analysts are the main causes of these difficulties. Considering the life cycle of an information system, the main activity of computing professionals is the transformation of a given portion of the user's environment into a set of models. Initially, a descriptive model represents the reality; from it a model of the needs (the requirements specification) is derived, which is then transformed into a conceptual model. Closing the cycle of transformations, the programmed model (the software) is derived, which constitutes the required automated system. Despite the recognized importance of requirements analysis and of representing those requirements in models, very little had changed in this area until the end of the 1980s. With the evolution of the concept of software reuse into the reuse of specifications, or of requirements models, there finally emerged not just a new method but a new paradigm: the systematic reuse (whenever possible) of models belonging to specifications of systems similar to the one to be developed. Much has been said about this new way of modeling, and many researchers have worked to make the various steps of the new process simpler and more efficient. However, for model reuse to assume its role as a generally accepted methodology, it remains to be shown that it does in fact produce better, more reliable software in a more productive way. The research described in this work investigates one of the aspects involved in that demonstration. The experiment made it possible to compare models of problems built with reuse, starting from models of similar problems previously built and made available to the analysts, with models of the same problems built without any reuse. The comparison between the two sets of models allowed the conclusion, under the conditions of this research, that the models built with reuse were more complete and correct than those built without reuse. Recording the time spent by the analysts in the various modeling stages allowed considerations about the effort required in each of the two kinds of modeling. The experimental protocol and the research strategy also made it possible to take measurements with two series of models whose main difference was the degree of similarity between the models of the reused problem and the models of the target problem. The variation in quality and completeness of the two sets of models, as well as in the effort needed to produce them, highlighted a fundamental point of the process: reuse only has truly productive effects when carried out with applications belonging to specific, well-defined domains that share data and procedures to a high degree. Following the research guidelines, the reuse of requirements models was investigated in two development methodologies: in the structured methodology, modeling was performed with Data Flow Diagrams (DFDs), and in the object-oriented methodology, with Object Diagrams. The research involved 114 student analysts and produced 175 sets of models with data flow diagrams and 23 sets with object diagrams. The pertinent statistical analyses were carried out on these samples, seeking to answer a considerable number of open questions on the subject. The final results show a series of benefits in requirements analysis based on the reuse of analogous models, but the research as a whole also shows the restrictions and care needed for these benefits to actually occur.
110

A runtime system for data-flow task programming on multicore architectures with accelerators / Uma ferramenta para programação com dependência de dados em arquiteturas multicore com aceleradores / Vers un support exécutif avec dépendance de données pour les architectures multicoeur avec des accélérateurs

Lima, João Vicente Ferreira January 2014 (has links)
In this thesis, we study the issues of task parallelism with data dependencies on multicore architectures with accelerators. We target these architectures with the XKaapi runtime system developed by the MOAIS team (INRIA Rhône-Alpes). We first studied fully asynchronous execution and work-stealing scheduling on multi-GPU architectures: work stealing with data-locality heuristics showed significant performance gains, but it does not take the computing power of the different resources into account. Next, we designed a scheduling framework and a performance cost model that allow scheduling strategies to be written on top of the XKaapi runtime. Finally, we evaluated XKaapi on the Intel Xeon Phi coprocessor in native execution. Our conclusion is twofold. First, data-flow task programming can be efficient on accelerators, whether GPUs or Intel Xeon Phi coprocessors. Second, runtime support for different scheduling strategies is essential: cost models give significant performance on very regular computations, while work stealing can react to load imbalance at runtime.
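
The two scheduling ideas contrasted in the conclusion — task readiness driven by data dependencies and a cost model that chooses the processing unit — can be sketched as follows (the task graph, the costs, and the greedy policy are invented for illustration and are not the XKaapi API):

    # task -> (dependencies, cost on CPU, cost on GPU); numbers are illustrative only.
    tasks = {
        "load":   ((),                  2.0, 5.0),   # cheap on CPU, poor fit for GPU
        "gemm1":  (("load",),           9.0, 1.5),   # regular kernel, much faster on GPU
        "gemm2":  (("load",),           9.0, 1.5),
        "reduce": (("gemm1", "gemm2"),  1.0, 2.0),
    }

    def schedule(tasks):
        """Greedy earliest-finish-time scheduling of a data-flow task graph."""
        worker_free = {"cpu": 0.0, "gpu": 0.0}
        finish = {}
        done = set()
        while len(done) < len(tasks):
            # A task is ready once all the data it depends on has been produced.
            ready = [t for t, (deps, _, _) in tasks.items()
                     if t not in done and all(d in done for d in deps)]
            for t in ready:
                deps, c_cpu, c_gpu = tasks[t]
                data_ready = max((finish[d] for d in deps), default=0.0)
                # Cost model: pick the unit giving the earliest finish time.
                options = [(max(worker_free[w], data_ready) + c, w)
                           for w, c in (("cpu", c_cpu), ("gpu", c_gpu))]
                end, unit = min(options)
                worker_free[unit] = end
                finish[t] = end
                done.add(t)
                print(f"{t:7s} -> {unit}  finishes at {end:.1f}")

    schedule(tasks)

A work-stealing runtime would instead let idle workers pull ready tasks dynamically, which is what allows it to absorb load imbalance when no accurate cost model is available.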
