AI Code Generation: Trust & Risk Awareness Across Educational Levels
Grimskär, Hampus; Johansson, Niklas (January 2024)
Generative artificial intelligence has experienced rapid growth in both capability and popularity. These systems serve many purposes, such as summarising text, explaining mathematical theories, and brainstorming. One prominent use case is code generation. Using generative AI to generate code is very common among students, and multiple factors influence how they use the technology. This study focuses on trust and risk awareness, because both play a large role in using automated systems effectively: too much trust in generative AI can lead to blindly accepting the generated result, while too much distrust can lead to an inability to identify correct recommendations. Low risk awareness, in turn, may cause problems such as accidental plagiarism or the introduction of security flaws into a codebase.

The aim of this paper is to investigate the differences in trust and risk awareness between university students and upper secondary school pupils, and to see whether their current education level influences these factors when they use generative AI to generate source code. The paper reports the results of a survey of 77 upper secondary school pupils at different schools in Sweden and 33 students at Blekinge Tekniska Högskola. In addition, 9 interviews were conducted across the two groups, focusing on the participants' risk awareness.

The results showed a distinct difference in the number of risks recognized by the university students compared to the upper secondary school pupils. It was also found that many participants in both groups did not recognize any risks at all; further education about the limitations of generative AI could therefore be integrated into school lessons to help students use it appropriately. Trust was slightly higher among university students, but the results from this study alone are not enough to prove that the difference in trust is due to education level.

Integrated Software Pipelining
Eriksson, Mattias (January 2009)
In this thesis we address the problem of integrated software pipelining for clustered VLIW architectures. The phases that are integrated and solved as one combined problem are: cluster assignment, instruction selection, scheduling, register allocation, and spilling.

As a first step we describe two methods for integrated code generation of basic blocks. The first method is optimal and based on integer linear programming; the second is a heuristic based on genetic algorithms.

We then extend the integer linear programming model to modulo scheduling. To the best of our knowledge, this is the first time the modulo scheduling problem has been solved optimally for clustered architectures with instruction selection and cluster assignment integrated.

We also show that optimal spilling is closely related to optimal register allocation when the register files are clustered. In fact, optimal spilling is as simple as adding an additional virtual register file representing the memory, with transfer instructions to and from this register file corresponding to stores and loads.

Our algorithm for modulo scheduling iteratively considers schedules with an increasing number of schedule slots. A problem with such an iterative method is that if the initiation interval is not equal to the lower bound, there is no way to determine whether the found solution is optimal. We have proven that for a class of architectures that we call transfer free, an upper bound can be set on the schedule length; that is, we can prove when a found modulo schedule with an initiation interval larger than the lower bound is optimal.

Experiments have been conducted to show the usefulness and limitations of our optimal methods. For the basic block case we compare the optimal method to the heuristic based on genetic algorithms.

This work has been supported by the Swedish national graduate school in computer science (CUGS) and Vetenskapsrådet (VR).
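
The iterative search structure described above can be sketched as follows. This is a runnable toy in which a greedy placement step stands in for the thesis's integer linear programs, and loop-carried dependences (the recurrence bound on the initiation interval) are omitted; only the outer search over initiation intervals is the point.

    # Toy sketch of iterative modulo scheduling: try candidate initiation
    # intervals (II) from a resource lower bound upward until a schedule
    # is found. The thesis solves each candidate exactly with integer
    # linear programming; the greedy placement here is only a stand-in.
    from math import ceil

    def modulo_schedule(ops, deps, latency, n_units, max_ii=64):
        # ops must be topologically sorted w.r.t. the intra-iteration deps.
        res_mii = ceil(len(ops) / n_units)       # resource-pressure bound
        for ii in range(res_mii, max_ii + 1):
            start = {}                           # op -> start cycle
            used = {}                            # slot (mod ii) -> op count
            feasible = True
            for op in ops:
                # Earliest start respecting already-placed predecessors.
                t = max((start[s] + latency[s] for s, d in deps if d == op),
                        default=0)
                tries = 0
                while used.get(t % ii, 0) >= n_units:
                    t, tries = t + 1, tries + 1
                    if tries > ii:               # every modulo slot is full
                        feasible = False
                        break
                if not feasible:
                    break
                start[op] = t
                used[t % ii] = used.get(t % ii, 0) + 1
            if feasible:
                return ii, start
        return None

    # Example: a four-op dependence chain on a two-unit machine.
    ops = ["load", "mul", "add", "store"]
    deps = [("load", "mul"), ("mul", "add"), ("add", "store")]
    latency = {"load": 3, "mul": 2, "add": 1, "store": 1}
    print(modulo_schedule(ops, deps, latency, n_units=2))  # II = 2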

From Timed Models to Timed Implementations
De Wulf, Martin (20 December 2006)
<p align="justify">Computer Science is currently facing a grand challenge : finding good design practices for embedded systems. Embedded systems are essentially computers interacting with some physical process. You could find one in a braking systems or in a nuclear power plant for example. They present several design difficulties : first they are reactive systems, interacting indefinitely with their environment. Second,they must satisfy real-time constraints specifying when they should respond, and not only how. Finally, their environment is often deeply continuous, presenting complex dynamics. The formal models of choice for specifying such systems are timed and hybrid automata for which model checking is pretty well studied.</p>
<p align="justify">In a first part of this thesis, we study a complete design approach, including verification and code generation, for timed automata. We have to define a new semantics for timed automata, the AASAP semantics, that preserves the decidability properties for model checking and at the same time is implementable. Our notion of implementability is completely novel, and relies on the simulation of a semantics that is obviously implementable on a real platform. We wrote tools for the analysis and code generation and exemplify them on a case study about the well known Philips Audio Control Protocol.</p>
<p align="justify">In a second part of this thesis, we study the problem of controller synthesis for an environment specified as a hybrid automaton. We give a new solution for discrete controllers having only an imperfect information about the state of the system. In the process, we defined a new algorithm, based on the monotonicity of the controllable predecessors operator, for efficiently finding a controller and we show some promising applications on a classical problem : the universality test for finite automata.

Development of Dependable Applications: A Design-Driven Approach
Enard, Quentin (6 May 2013)
In many domains, such as avionics, medicine, or home automation, software applications play an increasingly important role that can even be critical for their environment. In order to trust these applications, their development is constrained by dependability requirements: it is necessary to demonstrate that these high-level requirements are taken into account throughout the development cycle and that concrete solutions are implemented to achieve compliance. Such constraints make the development of dependable applications particularly complex and difficult. Easing this process calls for new development approaches that integrate dependability concepts and guide developers through each step of the development of trustworthy applications.

This thesis proposes a design-driven approach to guide the development of dependable applications. The approach is materialized through a tool suite called DiaSuite and offers dedicated support for each stage of development. In particular, a design language describes both the functional and non-functional aspects of applications, building on a dedicated paradigm and integrating dependability concepts such as error handling. From the description of an application, development support is generated to guide the implementation and verification stages: a generated, dedicated programming framework guides the implementation, a generated formal model guides static verification, and simulation support eases testing. The approach is evaluated through case studies conducted in the domains of avionics and pervasive computing.
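
The "generated programming framework" idea can be pictured as follows. This is a hypothetical rendering of the pattern, not actual DiaSuite output (and in Python only to stay consistent with the other sketches in this listing): the generated abstract class fixes the data flow and forces the developer to supply both the computation and the declared error-handling path.

    # Illustrative sketch of the kind of framework a design-driven
    # generator could derive from a declaration like
    # "context AverageTemp from sensor Thermometer". All names invented.
    from abc import ABC, abstractmethod

    class AbstractAverageTemp(ABC):            # generated: do not edit
        def on_thermometer(self, reading):
            try:
                value = self.compute(reading)  # developer-supplied logic
            except Exception as err:
                value = self.on_error(err)     # declared error handling
            self.publish(value)

        @abstractmethod
        def compute(self, reading): ...

        @abstractmethod
        def on_error(self, err): ...           # forced by the design's
                                               # dependability declarations
        def publish(self, value):
            print("AverageTemp ->", value)     # stand-in for generated glue

    class AverageTemp(AbstractAverageTemp):    # developer code
        def compute(self, reading):
            return sum(reading) / len(reading)
        def on_error(self, err):
            return float("nan")                # declared fallback value

    AverageTemp().on_thermometer([20.5, 21.0, 19.5])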

Code Generation from UML State Machine Description
Píš, Ľuboš (January 2012)
This thesis discusses the implementation of a suitable algorithm for code generation from UML state machine diagrams. The work includes an analysis of state machines as described in UML, followed by a description of the proposed input file format and the design of the generator itself. The generator was fully implemented as part of the work, along with the other functional requirements. The thesis ends with a description of the resulting implementation.
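
As a flavour of what such a generator produces, here is a minimal sketch built around the classic transition-table pattern. The input shape below is invented for illustration and is not the thesis's actual file format.

    # A tiny generator: a (state, event) -> next-state table is rendered
    # into source code for a state-machine class, then compiled and run.

    transitions = {
        ("Idle",    "start"): "Running",
        ("Running", "pause"): "Paused",
        ("Paused",  "start"): "Running",
        ("Running", "stop"):  "Idle",
    }

    def generate(name, initial, table):
        """Emit source code for a state-machine class."""
        return "\n".join([
            f"class {name}:",
            f"    def __init__(self):",
            f"        self.state = {initial!r}",
            f"    def handle(self, event):",
            f"        table = {table!r}",
            f"        self.state = table.get((self.state, event), self.state)",
        ])

    namespace = {}
    exec(generate("Player", "Idle", transitions), namespace)
    m = namespace["Player"]()
    m.handle("start"); m.handle("pause")
    print(m.state)                     # -> 'Paused'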

AspectAssay: A Technique for Expanding the Pool of Available Aspect Mining Test Data Using Concern Seeding
Moore, David Gerald (1 January 2013)
Aspect-oriented software design (AOSD) enables better and more complete separation of concerns in software-intensive systems. By extracting aspect code and relegating crosscutting functionality to aspects, software engineers can improve the maintainability of their code by reducing code tangling and the coupling of code concerns. Further, the number of software defects has been shown to correlate with the number of non-encapsulated nonfunctional crosscutting concerns in a system.

Aspect mining applies data mining techniques to identify existing aspects in legacy code. Unfortunately, there is a lack of suitably documented test data for aspect mining research, and none that is fully representative of large-scale legacy systems.

Using a new technique called concern seeding, based on the decades-old concept of error seeding, a tool called AspectAssay (akin to the radioimmunoassay test in medicine) was developed. The concern seeding technique allows researchers to seed existing legacy code with nonfunctional crosscutting concerns of known type, location, and quantity, thus greatly increasing the pool of available test data for aspect mining research.

Nine seeding test cases were run on a medium-sized codebase using the AspectAssay tool. Each test case seeded a different concern type (data validation, tracing, and observer) and attempted to achieve target values for each of three metrics: 0.95 degree of scattering across methods (DOSM), 0.95 degree of scattering across classes (DOSC), and 10 concern instances. The results were manually verified for their accuracy in producing concerns with known properties (i.e., type, location, quantity, and scattering). The resulting code compiled without errors and was functionally identical to the original. The achieved metrics averaged better than 99.9% of their target values.

Following the small tests, each of the three concern types was seeded with a wide range of target metric values on each of two codebases, one medium-sized and one large. The tool targeted DOSM and DOSC values in the range 0.01 to 1.00, and target numbers of concern instances from 1 to 100. Each of these 1,800 test cases was attempted ten times (18,000 total trials). Where mathematically feasible (as permitted by the scattering formulas), the tests tended to produce code that closely matched the target metric values.

Each trial's result was expressed as a percentage of its target value. There were 903 test cases that averaged at least 0.90 of their targets. For each test case's ten trials, the standard deviation of those trials' percentages of their targets was calculated. The average standard deviation across all trials was 0.0169. For the 808 seed attempts that averaged at least 0.95 of their targets, the average standard deviation across the ten trials for a particular target was only 0.0022. The tight grouping of trials within test cases suggests a high repeatability for the AspectAssay technique and tool.

The concern seeding technique opens the door for an expansion of aspect mining research. Until now, such research has focused on small, well-documented legacy programs. Concern seeding has proved viable for producing code that is functionally identical to the original and contains concerns with known properties. The process is repeatable and precise across multiple seeding attempts, and also accurate for wide ranges of target metric values.

Just as error seeding is useful in estimating the number of indigenous errors in programs, concern seeding could also prove useful in estimating indigenous nonfunctional crosscutting concerns, thus introducing a new method for evaluating the performance of aspect mining algorithms.
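
For readers unfamiliar with the scattering metrics above, the sketch below computes a degree of scattering in one common normalized-variance formulation (after Eaddy et al.), where 1.0 means evenly scattered across components and 0.0 means fully localized in one. Whether AspectAssay uses exactly this definition is an assumption.

    # Degree of scattering of a concern over n >= 2 components
    # (methods for DOSM, classes for DOSC): one minus the normalized
    # variance of the concern's concentration across the components.

    def degree_of_scattering(loc_per_component):
        """loc_per_component: concern lines of code in each component."""
        n, total = len(loc_per_component), sum(loc_per_component)
        conc = [x / total for x in loc_per_component]   # concentrations
        variance = sum((c - 1 / n) ** 2 for c in conc)
        return 1 - (n / (n - 1)) * variance

    print(degree_of_scattering([10, 0, 0, 0]))   # 0.0: one method only
    print(degree_of_scattering([5, 5, 5, 5]))    # 1.0: evenly scattered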

Distributed Execution of Recursive Irregular Applications
Hegde, Nikhil (13 August 2019)
Massive computing power, and the applications running on it, were primarily confined to expensive supercomputers a decade ago but have now become mainstream through the availability of clusters of commodity computers with high-speed interconnects running big-data-era applications. The challenges of programming such systems so as to use the computing power effectively have led to intuitive abstractions and implementations targeting average users, domain experts, and savvy (parallel) programmers. There is often a trade-off between ease of programming and performance when using these abstractions. This thesis develops tools to bridge the gap between ease of programming and performance for irregular programs, programs that involve one or more of irregular data structures, control structures, and communication patterns, on distributed-memory systems.

Irregular programs feature heavily in domains ranging from data mining to bioinformatics to scientific computing. In contrast to regular applications such as stencil codes and dense matrix-matrix multiplications, which have a predictable pattern of data access and control flow, typical irregular applications operate over graphs, trees, and sparse matrices and involve input-dependent data access patterns and control flow. This makes it difficult to apply optimizations, such as those targeting locality and parallelism, to programs implementing irregular applications. Moreover, irregular programs are often used with large data sets whose memory requirements prohibit single-node execution, so distributed solutions are necessary to process all the data.

In this thesis, we introduce SPIRIT, a framework consisting of an abstraction and a space-adaptive runtime system for simplifying the creation of distributed implementations of recursive irregular programs based on spatial acceleration structures. SPIRIT addresses the insufficiency of traditional data-parallel approaches and existing systems in effectively parallelizing computations involving repeated tree traversals. SPIRIT employs locality optimizations applied in a shared-memory context, introduces a novel pipeline-parallel approach to executing distributed traversals, and trades performance against memory usage to create a space-adaptive system that achieves scalable performance and outperforms implementations in contemporary distributed graph-processing frameworks.

We next introduce Treelogy to understand the connection between optimizations and tree algorithms. Treelogy provides an ontology and a benchmark suite for a broader class of tree algorithms to help answer: (i) is there any existing optimization that is applicable or effective for a new tree algorithm? (ii) can a new optimization developed for a tree algorithm be applied to existing tree algorithms from other domains? We show that a categorization (ontology) based on the structural properties of tree algorithms is useful both for developers of new optimizations and for creators of new tree algorithms. With the help of a suite of tree traversal kernels spanning the ontology, we show that GPU, shared-memory, and distributed-memory implementations are scalable, and that the two-point correlation algorithm with a vptree performs better than the standard kdtree implementation.

In the final part of the thesis, we explore the possibility of automatically generating efficient distributed-memory implementations of irregular programs. Manually creating distributed-memory implementations is challenging because tasks, parallelism, communication, and load balancing must be managed explicitly, so we introduce a framework, D2P, to automatically generate efficient distributed implementations of recursive divide-and-conquer algorithms. D2P generates a distributed implementation of a recursive divide-and-conquer algorithm from its specification, which is a high-level outline of a recursive formulation. We evaluate D2P with recursive dynamic programming (DP) algorithms. The computation in DP algorithms is not irregular per se, but when distributed, the computation in efficient recursive formulations of DP algorithms requires irregular communication. User-configurable knobs in D2P allow tuning the amount of available parallelism. Results show that D2P programs scale well, are significantly better than those produced using a state-of-the-art framework for parallelizing iterative DP algorithms, and outperform even hand-written distributed-memory implementations in most cases.
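
The "high-level outline of a recursive formulation" that a generator like D2P starts from can be pictured as a handful of callbacks driving a generic recursion. The shape below is invented for illustration and says nothing about D2P's actual input language or its distribution machinery.

    # Generic divide-and-conquer skeleton: the four callbacks are the
    # "specification"; a framework would turn the subproblems into tasks
    # executed across nodes instead of this sequential recursion.

    def divide_and_conquer(problem, is_base, base, divide, combine):
        if is_base(problem):
            return base(problem)
        subproblems = divide(problem)
        results = [divide_and_conquer(p, is_base, base, divide, combine)
                   for p in subproblems]
        return combine(problem, results)

    # Example instantiation: merge sort expressed through the callbacks.
    import heapq
    print(divide_and_conquer(
        [5, 2, 9, 1, 7, 3],
        is_base=lambda xs: len(xs) <= 1,
        base=lambda xs: xs,
        divide=lambda xs: [xs[:len(xs) // 2], xs[len(xs) // 2:]],
        combine=lambda _, rs: list(heapq.merge(*rs))))
    # -> [1, 2, 3, 5, 7, 9]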

A Component-Based Virtual Engineering Approach to PLC Code Generation for Automation Systems
Ahmad, Bilal (January 2014)
In recent years, the automotive industry has been significantly affected by a number of challenges driven by globalisation, economic fluctuations, environmental awareness, and rapid technological developments. As a consequence, product lifecycles are shortening and customer demands are becoming more diverse. To survive in such a business environment, manufacturers are striving to find cost-effective solutions for the fast and efficient development and reconfiguration of manufacturing systems, so as to satisfy the needs of changing markets without losses in production. Production systems in the automotive industry are heavily automated and rely on PLC-based control systems. It has been established that one of the major obstacles to realising reconfigurable manufacturing systems is the fragmented engineering approach used to implement control systems. Control engineering starts at a very late stage in the overall system engineering process and remains highly isolated from the mechanical design and build of the system. During this stage, control code is typically written manually in vendor-specific tools, in a combination of IEC 61131-3 languages. Writing control code this way is a complex, time-consuming, and error-prone process.
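
To illustrate the kind of artifact such a generator targets, the sketch below renders a small, invented component model into an IEC 61131-3 Structured Text function block. Real component-based virtual engineering tool chains work from far richer component and behaviour models than this.

    # Minimal component-to-Structured-Text generator. The component
    # dictionary is a made-up model; only the emitted FUNCTION_BLOCK
    # skeleton follows IEC 61131-3 syntax.

    component = {
        "name": "Clamp",
        "inputs":  [("bEngage", "BOOL")],
        "outputs": [("bClamped", "BOOL")],
        "behaviour": "bClamped := bEngage;",   # placeholder logic
    }

    def to_structured_text(c):
        def decl(variables):
            return "\n".join(f"    {n} : {t};" for n, t in variables)
        return ("FUNCTION_BLOCK FB_" + c["name"] + "\n"
                "VAR_INPUT\n" + decl(c["inputs"]) + "\nEND_VAR\n"
                "VAR_OUTPUT\n" + decl(c["outputs"]) + "\nEND_VAR\n"
                + c["behaviour"] + "\n"
                "END_FUNCTION_BLOCK")

    print(to_structured_text(component))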

Elementary Functions: Towards Automatically Generated, Efficient, and Vectorizable Implementations
Lassus Saint-Genies, Hugues de (17 May 2018)
Elementary mathematical functions are pervasive in many high-performance computing programs. Although the mathematical libraries (libms) on which these programs rely generally provide several flavors of the same function, those flavors are fixed at implementation time. This monolithic characteristic of libms is an obstacle to the performance of programs relying on them, because the functions are designed to be versatile at the expense of specific optimizations. Moreover, the duplication of shared patterns in the source code makes maintaining such code bases more error-prone and difficult. A current challenge is to propose "meta-tools" targeting automated high-performance code generation for the evaluation of elementary functions. These tools must allow the reuse of generic and efficient algorithms across different flavors of functions and hardware architectures. It then becomes possible to generate optimized, tailored libms with factorized generative code, which eases maintenance. First, we propose a novel algorithm for generating lookup tables that are free of rounding errors for trigonometric and hyperbolic functions. Then, we study the performance of vectorized polynomial evaluation schemes, a first step towards the generation of efficient vectorized elementary functions. Finally, we develop a meta-implementation of a vectorized logarithm, which factors code generation across different formats and architectures. Our contributions are competitive compared to free and commercial solutions, which is a strong incentive to push for developing this new paradigm.
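
The vectorization issue with polynomial evaluation schemes comes down to dependence chains, which the toy below illustrates: Horner's rule is inherently sequential, while Estrin's scheme exposes independent subexpressions that map well onto SIMD units. The coefficients are arbitrary, not taken from any real libm.

    # Two evaluation schemes for the same polynomial c0 + c1*x + c2*x^2
    # + c3*x^3. Horner chains every step; Estrin computes (c0 + c1*x)
    # and (c2 + c3*x) independently, shortening the critical path.

    def horner(coeffs, x):
        acc = 0.0
        for c in reversed(coeffs):
            acc = acc * x + c              # each step needs the previous
        return acc

    def estrin_deg3(c, x):
        x2 = x * x
        return (c[0] + c[1] * x) + x2 * (c[2] + c[3] * x)

    coeffs = [1.0, 0.5, 0.25, 0.125]
    print(horner(coeffs, 0.3), estrin_deg3(coeffs, 0.3))
    # Same value up to floating-point rounding.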