421 |
Selection of programming languages for structural engineering / Huxford, David C., Jr. / 14 November 2012
This thesis presents the concepts of structured programming and illustrates how they can be used to develop efficient and reliable programs and aid in language selection. Topics are presented and used to compare several languages with each other rather than with some abstract ideal.
Structured design is a set of concepts that aids the decomposition of a large problem, using basic block structures, into manageable subproblems. Decomposition is the process whereby a large problem is broken into components that can be easily understood; the process continues until the smallest component can be represented by a unit of code performing a single action. By means of the four basic building blocks (the atom, concatenation, selection, and repetition), one can produce a correct, well-structured program. In addition, the top-down and/or bottom-up approaches can assist in producing a structured program that is easy to design, code, debug, modify, and maintain. These approaches minimize the number of bugs and the time spent in the debugging process. Various testing techniques supporting the structured programming process are presented to aid in determining a program's correctness.
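As an illustrative aside (not drawn from the thesis), the four building blocks are easy to see in code; the sketch below, in Python for brevity rather than in one of the four compared languages, builds a small program from atoms, concatenation, selection, and repetition:

```python
def mean_positive(values):
    """Structured computation of the mean of the positive values.

    Built only from the four basic blocks: atoms (single statements),
    concatenation (statements in sequence), selection (if/else),
    and repetition (the for loop). Single entry, single exit.
    """
    total = 0.0          # atom
    count = 0            # atom, concatenated with the one above
    for v in values:     # repetition
        if v > 0:        # selection
            total += v
            count += 1
    if count > 0:        # selection guarding the returned value
        return total / count
    return 0.0

print(mean_positive([3.0, -1.0, 5.0]))  # 4.0
```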
The selected language must support structured programming. Microsoft FORTRAN, Microsoft QuickBASIC, Turbo Pascal, and Microsoft C are analyzed and compared on the basis of syntactic style, semantic structure, data types and manipulation, application facilities, and application requirements. Example programs are presented to reinforce these concepts. Frame programs are developed in each language and used to assist in the evaluation. / Master of Science
|
422 |
Representation and simulation of a high level language using VHDL / Edwards, Carleen Marie / 24 November 2009
This paper presents an approach for representing and simulating High Level Languages (HLL) using VHDL behavioral models. A graphical representation, the Data Flow Graph (DFG), is used as the base for the VHDL representation and simulation of a High Level Language (C). A package of VHDL behavioral models for the functional units of the High Level Language, as well as individual entities, has been developed. An algorithm, Graph2VHDL, accepts a Data Flow Graph representation of a High Level Language and constructs a VHDL model for that graph. Representing a High Level Language in VHDL frees users of custom computing platforms from the tedious job of developing a hardware model for a desired application. The algorithm also constructs a test file that is used with a pre-existing program, Test Bench Generation (TBG), to create a test bench for the VHDL model of a Data Flow Graph. The generated test bench is used to simulate the representation of the High Level Language in the Data Flow Graph format. Experimental results verify that the representation of the High Level Language, both in the Data Flow Graph format and in VHDL, is correct. / Master of Science
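The flavor of such a translation can be sketched in a few lines (an editorial illustration; the actual Graph2VHDL algorithm is not detailed in the abstract, and the node kinds, port names, and component library below are assumptions):

```python
# Illustrative sketch: emit a VHDL-style structural netlist from a tiny
# data flow graph. The component library ("adder", "multiplier") and the
# port naming scheme are invented stand-ins, not the thesis's output.
dfg = {
    "n1": ("adder",      ["a", "b"]),   # n1 := a + b
    "n2": ("multiplier", ["n1", "c"]),  # n2 := n1 * c
}

def graph2vhdl(dfg):
    lines = ["architecture dataflow of dfg_top is", "begin"]
    for node, (unit, inputs) in dfg.items():
        ports = ", ".join(f"in{i} => {src}" for i, src in enumerate(inputs))
        lines.append(f"  {node}: entity work.{unit} "
                     f"port map ({ports}, result => {node}_out);")
    lines.append("end architecture;")
    return "\n".join(lines)

print(graph2vhdl(dfg))
```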
|
423 |
Wasm-PBChunk: Incrementally Developing A Racket-To-Wasm Compiler Using Partial Bytecode Compilation / Perlin, Adam C / 01 June 2023
Racket is a modern, general-purpose programming language with a language-oriented focus. To date, Racket has found notable uses in research and education, among other applications. To expand the reach of the language, there has been a desire to develop an efficient platform for running Racket in a web-based environment. WebAssembly (Wasm) is a binary executable format for a stack-based virtual machine designed to provide a fast, efficient, and secure execution environment for code on the web. Wasm is primarily intended to be a compiler target for higher-level languages. Providing Wasm support for the Racket project may be a promising way to bring Racket to the browser.
To this end, we present an incremental approach to the development of a Racket-to-Wasm compiler. We make use of an existing backend for Racket that targets a portable bytecode known as PB, along with the associated PB interpreter. We perform an ahead-of-time static translation of sections of PB into native Wasm, linking the chunks back to the interpreter before execution. By replacing portions of PB with native Wasm, we can eliminate some portion of interpretation overhead and move closer to native Wasm support for Chez Scheme (Racket's backend). Because it reuses an existing backend and interpreter, our approach already supports nearly all features of the Racket language, including delimited continuations, tail-calling behavior, and garbage collection; threading and FFI support are excluded for the time being.
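The chunking idea can be pictured with a toy dispatch loop (an editorial sketch under assumptions; PB's real instruction set and the Wasm chunk interface are far more involved): the interpreter decodes bytecode as usual, except where a pre-compiled chunk is registered for a program counter.

```python
# Toy model of partial bytecode compilation: the interpreter runs a
# PB-like bytecode, but any range of instructions may be replaced by a
# "chunk", a pre-translated native function that returns the next pc.
# Opcode names and the chunk interface are illustrative assumptions.
program = [("push", 2), ("push", 3), ("add", None), ("halt", None)]

def chunk_0_2(stack):          # stands in for an AOT-compiled Wasm chunk
    stack.append(2 + 3)        # the work of instructions 0..2, done natively
    return 3                   # resume interpretation at pc = 3

chunks = {0: chunk_0_2}

def run(program, chunks):
    stack, pc = [], 0
    while True:
        if pc in chunks:                 # jump into compiled code
            pc = chunks[pc](stack)
            continue
        op, arg = program[pc]            # otherwise interpret one opcode
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "halt":
            return stack[-1]
        pc += 1

print(run(program, chunks))  # 5, with instructions 0..2 never interpreted
```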
We benchmark against a baseline to validate our approach, with promising results.
|
424 |
Automatic Reasoning Techniques for Non-Serializable Data-Intensive Applications / Kaki, Gowtham / 14 August 2019
The performance bottlenecks in modern data-intensive applications have induced database implementors to forsake high-level abstractions and trade off simplicity and ease of reasoning for performance. Among the first casualties of this trade-off are the well-known ACID guarantees, which simplify reasoning about concurrent database transactions. ACID semantics have become increasingly obsolete in practice because serializable isolation, an integral aspect of ACID, is exorbitantly expensive. Databases, including the popular commercial offerings, default to weaker levels of isolation in which the effects of concurrent transactions are visible to each other. Such weak isolation guarantees, however, are extremely hard to reason about and have led to serious safety violations in real applications. The problem is further complicated in a distributed setting with asynchronous state replication, where high availability and low latency requirements compel large-scale web applications to embrace weaker forms of consistency (e.g., eventual consistency) besides weak isolation. Given the serious practical implications of safety violations in data-intensive applications, there is a pressing need to extend the state of the art in program verification to reach non-serializable data-intensive applications operating in a weakly-consistent distributed setting.
This thesis sets out to do just that. It introduces new language abstractions, program logics, reasoning methods, and automated verification and synthesis techniques that collectively allow programmers to reason about non-serializable data-intensive applications in the same way as their serializable counterparts. The contributions made are broadly threefold. Firstly, the thesis introduces a uniform formal model to reason about weakly isolated (non-serializable) transactions on a sequentially consistent (SC) relational database machine. A reasoning method that relates the semantics of weak isolation to the semantics of the database program is presented, and an automation technique, implemented in a tool called ACIDifier, is also described. The second contribution is a relaxation of the machine model from sequential consistency to a specifiable level of weak consistency, and a generalization of the data model from relational to schema-less or key-value. A specification language for expressing weak consistency semantics at the machine level is described, and a bounded verification technique, implemented in a tool called Q9, is presented that bridges the gap between consistency specifications and program semantics, allowing high-level safety properties to be verified under arbitrary consistency levels. The final contribution is a programming model, inspired by version control systems, that guarantees correct-by-construction replicated data types (RDTs) for building complex distributed applications with arbitrarily-structured replicated state. A technique based on decomposing inductively-defined data types into characteristic relations is presented; it is used to reason about the semantics of a data type under state replication and to derive its correct-by-construction replicated variant automatically. An implementation of the programming model, called Quark, on top of a content-addressable storage is described, and the practicality of the programming model is demonstrated with the help of various case studies.
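The motivating hazard can be made concrete with a toy example (an editorial sketch, not drawn from the thesis): under weak isolation, two read-modify-write transactions may both read the same stale value, so one update is silently lost.

```python
# Sketch of a classic lost-update anomaly under weak isolation
# (illustrative only; not the thesis's formal model). Two "transactions"
# read the same balance, then both write, so one deposit vanishes.
balance = 100

def deposit(amount, snapshot):
    return snapshot + amount   # read-modify-write on a stale read

t1_read = balance              # T1 reads 100
t2_read = balance              # T2 reads 100 (T1 not yet committed)
balance = deposit(20, t1_read) # T1 commits: 120
balance = deposit(30, t2_read) # T2 commits: 130; T1's deposit is lost
print(balance)                 # 130, not the serializable result 150
```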
|
425 |
A tangible programming environment model informed by principles of perception and meaning / Smith, Andrew Cyrus / 09 1900
Designing a tangible programming environment that serves multiple persons yet can also be individualised is a fundamental Human-Computer Interaction problem. The problem has its origin in the phenomenon that the meaning an object holds can vary across individuals; the research domain of Semiotics studies the meanings objects hold. This research investigated a solution in which the user designs aspects of the environment after it has been made operational, when the development team is no longer available to implement the user's design requirements.
Also considered is how objects can be positioned so that the collection of objects is interpreted as a program. I therefore explored how some of the principles of relative positioning of objects, as researched in the domains of Psychology and Art, could be applied to tangible programming environments. This study applied the Gestalt principle of perceptual grouping by proximity to the design of tangible programming environments to determine if a tangible programming environment is possible in which the relative positions of personally meaningful objects define the program. I did this by applying the Design Science Research methodology with five iterations and evaluations involving children.
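The proximity principle at work here can be sketched computationally (an editorial illustration; the coordinates, tokens, and threshold are invented): objects closer than a threshold merge into one group, and each group is read as a unit of the program.

```python
# Sketch: Gestalt grouping by proximity, as a single-linkage pass over
# object positions. Objects closer than a threshold are read as one
# program group. All tokens and coordinates are invented for illustration.
import math

objects = [("turn", (0, 0)), ("left", (1, 0)),      # one tight cluster
           ("move", (10, 0)), ("forward", (11, 0))] # another

def groups(objects, threshold=2.0):
    clusters = [[o] for o in objects]
    merged = True
    while merged:                       # repeatedly merge near clusters
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(math.dist(a[1], b[1]) < threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return [[tok for tok, _ in c] for c in clusters]

print(groups(objects))  # [['turn', 'left'], ['move', 'forward']]
```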
The outcome is a model of a Tangible Programming Environment that includes Gestalt principles and Semiotic theory; Semiotic theory explains that the user can choose a physical representation of the program element that carries personal meaning whereas the Gestalt principle of grouping by proximity predicts that objects can be arranged to appear as if linked to each other. / School of Computing / Ph. D. (Computer Science)
|
426 |
Preserving the separation of concerns while composing aspects with reflective AOP / Marot, Antoine / 07 October 2011
Aspect-oriented programming (AOP) is a programming paradigm to localize and modularize the concerns that tend to be tangled and scattered across traditional programming modules, like functions or classes. Such concerns are known as crosscutting concerns, and aspect-oriented languages propose to encapsulate them in modules called aspects. Because each crosscutting concern implemented in an aspect is separated from the other concerns, AOP improves reusability, readability, and maintainability of code.
While it improves separation of concerns, AOP suffers from well-known composition issues. Aspects developed in isolation may interact with each other in ways that were not expected by the programmers and therefore lead to a program that does not meet its requirements. Without appropriate tools, undesired aspect interactions must be identified by reading code in order to gain global knowledge of the program and understand where and how aspects interact. Then, if the aspect language does not offer the needed support, these interactions must be resolved by invasively changing the code of the conflicting aspects to make them work together. Neither of these solutions is acceptable, since global knowledge as well as invasive and composition-specific modifications are exactly what separation of concerns seeks to avoid.
In this dissertation we show that the existing approaches to compose aspects are not entirely satisfying either with respect to separation of concerns. These approaches either rely on global knowledge and invasive modifications, which is problematic, or lack genericity and/or expressivity, which means that code reading and code modification may still be required for the aspect interactions they cannot handle.
To properly detect and resolve aspect interactions we propose a novel approach that is based on AOP itself. Since aspect composition is a concern that, by definition, crosscuts the aspects, it makes sense to expect that a technique for separating crosscutting concerns, such as AOP, is well-suited to the task. The resulting mechanism is based on reflection principles and is called reflective AOP.
The main difference between "regular" AOP and reflective AOP lies in the parts of the system they address. While traditional AOP aims at modularizing the concerns that crosscut the base system, reflective AOP offers the possibility to handle the concerns that crosscut the aspects themselves. This is achieved by incorporating new kinds of joinpoints, pointcuts, and advice into the aspect language. These new elements, which form what we call a meta joinpoint model, are dedicated to the aspect level and enable programmers to reason about and act upon the semantics of aspects at runtime. As validated on numerous examples of aspect composition, having a well-designed and principled meta joinpoint model makes it possible to deal with both the detection and the resolution of composition issues in a way that preserves the separation of concerns principle. These examples are illustrated using Phase, our prototype reflective AOP language. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
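The core idea can be caricatured in a few lines (an editorial sketch; Phase's actual meta joinpoint model is far richer, and every name below is an illustrative assumption): advice activations become meta-level observations, so a composition policy can be written separately from the aspects it coordinates.

```python
# Sketch of the reflective-AOP idea: aspects advise base functions, and a
# meta level observes advice activations so a composition policy can
# detect and resolve interactions without editing the aspects themselves.
advice_log = []

def aspect(name, priority=0):
    def wrap(fn):
        def advised(*args):
            advice_log.append(name)    # meta joinpoint: "this advice ran"
            return fn(*args)
        return advised
    wrap.priority = priority
    return wrap

def compose(fn, aspects):
    # The composition concern lives here, not in the aspects: a meta-level
    # rule orders conflicting aspects by declared priority. Lower priority
    # wraps first (innermost), so the highest-priority advice runs first.
    for a in sorted(aspects, key=lambda a: a.priority):
        fn = a(fn)
    return fn

logging_aspect = aspect("logging", priority=0)
auth_aspect = aspect("auth", priority=1)   # higher priority: runs first

save = compose(lambda doc: f"saved {doc}", [logging_aspect, auth_aspect])
print(save("report"))   # saved report
print(advice_log)       # ['auth', 'logging']: the meta rule fixed the order
```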
|
427 |
Programování na základní škole a rozvoj algoritmického myšlení žáků / Programming in elementary school and development of students' algorithmic thinking / Milichovská, Lucie / January 2020
This thesis deals with the development of algorithmic thinking and the teaching of programming in elementary school. It focuses on available methods and tools suitable for classes. The practical part of the thesis focuses on the children's programming language Scratch, one of the tools designed for teaching. The main goal is to create a comprehensive collection of tasks that develop the algorithmic thinking of pupils aged 9-10 years. The tasks grow more complex gradually, so pupils need no previous programming experience, and they are designed to be solvable without the assistance of a teacher. All tasks were tested with pupils in the given age range. The collection of tasks is also made available as a web presentation for ease of further use.
|
428 |
Extensible automated constraint modelling via refinement of abstract problem specifications / Akgün, Özgür / January 2014
Constraint Programming (CP) is a powerful technique for solving large-scale combinatorial (optimisation) problems. Solving a problem with CP proceeds in two phases: modelling and solving. Effective modelling has a huge impact on the performance of the solving process. This thesis presents a framework in which users are not required to make modelling decisions: concrete CP models are automatically generated from a high-level problem specification. In this framework, modelling decisions are encoded as generic rewrite rules applicable to many different problems. First, modelling decisions are divided into two broad categories. This categorisation guides the automation of each kind of modelling decision and also leads to the architecture of the automated modelling tool. Second, a domain-specific declarative rewrite-rule language is introduced. Thanks to the rule language, automated modelling transformations and the core system are decoupled, which greatly increases the extensibility and maintainability of the rewrite-rule database. The database of rules represents the modelling knowledge acquired after analysis of expert models; this database must be easily extensible to best benefit from the active research on constraint modelling. Third, the automated modelling system Conjure is implemented as a realisation of these ideas; having an implementation enables empirical testing of the quality of generated models. The ease with which rewrite rules can be encoded to produce good models is shown. Furthermore, thanks to the generality of the system, one needs to add only a very small number of rules to encode many transformations. Finally, the work is evaluated by comparing the generated models to expert models found in the literature for a wide variety of benchmark problems. This evaluation confirms the hypothesis that expert models can be automatically generated starting from high-level problem specifications. A method of automatically identifying good models is also presented. In summary, this thesis presents a framework enabling the automatic generation of efficient constraint models from problem specifications. It provides a pleasant environment for both problem owners and modelling experts: problem owners are presented with a fully automated constraint solution process once they have a precise description of their problem, and modelling experts can encode their precious modelling expertise as rewrite rules instead of merely modelling a single problem, resulting in reusable constraint modelling knowledge.
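A toy refinement rule conveys the flavor (an editorial sketch; the rule format below is an invented stand-in for Conjure's actual rule language): an abstract set-of-integers decision variable is rewritten into a concrete occurrence representation of boolean flags.

```python
# Sketch of refinement-based modelling: a rewrite rule turns an abstract
# decision variable (a set of ints) into a concrete representation (an
# occurrence vector of boolean variables). The dict encoding is an
# invented stand-in, not Conjure's actual syntax.
def occurrence_rule(var):
    """Refine  find S : set of int(lo..hi)  into boolean occurrence flags."""
    kind, lo, hi = var["domain"]
    if kind != "set_of_int":
        return None                    # rule does not apply to this variable
    return {f"{var['name']}_occ[{v}]": "bool" for v in range(lo, hi + 1)}

abstract_spec = {"name": "S", "domain": ("set_of_int", 1, 4)}
print(occurrence_rule(abstract_spec))
# {'S_occ[1]': 'bool', 'S_occ[2]': 'bool', 'S_occ[3]': 'bool', 'S_occ[4]': 'bool'}
```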
|
429 |
Locating Potential Aspect Interference Using Clustering Analysis / Bennett, Brian Todd / 01 May 2015
Software design continues to evolve from the structured programming paradigm of the 1970s and 1980s and the object-oriented programming (OOP) paradigm of the 1980s and 1990s. The functional decomposition design methodology used in these paradigms reduced the prominence of non-functional requirements, which resulted in scattered and tangled code to address non-functional elements. Aspect-oriented programming (AOP) allowed the removal of crosscutting concerns scattered throughout class code into single modules known as aspects. Aspectization increased modularity in class code but introduced new types of problems that did not exist in OOP. One such problem is aspect interference, in which aspects meddle with the data flow or control flow of a program. Research has developed various solutions for detecting and addressing aspect interference using formal design and specification methods and programming techniques that specify aspect precedence. Such explicit specifications require practitioners to have a complete understanding of possible aspect interference in an AOP system under development. However, as system size increases, understanding of possible aspect interference can decrease, so practitioners need a way to increase their understanding of possible aspect interference within a program. This study used k-means partitional clustering analysis to locate potential aspect interference within an aspect-oriented program under development. Vector space models, using two newly defined metrics, interference potential (IP) and interference causality potential (ICP), and an existing metric, coupling on advice execution (CAE), provided input to the clustering algorithms. Resulting clusters were analyzed via an internal strategy using the R-Squared, Dunn, Davies-Bouldin, and SD indexes. The process was evaluated on both a smaller-scale AOP system (AspectTetris) and a larger-scale AOP system (AJHotDraw). By seeding potential interference problems into these programs and comparing results using visualizations, this study found that clustering analysis provides a viable way to detect interference problems in aspect-oriented software. The ICP model was best at detecting interference problems, while the IP model produced results that were more sporadic. The CAE clustering models were not effective in pinpointing potential aspect interference problems. This was the first known study to use clustering analysis techniques specifically for locating aspect interference.
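The pipeline shape can be sketched as follows (an editorial illustration; the metric values are fabricated, and only the metric-vectors-to-k-means flow reflects the study's approach):

```python
# Sketch: cluster aspect-advice pairs by interference metrics with k-means.
# The [IP, ICP] values per pair are invented for illustration; pairs that
# land in the high-metric cluster become candidates for manual review.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0.1, 0.0], [0.2, 0.1], [0.1, 0.1],   # low interference
              [0.9, 0.8], [0.8, 0.9]])              # suspicious pairs

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for pair, label in zip(["A1", "A2", "A3", "A4", "A5"], km.labels_):
    print(pair, "-> cluster", label)
```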
|
430 |
Terminaison des systèmes de réécriture d'ordre supérieur basée sur la notion de clôture de calculabilité / Termination of higher-order rewriting systems based on the notion of computability closure / Blanqui, Frédéric / 13 July 2012
In this document, we show how the notion of computability introduced by W. W. Tait and extended by Girard to polymorphic types can be used, and easily extended, to prove the termination of various kinds of rewrite relations, including rewriting with matching on defined symbols, higher-order matching, and class rewriting modulo certain equational theories. We also show that the notion of computability closure gives rise to a well-founded relation that contains the extension to higher-order terms, due to J.-P. Jouannaud and A. Rubio, of N. Dershowitz's recursive path ordering.
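As a concrete companion to this abstract (an editorial sketch, not from the thesis), the following implements a lexicographic path ordering, a close relative of Dershowitz's recursive path ordering, and uses it to orient a small rewrite rule:

```python
# Sketch: a lexicographic path ordering (LPO). Terms are ("f", [args])
# tuples or variable strings; prec maps function symbols to integers.
def lpo_gt(s, t, prec):
    if s == t or isinstance(s, str):
        return False                             # a variable beats nothing
    f, ss = s
    if isinstance(t, str):                       # s > x iff x occurs in s
        return occurs(t, s)
    g, ts = t
    if any(si == t or lpo_gt(si, t, prec) for si in ss):
        return True                              # subterm case
    if prec[f] > prec[g]:                        # bigger head symbol
        return all(lpo_gt(s, tj, prec) for tj in ts)
    if f == g and all(lpo_gt(s, tj, prec) for tj in ts):
        for si, ti in zip(ss, ts):               # same head: lexicographic step
            if si == ti:
                continue
            return lpo_gt(si, ti, prec)
    return False

def occurs(x, s):
    return s == x or (isinstance(s, tuple) and any(occurs(x, a) for a in s[1]))

# Orient double(s(x)) -> s(s(double(x))) with double > s in the precedence:
prec = {"double": 2, "s": 1}
lhs = ("double", [("s", ["x"])])
rhs = ("s", [("s", [("double", ["x"])])])
print(lpo_gt(lhs, rhs, prec))  # True: the rule terminates under this LPO
```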
|