511 |
An Architectural Pattern for Adaptable Middleware Infrastructure
Mitchell, Jason J. 01 January 2003
Middleware technologies change so rapidly that designers must adapt existing software architectures to incorporate emerging ones. This project proposes an architectural pattern and accompanying guidelines that abstract the communication layer, allowing the developer to concentrate on the application logic.
We demonstrate our approach, and the feasibility of easily upgrading the middleware infrastructure, by implementing a sample project and three case studies using three different middleware technologies on the .NET framework.
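The pattern's central idea can be sketched in a few lines: the application logic depends only on an abstract channel, so the concrete middleware behind it can be swapped without touching that logic. The sketch below is illustrative only, not code from the project; every class and method name is invented.

```python
from abc import ABC, abstractmethod

class MessageChannel(ABC):
    """Abstraction of the communication layer (hypothetical name)."""
    @abstractmethod
    def send(self, destination: str, payload: dict) -> str:
        ...

class InMemoryChannel(MessageChannel):
    """Stand-in for one concrete middleware binding."""
    def __init__(self):
        self.log = []
    def send(self, destination, payload):
        # A real binding would call the middleware API here.
        self.log.append((destination, payload))
        return "ok"

class OrderService:
    # Application logic sees only the abstraction, never the middleware API.
    def __init__(self, channel: MessageChannel):
        self.channel = channel
    def place_order(self, item: str):
        return self.channel.send("orders", {"item": item})

service = OrderService(InMemoryChannel())
print(service.place_order("book"))  # → ok
```

Upgrading the middleware then amounts to supplying a different `MessageChannel` implementation; `OrderService` is unchanged.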
|
512 |
LUTS: a Light-Weight User-Level Transaction Scheduler
Nicácio, Daniel Henricus de Knegt Dutra, 1984- 22 August 2018
Advisor: Guido Costa Souza de Araújo / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Previous issue date: 2012 / Abstract: Software Transaction Memory (STM) systems have been used as an approach to improve performance by allowing the concurrent execution of atomic blocks. However, under high-contention workloads, STM-based systems can considerably degrade performance as the transaction conflict rate increases. Contention management policies have been used as a way to select which transaction to abort when a conflict occurs. In general, contention managers are not capable of avoiding conflicts, as they can only select which transaction to abort and the moment it should restart. Since contention managers act only after a conflict is detected, it becomes harder to effectively increase transaction throughput. More proactive approaches have emerged, aiming at predicting when a transaction is likely to abort and postponing its execution. Nevertheless, most of the proposed proactive techniques are limited, as they do not replace the doomed transaction by another or, when they do, they rely on the operating system for that task, having little or no control over which transaction to run. This thesis presents LUTS, a Lightweight User-Level Transaction Scheduler. Unlike other techniques, LUTS provides the means for selecting another transaction to run in parallel, thus improving system throughput. We discuss the design of LUTS and propose a dynamic conflict-avoidance heuristic built around its scheduling capabilities. Experimental results, conducted with the STAMP and STMBench7 benchmark suites and running on TinySTM and SwissTM, show how our conflict-avoidance heuristic can effectively improve STM performance on high-contention applications. / Doctorate in Computer Science
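The scheduling idea in the abstract can be illustrated with a toy sketch. This is not the LUTS implementation; the heuristic and all names are simplified stand-ins: instead of aborting and retrying a transaction that is likely to conflict, a user-level scheduler picks a different pending transaction whose write-set is disjoint from those of the transactions currently running.

```python
class Transaction:
    def __init__(self, name, write_set):
        self.name = name
        self.write_set = set(write_set)

class ConflictAvoidingScheduler:
    """Toy user-level scheduler (illustrative, not LUTS itself)."""
    def __init__(self):
        self.pending = []
        self.active_writes = set()  # union of running transactions' write-sets
    def submit(self, tx):
        self.pending.append(tx)
    def pick_next(self):
        # Heuristic: prefer a transaction whose write-set is disjoint from
        # the currently active write-sets, so it is unlikely to abort.
        for i, tx in enumerate(self.pending):
            if not (tx.write_set & self.active_writes):
                return self.pending.pop(i)
        return self.pending.pop(0)  # fall back: no conflict-free candidate

sched = ConflictAvoidingScheduler()
sched.active_writes = {"x", "y"}
sched.submit(Transaction("t1", {"x"}))   # would conflict on x
sched.submit(Transaction("t2", {"z"}))   # disjoint, so preferred
print(sched.pick_next().name)  # → t2
```

The point of the sketch is the replacement step: a doomed candidate is bypassed in favor of a transaction with better odds, rather than merely delayed.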
|
513 |
[en] CORRESPONDENCE BETWEEN PEGS AND CLASSES OF CONTEXT-FREE GRAMMARS / [pt] CORRESPONDÊNCIA ENTRE PEGS E CLASSES DE GRAMÁTICAS LIVRES DE CONTEXTO
SERGIO QUEIROZ DE MEDEIROS 31 January 2011
[en] Parsing Expression Grammars (PEGs) are a formalism for describing languages whose distinguishing feature is an ordered choice operator. The class of languages described by PEGs properly contains all deterministic context-free languages. In this thesis we discuss the correspondence between PEGs and two other formalisms used to describe languages: regular expressions and Context-Free Grammars (CFGs). We present a new formalization of regular expressions using natural semantics and show a transformation that converts a regular expression into a PEG describing the same language; this transformation can easily be adapted to accommodate several extensions used by regular expression libraries (e.g., lazy repetition and independent subpatterns). We also present a new formalization of CFGs using natural semantics and show the correspondence between right-linear CFGs and equivalent PEGs. Moreover, we show that LL(1) grammars with a minor restriction define the same language when interpreted as a CFG and when interpreted as a PEG. Finally, we show how to transform strong-LL(k) CFGs into equivalent PEGs.
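The distinguishing feature mentioned above, ordered choice, can be sketched in a few lines: alternatives are tried in order and the first match commits, unlike a CFG's symmetric alternation. This is a generic illustration, not code from the thesis; a parser here is a function from (text, position) to a new position, or None on failure.

```python
def lit(s):
    # Match the literal string s at the current position.
    def parse(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return parse

def ordered_choice(*alts):
    # PEG choice: try alternatives in order; the first success commits,
    # and later alternatives are never reconsidered.
    def parse(text, pos):
        for alt in alts:
            r = alt(text, pos)
            if r is not None:
                return r
        return None
    return parse

g1 = ordered_choice(lit("ab"), lit("a"))
g2 = ordered_choice(lit("a"), lit("ab"))
print(g1("ab", 0))  # → 2 (first alternative wins)
print(g2("ab", 0))  # → 1 ("ab" is never tried once "a" succeeds)
```

The asymmetry between `g1` and `g2` is exactly what separates PEG choice from CFG alternation: in a CFG, `a | ab` and `ab | a` denote the same language.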
|
514 |
Rollen und Kollaborationen in Scala (Roles and Collaborations in Scala)
Pradel, Michael 26 June 2008
The interrelations of a set of software objects are usually manifold and complex. Common object-oriented programming languages provide constructs for structuring objects according to shared properties and behavior, but fail to provide abstraction mechanisms for the interactions of objects. Roles seem to be a promising approach to solving this problem, as they focus on the behavior of an object in a certain context. Combining multiple roles yields collaborations, an interesting abstraction and reuse unit. However, existing approaches to roles in programming languages require vast extensions of the underlying language or even propose new languages. We propose a programming technique that enables role-based programming with commonly available language constructs. Thus, programmers can express roles and collaborations by simply using a library, and hence without the need to change the language, its compiler, and its tools. We explain our proposal on a language-independent level. Moreover, we provide an implementation in the form of a library for the Scala programming language. Finally, we apply our ideas to design patterns and analyze to what extent these can be expressed and reused with roles.
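The library-only approach described above can be illustrated generically (a Python sketch rather than the thesis's Scala library; all names are invented): a role wraps a core object, adds context-specific behavior, and delegates everything else to the player, so no language extension is needed.

```python
class Role:
    """Base class for roles: delegates unknown attributes to the player."""
    def __init__(self, core):
        self.core = core
    def __getattr__(self, name):
        # Anything the role does not define is looked up on the core object.
        return getattr(self.core, name)

class Person:
    def __init__(self, name):
        self.name = name

class Customer(Role):
    # Context-specific behavior of a Person in a shop context.
    def order(self, item):
        return f"{self.name} orders {item}"

class Employee(Role):
    # The same Person plays a different role in a workplace context.
    def work(self):
        return f"{self.name} works"

alice = Person("Alice")
print(Customer(alice).order("coffee"))  # → Alice orders coffee
print(Employee(alice).work())           # → Alice works
```

A collaboration would then be a set of such roles bound together for one interaction; the sketch shows only the single-role building block.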
|
515 |
A Model-Based AI-Driven Test Generation System
Santiago, Dionny 09 November 2018
Achieving high software quality today involves manual analysis, test planning, documentation of testing strategy and test cases, and development of automated test scripts to support regression testing. This thesis is motivated by the opportunity to bridge the gap between current test automation and true test automation by investigating learning-based solutions to software testing. We present an approach that combines a trainable web component classifier, a test case description language, and a trainable test generation and execution system that can learn to generate new test cases. Training data was collected and hand-labeled across 7 systems, 95 web pages, and 17,360 elements. A total of 250 test flows were also manually hand-crafted for training purposes. Various machine learning algorithms were evaluated. Results showed that Random Forest classifiers performed well on several web component classification problems. In addition, Long Short-Term Memory neural networks were able to model and generate new valid test flows.
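The overall shape of such a pipeline can be sketched as follows. This is a hypothetical illustration, not the thesis's system: the `classify` function stands in for the trained component classifier, and the flow template stands in for the test case description language.

```python
def classify(element):
    # Stand-in for a trained web-component classifier (rule-based here).
    if element.get("type") == "text" and "user" in element["name"]:
        return "username_field"
    if element.get("type") == "password":
        return "password_field"
    if element.get("type") == "submit":
        return "login_button"
    return "unknown"

# Abstract test flow: actions over component classes, not concrete elements.
FLOW_TEMPLATE = [("enter", "username_field"),
                 ("enter", "password_field"),
                 ("click", "login_button")]

def generate_test(elements):
    # Instantiate the abstract flow against the classified page elements.
    by_class = {classify(e): e for e in elements}
    return [(action, by_class[cls]["name"])
            for action, cls in FLOW_TEMPLATE if cls in by_class]

page = [{"name": "user_email", "type": "text"},
        {"name": "pwd", "type": "password"},
        {"name": "go", "type": "submit"}]
print(generate_test(page))
# → [('enter', 'user_email'), ('enter', 'pwd'), ('click', 'go')]
```

In the thesis, the classifier is learned (Random Forests) and the flows are generated by a trained model (LSTMs) rather than fixed templates; the sketch only shows how classification and flow description compose.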
|
516 |
[pt] FINALIZADORES E REFERÊNCIAS FRACAS: INTERAGINDO COM O COLETOR DE LIXO / [en] FINALIZERS AND WEAK REFERENCES: INTERFACING WITH THE GARBAGE COLLECTOR
MARCUS AMORIM LEAL 03 January 2006
[en] Most mainstream programming languages support finalizers and weak references. In spite of that, these abstractions are still only modestly known by programmers in general. Even among language designers there seems to be no common view on how to define their semantics, and language implementations certainly reflect that. In this thesis we explore the concepts of finalizer and weak reference by discussing several important issues that, as far as we know, have not been explored by other authors. After presenting a survey of how finalizers and weak references are supported by actual programming languages, we thoroughly examine their semantics and discuss alternative implementation strategies. We also use an operational approach to develop a formal model for reasoning about garbage collection and its interaction with client programs. By explicitly representing low-level details, such as heap memory and its addresses, we were able to clearly specify memory management actions and prove several important memory-related language invariants. Using this model we describe a formal semantics for finalizers and weak references, exploring some of their many subtleties. We believe that the topics covered here can serve as a relevant reference for further investigations and also help to guide actual implementations.
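The two mechanisms discussed in the abstract can be shown concretely with Python's standard weakref module (one language's observable behavior, not the thesis's formal model): a weak reference does not keep its target alive, and a finalizer runs a callback once the target is reclaimed.

```python
import weakref

class Resource:
    pass

events = []
obj = Resource()
ref = weakref.ref(obj)                               # weak reference to obj
weakref.finalize(obj, lambda: events.append("finalized"))

assert ref() is obj   # target still alive: dereferencing yields the object
del obj               # drop the only strong reference
# In CPython the object is reclaimed immediately by reference counting;
# other implementations may delay reclamation until a GC cycle runs.
print(ref() is None)  # → True
print(events)         # → ['finalized']
```

The comment about CPython is exactly the kind of implementation-dependent subtlety the thesis's formal semantics is meant to pin down.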
|
517 |
Composable, Sound Transformations for Nested Recursion and Loops
Kirshanthan Sundararajah (16647885) 26 July 2023
<p>Programs that use loops to operate over arrays and matrices are generally known as <em>regular programs</em>. These programs appear in critical applications such as image processing, differential equation solvers, and machine learning. Over the past few decades, extensive research has been done on composing, verifying, and applying scheduling transformations like loop interchange and loop tiling for regular programs. As a result, we have general frameworks such as the polyhedral model to handle transformations for loop-based programs. Similarly, programs that use recursion and loops to manipulate pointer-based data structures are known as <em>irregular programs</em>. Irregular programs also appear in essential applications such as scientific simulations, data mining, and graphics rendering. However, there is no analogous framework for recursive programs. In the last decade, although many scheduling transformations have been developed for irregular programs, they are ad-hoc in various aspects, such as being developed for a specific application and lacking portability. This dissertation examines principled ways to handle scheduling transformations for recursive programs through a unified framework resulting in performance enhancement. </p>
<p>Finding principled approaches to optimize irregular programs at compile-time is a long-standing problem. We specifically focus on scheduling transformations that reorder a program’s operations to improve performance by enhancing locality and exploiting parallelism. In the first part of this dissertation, we present PolyRec, a unified general framework that can compose and apply scheduling transformations to nested recursive programs and reason about the correctness of composed transformations. PolyRec is a first-of-its-kind unified general transformation framework for irregular programs consisting of nested recursion and loops. It is built on solid theoretical foundations from the world of automata and transducers and provides a fundamentally novel way to think about recursive programs and scheduling transformations for them. The core idea is designing mechanisms to strike a balance between the expressivity in representing the set of dynamic instances of computations, transformations, and dependences and the decidability of checking the correctness of composed transformations. We use <em>multi-tape </em>automata and transducers to represent the set of dynamic instances of computations and transformations, respectively. These machines are similar yet more expressive than their classical single-tape counterparts. While in general decidable properties of classical machines are undecidable for multi-tape machines, we have proven that those properties are decidable for the class of machines we consider, and we present algorithms to verify these properties. Therefore these machines provide the building blocks to compose and verify scheduling transformations for nested recursion and loops. The crux of the PolyRec framework is its regular string-based representation of dynamic instances that allows to lexicographically order instances identically to their execution order. 
All the transformations considered in PolyRec require different ordering of these strings representable only with <em>additive </em>changes to the strings. </p>
<p>Loop transformations such as <em>skewing </em>require performing arithmetic on the representation of dynamic instances. In the second part of this dissertation, we explore this space of transformations by introducing skewing to nested recursion. Skewing plays an essential role in producing easily parallelizable loop nests from seemingly difficult ones due to dependences carried across loops. The inclusion of skewing for nested recursion to PolyRec requires significant extensions to representing dynamic instances and transformations that facilitate <em>performing arithmetic using strings</em>. First, we prove that the machines that represent the transformations are still composable. Then we prove that the representation of dependences and the algorithm that checks the correctness of composed transformations hold with minimal changes. Our new extended framework is known as UniRec, since it resembles the unimodular transformations for perfectly nested loop nests, which consider any combination of the primary transformations interchange, reversal, and skewing. UniRec opens possibilities of producing newly composed transformations for nested recursion and loops and verifying their correctness. We claim that UniRec completely subsumes the unimodular framework for loop transformations since nested recursion is more general than loop nests. </p>
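The skewing transformation discussed above can be illustrated on an ordinary doubly nested loop (a generic sketch, unrelated to UniRec's string-based representation; the dependences (i-1, j) and (i, j-1) are assumed): remapping iteration (i, j) to wavefronts t = i + j makes all points on a wavefront mutually independent, so the inner loop can run in parallel.

```python
def original_order(n):
    # Row-major order of the original n x n loop nest.
    return [(i, j) for i in range(n) for j in range(n)]

def skewed_order(n):
    # Enumerate wavefronts t = i + j in increasing order; within a
    # wavefront, points do not depend on one another under the assumed
    # (i-1, j) / (i, j-1) dependences.
    points = []
    for t in range(2 * n - 1):
        for i in range(max(0, t - n + 1), min(n, t + 1)):
            points.append((i, t - i))
    return points

# The skewed schedule visits the same iteration space in a new order.
print(skewed_order(2))  # → [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Note that computing t - i is the "arithmetic on dynamic instances" that makes skewing harder to express than interchange, which only permutes coordinates; this is the extension UniRec adds over PolyRec.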
|
518 |
Metaprogramming Program Analyzers
Guannan Wei (16650384) 28 July 2023
<p>Static program analyzers are vital tools to produce useful insights about programs without executing these programs. These insights can be used to improve the quality of programs, e.g., detecting defects in programs, or optimizing programs to use fewer resources. However, building static program analyzers that are simultaneously sound, performant, and flexible is notoriously challenging.</p>
<p>This dissertation aims to address this challenge by exploring the potential of applying correct-by-construction metaprogramming techniques to build static program analyzers. Metaprogramming techniques manipulate and transform programs as data objects. In this thesis, we consider static program analyzers as the objects to be manipulated or transformed. We show that metaprogramming techniques can improve our understanding, the construction, flexibility, and performance of program analyzers.</p>
<p>We first study the inter-derivation of abstract interpreters. Using off-the-shelf program transformation techniques such as refunctionalization, we demonstrate that big-step abstract interpreters can be mechanically derived from their small-step counterparts, thus building a functional correspondence between two different styles of abstract interpretation.</p>
<p>To build high-performance program analyzers, we exploit the first Futamura projection to build compilers for abstract interpretation and symbolic execution. The first Futamura projection states that specializing an interpreter with respect to an input program is a process equivalent to compilation, thus providing a practical way to repurpose interpreters for compilation and code generation. We systematically apply this idea to build program-analysis compilers by writing analyzers as staged interpreters using higher-level abstractions. The staged interpreter can be used for generating sound and performant analysis code given a specific input program. Moreover, the approach enables using abstractions without regret: by using higher-level program abstractions, the analyzer can be written in a way that is close to its high-level specification (e.g. big-step operational semantics), and by compilation, the analyzer is performant since it does not need to pay the runtime overhead of using these abstraction mechanisms.</p>
<p>We also develop novel type systems that track sharing and separation in higher-order imperative languages. Such type systems are useful both for general-purpose programming languages and for optimization of domain-specific metaprograms such as those program-analysis compilers.</p>
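The first Futamura projection mentioned above can be sketched in miniature (an illustrative model, not the dissertation's staged analyzers): specializing an interpreter with respect to a fixed program performs instruction dispatch once, at specialization time, yielding a faster residual program.

```python
def interpret(program, x):
    # Generic interpreter: dispatches on each instruction at run time.
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

def specialize(program):
    # "Compile" by staging: dispatch happens once, here, and the residual
    # program is a chain of closures with the dispatch removed.
    steps = []
    for op, arg in program:
        if op == "add":
            steps.append(lambda x, a=arg: x + a)
        elif op == "mul":
            steps.append(lambda x, a=arg: x * a)
    def compiled(x):
        for step in steps:
            x = step(x)
        return x
    return compiled

prog = [("add", 2), ("mul", 3)]
fast = specialize(prog)
print(interpret(prog, 5), fast(5))  # → 21 21
```

In the dissertation the same move is applied to abstract interpreters and symbolic executors written as staged interpreters, so the residual code is a sound, specialized analyzer for the input program.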
|
519 |
Rozvoj algoritmického myšlení žáků základních škol / Development of algorithmic thinking of primary school pupils
Vais, Jan January 2021
The presented diploma thesis examines ways of developing algorithmic thinking in primary school pupils. Algorithmic thinking is a necessary tool for effectively analyzing a problem and creating a procedure for its repeatable solution. The main research topic of this work is effective and efficient ways of developing algorithmic thinking: in particular, how to formulate a problem and how to develop algorithmic thinking to the greatest effect in the educational process. The thesis summarizes various ways of developing algorithmic thinking and approaches to teaching it. The concepts of computational thinking, algorithmic thinking, algorithm, and algorithmization are defined and analyzed. Furthermore, contemporary means of developing algorithmic thinking described in the literature are surveyed and critically evaluated, and at the end of the theoretical part the most suitable set of means is selected and a way of using them is proposed. Action research was carried out in an informatics club at a primary school in Prague. The effectiveness of the selected instruments was verified over fifteen sixty-minute lessons. The focus of this research lies in increasing the quality of the teacher's pedagogical practice, in the development of his didactic...
|
520 |
Funqual: User-Defined, Statically-Checked Call Graph Constraints in C++
Nelson, Andrew P 01 June 2018 (PDF)
Static analysis tools can aid programmers by reporting potential programming mistakes prior to the execution of a program. Funqual is a static analysis tool that reads C++17 code "in the wild" and checks that the function call graph follows a set of rules which can be defined by the user. This sort of analysis can help the programmer to avoid errors such as accidentally calling blocking functions in time-sensitive contexts or accidentally allocating memory in heap-sensitive environments. To accomplish this, we create a type system whereby functions can be given user-defined type qualifiers and users can define their own restrictions on the call graph based on these qualifiers. We demonstrate that this tool, when used with hand-crafted rules, can catch certain types of errors which commonly occur in the wild. We claim that this tool can be used in a production setting to catch certain kinds of errors in code before that code is even run.
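The kind of rule Funqual checks can be sketched generically (in Python rather than C++17, and with an invented API; Funqual itself checks the call graph statically, while this toy checks a list of extracted call edges): functions carry user-defined qualifiers, and a rule forbids calls from one qualifier to another.

```python
QUALIFIERS = {}
# Rule: a "realtime" function may not call a "blocking" function.
RULES = {("realtime", "blocking")}

def qualify(*tags):
    # Decorator standing in for user-defined type qualifiers on functions.
    def wrap(fn):
        QUALIFIERS[fn.__name__] = set(tags)
        return fn
    return wrap

def check_call_graph(edges):
    # edges: (caller, callee) name pairs extracted from the program.
    violations = []
    for caller, callee in edges:
        for cq in QUALIFIERS.get(caller, ()):
            for eq in QUALIFIERS.get(callee, ()):
                if (cq, eq) in RULES:
                    violations.append((caller, callee))
    return violations

@qualify("realtime")
def audio_callback(): ...

@qualify("blocking")
def read_file(): ...

print(check_call_graph([("audio_callback", "read_file")]))
# → [('audio_callback', 'read_file')]
```

The real tool derives the call edges and qualifier propagation from C++ source without running it; only the rule-checking shape is shown here.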
|