651 |
[pt] APOIO À TRANSFERÊNCIA DE CONHECIMENTO DE RACIOCÍNIO COMPUTACIONAL DE LINGUAGENS DE PROGRAMAÇÃO VISUAIS PARA LINGUAGENS DE PROGRAMAÇÃO TEXTUAIS / [en] SUPPORT FOR COMPUTATIONAL THINKING KNOWLEDGE TRANSFER FROM VISUAL PROGRAMMING LANGUAGES TO TEXTUAL PROGRAMMING LANGUAGES
JOAO ANTONIO DUTRA MARCONDES BASTOS, 28 January 2016 (has links)
[pt] Produzir tecnologia tem se mostrado uma habilidade cada vez mais indispensável na sociedade moderna. Os usuários estão deixando de ser simples consumidores e passando a ser produtores, usando a tecnologia para expressarem suas ideias. Nesse contexto, o aprendizado do chamado raciocínio computacional deve ser tão importante quanto o de disciplinas básicas, como a leitura, a escrita e a aritmética. Ao desenvolver tal habilidade o aluno vai conseguir se expressar através do software. Diversos projetos ao redor do mundo têm suas tecnologias e didáticas próprias a fim de auxiliar o aluno a desenvolver tal capacidade. Porém, sabemos que em um contexto que está em constante evolução como é o caso da informática, não podemos deixar que o aluno fique preso a uma única ferramenta ou meio de se expressar. Ferramentas podem ficar obsoletas e ele perderia seu poder de produtor de tecnologia. Pensando nisso, foi elaborado um modelo de transferência do aprendizado do raciocínio computacional a ser incorporado a sistemas de documentação ativa que apoiam o ensino-aprendizado desta habilidade. O modelo auxiliará o designer na criação de um artefato tecnológico que seja capaz de ajudar alunos e professores a aprenderem uma nova linguagem de programação. O modelo, que é baseado na Engenharia Semiótica, é a principal contribuição científica dessa dissertação de mestrado. / [en] Producing technology has become an increasingly essential ability in modern society. Users are no longer simple consumers but also technology producers, using technology to express their ideas. In this context, learning the so-called computational thinking should be as important as learning basic disciplines such as reading, writing, and arithmetic. As the student develops this ability, he or she becomes able to express ideas through software. Many projects around the world have their own technologies and pedagogy to help the student develop such capacity. However, we know that in a context that is constantly evolving, as is the case of informatics, we cannot allow the student to be tied to a single tool or means of expression. Tools may become obsolete, and students would lose their status as technology producers. With this in mind, we designed a learning transfer model of computational thinking, to be incorporated into active documentation systems that support the teaching and learning of this skill. The model will assist the designer in creating a technological artifact that helps students and teachers learn a new programming language. The model, which is based on Semiotic Engineering, is the main scientific contribution of this master's dissertation.
|
652 |
Determiningeons: a computer program for approximating Lie generators admitted by dynamical systems
Nagao, Gregory G., 01 January 1980 (has links) (PDF)
As was recognized by some of the most reputable physicists of the world, such as Galileo and Einstein, the basic laws of physics must inevitably be founded upon invariance principles. Galilean and special relativity stand as historical landmarks that emphasize this message. It is no wonder that the great developments of modern physics (such as those in elementary particle physics) have been keyed upon this concept.
The modern formulation of classical mechanics (see Abraham and Marsden [1]) is based upon "qualitative" or geometric analysis. This is primarily due to the works of Poincaré, who showed the value of such geometric analysis in the solution of otherwise insoluble problems in stability theory. The insights of Poincaré have since proven fruitful in the now-famous works of Kolmogorov, Arnold, and Moser. The concepts used in this geometric theory are again based upon invariance principles, or symmetries.
The work of Sophus Lie from 1873 to 1893 laid the groundwork for the analysis of invariance or symmetry principles in modern physics. His primary studies were those of partial differential equations, which led him to the theory of transformations and inevitably to the analysis of abstract groups and differential geometry. Here we show some further applications of Lie group theory through the use of transformation groups. We emphasize the use of transformation invariance to find conservation laws and dynamical properties in chemical physics.
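The connection between invariance and conservation laws that the abstract appeals to can be stated compactly. As an illustrative sketch in standard Hamiltonian notation (not the thesis's own development): if a function G generates a one-parameter symmetry of the Hamiltonian H, then G is conserved along trajectories.

```latex
% Time evolution of a phase-space function G(q, p) under Hamiltonian H:
\dot{G} \;=\; \{G, H\}
       \;=\; \sum_i \left(
              \frac{\partial G}{\partial q_i}\frac{\partial H}{\partial p_i}
            - \frac{\partial G}{\partial p_i}\frac{\partial H}{\partial q_i}
             \right),
\qquad
\{G, H\} = 0 \;\Longrightarrow\; G \text{ is a conserved quantity.}
```

The same Poisson-bracket condition, read in the other direction, says that H is invariant under the transformation group generated by G, which is the sense in which symmetries and conservation laws are two views of one fact.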
|
653 |
Compilation Techniques, Algorithms, and Data Structures for Efficient and Expressive Data Processing Systems
Supun Madusha Bandara Abeysinghe Tennakoon Mudiyanselage (17454786), 30 November 2023 (has links)
The proliferation of digital data, driven by factors like social media and e-commerce, has created an increasing demand for highly processed data at higher levels of fidelity, which puts increasing demands on modern data processing systems. In the past, data processing systems faced bottlenecks due to limited main memory availability. However, as main memory becomes more abundant, their optimization focus has shifted from disk I/O to optimized computation through techniques like compilation. This dissertation addresses several critical limitations within such compilation-based data processing systems.

In modern data analytics pipelines, combining workloads from various paradigms, such as traditional DBMS and machine learning, is common. These pipelines are typically managed by specialized systems designed for specific workload types. While these specialized systems optimize their individual performance, substantial performance loss occurs when they are combined to handle mixed workloads. This loss is mainly due to overheads at system boundaries, including data copying and format conversions, as well as the general inability to perform cross-system optimizations.

This dissertation tackles this problem from two angles. First, it proposes an efficient post-hoc integration of individual systems using generative programming via the construction of common intermediate layers. This approach preserves the best-of-breed performance of individual workloads while achieving state-of-the-art performance for combined workloads. Second, we introduce a high-level query language capable of expressing various workload types, acting as a general substrate to implement combined workloads. This allows the generation of optimized code for end-to-end workloads through the construction of an intermediate representation (IR).

The dissertation then shifts focus to data processing systems used for incremental view maintenance (IVM). While existing IVM systems achieve high performance through compilation and novel algorithms, they have limitations in handling specific query classes. Notably, they are incapable of handling queries involving correlated nested aggregate subqueries. To address this, our work proposes a novel indexing scheme based on a new data structure and a corresponding set of algorithms that fully incrementalize such queries. This approach results in substantial asymptotic speedups and order-of-magnitude performance improvements for workloads of practical importance.

Finally, the dissertation explores efficient and expressive fixed-point computations, with a focus on Datalog, a language widely used for declarative program analysis. Although existing Datalog engines rely on compilation and specialized code generation to achieve performance, they lack the flexibility to support extensions required for complex program analysis. Our work introduces a new Datalog engine built using generative programming techniques that offers both flexibility and state-of-the-art performance through specialized code generation.
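The fixed-point computations at the heart of Datalog evaluation can be illustrated with a minimal sketch. The Python below is purely illustrative (it is not the dissertation's generative-programming engine): facts are derived by applying the rules repeatedly until no new facts appear.

```python
# Illustrative sketch of Datalog-style fixed-point evaluation over the rules:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- path(X, Y), edge(Y, Z).
def transitive_closure(edges):
    path = set(edges)                                # first rule seeds the relation
    while True:
        derived = {(x, z)
                   for (x, y) in path
                   for (y2, z) in edges if y == y2}  # join path with edge
        if derived <= path:                          # fixed point: nothing new
            return path
        path |= derived                              # accumulate and iterate

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# → [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

A production engine would use semi-naive evaluation, joining only the facts newly derived in the previous round; the naive loop above keeps the sketch short.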
|
654 |
[pt] REVISITANDO MONITORES / [en] REVISITING MONITORS
RENAN ALMEIDA DE MIRANDA SANTOS, 13 August 2020 (has links)
[pt] A maioria das linguagens de programação modernas fornece ferramentas para programação concorrente sem restringir seu uso. Assim, fica a cargo do programador evitar a ocorrência de condições de corrida. Nessa dissertação, revisitamos o modelo de monitores, projetados para prevenir condições de corrida ao limitar o acesso a variáveis compartilhadas, e mostramos que monitores podem ser implementados em linguagens de programação com semântica referencial, dadas as regras de tipagem apropriadas. Nós descrevemos a linguagem de programação Aria, projetada com monitores nativos seguindo a proposta original do modelo. Através da resolução de problemas clássicos de concorrência, nós avaliamos o uso de monitores em Aria para sincronização em diferentes níveis de granularidade, e estendemos a linguagem com novos recursos a fim de contemplar as limitações do modelo envolvendo desempenho e expressividade. / [en] Most current programming languages do not restrict the use of the concurrency primitives they provide, leaving it to the programmer to avoid data races. In this dissertation, we revisit the monitor model, which guards against data races by guaranteeing that accesses to shared variables occur only inside monitors, and show that this concept can be implemented in a programming language with referential semantics, given appropriate typing rules. We describe the Aria programming language, designed with native monitors according to these rules. Through the discussion of classic concurrency problems, we evaluate the use of Aria monitors for synchronization at different levels of granularity and extend the language with new features to address the limitations of monitors regarding performance and expressiveness.
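The monitor discipline the abstract describes can be sketched in a few lines. This is a hypothetical Python analogue, not Aria (whose typing rules enforce the discipline statically): shared state is reachable only through methods that acquire the monitor's lock.

```python
import threading

# Hypothetical sketch of the monitor model: every access to the shared
# variable goes through a method that holds the monitor lock, so data
# races on _value are impossible by construction.
class Counter:
    def __init__(self):
        self._lock = threading.Lock()   # monitor lock guarding all shared state
        self._value = 0                 # shared variable, never touched directly

    def increment(self):
        with self._lock:                # each entry point acquires the lock
            self._value += 1

    def read(self):
        with self._lock:
            return self._value

c = Counter()
threads = [threading.Thread(target=c.increment) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.read())  # → 8
```

Python cannot statically forbid direct access to `_value` from outside the class; that gap is exactly what a language with monitor-aware typing rules closes.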
|
655 |
Type-Safety for Inverse Imaging Problems
Moghadas, Maryam, 10 1900 (has links)
This thesis gives a partial answer to the question: "Can type systems detect modeling errors in scientific computing, particularly for inverse problems derived from physical models?" by considering, in detail, the major aspects of inverse problems in Magnetic Resonance Imaging (MRI). We define a type system that can capture all correctness properties for MRI inverse problems, including many properties that are not captured with current type systems, e.g., frames of reference. We implemented a type system in the Haskell language that can capture the errors arising in translating a mathematical model into a linear or nonlinear system, or alternatively into an objective function. Most models are (or can be approximated by) linear transformations, and we demonstrate the feasibility of capturing their correctness at the type level using what is arguably the most difficult case, the (discrete) Fourier transformation (DFT). By this, we mean that we are able to catch, at compile time, all known errors in applying the DFT. The first part of this thesis describes the Haskell implementation of vector size, physical units, frame of reference, and so on, required in the mathematical modelling of inverse problems without regularization. To practically solve most inverse problems, especially those including noisy data or ill-conditioned systems, one must use regularization. The second part of this thesis addresses the question of defining new regularizers and identifying existing regularizers the correctness of which (in our estimation) can be formally verified at the type level. We describe such Bayesian regularization schemes based on probability theory, and describe a novel simple regularizer of this type. We leave as future work the formalization of such regularizers. / Master of Science (MSc)
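As a loose illustration of one property the thesis captures, frames of reference, here is a hypothetical Python sketch that catches the same class of error at run time rather than at compile time as the thesis's Haskell type system does; the class and frame names are invented for the example.

```python
# Hypothetical runtime analogue of frame-of-reference checking: vectors are
# tagged with a frame (e.g. "image" vs "k-space" in MRI) and refuse to
# combine across frames, the error a type-level encoding rejects at compile time.
class FrameVector:
    def __init__(self, frame, data):
        self.frame = frame
        self.data = list(data)

    def __add__(self, other):
        if self.frame != other.frame:
            raise TypeError(f"frame mismatch: {self.frame} vs {other.frame}")
        return FrameVector(self.frame,
                           (a + b for a, b in zip(self.data, other.data)))

a = FrameVector("image", [1.0, 2.0])
b = FrameVector("image", [3.0, 4.0])
print((a + b).data)  # → [4.0, 6.0]

k = FrameVector("k-space", [5.0, 6.0])
try:
    a + k                       # mixing frames is a modeling error
except TypeError as err:
    print("rejected:", err)
```

The thesis's contribution is stronger than this sketch: in Haskell the mismatch is a type error, so the program never compiles, let alone runs.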
|
656 |
AN INQUIRY INTO THE APPLICABILITY OF KANTOROVICH'S APPROACH TO THE THERMODYNAMIC OPTIMIZATION
Dai, Cong, 10 1900 (has links)
The purpose of this research has been to reassess the Ag-Mg system using the CALPHAD technique. Compared with previous assessments, we carry out the optimization by fitting calculations to the original data instead of second-hand information. Moreover, we use a two-sub-lattice model and a four-sub-lattice model based on compound energy formalism to simulate both first-order and second-order transformations between the FCC phase and the L1₂ phase. Undoubtedly, the CALPHAD technique has achieved a degree of maturity, but its deficiencies are regularly ignored.

In this thesis, we develop an interval method based on Kantorovich's idea to overcome the shortcomings of the CALPHAD technique. Both advantages and disadvantages of the interval method are discussed. We also present an example of the interval approach on thermodynamic optimization of the Ag-Mg melt. The results suggest that this method would be helpful as a pre-optimization tool. / Master of Applied Science (MASc)
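The interval arithmetic on which such a pre-optimization method rests can be sketched briefly. This is a minimal, hypothetical Python illustration (not the thesis's actual optimization code): every quantity is carried as an enclosure [lo, hi], and each operation returns bounds guaranteed to contain the true result.

```python
# Minimal interval-arithmetic sketch: operations propagate guaranteed bounds,
# so an optimizer can discard parameter regions whose enclosure excludes the data.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # the product's bounds come from the four endpoint products
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

x = Interval(1.0, 2.0)
y = Interval(-1.0, 3.0)
z = x * y + x
print(z.lo, z.hi)  # → -1.0 8.0
```

Note the characteristic conservatism: the enclosure is never too small, but repeated operations can make it wider than the true range (the dependency problem), one of the disadvantages an interval method must weigh.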
|
657 |
HDArray: PARALLEL ARRAY INTERFACE FOR DISTRIBUTED HETEROGENEOUS DEVICES
Hyun Dok Cho (18620491), 30 May 2024 (has links)
Heterogeneous clusters with nodes containing one or more accelerators, such as GPUs, have become common. While MPI provides inter-address-space communication, and OpenCL provides a process with access to heterogeneous computational resources, programmers are forced to write hybrid programs that manage the interaction of both of these systems. This thesis describes an array programming interface that provides users with automatic and manual distributions of data and work. Using work distribution and kernel def-use information, communication among processes, and among devices within a process, is performed automatically. By providing a unified programming model to the user, program development is simplified.
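The kind of automatic data distribution described can be made concrete with a small sketch. The following Python functions are hypothetical (the abstract does not show HDArray's actual interface) and illustrate a plain 1-D block distribution of n array elements over p devices, the bookkeeping from which ownership, and hence required communication, is derived.

```python
# Hypothetical 1-D block distribution: n elements split into contiguous
# slices of ceil(n/p) elements, one slice per device.
def block_owner(index, n, p):
    """Return the device rank that owns global element `index`."""
    block = -(-n // p)              # ceiling division: elements per device
    return index // block

def local_range(rank, n, p):
    """Return the [start, stop) slice of the global array held by `rank`."""
    block = -(-n // p)
    return rank * block, min((rank + 1) * block, n)

print(block_owner(7, 10, 4))   # → 2
print(local_range(3, 10, 4))   # → (9, 10)
```

Once a kernel's def-use sets are known, a runtime can compare each used index's owner against the executing device and schedule transfers only for the mismatches, which is the sense in which communication can be derived automatically from the distribution.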
|
658 |
Language-Based Techniques for Policy-Agnostic Oblivious Computation
Qianchuan Ye (18431691), 28 April 2024 (has links)
Protecting personal information is growing increasingly important to the general public, to the point that major tech companies now advertise the privacy features of their products. Despite this, it remains challenging to implement applications that do not leak private information either directly or indirectly, through timing behavior, memory access patterns, or control-flow side channels. Existing security and cryptographic techniques such as secure multiparty computation (MPC) provide solutions to privacy-preserving computation, but they can be difficult to use even for experts, let alone non-experts.

This dissertation develops the design, theory, and implementation of various language-based techniques that help programmers write privacy-critical applications under a strong threat model. The proposed languages support private structured data, such as trees, that may hide their structural information, and complex policies that go beyond whether a particular field of a record is private. More crucially, the approaches described in this dissertation decouple privacy and programmatic concerns, allowing programmers to implement privacy-preserving applications modularly, i.e., to independently develop application logic and independently update and audit privacy policies. Secure-by-construction applications are derived automatically by combining a standard program with a separately specified security policy.
|
659 |
Towards Better Language Models: Algorithms, Architectures, and Applications
Wu, Qingyang, January 2024 (has links)
This thesis explores the advancement of language models by focusing on three important perspectives: Algorithms, Architectures, and Applications. We aim to improve the performance, efficiency, and practical usage of these language models. Specifically, we studied reinforcement learning for language models, recurrent memory-augmented transformers, and practical applications in text generation and dialogue systems.
Firstly, we address the limitations of the traditional training algorithm, maximum likelihood estimation (MLE). We propose TextGAIL, a generative adversarial imitation learning framework that combines large pre-trained language models with adversarial training to improve the quality and diversity of generated text. We further explore a modern reinforcement learning from human feedback (RLHF) pipeline to more effectively align language model outputs with human preferences.
Next, we investigate architecture improvements with Recurrent Memory-Augmented Transformers. In this direction, we first introduce Memformer, an autoregressive model that utilizes an external dynamic memory for efficient long-sequence processing. We build upon Memformer and propose MemBART, a stateful memory-augmented Transformer encoder-decoder model. Recurrent Memory-Augmented Transformers demonstrate superior performance and efficiency in handling long contexts compared to traditional Transformer architectures.
Finally, we make several contributions to effectively applying language models to dialogue systems in practice. We design task-oriented dialogue systems that leverage pre-trained language models to significantly reduce the need for human annotations. We also introduce DiactTOD, a novel approach to improving the out-of-distribution generalization of dialogue-act-controlled generation in task-oriented systems. We further expand the scope of traditional task-oriented dialogue systems by proposing a novel paradigm that utilizes external knowledge tools to provide more accurate knowledge. Our penultimate application tackles the data-scarcity problem common in many real-world dialogue systems: we propose an automatic data augmentation technique to improve training efficacy. Lastly, we improve end-user experiences by presenting FaceChat, a multimodal dialogue framework enabling emotionally sensitive, face-to-face interactions, demonstrating the potential of multimodal language models in various applications.
This work highlights the significance of building better language models and shows how these improvements can positively impact a wide range of downstream tasks and applications, providing valuable insights and methodologies for developing more powerful and efficient models.
|
660 |
Automating Formal Verification of Distributed Systems via Property-Driven Reductions
Christopher Wagner (20817524), 05 March 2025 (has links)
Distributed protocols, with their immense state spaces and complex behaviors, have long been popular targets for formal verification. Cutoff reductions offer an enticing path for verifying parameterized distributed systems, composed of arbitrarily many processes. While parameterized verification (i.e., algorithmically checking correctness of a system with an arbitrary number of processes) is generally undecidable, these reductions allow one to verify certain classes of parameterized systems by reducing verification of an infinite family of systems to that of a single finite instance. The finiteness of the resulting target system enables fully automated verification of the entire unbounded system family. In this work, we aim to establish pathways for automated verification via cutoff reductions which emphasize a modular approach to establishing correctness.

First, we consider distributed, agreement-based (DAB) systems, that is, systems built on top of agreement protocols such as consensus and leader election. While much attention has been paid to the correctness of the protocols themselves, relatively little consideration has been given to systems which utilize these protocols to achieve some higher-level functionality. To this end, we present the GSP model, a system model based on two types of globally synchronous transitions: k-sender and k-maximal, the latter of which was introduced by this author. This model enables us to formalize systems built on distributed consensus and leader election, and to define conditions under which such systems may be verified automatically, despite the involvement of an arbitrary number of participant processes (a problem which is generally undecidable). Further, we identify conditions under which these systems can be verified efficiently, and provide proofs of their correctness developed in part by this author. We then present QuickSilver, a user-friendly framework for designing and verifying parameterized DAB systems, and, on this author's suggestion, lift the GSP decidability results to QuickSilver using this author's notion of "phase analysis", which determines when the behavior of all processes in the system can be separated into sections of their control flow.

Next, we address verification of systems beyond agreement-based protocols. We find that, among parameterized systems, a class of systems we refer to as star-networked systems has received limited attention as the subject of cutoff reductions. These systems combine heterogeneous client and server process definitions with both pairwise and broadcast communication, so they often fall outside the requirements of existing cutoff computations. We address these challenges in a novel cutoff reduction based on careful analysis of the interactions between a central process and an arbitrary number of peripheral client processes as they progress toward an error state. The key to our approach rests on identifying systems in which the central process coordinates primarily with a finite number of core client processes; outside of such core clients, the system's progress can be enabled by a finite number of auxiliary clients.

Finally, we examine systems that are doubly unbounded, in particular, parameterized DAB systems that additionally have unbounded data domains. We present a novel reduction which leverages value symmetry and a new notion of data saturation to reduce verification of doubly unbounded DAB systems to model checking of small, finite-state systems. We also demonstrate that this domain reduction can be applied beyond DAB systems, including to star-networked systems.

We implement our reductions in several frameworks to enable efficient verification of sophisticated DAB and star-networked system models, including the arbitration mechanism for a consortium blockchain, a simple key-value store, and a lock server. We show that, by reducing the complexity of verification problems, cutoff reductions open up avenues for the application of a variety of verification techniques, including further reduction.
|