31

Modular State Handling in Aspect-Oriented Functional Languages / Design and Implementation of Aspects for Localizing Side-Effects

林佳瑩, Lin, Jia Yin Unknown Date (has links)
Computations performed in many typical aspects involve side effects. In a purely functional setting, adding such aspects using techniques such as monadification will generally lead to crosscutting changes. This thesis presents an approach to providing side-effecting aspects for pure, lazy functional languages in a user-transparent fashion. We propose a simple yet direct state-manipulation construct for developing side-effecting aspects, and devise a systematic monadification scheme that translates the woven code into purely monadic-style functional code. To preserve lazy evaluation, the monad employed is extended with cache functionality.
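To make the monadification idea concrete, here is a minimal Haskell sketch of the kind of transformation involved, assuming the mtl library's State monad; the counter example and all names are illustrative, not the thesis's actual rewriting rules:

```haskell
import Control.Monad.State

-- Pure code: a counter threaded through by hand.
nextLabel :: Int -> (String, Int)
nextLabel n = ("L" ++ show n, n + 1)

-- After monadification: the same computation in the State monad.
-- A side-effecting aspect can now observe or update the counter
-- without crosscutting edits at every call site.
nextLabelM :: State Int String
nextLabelM = do
  n <- get
  put (n + 1)
  return ("L" ++ show n)

main :: IO ()
main = print (runState (mapM (const nextLabelM) [1 .. 3 :: Int]) 0)
-- prints (["L0","L1","L2"],3)
```

The thesis's scheme goes further than this plain State monad: the monad is extended with a cache so that the laziness of the original program is preserved across the transformation.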
32

Automatic classification for spoken language understanding: towards semi-supervised and self-evolving systems

Gotab, Pierre 04 December 2012 (has links) (PDF)
Automatic speech understanding lies at the confluence of two major fields: automatic speech recognition and machine learning. One of the major problems in this area is obtaining a data corpus large enough to build high-performing statistical models. Speech corpora for training understanding models require significant human intervention, notably for transcription and semantic annotation; their production cost is high, which is why they are available only in limited quantities. This thesis aims primarily to reduce this need for human intervention in two ways: first, by reducing the amount of annotated corpus required to obtain a model, using semi-supervised learning techniques (self-training, co-training and active learning); and second, by exploiting the responses of the system's users to improve the understanding model. This last point touches on a second problem encountered by automatic speech understanding systems and addressed by this thesis: the need to regularly adapt their models to variations in user behaviour or to changes in the system's service offering.
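As an illustration of the semi-supervised idea, here is a schematic self-training loop in Haskell; the trainer and confidence-scoring classifier are abstract placeholders, not the models used in the thesis:

```haskell
-- A generic self-training loop: train on the annotated corpus,
-- label the pool, promote confident predictions to training data,
-- and repeat. Trainer and classifier are assumed parameters.
selfTrain
  :: Int                       -- maximum iterations
  -> Double                    -- confidence threshold
  -> ([(x, y)] -> m)           -- supervised trainer (assumed)
  -> (m -> x -> (y, Double))   -- classifier returning a confidence
  -> [(x, y)]                  -- small annotated corpus
  -> [x]                       -- large unannotated pool
  -> m
selfTrain 0 _   train _        labeled _    = train labeled
selfTrain k thr train classify labeled pool
  | null confident = model     -- nothing confident left: stop early
  | otherwise      = selfTrain (k - 1) thr train classify
                               (labeled ++ confident) rest
  where
    model     = train labeled
    scored    = [(x, classify model x) | x <- pool]
    confident = [(x, y) | (x, (y, c)) <- scored, c >= thr]
    rest      = [x | (x, (_, c)) <- scored, c < thr]
```

Co-training and active learning vary the same skeleton: the former cross-labels with two models trained on different views of the data, while the latter routes the least confident examples to a human annotator instead of discarding them.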
33

The Provision of Non-Strictness, Higher Kinded Types and Higher Ranked Types on an Object Oriented Virtual Machine

Hunt, Oliver January 2007 (has links)
We discuss the development of a number of algorithms and techniques that allow object-oriented virtual machines to support many of the features needed by functional and other higher-level languages. These features include non-strict evaluation, partial function application, and higher-ranked and higher-kinded types. To test the mechanisms we have developed, we have also produced a compiler that compiles the functional language Haskell to a native executable for the Common Language Runtime. This has allowed us to demonstrate that the techniques we have developed are practically viable.
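One standard way to provide non-strict evaluation on a strict object-oriented VM is to compile lazy values to explicit, memoizing thunk objects. The Haskell model below illustrates that object (an IORef standing in for the object's mutable field); it is a sketch of the general technique, not the representation the thesis's compiler actually emits:

```haskell
import Data.IORef

-- An explicit thunk: either a suspended computation or its value.
newtype Thunk a = Thunk (IORef (Either (IO a) a))

-- Suspend a computation without running it.
delay :: IO a -> IO (Thunk a)
delay m = Thunk <$> newIORef (Left m)

-- Demand the value, evaluating at most once (memoization).
force :: Thunk a -> IO a
force (Thunk ref) = do
  s <- readIORef ref
  case s of
    Right v -> return v
    Left m  -> do
      v <- m
      writeIORef ref (Right v)  -- overwrite suspension with result
      return v
```

Partial application can be handled with the same object-based strategy: a closure object stores the arguments received so far and only invokes the underlying method once its arity is satisfied.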
34

Improving scalability of exploratory model checking

Boulgakov, Alexandre January 2016 (has links)
As software and hardware systems grow more complex and we begin to rely more on their correctness and reliability, it becomes exceedingly important to formally verify certain properties of these systems. If done naïvely, verifying a system can easily require exponentially more work than running it, in order to account for all possible executions. However, there are often symmetries or other properties of a system that can be exploited to reduce the amount of necessary work. In this thesis, we present a number of approaches that do this in the context of the CSP model checker FDR. CSP is named for Communicating Sequential Processes, or parallel combinations of state machines with synchronised communications. In the FDR model, the component processes are typically converted to explicit state machines while their parallel combination is evaluated lazily during model checking. Our contributions are motivated by this model but applicable to other models as well.

We first address the scalability of the component machines by proposing a lazy compiler for a subset of CSP_M selected to model parameterised state machines. This is a typical case where the state space explosion can make model checking impractical, since the size of the state space is exponential in the number and size of the parameters. A lazy approach to evaluating these systems allows only the reachable subset of the state space to be explored. As an example, in studying security protocols, it is common to model an intruder parameterised by knowledge of each of a list of facts; even a relatively small 100 facts results in an intractable 2^100 states, but the rest of the system can ensure that only a small number of these states are reachable.

Next, we address the scalability of the overall combination by presenting novel algorithms for bisimulation reduction with respect to strong bisimulation, divergence-respecting delay bisimulation, and divergence-respecting weak bisimulation. Since a parallel composition is related to the Cartesian product of its components, performing a relatively time-consuming bisimulation reduction on the components can reduce its size significantly; an efficient bisimulation algorithm is therefore very desirable.

This thesis is motivated by practical implementations, and we discuss an implementation of each of the proposed algorithms in FDR. We thoroughly evaluate their performance and demonstrate their effectiveness.
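For illustration, strong-bisimulation equivalence classes can be computed by naive partition refinement in a few lines of Haskell; this quadratic sketch conveys the idea, not the efficient algorithms developed in the thesis:

```haskell
import Data.Function (on)
import Data.List (groupBy, nub, sort, sortOn)

type State = Int
type Label = String

-- Naive strong-bisimulation reduction by partition refinement:
-- start with one block, then repeatedly split blocks whose states
-- show different one-step behaviour with respect to the current
-- partition, until no block splits (a fixpoint).
bisimClasses :: [State] -> [(State, Label, State)] -> [[State]]
bisimClasses states trans = go [states]
  where
    blockOf part s = head [i | (i, b) <- zip [0 :: Int ..] part, s `elem` b]
    -- A state's signature: which labels reach which blocks.
    signature part s =
      nub (sort [(l, blockOf part t) | (s', l, t) <- trans, s' == s])
    go part =
      let part' = concatMap (groupBy ((==) `on` signature part)
                             . sortOn (signature part)) part
      in if length part' == length part then part else go part'
```

Quotienting each component by such classes before composing shrinks the Cartesian product of the parallel combination, which is why a fast bisimulation algorithm pays off in practice.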
35

Carnivalization in Ópera do Malandro: (inter)semiotic dialogues

Lima, Rafael Torres Correia 23 February 2017 (has links)
This work analyzes the play Ópera do Malandro, written by Chico Buarque, alongside the homonymous film directed by Ruy Guerra, using as theoretical basis the Russian-oriented Semiotics of Culture together with the concept of carnivalization, in order to modelize the image of the malandro (the Brazilian rogue stock figure) present in both semiotic languages. The main objectives of this research are: to analyze the multiple aspects of carnivalization present in both the play and the film; to re-signify the figure of the malandro from the point of view of the other characters; and to verify how the image of the malandro is constructed and deconstructed through the songs that make up the dramatic text. The research is supported mainly by the theories of carnivalization and the serio-comic genre postulated by Mikhail Bakhtin, while the analysis of the comic draws on the ideas of Propp and Bergson. The discussion of the Semiotics of Culture rests on Lótman and Machado, and Erika Fischer-Lichte, Veltruský and Kowzan provide the basis for the studies related to the semiotics of the theatre. Together, these theories form the conceptual basis of an analysis that aims to decodify and re-signify the process of carnivalization occurring in these two distinct semiotic languages, using the representation of the malandro as the mediating link between them.
36

NEMATIC LIQUID CRYSTAL GUEST-HOST SYSTEM FOR EYEWEAR AND RANDOM LASER APPLICATIONS

Shasti, Mansoureh 30 April 2019 (has links)
No description available.
37

Locality-Dependent Training and Descriptor Sets for QSAR Modeling

Hobocienski, Bryan Christopher 21 September 2020 (has links)
No description available.
38

Designing, Modeling, and Optimizing Transactional Data Structures

Hassan, Ahmed Mohamed Elsayed 25 September 2015 (has links)
Transactional memory (TM) has emerged as a promising synchronization abstraction for multi-core architectures. Unlike traditional lock-based approaches, TM shifts the burden of implementing thread synchronization from the programmer to an underlying framework using hardware (HTM) and/or software (STM) components. Although TM can be leveraged to implement transactional data structures (i.e., those where multiple operations are allowed to execute atomically, all-or-nothing, according to the transaction paradigm), its intensive speculation may result in significantly lower performance than optimized concurrent data structures. This poor performance motivates the need to find other, more effective alternatives for designing transactional data structures without losing the simple programming abstraction proposed by TM.

To do so, we identified three major challenges that need to be addressed to design efficient transactional data structures. The first challenge is composability, namely allowing an atomic execution of two or more data structure operations in the same way as TM provides, but without its high overheads. The second challenge is integration, which enables the execution of data structure operations within generic transactions that may contain other memory-based operations. The last challenge is modeling, which encompasses the necessity of defining a unified formal methodology to reason about the correctness of transactional data structures. In this dissertation, we propose different approaches to address the above challenges. First, we address the composability challenge by introducing an optimistic methodology to efficiently convert concurrent data structures into transactional ones. Second, we address the integration challenge by injecting the semantic operations of those transactional data structures into TM frameworks, and by presenting two novel STM algorithms in order to enhance the overall performance of those frameworks. Finally, we address the modeling challenge by presenting two models for concurrent and transactional data structure designs.

• Our first main contribution in this dissertation is optimistic transactional boosting (OTB), a methodology for designing transactional versions of highly concurrent optimistic (i.e., lazy) data structures. An earlier (pessimistic) boosting proposal added a layer of abstract locks on top of existing concurrent data structures. Instead, we propose an optimistic boosting methodology, which allows greater data structure-specific optimizations, easier integration with TM frameworks, and fewer restrictions on the operations than the original (more pessimistic) boosting methodology. Based on the proposed OTB methodology, we implement the transactional version of two list-based data structures (i.e., set and priority queue). Then, we present TxCF-Tree, a balanced tree whose design is optimized to support transactional accesses. The core optimizations of TxCF-Tree's operations are: providing a traversal phase that uses no locks or speculation, and deferring lock acquisition or physical modification to the transaction's commit phase; isolating structural operations (such as re-balancing) in an interference-less housekeeping thread; and minimizing the interference between structural operations and the critical path of semantic operations (i.e., additions and removals on the tree).

• Our second main contribution is to integrate OTB with both STM and HTM algorithms. For STM, we extend the design of both DEUCE, a Java STM framework, and RSTM, a C++ STM framework, to support the integration with OTB. Using our extension, programmers can include both OTB data structure operations and traditional memory reads/writes in the same transaction. Results show that OTB performance is closer to the optimal lazy (non-transactional) data structures than the original boosting algorithm. On the HTM side, we introduce a methodology to inject semantic operations into the well-known hybrid transactional memory algorithms (e.g., HTM-GL, HyNOrec, and NOrecRH). In addition, we enhance the proposed semantically-enabled HTM algorithms with a lightweight adaptation mechanism that allows bypassing the HTM paths if the overhead of the semantic operations causes repeated HTM aborts. Experiments on micro- and macro-benchmarks confirm that our proposals outperform the other TM solutions in almost all the tested workloads.

• Our third main contribution is to enhance the performance of TM frameworks in general by introducing two novel STM algorithms. Remote Transaction Commit (RTC) is a mechanism for executing the commit phases of STM transactions on dedicated server cores. RTC shows significant improvements compared to its corresponding validation-based STM algorithm (up to 4x better), as it decreases the overhead of spin locking during commit, in terms of cache misses, blocking of lock holders, and CAS operations. Remote Invalidation (RInval) applies the same idea as RTC to invalidation-based STM algorithms. Furthermore, it allows more concurrency by executing commit and invalidation routines concurrently on different servers. RInval performs up to 10x better than its corresponding invalidation-based STM algorithm (InvalSTM), and up to 2x better than its corresponding validation-based algorithm (NOrec).

• Our fourth and final main contribution is to provide a theoretical model for concurrent and transactional data structures. We exploit the similarities of the OTB-based data structures and provide a unified model to reason about the correctness of those designs. Specifically, we extend a recent approach that models data structures with concurrent readers and a single writer (called SWMR), and we propose two novel models that additionally allow multiple writers and transactional execution. These models are more practical because they cover a wider set of data structures than the original SWMR model.
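Haskell's STM makes the composability challenge easy to see: the sketch below, assuming the stm library, composes two set operations into one atomic transaction. It speculates on the whole structure through a single TVar, precisely the coarse speculation that semantic techniques such as boosting avoid, and is an illustration of composability, not OTB itself:

```haskell
import Control.Concurrent.STM

-- A set as a TVar'd sorted list: simple, but every access conflicts
-- on the one TVar, so unrelated operations abort each other.
newtype TSet a = TSet (TVar [a])

newTSet :: IO (TSet a)
newTSet = TSet <$> newTVarIO []

insert :: Ord a => TSet a -> a -> STM ()
insert (TSet v) x = modifyTVar' v ins
  where
    ins [] = [x]
    ins (y:ys)
      | x < y     = x : y : ys
      | x == y    = y : ys        -- already present
      | otherwise = y : ins ys

delete :: Ord a => TSet a -> a -> STM ()
delete (TSet v) x = modifyTVar' v (filter (/= x))

-- Composability: two operations on two structures, all-or-nothing.
move :: Ord a => TSet a -> TSet a -> a -> STM ()
move from to x = delete from x >> insert to x

main :: IO ()
main = do
  s1 <- newTSet
  s2 <- newTSet
  atomically (mapM_ (insert s1) [3, 1, 2 :: Int])
  atomically (move s1 s2 2)      -- atomic across both sets
  let TSet v = s2
  print =<< readTVarIO v         -- [2]
```

Semantic approaches such as boosting keep the `move`-style atomicity while detecting conflicts at the level of set operations (two inserts of different keys commute) rather than at the level of every memory word touched during the list traversal.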
39

Implementation of futures on a message-passing distributed system

Lasalle-Ratelle, Jérémie 08 1900 (has links)
This master's thesis presents an implementation of lazy task creation for distributed-memory multiprocessors. It offers a subset of the Message-Passing Interface's functionality and, thanks to its dynamic partitioning and load-balancing system, allows the parallelization of some problems that are hard to partition statically. It is based on Multilisp, a Scheme dialect oriented towards parallel computing, and implements an MPI-like interface on top of it for distributed multi-process computation. This system offers a much richer and more expressive language than C and considerably reduces the work a programmer needs to develop programs equivalent to those in MPI. Finally, dynamic partitioning makes it possible to write programs that would be very complex to realize in MPI. Tests were run on a 16-processor local machine and on a 16-processor cluster; the system achieves good speedups compared to equivalent sequential programs and acceptable performance relative to MPI. This thesis demonstrates that using futures as a dynamic partitioning technique is feasible on distributed-memory multiprocessors.
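A future can be modelled in a few lines of Haskell (forkIO plus an MVar). This toy version spawns a thread eagerly; lazy task creation, as implemented in the thesis, instead defers materializing the task until another processor is actually available to steal it, which keeps task-creation overhead low:

```haskell
import Control.Concurrent

-- A future: the computation runs concurrently; touching the future
-- blocks until its value is ready.
newtype Future a = Future (MVar a)

future :: IO a -> IO (Future a)
future act = do
  box <- newEmptyMVar
  _ <- forkIO (act >>= putMVar box)   -- eager spawn (toy version)
  return (Future box)

touch :: Future a -> IO a
touch (Future box) = readMVar box     -- non-destructive read

main :: IO ()
main = do
  f <- future (return (sum [1 .. 1000000 :: Int]))
  -- ...the parent can do other work here in parallel...
  print =<< touch f
```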
40

Optimization of memory management on distributed machine

Ha, Viet Hai 05 October 2012 (has links) (PDF)
In order to further explore the capabilities of parallel computing architectures such as grids, clusters, multi-processors and, more recently, clouds and multi-cores, an easy-to-use parallel language is an important and challenging issue. From the programmer's point of view, OpenMP is very easy to use, with its ability to support incremental parallelization and its features for dynamically setting the number of threads and scheduling strategies. However, as it was initially designed for shared-memory systems, OpenMP is usually limited on distributed-memory systems to intra-node computations. Many attempts have been made to port OpenMP to distributed systems. The most prominent approaches mainly focus on exploiting the capabilities of a special network architecture and therefore cannot provide an open solution. Others are based on an already available software solution such as DSM, MPI or Global Arrays and, as a consequence, meet difficulties in becoming a fully compliant and high-performance implementation of OpenMP.

As yet another attempt to build an OpenMP-compliant implementation for distributed-memory systems, CAPE − which stands for Checkpointing Aided Parallel Execution − has been developed with the following idea: when reaching a parallel section, the master thread is dumped and its image is sent to the slaves; then, each slave executes a different thread; at the end of the parallel section, the slave threads extract and return to the master thread the list of all modifications that have been performed locally; the master includes these modifications and resumes its execution. In order to prove the feasibility of this paradigm, the first version of CAPE was implemented using complete checkpoints. However, preliminary analysis showed that the large amount of data transferred between threads and the extraction of the list of modifications from complete checkpoints lead to weak performance. Furthermore, this version was restricted to parallel problems satisfying Bernstein's conditions, i.e. it did not address the requirements of shared data.

This thesis presents the approaches we proposed to improve CAPE's performance and to overcome the restrictions on shared data. First, we developed DICKPT (Discontinuous Incremental Checkpointing), an incremental checkpointing technique that supports saving incremental checkpoints discontinuously during the execution of a process. Based on DICKPT, the execution speed of the new version of CAPE was significantly increased. For example, the time to compute a large matrix-matrix product on a desktop cluster has become very similar to the execution time of the same optimized MPI program. Moreover, the speedup associated with this new version for various numbers of threads is quite linear for different problem sizes. On the shared-data side, we proposed UHLRC (Updated Home-based Lazy Release Consistency), a modified version of the Home-based Lazy Release Consistency (HLRC) memory model, to make it more appropriate to the characteristics of CAPE. Prototypes and algorithms to implement the synchronization and the OpenMP data-sharing clauses and directives are also specified. These two contributions ensure CAPE's ability to respect shared-data behavior.
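The extract-and-merge step at the heart of CAPE can be modelled on toy memory images, with Data.Map standing in for process memory; this is an illustration of the idea, not DICKPT itself:

```haskell
import qualified Data.Map.Strict as M

type Addr = Int
type Mem  = M.Map Addr Int   -- toy memory image: address -> word

-- Write-set of a slave relative to the master image it started from:
-- keep addresses that are new or whose contents changed.
modifications :: Mem -> Mem -> Mem
modifications base new =
  M.differenceWith (\n b -> if n == b then Nothing else Just n) new base

-- The master folds each slave's write-set back into its own image at
-- the end of the parallel section (left-biased union: writes win).
-- Assumes slaves wrote disjoint addresses (Bernstein's conditions);
-- handling overlapping shared writes is what UHLRC addresses.
merge :: Mem -> [Mem] -> Mem
merge = foldl (flip M.union)

main :: IO ()
main = do
  let master = M.fromList [(0, 10), (1, 20), (2, 30)]
      slave  = M.insert 3 99 (M.insert 1 21 master)  -- wrote addrs 1, 3
  print (modifications master slave)   -- fromList [(1,21),(3,99)]
  print (merge master [modifications master slave])
```

Sending only such write-sets, instead of complete checkpoints, is what makes the incremental approach competitive with hand-written MPI for workloads like the matrix product mentioned above.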
