241

aiLangu - Real-time Transcription and Translation to Reduce Language Barriers : An Engineering Project to Develop an Application for Enhancing Human Verbal Communication / aiLangu - realtids transkribering och översättning för att reducera språkbarriärer : Ett ingenjörsarbete som utvecklar en applikation för att förbättra mänsklig verbal kommunikation

Ringström, Vincent, Alvarez Funcke, Iley January 2023 (has links)
The research area this report relates to is real-time automatic transcription and translation. The purpose of the work is to reduce perceived language barriers online and to build a user-friendly application that uses recent deep learning technology to transcribe and translate in real time. Such an application could be used in a work environment (especially when working from home) and for leisure activities such as watching videos. There is currently most likely no application that uses automatic speech recognition in this way: the most similar applications found resemble Google Translate, which is not meant for real-time usage on a computer, but rather waits for the complete input and only then writes out the result. The application created for this purpose is a desktop application that combines OpenAI's Whisper model for transcription with Argos Translate for translation, behind a user-friendly GUI built with Java Swing. An iterative and incremental methodology was used for both the GUI design and the software development. In the end, the development was successful, resulting in a working desktop application that accomplishes the goals of transcribing and translating in real time through a user-friendly interface, which could, for example, easily be used for digital meetings or online videos.
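As a rough sketch of the pipeline this abstract describes, the snippet below chains Whisper transcription into Argos Translate. The package names and core calls are real, but the chunked streaming loop, model size, and language pair are assumptions; the thesis's actual Java Swing application is not reproduced here.

```python
# Hedged sketch: transcribe audio chunks with Whisper, translate with Argos
# Translate. Assumes both packages (and the en->sv Argos language pair) are
# installed; the queue-based chunking is an illustrative assumption.
import queue

import whisper                      # pip install openai-whisper
import argostranslate.translate     # pip install argostranslate

model = whisper.load_model("base")  # small model keeps latency near real time

def transcribe_and_translate(audio_chunks: queue.Queue,
                             src: str = "en", dst: str = "sv"):
    """Consume audio chunks (file paths or audio arrays) and yield
    (transcript, translation) pairs as they become available."""
    while True:
        chunk = audio_chunks.get()
        if chunk is None:           # sentinel: end of stream
            break
        text = model.transcribe(chunk, language=src)["text"]
        translated = argostranslate.translate.translate(text, src, dst)
        yield text, translated
```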
242

Mitigating serverless cold starts through predicting computational resource demand : Predicting function invocations based on real-time user navigation

Persson, Gustav, Branth Sjöberg, William January 2023 (has links)
Serverless functions have emerged as a prominent paradigm in software deployment, providing automated resource scaling and thus demand-based operational expenses. One of the most significant challenges associated with serverless functions is the cold start delay, which prevents organisations with latency-critical web applications from adopting serverless technology. Existing research on the cold start problem primarily focuses on mitigating the delay by modifying and optimising serverless platform technologies. However, these solutions have predominantly yielded modest reductions in time delay. Consequently, the purpose of this study is to establish the conditions and circumstances under which the cold start issue can be addressed through the type of approach presented in this study. Through a design science research methodology, a software artefact named Adaptive Serverless Invocation Predictor (ASIP) was developed to mitigate the cold start issue by monitoring web application user traffic in real time. Based on the user traffic, ASIP preemptively pre-initialises serverless functions likely to be invoked, to avoid cold start occurrences. ASIP was tested against a realistic workload generated by test participants, and evaluated by analysing the achieved reduction in time delay and comparing it against existing cold start mitigation strategies. The results indicate that predicting serverless function invocations based on real-time traffic analysis is a viable approach, as a tangible reduction in response time was achieved. In conclusion, the cold start mitigation strategy assessed in this study may not provide a sufficiently significant mitigation effect relative to the required implementation effort and operational expenses. However, the study has generated valuable insights regarding circumstantial factors concerning cold start mitigation. Consequently, this study provides a proof of concept for a more sophisticated version of the mitigation strategy, with greater potential to provide a significant delay reduction without requiring substantial computational resources.
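A minimal sketch of the prediction idea the abstract describes: track which functions users invoke after each page and pre-warm the likely next ones. The data structure, names, and threshold below are assumptions, not ASIP's actual implementation.

```python
# Hypothetical sketch: empirical page -> function transition statistics
# drive pre-warming decisions that hide the cold start.
from collections import Counter, defaultdict

transitions = defaultdict(Counter)   # page -> Counter of functions invoked next

def record(page: str, function: str):
    """Update transition statistics from observed user navigation."""
    transitions[page][function] += 1

def functions_to_prewarm(page: str, threshold: float = 0.5):
    """Functions whose empirical invocation probability from this page
    exceeds the threshold; these receive a warm-up ping before the user
    actually triggers them."""
    counts = transitions[page]
    total = sum(counts.values())
    if total == 0:
        return []
    return [f for f, c in counts.items() if c / total >= threshold]

record("/checkout", "create-order")
record("/checkout", "create-order")
record("/checkout", "send-invoice")
print(functions_to_prewarm("/checkout"))   # ['create-order']
```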
243

Optimizing data management for MapReduce applications on large-scale distributed infrastructures / Optimisation de la gestion des données pour les applications MapReduce sur des infrastructures distribuées à grande échelle

Moise, Diana Maria 16 December 2011 (has links)
Data-intensive applications are nowadays widely used in various domains to extract and process information, to design complex systems, to perform simulations of real models, etc. These applications exhibit challenging requirements in terms of both storage and computation. Specialized abstractions like Google's MapReduce were developed to efficiently manage the workloads of data-intensive applications. The MapReduce abstraction has revolutionized the data-intensive community and has rapidly spread to various research and production areas. An open-source implementation of Google's abstraction was provided by Yahoo! through the Hadoop project. This framework is considered the reference MapReduce implementation and is currently heavily used for various purposes and on several infrastructures. To achieve high-performance MapReduce processing, we propose a concurrency-optimized file system for MapReduce frameworks. As a starting point, we rely on BlobSeer, a framework designed as a solution to the challenge of efficiently storing data generated by data-intensive applications running at large scale. We have built the BlobSeer File System (BSFS), with the goal of providing high throughput under heavy concurrency to MapReduce applications. We also study several aspects related to intermediate data management in MapReduce frameworks. We investigate the requirements of MapReduce intermediate data at two levels: inside the same job, and during the execution of pipeline applications. Finally, we show how BSFS can enable extensions to the de facto MapReduce implementation, Hadoop, such as support for the append operation. This work also comprises the evaluation and the results obtained in the context of grid and cloud environments.
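For readers unfamiliar with the paradigm whose storage layer the thesis optimizes, a toy single-process word count shows the map/shuffle/reduce structure that Hadoop distributes across a cluster; this sketch only illustrates the programming model, not Hadoop's API.

```python
# Toy MapReduce: map emits (key, value) pairs, shuffle groups them by key,
# reduce aggregates each group. Hadoop runs these phases across machines.
from collections import defaultdict
from itertools import chain

def map_phase(document: str):
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["the map reduce model", "map and reduce"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
print(reduce_phase(shuffle(pairs)))   # {'the': 1, 'map': 2, 'reduce': 2, ...}
```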
244

XML manipulation by non-expert users / Manipulation des données XML par des utilisateurs non-experts

Tekli, Gilbert 04 October 2011 (has links)
Computers and the Internet are everywhere nowadays, in every home, domain and field. Communications between users, applications and heterogeneous information systems are mainly done via XML-structured data. XML, based on simple textual data and not requiring any specific platform or environment, has invaded and come to govern the communication media. In the 21st century, these communications are inter-domain and have stepped outside the scope of computer science into other areas (i.e., medical, commerce, social, etc.). As a consequence, and due to the increasing amount of XML data floating between non-expert users (employees, scientists, etc.), whether on instant messaging, social networks, data storage or elsewhere, it is becoming crucial to allow non-experts to manipulate and control their data (e.g., parents who want to apply parental control over instant messaging tools in their house, or a journalist who wants to gather information from different RSS feeds and filter it). The main objective of this work is the study of XML manipulation by non-expert users. Four main related categories have been identified in the literature: XML-oriented visual languages, Mashups, XML manipulation by security and adaptation techniques, and dataflow visual programming languages. However, none of them provides a full-fledged solution for appropriate XML data manipulation. In our research, we formally defined an XML manipulation framework, entitled XA2C (XML Alteration/Adaptation Composition Framework). XA2C represents a visual programming environment for an XML-oriented DFVPL (Dataflow Visual Programming Language), called XCDL (XML-oriented Composition Definition Language), which constitutes the major contribution of this study. XCDL is based on Colored Petri Nets and allows non-expert users to compose manipulation operations. The XML manipulations range from simple data selection/projection to data modification (insertion, removal, obfuscation, etc.). The language is oriented towards XML data (XML documents and fragments), providing users with means to compose XML-oriented operations. Complementary to the language syntax and semantics, XA2C also formally defines the compiler and runtime environment of XCDL. In addition to the theoretical contribution, we developed a prototype, called X-Man, and formally defined an evaluation framework for XML-oriented visual languages and tools, which was used in a set of case studies and experiments to evaluate the quality of use of our language and compare it to existing approaches. The obtained assessments and results were positive and show that our approach outperforms existing ones. Several future tracks are being studied, such as the integration of more complex operations (control operators, loops, etc.), automated compositions, and language derivation to define specific languages oriented towards different XML-based standards (e.g., RSS, RDF, SMIL, etc.).
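A hypothetical illustration of the dataflow-composition style XCDL aims at: small manipulation operations chained over XML data. The operator names below are invented for the sketch and are not XCDL's actual vocabulary.

```python
# Sketch: two dataflow nodes, a selection and an obfuscation, composed
# into a pipeline a non-expert might assemble visually.
import xml.etree.ElementTree as ET

def select(tree: ET.Element, tag: str):
    """Selection node: collect all elements with the given tag."""
    return list(tree.iter(tag))

def obfuscate(elements):
    """Modification node: mask the text content of each element."""
    for el in elements:
        el.text = "***"
    return elements

doc = ET.fromstring("<chat><msg>hi</msg><msg>secret</msg></chat>")
obfuscate(select(doc, "msg"))            # compose: select -> obfuscate
print(ET.tostring(doc, encoding="unicode"))
# <chat><msg>***</msg><msg>***</msg></chat>
```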
245

Programming Model and Protocols for Reconfigurable Distributed Systems

Arad, Cosmin January 2013 (has links)
Distributed systems are everywhere. From large datacenters to mobile devices, an ever richer assortment of applications and services relies on distributed systems, infrastructure, and protocols. Despite their ubiquity, testing and debugging distributed systems remains notoriously hard. Moreover, aside from inherent design challenges posed by partial failure, concurrency, or asynchrony, there remain significant challenges in the implementation of distributed systems. These programming challenges stem from the increasing complexity of the concurrent activities and reactive behaviors in a distributed system on the one hand, and the need to effectively leverage the parallelism offered by modern multi-core hardware, on the other hand. This thesis contributes Kompics, a programming model designed to alleviate some of these challenges. Kompics is a component model and programming framework for building distributed systems by composing message-passing concurrent components. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic execution replay for debugging, testing, and reproducible behavior evaluation for large-scale Kompics distributed systems. The same system code is used for both simulation and production deployment, greatly simplifying the system development, testing, and debugging cycle. We highlight the architectural patterns and abstractions facilitated by Kompics through a case study of a non-trivial distributed key-value storage system. CATS is a scalable, fault-tolerant, elastic, and self-managing key-value store which trades off service availability for guarantees of atomic data consistency and tolerance to network partitions. We present the composition architecture for the numerous protocols employed by the CATS system, as well as our methodology for testing the correctness of key CATS algorithms using the Kompics simulation framework. Results from a comprehensive performance evaluation attest that CATS achieves its claimed properties and delivers a level of performance competitive with similar systems which provide only weaker consistency guarantees. More importantly, this testifies that Kompics admits efficient system implementations. Its use as a teaching framework as well as its use for rapid prototyping, development, and evaluation of a myriad of scalable distributed systems, both within and outside our research group, confirm the practicality of Kompics.
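A conceptual sketch (plain Python, deliberately not Kompics's actual Java API) of the message-passing component style the abstract describes: components own a mailbox, react to typed events via handlers, and share no state, which is what lets a runtime schedule them across cores and swap them at runtime.

```python
# Toy component model: a mailbox plus event handlers, run on its own
# thread. Real Kompics adds typed ports, channels, and a scheduler.
import queue
import threading

class Component:
    def __init__(self):
        self.inbox = queue.Queue()
        self.handlers = {}                 # event type -> handler

    def subscribe(self, event_type, handler):
        self.handlers[event_type] = handler

    def run(self):
        while True:
            event = self.inbox.get()
            if event is None:              # shutdown sentinel
                return
            self.handlers[type(event)](event)

class Ping:
    pass

ponger = Component()
ponger.subscribe(Ping, lambda event: print("pong"))

worker = threading.Thread(target=ponger.run)
worker.start()
ponger.inbox.put(Ping())                   # message passing, no shared state
ponger.inbox.put(None)
worker.join()                              # prints "pong"
```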
247

Static Partial Order Reduction for Probabilistic Concurrent Systems

Fernández-Díaz, Álvaro, Baier, Christel, Benac-Earle, Clara, Fredlund, Lars-Åke 10 September 2013 (has links) (PDF)
Sound criteria for partial order reduction for probabilistic concurrent systems have been presented in the literature. Their realization relies on a depth-first search-based approach for generating the reduced model. The drawback of this dynamic approach is that it can hardly be combined with other techniques to tackle the state explosion problem, e.g., symbolic probabilistic model checking with multi-terminal variants of binary decision diagrams. Following the approach presented by Kurshan et al. for non-probabilistic systems, we study partial order reduction techniques for probabilistic concurrent systems that can be realized by a static analysis. The idea is to inject the reduction criteria into the control flow graphs of the processes of the system to be analyzed. We provide the theoretical foundations of static partial order reduction for probabilistic concurrent systems and present algorithms to realize them. Finally, we report on some experimental results.
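A toy illustration of the independence notion that partial order reduction exploits: two independent actions commute, so only one interleaving needs exploring. The disjoint-variable check below stands in for the paper's formal conditions, and real probabilistic POR imposes further criteria.

```python
# Sketch: actions are treated as independent when they touch disjoint
# variables, a simplifying assumption standing in for the formal criteria.
def independent(a, b):
    return not ((a["reads"] | a["writes"]) & (b["reads"] | b["writes"]))

inc_x = {"reads": {"x"}, "writes": {"x"}}
inc_y = {"reads": {"y"}, "writes": {"y"}}
print(independent(inc_x, inc_y))   # True: explore only one order of the two
```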
248

Study of concurrency in real-time distributed systems / La concurrence dans les systèmes temps-réel distribués

Balaguer, Sandie 13 December 2012 (has links)
This thesis is concerned with the modeling and the analysis of distributed real-time systems. In distributed systems, components evolve partly independently: concurrent actions may be performed in any order without influencing each other, and the state reached after these actions does not depend on the order of execution. The time constraints in distributed real-time systems create complex dependencies between the components and the events that occur. So far, distributed real-time systems have not been deeply studied, and in particular the distributed aspect of these systems is often left aside. Our work on distributed real-time systems is based on two formalisms, time Petri nets and networks of timed automata, and is divided into two parts. In the first part, we highlight the differences between centralized and distributed timed systems. We compare the main formalisms and their extensions, with a novel approach that focuses on the preservation of concurrency. In particular, we show how to translate a time Petri net into a network of timed automata with the same distributed behavior. We then study a concurrency-related problem: shared clocks in networks of timed automata can be problematic when one considers the implementation of a model on a multi-core architecture. We show how to avoid shared clocks while preserving the distributed behavior, when this is possible. In the second part, we focus on formalizing the dependencies between events in partial-order representations of the executions of Petri nets and time Petri nets. Occurrence nets are one of these partial-order representations, and their structure directly provides the causality, conflict, and concurrency relations between events. However, we show that, even in the untimed case, some logical dependencies between event occurrences are not directly described by these structural relations. After having formalized these logical dependencies, we solve the following synthesis problem: from a formula that describes a set of runs, we build an associated occurrence net, when one exists. Then we study the logical relations in a simplified timed setting and show that time creates complex dependencies between event occurrences. These dependencies can be used to define a canonical unfolding, for this particular timed setting.
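A small sketch of the structural relations an occurrence net provides for free (the thesis shows these alone miss some logical dependencies, even without time). The event set is invented, and conflict inheritance is omitted to keep the sketch short.

```python
# Causality is the transitive closure of direct precedence; events that
# are neither causally ordered nor in conflict are concurrent.
causes = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b"}}

def causally_precedes(x, y):
    """True if x is a (transitive) cause of y."""
    return x in causes[y] or any(causally_precedes(x, z) for z in causes[y])

def concurrent(x, y, conflicts=frozenset()):
    return (x != y
            and not causally_precedes(x, y)
            and not causally_precedes(y, x)
            and frozenset({x, y}) not in conflicts)

print(causally_precedes("a", "d"))   # True: a -> b -> d
print(concurrent("b", "c"))          # True: independent branches after a
```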
249

Podpora pro monitorování procesů za běhu v prostředí ANaConDA / Support of Run-time Monitoring of Processes in ANaConDA Framework

Mužikovská, Monika January 2020 (has links)
This thesis extends the ANaConDA framework for dynamic analysis of multi-threaded programs with the ability to analyze multi-process programs as well. Part of the thesis describes the ANaConDA framework and the mechanisms it uses for monitoring, together with the modifications these require given the differences between processes and threads. The differences include the need for more complex inter-process communication mechanisms, the need to translate logical addresses to a different unique identifier, and the monitoring of general semaphores. The process-monitoring extension solves these problems on behalf of analyzer developers, which greatly simplifies analyzer development. The usefulness of the extension is demonstrated by implementing two data-race detectors (AtomRace and FastTrack) that could previously be applied only to multi-threaded programs. The implementation of the FastTrack algorithm uses a happens-before relation for general semaphores, which was also defined as part of this thesis. Experiments with the analyzers on student projects showed that ANaConDA is now able to detect concurrency errors in multi-process programs as well, and can thus help in the development of another class of parallel programs.
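A hedged sketch of happens-before tracking via vector clocks, the mechanism FastTrack builds on. The semaphore transfer rule shown (a post publishes the poster's clock; the wait it releases inherits it) is a simplified reading of the thesis's relation, not its exact definition.

```python
# Vector clocks order events across processes; a semaphore carries the
# clock of the last post() so the matching wait() becomes causally later.
class VectorClock:
    def __init__(self, nprocs, pid):
        self.clock = [0] * nprocs
        self.pid = pid

    def tick(self):
        self.clock[self.pid] += 1

    def join(self, other):
        self.clock = [max(a, b) for a, b in zip(self.clock, other)]

class Semaphore:
    def __init__(self, nprocs):
        self.clock = [0] * nprocs

def post(sem, vc):
    vc.tick()
    sem.clock = [max(a, b) for a, b in zip(sem.clock, vc.clock)]

def wait(sem, vc):
    vc.join(sem.clock)   # inherit the poster's history
    vc.tick()

p0, p1 = VectorClock(2, 0), VectorClock(2, 1)
s = Semaphore(2)
post(s, p0)              # process 0 signals
wait(s, p1)              # process 1 is now causally after the post
print(p1.clock)          # [1, 1]: the accesses are ordered, no race
```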
250

Pokročilá statická analýza atomičnosti v paralelních programech v prostředí Facebook Infer / Advanced Static Analysis of Atomicity in Concurrent Programs through Facebook Infer

Harmim, Dominik January 2021 (has links)
Atomer is a static analyzer based on the idea that if some sequences of functions of a multi-threaded program are executed under locks in some runs, they are likely intended to always execute atomically. The Atomer analyzer therefore tries to find such sequences and then to detect those for which atomicity may be violated in some other runs of the program. The author of this master's thesis designed and implemented the first version of Atomer as a plugin of the Facebook Infer framework in his bachelor's thesis. In this master's thesis, a new and significantly improved version of Atomer is proposed. The goal of the improvements is to increase both scalability and precision. In addition, support for several originally unsupported programming features has been added (including, e.g., the ability to analyze programs written in C++ and Java, and support for re-entrant locks and lock guards). A number of experiments, including experiments with real-world programs and real bugs, have shown that the new version of Atomer is indeed much more general, more precise, and scales better.
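A simplified sketch of Atomer's two-phase idea as the abstract describes it: first collect call sequences that some run executes under a lock, then flag runs where such a sequence occurs with no lock held. The run representation below is an assumption for illustration; the real analyzer is a static Infer plugin, not a dynamic trace checker.

```python
# Phase 1: sequences observed under a lock become atomicity candidates.
# Phase 2: candidate sequences executed lock-free are reported.
def atomic_candidates(runs):
    """Each run is a list of (function, lock_held) pairs."""
    candidates = set()
    for run in runs:
        seq = tuple(f for f, locked in run if locked)
        if len(seq) > 1:
            candidates.add(seq)
    return candidates

def violations(runs, candidates):
    found = []
    for i, run in enumerate(runs):
        unlocked = tuple(f for f, locked in run if not locked)
        for cand in candidates:
            if cand == unlocked[:len(cand)]:
                found.append((i, cand))
    return found

runs = [[("check", True), ("update", True)],     # executed atomically here
        [("check", False), ("update", False)]]   # atomicity violated here
print(violations(runs, atomic_candidates(runs)))
# [(1, ('check', 'update'))]
```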
