31 |
Atendimento para composição de serviços justo e transacional com origem em múltiplos domínios / Service composition attendance with a fair policy and transactional support from multiple domains. Fernando Ryoji Kakugawa, 18 May 2016
Web Services have opened new possibilities for software development, among them service composition. Service composition raises new issues in the computational environment, such as executing the whole composition while ensuring consistency and concurrency control. A workflow is a set of organized tasks and interactions that provides functionality to the system, automating complex processes through service composition. Such a composition must be executed transactionally, performing its operations consistently. When workflows from different domains share the same services, those services are unaware of the execution context, which can lead to unfair scheduling and to deadlock or starvation. This work presents strategies for executing workflows from distinct domains that request multiple services from the same set, transactionally and without a central coordinator. Requests are served under a fair resource-usage policy that prevents deadlock and starvation among the executing workflows. The experiments performed in this work show that the developed system, applying the proposed strategies, executes service compositions transactionally, serves requests fairly, remains free of deadlock and starvation, and keeps the system independent and autonomous.
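The abstract does not spell out the thesis's coordination protocol. As a general illustration of how deadlock can be ruled out without a central coordinator, the classic technique is to acquire shared resources in one globally agreed order (FIFO lock queues then also prevent starvation). A minimal sketch, with all names hypothetical:

```python
import threading

class Resource:
    """A service/resource that at most one workflow may hold at a time."""
    def __init__(self, rid):
        self.rid = rid
        self.lock = threading.Lock()

def run_workflow(needed):
    # Acquire every needed resource in one canonical (sorted) order.
    # A single global order makes a circular wait, and hence deadlock,
    # impossible; no central coordinator is required, because every
    # domain only has to agree on the ordering of resource ids.
    ordered = sorted(needed, key=lambda r: r.rid)
    for r in ordered:
        r.lock.acquire()
    try:
        return [r.rid for r in needed]   # the "work": use the services
    finally:
        for r in ordered:
            r.lock.release()

a, b = Resource("a"), Resource("b")
results = []
# Two workflows request the same resources in opposite orders: with naive
# in-request-order locking this interleaving can deadlock; here it cannot.
t1 = threading.Thread(target=lambda: results.append(run_workflow([a, b])))
t2 = threading.Thread(target=lambda: results.append(run_workflow([b, a])))
t1.start(); t2.start(); t1.join(); t2.join()
print(len(results))  # → 2 (both workflows completed)
```

Each participating domain stays autonomous; the only shared knowledge is the ordering of resource identifiers.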
|
32 |
Active Behavior in a Configurable Real-Time Database for Embedded Systems. Du, Ying, January 2006
An embedded system is an application-specific system that is typically dedicated to performing a particular task. The majority of embedded systems are also real-time, implying that timeliness needs to be enforced. An embedded system needs efficient management of a large amount of data, including maintenance of data freshness in an environment with limited CPU and memory resources. Uniform and efficient data maintenance can be ensured by integrating database management functionality with the system. Furthermore, resources can be utilized more efficiently if redundant calculations are avoided. On-demand updating and active behavior are two solutions that aim at decreasing the number of calculations on data items in embedded systems.

COMET is a COMponent-based Embedded real-Time database, developed to meet the increasing requirements for efficient data management in embedded real-time systems. The COMET platform has been developed using a novel software engineering technique, AspeCtual COmponent-based Real-time software Development (ACCORD), which enables creating database configurations from software components and aspects in a library, based on the requirements of an application. Although COMET provides uniform and efficient data management for real-time and embedded systems, it does not support on-demand updating or active behavior.

This thesis focuses on the design, implementation, and evaluation of two new COMET configurations: on-demand updating of data and active behavior. The configurations are created by extending the COMET component and aspect library with a set of aspects that implement on-demand and active behavior. The on-demand updating aspect implements the ODDFT algorithm, which traverses the data dependency graph in a depth-first manner and triggers and schedules on-demand updates based on data freshness in the value domain. The active behavior aspect enables the database to take actions when an event occurs and a condition coupled with that event and action is fulfilled. As the performance evaluation shows, integrating on-demand and active behavior in COMET improves the performance of the database system, utilizes the CPU better, and makes data management more efficient.
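The published details of ODDFT are not reproduced in the abstract; the following is a hedged sketch of the general idea it describes: before a data item is used, walk its dependency graph depth-first and schedule updates only for items that are no longer fresh, refreshing inputs before the items derived from them. Data items, dependencies, and the freshness flags are all illustrative:

```python
# d4 is derived from d2 and d3, which are both derived from d1.
depends_on = {
    "d1": [],
    "d2": ["d1"],
    "d3": ["d1"],
    "d4": ["d2", "d3"],
}
fresh = {"d1": False, "d2": True, "d3": False, "d4": True}

def schedule_updates(item, schedule, seen):
    """Depth-first walk: stale inputs are scheduled before derived items."""
    if item in seen:
        return
    seen.add(item)
    for dep in depends_on[item]:
        schedule_updates(dep, schedule, seen)
    if not fresh[item]:
        schedule.append(item)     # an update runs only if the item is stale

schedule = []
schedule_updates("d4", schedule, set())
print(schedule)  # → ['d1', 'd3'] : only stale items, inputs first
```

The real algorithm decides freshness in the value domain (how far a value has drifted), which this boolean flag only stands in for.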
|
34 |
Uma Estratégia para o Gerenciamento da Replicação Parcial de Dados XML / An Approach to the Management of Partial XML Data Replication. Ériko Joaquim Rogério Moreira, 04 September 2009
XML has become a widely used standard for representing and exchanging data among Web applications. Consequently, a large volume of XML data is distributed on the Web and stored in several persistence media. XML-enabled relational DBMSs provide concurrency control techniques to manage such data, but the structure of XML data makes these techniques difficult to apply.

Additionally, replication techniques have been used to improve the management of large amounts of XML data. Current research on XML data replication consists of adapting existing concepts to the semi-structured model. In particular, full replication incurs a large number of locks, since updates must be applied to every copy of the base. Partial replication, in contrast, aims to increase concurrency among transactions by requiring fewer locks than full replication.

This work presents RepliXP, a strategy for managing the partial replication of XML data. It is a mechanism that combines features of synchronous and asynchronous replication protocols to reduce the number of update locks. To evaluate the strategy, performance tests were carried out analyzing transaction response times, comparing the full and partial replication approaches in RepliXP. According to the results, RepliXP with partial XML data replication improved the response time of concurrent transactions.
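RepliXP's actual protocol is not given in the abstract. The following sketch only illustrates why partial replication needs fewer locks than full replication: an update must lock only the sites that hold a replica of the affected fragment. Site names and fragment placement are invented for the example:

```python
# Each site replicates only some fragments of the XML base.
sites = {
    "s1": {"catalog", "orders"},
    "s2": {"orders"},
    "s3": {"catalog"},
}

def sites_to_lock(fragment, placement):
    """Only sites replicating `fragment` take part in (and lock for) an update."""
    return sorted(s for s, frags in placement.items() if fragment in frags)

print(sites_to_lock("orders", sites))   # → ['s1', 's2']  (s3 untouched)

# Under full replication every site holds every fragment, so every update
# locks every site:
full = {s: {"catalog", "orders"} for s in sites}
print(sites_to_lock("orders", full))    # → ['s1', 's2', 's3']
```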
|
35 |
Memórias transacionais: prototipagem e simulação de implementações em hardware e uma caracterização para o problema de gerenciamento de contenção em software / Transactional memories: prototyping and simulation of hardware implementations and a characterization of the problem of contention management in software. Kronbauer, Fernando André, 11 July 2008
Advisor: Sandro Rigo. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.

As parallel architectures become prevalent in the computer industry, more and more programmers are required to write parallel programs and are thus exposed to the problems related to traditional mechanisms for concurrency control. Transactional memory has been devised as a means of easing the burden of writing parallel programs: the programmer only has to mark the sections of code that are to be executed in an atomic and isolated way, in the form of transactions, and the system takes care of the synchronization details. In this work we explore different proposals of transactional memories based on specific hardware support (HTM), developing a flexible platform for the prototyping, simulation, and characterization of these systems. We also explore a transactional memory system based solely on software support (STM), devising a novel approach for managing contention among transactions. This new approach takes into account the access patterns to different data in an application when choosing the contention management strategy to be used for those data. We modified the STM system to enable this association between data and contention management, and using the new implementation we characterized the STM system based on the access patterns of a program running on different hardware. Our results show the viability of transactional memories in an academic research environment and point to future work aimed at making their use viable for industry as well. / M.Sc. in Computer Science
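The thesis's contention-manager interface is not described in the abstract. As a hedged sketch of the core idea, choosing the contention-management strategy per data item from its observed access pattern, one might route heavily contended items to a polite backoff manager and lightly contended ones to an aggressive one. All names and the threshold are assumptions for illustration:

```python
import random
import time

def backoff_manager(attempt):
    # Polite strategy: wait with randomized exponential backoff, then retry.
    time.sleep(0.001 * (2 ** attempt) * random.random())
    return "retry"

def aggressive_manager(attempt):
    # Aggressive strategy: abort the conflicting ("enemy") transaction.
    return "abort_enemy"

def choose_manager(access_stats, item):
    """Pick a contention manager from the item's observed conflict rate."""
    conflicts, accesses = access_stats[item]
    # Hot data: back off politely to avoid livelock among many writers;
    # cold data: resolve the rare conflict immediately and move on.
    return backoff_manager if conflicts / accesses > 0.3 else aggressive_manager

stats = {"hot_counter": (80, 100), "cold_record": (2, 100)}
print(choose_manager(stats, "hot_counter") is backoff_manager)     # → True
print(choose_manager(stats, "cold_record") is aggressive_manager)  # → True
```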
|
36 |
Principles for Distributed Databases in Telecom Environment / Principer för distribuerade databaser inom Telecom Miljö. Ashraf, Imran; Khokhar, Amir Shahzed, January 2010
Centralized databases become a bottleneck for organizations that are physically distributed and access data remotely. Data management is easy in centralized databases, but it carries high communication costs and, most importantly, high response times. The concept of distributing data over various locations is therefore very attractive for such organizations: the database is partitioned into fragments that are distributed to the locations where they are needed. This kind of distribution provides local control of data, and data access is also very fast. However, concurrency control, query optimization, and data allocation are factors that affect response time and must be investigated prior to implementing distributed databases. This thesis uses a mixed-method approach to meet its objective. In the quantitative part, we performed an experiment at Ericsson to compare the response time of two databases, one centralized and one fragmented/distributed. A literature review was also done to examine other response-time-related issues, namely query optimization, concurrency control, and data allocation; it revealed that these factors can further improve response time in a distributed environment. The results of the experiment showed a substantial decrease in response time due to fragmentation and distribution.
|
37 |
Advanced Concurrency Control Algorithm Design and GPU System Support for High Performance In-Memory Data Management. Yuan, Yuan, January 2016
No description available.
|
38 |
Enriching Web Applications Efficiently with Real-Time Collaboration Capabilities. Heinrich, Matthias, 26 September 2014
Web applications offering real-time collaboration support (e.g. Google Docs) allow geographically dispersed users to edit the very same document simultaneously, which is appealing to end-users mainly because of two application characteristics. On the one hand, the real-time capabilities supersede traditional document merging and document locking techniques that distract users from the content creation process. On the other hand, web applications free end-users from lengthy setup procedures and allow instant application access. However, implementing collaborative web applications is a time-consuming and complex endeavor, since offering real-time collaboration support requires two specific collaboration services. First, a concurrency control service has to ensure that documents are synchronized in real-time and that emerging editing conflicts (e.g. two users changing the very same word concurrently) are resolved automatically. Second, a workspace awareness service has to inform the local user about the actions and activities of other participants (e.g. who joined the session, or where other participants are working). Implementing and integrating these two collaboration services is largely inefficient due to (1) the lack of the necessary collaboration functionality in existing libraries, (2) incompatibilities of collaboration frameworks with widespread web development approaches, and (3) the need for massive source code changes to anchor collaboration support. Therefore, we propose a Generic Collaboration Infrastructure (GCI) that supports the efficient development of web-based groupware in various ways. First, the GCI provides reusable concurrency control functionality and generic workspace awareness support. Second, the GCI exposes numerous interfaces for consuming these collaboration services in a flexible manner and without invasive source code changes. Third, the GCI is linked to a development methodology that efficiently guides developers through the development of web-based groupware. To demonstrate the improved development efficiency induced by the GCI, we conducted three user studies encompassing developers and end-users. We show that development efficiency, measured as development time, increases when adopting the GCI. Moreover, we demonstrate that the implemented collaborative web applications satisfy end-user needs with respect to established software quality characteristics (e.g. usability, reliability).
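The abstract does not name the concurrency control algorithm the GCI uses; a common basis for such real-time co-editing services is operational transformation (OT), sketched here for two concurrent inserts. Transforming each remote operation against the concurrent local one makes both replicas converge (tie-breaking for equal positions is omitted in this illustration):

```python
def transform_insert(op, other):
    """Shift `op`'s position if `other` inserted at or before it."""
    pos, text = op
    opos, otext = other
    if opos <= pos:
        return (pos + len(otext), text)
    return (pos, text)

def apply_insert(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

doc = "abc"
op1 = (1, "X")   # user 1 inserts "X" at position 1
op2 = (2, "Y")   # user 2 concurrently inserts "Y" at position 2

# Site 1 applies op1 first, then op2 transformed against op1;
# site 2 applies them in the mirror order. Both end up identical.
site1 = apply_insert(apply_insert(doc, op1), transform_insert(op2, op1))
site2 = apply_insert(apply_insert(doc, op2), transform_insert(op1, op2))
print(site1, site2)  # → aXbYc aXbYc  (both replicas converge)
```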
|
40 |
A comparative study of transaction management services in multidatabase heterogeneous systems. Renaud, Karen Vera, 04 1900
Multidatabases are being actively researched as a relatively new area in which many aspects are not yet fully understood, and transaction management in multidatabase systems still has many unresolved problems. The problem areas this dissertation addresses are the classification of multidatabase systems, global concurrency control, correctness criteria in a multidatabase environment, global deadlock detection, atomic commitment, and crash recovery. A core group of research addressing these problems was identified and studied. The dissertation contributes to multidatabase transaction management by introducing an alternative classification method for such multiple database systems, assessing existing research into transaction management schemes and, based on this assessment, proposing a transaction processing model founded on the optimal properties of transaction management identified during the course of this research. / Computing / M. Sc. (Computer Science)
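Of the problem areas listed, global deadlock detection has a compact standard building block: a global wait-for graph whose nodes are transactions and whose edges mean "waits for"; a cycle in that graph indicates deadlock. A minimal, illustrative cycle check (transaction names are invented):

```python
def has_deadlock(waits_for):
    """Detect a cycle in the wait-for graph with a depth-first search."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on stack / done
    color = {t: WHITE for t in waits_for}

    def visit(t):
        color[t] = GRAY
        for u in waits_for.get(t, []):
            if color.get(u, WHITE) == GRAY:        # back edge: cycle found
                return True
            if color.get(u, WHITE) == WHITE and visit(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and visit(t) for t in waits_for)

# T1 waits for T2, T2 for T3: no cycle, no deadlock.
print(has_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": []}))      # → False
# Closing the loop T3 -> T1 creates a deadlock.
print(has_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}))  # → True
```

In a multidatabase the hard part, which this sketch ignores, is assembling the global graph from local wait-for information that each autonomous DBMS may not expose.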
|