  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Automatisering av test av legacysystem : Utmaningar och faktorer att beakta / Test automation on legacy systems – challenges and aspects to consider

Nilsson, Martin, Norberg, Patrik January 2018 (has links)
Information systems are used by most organisations today and are critical to their business. Many organisations have one or more legacy systems that have been in use for a long time. Failures and disruptions in these systems can have serious consequences, so it is important to test the systems to avoid them. Manual testing requires considerable resources; automating the tests is one possibility, and our partner in this study, Stora Enso Skog, wanted to investigate it. The purpose of this study is to describe the challenges encountered when automating tests for legacy systems, and the factors that need to be considered when implementing test automation. To do this we conducted a case study and gathered data through interviews and document analysis. The interviews were held with experienced experts in the testing field. Our conclusions show that there are a number of challenges to consider when automating tests for legacy systems. These include the difficulty of automating certain kinds of tests because the code is unstructured and not prepared for testing, and the risk of spending too many resources on automating tests instead of improving the systems themselves. A number of factors to consider when implementing automated tests for legacy systems were also identified: test automation can never fully compensate for poor architecture and design; focus should be on the most important tests, starting with small parts; and the testability of the legacy system should be improved rather than merely adapting the testing tool to the system. Finally, we note that there are different types and levels of legacy systems, so the challenges and factors to consider when automating tests may vary. Many of the challenges and factors presented in this study also apply to systems not considered legacy.
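The challenge described above — code that is unstructured and not prepared for testing — is often approached by first pinning the system's current behaviour with characterization tests before any further automation or refactoring. Below is a minimal sketch of that idea; the legacy function and the recorded expected values are invented for illustration and are not taken from the study.

```python
import unittest

# Hypothetical stand-in for legacy code; in practice this would be imported
# untouched from the legacy module, e.g. `from legacy_pricing import calculate_discount`.
def calculate_discount(order_total, customer_years):
    # Convoluted legacy logic kept exactly as-is; the test only records behaviour.
    if order_total > 1000:
        rate = 0.1 if customer_years < 5 else 0.15
    else:
        rate = 0.05 if customer_years >= 5 else 0.0
    return round(order_total * rate, 2)

class CharacterizationTest(unittest.TestCase):
    """Pin the observed behaviour of the legacy code, right or wrong,
    so later refactoring can be checked against it."""

    def test_recorded_outputs(self):
        # Expected values were recorded by running the legacy code once.
        cases = [((1500, 2), 150.0), ((1500, 7), 225.0),
                 ((800, 7), 40.0), ((800, 1), 0.0)]
        for args, expected in cases:
            self.assertEqual(calculate_discount(*args), expected)

if __name__ == "__main__":
    unittest.main()
```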
22

Integrace legacy databází do soudobých informačních systémů / Integration of legacy databases to current information systems

Navrátil, Jan January 2016 (has links)
The goal of this thesis is to design and implement a framework to support access to legacy systems. Legacy systems are databases that use incompatible and obsolete technologies and cannot easily be abandoned. The framework abstracts the application logic from the database platform and enables a full or incremental migration to a new, modern platform in the future. It also considers the option of encapsulating an existing legacy application in the new system as a black box. The framework is configurable, extendable, and independent of the database platform and data context. A system based on the proposed framework has been successfully deployed in a company, where it facilitated the migration to a new information system with an entirely different database platform. This practical use demonstrates the viability of the framework design.
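The abstraction described above — separating application logic from the database platform so a legacy backend can be replaced incrementally or wrapped as a black box — is commonly expressed as an interface with one adapter per platform. The sketch below illustrates the pattern only; the class and method names are hypothetical and are not the thesis's actual API.

```python
from abc import ABC, abstractmethod

class CustomerStore(ABC):
    """Port through which the application talks to any backend."""
    @abstractmethod
    def find_customer(self, customer_id: int) -> dict: ...

class LegacyDbStore(CustomerStore):
    """Adapter wrapping the obsolete database (treated as a black box)."""
    def __init__(self, legacy_connection):
        self._conn = legacy_connection
    def find_customer(self, customer_id: int) -> dict:
        row = self._conn.fetch("CUST", customer_id)  # hypothetical legacy call
        return {"id": row[0], "name": row[1]}

class ModernSqlStore(CustomerStore):
    """Adapter for the new platform; can replace LegacyDbStore incrementally."""
    def __init__(self, sql_connection):
        self._conn = sql_connection
    def find_customer(self, customer_id: int) -> dict:
        # DB-API / sqlite3-style call, for illustration only.
        cur = self._conn.execute(
            "SELECT id, name FROM customers WHERE id = ?", (customer_id,))
        row = cur.fetchone()
        return {"id": row[0], "name": row[1]}

def customer_label(store: CustomerStore, customer_id: int) -> str:
    # Application logic depends only on the abstract interface.
    c = store.find_customer(customer_id)
    return f"{c['id']}: {c['name']}"
```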
23

A software framework to support distributed command and control applications

Duvenhage, Arno 09 August 2011 (has links)
This dissertation discusses a software application development framework. The framework supports developing software applications within the context of Joint Command and Control, including interoperability with network-centric systems as well as with existing legacy systems. The next generation of Command and Control systems is expected to be built on common architectures or enterprise middleware. Enterprise middleware does not, however, directly address integration with legacy Command and Control systems, nor integration with existing and future tactical systems such as fighter aircraft. The software framework discussed in this dissertation enables existing legacy systems and tactical systems to interoperate with each other, enables interoperability with the Command and Control enterprise, and also enables simulated systems to be deployed within a real environment. The framework does all of this through a unique distributed architecture that supports both system interoperability and the simulation of systems and equipment within the context of Command and Control. This hybrid approach is the key to the success of the framework. There is a strong focus on the quality of the framework, and the current implementation has already been applied successfully within the Command and Control environment. The current framework implementation is also supplied on a DVD with this dissertation. / Dissertation (MEng)--University of Pretoria, 2011. / Electrical, Electronic and Computer Engineering / unrestricted
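Bridging legacy or tactical systems into a Command and Control enterprise typically comes down to protocol adapters that translate between a system-specific wire format and a common message model. The sketch below shows the general shape of such a gateway; the record layout, field names, and message model are invented for illustration and do not describe the framework supplied with the dissertation.

```python
import json
import struct

# Hypothetical fixed-width binary record used by a legacy tactical link:
# an unsigned 32-bit track id followed by two IEEE-754 doubles (big-endian).
LEGACY_FORMAT = ">Idd"

def to_enterprise_json(track_id: int, lat: float, lon: float) -> bytes:
    """Common internal message model used on the enterprise side."""
    return json.dumps({"trackId": track_id, "lat": lat, "lon": lon}).encode()

def decode_legacy(record: bytes):
    """Unpack one legacy track record into its fields."""
    return struct.unpack(LEGACY_FORMAT, record)

def legacy_to_enterprise(record: bytes) -> bytes:
    """Gateway direction: legacy binary track -> enterprise JSON message."""
    track_id, lat, lon = decode_legacy(record)
    return to_enterprise_json(track_id, lat, lon)

if __name__ == "__main__":
    legacy_record = struct.pack(LEGACY_FORMAT, 42, -25.75, 28.19)
    print(legacy_to_enterprise(legacy_record).decode())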
24

An Evaluation of Ethernet as Data Transport System in Avionics

Doverfelt, Rickard January 2020 (has links)
ÅF Digital Solutions AB is looking to replace its current legacy system for audio transmission within aircraft with a new system based on Ethernet. The new system should match the current Audio Integration System as closely as possible and preferably use commercial off-the-shelf components. The questions evaluated in this thesis are whether it is feasible to port the legacy protocol to an Ethernet-based solution with as few modifications as possible, what performance requirements such a solution must meet, and how much capacity remains available in the network. ÅF Digital Solutions AB is also interested in which avionics-related Ethernet-based protocols and standards already exist on the market. The work is conducted along two tracks: experimental measurements and statistical analysis of the latency of the proposed solutions, and a survey of how the present Audio Integration System protocol could be integrated into the proposed Ethernet-based solutions. The study finds two relevant standards on the market: Avionics Full-Duplex Ethernet (AFDX) and Time-Triggered Ethernet (TTEthernet). Two prototypes are built, one implementing AFDX and one custom solution built on Ethernet and UDP. Their latencies are measured and found to be largely similar under ideal conditions. The custom Ethernet solution is found to be more flexible, whereas AFDX allows interoperation with other manufacturers and TTEthernet facilitates strict timing requirements at the cost of specialised hardware. Under ideal conditions, the bandwidth utilisation per stream is 0.980% for AFDX and 0.979% for the Ethernet solution. It is proposed that ÅF Digital Solutions AB pursue a custom Ethernet-based solution unless interoperability with other manufacturers on the same network is required, since a custom solution with full control over the network allows the greatest flexibility with regard to timing and load. If interoperability is required, AFDX is proposed instead, as it is a standardised protocol without the overhead of TTEthernet, which is unnecessary for ÅF Digital Solutions AB.
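Per-stream utilisation figures like those quoted above follow from the usual relation utilisation = frame size × frame rate / link rate. A small sketch of that calculation is shown below; the payload size, framing overhead, period, and link rate are illustrative assumptions, not the thesis's measured configuration.

```python
def stream_utilisation(payload_bytes: int, overhead_bytes: int,
                       frames_per_second: float, link_bit_rate: float) -> float:
    """Fraction of the link consumed by one periodic stream."""
    frame_bits = (payload_bytes + overhead_bytes) * 8
    return frame_bits * frames_per_second / link_bit_rate

if __name__ == "__main__":
    # Illustrative numbers only (not taken from the thesis): a 160-byte audio
    # payload sent every 2 ms with ~66 bytes of Ethernet/IP/UDP framing
    # overhead, on a 100 Mbit/s link.
    u = stream_utilisation(payload_bytes=160, overhead_bytes=66,
                           frames_per_second=500, link_bit_rate=100e6)
    print(f"utilisation per stream: {u:.3%}")
```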
25

Challenges and success factors in the migration of legacy systems to Service Oriented Architecture (SOA)

Vlizko, Nataliya January 2014 (has links)
Service-Oriented Architecture (SOA) provides a standards-based conceptual framework for flexible and adaptive systems and has therefore become widely used in recent years. A number of legacy systems have already been migrated to this platform. As many systems are still under consideration for such a migration, we found it relevant to share the existing experience of SOA migration and highlight the challenges companies meet while adopting SOA. Because not all of these migrations were successful, we also look into the factors that influence the success of SOA projects. The research is based on two methods: a literature review and a survey. The results of the thesis include the identification and quantitative analysis of the challenges and success factors of SOA projects. We also compare the survey results across different samples, based on company industry, work area, and size, and on respondents' experience with SOA and job positions. In total, 13 SOA challenges and 18 SOA success factors were identified, analysed, and discussed in this thesis. Based on the survey results, three SOA challenges received the highest importance scores: “Communicating SOA Vision”, “Focus on business perspective, and not only IT perspective”, and “SOA Governance”. The highest-scored SOA success factor is “Business Process of Company”. When comparing different samples of the survey results, the most obvious differences appear between respondents in development-related and business-related job positions, and between companies of different sizes.
26

Tidsfördelning vid vidareutveckling av "legacy" system / Time Distribution when Reconstructing Legacy Software System

Jakobsson, Rikard, Molin, Jakob January 2020 (has links)
Working with legacy systems is a common task for programmers, and developing them further requires significant effort, but data on how that effort is distributed is scarce. Such data would be valuable when weighing the cost of continued development of a system against a rewrite or migration. To help fill this gap, this thesis provides a data point on the effort distribution for the migration of a small legacy system. The question investigated is: “How is the cost in time distributed when a legacy system is evaluated and rebuilt?” The data comes from the redevelopment of a small student-developed system used at KTH that had ceased to function and was in urgent need of updating. There was extensive documentation of the system's requirements and design that could be reused, but the existing code lacked documentation and clear structure, so it was decided early on that rewriting the system according to the existing documentation would be more efficient than working with the current code. A scientific case study built on quantitative methods was used to collect data: the time spent on each predefined activity was measured in minutes and used to calculate the distribution of effort. The result is a table of data and a review of the effort distribution when migrating a small legacy system with clear requirements to a new technology, in a case where the existing system was not considered worth updating. The conclusion is that when a clearly specified system is rewritten from the ground up and good preparatory work exists to build on, most of the effort is spent on implementing the system in code.
27

Relationship Between Enterprise Resource Planning System and Organizational Productivity in Local Government

Chiawah, Tambei 01 January 2019 (has links)
Organizations experience challenges despite efforts to increase productivity through implementing large-scale enterprise systems. Leaders of local government institutions do not understand how to achieve expected and desired benefits from the implementation of enterprise resource planning (ERP) systems. Lack of alignment between social and technical elements in ERP implementation depresses organizational productivity. The purpose of this quantitative correlational study was to examine whether social and technical elements increase use and productivity in ERP implementation. The research questions addressed the relationship between ERP and organizational efficiency, cross-functional communication, information sharing, ease of ERP use, and ERP usefulness. Sociotechnical systems theory provided the theoretical basis for the study. Data were collected from online surveys completed by 61 ERP users and analyzed using Wilcoxon matched pairs statistics and Spearman's correlation coefficient. Findings indicated a positive significant relationship between ERP and information sharing, a positive significant relationship between ERP system quality and ease of ERP use, and a positive significant relationship between ERP system quality and organizational productivity. Findings may be used by local government leaders, technology managers, and chief information officers to ensure ERP sustainability and increase productivity.
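The statistics named above — Wilcoxon matched-pairs tests and Spearman's rank correlation — can be computed on paired ordinal survey data along the following lines. The Likert scores below are invented for illustration and are not the study's data.

```python
from scipy import stats

# Invented 5-point Likert scores from paired survey items, for illustration only.
erp_use      = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
productivity = [3, 5, 3, 5, 2, 4, 3, 3, 5, 4]

# Monotonic association between the two sets of ranks.
rho, p_rho = stats.spearmanr(erp_use, productivity)
# Paired, non-parametric comparison of the two related samples.
w_stat, p_w = stats.wilcoxon(erp_use, productivity)

print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"Wilcoxon statistic = {w_stat:.1f} (p = {p_w:.3f})")
```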
28

Εφαρμογή τεχνικών υπολογιστικής νοημοσύνης για υποστήριξη συστημάτων ηλεκτρονικής μάθησης βασισμένη σε αρχιτεκτονική ευφυών πρακτόρων / Integrating e-learning environments with computational intelligence assessment

Θερμογιάννη, Ελένη 26 September 2007 (has links)
Computational intelligence techniques are widely applied in e-learning systems. This work adopts the technique of Bayesian networks: an intelligent system is implemented that manages the questionnaires of an e-learning system. The aim is the “intelligent” management of the questionnaires. More specifically, the questionnaires are mapped onto a Bayesian graph in which each question corresponds to a node, and Bayes' equations are applied at every node to compute the probability that the question will be answered correctly. These probabilities are then compared with thresholds defined by the system administrator, so that questions the user is highly likely to answer correctly can be skipped, saving questions and time for the user. The second part of the work extends this system using an intelligent-agent architecture, the main goal being the ability to manage a large number of users and questionnaires from remote systems. / In this contribution an innovative platform is presented that integrates intelligent agents into legacy e-learning environments. It introduces the design and development of a scalable and interoperable integration platform supporting various assessment agents for e-learning environments. The agents are implemented to provide intelligent assessment services based on computational intelligence techniques such as Bayesian networks and genetic algorithms. The use of new and emerging technologies such as web services allows the provided services to be integrated into any web-based legacy e-learning environment.
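The selection rule described above — estimate the probability that a learner answers each question correctly and skip questions above an administrator-defined threshold — can be sketched as follows. The two-question fragment and all probabilities are invented for illustration; the thesis's actual Bayesian network is larger and built from the questionnaire structure.

```python
def p_correct(p_parent_known: float,
              p_correct_if_known: float,
              p_correct_if_unknown: float) -> float:
    """Marginal probability of a correct answer, by total probability."""
    return (p_parent_known * p_correct_if_known
            + (1 - p_parent_known) * p_correct_if_unknown)

def select_questions(questions, threshold: float):
    """Ask only questions the learner is not already likely to get right."""
    to_ask = []
    for q in questions:
        p = p_correct(q["p_parent_known"],
                      q["p_correct_if_known"],
                      q["p_correct_if_unknown"])
        if p < threshold:          # administrator-defined cut-off
            to_ask.append((q["id"], p))
    return to_ask

if __name__ == "__main__":
    questions = [
        {"id": "Q1", "p_parent_known": 0.9, "p_correct_if_known": 0.95,
         "p_correct_if_unknown": 0.3},
        {"id": "Q2", "p_parent_known": 0.4, "p_correct_if_known": 0.9,
         "p_correct_if_unknown": 0.2},
    ]
    # Q1 scores 0.885 and is skipped; Q2 scores 0.48 and is asked.
    print(select_questions(questions, threshold=0.8))
```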
29

Engenharia de requisitos aplicada em sistema legado de gestão e custeio de propostas comerciais: pesquisa-ação em empresa do setor de estamparia / Requirements engineering applied to a legacy system for managing and costing sales proposals: action research in a stamping-industry company

Oliveira, Paulo Henrique Ribeiro de 29 February 2016 (has links)
The effort spent maintaining systems regarded as legacy is considerably higher than the effort spent developing new projects. Such systems must be kept running because, in most cases, they are difficult to replace, given the complexity of accommodating change and the impact on the functioning of business processes; in other words, the system cannot stop. Maintenance and modifications are therefore a sign of success for a legacy system: they mean it is still useful and worth investing resources to keep it updated and running. However, if changes are made as emergencies driven by business dynamics and the corresponding documentation is not produced, the control and management of future maintenance becomes chaotic. In this context it falls to Requirements Engineering, as a sub-area of Software Engineering, to improve the processes for managing the requirements life cycle by proposing methods, tools, and techniques for producing the requirements documentation, so that the requirements satisfy the stakeholders and meet the characteristics of the business in question. The objective of this work was to apply Requirements Engineering to a legacy system for managing and costing sales proposals in a company in the stamping industry. Using a literature review, document analysis, and action research, the study was divided into four phases: the development of the sales proposal management and costing system, followed by three maintenance efforts carried out with the application of Requirements Engineering. In the first phase, a set of artifacts was produced expressing all of the system's features. In the second phase, an evolutionary maintenance effort added new features based on backlog requirements collected in the first phase. The third phase brought in a new stamping business area that had not been covered by the initial development, and the fourth phase comprised further maintenance adjusting the system to the needs of the stamping business. The results of these phases showed that the processes described in Requirements Engineering (RE) were present in the activities of requirements elicitation, analysis, documentation, and verification and validation, contributing academic and technical knowledge on topics related to legacy systems, RE, and Software Engineering. It was therefore concluded that Requirements Engineering can be applied to a legacy system for managing and costing sales proposals in a company in the stamping industry.
30

Extracting Parallelism from Legacy Sequential Code Using Transactional Memory

Saad Ibrahim, Mohamed Mohamed 26 July 2016 (has links)
Increasing the number of processors has become the mainstream approach in modern chip design. However, most applications are designed or written for single-core processors, so they do not benefit from the numerous underlying computation resources. Moreover, there exists a large base of legacy software that would require immense effort and cost to rewrite and re-engineer for parallelism. In the past decades there has been growing interest in automatic parallelization, both to relieve programmers of the painful and error-prone manual parallelization process and to cope with the architectural trend toward multi-core and many-core CPUs. Automatic parallelization techniques vary in properties such as the level of parallelism (e.g., instructions, loops, traces, tasks); the need for custom hardware support; the use of optimistic execution versus conservative decisions; whether they operate online, offline, or both; and the level of source-code exposure. Transactional Memory (TM) has emerged as a powerful concurrency-control abstraction: it simplifies parallel programming to the level of coarse-grained locking while achieving fine-grained locking performance. This dissertation exploits TM as an optimistic execution approach for transforming sequential applications into parallel ones. The design and implementation of two frameworks that support automatic parallelization, Lerna and HydraVM, are proposed, along with a number of algorithmic optimizations to make the parallelization effective. HydraVM is a virtual machine that automatically extracts parallelism from legacy sequential code (at the bytecode level) through a set of techniques including code profiling, data-dependency analysis, and execution analysis. HydraVM is built by extending the Jikes RVM and modifying its baseline compiler. Program correctness is preserved by exploiting Software Transactional Memory (STM) to manage concurrent and out-of-order memory accesses. Our experiments show that HydraVM achieves speedups between 2× and 5× on a set of benchmark applications. Lerna is a compiler framework that automatically and transparently detects and extracts parallelism from sequential code through a set of techniques including code profiling, instrumentation, and adaptive execution. Lerna is cross-platform and independent of the programming language. The parallel execution exploits memory transactions to manage concurrent and out-of-order memory accesses, which makes Lerna very effective for sequential applications with data sharing. This thesis introduces the general conditions for embedding any transactional memory algorithm into Lerna. In addition, ordered versions of four state-of-the-art algorithms have been integrated and evaluated using multiple benchmarks, including the RSTM micro-benchmarks, STAMP, and PARSEC. Lerna showed strong results, with an average speedup of 2.7× (and up to 18×) over the original sequential code. While prior research shows that transactions must commit in order to preserve program semantics, enforcing that ordering constrains scalability at large core counts. In this dissertation we eliminate the need to commit transactions sequentially without affecting program consistency. This is achieved by building a cooperation mechanism in which transactions can safely forward some of their changes, eliminating some false conflicts and increasing the concurrency level of the parallel application. The thesis proposes a set of commit-order algorithms that follow this approach.
Using the proposed commit-order algorithms, the peak gain over sequential non-instrumented execution is 10× on the RSTM micro-benchmarks and 16.5× on STAMP. Another main contribution is enhancing the concurrency and performance of TM in general, and of its use for parallelization in particular, by extending the TM primitives. The extended primitives capture the embedded low-level application semantics without affecting the TM abstraction. Furthermore, because the proposed extensions capture common code patterns, they can be handled automatically during compilation; in this work, that was done by modifying the GCC compiler to support our TM extensions. Results showed speedups of up to 4× on different applications, including micro-benchmarks and STAMP. Our final contribution is supporting the commit order through Hardware Transactional Memory (HTM). The HTM contention manager cannot be modified because it is implemented in hardware. Given this constraint, we exploit HTM to reduce the transactional execution overhead by proposing two novel commit-order algorithms and a hybrid reduced-hardware algorithm. The use of HTM improves performance by up to a 20% speedup. / Ph. D.
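The ordered-commit idea that runs through the dissertation — execute chunks of a sequential loop speculatively but publish their results in program order — can be illustrated with a simple commit gate. The sketch below is not Lerna's or HydraVM's actual mechanism; it is a minimal stand-in, assuming side-effect-free chunk work, that uses a ticket counter and a condition variable to serialise only the commits.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class OrderedCommitter:
    """Let speculative workers publish results only in program order."""
    def __init__(self):
        self._next = 0
        self._cv = threading.Condition()
        self.log = []

    def commit(self, ticket: int, result):
        with self._cv:
            # Wait until every earlier chunk has committed.
            self._cv.wait_for(lambda: self._next == ticket)
            self.log.append(result)      # the "write-back" of this chunk
            self._next += 1
            self._cv.notify_all()

def run_chunk(committer: OrderedCommitter, ticket: int, data):
    partial = sum(x * x for x in data)   # speculative, side-effect-free work
    committer.commit(ticket, partial)    # only the commit is ordered

if __name__ == "__main__":
    # Illustrates the ordering pattern only; Python threads do not give real
    # CPU parallelism here because of the GIL.
    chunks = [range(i * 100, (i + 1) * 100) for i in range(8)]
    committer = OrderedCommitter()
    with ThreadPoolExecutor(max_workers=4) as pool:
        for ticket, chunk in enumerate(chunks):
            pool.submit(run_chunk, committer, ticket, chunk)
    print(committer.log == [sum(x * x for x in c) for c in chunks])  # True
```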
