About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Integrating Algorithmic and Systemic Load Balancing Strategies in Parallel Scientific Applications

Ghafoor, Sheikh Khaled 13 December 2003
Load imbalance is a major source of performance degradation in parallel scientific applications. Load balancing increases the efficient use of existing resources and improves the performance of parallel applications running in distributed environments. At a coarse level of granularity, advances in runtime systems for parallel programs have been proposed in order to control available resources as efficiently as possible by utilizing idle resources and using task migration. At a finer level of granularity, advances in algorithmic strategies for dynamically balancing computational loads by data redistribution have been proposed in order to respond to variations in processor performance during the execution of a given parallel application. Algorithmic and systemic load balancing strategies have complementary sets of advantages. An integration of these two techniques is possible, and it should result in a system that delivers advantages over each technique used in isolation. This thesis presents the design and implementation of a system that combines an algorithmic, fine-grained, data-parallel load balancing strategy called Fractiling with a systemic, coarse-grained, task-parallel load balancing system called Hector. It also reports experimental results of running N-body simulations under this integrated system. The experimental results indicate that a distributed runtime environment which combines both algorithmic and systemic load balancing strategies can provide performance advantages with little overhead, underscoring the importance of this approach for large, complex scientific applications.
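The thesis's own implementation is not reproduced in this record; as a hedged illustration of the factoring-style dynamic scheduling that Fractiling builds on, the sketch below (all names invented) hands geometrically shrinking chunks of an iteration space to whichever worker is idle, so faster processors naturally claim more work:

```python
import threading

def make_chunks(total_items, num_workers, factor=2):
    """Split an iteration space into geometrically shrinking chunks
    (a factoring-style schedule, as in Fractiling-like strategies)."""
    chunks, start, remaining = [], 0, total_items
    while remaining > 0:
        size = max(1, remaining // (factor * num_workers))
        for _ in range(num_workers):
            if remaining <= 0:
                break
            step = min(size, remaining)
            chunks.append((start, start + step))
            start += step
            remaining -= step
    return chunks

def run_balanced(work_fn, total_items, num_workers=4):
    """Workers pull chunks from a shared queue; early large chunks
    amortize overhead, late small chunks smooth out imbalance."""
    chunks = make_chunks(total_items, num_workers)
    lock = threading.Lock()

    def worker():
        while True:
            with lock:
                if not chunks:
                    return
                lo, hi = chunks.pop(0)
            for i in range(lo, hi):
                work_fn(i)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```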
2

LB_Migrate: A DYNAMIC LOAD BALANCING LIBRARY FOR SCIENTIFIC APPLICATIONS

Chaube, Rohit Kailash 15 December 2007
Parallel and distributed environments are used to solve large scientific and engineering problems that are often irregular and data parallel. However, the performance of many parallel applications is affected by computation overheads, communication time, and load imbalance. Among these factors, load imbalance is caused by the irregular nature of the problem, its algorithm, differences in processor characteristics, and runtime loads. A number of applications achieve load balancing by one-time assignment of tasks. However, many applications have workloads that are unpredictable and vary over the course of their execution. For such applications, load balancing is achieved by dynamic assignment of tasks at runtime. A large group of scientific applications has parallel loops as a major source of concurrency, but due to the irregular execution times of the loops, it is difficult to achieve optimal performance without dynamic load balancing. A number of dynamic load balancing tools and libraries have been developed for different kinds of applications. However, these libraries fail to address all three degradation factors, i.e., problem, algorithmic, and systemic. In this thesis, a dynamic load balancing library called LB_Migrate is presented which addresses the degradation factors in applications with parallel loops. The library provides a range of dynamic scheduling techniques and data migration strategies to achieve effective load balancing. It is designed to be independent of the host application's data structures, hence providing the flexibility to be used with different applications. The analysis of experimental results using LB_Migrate with different applications indicates consistent performance improvement and low overhead cost from the use of the library.
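LB_Migrate's actual API is not shown in the abstract; the sketch below only illustrates the general shape of such a library, in which the host application registers callbacks so the library stays independent of its data structures. Every name here is hypothetical:

```python
class LoopBalancer:
    """Hypothetical loop-scheduling library facade: the host application
    supplies callbacks, keeping the library data-structure agnostic."""

    def __init__(self, execute_range, migrate_data, chunk_size=64):
        self.execute_range = execute_range  # callback: run iterations [lo, hi)
        self.migrate_data = migrate_data    # callback: move data between workers
        self.chunk_size = chunk_size

    def run(self, total_iterations, worker_speeds):
        # Hand out chunks in proportion to observed worker speed.
        next_iter = 0
        while next_iter < total_iterations:
            for worker, speed in enumerate(worker_speeds):
                if next_iter >= total_iterations:
                    break
                size = min(max(1, int(self.chunk_size * speed)),
                           total_iterations - next_iter)
                self.execute_range(worker, next_iter, next_iter + size)
                next_iter += size
        # If imbalance is severe, shift data toward the faster worker
        # for the next outer iteration.
        slowest = min(range(len(worker_speeds)), key=lambda w: worker_speeds[w])
        fastest = max(range(len(worker_speeds)), key=lambda w: worker_speeds[w])
        if worker_speeds[fastest] > 2 * worker_speeds[slowest]:
            self.migrate_data(slowest, fastest)
```

The design point this sketch reflects is the one the abstract emphasizes: because scheduling and migration are driven entirely through callbacks, the same library can serve applications with very different internal data layouts.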
3

Creation and application of a methodology for evaluating the quality of data migration

Němcová, Alžběta January 2015
The main goal of this thesis is to create a useful methodology for data migration assessment, especially for assessing the quality of execution. Data migration projects are an often mentioned topic, but a way of assessing them is missing from the literature. The methodology is based on existing information, methodologies, and processes, and incorporates expert experience. As part of the methodology, an evaluation tool was created that is universally applicable to various projects. For this purpose, the data migration process was divided into individual activities that contain processes and their criteria, which help assess the fulfillment of the processes. The tool works by assigning a weight to each criterion, according to which all the processes are evaluated. All the criteria should be scored by an expert evaluator with knowledge of the data migration area. Part of this work is also an application of the methodology to a real project, the implementation of a new core banking system (CBS). Based on the results and their interpretation, recommendations were made that should improve the quality of data migration.
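The weighted-criteria evaluation the abstract describes can be sketched as follows; this is a minimal illustration with invented criteria, weights, and scores, not the thesis's actual tool:

```python
def evaluate_process(scores, weights):
    """Weighted average of expert scores for one process.
    scores/weights: dicts mapping criterion name -> value."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

# Hypothetical criteria for a "data validation" activity,
# scored 0-10 by an expert evaluator.
weights = {"completeness": 0.5, "accuracy": 0.3, "timeliness": 0.2}
scores = {"completeness": 8, "accuracy": 6, "timeliness": 9}
print(evaluate_process(scores, weights))  # 7.6
```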
4

Data migration and quality assurance and testing projects methodology of Deloitte CZ

Pospíšil, Marek January 2011
The main purpose of this thesis is to introduce a method for data migration quality assurance, i.e., testing the completeness and accuracy of migrated data. The method will become part of the knowledge base of Deloitte Czech Republic for projects in the Enterprise Risk Services department. Data migration quality assurance projects carried out by Deloitte in the Czech Republic have their own specifics. Although a methodology called the "Systems Development Playbook" exists, which also includes a data migration methodology, the problem, especially for the Prague branch, is that the procedures and methods for consulting projects in the area of data migration are not described in the current methodological documentation, including the specifics of Czech and Slovak projects. This represents a risk of inconsistencies in the delivery of this type of consulting project if key employees leave. Improvements to procedures and the optimization of human resources engagement in data migration projects cannot be measurably compared across projects if there is no baseline methodology against which specific projects can be measured. The objectives of the work are achieved by consolidating experience from past data migration projects within Deloitte Czech Republic and by designing improvements to existing processes, integrating information from external sources and internal sources of the global Deloitte Touche Tohmatsu.
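Completeness and accuracy testing of migrated data is commonly implemented as a reconciliation between source and target; the sketch below shows that basic idea only (an illustration under generic assumptions, not Deloitte's actual procedure): record counts and key coverage check completeness, per-record fingerprints check accuracy.

```python
import hashlib

def row_fingerprint(row):
    """Stable hash of a record's normalized field values."""
    joined = "|".join(str(v).strip() for v in row)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

def reconcile(source_rows, target_rows, key_index=0):
    """Completeness: every source key arrives. Accuracy: field values match."""
    source = {row[key_index]: row_fingerprint(row) for row in source_rows}
    target = {row[key_index]: row_fingerprint(row) for row in target_rows}
    missing = set(source) - set(target)           # completeness failures
    mismatched = {k for k in source.keys() & target.keys()
                  if source[k] != target[k]}      # accuracy failures
    return missing, mismatched
```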
5

QuickMig: automatic schema matching for data migration projects

Drumm, Christian, Schmitt, Matthias, Do, Hong-Hai, Rahm, Erhard 14 December 2018
A common task in many database applications is the migration of legacy data from multiple source systems into a new one. This requires identifying semantically related elements of the source and target systems and creating mapping expressions to transform instances of those elements from the source format to the target format. Currently, data migration is typically done manually, a tedious and time-consuming process which is difficult to scale to a high number of data sources. In this paper, we describe QuickMig, a new semi-automatic approach to determining semantic correspondences between schema elements for data migration applications. QuickMig advances the state of the art with a set of new techniques exploiting sample instances, domain ontologies, and reuse of existing mappings to detect not only element correspondences but also their mapping expressions. QuickMig further includes new mechanisms to effectively incorporate users' domain knowledge into the matching process. The results from a comprehensive evaluation using real-world schemas and data indicate the high quality and practicability of the overall approach.
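QuickMig's algorithms are not reproduced in this record; the sketch below only illustrates one ingredient the abstract names, instance-based matching, in which sample values of source and target elements are compared to propose correspondences. The similarity measure, threshold, and all schema names are invented for illustration:

```python
def value_overlap(samples_a, samples_b):
    """Jaccard overlap of sample instance values for two schema elements."""
    a, b = set(samples_a), set(samples_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_schemas(source, target, threshold=0.5):
    """source/target: dicts mapping element name -> list of sample values.
    Returns proposed element correspondences ranked by overlap."""
    candidates = []
    for s_elem, s_vals in source.items():
        for t_elem, t_vals in target.items():
            score = value_overlap(s_vals, t_vals)
            if score >= threshold:
                candidates.append((s_elem, t_elem, round(score, 2)))
    return sorted(candidates, key=lambda c: -c[2])

# Example: the two customer-name columns share most sample values.
src = {"KUNDE_NAME": ["Acme", "Globex", "Initech"], "KUNDE_NR": ["1", "2", "3"]}
tgt = {"customer_name": ["Acme", "Globex", "Hooli"], "customer_id": ["10", "2", "30"]}
print(match_schemas(src, tgt))  # [('KUNDE_NAME', 'customer_name', 0.5)]
```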
6

Investigation of database migration methods : A comparison between Export/Import and ETL based on the most suitable method for making a database more efficient

Asplund, Felicia January 2020
Data migration means the transfer of data from one database to another. Companies may need to do this for various reasons, for example to change the language of a system or to renew an existing database. The open questions are how this process should take place and how data is best migrated. One IT company in need of answers is XLENT Sundsvall. XLENT has an online shop whose interface is outdated and which needs a better server solution from a management perspective. This study reviews the possibilities of moving the existing website to a more modern e-commerce solution, and looks at the best way to migrate the data to a new database that is more suitable for the website. The methods compared are an export/import method and an Extract, Transform, Load (ETL) tool. The export/import method proved to be the most suitable process for a database with these properties, and a migration using the selected process was performed. The migration process also includes cleaning of data, an important step since the database contained redundant data. A comparison between the new and old databases showed that the cleaning was successful: the redundant data was reduced by 24 percent. The export/import process was chosen because it best suited the characteristics of the existing database. Had the database been much larger, or written in another SQL dialect, this method would no longer be optimal; the ETL tool would then be preferable. For future work, it would be interesting to go beyond a theoretical comparison and migrate databases with different properties using the different methods, to get a more comprehensive picture of which one is best suited in each case.
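As a minimal sketch of the ETL approach the study compares against (all table and column names are invented), the transform step is where cleaning such as removing redundant rows happens before loading:

```python
import sqlite3

def etl_products(source_db, target_db):
    """Tiny ETL sketch: extract rows, transform (deduplicate), load.
    Table/column names are hypothetical."""
    src = sqlite3.connect(source_db)
    rows = src.execute("SELECT name, price FROM products").fetchall()  # Extract
    src.close()

    deduped = list(dict.fromkeys(rows))  # Transform: drop exact duplicates

    dst = sqlite3.connect(target_db)
    dst.execute("CREATE TABLE IF NOT EXISTS products (name TEXT, price REAL)")
    dst.executemany("INSERT INTO products VALUES (?, ?)", deduped)     # Load
    dst.commit()
    dst.close()
```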
7

Merger in terms of a data migration project

Worsch, Filip January 2013
This work deals with data migration during the process of mergers and acquisitions. The first, theoretical part defines general terms concerning mergers and acquisitions, mentions motives that may lead to them, and describes the basic stages of the implementation process. This section also presents brief information on selected project management methodologies, whose principles are used in the next part, which deals with data migration projects. The aim of this work is to develop a methodology for managing data migration projects in the mergers and acquisitions process, which may serve as a guide for merging companies and data migration teams. The methodology specifies activities and defines procedures to be followed in the various stages of the project life cycle. For each activity, the procedure and the prerequisites for carrying it out are specified. The methodology also describes common problems that a data migration project may face during execution and that could threaten the success of the project or a part of it; for each potential problem, a possible way to eliminate or resolve it is given.
8

Analysis and Implementation of Data Migration Between Information Systems

Urbanavičius, Tomas 28 June 2010
This master's thesis investigates data migration between information systems. The aim of the work is to migrate client data from an old information system to a new one. The data migration approaches most commonly used in practice are analyzed, together with their advantages and drawbacks, and the stages of data migration are presented in detail along with how they should be applied in the migration process. The practical part presents a data migration tool implemented at UAB „Exigen Services Vilnius". The program was created to migrate legacy insurance policy data using an XML file: during development, a data migration specification was drawn up, a migration methodology was selected, and the migration tool was built, using XML for information transfer between the export tool and the data loader. The thesis consists of 5 parts: data migration methodology, data migration stages and problems, the data migration process, the data migration implementation, and the data migration tool specification. The thesis comprises 63 pages of text without appendices, 25 figures, 2 tables, and 11 bibliographic entries.
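The abstract describes transferring records via an XML file between an export tool and a loader; below is a minimal sketch of that pattern, with element and field names invented for illustration:

```python
import xml.etree.ElementTree as ET

def export_policies(policies, path):
    """Export side: write records to an XML exchange file."""
    root = ET.Element("policies")
    for p in policies:
        el = ET.SubElement(root, "policy", id=str(p["id"]))
        ET.SubElement(el, "holder").text = p["holder"]
        ET.SubElement(el, "premium").text = str(p["premium"])
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

def load_policies(path):
    """Loader side: parse the exchange file back into records."""
    root = ET.parse(path).getroot()
    return [{"id": int(el.get("id")),
             "holder": el.findtext("holder"),
             "premium": float(el.findtext("premium"))}
            for el in root.findall("policy")]

export_policies([{"id": 1, "holder": "J. Jonaitis", "premium": 120.0}], "out.xml")
print(load_policies("out.xml"))
```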
9

Process evaluation of general data migration guidelines : A comparative study

Eng, Dennis January 2010
Information systems form the backbone of many organizations today and are vital for their daily activities. Each day these systems grow bigger and more customized, to the point where they are heavily integrated into the current platform. Eventually, however, the platform grows obsolete and the system itself becomes an obstacle to further development. Then the question arises: how do we upgrade the platform while retaining customizations and data? One answer is data migration, which is essentially the process of moving data from one device to another. The problems of data migration become evident with extensive and heavily customized systems, which effectively leads to the absence of any general guidelines for data migration. This thesis attempts to take a first step in finding and testing a set of general migration guidelines that might facilitate future migration projects. This is achieved using a comparative analysis of the general migration guidelines versus the process of migrating data between different editions of the Microsoft SharePoint framework. The analysis attempts to find out whether the general guidelines are general enough for this migration process, and leaves it to future research to further assess their generality. This thesis also investigates the importance of incremental migration and of the ability to perform structural change during migration, as well as how these issues are handled by the built-in migration tool of SharePoint. In the end, the general guidelines proved sufficient to express the SharePoint migration process and should therefore be used in further research to assess their worth in other projects. On the second issue, the built-in migration tool proved weak in handling either incremental migration or structural change, which is unfortunate given the benefits these features bring.
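Incremental migration, one of the two features the thesis evaluates, can be sketched generically as follows (an illustration under invented names, not SharePoint's actual tooling): only items changed since the last successful run are copied, tracked by a high-water-mark timestamp.

```python
import json, os

STATE_FILE = "migration_state.json"  # hypothetical bookkeeping file

def incremental_migrate(fetch_changed_since, copy_item):
    """Migrate only items modified since the last successful run.
    fetch_changed_since(ts) -> iterable of (item_id, modified_ts, payload);
    copy_item(payload) performs the actual transfer."""
    last_run = 0.0
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            last_run = json.load(f)["high_water_mark"]

    high_water_mark = last_run
    for item_id, modified_ts, payload in fetch_changed_since(last_run):
        copy_item(payload)                       # idempotent copy to target
        high_water_mark = max(high_water_mark, modified_ts)

    with open(STATE_FILE, "w") as f:             # persist progress only on success
        json.dump({"high_water_mark": high_water_mark}, f)
```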
10

Functional Query Languages with Categorical Types

Wisnesky, Ryan 25 February 2014
We study three category-theoretic types in the context of functional query languages (typed lambda-calculi extended with additional operations for bulk data processing). The types we study are: / Engineering and Applied Sciences
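The abstract is truncated in this record and does not name the three types; as a generic illustration of the setting it does describe — a lambda calculus extended with operations for bulk data processing — a comprehension-style query can be built from a few higher-order bulk operators (all data below is invented):

```python
# Bulk operations of a functional query language, written as plain
# higher-order functions over lists (Python stands in for a typed
# lambda calculus here).
def q_map(f, xs):     return [f(x) for x in xs]
def q_filter(p, xs):  return [x for x in xs if p(x)]
def q_flatten(xss):   return [x for xs in xss for x in xs]

# A comprehension-style query: for each department, the names of
# employees earning over 50k.
depts = [{"name": "eng", "emps": [("ann", 60), ("bob", 40)]},
         {"name": "ops", "emps": [("carol", 70)]}]
result = q_flatten(
    q_map(lambda d: q_map(lambda e: (d["name"], e[0]),
                          q_filter(lambda e: e[1] > 50, d["emps"])),
          depts))
print(result)  # [('eng', 'ann'), ('ops', 'carol')]
```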
