11.
Multithreaded PDE Solvers on Non-Uniform Memory Architectures. Nordén, Markus, January 2006
A trend in parallel computer architecture is that systems with a large shared memory are becoming more and more popular. A shared memory system can be either a uniform memory architecture (UMA) or a cache coherent non-uniform memory architecture (cc-NUMA). In the present thesis, the performance of parallel PDE solvers on cc-NUMA computers is studied. In particular, we consider the shared namespace programming model, represented by OpenMP. Since the main memory is physically, or geographically, distributed over several multi-processor nodes, the latency for local memory accesses is smaller than for remote accesses. Therefore, the geographical locality of the data becomes important. The focus of the present thesis is to study multithreaded PDE solvers on cc-NUMA systems, in particular their memory access patterns with respect to geographical locality. The questions posed are: (1) How large is the influence of the non-uniformity of the memory system on performance? (2) How should a program be written in order to reduce this influence? (3) Is it possible to introduce optimizations in the computer system for this purpose? The main conclusion is that geographical locality is important for performance on cc-NUMA systems. This is shown experimentally for a broad range of PDE solvers, as well as theoretically using a model involving characteristics of computer systems and applications. Geographical locality can be achieved through migration directives that are inserted by the programmer or, possibly in the future, automatically by the compiler. On some systems, it can also be accomplished by means of transparent, hardware-initiated migration and replication. However, a necessary condition for migration to be effective is that the memory access pattern must not be "speckled", i.e. each memory page should be accessed by as few threads as possible.
We also conclude that OpenMP is competitive with MPI on cc-NUMA systems if care is taken to get a favourable data distribution.
12.
Process evaluation of general data migration guidelines: A comparative study. Eng, Dennis, January 2010
Information systems form the backbone of many organizations today and are vital for their daily activities. Each day these systems grow bigger and more customized, to the point where they are heavily integrated into the current platform. Eventually, however, the platform grows obsolete and the system itself becomes an obstacle to further development. The question then arises: how do we upgrade the platform while retaining customizations and data? One answer is data migration, which is essentially the process of moving data from one environment to another. The problems of data migration become evident with extensive and heavily customized systems, which has effectively led to the absence of any general guidelines for data migration. This thesis attempts to take a first step towards finding and testing a set of general migration guidelines that might facilitate future migration projects. This is achieved through a comparative analysis of the general migration guidelines against the process of migrating data between different editions of the Microsoft SharePoint framework. The analysis attempts to find out whether the general guidelines are general enough for this migration process, and leaves it to future research to further assess their generality. The thesis also investigates the importance of incremental migration and of the ability to perform structural change during migration, as well as how these issues are handled by SharePoint's built-in migration tool. In the end, the general guidelines proved sufficient to express the SharePoint migration process and should therefore be used in further research to assess their worth in other projects. Regarding the second issue, the built-in migration tool proved weak in handling both incremental migration and structural change, which is unfortunate given the benefits these features bring.
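As an aside, the incremental-migration idea discussed here, moving only records that are new or changed since the last synchronisation, can be sketched in a few lines of Python (the record layout and function names are illustrative and unrelated to SharePoint's tooling):

```python
def incremental_migrate(source, target, last_synced):
    """Copy to `target` only the records that are new or changed
    since the snapshot taken at the last synchronisation."""
    migrated = []
    for key, record in source.items():
        if last_synced.get(key) != record:
            target[key] = record
            migrated.append(key)
    return migrated

# Only 'b' (changed) and 'c' (new) are moved; 'a' is untouched.
source = {"a": 1, "b": 2, "c": 3}
target = {}
moved = incremental_migrate(source, target, last_synced={"a": 1, "b": 9})
```

Run repeatedly with an updated snapshot, this touches only the delta on each pass, which is what makes incremental migration attractive for large, live systems.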
13.
Migrace databáze do prostředí Oracle Exadata / Database migration to the Oracle Exadata environment. Starý, Jan, January 2013
This thesis concerns data migration, specifically data migration into the Oracle Exadata database environment. A typical data migration project is designed and then elaborated into an operating guideline for data migration. The suggested process is then tested on a real database migration. Part of this work is also a detailed description of the Oracle Exadata technological solution, including an evaluation of its benefits and limitations in practical use. The information and feedback necessary to accomplish these goals were gained by interviewing specialists in different positions throughout the organisation who have real experience with the system. This work can also serve as a valuable source of information for projects dealing with data migration, mainly data migration into the Oracle Exadata environment. Potential future users may also find this work helpful when considering a purchase of the Oracle Exadata system package.
14.
Parallel Evolutionary Algorithms with SOM-Like Migration and their Application to Real World Data Sets. Villmann, Thomas; Haupt, Reiner; Hering, Klaus; Schulze, Hendrik, 19 October 2018
We introduce a multiple-subpopulation approach for parallel evolutionary algorithms whose migration scheme follows a SOM-like dynamics. We successfully apply this approach to clustering in both VLSI design and psychotherapy research. The advantages of the approach are a reduced communication overhead between the subpopulations while preserving a non-vanishing information flow.
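A rough illustration of such a scheme (a sketch of the general idea, not the authors' exact algorithm): subpopulations sit on a grid, and migrants are accepted with a probability that decays with grid distance under a Gaussian neighbourhood function, as in a SOM:

```python
import math
import random

def som_like_migration(subpops, positions, sigma=1.0, n_migrants=1):
    """One migration round: each subpopulation offers its best individuals
    to the others with a probability given by a Gaussian neighbourhood over
    the grid distance between their positions (lower value = fitter)."""
    result = [sorted(p) for p in subpops]
    for i, pop in enumerate(subpops):
        migrants = sorted(pop)[:n_migrants]            # best of sender i
        for j in range(len(subpops)):
            if i == j:
                continue
            d = math.dist(positions[i], positions[j])
            h = math.exp(-d * d / (2.0 * sigma ** 2))  # neighbourhood strength
            if random.random() < h:
                # replace receiver j's worst individuals with i's best
                result[j] = sorted(result[j][:-n_migrants] + migrants)
    return result

random.seed(42)
pops = [[5.0, 9.0], [1.0, 7.0], [3.0, 8.0]]
grid = [(0, 0), (0, 1), (1, 0)]
new_pops = som_like_migration(pops, grid)
```

Because migration probability falls off with grid distance, most exchanges are local, which is exactly where the reduced communication overhead between subpopulations comes from.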
15.
A Novel Cache Migration Scheme in Network-on-Chip Devices. Nafziger, Jonathan W., 06 December 2010
No description available.
16.
Migrace systémové databáze elektronického obchodu / E-commerce System Database Migration. Zkoumalová, Barbora, January 2016
The object of this master's thesis is the design and creation of a tool for migrating an e-commerce system database from the ZenCart platform to the PrestaShop platform. Both system databases are described and analysed, and based on the information gained, the migration tool is created according to the customer's requirements; the final data migration from the original to the new database is then executed.
17.
Knowledge Guided Non-Uniform Rational B-Spline (NURBS) for Supporting Design Intent in Computer Aided Design (CAD) Modeling. Rajab, Khairan, 01 January 2011
For many years, incompatible computer-aided design (CAD) packages based on Non-Uniform Rational B-Spline (NURBS) technology have exchanged models and data through either neutral file formats (IGES or STEP) or proprietary formats that have been accepted as quasi-industry standards. Although this is the only solution available at present, the exchange process most often produces unsatisfactory results. Models that are impeccable in the original modeling system usually end up with gaps or intersections between surfaces on another, incompatible system. Loss of information, changes in data accuracy, inconsistent tolerances, and misinterpretation of the original design intent are a few examples of the problems associated with migrating models between different CAD systems. While these issues and drawbacks are well known and cost the industry billions of dollars every year, no solution that eradicates the problems at their source has been developed. Meanwhile, researchers, along with the industries concerned, have been trying to resolve such problems by finding means to repair the migrated models, either manually or with specialized software.
Design in recent years is becoming more knowledge-intensive, and it is essential for NURBS to take its share of this ever-increasing use of knowledge. NURBS are very powerful modeling tools and have become the de facto standard in modeling. If we stretch their strength and make them knowledge-driven, benefits beyond current expectations can be achieved easily. This dissertation introduces knowledge-guided NURBS, with theoretical and practical foundations for supporting design-intent capture, retrieval, and exchange among dissimilar CAD systems. It shows that if NURBS entities are tagged with some knowledge, we can achieve seamless data exchange, increase robustness, and obtain more reliable computations, all of which are ultimate objectives that researchers in the field of CAD have been pursuing for decades. Establishing relationships between a NURBS entity and its origin and destinations can aid seamless CAD model migration. Knowing the type of a NURBS entity, and being aware of any irregularities in it, allows more intelligent decisions on how to proceed with many computations, increasing robustness and achieving a high level of reliability.
As a result, instead of having models that are hardly modifiable because of migrating raw numerical data in isolation, the knowledge driven migration process will produce models that are editable and preserve design intent. We have addressed the issues not only theoretically but also by developing a prototype system that can serve as a test bed. The developed system shows that a click of a button can regenerate a migrated model instead of repairing it, avoiding delay and corrective processes that only limit the effective use of such models.
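As an illustration of the tagging idea (the schema and field names here are hypothetical, not the dissertation's actual format), a NURBS entity can carry a free-form knowledge record through a neutral exchange payload and recover it losslessly on the other side:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class TaggedNurbsCurve:
    """A NURBS curve bundled with a knowledge tag that travels with it."""
    degree: int
    control_points: list
    knots: list
    knowledge: dict = field(default_factory=dict)  # origin, intent, irregularities

    def to_exchange(self) -> str:
        # Serialise geometry and knowledge together, not raw geometry alone.
        return json.dumps(asdict(self))

    @classmethod
    def from_exchange(cls, payload: str) -> "TaggedNurbsCurve":
        return cls(**json.loads(payload))

curve = TaggedNurbsCurve(
    degree=3,
    control_points=[[0, 0], [1, 2], [3, 1], [4, 0]],
    knots=[0, 0, 0, 0, 1, 1, 1, 1],
    knowledge={"origin_system": "SystemA", "entity_type": "fillet_boundary"},
)
restored = TaggedNurbsCurve.from_exchange(curve.to_exchange())
```

The point of the sketch is the contrast with migrating raw numerical data in isolation: because the knowledge travels with the entity, the receiving side can regenerate an editable model rather than repair a dumb one.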
18.
Testování neoklasického modelu migrace: Empirická analýza panelových dat ČR / Testing the neoclassical migration model: An empirical analysis based on panel data for the Czech Republic. Kureková, Lucie, January 2013
This paper tests the validity of the neoclassical migration model. For this purpose, a fixed-effects model and a VAR model were used. The data cover the period from 2001 to 2010 for 14 regions of the Czech Republic, and the dataset contains 140 observations. The empirical results of the fixed-effects model show that socioeconomic determinants had a significant influence on the regional rate of migration in the Czech Republic. The direction and strength of influence of most explanatory variables corresponded to neoclassical theory. Estimates from the VAR model indicate that regional migration did not decrease disparities among the regions. These results call the validity of the neoclassical migration model into question.
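The fixed-effects estimation referred to above can be illustrated with the within transformation: demeaning each variable inside its region removes the region-specific effects, after which ordinary least squares recovers the common slope. A minimal single-regressor sketch on synthetic data (not the thesis dataset):

```python
from collections import defaultdict

def within_estimator(groups, x, y):
    """Fixed-effects (within) estimator for one regressor: subtract the
    group means from x and y, then run OLS through the origin."""
    gx, gy = defaultdict(list), defaultdict(list)
    for g, xi, yi in zip(groups, x, y):
        gx[g].append(xi)
        gy[g].append(yi)
    xbar = {g: sum(v) / len(v) for g, v in gx.items()}
    ybar = {g: sum(v) / len(v) for g, v in gy.items()}
    num = sum((xi - xbar[g]) * (yi - ybar[g]) for g, xi, yi in zip(groups, x, y))
    den = sum((xi - xbar[g]) ** 2 for g, xi in zip(groups, x))
    return num / den

# Synthetic panel: y = 2 * x + region effect (10 for region 0, -5 for region 1).
regions = [0, 0, 0, 1, 1, 1]
x = [1, 2, 3, 4, 5, 6]
y = [12, 14, 16, 3, 5, 7]
beta = within_estimator(regions, x, y)
```

Because the fixed regional effects are differenced away, the estimator recovers the slope of 2 exactly here, regardless of how large the region-specific intercepts are.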
19.
Utilization of ETL Processes for Geographical Data Migration: A Case Study at Metria AB. Sihvola, Toni, January 2024
In this study, the safety of using ETL processes to migrate geographical data between heterogeneous data sources was investigated, as well as whether certain data structures are more prone to integrity loss during such migrations. Geographical data in various vector structures was migrated using the ETL software FME from a legacy data source (Oracle 11g with integrated Esri geodatabases) to another (PostgreSQL 14.10 with the PostGIS extension) in order to explore these questions. The maintenance of data integrity post-migration was assessed by comparing the geodata housed in Oracle 11g (the source) and PostgreSQL 14.10 (the destination) using ArcGIS Pro's built-in tools and a Python script. To further evaluate the role of ETL processes in geographical data migration, interviews were conducted with specialists in databases, data migration, and FME, both before and after the migration. The study concludes that different vector structures are affected differently. Whereas points and lines maintained 100% data integrity across all datasets, polygons achieved 99.95% accuracy in one of the three tested datasets. The issue can be managed by implementing a repair process during the Transform stage of an ETL process. However, such a process does not guarantee an entirely successful outcome; although the affected area was significantly reduced post-repair, the polygons contained a higher number of mismatches.
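A simplified version of such a post-migration integrity check (illustrative only; the study itself used ArcGIS Pro's tools, not this script) compares shoelace areas of matching polygons in the source and destination:

```python
def polygon_area(ring):
    """Shoelace area of a polygon ring given as [(x, y), ...]."""
    n = len(ring)
    s = sum(ring[i][0] * ring[(i + 1) % n][1] - ring[(i + 1) % n][0] * ring[i][1]
            for i in range(n))
    return abs(s) / 2.0

def integrity_ratio(source_polys, dest_polys, rel_tol=1e-9):
    """Fraction of polygon pairs whose area survives migration within tolerance."""
    ok = 0
    for a, b in zip(source_polys, dest_polys):
        area_a, area_b = polygon_area(a), polygon_area(b)
        if abs(area_a - area_b) <= rel_tol * max(area_a, 1e-12):
            ok += 1
    return ok / len(source_polys)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
clipped = [(0, 0), (1, 0), (1, 1)]   # a polygon damaged in transit
ratio = integrity_ratio([square, square], [square, clipped])
```

An area-only comparison is deliberately crude: two polygons can share an area yet differ in shape, which is why vertex-level comparison tools catch mismatches that a check like this would miss.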
20.
Data and application migration in cloud based data centers: architectures and techniques. Zhang, Gong, 19 May 2011
Computing and communication continue to change the way we run business, the way we learn, and the way we live. The rapid evolution of computing technology has also expedited the growth of digital data, the workload of services, and the complexity of applications. Today, the cost of managing storage hardware ranges from two to ten times the acquisition cost of the hardware itself. We see an increasing demand for technologies that transfer the management burden from humans to software. Data migration and application migration are popular technologies that enable computing and data storage management to be autonomic and self-managing.
In this dissertation, we examine important issues in designing and developing scalable architectures and techniques for efficient and effective data migration and application migration. The first contribution is an investigation of automated data migration across multi-tier storage systems. The significant I/O improvement of Solid State Disks (SSD) over traditional rotational hard disks (HDD) motivates the integration of SSD into the existing storage hierarchy for enhanced performance. We developed an adaptive look-ahead data migration approach to integrate SSD effectively into a multi-tiered storage architecture. When the fast and expensive SSD tier stores the high-temperature data (hot data) while the relatively low-temperature data (cold data) is placed in the HDD tier, an important piece of functionality is managing the migration of data as access patterns change from hot to cold and vice versa. For example, workloads during the day in typical banking applications can be dramatically different from those at night. We designed and implemented an adaptive look-ahead data migration model. A unique feature of our automated migration approach is its ability to dynamically adapt the data migration schedule to achieve optimal migration effectiveness, taking into account application-specific characteristics and I/O profiles as well as workload deadlines. Our experiments, run over a real system trace, show that the basic look-ahead data migration model is effective in improving system resource utilization, and that the adaptive look-ahead migration model is more efficient for continuously improving and tuning the performance and scalability of multi-tier storage systems. The second main contribution of this dissertation research is to address the challenge of ensuring reliability and balancing loads across a network of computing nodes managed in a decentralized service computing system.
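The placement decision behind the tiered migration described earlier can be sketched in simplified form (a plain temperature ranking, not the adaptive look-ahead model itself): given predicted access temperatures for the next workload window, the hottest extents that fit on the SSD are promoted and the rest demoted.

```python
def plan_migration(predicted_temp, ssd_capacity, current_tier):
    """Decide which extents to move between tiers for the next window:
    the ssd_capacity hottest extents belong on SSD, everything else on HDD."""
    ranked = sorted(predicted_temp, key=predicted_temp.get, reverse=True)
    want_on_ssd = set(ranked[:ssd_capacity])
    promote = sorted(e for e, t in current_tier.items()
                     if t == "HDD" and e in want_on_ssd)
    demote = sorted(e for e, t in current_tier.items()
                    if t == "SSD" and e not in want_on_ssd)
    return promote, demote

# Daytime profile: extent 'a' heats up while 'c' cools down.
promote, demote = plan_migration(
    predicted_temp={"a": 10, "b": 5, "c": 1},
    ssd_capacity=2,
    current_tier={"a": "HDD", "b": "SSD", "c": "SSD"},
)
```

The adaptive model goes further by scheduling when these moves run, using I/O profiles and workload deadlines, so that migration traffic does not compete with foreground requests.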
When providing location-based services for geographically distributed mobile users, continuous and massive service request workloads pose significant technical challenges for the system to guarantee scalable and reliable service provision. We design and develop a decentralized service computing architecture, called Reliable GeoGrid, with two unique features. First, we develop a distributed workload migration scheme with controlled replication, which utilizes a shortcut-based optimization to increase the resilience of the system against various node failures and network partition failures. Second, we devise a dynamic load balancing technique to scale the system in anticipation of unexpected workload changes. Our experimental results show that the Reliable GeoGrid architecture is highly scalable under changing service workloads with moving hotspots, and highly reliable in the presence of massive node failures. The third research thrust of this dissertation focuses on the process of migrating applications from local physical data centers to the cloud. We design migration experiments, study the error types, and build an error model. Based on the analysis and observations from the migration experiments, we propose the CloudMig system, which provides both configuration validation and installation automation, effectively reducing configuration errors and installation complexity. In this dissertation, I provide an in-depth discussion of the principles of migration and its applications in improving data storage performance, balancing service workloads, and adapting to cloud platforms.
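Workload placement with controlled replication can be illustrated with a standard consistent-hash ring (a textbook technique, not the Reliable GeoGrid protocol itself): each request key maps to an owner node, and the next successor on the ring holds a replica so that a single node failure does not lose the partition.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Keys map to the first node clockwise on a hash ring; the next
    replicas - 1 distinct successors hold copies for failure resilience."""
    def __init__(self, nodes, replicas=2):
        self.replicas = replicas
        self.ring = sorted((self._h(n), n) for n in nodes)
        self._hashes = [h for h, _ in self.ring]

    @staticmethod
    def _h(key):
        return int(hashlib.sha256(str(key).encode()).hexdigest(), 16)

    def owners(self, key):
        # First ring position at or after the key's hash, wrapping around.
        i = bisect.bisect(self._hashes, self._h(key)) % len(self.ring)
        return [self.ring[(i + k) % len(self.ring)][1] for k in range(self.replicas)]

ring = ConsistentHashRing(["node-1", "node-2", "node-3"], replicas=2)
owners = ring.owners("request-42")
```

When a node fails, only the keys it owned move to its successor, which is why hash-ring schemes keep migration traffic small under churn.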