  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Adapting the polytope model for dynamic and speculative parallelization

Jimborean, Alexandra 14 September 2012 (has links) (PDF)
In this thesis, we present a Thread-Level Speculation (TLS) framework whose main feature is to speculatively parallelize a sequential loop nest in various ways, to maximize performance. We perform code transformations by applying the polyhedral model that we adapted for speculative and runtime code parallelization. For this purpose, we designed a parallel code pattern which is patched by our runtime system according to the profiling information collected on some execution samples. We show on several benchmarks that our framework yields good performance on codes which could not be handled efficiently by previously proposed TLS systems.
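The core idea of any TLS scheme, optimistic parallel execution followed by validation and rollback, can be illustrated with a toy sketch. This is a minimal Python illustration under invented names (`loop_body`, the chunking, and the spot-check validation are all assumptions for the example), not the thesis's polyhedral framework or its parallel code patterns:

```python
from concurrent.futures import ThreadPoolExecutor

def loop_body(x):
    # Stand-in for one iteration of the sequential loop nest.
    return x * x + 1

def speculative_parallel_map(xs, n_workers=4):
    if not xs:
        return []
    # Optimistic phase: run contiguous chunks in parallel, speculating
    # that iterations carry no dependences on each other.
    size = (len(xs) + n_workers - 1) // n_workers
    chunks = [xs[i:i + size] for i in range(0, len(xs), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(lambda c: [loop_body(x) for x in c], chunks))
    result = [y for part in parts for y in part]
    # Validation phase: spot-check a few iterations against the
    # sequential semantics; a mismatch triggers rollback.
    for i in (0, len(xs) // 2, len(xs) - 1):
        if result[i] != loop_body(xs[i]):
            return [loop_body(x) for x in xs]  # rollback: sequential re-execution
    return result
```

A real TLS runtime validates every speculatively executed memory access, not a sample, and restarts only from the last committed chunk rather than from scratch.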
202

Optimisation évolutionnaire multi-objectif parallèle : application à la combustion Diesel / Multi-objective parallel evolutionary algorithms : Application to Diesel Combustion

Yagoubi, Mouadh 03 July 2012 (has links)
In order to comply with environmental regulations, automotive manufacturers have to develop efficient engines with low fuel consumption and low emissions. The design of engine combustion systems (chamber, injector, air loop) is therefore a hard task, since many parameters have to be tuned to optimize several conflicting objectives. Evolutionary multi-objective optimization algorithms (EMOAs) are an efficient tool to explore the search space and find promising engine combustion systems. Unfortunately, the main drawback of evolutionary algorithms (EAs) in general, and of EMOAs in particular, is their high cost in terms of the number of function evaluations required to reach a satisfactory solution; this drawback can become prohibitive for real-world problems in which the objectives are computed through heavy numerical simulations that take hours or even days to complete. The main objective of this work is to reduce the global cost of real-world optimization by parallelizing EMOAs and by using evaluation-reduction techniques (surrogate models). Motivated by the heterogeneity of the evaluation costs observed in real-world applications, we study asynchronous steady-state selection schemes in a master-slave parallel configuration. This approach makes efficient use of the available processors on the computing grid and consequently reduces the global optimization cost. In the first part of this work, the problem is studied from an algorithmic point of view, through an artificial adaptation of EMOAs to the context of real-world optimization characterized by heterogeneous evaluation costs. In the second part, the proposed approaches, validated on analytical functions in the first part, are applied to the Diesel combustion problem, which represents the industrial context of this thesis. Two modelling approaches are used: phenomenological modelling (0D model) and multi-dimensional modelling (3D model). Thanks to its reasonable evaluation cost (a few hours per evaluation), the 0D model allowed us to compare the asynchronous steady-state approach with the standard generational one by performing two distinct optimizations; a gain of about 42% was observed with the asynchronous steady-state approach. Given the very high evaluation cost of the full 3D model (a few days per evaluation), the already validated asynchronous steady-state approach was applied directly. The physical analysis of the results identified an interesting combustion-bowl concept yielding a gain in terms of pollutant emissions.
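The asynchronous steady-state master-slave scheme described above can be sketched as follows. This is a single-objective toy illustration under stated assumptions (the `evaluate` function, tournament parent selection, and Gaussian mutation are invented for the example; the thesis concerns multi-objective variants on real simulations):

```python
import random
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def evaluate(x):
    # Stand-in for a costly, possibly slow simulation; minimized.
    return (x - 0.3) ** 2

def async_steady_state(pop_size=8, budget=64, workers=4, seed=1):
    rng = random.Random(seed)
    pop = [(x, evaluate(x)) for x in (rng.random() for _ in range(pop_size))]
    spent = pop_size
    with ThreadPoolExecutor(max_workers=workers) as pool:
        def spawn():
            # Binary tournament + Gaussian mutation, submitted asynchronously.
            parent = min(rng.sample(pop, 2), key=lambda p: p[1])[0]
            child = min(1.0, max(0.0, parent + rng.gauss(0, 0.1)))
            return pool.submit(lambda c=child: (c, evaluate(c)))
        pending = {spawn() for _ in range(workers)}
        while spent < budget:
            # No generational barrier: process whichever evaluation finishes
            # first, so fast evaluations never wait for slow ones.
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                child = fut.result()
                worst = max(range(pop_size), key=lambda i: pop[i][1])
                if child[1] < pop[worst][1]:
                    pop[worst] = child  # steady-state replacement of the worst
                spent += 1
                if spent < budget:
                    pending.add(spawn())
    return min(pop, key=lambda p: p[1])
```

The key contrast with a generational scheme is the absence of a synchronization barrier: each worker is refilled immediately, which is what keeps processors busy when evaluation costs are heterogeneous.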
203

Rozvoj inverzních úloh vedení tepla řešených s využitím optimalizačních postupů a vysokého stupně paralelizace / Development of inverse tasks solved by using the optimizing procedures and large number of parallel threads

Ondroušková, Jana Unknown Date (has links)
In metallurgy it is important to know the cooling efficiency of a product, as well as that of the working rolls, in order to maximize product quality and extend the service life of the rolls. This cooling efficiency can be characterized by heat transfer coefficients and surface temperatures. The surface temperature is hard to measure during cooling; it is better to compute it, together with the heat transfer coefficient, by solving an inverse heat conduction problem. This computation is not easy: it iterates over estimated values that are verified by solving the direct heat conduction problem, and it can take several days or weeks depending on the complexity of the model. There is therefore a strong incentive to shorten the computational time. This doctoral thesis examines one way of shortening the computing time of the inverse heat conduction problem: parallelizing the task and moving it to a graphics card, which offers greater computing power than the central processing unit (CPU). Since one computer can host several compute devices, the thesis compares computing times across different device types. The thesis further deals with obtaining surface temperatures for the computation with an infrared line scanner, and with using the inverse heat conduction problem to compute the surface temperature and heat transfer coefficient while a test sample passes under a cooling section and is cooled by high-pressure nozzles.
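The estimate-then-verify loop between the inverse and direct problems can be sketched in miniature. This is a toy 1D explicit finite-difference model with invented geometry and material values, recovering a single heat transfer coefficient by bisection; the thesis's models are far larger, which is what motivates the GPU parallelization:

```python
def direct_model(h, n_steps=200, n=20, dt=1e-3, dx=0.05, alpha=1.0,
                 t0=100.0, t_inf=20.0):
    """Direct problem: slab initially at t0, convectively cooled at one
    surface with heat transfer coefficient h. Returns the final
    temperature at an interior 'thermocouple' node."""
    T = [t0] * n
    r = alpha * dt / dx**2          # r = 0.4 <= 0.5: explicit scheme stable
    for _ in range(n_steps):
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = T[i] + r * (T[i-1] - 2*T[i] + T[i+1])
        # Cooled surface: diffusion plus convective loss to the coolant.
        Tn[0] = T[0] + r * (T[1] - T[0]) - h * dt / dx * (T[0] - t_inf)
        Tn[-1] = Tn[-2]             # insulated back face
        T = Tn
    return T[n // 2]

def inverse_htc(measured, lo=0.0, hi=20.0, iters=40):
    """Inverse problem: find h matching the measured interior temperature.
    The direct model is monotone (higher h -> colder interior), so plain
    bisection on h suffices for this toy case."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if direct_model(mid) > measured:
            lo = mid                # not enough cooling: raise h
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Each bisection step re-runs the full direct model, which is exactly the cost structure that makes real inverse tasks take days and rewards parallelizing the direct solver.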
204

Optimalizované sledování paprsku / Optimized Ray Tracing

Brich, Radek Unknown Date (has links)
The goal of this work is to write an optimized program for the visualization of 3D scenes using the ray tracing method. First, the theory of ray tracing, together with particular techniques, is presented. The next part focuses on different approaches to accelerating the algorithm: space partitioning structures, a fast ray-triangle intersection technique, and possibilities for parallelizing the whole ray tracing method. A standalone chapter addresses the design and implementation of the ray tracing program.
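The fast ray-triangle intersection mentioned above is typically a variant of the Möller-Trumbore test, sketched here in plain Python for readability (the abstract does not say which technique the thesis uses, so take this as one standard instance, not the thesis's implementation):

```python
def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test: returns the distance t along the
    ray to the hit point, or None if there is no hit. Vectors are 3-tuples."""
    def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(dirn, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(dirn, q) * inv          # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None
```

The test rejects early on each barycentric coordinate, which is why it is fast: most rays miss most triangles, and a miss usually costs only one cross product and a couple of dot products.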
205

Modélisation des plasmas magnétisés. Application à l'injection de neutres pour ITER et au magnétron en régime impulsionnel haute puissance / Modeling of magnetized plasmas. Application to neutral particle injection for ITER and to magnetron in high power pulsed regime

Revel, Adrien 05 June 2015 (has links)
A plasma is defined as a partially or completely ionized gas. Although plasmas are very common in the visible universe, natural plasmas are rare on Earth. They are nevertheless of major interest to industry and research institutes (surface treatment, space propulsion). Understanding plasma behavior is complicated, however, because of the numerous fields of physics involved. Moreover, these plasmas can be magnetized, i.e., a magnetic field, external or induced, significantly affects the particle trajectories: r/L < 1, where r is the Larmor radius and L the characteristic length of the system. This thesis focuses on plasma modeling in two devices: the accelerator of ITER's neutral beam injector (NBI) and the magnetron in DC or HiPIMS regime. Achieving nuclear fusion on Earth is the subject of intense research around the world. Because of the energy needed to overcome the Coulomb barrier, the plasma must be confined; for ITER, confinement is achieved by intense magnetic fields. However, to reach the conditions required for fusion reactions, especially in temperature, a high-energy (1 MeV) neutral beam injector is needed. The acceleration of these particles is a critical part of creating the neutral beam, and it represents the technological challenge studied in this thesis. The magnetron is an industrial process for creating thin films by cathode sputtering: ions created by a plasma discharge tear atoms out of the cathode, which are then deposited on the anode, while the magnetic field created by permanent magnets traps the electrons near the cathode, improving the efficiency of the device. The behavior of the magnetron plasma is studied in continuous and pulsed regimes, together with the appearance, under certain conditions, of self-organized structures rotating around the magnetron axis. To study these devices, several numerical simulation programs have been developed. The Particle-In-Cell method was chosen because it takes into account, self-consistently, the space charge of the particles. Several techniques (null-collision technique, Monte Carlo Collision, a posteriori Monte Carlo) and improvements (non-uniform mesh, third-order charge projection) have been developed and implemented. Moreover, an original method, Pseudo 3D, allowing a three-dimensional treatment of the magnetron, has been used with success. Finally, these programs have been parallelized to reduce the computation time.
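Of the techniques listed above, the null-collision technique is the easiest to show in isolation: candidate collision events are drawn at a constant majorant rate, and each candidate is accepted as a real collision with probability equal to the true, velocity-dependent rate divided by the majorant. The sketch below is a generic single-particle illustration (the toy post-collision velocity and the rate function are invented), not the thesis's PIC code:

```python
import random

def null_collision_times(nu_of_v, nu_max, v0, t_end, rng):
    """Null-collision sampling: draw candidate events at the constant
    majorant rate nu_max, keep each with probability nu(v)/nu_max, so that
    the kept events follow the true, velocity-dependent rate nu(v)."""
    t, v, events = 0.0, v0, []
    while True:
        t += rng.expovariate(nu_max)            # next candidate event
        if t >= t_end:
            return events
        if rng.random() < nu_of_v(v) / nu_max:  # accept: real collision
            events.append(t)
            v = rng.gauss(0.0, 1.0)             # toy post-collision velocity
```

The benefit is that sampling from a constant-rate exponential is cheap and avoids integrating nu(v) along the trajectory; the cost is the wasted "null" candidates, so nu_max should be a tight upper bound on nu(v).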
206

PCAISO-GT: uma metaheurística co-evolutiva paralela de otimização aplicada ao problema de alocação de berços

Oliveira, Carlos Eduardo de Jesus Guimarães 24 March 2013 (has links)
This work presents an optimization algorithm based on the Artificial Immune Systems metaheuristic and on principles of game theory, co-evolution, and parallelization. The objective is to find the appropriate combination of game theory, co-evolution, and parallelization concepts applied to the AISO algorithm (Artificial Immune System Optimization) for solving the Berth Allocation Problem (BAP). The algorithm is formalized from these techniques, forming PCAISO-GT: Parallel Coevolutionary Artificial Immune System Optimization with Game Theory. Initially, experiments were performed to tune the parameters used in the different versions of the developed tool. Based on the best settings identified, evaluation experiments were carried out by solving a set of BAP instances. The results indicate that the co-evolutionary version combined with game theory is the best for solving the problem under study.
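Artificial-immune-system optimizers of the AISO family are usually built on clonal selection: good candidate solutions ("antibodies") are cloned and hypermutated, with mutation strength decreasing for better-ranked candidates. The sketch below is a generic clonal-selection loop on a continuous toy problem (all parameter values are invented), not the PCAISO-GT algorithm, which adds game theory, co-evolution, and parallelization on top of this base:

```python
import random

def clonal_selection(fitness, dim=2, pop=10, clones=5, gens=60, seed=3):
    """Generic clonal selection: clone the better antibodies, hypermutate
    the clones with a step proportional to rank (better rank -> smaller
    step), keep the best. Minimizes `fitness`."""
    rng = random.Random(seed)
    ab = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        ab.sort(key=fitness)
        new = ab[:2]                          # elitism: keep the two best
        for rank, a in enumerate(ab[:pop // 2]):
            step = 0.3 * (rank + 1) / pop     # hypermutation schedule
            for _ in range(clones):
                new.append([x + rng.gauss(0, step) for x in a])
        new.sort(key=fitness)
        ab = new[:pop]                        # selection back to pop size
    return ab[0]
```

Because the elites are carried over unchanged, the best fitness is non-increasing across generations, which is the property coevolutionary and parallel variants must preserve when subpopulations exchange individuals.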
207

Development, implementation and theoretical analysis of the bee colony optimization meta-heuristic method / Развој, имплементација и теоријска анализа метахеуристичке методеоптимизације колонијом пчела / Razvoj, implementacija i teorijska analiza metaheurističke metodeoptimizacije kolonijom pčela

Jakšić Krüger Tatjana 27 June 2017 (has links)
The Ph.D. thesis presents a comprehensive study of the bee colony optimization meta-heuristic method (BCO). A theoretical analysis of the method is conducted with the tools of probability theory, and necessary and sufficient conditions are established under which the BCO method converges towards an optimal solution. Three parallelization strategies and five corresponding implementations of BCO are proposed for distributed-memory systems. The influence of the method's parameters on the performance of the BCO algorithm is analyzed through an experimental study on two combinatorial optimization problems: a scheduling problem and the satisfiability problem.
208

MPEG Z/Alpha and high-resolution MPEG / MPEG Z/Alpha och högupplösande MPEG-video

Ziegler, Gernot January 2003 (has links)
The progression of technical development has yielded practicable camera systems for the acquisition of so-called depth maps, images with depth information. Images and movies with depth information open the door for new types of applications in computer graphics and vision, which implies that they will need to be processed in ever-increasing volumes. Increased depth-image processing creates demand for a standardized data format for the exchange of image data with depth information, both still and animated, and software to convert acquired depth data to such video formats is highly necessary. This diploma thesis sheds light on many of the issues that come with this new task group, spanning from data acquisition over readily available software for data encoding to possible future applications. Further, a software architecture fulfilling all of the mentioned demands is presented. The encoder comprises a collection of UNIX programs that generate MPEG Z/Alpha, an MPEG2-based video format. Besides MPEG2's standard data streams, MPEG Z/Alpha contains one extra data stream to store image depth information (and transparency). The decoder suite, called TexMPEG, is a C library for the in-memory decompression of MPEG Z/Alpha. Much effort has been put into video decoder parallelization, and TexMPEG is now capable of decoding multiple video streams, not only internally in parallel, but also with inherent frame synchronization between MPEG videos decoded in parallel.
209

Development of a Parallel Computing Optimized Head Movement Correction Method in Positron Emission Tomography

Langner, Jens 06 August 2009 (has links) (PDF)
As a modern tomographic technique, Positron Emission Tomography (PET) enables non-invasive imaging of metabolic processes in living organisms. It allows the visualization of malfunctions characteristic of neurological, cardiological, and oncological diseases. Chemical tracers labeled with radioactive positron-emitting isotopes are injected into the patient, the decay of the isotopes is observed with the detectors of the tomograph, and this information is used to compute the spatial distribution of the labeled tracers. Since the spatial resolution of PET devices increases steadily, the sensitive imaging process requires minimizing not only the disturbing effects specific to the PET measurement method, such as random or scattered coincidences, but also external effects like body movement of the patient. Methods to correct the influence of such patient movement, based on the spatial correction of each registered coincidence, were developed in previous studies at the PET center Rossendorf; however, the large amount of data and the complexity of the correction algorithms limited their application to selected studies. The aim of this thesis is to optimize the correction algorithms so that movement correction becomes feasible in routinely performed PET examinations. The object-oriented development in C++, with support of the platform-independent Qt framework, enables the use of multiprocessor systems, and a graphical user interface allows the application to be operated by the medical-technical assistants of the PET center. Furthermore, the application provides methods to acquire and administrate movement information directly from the motion tracking system via network communication. Due to the parallelization, the performance of the new implementation shows a significant improvement. The parallel optimizations and the implementation of an intuitively usable graphical interface finally enable the PET center Rossendorf to use movement correction in routine patient investigations, thus providing patients with improved tomographic imaging.
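Per-coincidence spatial correction parallelizes well because each registered event is corrected independently using the head pose recorded at its timestamp. The 2D sketch below illustrates that structure (the event format, the `pose_at` lookup, and the rigid transform are simplified inventions; the thesis works with full listmode data and 3D transforms in C++/Qt):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def correct_event(event, pose_at):
    """Undo the head motion for one coincidence: subtract the translation
    and rotate back by the angle recorded at the event's timestamp."""
    t, x, y = event
    angle, dx, dy = pose_at(t)
    c, s = math.cos(-angle), math.sin(-angle)
    x, y = x - dx, y - dy                      # undo translation
    return (t, c * x - s * y, s * x + c * y)   # undo rotation

def correct_listmode(events, pose_at, workers=4):
    """Embarrassingly parallel: each coincidence is corrected independently,
    so the event list can be split across workers with no synchronization."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda e: correct_event(e, pose_at), events))
```

Because there is no shared mutable state, the speedup is limited mainly by memory bandwidth and the pose lookup, which matches the observation that parallelization makes routine use feasible.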
210

Automatic Hardening against Dependability and Security Software Bugs / Automatisches Härten gegen Zuverlässigkeits- und Sicherheitssoftwarefehler

Süßkraut, Martin 15 June 2010 (has links) (PDF)
It is a fact that software has bugs, and these bugs can lead to failures. Dependability and security failures in particular are a great threat to software users. This thesis introduces four novel approaches that can be used to automatically harden software at the user's site. Automatic hardening removes bugs from already deployed software. All four approaches are automated, i.e., they require little support from the end user; however, two of them need some support from the software developer. The presented approaches can be grouped into error toleration and bug removal. The two error-toleration approaches focus primarily on the fast detection of security errors; once an error is detected, it can be tolerated with well-known existing approaches. The other two approaches remove dependability bugs from already deployed software. All approaches were tested with existing benchmarks and applications, such as the Apache web server.
