61

Parallélisation de simulations interactives de champs ultrasonores pour le contrôle non destructif / Parallelization of ultrasonic field simulations for non destructive testing

Lambert, Jason 03 July 2015 (has links)
The Non-Destructive Testing field increasingly relies on simulation. It is used at every step of the inspection process of an industrial part, from speeding up the design of inspections to helping experts understand results. This thesis presents a simulation tool dedicated to the fast computation of the ultrasonic field radiated by a phased-array probe in an isotropic specimen, with performance that enables interactive use. To benefit from commonly available parallel architectures, a regular model (designed to eliminate divergent branching) derived from the generic CIVA model was developed. A first reference implementation validated this model against CIVA results and characterized its performance behaviour before optimization. The code was then ported to and optimized for three kinds of parallel architectures commonly found in workstations: general-purpose processors (GPP), manycore coprocessors (Intel MIC) and graphics processing units (nVidia GPU). On the GPP and the MIC, the algorithm was reorganized and implemented to exploit both available levels of parallelism, multithreading and vector instructions. On the GPU, the successive steps of the field computation were split into a series of CUDA kernels. In addition, libraries dedicated to each architecture, Intel MKL on the GPP and MIC and nVidia cuFFT on the GPU, were used for the Fast Fourier Transform operations.
The performance of the resulting codes and how well they fit each architecture were studied in detail. On several realistic inspection configurations, interactive performance was reached. Perspectives for addressing more complex configurations are outlined. Finally, the integration and industrialization of this kind of code in the commercial NDT platform CIVA are discussed.
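The abstract describes the field computation only in prose. The following minimal CUDA sketch illustrates the kind of data-parallel kernel it implies: one thread computes the field amplitude at one point by summing delayed spherical-wave contributions from all probe elements. All names (field_kernel, delays, and so on), the monochromatic single-path model, the flat linear array and the constant sound speed are illustrative assumptions, not the CIVA model or the thesis code.

    #include <cstdio>
    #include <cmath>
    #include <cuda_runtime.h>

    // One thread per field point: delay-and-sum of the contributions of every
    // array element, assuming a homogeneous isotropic medium (constant speed c)
    // and a monochromatic excitation of frequency f.
    __global__ void field_kernel(const float3 *pts, int n_pts,
                                 const float3 *elems, const float *delays,
                                 int n_elems, float c, float f, float *amp)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n_pts) return;
        float re = 0.f, im = 0.f;
        for (int e = 0; e < n_elems; ++e) {
            float dx = pts[i].x - elems[e].x;
            float dy = pts[i].y - elems[e].y;
            float dz = pts[i].z - elems[e].z;
            float r  = sqrtf(dx * dx + dy * dy + dz * dz) + 1e-6f;
            float ph = 2.f * 3.14159265f * f * (r / c + delays[e]);
            re += cosf(ph) / r;          // spherical spreading ~ 1/r
            im += sinf(ph) / r;
        }
        amp[i] = sqrtf(re * re + im * im);
    }

    int main()
    {
        const int n_pts = 256, n_elems = 16;
        float3 *pts, *elems; float *delays, *amp;
        cudaMallocManaged(&pts,    n_pts   * sizeof(float3));
        cudaMallocManaged(&elems,  n_elems * sizeof(float3));
        cudaMallocManaged(&delays, n_elems * sizeof(float));
        cudaMallocManaged(&amp,    n_pts   * sizeof(float));
        for (int e = 0; e < n_elems; ++e) {            // linear array on the surface
            elems[e] = make_float3(e * 0.6e-3f, 0.f, 0.f);
            delays[e] = 0.f;                           // no steering/focusing law
        }
        for (int i = 0; i < n_pts; ++i)                // points along a vertical line
            pts[i] = make_float3(4.5e-3f, 0.f, (i + 1) * 0.1e-3f);
        field_kernel<<<(n_pts + 127) / 128, 128>>>(pts, n_pts, elems, delays,
                                                   n_elems, 5900.f, 5e6f, amp);
        cudaDeviceSynchronize();
        printf("amplitude at deepest point: %g\n", amp[n_pts - 1]);
        return 0;
    }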
62

Modeling and Analysis of Large-Scale On-Chip Interconnects

Feng, Zhuo 2009 December 1900 (has links)
As IC technologies scale to the nanometer regime, efficient and accurate modeling and analysis of VLSI systems with billions of transistors and interconnects become increasingly critical and difficult. VLSI systems affected by increasingly high-dimensional process-voltage-temperature (PVT) variations demand far more modeling and analysis effort than ever before, while the analysis of large-scale on-chip interconnects, which requires solving tens of millions of unknowns, poses great challenges for computer-aided design. This dissertation presents new methodologies for addressing these two important challenges in large-scale on-chip interconnect modeling and analysis. In the past, standard statistical circuit modeling techniques usually employed principal component analysis (PCA) and its variants to reduce parameter dimensionality. Although widely adopted, these techniques can be very limited, since parameter dimension reduction is achieved by considering only the statistical distributions of the controlling parameters while neglecting the important correspondence between these parameters and the circuit performances (responses) being modeled. This dissertation presents a variety of performance-oriented parameter dimension reduction methods that can lead to more than an order of magnitude of parameter reduction for a variety of VLSI circuit modeling and analysis problems. The sheer size of present-day power/ground distribution networks makes their analysis and verification extremely runtime- and memory-inefficient and, at the same time, limits the extent to which these networks can be optimized. Given today's commodity graphics processing units (GPUs), which deliver more than 500 GFLOPS (billions of floating-point operations per second) of computing power and 100 GB/s of memory bandwidth, more than 10X what modern general-purpose quad-core microprocessors offer, it is very desirable to turn this GPU computing power into usable design automation tools for VLSI verification. In this dissertation, for the first time, we show how to exploit recent massively parallel single-instruction multiple-thread (SIMT) graphics processing unit (GPU) platforms to tackle power grid analysis with very promising performance. Our GPU-based network analyzer is capable of solving power grids with tens of millions of nodes in just a few seconds. Additionally, with the above GPU-based simulation framework, the more challenging three-dimensional full-chip thermal analysis can be solved far more efficiently than ever before.
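As a rough illustration of the SIMT-style power-grid analysis mentioned above, the CUDA sketch below performs Jacobi sweeps on a node-voltage system G*v = i stored in CSR form, one thread per grid node. The two-node toy circuit, the plain Jacobi iteration and all identifiers are assumptions made purely for illustration; the dissertation's actual GPU solver is not reproduced here and may use a very different numerical method.

    #include <cstdio>
    #include <cstring>
    #include <cuda_runtime.h>

    // One Jacobi sweep on G * v = i (conductance matrix G stored in CSR form).
    // Each thread updates the voltage of one power-grid node from its neighbours.
    __global__ void jacobi_sweep(int n, const int *row_ptr, const int *col,
                                 const float *val, const float *i_inj,
                                 const float *v_old, float *v_new)
    {
        int k = blockIdx.x * blockDim.x + threadIdx.x;
        if (k >= n) return;
        float acc = i_inj[k], diag = 1.f;
        for (int p = row_ptr[k]; p < row_ptr[k + 1]; ++p) {
            if (col[p] == k) diag = val[p];            // the node's own conductance
            else             acc -= val[p] * v_old[col[p]];
        }
        v_new[k] = acc / diag;
    }

    int main()
    {
        // Toy 2-node grid: G = [[2,-1],[-1,2]], injected currents i = [1, 0].
        const int n = 2;
        int   h_rp[]  = {0, 2, 4}, h_col[] = {0, 1, 0, 1};
        float h_val[] = {2.f, -1.f, -1.f, 2.f}, h_i[] = {1.f, 0.f};
        int *rp, *col; float *val, *i_inj, *va, *vb;
        cudaMallocManaged(&rp, sizeof(h_rp));    cudaMallocManaged(&col, sizeof(h_col));
        cudaMallocManaged(&val, sizeof(h_val));  cudaMallocManaged(&i_inj, sizeof(h_i));
        cudaMallocManaged(&va, n * sizeof(float)); cudaMallocManaged(&vb, n * sizeof(float));
        memcpy(rp, h_rp, sizeof(h_rp));   memcpy(col, h_col, sizeof(h_col));
        memcpy(val, h_val, sizeof(h_val)); memcpy(i_inj, h_i, sizeof(h_i));
        va[0] = va[1] = 0.f;
        for (int it = 0; it < 50; ++it) {              // iterate and swap buffers
            jacobi_sweep<<<1, 32>>>(n, rp, col, val, i_inj, va, vb);
            cudaDeviceSynchronize();
            float *t = va; va = vb; vb = t;
        }
        printf("node voltages: %f %f (exact: 2/3, 1/3)\n", va[0], va[1]);
        return 0;
    }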
63

General-purpose optimization through information maximization

Lockett, Alan Justin 05 July 2012 (has links)
The primary goal of artificial intelligence research is to develop a machine capable of learning to solve disparate real-world tasks autonomously, without relying on specialized problem-specific inputs. This dissertation argues that such machines are realistic: if the No Free Lunch theorems applied to all real-world problems, then the world would be utterly unpredictable. In response, the dissertation proposes the information-maximization principle, which holds that optimal optimization methods are those that make the best use of the information available to them. This principle leads to a new algorithm, evolutionary annealing, which is shown to perform especially well on challenging problems with irregular structure.
64

Σχεδιασμός και ανάπτυξη λογισμικού ΕΛ/ΛΑΚ (open source) για διαχείριση οποιασδήποτε ενσωματωμένης (embedded) και μη συσκευής / Extending and customizing OpenRSM for wireless embedded devices and LINUX

Κουμούτσος, Κωνσταντίνος 25 May 2011 (has links)
An embedded system is a special-purpose computer system designed to perform one or a few dedicated functions, often with real-time computing constraints. It is usually embedded as part of a complete device including hardware and mechanical parts. In contrast, a general-purpose computer, such as a personal computer, can perform many different tasks depending on its programming. Embedded systems control many of the devices in common use today. Managing an infrastructure that mixes such devices (embedded and general-purpose computers) is usually demanding and expensive, but nevertheless essential for organizations, and there is a shortage of tools that can manage all of these systems effectively with a single product. At present, open management solutions are few and immature; however, tools such as OpenRSM aim to deliver lightweight, remote management that is easily customizable to cover the needs of small organizations. OpenRSM implements a generic management framework that models generalized use cases which users can exploit to adapt the tool to their needs. However, given the maturity of the tool, it is unclear how easy it would be for users to extend it to manage new types of devices. As network environments grow into digital ecosystems, the management targets increase in number and diversity.
Wireless active elements, handheld systems and embedded devices are becoming common and need to be brought under standard management practices in the same manner as routers or workstations. This thesis describes the design and development of software, built on the open-source OpenRSM (ORSM) remote system and network management tool and able to run on both MS Windows and *NIX operating systems, for the group management of families of embedded devices together with ordinary general-purpose computers. The embedded devices targeted are wireless and wired network devices operating in many roles (Access Points, Clients, Repeaters, Point-to-Point links, WDS, Transparent Clients, Routers), and in particular wireless access points (an asterisk * marks the new extensions relative to ORSM). In summary, the management capabilities embedded in the OpenRSM system and targeting wireless active elements cover: asset discovery (inventory process), performance monitoring, software deployment, firmware upgrade, saving and reloading configuration settings, remote desktop, remote shell commands, and the discovery process.
65

Variation de l'épigénome, du phénotype et des stratégies écologiques pour la persistance d'un vertébré asexué / Variation of the epigenome, phenotype and ecological strategies for the persistence of an asexual vertebrate

Leung, Christelle 11 1900 (has links)
The capacity to cope with environmental changes is crucial to the establishment and persistence of populations. The processes underlying such a capacity can take different forms. The high genetic (and phenotypic) diversity arising from sexual reproduction is one process that allows a population to cope with different environmental conditions through the survival of at least a few individuals. On the other hand, epigenetic processes, which modify gene expression, are also responsible for phenotypic variation. Epigenetic changes may occur accidentally (epimutations) or in response to an environmental stimulus (plasticity). Epigenetics would thus represent an alternative to genetic variation for explaining the ecological success of genetically identical organisms. However, genotypes are not equally capable of coping with environmental changes: some genotypes are generalists and can acclimatize to a wide range of environmental conditions, whereas others are specialists, restricted to narrow environmental conditions. The general objective of this thesis is to highlight the processes responsible for the persistence of asexual organisms by determining the relationships between epigenetic processes, phenotypic variation and ecological strategies. To achieve this, the clonal hybrid fish Chrosomus eos-neogaeus was used as a biological model. Because they reproduce by gynogenesis, the clonal lineages (genotypes) face the same environmental variation as the sexual parental species. Moreover, the geographical distribution of the lineages suggests the presence of generalist and specialist genotypes. The analysis of these lineages under natural and controlled conditions made it possible to disentangle the influences of genetic, environmental and epigenetic variation on phenotypic variation. First, the generalist and specialist strategy models were tested empirically: (i) genetic analyses were used to infer the historical processes explaining the diversity and current distribution of the clonal lineages; (ii) the comparison of methylation patterns in different lineages made it possible to determine the dynamics of the epigenetic processes under environmental fluctuations and to associate them with an ecological strategy, a strong environmental or stochastic effect on epigenetic variation being associated with phenotypic plasticity or bet-hedging, respectively; and (iii) the role of phenotypic plasticity in niche diversification was highlighted by comparing individuals' trophic morphology under natural and controlled conditions. 
Second, this thesis associated a cost (detection of developmental instability) with phenotypic plasticity and proposed a hypothesis about the demographic handicap needed to explain the counter-intuitive coexistence of asexual sperm-dependent organisms and their sexual hosts. Understanding the different mechanisms underlying the ecological success of organisms in the face of environmental heterogeneity allows us to better establish their evolutionary potential and provides additional tools to protect them against changing environments.
66

Evoluční návrh kolektivních komunikací akcelerovaný pomocí GPU / Evolutionary Design of Collective Communications Accelerated by GPUs

Tyrala, Radek January 2012 (has links)
This thesis analyses an application for evolutionary scheduling of collective communications and proposes ways to accelerate it using general-purpose computing on graphics processing units (GPUs). The work gives a theoretical overview of systems on a chip and collective communication scheduling, together with a more detailed description of evolutionary algorithms. It then describes the GPU architecture and its memory hierarchy in terms of the OpenCL memory model. Based on profiling, the work defines a concept for parallel execution of the fitness function and presents an estimate of the achievable level of acceleration. The implementation process is described, with a closer look at the optimization steps. Another important part is the comparison of the original CPU-based solution with the massively parallel GPU version. Finally, the thesis proposes distributing the computation among the different devices supported by the OpenCL standard. The conclusion discusses further advantages, constraints and possibilities for acceleration by distributing the work across heterogeneous computing systems.
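The thesis itself targets OpenCL; purely to illustrate the one-thread-per-individual fitness evaluation idea described above, the following CUDA sketch scores a population of candidate communication schedules by counting port conflicts, that is, a node that sends or receives more than once in the same time slot. The schedule encoding, the conflict metric and all identifiers are assumptions for illustration, not the fitness function actually used in the thesis.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Each thread scores one candidate schedule. A schedule has `steps` time
    // slots of `width` point-to-point transfers, each encoded as src*nodes+dst
    // (or -1 for an empty slot). The score counts port conflicts inside a slot.
    __global__ void score_schedules(const int *sched, int pop, int steps,
                                    int width, int nodes, int *conflicts)
    {
        int ind = blockIdx.x * blockDim.x + threadIdx.x;
        if (ind >= pop) return;
        const int *s = sched + ind * steps * width;
        int bad = 0;
        for (int t = 0; t < steps; ++t)
            for (int a = 0; a < width; ++a) {
                int ta = s[t * width + a];
                if (ta < 0) continue;
                for (int b = a + 1; b < width; ++b) {
                    int tb = s[t * width + b];
                    if (tb < 0) continue;
                    if (ta / nodes == tb / nodes || ta % nodes == tb % nodes)
                        ++bad;                 // same sender or same receiver
                }
            }
        conflicts[ind] = bad;                  // 0 means a conflict-free schedule
    }

    int main()
    {
        const int pop = 64, steps = 4, width = 4, nodes = 8;
        int *sched, *conf;
        cudaMallocManaged(&sched, pop * steps * width * sizeof(int));
        cudaMallocManaged(&conf,  pop * sizeof(int));
        for (int i = 0; i < pop * steps * width; ++i)      // random initial population
            sched[i] = rand() % (nodes * nodes);
        score_schedules<<<(pop + 63) / 64, 64>>>(sched, pop, steps, width, nodes, conf);
        cudaDeviceSynchronize();
        int best = conf[0];
        for (int i = 1; i < pop; ++i) if (conf[i] < best) best = conf[i];
        printf("fewest conflicts in the population: %d\n", best);
        return 0;
    }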
67

Detekce pohyblivého objektu ve videu na CUDA / Moving Object Detection in Video Using CUDA

Čermák, Michal January 2011 (has links)
This thesis deals with a model-based approach to 3D tracking from monocular video. The pose of the 3D model is estimated dynamically by minimizing an objective function with a particle filter. The objective function is based on the similarity between the rendered scene and the real video.
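As a sketch of how such an objective can be evaluated in parallel, the CUDA kernel below converts each particle's rendered view into an importance weight by comparing it with the observed frame, one thread per particle. The Gaussian likelihood, the pre-rendered image buffer and all names are assumptions; rendering the 3D model for each particle, which is the expensive part, is not shown.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // One thread per particle: compare that particle's rendered view with the
    // observed frame (sum of squared pixel differences) and turn the result
    // into an unnormalised Gaussian importance weight.
    __global__ void particle_weights(const float *rendered,  // pop rows of npix pixels
                                     const float *observed,  // npix pixels
                                     int pop, int npix, float sigma, float *w)
    {
        int p = blockIdx.x * blockDim.x + threadIdx.x;
        if (p >= pop) return;
        const float *img = rendered + (size_t)p * npix;
        float ssd = 0.f;
        for (int i = 0; i < npix; ++i) {
            float d = img[i] - observed[i];
            ssd += d * d;
        }
        w[p] = expf(-ssd / (2.f * sigma * sigma));
    }

    int main()
    {
        const int pop = 128, npix = 64 * 64;
        float *rendered, *observed, *w;
        cudaMallocManaged(&rendered, (size_t)pop * npix * sizeof(float));
        cudaMallocManaged(&observed, npix * sizeof(float));
        cudaMallocManaged(&w, pop * sizeof(float));
        for (int i = 0; i < npix; ++i) observed[i] = (float)rand() / RAND_MAX;
        for (int i = 0; i < pop * npix; ++i) rendered[i] = (float)rand() / RAND_MAX;
        particle_weights<<<(pop + 127) / 128, 128>>>(rendered, observed,
                                                     pop, npix, 8.f, w);
        cudaDeviceSynchronize();
        printf("weight of particle 0: %g\n", w[0]);
        return 0;
    }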
68

Towards Data Wrangling Automation through Dynamically-Selected Background Knowledge

Contreras Ochando, Lidia 04 February 2021 (has links)
Data science is essential for the extraction of value from data. However, the most tedious part of the process, data wrangling, implies a range of mostly manual formatting, identification and cleansing manipulations. Data wrangling still resists automation partly because the problem strongly depends on domain information, which becomes a bottleneck for state-of-the-art systems as the diversity of domains, formats and structures of the data increases. In this thesis we focus on generating algorithms that take advantage of domain knowledge for the automation of parts of the data wrangling process. We illustrate the way in which general program induction techniques, instead of domain-specific languages, can be applied flexibly to problems where knowledge is important, through the dynamic use of domain-specific knowledge. More generally, we argue that a combination of knowledge-based and dynamic learning approaches leads to successful solutions. We propose several strategies to automatically select or construct the appropriate background knowledge for several data wrangling scenarios. The key idea is based on choosing the best specialised background primitives according to the context of the particular problem to solve. We address two scenarios. In the first one, we handle personal data (names, dates, telephone numbers, etc.) that are presented in very different string formats and have to be transformed into a unified format. 
The problem is how to build a compositional transformation from a large set of primitives in the domain (e.g., handling months, years, days of the week, etc.). We develop a system (BK-ADAPT) that guides the search through the background knowledge by extracting several meta-features from the examples characterising the column domain. In the second scenario, we face the transformation of data matrices in generic programming languages such as R, using an input matrix and some cells of the output matrix as examples. We also develop a system guided by a tree-based search (AUTOMAT[R]IX) that uses several constraints, prior primitive probabilities and textual hints to efficiently learn the transformations. With these systems, we show that the combination of inductive programming with the dynamic selection of the appropriate primitives from the background knowledge is able to improve the results of other state-of-the-art and more specific data wrangling approaches. / This research was supported by the Spanish MECD Grant FPU15/03219; and partially by the Spanish MINECO TIN2015-69175-C4-1-R (Lobass) and RTI2018-094403-B-C32-AR (FreeTech) in Spain; and by the ERC Advanced Grant Synthesising Inductive Data Models (Synth) in Belgium. / Contreras Ochando, L. (2020). Towards Data Wrangling Automation through Dynamically-Selected Background Knowledge [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/160724
69

Akcelerace genetického algoritmu s využitím GPU / The GPU-Based Acceleration of the Genetic Algorithm

Pospíchal, Petr January 2009 (has links)
This master's thesis focuses on the acceleration of genetic algorithms using the GPU. The first chapter analyses genetic algorithms in depth and covers the corresponding concepts such as population, chromosome, crossover, mutation and selection. The next part of the thesis presents the GPU's abilities for general-purpose computing, using both DirectX/OpenGL with Cg and specialized GPGPU libraries such as CUDA. The fourth chapter focuses on the design of the GPU implementation in CUDA; coarse-grained and fine-grained GAs are discussed, complemented by sorting and random-number generation tasks accelerated on the GPU. The next chapter covers implementation details: migration, crossover and selection schemes mapped onto the CUDA software model. All GA elements and the quality of the GPU results are described in the last chapter.
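The GA elements mapped onto CUDA are only named above; the sketch below shows, under stated assumptions, the two simplest of them: one thread evaluates the fitness of one individual (a OneMax bit-count objective is assumed purely for illustration) and one thread mutates one gene with a small hash-based random number generator. The population layout, the toy objective and all identifiers are illustrative and are not taken from the thesis implementation; selection, crossover and migration are omitted.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // One thread per individual: OneMax fitness (number of 1-bits in the genome).
    __global__ void fitness(const unsigned char *pop, int n_ind, int n_genes,
                            int *fit)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n_ind) return;
        int sum = 0;
        for (int g = 0; g < n_genes; ++g) sum += pop[i * n_genes + g];
        fit[i] = sum;
    }

    // One thread per gene: flip the bit with probability p_mut, using a small
    // hash plus xorshift step as the per-thread random number generator.
    __global__ void mutate(unsigned char *pop, int total_genes, float p_mut,
                           unsigned seed)
    {
        int g = blockIdx.x * blockDim.x + threadIdx.x;
        if (g >= total_genes) return;
        unsigned x = seed ^ (g * 2654435761u);
        x ^= x << 13; x ^= x >> 17; x ^= x << 5;
        if ((x & 0xffffff) / 16777216.0f < p_mut) pop[g] ^= 1;
    }

    int main()
    {
        const int n_ind = 256, n_genes = 64, total = n_ind * n_genes;
        unsigned char *pop; int *fit;
        cudaMallocManaged(&pop, total);
        cudaMallocManaged(&fit, n_ind * sizeof(int));
        for (int i = 0; i < total; ++i) pop[i] = rand() & 1;
        for (int gen = 0; gen < 10; ++gen) {              // selection/crossover omitted
            mutate<<<(total + 255) / 256, 256>>>(pop, total, 0.01f, 1234u + gen);
            fitness<<<(n_ind + 255) / 256, 256>>>(pop, n_ind, n_genes, fit);
            cudaDeviceSynchronize();
        }
        int best = 0;
        for (int i = 0; i < n_ind; ++i) if (fit[i] > best) best = fit[i];
        printf("best OneMax fitness after 10 generations: %d/%d\n", best, n_genes);
        return 0;
    }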
70

Metode i postupci ubrzavanja operacija i upita u velikim sistemima baza i skladišta podataka (Big Data sistemi) / The methods and procedures for accelerating operations and queries in large database systems and data warehouses (Big Data Systems)

Ivković Jovan 29 September 2016 (has links)
The research topic of this doctoral thesis is the possibility of establishing a model for a Big Data system, with a corresponding software and hardware architecture, to support sensor networks and IoT devices. The developed model is based on energy-efficient, heterogeneous, massively parallelized SoC hardware platforms, supported by a software application architecture (such as OpenCL) for unified operation. In addition to the current hardware, software and network computing technologies and architectures intended to run the subcomponents of the modeled system, the thesis presents a historical overview of their development, which emphasizes the cyclic movement of computing paradigms through successive eras of centralization and decentralization. The thesis presents technologies and methods for accelerating operations in databases and data warehouses, and investigates how to better prepare Big Data information systems to meet the needs of the newly announced revolution in the general application of computing, namely ubiquitous computing and the Internet of Things (IoT).
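The thesis builds its model around OpenCL on heterogeneous SoC platforms; purely to illustrate the idea of pushing a database operation down to a massively parallel device, the CUDA sketch below evaluates a simple filter-and-aggregate query (roughly SELECT SUM(value) WHERE key = target) with a grid-stride loop and one atomic accumulation per thread. The columnar layout, the query and all names are assumptions, not code from the thesis.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Data-parallel filter + aggregate over a columnar table:
    // roughly SELECT SUM(value) FROM t WHERE key = target.
    __global__ void filtered_sum(const int *key, const float *value, int n,
                                 int target, float *result)
    {
        float local = 0.f;
        for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
             i += gridDim.x * blockDim.x)
            if (key[i] == target)
                local += value[i];
        atomicAdd(result, local);   // one atomic per thread keeps contention low
    }

    int main()
    {
        const int n = 1 << 20;                       // about one million rows
        int *key; float *value, *result;
        cudaMallocManaged(&key, n * sizeof(int));
        cudaMallocManaged(&value, n * sizeof(float));
        cudaMallocManaged(&result, sizeof(float));
        for (int i = 0; i < n; ++i) { key[i] = i % 16; value[i] = 1.0f; }
        *result = 0.f;
        filtered_sum<<<256, 256>>>(key, value, n, /*target=*/3, result);
        cudaDeviceSynchronize();
        printf("sum for key 3: %.0f (expected %d)\n", *result, n / 16);
        return 0;
    }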
