731

En prestandajämförelse mellan objektorienterad design och dataorienterad design i C++, Java och Java Project Valhalla / A performance comparison between object-oriented design and data-oriented design in C++, Java, and Java Project Valhalla

Isacsson, Andreas, Renström, Christopher January 2023
Computer games need to utilize hardware efficiently in order to be available on as many systems as possible with varying capabilities. An important part of this is to optimize the use of the CPU's cache. A programmer can follow a data-oriented design to implement cache-efficient code. This requires the programmer to have control over how data is stored in memory, which is difficult to achieve in Java. Project Valhalla may offer features to address this issue. In this thesis, object-oriented design was compared with data-oriented design in the languages C++, Java, and the ongoing development of Project Valhalla. A prototype was developed and implemented using different combinations of these languages and designs. For each implementation, performance and cache efficiency were measured. The results showed that Java with Project Valhalla can achieve performance similar to C++ in data-oriented implementations. Due to some limitations in Valhalla's features, it is unlikely that it will become a suitable alternative to low-level languages anytime soon.
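To make the design difference concrete, here is a minimal C++ sketch of the two memory layouts being compared: an object-oriented array-of-structs versus a data-oriented struct-of-arrays. It only illustrates the general technique; it is not code from the thesis prototype, and the entity fields are hypothetical.

```cpp
#include <cstddef>
#include <vector>

// Object-oriented / array-of-structs layout: updating positions drags the
// unrelated fields (e.g. health) into every cache line that gets fetched.
struct Entity {
    float x, y, z;
    float vx, vy, vz;
    int   health;
};

void update_aos(std::vector<Entity>& entities, float dt) {
    for (Entity& e : entities) {
        e.x += e.vx * dt;
        e.y += e.vy * dt;
        e.z += e.vz * dt;
    }
}

// Data-oriented / struct-of-arrays layout: each array is traversed
// contiguously, so every fetched cache line is filled with data the loop
// actually uses.
struct Entities {
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
    std::vector<int>   health;
};

void update_soa(Entities& e, float dt) {
    const std::size_t n = e.x.size();
    for (std::size_t i = 0; i < n; ++i) {
        e.x[i] += e.vx[i] * dt;
        e.y[i] += e.vy[i] * dt;
        e.z[i] += e.vz[i] * dt;
    }
}
```

Project Valhalla's value classes aim to give Java a comparable degree of control over memory layout, which is what the comparison above hinges on.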
732

Methods for Comparing Database Management Systems

Törnqvist, Jakob January 2023
This thesis was carried out in collaboration with Zenon AB, an IT company whose clients generate large amounts of data; it is therefore important for Zenon AB to make well-informed choices of database management systems (DBMSs) when designing systems for its clients. This thesis therefore presents research into the comparison of DBMSs. Nowadays, a large variety of DBMSs exists, yet there seems to be a lack of comparisons between types of DBMS, and therefore a lack of clarity about when each type should be used. This thesis aims to highlight these differences between DBMS types by creating a tailored test for each type and comparing how each type performs in the others' areas of specialization. This process shows how big the differences can be and highlights the importance of the choice of DBMS. The time and effort required to implement a DBMS also seem to be factors most developers take into consideration when choosing one, but there is little research on how to compare this aspect. Therefore, this thesis also investigates the viability of a method for comparing how easy DBMSs are to integrate into systems by querying programming help forums such as Stack Overflow. / The thesis was carried out at the Department of Science and Technology (ITN) at the Faculty of Science and Engineering, Linköping University.
733

Subtree Hashing of Tests in Build Systems : Rust Tricorder / Subträd Hashing av tester i byggsystem : Rust Tricorder

Capitanu, Calin January 2023
Software applications are built by teams of developers that constantly iterate over the codebase. Software projects rely on a build system, which handles the management of dependencies, compilation, testing, and deployment of the software. The execution of the tests during each build allows developers to validate that their changes do not introduce regressions. However, the execution of the test suite during each build can take a long time, potentially impacting the development process. To facilitate quicker feedback, build systems use incremental building in order to avoid the reprocessing of unmodified artifacts. This is achieved by maintaining a cache of source files, and only rebuilding artifacts that differ from their cached version. Yet, changing any part of a source file invalidates the cache, triggering the re-execution of unmodified tests. This focus on an entire file can be misleading to the build system, as it cannot determine whether the actual function being tested has changed, thus invoking redundant re-testing. In this thesis, we propose a finer-grained approach to caching within build systems, by caching components within the Abstract Syntax Tree instead of entire source files. We compare their hashes on subsequent runs in order to identify components that have changed. The potential advantage of this strategy is that re-running a specific test that has not been modified will leverage the use of caches even if the file that contains it has been modified. We implement our approach in a system called TRICORDER, and integrate it within a build system called WARP. TRICORDER works by analyzing RUST source code in order to identify test cases that are unaffected by a change, such as the addition of comments or modifications of unrelated functions. This can benefit developers by avoiding the re-execution of tests that are unmodified. We evaluate our approach against 4 notable, open-source RUST projects, targeting a set of 16 tests within them. We first analyze the accuracy with which TRICORDER detects the internal dependencies of a test function, which is needed for the code slicing done by TRICORDER in order to cache code items related to the target test function. We then introduce artificial changes to our study subjects in order to determine whether or not TRICORDER indicates tests that need to be re-run. Finally, we analyze the ability of TRICORDER to identify real changes based on the commit history of our study subjects. Our results show that the more granular approach to caching can avoid the unnecessary recompilation and re-execution of test cases. An important direction for future work is to extend the current implementation to support the entire set of RUST features in order to evaluate TRICORDER on a larger set of study subjects.
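As a rough illustration of the subtree-hashing idea (not TRICORDER's actual implementation, which works on RUST ASTs and also slices out the dependencies of each test), the following C++ sketch computes a structural hash of a toy AST subtree and compares it against a cached value; all types and names here are hypothetical.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Toy AST node. Comments and formatting never appear in an AST, so edits
// that touch only them cannot change a subtree hash.
struct AstNode {
    std::string kind;                 // e.g. "fn", "call", "literal"
    std::string token;                // identifier or literal text, if any
    std::vector<AstNode> children;
};

// Order-sensitive structural hash: combine the node's own fields with the
// hashes of its children.
std::uint64_t subtree_hash(const AstNode& node) {
    auto mix = [](std::uint64_t h, std::uint64_t v) {
        return h ^ (v + 0x9e3779b97f4a7c15ULL + (h << 6) + (h >> 2));
    };
    std::uint64_t h = std::hash<std::string>{}(node.kind);
    h = mix(h, std::hash<std::string>{}(node.token));
    for (const AstNode& child : node.children)
        h = mix(h, subtree_hash(child));
    return h;
}

// Re-run a test only if the hash of its subtree differs from the value
// cached during the previous build.
bool needs_rerun(const std::string& testName, const AstNode& testFn,
                 std::unordered_map<std::string, std::uint64_t>& cache) {
    const std::uint64_t h = subtree_hash(testFn);
    auto it = cache.find(testName);
    const bool changed = (it == cache.end() || it->second != h);
    cache[testName] = h;              // refresh the cache for the next build
    return changed;
}
```

Because comments and formatting never reach the AST, edits that touch only them leave the hash, and hence the cached test result, untouched.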
734

Optimizations In Storage Area Networks And Direct Attached Storage

Dharmadeep, M C 02 1900
The thesis consists of three parts. In the first part, we introduce the notion of device-cache-aware schedulers. Modern disk subsystems have many megabytes of memory for various purposes such as prefetching and caching. Current disk scheduling algorithms make decisions oblivious of the underlying device cache algorithms. In this thesis, we propose a scheduler architecture that is aware of the underlying device cache. We also describe how the underlying device cache parameters can be automatically deduced and incorporated into the scheduling algorithm. In this thesis, we have only considered adaptive caching algorithms, as modern high-end disk subsystems are by default configured to use such algorithms. We implemented a prototype for the Linux anticipatory scheduler, where we observed, compared with the anticipatory scheduler, up to 3 times improvement in query execution times with the Benchw benchmark and up to 10 percent improvement with the Postmark benchmark. The second part deals with implementing cooperative caching for the Redhat Global File System. The Redhat Global File System (GFS) is a clustered shared disk file system. The coordination between multiple accesses is through a lock manager. On a read, a lock on the inode is acquired in shared mode and the data is read from the disk. For a write, an exclusive lock on the inode is acquired and data is written to the disk; this requires all nodes holding the lock to write their dirty buffers/pages to disk and invalidate all the related buffers/pages. A DLM (Distributed Lock Manager) is a module that implements the functions of a lock manager. GFS's DLM has some support for range locks, although it is not used by GFS. While it is clear that data sourced from a memory copy is likely to have lower latency, GFS currently reads from the shared disk after acquiring a lock (just as in other designs such as IBM's GPFS) rather than from remote memory that just recently had the correct contents. The difficulties are mainly due to the circular relationships that can result between GFS and the generic DLM architecture while integrating the DLM locking framework with cooperative caching. For example, the page/buffer cache should be accessible from the DLM, and yet the DLM's generality has to be preserved. The symmetric nature of the DLM (including the SMP concurrency model) makes it even more difficult to understand and integrate cooperative caching into it (note that GPFS has an asymmetrical design). In this thesis, we describe the design of a cooperative caching scheme in GFS. To make it more effective, we have also introduced changes to the locking protocol and the DLM to handle range locks more efficiently. Experiments with micro-benchmarks on our prototype implementation reveal that reading from a remote node over gigabit Ethernet can be up to 8 times faster than reading from an enterprise-class SCSI disk for random disk reads. Our contributions are an integrated design for cooperative caching and the lock manager for GFS, devising a novel method to do interval searches, and determining when sequential reads from remote memory perform better than sequential reads from a disk. The third part deals with selecting a primary network partition in a clustered shared disk system when node/network failures occur. Clustered shared disk file systems like GFS and GPFS use methods that can fail in the case of multiple network partitions and also in the case of a two-node cluster.
In this thesis, we give an algorithm for fault-tolerant proactive leader election in asynchronous shared memory systems, and later its formal verification. Roughly speaking, a leader election algorithm is proactive if it can tolerate failure of nodes even after a leader is elected, and (stable) leader election happens periodically. This is needed in systems where a leader is required after every failure to ensure the availability of the system and there might be no explicit events such as messages in the (shared memory) system. Previous algorithms like DiskPaxos are not proactive. In our model, individual nodes can fail and reincarnate at any point in time. Each node has a counter which is incremented every period and which is the same across all nodes (modulo a maximum drift). Different nodes can be in different epochs at the same time. Our algorithm ensures that per epoch there can be at most one leader, so if the counter values of some set of nodes match, then there can be at most one leader among them. If the nodes satisfy certain timeliness constraints, then the leader for the epoch with the highest counter also becomes the leader for the next epoch (stable property). Our algorithm uses shared memory proportional to the number of processes, the best possible. We also show how our protocol can be used in clustered shared disk systems to select a primary network partition. We have used the state machine approach to represent our protocol in the Isabelle/HOL logic system and have proved the safety property of the protocol.
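The cooperative-caching read path described in the second part can be sketched as a simple preference order: the local cache, then a copy already held in another node's memory, and only then the shared disk. This is an illustration of the design idea under simplified assumptions, not the GFS/DLM prototype; the remote-fetch and disk-read stubs below are hypothetical.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

using Block = std::vector<std::uint8_t>;

// Hypothetical stand-ins for the real subsystems.
static std::unordered_map<std::uint64_t, Block> local_cache;
std::optional<Block> fetch_from_remote_cache(std::uint64_t) { return std::nullopt; } // cluster interconnect
Block read_from_shared_disk(std::uint64_t) { return Block(4096, 0); }                // shared disk

// Prefer remote memory over the shared disk: the abstract reports that a
// random read served from a remote node over gigabit Ethernet can be up to
// 8 times faster than one served from an enterprise-class SCSI disk.
Block cooperative_read(std::uint64_t blockNo) {
    if (auto it = local_cache.find(blockNo); it != local_cache.end())
        return it->second;                        // 1. local page/buffer cache
    if (auto remote = fetch_from_remote_cache(blockNo)) {
        local_cache[blockNo] = *remote;           // 2. another node's memory
        return *remote;
    }
    Block b = read_from_shared_disk(blockNo);     // 3. fall back to the disk
    local_cache[blockNo] = b;
    return b;
}
```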
735

Αρχιτεκτονικές επεξεργαστών και μνημών ειδικού σκοπού για την υποστήριξη φερέγγυων (ασφαλών) δικτυακών υπηρεσιών / Processor and memory architectures for trusted computing platforms

Κεραμίδας, Γεώργιος 27 October 2008
Data security concerns have recently become very important, and it can be expected that security will join performance, power, and cost as a key distinguishing factor in computer systems. Trusted platforms have been proposed as a promising approach to enhance the security of modern computer systems and prevent unauthorized access to and modification of the sensitive information stored in them. Unfortunately, previous approaches only provide a level of security against software-based attacks and leave the system wide open to hardware attacks. This dissertation proposes six design methodologies to shield a uniprocessor or multiprocessor system against a number of Denial of Service (DoS) attacks at the architectural and operating-system level. Specific focus is given to the memory subsystem (i.e., cache memories). Cache memories account for a large portion of the silicon area, they are greedy power consumers, and they largely determine system performance due to the ever-growing gap between processor speed and main-memory access latency. As a result, in this thesis we propose methodologies to optimize the functionality and lower the power consumption of cache memories. The goal in all cases is to increase the performance of the system and the achieved packet throughput, and to enhance protection against various passive and Denial of Service attacks.
736

Distribution et Stockage de Contenus dans les Réseaux / Content Distribution and Storage in Networks

Modrzejewski, Remigiusz 24 October 2013
In this thesis, we study various problems whose objective is to manage the growth of the Internet more efficiently. This growth is indeed rapid: 41% at its peak in 2012. To meet the challenges this evolution poses to the network's various actors, smarter management and communication protocols are needed. The Internet's protocols were designed as point-to-point protocols. Yet media broadcasting accounts for a dominant and sharply rising share of traffic, and projections indicate that by 2016, 80-90% of traffic will be generated by video streaming. This divergence leads to inefficiencies, since multiple copies of a message traverse a single link. In this thesis, we study how to remedy this inefficiency. Our contributions are organized by network layer and by deployment phase. We study cache placement at network design time. Then, for network management, we examine when to put devices to sleep, using a caching mechanism in cooperation with content delivery networks. Next, at the application layer, we study a problem of maintaining balanced trees for media streaming. Finally, we analyze the probability of data survival in a distributed backup system. Our work relies on theoretical methods (Markov chains, linear programming) as well as on empirical tools such as simulation and experimentation.
737

Sur des modèles pour l'évaluation de performance des caches dans un réseau cœur et de la consommation d'énergie dans un réseau d'accès sans-fil / On models for evaluating cache performance in a core network and energy consumption in a wireless access network

Choungmo Fofack, Nicaise Éric 21 February 2014
The Internet is a true ecosystem. It grows, evolves, and adapts to users' needs in terms of communication, connectivity, and ubiquity. In the last decade, communication models have shifted from machine-to-machine interactions to a machine-to-content model. Meanwhile, various wireless and networking technologies (such as smartphones and 3G/4G networks, online media streaming, social networks, and content-oriented networks) have emerged to improve information distribution. This development has highlighted problems of scalability and energy efficiency, raising the question: how do we design or optimize such distributed systems so that they guarantee high-speed access to content while (i) reducing congestion and energy consumption in the network and (ii) adapting to user demand in a context of near-permanent connectivity? In this thesis, we focus on two solutions proposed to answer this question: the deployment of cache networks and the implementation of energy-efficient protocols. Specifically, we propose analytical models for the design of these storage networks and for modelling energy consumption in wireless access networks. Our studies show that the performance of real cache networks can be predicted with absolute relative errors on the order of 1% to 5%, and that a large proportion, 70% to 90%, of the energy cost in cells can be saved at base stations and mobiles under realistic traffic conditions.
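The abstract does not reproduce the models themselves; as a hedged illustration of the kind of analytical approximation used for cache performance in this literature, the classical Che approximation for a single LRU cache of capacity $C$ under the independent reference model estimates the hit probability of content $i$, requested at rate $\lambda_i$, as

$$h_i \;\approx\; 1 - e^{-\lambda_i T_C}, \qquad \sum_{j=1}^{N}\left(1 - e^{-\lambda_j T_C}\right) = C,$$

where the characteristic time $T_C$ is the solution of the second equation. The thesis's own models for networks of caches need not coincide with this single-cache example.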
738

Calcul de majorants sûrs de temps d'exécution au pire pour des tâches d'applications temps-réels critiques, pour des systèmes disposants de caches mémoire / Computing safe upper bounds on worst-case execution times for tasks of critical real-time applications, on systems with memory caches

Louise, Stéphane 21 January 2002
This dissertation presents a new approach for computing the worst-case execution time (WCET) of critical real-time tasks, in particular with respect to the hazards introduced by memory caches. The problem and the state of the art are surveyed, but the emphasis is on the theory itself and its formalism, first in the single-task setting and then in the multitask setting. The method relies on abstract interpretation, like most other WCET computation methods, but the formalism takes a probabilistic approach (although deterministic in the single-task case) through the use of Markov chains. The generalization to the multitask setting uses the probabilistic properties to produce a pessimistic estimate of a WCET and of a worst-case standard deviation, thanks to a clever modification of the propagator in this setting. Initial evaluations of the model, hand-coded from the compilation results of fairly simple applications, show promising results for applying the model to full-scale real programs.
739

Simulation de la dynamique des dislocations à très grande échelle / Hybrid parallelism on large scale dislocation dynamic simulation

Etcheverry, Arnaud 23 November 2015
The work carried out during this thesis aims to give a dislocation dynamics simulation code the essential components needed to scale on modern computers. We address several aspects of the numerical simulation, starting with algorithmic considerations. To enable simulations that are efficient in terms of algorithmic complexity at large scale, we examine the constraints of the different stages of the simulation, providing an analysis of and improvements to the algorithms. Particular attention is then given to data structures. Taking the new algorithms into account, we propose a data structure that provides efficient access throughout the memory hierarchy. The structure is modular in order to serve two kinds of algorithms: on one side the management of the dislocation mesh, which requires dynamic memory management, and on the other side the compute-intensive phases, which require fast access. This modular structure is complemented by an octree that handles domain decomposition as well as hierarchical algorithms such as the stress field computation and collision detection. Finally, we present the parallel aspects of the code, introducing a hybrid approach with fine-grained thread-based parallelism and coarse-grained MPI parallelism that requires domain decomposition and load balancing. These contributions are then tested to validate their benefits for the numerical simulation. Two case studies are presented to observe and analyze the behaviour of the different building blocks of the simulation: first an extremely dynamic simulation composed of Frank-Read sources in a zirconium crystal, then some results on a target simulation containing a high density of irradiation defects.
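A minimal sketch of the hybrid pattern named above, coarse-grained MPI domain decomposition combined with fine-grained OpenMP threading inside each rank, is given below. The kernel and data are placeholders, not the thesis code.

```cpp
#include <mpi.h>
#include <omp.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Coarse grain: each rank owns one sub-domain's worth of segments.
    const int n_local = 100000;
    std::vector<double> force(n_local, 0.0);

    // Fine grain: OpenMP threads share the rank-local compute loop
    // (a stand-in for the stress-field / force computation).
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+ : local_sum)
    for (int i = 0; i < n_local; ++i) {
        force[i] = 1.0 / (1.0 + i + rank);
        local_sum += force[i];
    }

    // Ranks then exchange whatever global quantities the next step needs.
    double global_sum = 0.0;
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("ranks=%d, threads per rank=%d, reduced value=%f\n",
                    size, omp_get_max_threads(), global_sum);
    MPI_Finalize();
    return 0;
}
```

Built with something like `mpicxx -fopenmp hybrid.cpp` and launched with `mpirun -np 4 ./a.out`, each rank spawns its own team of OpenMP threads, which is the two-level parallelism the abstract refers to.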
740

Non-Parametric Clustering of Multivariate Count Data

Tekumalla, Lavanya Sita January 2017
The focus of this thesis is models for non-parametric clustering of multivariate count data. While there has been significant work in Bayesian non-parametric modelling in the last decade, in the context of mixture models for real-valued data and some forms of discrete data such as multinomial mixtures, there has been much less work on non-parametric clustering of multivariate count data. The main challenges in clustering multivariate counts include choosing a suitable multivariate distribution that adequately captures the properties of the data, for instance handling over-dispersed data or sparse multivariate data, while at the same time leveraging the inherent dependency structure between dimensions and across instances to get meaningful clusters. As the first contribution, this thesis explores extensions to the Multivariate Poisson distribution, proposing efficient algorithms for non-parametric clustering of multivariate count data. While Poisson is the most popular distribution for count modelling, the Multivariate Poisson often leads to intractable inference and a suboptimal fit of the data. To address this, we introduce a family of models based on the Sparse Multivariate Poisson that exploit the inherent sparsity in multivariate data, reducing the number of latent variables in the formulation of the Multivariate Poisson and leading to a better fit and more efficient inference. We explore Dirichlet process mixture model extensions and temporal non-parametric extensions to models based on the Sparse Multivariate Poisson for practical use of Poisson-based models for non-parametric clustering of multivariate counts in real-world applications. As a second contribution, this thesis addresses moving beyond the limitations of Poisson-based models for non-parametric clustering, for instance in handling over-dispersed data or data with negative correlations. We explore, for the first time, marginal independent inference techniques based on the Gaussian Copula for multivariate count data in the Dirichlet Process mixture model setting. This enables non-parametric clustering of multivariate counts without limiting assumptions that usually restrict the marginals to belong to a particular family, such as the Poisson or the negative-binomial. This inference technique can also work for mixed data (combinations of count, binary, and continuous data), enabling Bayesian non-parametric modelling to be used for a wide variety of data types. As the third contribution, this thesis addresses modelling a wide range of more complex dependencies, such as asymmetric and tail dependencies, during non-parametric clustering of multivariate count data with Vine Copula based Dirichlet process mixtures. While vine copula inference has been well explored for continuous data, it is still a topic of active research for multivariate counts and mixed multivariate data. Inference for multivariate counts and mixed data is a hard problem owing to ties that arise with discrete marginals. An efficient marginal independent inference approach based on extended rank likelihood, building on recent work in the statistics literature, is proposed in this thesis, extending the use of vines for multivariate counts and mixed data in practical clustering scenarios. This thesis also explores the novel systems application of Bulk Cache Preloading by analysing I/O traces through predictive models for temporal non-parametric clustering of multivariate count data.
State-of-the-art techniques in the caching domain are limited to exploiting short-range correlations in memory accesses at the millisecond granularity or smaller and cannot leverage long-range correlations in traces. We explore, for the first time, Bulk Cache Preloading, the process of proactively predicting data to load into cache, minutes or hours before the actual request from the application, by leveraging longer-range correlations at the granularity of minutes or hours. This enables the development of machine learning techniques tailored for caching due to relaxed timing constraints. Our approach involves a data aggregation process, converting I/O traces into a temporal sequence of multivariate counts, that we analyse with the temporal non-parametric clustering models proposed in this thesis. While the focus of our thesis is models for non-parametric clustering of discrete data, particularly multivariate counts, we also hope our work on bulk cache preloading paves the way to more inter-disciplinary research using data mining techniques in the systems domain. As an additional contribution, this thesis addresses multi-level non-parametric admixture modelling for discrete data in the form of grouped categorical data, such as document collections. Non-parametric clustering for topic modelling in document collections, where a document is associated with an unknown number of semantic themes or topics, is well explored with admixture models such as the Hierarchical Dirichlet Process. However, there exist scenarios where a document needs to be associated with themes at multiple levels, where each theme is itself an admixture over themes at the previous level, motivating the need for multilevel admixtures. Consider the example of non-parametric entity-topic modelling, i.e., simultaneously learning entities and topics from document collections. This can be realized by modelling a document as an admixture over entities, while entities could themselves be modelled as admixtures over topics. We propose the nested Hierarchical Dirichlet Process to address this gap and apply a two-level version of our model to automatically learn author entities and topics from research corpora.
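For readers unfamiliar with the non-parametric machinery referred to throughout, the Dirichlet process mixtures used here admit the standard Chinese-restaurant-process representation, under which observation $i$ joins an existing cluster $k$ or opens a new one with prior probabilities

$$P(z_i = k \mid z_{-i}, \alpha) = \frac{n_k^{-i}}{n - 1 + \alpha}, \qquad P(z_i = \mathrm{new} \mid z_{-i}, \alpha) = \frac{\alpha}{n - 1 + \alpha},$$

where $n_k^{-i}$ is the number of observations currently assigned to cluster $k$ excluding $i$, and $\alpha$ is the concentration parameter; the number of clusters is therefore inferred from the data rather than fixed in advance. This is standard background rather than a statement of the thesis's specific models.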
