1

Advances in secure remote electronic voting

Dossogne, Jérôme 30 October 2015 (has links)
This document first introduces the reader to the challenges that electronic voting poses to its designers, implementers and users. Some of these challenges are answered in the second part of the document, where we introduce and describe several distinct scientific results obtained during our years as a PhD student, covering essentially the years 2009 to 2011 included. All these results aim either at better understanding the issues of electronic voting or at solving them. A reader might nonetheless be interested in picking one of these contributions for his own electronic voting system while leaving the rest: the chapters of the second part mostly stand on their own and can be used independently of the others, which leads us to introduce each of them separately. After concluding in the third part, we provide a number of appendices that were not thoroughly discussed in the second part of the document but that might be of interest to the reader. These appendices cover various research efforts, collaborations and analyses related to electronic voting that we performed during those same years. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
2

Requirement-driven Design and Optimization of Data-Intensive Flows

Jovanovic, Petar 26 September 2016 (has links)
Data have become the number-one asset of today's business world, and their exploitation and analysis attract the attention of people from different fields and with different technical backgrounds. Data-intensive flows are central processes in today's business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. However, designing and optimizing such data flows so that they satisfy both users' information needs and agreed quality standards is known to be a burdensome task, typically left to the manual efforts of a BI system designer. These tasks have become even more challenging for next-generation BI systems, where data flows typically need to combine data from in-house transactional storage with data coming from external sources in a variety of formats (e.g. social media, governmental data, news feeds). Moreover, to make an impact on business outcomes, data flows are expected to answer unanticipated analytical needs of a broader set of business users and to deliver valuable information in near real-time (i.e. at the right time). These challenges clearly indicate a need to boost the automation of the design and optimization of data-intensive flows. This PhD thesis aims at providing automatable means for managing the lifecycle of data-intensive flows. The study first analyzes the remaining challenges in the field by surveying the current literature and envisioning an architecture for managing the lifecycle of data-intensive flows. Following the proposed architecture, we then focus on automatic techniques covering the different phases of that lifecycle. In particular, the thesis first proposes an approach (CoAl) for the incremental design of data-intensive flows by means of multi-flow consolidation. CoAl not only facilitates the maintenance of data flow designs in the face of changing information needs, but also supports the multi-flow optimization of data-intensive flows by maximizing their reuse. Next, in the data warehousing (DW) context, we propose a complementary method (ORE) for the incremental design of the target DW schema, along with systematically tracing evolution metadata, which can further facilitate the design of back-end data-intensive flows (i.e. ETL processes). The thesis then studies the problem of implementing data-intensive flows in the deployable formats of different execution engines, and proposes the BabbleFlow system for translating logical data-intensive flows into executable formats spanning single or multiple execution engines. Lastly, the thesis focuses on managing the execution of data-intensive flows on distributed data processing platforms, and to this end proposes an algorithm (H-WorD) that supports the scheduling of data-intensive flows through workload-driven redistribution of data in computing clusters. The overall outcome of this thesis is an end-to-end platform for managing the lifecycle of data-intensive flows, called Quarry. The techniques proposed in this thesis, plugged into the Quarry platform, greatly reduce manual effort and assist users of different technical skills in their analytical tasks. Finally, the results of this thesis contribute broadly to the field of data-intensive flows in today's BI systems, and advocate further attention by both academia and industry to the problems of designing and optimizing data-intensive flows.
/ Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
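For illustration, here is a minimal sketch of the kind of multi-flow consolidation the abstract describes: two data flows that share a common prefix of operators are merged into one DAG so the shared work runs only once. The flow representation and the greedy prefix-sharing heuristic are our illustrative assumptions, not the actual CoAl algorithm from the thesis.

```python
# Sketch: consolidate flows (lists of operator labels) into one DAG,
# reusing an existing node whenever the next operator matches.
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                                  # operator label, e.g. "extract:sales"
    children: list = field(default_factory=list)

def consolidate(flows):
    root = Node("source")
    for flow in flows:
        current = root
        for op in flow:
            match = next((c for c in current.children if c.op == op), None)
            if match is None:                # no reuse possible: open a new branch
                match = Node(op)
                current.children.append(match)
            current = match                  # reuse the shared operator
    return root

def dump(node, depth=0):
    print("  " * depth + node.op)
    for child in node.children:
        dump(child, depth + 1)

flow_a = ["extract:sales", "clean:nulls", "join:customers", "agg:by_region"]
flow_b = ["extract:sales", "clean:nulls", "filter:year=2016"]
dump(consolidate([flow_a, flow_b]))          # the shared prefix appears once
```

The same idea is what makes multi-flow optimization possible: once the shared prefix is materialized a single time, both downstream branches reuse it instead of recomputing it.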
3

Dedicated Hardware Context-Switch Services for Real-Time Multiprocessor Systems

Allard, Yannick 07 November 2017 (has links) (PDF)
Computers are widely present in our daily life and are used in critical applications like cars, planes and pacemakers. Those real-time systems are nowadays based on processors of increasing complexity, with specific hardware services designed to reduce task preemption and migration overheads. However, using those services can in some cases add unpredictable overheads when the system has to switch from one task to another. This document screens existing solutions used in commonly available processors to ease preemption and migration, highlighting their strengths and weaknesses. A new hardware context-switch service (HwCS) is proposed to speed up task switching at the L1 cache level, to reduce context-switch overheads and to improve system predictability. Although this thesis focuses on the L1 cache, the concept can also be applied to other memory levels as well as to any context-dependent block. The solution presented is based on stacking several identical cache memories at the L1 level. Each layer is able to save and restore its complete state to/from the main memory independently. One layer can be used for the active task running on the processor while another layer is restored or saved concurrently. The active task can remain in execution until the preempting task is ready in another layer after restoration from the main memory. The context switch between tasks can then be performed in a very short time by switching to the other layer, which is now ready to run the preempting task. Furthermore, the task resumes with exactly the L1 cache state saved at its previous preemption. The previous task's state can be sent back to the main memory for future use. This mechanism minimises the time required for migrations and preemptions, and consequently lowers the overheads and limits the preemption-induced cache misses usually accounted for in cache-related migration and preemption delays. Isolation between tasks is also provided, as each task executes from a dedicated layer. Both uniprocessor and multiprocessor designs are presented, along with the implications for real-time theory induced by the use of this hardware service. An implementation of the system is characterized, and the results show improvements to the maximum and average execution times of a set of various tasks: when the same size is used for the baseline cache and the HwCS layers, 94% of the tasks have a better execution time (up to 67% better) and 80% have a better Worst-Case Execution Time (WCET); 80% of the tasks are more predictable, and the remaining 20% still have a better execution time. When the baseline cache size is split among the layers of the HwCS, measurements show that 75% of the tasks have a better execution time (up to 67% better), leading to 50% of the tasks having a better WCET. Only 6% of the tasks suffer from a worse execution time and worse predictability, while 75% of the tasks remain more predictable when using the HwCS compared to the baseline cache. / Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
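The following is a minimal behavioural sketch of the layered-cache mechanism described above: two identical L1 "layers", where the active task keeps running on one layer while the other restores the next task's saved cache state from main memory, so that the switch itself reduces to flipping the active layer. The two-layer limit and the dictionary model of cache state are our simplifying assumptions, not the actual hardware design.

```python
# Behavioural model of the HwCS idea (illustrative, not the real hardware).
class HwCS:
    def __init__(self):
        self.layers = [{}, {}]        # two identical L1 cache layers
        self.active = 0               # index of the layer the CPU uses
        self.saved_states = {}        # task id -> cache image in main memory

    def preload(self, task_id):
        """Restore task_id's cache image into the inactive layer.
        In the real design this runs concurrently with the active task."""
        inactive = 1 - self.active
        self.layers[inactive] = dict(self.saved_states.get(task_id, {}))

    def switch(self, old_task_id):
        """Flip layers: the near-instant context switch. The old task's
        layer content is written back to main memory for future reuse."""
        self.saved_states[old_task_id] = dict(self.layers[self.active])
        self.active = 1 - self.active

hwcs = HwCS()
hwcs.layers[hwcs.active] = {"0x100": "A's data"}   # task A warms its cache
hwcs.preload("B")     # B's state restored in the background, A still runs
hwcs.switch("A")      # flip: B resumes with its exact saved L1 state
```

Because the save and restore traffic happens off the critical path, the preempted and preempting tasks never pay the cache-refill penalty at the moment of the switch, which is what drives the WCET improvements reported above.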
4

Ordonnancement efficace de systèmes embarqués temps réel strict sur plates-formes hétérogènes

Poczekajlo, Xavier 30 October 2020 (has links) (PDF)
Embedded systems are increasingly present in our daily lives, as in telephones or the equipment of modern cars. Modern embedded systems use increasingly complex platforms. After having long relied on a single processor, modern platforms may now contain several processors. For some years, in order to keep improving the performance of these systems at a lower cost, some of these platforms have embedded several different processors, sometimes even capable of quickly modifying their characteristics while the system is running. These are known as heterogeneous platforms. This thesis deals with the scheduling of hard real-time applications on reconfigurable heterogeneous platforms. Establishing a scheduling policy consists in guaranteeing the execution of sets of recurrent tasks while respecting the timing constraints of each task. In a hard real-time context, a task must necessarily be fully executed before its deadline; any lateness could compromise the safety of the system or of its users. Producing an efficient hard real-time schedule for such heterogeneous platforms is particularly difficult. Indeed, the execution speed of a processor on such a platform depends both on the type of the processor and on the task executed. This makes tasks difficult to interchange and thus considerably increases the complexity of scheduling policies. Moreover, the cost of a migration (moving a task in the course of its execution) from one processor to another is high, which can make scheduling policies inefficient in practice. In this thesis, two directions are explored to take advantage of the possibilities offered by these heterogeneous platforms. First, we propose a so-called global scheduler, which allows a theoretical utilisation of the entire platform. To reach this goal, we isolate different sub-problems, following a scheme established by the existing literature. For each sub-problem, we propose a significant improvement over the state of the art. Together they constitute a new scheduler. An empirical evaluation shows that its performance is far superior to that of existing schedulers. Moreover, the proposed scheduling policy has better applicability, since it reduces the number of migrations from one processor to another. The second direction explored is the paradigm of so-called multimode applications. Here we propose the first model in which both the hardware and the software can be modified while the application is running, in order to adapt to its current context. Finally, two new protocols using this model are proposed and evaluated. It is shown, theoretically and empirically, that these protocols exhibit low complexity and good performance, and thus correspond to the needs of real applications. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
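To make the core difficulty concrete: on a heterogeneous platform a task's utilization depends on both the task and the processor type, so tasks are not interchangeable. The sketch below shows a naive first-fit partitioner over type-specific utilizations; the task set, the speed table and the first-fit heuristic are illustrative assumptions, not the scheduler proposed in the thesis.

```python
# utilization[task][proc_type] = WCET on that processor type / period
tasks = {
    "video":  {"big": 0.50, "little": 0.90},
    "sensor": {"big": 0.10, "little": 0.15},
    "crypto": {"big": 0.45, "little": 0.95},
}
processors = [("cpu0", "big"), ("cpu1", "little")]

def first_fit(tasks, processors):
    """Assign each task to the first processor where its type-specific
    utilization still fits under 100%; None means infeasible."""
    load = {name: 0.0 for name, _ in processors}
    assignment = {}
    for task, util_by_type in tasks.items():
        for name, ptype in processors:
            if load[name] + util_by_type[ptype] <= 1.0:   # fits on this CPU
                load[name] += util_by_type[ptype]
                assignment[task] = name
                break
        else:
            return None      # no processor can host this task
    return assignment

print(first_fit(tasks, processors))
# {'video': 'cpu0', 'sensor': 'cpu0', 'crypto': 'cpu1'}
```

Note how "crypto" lands on the little core only because the big core is already too loaded, even though it runs much faster there: this coupling between assignment quality and per-type speeds is what the global scheduler in the thesis must reason about, while also keeping migrations rare.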
5

Optimisation of Performance Metrics of Embedded Hard Real-Time Systems using Software/Hardware Parallelism

Paolillo, Antonio 17 October 2018 (has links)
Nowadays, embedded systems are part of our daily lives. Some of these systems are called safety-critical and have strong requirements in terms of safety and reliability. Additionally, these systems must have long autonomy, good performance and minimal costs. Finally, these systems must exhibit predictable behaviour and provide their results within firm deadlines. When these different constraints are combined in the requirement specifications of a modern product, classic design techniques making use of single-core platforms are not sufficient. Academic research in the field of real-time embedded systems has produced numerous techniques to exploit the capabilities of modern hardware platforms. These techniques are often based on using the parallelism inherently present in modern hardware to improve system performance while reducing the platform's power dissipation. However, very few systems on the market use these state-of-the-art techniques, and few of these techniques have been validated in practical experiments. In this thesis, we study operating-system-level techniques that exploit hardware parallelism, through the implementation of parallel software, in order to boost the performance of target applications and to reduce overall system energy consumption while satisfying strict application timing requirements. We detail the theoretical foundations of the ideas applied in the dissertation and validate these ideas through experimental work. To this aim, we use a new Real-Time Operating System kernel written in the context of the creation of a spin-off of the Université libre de Bruxelles. Our experiments are based on the execution of applications on this operating system, which runs on a real-world embedded platform. Our results show that, compared to traditional design techniques, using parallel and power-aware scheduling techniques to exploit hardware and software parallelism allows embedded applications to be executed with substantial savings in energy consumption. We also present future and ongoing research that exploits the capabilities of recent embedded platforms combining multi-core processors and reconfigurable hardware logic, allowing further improvements in performance and energy consumption. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
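A back-of-the-envelope model shows why parallelism saves energy under a deadline: dynamic power grows roughly cubically with frequency, so m cores running at f/m finish the same work in the same time for less energy. The cubic power model and the perfect-parallelism assumption are deliberate simplifications for illustration, not the thesis's measured results.

```python
def dynamic_energy(cycles, freq, k=1.0):
    """Energy = power * time, with power ~ k * f^3 and time = cycles / f,
    hence energy = k * cycles * f^2."""
    power = k * freq ** 3
    time = cycles / freq
    return power * time

work = 1e9            # cycles to execute before the deadline
deadline = 1.0        # seconds

# Sequential: one core must run at work/deadline Hz to make the deadline.
e_seq = dynamic_energy(work, work / deadline)

# Parallel: 4 cores each run work/4 cycles at a quarter of the frequency.
e_par = 4 * dynamic_energy(work / 4, (work / 4) / deadline)

print(f"parallel uses {e_par / e_seq:.1%} of the sequential energy")  # 6.2%
```

Under this model, energy per core scales with f², so splitting the work four ways cuts total dynamic energy to 1/16 of the sequential case; real savings are smaller once static power, memory contention and imperfect parallelism are accounted for, which is exactly what the thesis's experiments quantify.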
6

Use of simulators for side-channel analysis: Leakage detection and analysis of cryptographic systems in early stages of development

Veshchikov, Nikita 23 August 2017 (has links) (PDF)
Cryptography is the foundation of modern IT security; it provides algorithms and protocols that can be used for secure communications. Cryptographic algorithms ensure properties such as confidentiality and data integrity. Confidentiality can be ensured using encryption algorithms, which require secret information called a key. These algorithms are implemented in cryptographic devices. There exist many types of attacks against such cryptosystems, whose main goal is the extraction of the secret key. Side-channel attacks are among the strongest types of attacks against cryptosystems. Side-channel attacks focus on the attacked device: they measure its physical properties in order to extract the secret key. Thus, these attacks target weaknesses in an implementation of an algorithm rather than the abstract algorithm itself. Power analysis is a type of side-channel attack that can be used to extract a secret key from a cryptosystem by analysing its power consumption while the target device executes an encryption algorithm; we can say that the secret information leaks from the device through its power consumption. One of the biggest challenges in the domain of side-channel analysis is the evaluation of a device from the perspective of side-channel attacks, in other words the detection of information leakage. A device can be subject to several sources of information leakage, and it is actually relatively easy to find just one side-channel attack that works (by exploiting just one source of leakage); however, it is very difficult to find all sources of information leakage, or to show that there is no information leakage in a given implementation of an encryption algorithm. Evaluators use various statistical tests during the analysis of a cryptographic device to check that it does not leak the secret key. However, in order to perform such tests the evaluation lab needs the device itself, to acquire the measurements and analyse them. Unfortunately, the development process of cryptographic systems is rather long and has to go through several stages. Thus, an information leakage that can lead to a side-channel attack may be discovered by an evaluation lab only at the very last stage, using the final product. In such a case, the whole process has to be restarted in order to fix the issue, which can lead to significant time and budget overheads. Developers of cryptographic systems would therefore like to be able to detect issues related to side-channel analysis during the development of the system, preferably in its early stages. However, this is far from being a trivial task, because the end product is not yet available, and side-channel attacks by their nature exploit the properties of the final version of the cryptographic device that is actually available to the end user. The goal of this work is to show how simulators can be used for the detection of issues related to side-channel analysis during the development of cryptosystems. This work lists the advantages of simulators compared to physical experiments and suggests a classification of simulators for side-channel analysis. It surveys existing simulators created for side-channel analysis; more specifically, we show that there is a lack of available simulation tools and that simulators are therefore rarely used in the domain.
We present three new open-source simulators called Silk, Ascold and Savrasca. These simulators work at different levels of abstraction and can be used by developers to perform side-channel analysis of a device during different stages of development of a cryptosystem. We show how Silk can be used during the preliminary analysis and development of cryptographic algorithms, using simulations based on high-abstraction-level source code. We used it to compare S-boxes as well as to compare shuffling countermeasures against side-channel analysis. Then, we present the tool called Ascold, which can be used to find side-channel leakage in implementations with a masking countermeasure by analysing the assembly code of the encryption. Finally, we demonstrate how our simulator called Savrasca can be used to find side-channel leakage using simulations based on compiled executable binaries. We use Savrasca to analyse the masked implementation from a well-known side-channel analysis contest (the 4th edition of the DPA Contest); as a result, we demonstrate that the analysed implementation contains a previously undiscovered information leakage. Throughout this work we also compared the results of our simulated experiments with real experiments on microcontroller implementations, and showed that issues found using our simulators are also present in the final product. Overall, this work emphasises that simulators are very useful for the detection of side-channel leakages in the early stages of development of cryptographic systems. / Option Informatique du Doctorat en Sciences / info:eu-repo/semantics/nonPublished
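To illustrate the high-abstraction style of simulation the abstract describes, here is a minimal Hamming-weight leakage simulator and a correlation attack against it. This is a generic textbook construction, not the actual Silk/Ascold/Savrasca code: the simulated "power trace" of an S-box lookup leaks HW(Sbox[pt ^ key]) plus noise, and a CPA-style correlation recovers the key byte. The random stand-in S-box and the noise level are illustrative assumptions.

```python
import random

SBOX = list(range(256))
random.seed(1); random.shuffle(SBOX)   # stand-in S-box for the demo

def hw(x):
    """Hamming weight: the classic power-consumption leakage model."""
    return bin(x).count("1")

def simulate_trace(pt, key, noise=1.0):
    """One leaking sample: HW of the S-box output plus Gaussian noise."""
    return hw(SBOX[pt ^ key]) + random.gauss(0, noise)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

secret = 0x3C
pts = [random.randrange(256) for _ in range(2000)]
traces = [simulate_trace(p, secret) for p in pts]

# CPA: the key guess whose predicted HW best correlates with the traces wins.
best = max(range(256),
           key=lambda k: abs(pearson([hw(SBOX[p ^ k]) for p in pts], traces)))
print(f"recovered key byte: {best:#04x}")   # 0x3c with high probability
```

Running the same correlation test against a simulated implementation with a countermeasure (masking, shuffling) and seeing whether the key still emerges is precisely the kind of early-stage leakage check such simulators make possible before any silicon exists.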
