391 |
An Evaluation of Robotics in Nursing Homes to Reduce Adverse Drug Events
Ueal Jr., Ozell, 01 January 2016 (has links)
Adverse drug events (ADEs) cause many deaths annually, in addition to affecting the quality of life of many others. A descriptive mixed-methods approach, specifically an exploratory case study and experimental design, guided this research, which used survey and focus-group methods to evaluate perceptions about robotic technology (RT) for reducing the rate of ADEs in U.S. nursing homes (NHs). There is a lack of scholarly research into whether a conceptual approach rooted in RT can be implemented to assist with drug administration in NHs. The purpose of this study was twofold: first, to evaluate the causes of ADEs specifically related to tablets, capsules, and pills; second, to evaluate the perceptions of nurses and administrators regarding the use of RT to help reduce ADEs. In the quantitative part, the sample means from 102 surveys of nurses and administrators were evaluated with the t test and the paired t test, while in the qualitative part, survey results, reported errors, and focus-group data were assessed collectively. The results did not indicate any new causes of ADEs and showed that the participants had a favorable perception of RT. Based on these results, RT may be tailored in such a way that it can significantly reduce ADE occurrences for residents of U.S. NHs.
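As an illustrative sketch (not material from the thesis itself), the code below shows how the two tests named in the abstract, an independent-samples t test and a paired t test, might be run on survey means in Python; the scores, group sizes, and the 1-to-5 rating scale are invented assumptions.

```python
# Minimal sketch of the two tests named in the abstract, run on made-up survey data.
# The scores, group sizes, and rating scale are illustrative assumptions only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical Likert-style perception scores (1-5) for nurses and administrators.
nurses = rng.integers(1, 6, size=70).astype(float)
admins = rng.integers(1, 6, size=32).astype(float)

# Independent-samples t test: do the two groups differ in mean perception of RT?
t_ind, p_ind = stats.ttest_ind(nurses, admins, equal_var=False)
print(f"independent t = {t_ind:.3f}, p = {p_ind:.3f}")

# Paired t test: the same respondents rate, e.g., current vs. RT-assisted practice.
before = rng.integers(1, 6, size=50).astype(float)
after = np.clip(before + rng.normal(0.4, 1.0, size=50), 1, 5)
t_pair, p_pair = stats.ttest_rel(before, after)
print(f"paired t = {t_pair:.3f}, p = {p_pair:.3f}")
```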
|
392 |
A Study On Effects Of Phase - Amplitude Errors In Planar Near Field Measurement Facility
Varughese, Suma, 01 1900 (has links)
An antenna is an indispensable part of a radar or free-space communication system, and different applications impose different stringent specifications on it. Once designed and fabricated for its intended application, an antenna or antenna array has to be evaluated for its far-field characteristics in a real free-space environment, which requires setting up a far-field test site. Maintaining the site to keep stray reflection levels low and the cost of the real estate are some of the disadvantages.
Near-field measurement facilities are compact and can be used to test antennas by exploiting the relationship between the near field and the far field. It has been shown that the far-field patterns of an antenna can be predicted with sufficient accuracy provided the near-field measurements are accurate. Owing to limitations of near-field measurement systems, errors creep in and corrupt the measured near-field data, leading to errors in the predicted far field. All of these errors ultimately corrupt the phase and amplitude data.
In this thesis, one such near-field measurement facility, the Planar Near Field Measurement facility, is discussed, along with its limitations and the errors that arise from them. The various errors that occur in measurements ultimately corrupt the near-field phase and amplitude. The investigations carried out aim at a detailed study of these phase and amplitude errors and their effect on the far-field patterns of the antenna. Depending on their source, the errors are classified as spike, pulse, and random errors. The locations at which these types of errors occur in the measurement plane and their effects on the far field of the antenna are studied for both phase and amplitude errors.
The studies conducted for various phase and amplitude errors show that the near-field phase and amplitude data are more tolerant of random errors, as the far-field patterns are not affected even for low-sidelobe cases. Spike errors, though they occur as a wedge at a single point in the measurement plane, have a more pronounced effect on the far-field patterns: the lower the taper value of the antenna, the more pronounced the error. It is also observed that the far-field pattern is affected only in the plane in which the error occurs and not in the orthogonal plane. Pulse-type errors, which occur even over a short length of the measurement, affect both principal-plane far-field patterns.
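To make the distinction between error types concrete, here is an illustrative toy sketch (not the facility's actual near-field-to-far-field transform): a 1-D tapered aperture is transformed with a plain FFT, and a small random phase jitter is compared with a single-point spike phase error. The taper, sample count, and error magnitudes are invented assumptions.

```python
# Toy 1-D illustration of how random vs. spike phase errors in sampled field data
# perturb the radiated pattern. A plain FFT stands in for the planar
# near-field-to-far-field transform; all parameter values are assumptions.
import numpy as np

N = 256
aperture = np.hamming(N).astype(complex)      # stand-in for a low-sidelobe taper
rng = np.random.default_rng(1)

def peak_sidelobe_db(field):
    """Peak sidelobe level (dB) of the FFT pattern, excluding a crude main-lobe region."""
    f = np.fft.fftshift(np.fft.fft(field, 4096))
    p = 20 * np.log10(np.abs(f) / np.abs(f).max() + 1e-12)
    centre = len(p) // 2
    mask = np.abs(np.arange(len(p)) - centre) > 64
    return p[mask].max()

# Random phase error: small zero-mean jitter on every sample (e.g. receiver noise).
random_err = aperture * np.exp(1j * np.deg2rad(rng.normal(0.0, 3.0, N)))

# Spike phase error: one sample with a large phase excursion (a glitch at one probe position).
spike_err = aperture.copy()
spike_err[N // 2] *= np.exp(1j * np.deg2rad(60.0))

for name, field in [("clean", aperture), ("random jitter", random_err), ("single spike", spike_err)]:
    print(f"{name:13s}: peak sidelobe = {peak_sidelobe_db(field):6.2f} dB")
```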
This study can be used as a tool to determine the level to which the various errors, such as mechanical and RF errors, need to be controlled to make useful and correct pattern predictions at a particular facility. The study can thereby help economise the budget of the facility, since the parameters required for building it need not be over-specified beyond the actual requirement. Although this is a limited study, it is certainly a trendsetter in this direction.
|
393 |
Some methods for reducing the total consumption and production prediction errors of electricity: Adaptive Linear Regression of Original Predictions and Modeling of Prediction Errors
Oleksandra, Shovkun, January 2014 (has links)
Balance between energy consumption and production of electricity is very important for electric power system operation and planning. It provides a good principle of effective operation, reduces the generation cost in a power system and saves money. Two novel approaches to reduce the total errors between forecast and real electricity consumption were proposed. An Adaptive Linear Regression of Original Predictions (ALROP) was constructed to modify the existing predictions by using simple linear regression with estimation by the Ordinary Least Squares (OLS) method. The Weighted Least Squares (WLS) method was also used as an alternative to OLS. The Modeling of Prediction Errors (MPE) approach was constructed in order to predict errors of the existing predictions by using the Autoregressive (AR) and Autoregressive Moving-Average (ARMA) models. For the first approach it is observed that the last reported value is of main importance. An attempt was made to improve the performance and to obtain better parameter estimates. The separation of concerns and the combination of concerns were suggested in order to extend the constructed approaches and raise their efficacy. Both methods were tested on data for the fourth region of Sweden ("elområde 4") provided by Bixia. The obtained results indicate that all suggested approaches reduce the total percentage errors of consumption prediction approximately by one half. Results indicate that the ARMA model reduces the total errors slightly better than the other suggested approaches. The most effective way to reduce the total consumption prediction errors seems to be to reduce the total errors for each subregion.
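As an illustrative sketch of the two ideas named in the abstract (not the thesis's code or data): an existing forecast is corrected with a simple linear regression fitted by OLS, and the remaining errors are modeled with an ARMA model via statsmodels. The consumption series, the external forecast, and the ARMA order are synthetic assumptions.

```python
# Toy sketch: (1) ALROP-style correction of an existing forecast by OLS regression,
# (2) MPE-style correction by predicting the forecast errors with an ARMA(1,1) model.
# The series, the "existing" forecast, and the model order are invented assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
T, H = 400, 48                                          # history length, evaluation horizon
actual = 100 + 10 * np.sin(np.arange(T) * 2 * np.pi / 24) + rng.normal(0, 2, T)
forecast = 0.9 * actual + 5 + rng.normal(0, 3, T)       # biased external prediction

# (1) Regress actuals on the original predictions over the training part, then correct.
X = np.column_stack([np.ones(T), forecast])
beta, *_ = np.linalg.lstsq(X[:-H], actual[:-H], rcond=None)
alrop = X[-H:] @ beta

# (2) Fit ARMA(1,1) to past errors and add the predicted error to the original forecast.
errors = actual - forecast
arma = ARIMA(errors[:-H], order=(1, 0, 1)).fit()
mpe = forecast[-H:] + arma.forecast(H)

def mape(y, yhat):
    return np.mean(np.abs((y - yhat) / y)) * 100

print(f"original forecast  MAPE: {mape(actual[-H:], forecast[-H:]):.2f}%")
print(f"ALROP-corrected    MAPE: {mape(actual[-H:], alrop):.2f}%")
print(f"ARMA-MPE-corrected MAPE: {mape(actual[-H:], mpe):.2f}%")
```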
|
394 |
Inter- and intralingual errors in Chinese students' compositions : A case study / Förstaspråks- och målspråksfel i kinesiska studenters uppsatser : En fallstudie
Björkegren, David, January 2018 (has links)
In this quantitative study, time-controlled written English compositions by 39 Chinese university students majoring in English were analyzed by means of Error Analysis (EA) in order to find out what grammatical errors were made. The study investigates errors made by more than one fifth of the participants, in order to see whether they can be ascribed to interlingual or to intralingual influence. An error taxonomy based on previous research was created specifically for the errors encountered in the EA. The following grammatical errors were analyzed: article errors, noun number errors, preposition errors, and verb errors. The results showed that while Chinese learners of English make mistakes due to both interlingual and intralingual influence, the vast majority are due to interlingual influence. These findings strengthen previous notions that when the target language belongs to a different language family than the L1, errors are more often due to interlingual influence (also referred to as negative transfer) than to intralingual influence.
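As a purely illustrative sketch of the selection rule described in the abstract (keeping only error types produced by more than one fifth of the 39 participants and splitting them by source), with all counts invented rather than taken from the study:

```python
# Hypothetical tally illustrating the selection rule: keep error types made by more
# than one fifth of the 39 participants, then attribute them to a source.
# All names and counts are invented placeholders, not the study's data.
from collections import Counter

N_PARTICIPANTS = 39
# participant ids who produced each error type -- invented example data
made_by = {
    "article":     {1, 2, 3, 5, 8, 9, 11, 14, 20, 22},
    "noun number": {2, 4, 7, 9, 13, 15, 18, 21, 25},
    "preposition": {3, 6, 10},
    "verb":        {1, 4, 5, 8, 12, 16, 19, 23, 27, 30, 33},
}
source = {"article": "interlingual", "noun number": "interlingual",
          "preposition": "intralingual", "verb": "interlingual"}

frequent = {e for e, who in made_by.items() if len(who) > N_PARTICIPANTS / 5}
tally = Counter(source[e] for e in frequent)
print("error types analysed:", sorted(frequent))
print("attribution among them:", dict(tally))
```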
|
395 |
Resilient and energy-efficient scheduling algorithms at scale / Algorithmes d'ordonnancement fiables et efficaces énergétiquement à l'échelle
Aupy, Guillaume, 16 September 2014 (has links)
This thesis deals with two issues for future Exascale platforms, namely resilience and energy. In the first part of this thesis, we focus on the optimal placement of periodic coordinated checkpoints to minimize execution time. We consider fault predictors, software used by system administrators that tries to predict (through the study of past events) where and when faults will strike. In this context, we propose efficient algorithms, and we give a first-order optimal formula for the amount of work that should be done between two checkpoints. We then focus on silent data corruption errors. Contrary to fail-stop failures, such latent errors cannot be detected immediately, and a mechanism to detect them must be provided. We compute the optimal period in order to minimize the waste. In the second part of the thesis we address the energy consumption challenge. The speed scaling technique consists in diminishing the voltage of the processor, hence diminishing its execution speed. Unfortunately, it has been pointed out that DVFS increases the probability of failures. In this context, we consider the speed scaling technique coupled with reliability-increasing techniques such as re-execution, replication or checkpointing. For these different problems, we propose various algorithms whose efficiency is shown either through thorough simulations or through approximation results relative to the optimal solution. Finally, we consider the different energy costs involved in periodic coordinated checkpointing and compute the optimal period to minimize energy consumption, as we did for execution time.
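For readers unfamiliar with the baseline being refined here, the following illustrative sketch evaluates the classical first-order trade-off for periodic coordinated checkpointing, using the well-known Young/Daly approximation W ≈ sqrt(2·C·μ). The thesis derives its own refinements (fault predictors, silent errors with verification), which are not reproduced here, and the MTBF, checkpoint and recovery costs below are invented.

```python
# First-order optimal amount of work between two coordinated checkpoints, and the
# resulting waste, using the classical Young/Daly approximation W ~ sqrt(2*C*mu).
# Platform MTBF, checkpoint cost, and recovery cost are invented example values;
# the thesis's refined settings (predictors, silent errors) are not modeled here.
import math

mu = 24 * 3600.0   # platform MTBF in seconds (assumed)
C = 600.0          # checkpoint cost in seconds (assumed)
R = 600.0          # recovery cost in seconds (assumed)

def waste(W):
    """First-order expected fraction of time lost, for W seconds of work per checkpoint."""
    return C / (W + C) + (R + W / 2.0) / mu

W_opt = math.sqrt(2.0 * C * mu)
print(f"optimal work between checkpoints: {W_opt / 3600:.2f} h")
for W in (W_opt / 4, W_opt, 4 * W_opt):
    print(f"W = {W / 3600:6.2f} h -> waste = {waste(W):.3%}")
```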
|
396 |
Chyba ve výuce matematiky na základních školách / Error in teaching mathematics at elementary schools
Krpálková, Romana, January 2017 (has links)
The diploma thesis deals with how errors in the teaching of mathematics are perceived by pupils and teachers, and with how pupils and teachers respond to errors. Main attention is paid to mathematical errors. The thesis consists of two parts, theoretical and practical. The theoretical part contains an overview of selected views on error: the development of and changes in views on the concept of "error", different approaches to error classification, and pedagogical and psychological perspectives on the term "error". The practical part is based on research carried out at the second stage of two primary schools, in Prague and Neratovice. Pupils filled in a questionnaire, and guided interviews were conducted with the teachers. Keywords: work with error, error classification, causes of errors, learning without errors, learning with error
|
397 |
Lattice - Based Cryptography - Security Foundations and Constructions / Cryptographie reposant sur les réseaux Euclidiens - Fondations de sécurité et Constructions
Roux-Langlois, Adeline, 17 October 2014 (has links)
Lattice-based cryptography is a branch of cryptography exploiting the presumed hardness of some well-known problems on lattices. Its main advantages are its simplicity, efficiency, and apparent security against quantum computers. The principle of the security proofs in lattice-based cryptography is to show that attacking a given scheme is at least as hard as solving a particular problem, such as the Learning With Errors problem (LWE) or the Small Integer Solution problem (SIS). Then, by showing that those two problems are at least as hard to solve as a hard problem on lattices, presumed intractable in polynomial time, we conclude that the constructed scheme is secure. In this thesis, we improve the foundation of the security proofs and build new cryptographic schemes. We study the hardness of the SIS and LWE problems, and of some of their variants on integer rings of cyclotomic fields and on modules over those rings. We show that there is a classical hardness proof for the LWE problem (Regev's prior reduction was quantum), and that the module variants of SIS and LWE are also hard to solve. We also give two new lattice-based group signature schemes, with security based on SIS and LWE. One is the first lattice-based group signature with signature size logarithmic in the number of users; the other construction additionally provides verifier-local revocation. Finally, we improve the size of some parameters in the work on cryptographic multilinear maps of Garg, Gentry and Halevi in 2013.
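To make the central object concrete, here is an illustrative toy LWE instance b = A·s + e (mod q); the parameters are deliberately tiny and insecure, chosen only for illustration, and nothing here reflects the reductions or constructions of the thesis.

```python
# Toy LWE instance b = A*s + e (mod q). Parameters are deliberately tiny and insecure;
# the point is only to make the LWE object concrete, not to reflect the thesis's results.
import numpy as np

rng = np.random.default_rng(7)
n, m, q = 8, 16, 97                     # secret dimension, number of samples, modulus (toy values)

s = rng.integers(0, q, size=n)          # secret vector
A = rng.integers(0, q, size=(m, n))     # public uniform matrix
e = rng.integers(-2, 3, size=m)         # small error vector

b = (A @ s + e) % q                     # the LWE samples are the pair (A, b)

# Decision-LWE asks to distinguish (A, b) from (A, uniform). Knowing the secret,
# the noise can be recovered and is visibly small:
noise = (b - A @ s) % q
noise = np.where(noise > q // 2, noise - q, noise)   # re-center into (-q/2, q/2]
print("error vector e:  ", e)
print("recovered noise: ", noise)
```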
|
398 |
Prospective Memory and Intention Deactivation: Challenges, Mechanisms and Modulators
Möschl, Marcus, 20 December 2019 (has links)
From the simple act of picking up a glass of water while talking to someone at a party, to remembering to swing by the bike shop to pick up an inner tube while riding through traffic on our way home from the office, intentions guide and alter our behavior—often while we are busily engaged in other ongoing tasks. Particularly, performing delayed intentions, like stopping at the bike shop on our way home, relies on a set of cognitive processes summarized as prospective memory (PM) that enable us to postpone intended actions until a later point in time (time-based PM) or until specific reminders or PM cues signal the appropriate opportunity to retrieve and perform an intended action (event-based PM). Interestingly, over the past decades a growing number of studies have shown that successfully completing an event-based intention does not necessarily lead to its immediate deactivation. Instead, no-longer-relevant PM cues can incur so-called aftereffects that impair task performance and sometimes even trigger erroneous repetitions of the intended action (i.e., commission errors). Although in our everyday lives we frequently rely on both PM and intention deactivation, still relatively little is known about how our cognitive system actually manages to deactivate completed intentions, under which conditions this may fail, and how well PM and intention deactivation function under extreme conditions, like acute stress.
In order to answer these questions, I first conducted a comprehensive review of the published literature on aftereffects of completed intentions. Here, I found that although intentions can incur aftereffects in terms of commission errors and performance costs that most likely result from continued intention retrieval, they generally seem to be deactivated or even inhibited at some point. Most importantly, this deactivation process does not operate like a light switch but dynamically moves along a continuum from complete reactivation to complete deactivation of intentions, and is substantially modulated by factors that also affect retrieval of intentions prior to their completion. Specifically, intention deactivation is most likely to fail when we remain within the same context in which we originally completed the intention and encounter no-longer-relevant PM cues that are extremely salient and were strongly linked to the intended action.
Subsequently, in Study 1 I directly tested a dual-mechanisms account of aftereffects of completed intentions. Building on findings of impaired intention deactivation in older adults who often show deficits in cognitive-control abilities, this account posits that aftereffects and commission errors in particular stem from a failure to exert cognitive control when no-longer-relevant PM cues trigger retrieval of an intention. Accordingly, intention deactivation should hinge on the availability of cognitive-control resources at the moment we encounter no-longer-relevant PM cues. In order to test this, I assessed aftereffects of completed intentions in younger and older adults while manipulating transient demands on information processing during encounters of no-longer-relevant PM cues on a trial-by-trial basis. In Experiment 1, nominally more older adults than younger adults made a commission error. Additionally, medium demands on cognitive control substantially reduced aftereffects compared to low and high demands (i.e., u-shaped relation). In Experiment 2, which extended this manipulation but only tested younger adults, however, this control-demand effect did not replicate. Instead, aftereffects occurred regardless of cognitive-control demands. The lack of a consistent control-demand effect on aftereffects across two experiments suggested that cognitive control either only plays a minor role for the occurrence of aftereffects or that, more likely, intention deactivation hinges on other specific cognitive-control abilities, like response inhibition.
In two subsequent studies, I extended this research and tested the effects of acute stress—a potent modulator of cognitive-control functioning—on PM and intention deactivation. Previous studies showed that, under moderate demands, acute stress had no effect on PM-cue detection, intention deactivation or performance costs that presumably arise from monitoring for PM cues. Importantly, however, based on these studies it remained unclear if acute stress affects PM and intention deactivation under high demands, as has been observed, for instance, with working-memory performance. To test such a potential demand-dependence of acute stress effects on PM, I first assessed the effects of psychosocial stress induction with the Trier Social Stress Test on PM and intention deactivation when detecting PM cues and intention deactivation were either low or high demanding (Study 2). Building on this work, I then tested the effects of combined physiological and psychosocial stress induction with the Maastricht Acute Stress Test on PM and the ability to track one’s own performance (i.e., output monitoring), when PM-cue detection was difficult and ongoing tasks additionally posed either low or high demands on working memory (Study 3). Despite successful stress induction (e.g., increased levels of salivary cortisol and impaired subjective mood), both studies showed that PM-cue detection and intention retrieval were not affected by acute stress under any of these conditions. Study 2 revealed a tendency for a higher risk of making commission errors under stress when no-longer-relevant PM cues were salient and difficult to ignore. Study 3 additionally showed that acute stress had no effect on output monitoring. Most importantly, however, across the different PM tasks and stress-induction protocols in these studies, acute stress substantially reduced performance costs from monitoring for PM cues, but did so only when PM-cue detection was difficult. This effect suggested that, depending on task demands, acute stress might shift retrieval processes in PM away from costly monitoring-based retrieval towards a more economic spontaneous retrieval of intended actions.
In summary, the present thesis suggests that the processes underlying prospective remembering and intention deactivation are tightly woven together and are only selectively affected by cognitive-control availability and effects of acute stress. With this, it contributed substantially to our understanding of these essential cognitive capacities and their reliability. My research showed that PM is remarkably resilient against effects of acute stress experiences when remembering intended actions is supported by external reminders. Acute stress may actually make monitoring for such reminders more efficient when they are hard to detect. Additionally, it showed that, in most circumstances, we seem to be able to successfully and quickly deactivate intentions once they are completed. It is only under some conditions that intention deactivation may be slow, sporadic or fail, which can lead to continued retrieval of completed intentions. While this seems not to be affected by transient demands on information processing during encounters of no-longer-relevant PM cues, intention deactivation might become difficult for older adults and stressed individuals when no-longer-relevant reminders of intentions easily trigger the associated action and are hard to ignore.
|
399 |
Automated Medication Dispensing Cabinet and Medication Errors
Walsh, Marie Helen, 01 January 2015 (has links)
The number of deaths due to medical errors in hospitals ranges from 44,000 to 98,000 yearly. More than 7,000 of these deaths have taken place due to medication errors. This project evaluated the implementation of an automated medication dispensing cabinet, or PYXIS machine, in a 25-bed upper Midwestern critical access hospital. Lewin's stage theory of organizational change and Rogers's diffusion of innovations theory supported the project. Nursing staff members were asked to complete an anonymous, qualitative survey approximately 1 month after the implementation of the PYXIS and again 1 year later. Questions were focused on the device and its use in preventing medication errors in the hospital. In addition to the surveys, interviews were conducted with the pharmacist, the pharmacy technicians, and the director of nursing 1 year after implementation to ascertain perceptions of the change from paper-based medication administration to use of the automated medication dispensing cabinet. Medication errors before, during, and after the PYXIS implementation were analyzed. The small sample and the small number of medication errors allowed simple counts and qualitative analysis of the data. The staff members were generally satisfied with the change, although they acknowledged workflow disruption and increased medication errors. The increase in medication errors may be due in part to better documentation of errors during the transition and after implementation. Social change in practice was supported through the patient safety mechanisms and ongoing process changes that were put in place to support the new technology. This project provides direction to other critical access hospitals regarding planning considerations and best practices in implementing a PYXIS machine.
|
400 |
Vers l'efficacité et la sécurité du chiffrement homomorphe et du cloud computing / Towards efficient and secure Fully Homomorphic Encryption and cloud computing
Chillotti, Ilaria, 17 May 2018 (has links)
Fully homomorphic encryption is a branch of cryptology in which encryption schemes allow computations to be performed on encrypted data without decrypting them. The main interest of homomorphic encryption schemes is the large number of practical applications for which they can be used, such as electronic voting, computations on sensitive data (for example medical or financial data), and cloud computing. The first fully homomorphic encryption scheme was proposed only in 2009 by Gentry. He introduced a technique called bootstrapping, used to reduce the noise in ciphertexts: in all the proposed homomorphic encryption schemes, ciphertexts contain a small amount of noise, which is necessary for security reasons. When computations are performed on noisy ciphertexts, the noise increases and, after a certain number of operations, becomes too large; if it is not controlled, it may compromise the correctness of the final result. Bootstrapping is therefore fundamental to the construction of fully homomorphic encryption schemes, but it is very costly in both memory and computation time. The works that followed Gentry's breakthrough aimed to propose new constructions and to improve bootstrapping in order to make homomorphic encryption practical. One of the best-known constructions is GSW, proposed by Gentry, Sahai and Waters in 2013. The security of GSW is based on the LWE (learning with errors) problem, which is considered hard in practice. The fastest bootstrapping on a GSW-type scheme was presented by Ducas and Micciancio in 2015. In this thesis, we propose a new variant of the homomorphic encryption scheme of Ducas and Micciancio, called TFHE. The TFHE scheme improves on previous results by performing a faster bootstrapping (in the range of a few milliseconds) and by using smaller bootstrapping keys, for the same security level. TFHE uses TLWE and TGSW ciphertexts (both scalar and ring): the acceleration of the bootstrapping is mainly due to the use of an external product between TLWE and TGSW, in place of the internal GSW product used in the majority of previous constructions. Two kinds of bootstrapping are presented. The first, called gate bootstrapping, is performed after the homomorphic evaluation of a logic gate (binary or Mux); the second, called circuit bootstrapping, can be executed after the evaluation of a larger number of homomorphic operations, in order to refresh the result or to make it compatible with subsequent computations. In this thesis, we also propose new techniques to speed up homomorphic computations without bootstrapping, together with new data-packing techniques. In particular, we present a packing, called vertical packing, that can be used to evaluate look-up tables efficiently, we propose an evaluation via weighted deterministic automata, and we present a homomorphic counter, called TBSR, that can be used to evaluate arithmetic functions. During the thesis, the TFHE scheme was implemented and is available as open source.
The thesis also contains ancillary works. The first concerns the study of a first theoretical model of post-quantum electronic voting based on fully homomorphic encryption, the second analyzes the security of homomorphic encryption families in a practical cloud deployment scenario, and the third explores a different solution for secure computation, namely multi-party computation.
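As a purely illustrative aside on why bootstrapping is needed at all (this is not TFHE, whose ciphertexts, parameters and noise management are far more sophisticated), the toy LWE-style scheme below encrypts single bits and shows the noise growing as ciphertexts are added homomorphically; once the noise reaches a quarter of the modulus, decryption of the result starts to fail.

```python
# Toy LWE-style bit encryption illustrating noise growth under homomorphic addition,
# the phenomenon bootstrapping is designed to control. This is NOT TFHE; all
# parameters, encodings, and noise sizes are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(3)
n, q = 32, 2048
s = rng.integers(0, 2, size=n)                     # binary secret key

def encrypt(bit):
    a = rng.integers(0, q, size=n)
    e = int(rng.integers(-8, 9))                   # small fresh noise
    b = (int(a @ s) + bit * (q // 2) + e) % q      # message encoded in the top bit
    return a, b

def add(ct1, ct2):                                 # homomorphic addition (XOR of plaintext bits)
    (a1, b1), (a2, b2) = ct1, ct2
    return (a1 + a2) % q, (b1 + b2) % q

def noise(ct):                                     # centered noise of a ciphertext of 0
    a, b = ct
    phase = (b - int(a @ s)) % q
    return phase if phase <= q // 2 else phase - q

for k in (10, 100, 1000):
    acc = encrypt(0)
    for _ in range(k - 1):                         # sum k fresh encryptions of zero
        acc = add(acc, encrypt(0))
    print(f"{k:5d} summed ciphertexts: |noise| = {abs(noise(acc)):4d}"
          f" (decryption fails once this reaches {q // 4})")
```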
|