71

Energy and Transient Power Minimization During Behavioral Synthesis

Mohanty, Saraju P 17 October 2003
The proliferation of portable systems and mobile computing platforms has increased the need for the design of low-power integrated circuits. The increase in chip density and clock frequencies due to technology advances has made low power design a critical issue, further driven by factors such as thermal considerations and environmental concerns. In low-power design for battery-driven portable applications, the reduction of peak power, peak power differential, average power and energy are equally important. In this dissertation, we propose a framework for the reduction of these parameters through datapath scheduling at the behavioral level. Several ILP-based and heuristic-based scheduling schemes are developed for datapath synthesis assuming: (i) single supply voltage and single frequency (SVSF), (ii) multiple supply voltages and dynamic frequency clocking (MVDFC), and (iii) multiple supply voltages and multicycling (MVMC). The scheduling schemes attempt to minimize: (i) energy, (ii) energy delay product, (iii) peak power, (iv) simultaneous peak power and average power, (v) simultaneous peak power, average power, peak power differential and energy, and (vi) power fluctuation. A new parameter called the "Cycle Power Function" (CPF) is defined, which captures the transient power characteristics as the equally weighted sum of normalized mean cycle power and normalized mean cycle differential power. Minimizing this parameter using multiple supply voltages and dynamic frequency clocking reduces both energy and transient power. The cycle differential power can be modeled either as the absolute deviation from the average power or as the cycle-to-cycle power gradient. The switching activity information is obtained from behavioral simulations. Power fluctuation is modeled as the cycle-to-cycle power gradient, and to reduce fluctuation the mean power gradient (MPG) is minimized. The power models take into consideration the effect of switching activity on the power consumption of the functional units. Experimental results for selected high-level synthesis benchmark circuits under different constraints indicate that significant reductions in power, energy and energy delay product can be obtained, and that the MVDFC and MVMC schemes yield better power reduction than the SVSF scheme. Several application-specific VLSI circuits were also designed and implemented for digital watermarking of images. Digital watermarking is the process of embedding data, called a watermark, into a multimedia object such that the watermark can be detected or extracted later to make an assertion about the object. A class of VLSI architectures was proposed for various watermarking algorithms: (i) a spatial-domain invisible-robust watermarking scheme, (ii) a spatial-domain invisible-fragile watermarking scheme, (iii) a spatial-domain visible watermarking scheme, (iv) a DCT-domain invisible-robust watermarking scheme, and (v) a DCT-domain visible watermarking scheme. Prototype implementations of (i), (ii) and (iii) are given. The hardware modules can be incorporated in a JPEG encoder or a digital still camera.
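To make the CPF definition concrete, here is a small illustrative sketch (not code from the dissertation) of how a cycle power function could be evaluated for a candidate schedule, assuming hypothetical per-cycle power estimates, peak-power normalization, and the gradient form of differential power:

    # Illustrative sketch: Cycle Power Function as the equally weighted sum of
    # normalized mean cycle power and normalized mean cycle differential power,
    # per the abstract's definition. All numbers below are assumptions.
    def cycle_power_function(cycle_power, beta=0.5):
        """cycle_power: estimated power values, one per control step."""
        p_peak = max(cycle_power)          # assumed normalization constant
        mean_power = sum(cycle_power) / len(cycle_power)
        # Differential power modeled as the cycle-to-cycle gradient (one of
        # the two options mentioned in the abstract).
        diffs = [abs(cycle_power[k] - cycle_power[k - 1])
                 for k in range(1, len(cycle_power))]
        mean_diff = sum(diffs) / len(diffs) if diffs else 0.0
        return beta * (mean_power / p_peak) + (1 - beta) * (mean_diff / p_peak)

    # Two candidate schedules for the same datapath; a CPF-minimizing
    # scheduler would prefer the smoother profile.
    print(cycle_power_function([40.0, 38.0, 41.0, 39.0]))   # smooth
    print(cycle_power_function([75.0, 12.0, 70.0, 10.0]))   # spiky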
72

Joint Compression and Digital Watermarking: Information-Theoretic Study and Algorithms Development

Sun, Wei January 2006
In digital watermarking, a watermark is embedded into a covertext in such a way that the resulting watermarked signal is robust to certain distortion caused by either standard data processing in a friendly environment or malicious attacks in an unfriendly environment. The watermarked signal can then be used for different purposes, ranging from copyright protection, data authentication and fingerprinting to information hiding. In this thesis, digital watermarking is investigated from both an information-theoretic viewpoint and a numerical-computation viewpoint.

From the information-theoretic viewpoint, we first study a new digital watermarking scenario in which watermarks and covertexts are generated from a joint memoryless watermark-and-covertext source. This configuration differs from that treated in existing digital watermarking works, where watermarks are assumed independent of covertexts. In the case of public watermarking, where the covertext is not accessible to the watermark decoder, a necessary and sufficient condition is determined under which the watermark can be fully recovered with high probability at the end of watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel. Moreover, using similar techniques, a combined source coding and Gel'fand-Pinsker channel coding theorem is established, and an open problem recently proposed by Cox et al. is solved. Interestingly, the necessary and sufficient condition shows that, owing to the correlation between the watermark and covertext, watermarks can still be fully recovered with high probability even if the entropy of the watermark source is strictly above the standard public watermarking capacity.

We then extend the above watermarking scenario to joint compression and watermarking, where the watermark and covertext are correlated and the watermarked signal has to be further compressed. Given an additional constraint on the compression rate of the watermarked signals, a necessary and sufficient condition is again determined under which the watermark can be fully recovered with high probability at the end of public watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel.

These two joint compression and watermarking models are further investigated under a less stringent requirement where the watermark reproduced at the end of decoding is allowed to be within a certain distortion of the original watermark. Sufficient conditions are determined in both cases under which the original watermark can be reproduced with distortion less than a given level after the watermarked signal is disturbed by a fixed memoryless attack channel and the covertext is not available to the watermark decoder.

Watermarking capacities and joint compression-and-watermarking rate regions are often characterized and/or presented as optimization problems in information-theoretic research. This does not mean, however, that they can be calculated easily. In this thesis we first derive closed forms of the watermarking capacities of private Laplacian watermarking systems with the magnitude-error distortion measure under a fixed additive Laplacian attack and a fixed arbitrary additive attack, respectively. Then, based on the idea of the Blahut-Arimoto algorithm for computing channel capacities and rate-distortion functions, two iterative algorithms are proposed for calculating the private watermarking capacities and the compression-and-watermarking rate regions of joint compression and private watermarking systems with finite alphabets. Finally, iterative algorithms are developed for calculating the public watermarking capacities and the compression-and-watermarking rate regions of joint compression and public watermarking systems with finite alphabets, based on the Blahut-Arimoto algorithm and Shannon's strategy.
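For reference, the Blahut-Arimoto algorithm mentioned above is standard; the following minimal sketch computes the capacity of a discrete memoryless channel, the building block the thesis adapts to watermarking capacities (the watermarking-specific variants are not reproduced here):

    import numpy as np

    def blahut_arimoto(W, tol=1e-12, max_iter=10_000):
        """W[x, y] = P(y | x) for a discrete memoryless channel. Returns the
        capacity in bits and the capacity-achieving input distribution."""
        p = np.full(W.shape[0], 1.0 / W.shape[0])   # start from uniform input
        for _ in range(max_iter):
            q = p @ W                               # induced output distribution
            with np.errstate(divide="ignore", invalid="ignore"):
                kl = np.where(W > 0, W * np.log(W / q), 0.0).sum(axis=1)
            D = np.exp(kl)                          # D[x] = exp(KL(W[x,:] || q))
            p_next = p * D / (p * D).sum()
            if np.abs(p_next - p).max() < tol:
                p = p_next
                break
            p = p_next
        return np.log((p * D).sum()) / np.log(2), p

    # Binary symmetric channel, crossover 0.1: capacity = 1 - h(0.1) ≈ 0.531 bits
    W = np.array([[0.9, 0.1], [0.1, 0.9]])
    print(blahut_arimoto(W)[0])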
74

Energy and transient power minimization during behavioral synthesis [electronic resource] / by Saraju P Mohanty.

Mohanty, Saraju P. January 2003
Includes vita. / Title from PDF of title page. / Document formatted into pages; contains 289 pages. / Thesis (Ph.D.)--University of South Florida, 2003. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / Abstract as in record 71 above.
75

Securing digital images

Kailasanathan, Chandrapal. January 2003
Thesis (Ph.D.)--University of Wollongong, 2003. / Typescript. Includes bibliographical references: leaves 191-198.
76

Développement de méthodes de tatouage sûres pour le traçage de contenus multimédia / Secure watermarking methods for fingerprinting of multimedia contents

Mathon, Benjamin 07 July 2011
In the first part of this thesis, we study the impact of the security constraint in watermarking. In the WOA (Watermarked contents Only Attack) framework, an adversary owns several watermarked contents and tries to estimate the secret embedding key in order to access the hidden messages. A new spread-spectrum watermarking approach is presented, based on the construction of circular distributions in the secret watermarking subspace. This technique minimizes the expected distortion caused by embedding the mark in the WOA framework, using the Hungarian optimization algorithm and transportation theory. We then verify that secure watermarking is usable in practice, taking the watermarking of natural images as an example. In the second part, we consider the fingerprinting of digital works, which makes it possible to trace redistributors of illegal copies. The traitor-tracing codes used are those proposed by Gabor Tardos; they resist collusion attacks, that is, groups of adversaries pooling their copies to forge a pirated version. Since watermarking techniques allow traitor-tracing codes to be embedded in digital contents, we designed a worst-case attack, dependent on the security level of the watermarking scheme, that allows adversaries to lower their accusation scores. We show that, for fingerprinting, a secure watermarking scheme is more efficient than an insecure one of equivalent robustness. Finally, an implementation of traitor-tracing codes in video content using secure spread-spectrum methods is proposed, and we demonstrate the effectiveness of adversary accusation in this practical setting.
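As background, here is a minimal sketch of Tardos-style fingerprinting as commonly described in the literature; the parameters, the symmetric accusation score and the majority-vote collusion below are illustrative assumptions, not details taken from the thesis:

    import numpy as np

    # Sketch of (symmetric) Tardos fingerprinting: generate user codewords and
    # score a pirated copy. Parameters are illustrative, not from the thesis.
    rng = np.random.default_rng(0)

    def tardos_generate(n_users, m, c):
        t = 1.0 / (300 * c)                     # cutoff (Tardos' choice)
        r = rng.uniform(np.arcsin(np.sqrt(t)),
                        np.arcsin(np.sqrt(1 - t)), size=m)
        p = np.sin(r) ** 2                      # bias of each code position
        X = (rng.random((n_users, m)) < p).astype(int)
        return X, p

    def accusation_scores(X, p, y):
        # Symmetric score: reward agreeing with the pirated copy y, penalize
        # disagreeing, weighted by each position's bias p.
        g1 = np.sqrt((1 - p) / p)               # score contribution when X = 1
        g0 = -np.sqrt(p / (1 - p))              # score contribution when X = 0
        S = np.where(X == 1, g1, g0) * np.where(y == 1, 1, -1)
        return S.sum(axis=1)

    X, p = tardos_generate(n_users=100, m=2000, c=3)
    pirates = [4, 17, 42]
    # Majority-vote collusion: the coalition outputs the symbol most of them hold.
    y = (X[pirates].mean(axis=0) > 0.5).astype(int)
    scores = accusation_scores(X, p, y)
    print(np.argsort(scores)[-3:])              # the three highest scorers: the pirates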
77

Equalização e identificação adaptativas utilizando marca d'agua como sinal de supervisão / Adaptative equalization and identification using watermark as supervision signal

Uliani Neto, Mario 18 January 2008
Advisors: João Marcos Travassos Romano, Leandro de Campos Teixeira Gomes / Master's thesis (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Master's degree in Electrical Engineering, Telecommunications and Telematics.
The objective of this work is to investigate the use of a digital watermark as a reference signal in supervised adaptive filtering, applied to the problems of channel equalization and identification. Unlike traditional methods, in which communication is periodically interrupted to transmit training sequences, a watermark is transmitted uninterruptedly along with the information signal. By comparing the received signal processed by the equalization filter with the original watermark, or the received signal with the watermark processed by the identification filter, the filter coefficients are continuously adapted to estimate and track the channel characteristics over time. Neither the transmission of useful information nor the adaptation of the equalization/identification filter is ever interrupted. For watermark detection to be efficient, the watermark must be synchronized at the detector. Desynchronization effects can degrade detection, potentially making proper extraction of the information contained in the watermark impracticable. This dissertation presents two resynchronization methods for filtering systems that employ a watermark. Both methods are based on training sequences embedded in the watermark and reverse the effects of a broad class of desynchronization attacks. For signals with a sensorial interpretation (e.g. audio, images, video), adding the watermark must not cause perceptible distortion to the original signal. We propose the use of a psychoacoustic model together with a spectral-shaping algorithm to demonstrate the viability of the method for audio signals. Simulation results are presented to show the performance of the proposals against traditional supervised adaptive filtering techniques.
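A minimal sketch of the core idea, with all parameters assumed for illustration: a known low-power watermark rides on the data signal and serves as the LMS reference, here for channel identification, so adaptation never interrupts communication. The data acts as interference on the adaptation, which is why the step size is small and convergence slow:

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative LMS channel identification supervised by an always-on
    # watermark (not the dissertation's code). The known watermark w is added
    # to the data at low power; at the receiver it serves as the LMS input
    # reference, so no training interruption is needed.
    n, L, mu = 200_000, 3, 0.01
    data = rng.choice([-1.0, 1.0], size=n)        # information symbols
    w = 0.3 * rng.choice([-1.0, 1.0], size=n)     # known watermark (~ -10 dB)
    h = np.array([1.0, 0.4, 0.2])                 # unknown channel to identify
    rx = np.convolve(data + w, h)[:n] + 0.01 * rng.standard_normal(n)

    h_est = np.zeros(L)
    for k in range(L, n):
        w_win = w[k - L + 1:k + 1][::-1]          # most recent watermark samples
        e = rx[k] - h_est @ w_win                 # data term acts as interference
        h_est += mu * e * w_win                   # LMS update

    print(np.round(h_est, 2))                     # ≈ [1.0, 0.4, 0.2], noisy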
78

Data hiding algorithms for healthcare applications

Fylakis, A. (Angelos) 12 November 2019
Developments in information technology have had a major impact on healthcare, producing vast amounts of data and increasing the demands associated with their secure transfer, storage and analysis. To meet these demands, biomedical data need to carry patient information and records, or even the extra biomedical images or signals required for multimodal applications. The proposed solution is to host this information within the data themselves using data hiding algorithms, introducing imperceptible modifications that serve two main purposes: increasing data-management efficiency and enhancing the security aspects of confidentiality, reliability and availability. Data hiding achieves this by embedding the payload, including components such as authentication tags, directly in the host objects, without requiring extra storage space or modifications to repositories. The proposed methods address two research problems. The first is the hospital-centric problem of efficient and secure management of data in hospital networks, including the combination of multimodal data in single objects. The host data were biomedical images and sequences intended for diagnosis, where even invisible modifications can cause errors; a determining restriction was therefore reversibility. Reversible data hiding methods remove the introduced modifications upon extraction of the payload. Embedding capacity was another priority that shaped the proposed algorithms. To meet these demands, the algorithms were based on Least Significant Bit substitution and histogram shifting. The second is the patient-centric problem, which includes user authentication and the issues of secure and efficient data transfer in eHealth systems. Two novel solutions were proposed. The first uses data hiding to increase the robustness of face biometrics in photographs; because of the high robustness requirements, a periodic-pattern embedding approach was used. The second protects sensitive user data collected by smartphones; to meet the low computational-cost requirements, it was based on Least Significant Bit substitution. In conclusion, the proposed algorithms introduced novel data hiding applications and demonstrated competitive embedding properties in existing applications.
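A minimal sketch of the Least Significant Bit substitution building block named above (illustrative only; plain LSB substitution is not reversible, and the histogram-shifting variant used for diagnostic images is not shown):

    # LSB-substitution sketch: hide a byte payload in the least significant
    # bits of 8-bit grayscale pixel values, then recover it.
    def lsb_embed(pixels, payload):
        bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
        assert len(bits) <= len(pixels), "payload exceeds embedding capacity"
        out = list(pixels)
        for k, b in enumerate(bits):
            out[k] = (out[k] & ~1) | b     # overwrite the LSB with a payload bit
        return out

    def lsb_extract(pixels, n_bytes):
        bits = [p & 1 for p in pixels[:8 * n_bytes]]
        return bytes(sum(bits[8 * j + i] << i for i in range(8))
                     for j in range(n_bytes))

    cover = [52, 55, 61, 66, 70, 61, 64, 73] * 4   # toy "image", 32 pixels
    stego = lsb_embed(cover, b"hi")                # 2 bytes = 16 bits
    print(lsb_extract(stego, 2))                   # b'hi'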
79

Chiffrement homomorphe et recherche par le contenu sécurisé de données externalisées et mutualisées : Application à l'imagerie médicale et l'aide au diagnostic / Homomorphic encryption and secure content based image retieval over outsourced data : Application to medical imaging and diagnostic assistance

Bellafqira, Reda 19 December 2017
Cloud computing has emerged as a successful paradigm that allows individuals and companies to store and process large amounts of data without having to purchase and maintain their own networks and computer systems. In healthcare, for example, various initiatives aim at sharing medical images and Personal Health Records (PHR) between health professionals or hospitals with the help of the cloud. In such an environment, data security (confidentiality, integrity and traceability) is a major issue. This thesis is set in that context; it concerns in particular the securing of Content-Based Image Retrieval (CBIR) and machine learning techniques, which are at the heart of diagnostic decision-support systems. These techniques make it possible to find images similar to an image that has not yet been interpreted. The goal is to define approaches that can exploit outsourced, secured data and enable a cloud to provide diagnostic support. Several mechanisms allow the processing of encrypted data, but most depend on interactions between different entities (the user, the cloud, or a trusted third party) and must be combined judiciously so as not to leak information during processing. In this thesis we first focused on securing an outsourced CBIR system with homomorphic encryption, under the constraint of no interaction between the users and the service provider (cloud). We then developed a secure machine learning approach based on the multilayer perceptron (MLP), whose learning phase can be outsourced in a secure way, the challenge being to ensure the convergence of the MLP; all the data and model parameters are encrypted homomorphically. Because these systems need to use information from multiple sources, each of which outsources its encrypted data under its own key, we addressed the problem of sharing encrypted data, a problem treated by Proxy Re-Encryption (PRE) schemes. In this context, we proposed the first PRE scheme that allows both the sharing and the processing of encrypted data. We also worked on a watermarking scheme for encrypted data, in order to trace and verify the integrity of data in this shared environment. The embedded message is accessible whether or not the image is encrypted, and it provides several watermarking-based security services.
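To illustrate the kind of encrypted-domain processing involved, here is a toy additively homomorphic (Paillier) sketch with deliberately tiny, insecure parameters; the thesis targets real homomorphic cryptosystems and CBIR/MLP computations, not this demo:

    import math, random

    # Toy Paillier cryptosystem (additively homomorphic). Tiny primes, purely
    # didactic: a real deployment would use 2048-bit moduli and a vetted library.
    p, q = 1009, 1013
    n = p * q
    n2 = n * n
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                      # valid decryption key since g = n + 1

    def encrypt(m):
        r = random.randrange(1, n)            # assumed coprime with n (overwhelmingly likely)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    # Additive homomorphism: Enc(a) * Enc(b) mod n^2 decrypts to a + b,
    # so a server can sum values it cannot read.
    a, b = 1234, 4321
    print(decrypt((encrypt(a) * encrypt(b)) % n2))   # 5555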
80

Itérations chaotiques pour la sécurité de l'information dissimulée / Chaotic iterations for the Hidden Information Security

Friot, Nicolas 05 June 2014
Discrete dynamical systems, iterated chaotically or asynchronously, have proved to be particularly interesting tools in computer security, thanks to the highly unpredictable behavior they exhibit under certain conditions. These chaotic iterations satisfy the property of topological chaos and can be implemented efficiently. In the state of the art, they have shown their value notably through digital watermarking schemes. However, despite their many advantages, the existing algorithms have revealed certain limitations. This thesis aims to remove these constraints by proposing new processes applicable both to digital watermarking and to steganography. We studied these new schemes from two security standpoints: topological security and security in a probabilistic framework. The analysis of their respective security levels allowed a comparison with other existing processes such as, for example, spread spectrum. Application tests were also conducted to steganalyze the proposed processes and to evaluate their robustness. The results obtained made it possible to determine how well each algorithm suits targeted application fields such as anonymity on the Internet, contributions to the development of the semantic web, or the protection of digital documents and data. In parallel with this fundamental scientific work, several technology-transfer projects were proposed, aiming at the creation of a company built on these innovative technologies.
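As background, a minimal sketch of chaotic (asynchronous) iterations of the kind studied here, using the vectorial negation that this literature often takes as its working example; the sizes and the strategy source are illustrative assumptions, not the thesis's construction:

    import random

    # Chaotic (asynchronous) iterations on Boolean states: at each step only
    # the components named by the strategy are updated with f. With f the
    # vectorial negation, such systems are known to be chaotic in Devaney's
    # sense when driven by a suitable strategy.
    N = 8

    def f(x):
        return [1 - b for b in x]          # vectorial negation

    def chaotic_iterations(x, strategy):
        for s in strategy:                 # s: subset of components to update
            fx = f(x)
            for i in s:
                x[i] = fx[i]               # update only the chosen components
        return x

    rng = random.Random(42)
    x0 = [rng.randint(0, 1) for _ in range(N)]
    strategy = [rng.sample(range(N), rng.randint(1, N)) for _ in range(20)]
    print(chaotic_iterations(list(x0), strategy))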
