11

Secure Co-design: Confidentiality Preservation in Online Engineering Collaborations

Siva Chaitanya Chaduvula (6417071) 12 October 2021 (has links)
Research in engineering design assumes that data flows smoothly among different designers within a product realization process. This assumption is not valid in many scenarios, including when designers partner with a future competitor or when a designer's search for potential collaborators is hampered by an inability to share sensitive data. This information asymmetry among designers has an adverse effect on the outcomes of the product realization process. Designers need a secure yet collaborative design process that enables them to overcome these information-related risks arising from collaborators participating in their product realization process. Existing cryptographic techniques aimed at overcoming these risks are computationally intensive, making them unsuitable for heavy engineering computations such as finite element analysis (FEA). FEA is a widely used computational technique in several engineering applications, including structural analysis, heat transfer, and fluid flow. In this work, we developed a new approach, secure finite element analysis (sFEA), with which designers can perform their analysis without revealing their confidential design data to anyone, including their design collaborators, even though the computed answer depends on confidential inputs from all the collaborators. sFEA is secure, scalable, computationally lightweight, and cloud-compatible. In addition to sFEA, we developed prototypes and demonstrated that the computational framework within sFEA is general enough to be applied to different stages of the product realization process.
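The central idea in the abstract above, computing a result from every collaborator's confidential inputs without revealing those inputs, can be illustrated with plain additive secret sharing. The Python sketch below is a generic illustration of that idea only, not the sFEA protocol itself; the two-party setup, the prime modulus and the example values are assumptions made for the sketch.

```python
import secrets

PRIME = 2**61 - 1  # an illustrative prime modulus for the sharing field

def share(value, n_parties):
    """Split an integer into n additive shares that sum to `value` mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Two collaborating designers each contribute a confidential entry; the
# assembled value (their sum) is computed on shares, so neither party ever
# sees the other's raw input.
a_shares = share(1200, 2)   # designer A's confidential contribution
b_shares = share(850, 2)    # designer B's confidential contribution
local_sums = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
print(reconstruct(local_sums))  # 2050, revealed without exposing 1200 or 850
```

The same masking idea extends to vector and matrix operations, which is why secure computation over shared or masked data is a natural fit for assembly-style steps in engineering analyses, although the actual sFEA construction is its own design.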
12

Image steganography applications for secure communication

Morkel, Tayana 28 November 2012 (has links)
To securely communicate information between parties or locations is not an easy task considering the possible attacks or unintentional changes that can occur during communication. Encryption is often used to protect secret information from unauthorised access. Encryption, however, is not inconspicuous and the observable exchange of encrypted information between two parties can provide a potential attacker with information on the sender and receiver(s). The presence of encrypted information can also entice a potential attacker to launch an attack on the secure communication. This dissertation investigates and discusses the use of image steganography, a technology for hiding information in other information, to facilitate secure communication. Secure communication is divided into three categories: self-communication, one-to-one communication and one-to-many communication, depending on the number of receivers. In this dissertation, applications that make use of image steganography are implemented for each of the secure communication categories. For self-communication, image steganography is used to hide one-time passwords (OTPs) in images that are stored on a mobile device. For one-to-one communication, a decryptor program that forms part of an encryption protocol is embedded in an image using image steganography and for one-to-many communication, a secret message is divided into pieces and different pieces are embedded in different images. The image steganography applications for each of the secure communication categories are discussed along with the advantages and disadvantages that the applications have over more conventional secure communication technologies. An additional image steganography application is proposed that determines whether information is modified during communication. Copyright / Dissertation (MSc)--University of Pretoria, 2012. / Computer Science / unrestricted
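The one-to-many category described above (a secret message divided into pieces, each piece embedded in a different image) can be sketched with basic least-significant-bit embedding. The following Python fragment is a minimal illustration of that splitting idea, not the dissertation's actual applications; the cover sizes and the message are placeholders.

```python
import numpy as np

def embed_lsb(cover, bits):
    """Write a bit sequence into the least significant bits of a uint8 cover (returns a copy)."""
    flat = cover.flatten()                      # flatten() copies, so the cover is untouched
    assert bits.size <= flat.size, "message piece too long for this cover"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    return stego.flatten()[:n_bits] & 1

message = np.unpackbits(np.frombuffer(b"secret", dtype=np.uint8))
covers = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
pieces = np.array_split(message, len(covers))   # one piece per cover image
stegos = [embed_lsb(c, p) for c, p in zip(covers, pieces)]
recovered = np.concatenate([extract_lsb(s, p.size) for s, p in zip(stegos, pieces)])
assert np.array_equal(recovered, message)
```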
13

Secure digital documents using Steganography and QR Code

Hassanein, Mohamed Sameh January 2014 (has links)
With the increasing use of the Internet, several problems have arisen regarding the processing of electronic documents. These include content filtering and content retrieval/search. Moreover, document security has taken centre stage, including copyright protection, broadcast monitoring, etc. There is an acute need for an effective tool which can establish the identity, location and time of a document's creation, so that it can be determined whether or not the contents of the document were tampered with after creation. Owing to the sensitivity of the large amounts of data processed on a daily basis, verifying the authenticity and integrity of a document is more important now than it ever was. Unsurprisingly, document authenticity verification has become a centre of attention in research. Consequently, this research is concerned with creating a tool which deals with the above problem. This research proposes the use of a Quick Response Code as a message carrier for Text Key-print. Text Key-print is a novel method which employs the basic elements of the language (i.e. characters of the alphabet) in order to achieve authenticity of electronic documents through the transformation of their physical structure into a logical structured relationship. The resulting dimensional matrix is then converted into a binary stream and encapsulated with a serial number or URL inside a Quick Response Code (QR code) to form a digital fingerprint mark. For hiding the QR code, two image steganography techniques were developed, based upon the spatial and the transform domains. In the spatial domain, three methods were proposed and implemented based on the least significant bit insertion technique and the use of a pseudorandom number generator to scatter the message into a set of arbitrary pixels. These methods utilise the three colour channels of the RGB model in order to embed one, two or three bits per eight-bit channel, which results in three different hiding capacities. The second technique is an adaptive approach in the transform domain, where a threshold value is calculated at a predefined location to determine the embedding strength of the technique. The quality of the generated stego images was evaluated using both objective (PSNR) and subjective (DSCQS) methods to ensure the reliability of the proposed methods. The experimental results revealed that PSNR is not a strong indicator of the perceived quality of stego images, although it is not a bad guide to their actual quality either. Since the visual difference between the cover and the stego image must be absolutely imperceptible to the human visual system, it was logical to ask human observers with different qualifications and experience in the field of image processing to evaluate the perceived quality of the cover and the stego image. The subjective responses were analysed using statistical measurements to describe the distribution of the scores given by the assessors. Thus, the proposed scheme presents an alternative approach to protecting digital documents, rather than the traditional techniques of digital signature and watermarking.
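As a rough illustration of the first family of spatial-domain methods above (LSB insertion at pixel positions scattered by a pseudorandom number generator) together with the objective PSNR measure, consider the sketch below. The generator, the key, and the one-bit-per-position capacity are assumptions made for the example; the thesis's implementations differ in detail.

```python
import numpy as np

def embed_scattered(cover, bits, key):
    """LSB embedding into pseudo-randomly chosen positions of an RGB cover (sketch)."""
    stego = cover.copy()
    rng = np.random.default_rng(key)                 # stands in for the scheme's PRNG
    positions = rng.choice(cover.size, size=bits.size, replace=False)
    flat = stego.reshape(-1)
    flat[positions] = (flat[positions] & 0xFE) | bits
    return stego

def extract_scattered(stego, n_bits, key):
    rng = np.random.default_rng(key)                 # the shared key regenerates the positions
    positions = rng.choice(stego.size, size=n_bits, replace=False)
    return stego.reshape(-1)[positions] & 1

def psnr(cover, stego):
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0**2 / mse)

cover = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)   # RGB cover image
payload = np.random.randint(0, 2, 1024, dtype=np.uint8)
stego = embed_scattered(cover, payload, key=42)
assert np.array_equal(extract_scattered(stego, payload.size, 42), payload)
print("PSNR (dB):", round(psnr(cover, stego), 2))
```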
14

Towards robust steganalysis: binary classifiers and large, heterogeneous data

Lubenko, Ivans January 2013 (has links)
The security of a steganography system is defined by our ability to detect it. It is of no surprise then that steganography and steganalysis both depend heavily on the accuracy and robustness of our detectors. This is especially true when real-world data is considered, due to its heterogeneity. The difficulty of such data manifests itself in a penalty that has periodically been reported to affect the performance of detectors built on binary classifiers; this is known as cover source mismatch. It remains unclear how the performance drop that is associated with cover source mismatch is mitigated or even measured. In this thesis we aim to show a robust methodology to empirically measure its effects on the detection accuracy of steganalysis classifiers. Some basic machine-learning-based methods, which take their origin in domain adaptation, are proposed to counter it. Specifically, we test two hypotheses through an empirical investigation. First, that linear classifiers are more robust than non-linear classifiers to cover source mismatch in real-world data and, second, that linear classifiers are so robust that given sufficiently large mismatched training data they can equal the performance of any classifier trained on small matched data. With the help of theory we draw several nontrivial conclusions based on our results. The penalty from cover source mismatch may, in fact, be a combination of two types of error: estimation error and adaptation error. We show that relatedness between training and test data, as well as the choice of classifier, both have an impact on adaptation error, which, as we argue, ultimately defines a detector's robustness. This provides a novel framework for reasoning about what is required to improve the robustness of steganalysis detectors. Whilst our empirical results may be viewed as the first step towards this goal, we show that our approach provides clear advantages over earlier methods. To our knowledge this is the first study of this scale and structure.
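A toy version of the kind of experiment described above, comparing a linear classifier trained on matched data against one trained on a larger but mismatched source, might look like the following sketch. The synthetic features, the shift parameter and the sample sizes are invented for illustration and merely stand in for real steganalysis features and cover sources.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_features(n, shift):
    """Synthetic stand-in for steganalysis features; `shift` models a different cover source."""
    covers = rng.normal(shift, 1.0, (n, 50))
    stegos = rng.normal(shift + 0.15, 1.0, (n, 50))   # embedding leaves a small statistical trace
    return np.vstack([covers, stegos]), np.array([0] * n + [1] * n)

X_matched, y_matched = make_features(500, shift=0.0)        # small set, same source as the test data
X_mismatch, y_mismatch = make_features(20000, shift=0.5)    # large set, different cover source
X_test, y_test = make_features(500, shift=0.0)

for name, X, y in [("matched", X_matched, y_matched), ("mismatched", X_mismatch, y_mismatch)]:
    clf = LogisticRegression(max_iter=1000).fit(X, y)       # a linear detector
    print(name, round(accuracy_score(y_test, clf.predict(X_test)), 3))
```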
15

Adaptive multiobjective memetic optimization: algorithms and applications

Dang, Hieu January 1900 (has links)
The thesis presents research on multiobjective optimization based on memetic computing and its applications in engineering. We have introduced a framework for adaptive multiobjective memetic optimization algorithms (AMMOA) with an information-theoretic criterion for guiding the selection, clustering, and local refinements. A robust stopping criterion for AMMOA has also been introduced to solve non-linear and large-scale optimization problems. The framework has been implemented for different benchmark test problems with remarkable results. This thesis also presents two applications of these algorithms. First, an optimal image data hiding technique has been formulated as a multiobjective optimization problem with conflicting objectives. In particular, trade-off factors in designing an optimal image data hiding scheme are investigated to maximize the quality of watermarked images and the robustness of the watermark. With a fixed-size logo watermark, there is a conflict between these two objectives, so a multiobjective optimization problem is introduced. We propose to use a hybrid of general regression neural networks (GRNN) and the adaptive multiobjective memetic optimization algorithm (AMMOA) to solve this challenging problem. This novel image data hiding approach has been implemented for many different natural test images with remarkable robustness and transparency of the embedded logo watermark. We also introduce a perceptual measure based on the relative Rényi information spectrum to evaluate the quality of watermarked images. The second application is the problem of joint spectrum sensing and power control optimization for a multichannel, multiple-user cognitive radio network. We investigate trade-off factors in designing efficient spectrum sensing techniques to maximize the throughput and minimize the interference. To maximize the throughput of secondary users and minimize the interference to primary users, we propose a joint determination of the sensing and transmission parameters of the secondary users, such as sensing times, decision threshold vectors, and power allocation vectors. There is a conflict between these two objectives, so a multiobjective optimization problem is again solved with AMMOA. This algorithm learns to find optimal spectrum sensing times, decision threshold vectors, and power allocation vectors to maximize the averaged opportunistic throughput and minimize the averaged interference to the cognitive radio network. / February 2016
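The conflict described above, image quality versus watermark robustness at a fixed payload, is what a multiobjective optimizer explores as a Pareto front. The sketch below is only a minimal non-dominated filter over hypothetical (quality, robustness) pairs; it is not AMMOA, whose selection, clustering and local refinement steps are far more involved.

```python
def pareto_front(solutions):
    """Return the non-dominated solutions for two maximised objectives,
    e.g. (watermarked-image quality, watermark robustness)."""
    front = []
    for i, (q_i, r_i) in enumerate(solutions):
        dominated = any(
            q_j >= q_i and r_j >= r_i and (q_j > q_i or r_j > r_i)
            for j, (q_j, r_j) in enumerate(solutions) if j != i
        )
        if not dominated:
            front.append((q_i, r_i))
    return front

# Hypothetical (PSNR in dB, detection rate) pairs for candidate embedding strengths.
candidates = [(42.1, 0.62), (40.3, 0.80), (44.0, 0.45), (39.0, 0.78), (41.5, 0.70)]
print(pareto_front(candidates))   # the trade-off curve the memetic optimizer searches along
```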
16

Création et évaluation statistique d'une nouvelle famille de générateurs pseudo-aléatoires chaotiques / Creation and statistical evaluation of a new family of chaotic pseudo-random generators

Wang, Qianxue 27 March 2012 (has links)
Dans cette thèse, une nouvelle manière de générer des nombres pseudo-aléatoires est présentée. La proposition consiste à mixer deux générateurs existants avec des itérations chaotiques discrètes, qui satisfont à la définition de chaos proposée par Devaney. Un cadre rigoureux est introduit, dans lequel les propriétés topologiques du générateur résultant sont données. Deux réalisations pratiques d'un tel générateur sont ensuite présentées et évaluées. On montre que les propriétés statistiques des générateurs fournis en entrée peuvent être grandement améliorées en procédant ainsi. Ces deux propositions sont alors comparées, en profondeur, entre elles et avec un certain nombre de générateurs préexistants. On montre entre autres que la seconde manière de mixer deux générateurs est largement meilleure que la première, à la fois en termes de vitesse et de performances. Dans la première partie de ce manuscrit, la fonction d'itérations considérée est la négation vectorielle. Dans la deuxième partie, nous proposons d'utiliser des graphes fortement connexes comme critère de sélection de bonnes fonctions d'itérations. Nous montrons que nous pouvons changer de fonction sans perte de propriétés pour le générateur obtenu. Finalement, une illustration dans le domaine de l'information dissimulée est présentée, et la robustesse de l'algorithme de tatouage numérique proposé est évaluée. / In this thesis, a new way to generate pseudorandom numbers is presented. The proposition is to mix two existing generators with discrete chaotic iterations that satisfy Devaney's definition of chaos. A rigorous framework is introduced, where topological properties of the resulting generator are given, and two practical designs are presented and evaluated. It is shown that the statistical quality of the input generators can be greatly improved in this way, thus fulfilling up-to-date standards. Comparisons between these two designs and existing generators are investigated in detail. Among other things, it is established that the second proposed technique outperforms the first one, both in terms of performance and speed. In the first part of this manuscript, the iteration function embedded into the chaotic iterations is the vectorial Boolean negation. In the second part, we propose a method using graphs having strongly connected components as a selection criterion. We are thus able to modify the iteration function without degrading the good properties of the associated generator. Simulation results and basic security analysis are then presented to evaluate the randomness of this new family of pseudorandom generators. Finally, an illustration in the field of information hiding is presented, and the robustness of the obtained data hiding algorithm against attacks is evaluated.
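The general scheme, two existing generators driving discrete chaotic iterations whose iteration function is the vectorial Boolean negation, can be caricatured in a few lines. The sketch below is only a toy rendering of that idea: it reuses Python's built-in generator for both inputs and makes no claim to the topological or statistical properties established in the thesis.

```python
import random

def chaotic_iteration_prng(n_bits=32, n_outputs=4, seed1=1, seed2=2):
    """Mix two generators with chaotic iterations: one generator picks which bit of
    the state to negate, the other picks how many negations to apply per output."""
    strategy_gen = random.Random(seed1)   # chooses the coordinate to negate
    count_gen = random.Random(seed2)      # chooses the number of iterations per output
    state = strategy_gen.getrandbits(n_bits)
    outputs = []
    for _ in range(n_outputs):
        for _ in range(count_gen.randint(n_bits // 2, n_bits)):
            state ^= 1 << strategy_gen.randrange(n_bits)   # negate (flip) the selected bit
        outputs.append(state)
    return outputs

print(chaotic_iteration_prng())
```

In the actual constructions, the choice and quality of the two inner generators, and the strong connectivity of the iteration graph, are what determine whether the resulting sequence passes statistical test batteries.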
17

Tatouage conjoint à la compression d'images fixes dans JPEG2000 / Joint watermarking and compression of still images in JPEG2000

Goudia, Dalila 06 December 2011 (has links)
Les technologies numériques et du multimédia ont connu de grandes avancées ces dernières années. La chaîne de transmission des images est constituée de plusieurs traitements divers et variés permettant de transmettre un flux de données toujours plus grand avec toujours plus de services à la clé. Nous citons, par exemple, la compression, l'augmentation de contenu, la confidentialité, l'intégrité et l'authenticité des images pendant leur transmission. Dans ce contexte, les approches conjointes ont suscité un intérêt certain de la part de la communauté du traitement d'images car elles permettent d'obtenir des systèmes de faible complexité calculatoire pouvant être utilisés dans des applications nécessitant peu de ressources matérielles. La dissimulation de données, ou Data Hiding, est l'art de cacher un message dans un support numérique. L'une des branches les plus importantes du data hiding est le tatouage numérique ou watermarking. La marque doit rester présente dans l'image hôte même si celle-ci subit des modifications appelées attaques. La compression d'images a comme objectif de réduire la taille des images stockées et transmises afin d'augmenter la capacité de stockage et de minimiser le temps de transmission. La compression représente une opération incontournable du stockage ou du transfert d'images. Elle est considérée par le data hiding comme une attaque particulièrement destructrice. La norme JPEG2000 est le dernier standard ISO/ITU-T pour le codage des images fixes. Dans cette thèse, nous étudions de manière conjointe la compression avec perte et le data hiding dans le domaine JPEG2000. L'approche conjointe offre de nombreux avantages, dont le plus important est que la compression ne constitue plus une attaque vis-à-vis du data hiding. Les contraintes à respecter sont exprimées en termes de compromis à atteindre : compromis entre la quantité d'information insérée (payload), le taux de compression, la distorsion induite par l'insertion du message et la robustesse de la marque dans le cas du tatouage. Nos travaux de recherche ont conduit à l'élaboration de plusieurs schémas conjoints : un schéma conjoint d'insertion de données cachées et deux schémas conjoints de tatouage dans JPEG2000. Tous ces systèmes conjoints reposent sur des stratégies d'insertion informée basées sur la quantification codée par treillis (TCQ). Les propriétés de codage de canal de la TCQ sont exploitées pour pouvoir à la fois quantifier et insérer un message caché (ou une marque) pendant l'étape de quantification de JPEG2000. / Technological advances in the fields of telecommunications and multimedia over the last two decades have led to novel image processing services such as copyright protection, data enrichment and information hiding applications. There is a strong need for low-complexity applications that can perform several image processing services within a single system. In this context, the design of joint systems has attracted researchers over the past few years. Data hiding techniques embed an invisible message within a multimedia content by modifying the media data. This process is done in such a way that the hidden data is not perceptible to an observer. Digital watermarking is one type of data hiding. The watermark should be resistant to a variety of manipulations called attacks. The purpose of image compression is to represent images with less data in order to save storage costs or transmission time. Compression is generally unavoidable for transmission or storage purposes and, from the data hiding point of view, is considered one of the most destructive attacks. JPEG2000 is the latest ISO/ITU-T standard for still image compression. In this thesis, joint compression and data hiding is investigated in the JPEG2000 framework. Instead of treating data hiding and compression separately, it is interesting and beneficial to look at the joint design of a data hiding and compression system. The joint approach has many advantages, the most important being that compression is no longer considered an attack by data hiding. The main constraints that must be considered are the trade-offs between payload, compression bitrate, distortion induced by the insertion of the hidden data or the watermark, and robustness of the watermark in the watermarking context. We have proposed several joint JPEG2000 compression and data hiding schemes. Two of these joint schemes are watermarking systems. All the embedding strategies proposed in this work are based on Trellis Coded Quantization (TCQ). We exploit the channel coding properties of TCQ to reliably embed data during the quantization stage of the JPEG2000 Part 2 codec.
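The trellis-coded-quantization embedding itself is too involved for a short example, but the underlying idea of hiding a bit by steering the quantizer choice during the quantization stage can be illustrated with plain scalar quantization index modulation (QIM). The sketch below is a simplified relative of that approach, not the thesis's TCQ scheme; the step size and the Gaussian stand-in for subband coefficients are assumptions.

```python
import numpy as np

def qim_embed(coeffs, bits, step=8.0):
    """Quantize coefficients while embedding one bit each by choosing between
    two interleaved scalar quantizer lattices (offset 0 for bit 0, step/2 for bit 1)."""
    offsets = np.where(bits == 0, 0.0, step / 2.0)
    return np.round((coeffs - offsets) / step) * step + offsets

def qim_extract(coeffs, step=8.0):
    d0 = np.abs(coeffs - np.round(coeffs / step) * step)                            # distance to the bit-0 lattice
    d1 = np.abs(coeffs - (np.round((coeffs - step / 2) / step) * step + step / 2))  # distance to the bit-1 lattice
    return (d1 < d0).astype(np.uint8)

rng = np.random.default_rng(0)
coeffs = rng.normal(0.0, 20.0, 64)                  # stand-in for wavelet subband coefficients
bits = rng.integers(0, 2, 64, dtype=np.uint8)
watermarked = qim_embed(coeffs, bits)
assert np.array_equal(qim_extract(watermarked), bits)   # the bits survive the quantization step
```

In the TCQ setting, the quantizer choice is made along a trellis path rather than per coefficient, which is what gives it its coding gain over this scalar version.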
18

Time-based Key for Coverless Audio Steganography: A Proposed Behavioral Method to Increase Capacity

Alanko Öberg, John, Svensson, Carl January 2023 (has links)
Background. Coverless steganography is a relatively unexplored area of steganography where the message is not embedded into a cover medium. Instead, the message is derived from one or several properties already existing in the carrier medium. This renders steganalysis methods used for traditional steganography useless. Early coverless methods were applied to images or texts, but more recently the possibilities in the video and audio domains have been explored. The audio domain, however, still remains relatively unexplored, with the earliest work being presented in 2022. In this thesis, we narrow the existing research gap by proposing an audio-compatible method which uses the timestamp that marks when a carrier medium was received to generate a time-based key which can be applied to the hash produced by said carrier. This effectively allows one carrier to represent a range of different hashes depending on the timestamp specifying when it was received, increasing capacity. Objectives. The objectives of the thesis are to explore what features of audio are suitable for steganographic use, to establish a method for finding audio clips which can represent a specific message to be sent, and to improve on the current state-of-the-art method, taking capacity, robustness and cost into consideration. Methods. A literature review was first conducted to gain insight into techniques used in previous works. This served both to illuminate features of audio that could be used to good effect in a coverless approach, and to identify coverless approaches which could work but had not been tested yet. Experiments were then performed on two datasets to show the effective capacity increase of the proposed method when used in tandem with the existing state-of-the-art method for coverless audio steganography. Additional robustness tests for said state-of-the-art method were also performed. Results. The results show that the proposed method could increase the per-message capacity from eight bits to 16 bits, while still retaining 100% effective capacity using only 200 key permutations, given a database consisting of 50 one-minute-long audio clips. They further show that the time cost added by the proposed method is in total less than 0.1 seconds for 2048 key permutations. The robustness experiments show that the hashing algorithms used in the state-of-the-art method have high robustness against additive white Gaussian noise, low-pass filters, and resampling attacks but are weaker against compression and band-pass filters. Conclusions. We address the scientific gap and complete our objectives by proposing a method which can increase the capacity of existing coverless steganography methods. We demonstrate the capacity increase our method brings by using it in tandem with the state-of-the-art method for the coverless audio domain. We argue that our method is not limited to the audio domain, or to the coverless method with which we performed our experiments. Finally, we discuss several directions for future work. / Bakgrund. Täcklös steganografi är ett relativt outforskat område inom steganografi där meddelandet, istället för att gömmas i ett medium, representeras av en eller flera egenskaper som kan erhållas från mediet. Detta faktum hindrar nuvarande steganalysmetoder från att upptäcka bruk av täcklös steganografi. Tidiga studier inom området behandlar bilder och text; senare studier har utökat området genom att behandla video och ljud. Den första studien inom täcklös ljudsteganografi publicerades år 2022.
Målet med examensarbetet är att utöka forskningen med en föreslagen ljudkompatibel metod som använder tidsstämpeln då ett meddelande mottagits för att skapa en tidsbaserad nyckel som kan appliceras på en hash erhållen från ett steganografiskt medium. Detta tillåter mediet att representera olika hashar beroende på tiden, vilket ökar kapaciteten. Syfte. Syftet med examensarbetet är att utforska vilka egenskaper i ett ljudmedium som lämpar sig för steganografiskt bruk, att skapa en metod som kan hitta ljudklipp som representerar ett efterfrågat meddelande, samt att förbättra nuvarande state-of-the-art inom täcklös ljudsteganografi genom att finna en bra balans mellan kapacitet, robusthet och kostnad. Metod. En litteraturstudie utfördes för att få förståelse för metoder använda i tidigare studier. Syftet var att hitta egenskaper i ljud som lämpar sig för täcklös ljudsteganografi samt identifiera icke-täcklösa metoder som skulle kunna anpassas för att fungera som täcklösa. Experiment utfördes sedan på två dataset för att påvisa den ökning i effektiv kapacitet den föreslagna metoden ger när den appliceras på state-of-the-art-metoden inom täcklös ljudsteganografi. Experiment utfördes även för att utöka tidigare forskning på robustheten av state-of-the-art-metoden inom täcklös ljudsteganografi. Resultat. Resultaten visar att den föreslagna metoden kan öka kapaciteten per meddelande från åtta till 16 bitar med 100% effektiv kapacitet med 200 nyckelpermutationer och en databas bestående av 50 stycken en minut långa ljudklipp. De visar även att tidskostnaden för den föreslagna metoden är mindre än 0,1 sekund för 2048 nyckelpermutationer. Experimenten på robusthet visar att state-of-the-art-metoden har god robusthet mot additivt vitt gaussiskt brus, lågpassfilter och omsampling men är svagare mot kompression och bandpassfilter. Slutsatser. Vi fullbordar målen och utökar forskningen inom området genom att föreslå en metod som kan öka kapaciteten hos befintliga täcklösa metoder. Vi demonstrerar kapacitetsökningen genom att applicera vår metod på den senaste täcklösa ljudsteganografimetoden. Vi presenterar argument för vår metods tillämpning i områden utanför ljuddomänen och utanför metoden som den applicerades på. Slutligen diskuteras riktningar för framtida forskning.
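To make the capacity mechanism concrete: a clip's hash can be combined with a timestamp-derived key so that the same clip represents different values at different agreed times. The Python sketch below is a hypothetical rendering of that idea; the SHA-256 hashes, the 8-bit message size and the XOR combination are placeholders rather than the thesis's actual hashing or key-derivation functions.

```python
import hashlib

N_BITS = 8  # bits represented by one carrier in the baseline method (illustrative)

def carrier_hash(audio_bytes):
    """Toy stand-in for the robust hash a coverless scheme derives from an audio clip."""
    return int.from_bytes(hashlib.sha256(audio_bytes).digest(), "big") % (1 << N_BITS)

def time_key(timestamp, n_permutations=200):
    """Map the receive timestamp onto one of a small set of agreed key values."""
    slot = int(timestamp) % n_permutations
    return int.from_bytes(hashlib.sha256(str(slot).encode()).digest(), "big") % (1 << N_BITS)

def represented_value(audio_bytes, timestamp):
    # The same clip stands for different values at different timestamps, which is
    # how a fixed clip database can cover a larger message space.
    return carrier_hash(audio_bytes) ^ time_key(timestamp)

clip = b"\x00\x01" * 4000  # placeholder for real audio samples
print(represented_value(clip, 1_700_000_000), represented_value(clip, 1_700_000_060))
```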
19

A study in how to inject steganographic data into videos in a sturdy and non-intrusive manner / En studie i hur steganografisk data kan injiceras i videor på ett robust och icke-påträngande sätt

Andersson, Julius, Engström, David January 2019 (has links)
It is desirable for companies to be able to hide data inside videos so that the source of any unauthorised sharing of a video can be found. The hidden data (the payload) should damage the original data (the cover) as little as possible while also making it hard to remove the payload without severely damaging the cover. It was determined that the most appropriate place to hide data in a video was in the visual information, so the cover is an image. Two injection methods were developed, along with three methods for attacking the payload. One injection method changes the pixel values of an image directly to hide the payload; the other transforms the image into the cosine waves that represent it and then changes those cosine waves to hide the payload. Attacks were developed to test how hard it was to remove the hidden data. The methods for attacking the payload were to add or subtract a random value from each pixel, to set all bits of a certain significance to 1, or to compress the image with JPEG. The result of the study was that the method that changed the image directly was significantly faster than the method that transformed the image, and it had capacity for a larger payload. The injection methods protected the payload to different degrees against the various attacks, so which method is best in that regard depends on the type of attack. / Det är önskvärt för företag att kunna gömma data i videor så att de kan hitta källorna till obehörig delning av en video. Den data som göms bör skada den ursprungliga datan så lite som möjligt, medan det också är så svårt som möjligt att radera den gömda datan utan att den ursprungliga datan skadas mycket. Studien kom fram till att det bästa stället att gömma data i videor är i den visuella delen, så datan göms i bilderna i videon. Två metoder skapades för att injicera gömd data och tre skapades för att förstöra den gömda datan. En injektionsmetod ändrar bildens pixelvärden direkt för att gömma datan, medan den andra transformerar bilden till cosinusvågor och sedan ändrar de vågorna för att gömma datan. Attacker utformades för att testa hur svårt det var att förstöra den gömda datan. Metoderna för att attackera den gömda datan var att lägga till eller ta bort ett slumpmässigt värde från varje pixel, att sätta varje bit av en särskild nivå till 1 och att komprimera bilden med JPEG. Resultatet av studien var att metoden som ändrade bilden direkt var mycket snabbare än metoden som först transformerade bilden, och den hade också plats för mer gömd data. Injektionsmetoderna var olika bra på att skydda den gömda datan mot de olika attackerna, så vilken metod som var bäst i den aspekten beror på vilken typ av attack som används.
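A minimal way to reproduce the spirit of the first injection method and the first attack above (direct LSB changes to pixel values, followed by a small random value added to or subtracted from each pixel) is sketched below in Python. The frame size, payload length and noise amplitude are arbitrary choices for illustration, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def lsb_embed(frame, bits):
    """Hide the payload bits in the least significant bits of a uint8 frame (returns a copy)."""
    stego = frame.copy()
    flat = stego.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return stego

def noise_attack(frame, amplitude=2):
    """The first attack type: add or subtract a small random value at each pixel."""
    noisy = frame.astype(np.int16) + rng.integers(-amplitude, amplitude + 1, frame.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

frame = rng.integers(0, 256, (90, 160), dtype=np.uint8)    # one video frame used as the cover
payload = rng.integers(0, 2, 2048, dtype=np.uint8)
stego = lsb_embed(frame, payload)
attacked = noise_attack(stego)
recovered = attacked.reshape(-1)[: payload.size] & 1
print("bit error rate after attack:", round(float(np.mean(recovered != payload)), 3))
```

Even a small per-pixel perturbation scrambles a large fraction of the payload bits, which illustrates why the direct pixel-value method trades robustness for speed and capacity.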
20

Malleability, obliviousness and aspects for broadcast service attachment

Harrison, William January 2010 (has links)
An important characteristic of Service-Oriented Architectures is that clients do not depend on the service implementation's internal assignment of methods to objects. It is perhaps the most important technical characteristic that differentiates them from more common object-oriented solutions. This characteristic makes clients and services malleable, allowing them to be rearranged at run-time as circumstances change. That improvement in malleability is impaired by requiring clients to direct service requests to particular services. Ideally, the clients are totally oblivious to the service structure, as they are to aspect structure in aspect-oriented software. Removing knowledge of a method implementation's location, whether in object or service, requires re-defining the boundary line between programming language and middleware, making clearer specification of dependence on protocols, and bringing the transaction-like concept of failure scopes into language semantics as well. This paper explores consequences and advantages of a transition from object-request brokering to service-request brokering, including the potential to improve our ability to write more parallel software.
