  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
651

E-crimes and e-authentication - a legal perspective

Njotini, Mzukisi Niven 27 October 2016
E-crimes continue to pose grave challenges to the ICT regulatory agenda. Because e-crimes involve the wrongful appropriation of information online, the question arises whether information is property that is capable of being stolen. This requires an investigation of the law of property, the basis for which is to establish whether information is property for purposes of the law. Following a study of the Roman-Dutch law approach to property, it is argued that the emergence of an information society makes real rights in information possible, because information is one of the indispensable assets of an information society. Given that information can be the object of property, its position in the law of theft is investigated. This study is followed by an examination of the conventional risks that ICTs generate. For example, there is a risk that ICTs may be used as the object of e-crimes, and a further risk that ICTs may become a tool for appropriating information unlawfully. Accordingly, the scale and impact of e-crimes are greater than those of offline crimes such as theft or fraud. The severe challenges that ICTs pose to an information society are likely to continue unless clarity is sought on whether ICTs can be regulated and, if so, how an ICT regulatory framework should be structured. A study of law and regulation for regulatory purposes reveals that ICTs are spheres where regulations apply or should apply. However, better regulations are needed to deal with the dynamics of these technologies. Smart-regulation, meta-regulation or reflexive regulation, self-regulation and co-regulation are concepts that support better regulations. Better regulations enjoin the regulatory actors, for example the state, businesses and computer users, to be involved in establishing ICT regulations. 
These ICT regulations should specifically be in keeping with the existing e-authentication measures. Furthermore, the codes-based theory, the Danger or Artificial Immune Systems (AIS) theory, the Systems theory and the Good Regulator Theorem ought to inform ICT regulations. The basis for all this should be to establish a holistic approach to e-authentication. This approach must conform to the Precautionary Approach to E-Authentication or PAEA. PAEA accepts the importance of legal rules in the ICT regulatory agenda. However, it argues that flexible regulations could provide a suitable framework within which ICTs and the ICT risks are controlled. In addition, PAEA submits that a state should not be the sole role-player in ICT regulations. Social norms, the market and the nature or architecture of the technology to be regulated are also fundamental to the ICT regulatory agenda. / Jurisprudence / LL. D.
652

Méthodologie et développement de solutions pour la sécurisation des circuits numériques face aux attaques en tensions / Methodology and design of solutions to secure digital circuits against power attacks

Gomina, Kamil 11 September 2014
Les applications grand public comme la téléphonie mobile ou les cartes bancaires manipulent des données confidentielles. A ce titre, les circuits qui les composent font de plus en plus l'objet d'attaques qui présentent des menaces pour la sécurité des données. Les concepteurs de systèmes sur puce (SoC) doivent donc proposer des solutions sécurisées, tout en limitant le coût et la complexité globale des applications. L’analyse des attaques existantes sur les circuits numériques nous a orienté vers celles se basant sur la tension d'alimentation, dans des nœuds technologiques avancés. Dans un premier temps, nous avons déterminé la signature électrique d’un circuit en phase de conception. Pour cela, un modèle électrique a été proposé, prenant en compte la consommation en courant et la capacité de la grille d'alimentation. L'extraction de ces paramètres ainsi que l'évaluation du modèle sont présentées. L’utilisation de ce modèle a permis de mesurer la vulnérabilité d’un circuit mais aussi d’évaluer quantitativement des contremesures, notamment celle utilisant des capacités de découplage. Ensuite, l’étude se consacre à l’injection de fautes par impulsions de tension d’alimentation. Les mécanismes d’injection de fautes sur des circuits numériques ont été étudiés. Dès lors, des solutions de détection d’attaques ont été proposées et évaluées à la fois en simulation et par des tests électriques sur circuit. Les résultats ont permis de confirmer les analyses théoriques et la méthodologie utilisée. Ce travail a ainsi montré la faisabilité de solutions à bas coût contre les attaques actives et passives en tension, utilisables dans le cadre d’un développement industriel de produits. / General-use products such as mobile phones or smartcards manipulate confidential data. As such, the circuits composing them are increasingly subject to physical attacks, which pose a threat to data security. As a result, SoC designers have to develop efficient countermeasures without increasing the overall cost and complexity of the final application. The analysis of existing attacks on digital circuits led us to consider power attacks in advanced technology nodes. First of all, the power signature of a circuit was determined at design time. To do so, an electrical model was proposed, based on the current consumption and the overall power grid capacitance. The methodology to extract these parameters, as well as the evaluation of the model, is presented. This model allows designers to anticipate information leakage at design time and to quantify the protection provided by countermeasures, such as the use of integrated decoupling capacitors. The study was then dedicated to power glitch attacks, and the different fault injection mechanisms were analyzed in detail. A set of detection circuits was then proposed and evaluated both at design time and on silicon through electrical tests. The test campaigns confirmed both the theoretical analysis and the methodology used. This work demonstrated that low-cost solutions against passive and active power attacks can be designed and used in large-scale product development.
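The electrical model described in this abstract (current consumption plus power-grid capacitance) can be illustrated with a toy simulation. The sketch below is a minimal lumped-RC model under invented parameters: the resistance, capacitance, and current values are illustrative assumptions, not figures from the thesis.

```python
import numpy as np

# Illustrative lumped RC model of an on-chip power grid: the supply VDD feeds
# the internal node v through a resistance R, C is the total grid plus
# decoupling capacitance, and i_load is the circuit's current draw.
# All parameter values here are invented for illustration.
VDD, R, C, DT = 1.2, 0.5, 1e-9, 1e-10  # volts, ohms, farads, seconds

def voltage_signature(i_load):
    """Integrate C dv/dt = (VDD - v)/R - i_load with a forward-Euler step."""
    v = VDD
    out = []
    for i in i_load:
        v += DT / C * ((VDD - v) / R - i)
        out.append(v)
    return np.array(out)

# A burst of switching activity draws extra current mid-trace.
i_load = np.full(200, 0.05)   # quiescent draw (amps, illustrative)
i_load[80:120] = 0.4          # high-activity burst
trace = voltage_signature(i_load)
```

The simulated supply voltage settles near VDD − R·I and dips during the current burst, which is the kind of design-time signature the abstract says can be used both to measure a circuit's vulnerability and to size decoupling-capacitor countermeasures.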
653

Reverse engineering secure systems using physical attacks / Rétro-conception de systèmes sécurisés par attaques physiques

Heckmann, Thibaut 18 June 2018
Avec l’arrivée des dernières générations de téléphones chiffrés (BlackBerry PGP, iPhone), l’extraction des données par les experts est une tâche de plus en plus complexe et devient un véritable défi notamment après une catastrophe aérienne ou une attaque terroriste. Dans cette thèse, nous avons développé des attaques physiques sur systèmes cryptographiques à des fins d’expertises judiciaires. Une nouvelle technique de re-brasage à basse température des composants électroniques endommagés, utilisant un mélange eutectique 42Sn/58Bi, a été développée. Nous avons exploité les propriétés physico-chimiques de colles polymères et les avons utilisées dans l’extraction de données chiffrées. Une nouvelle technique a été développée pour faciliter l’injection et la modification à haute-fréquence des données. Le prototype permet des analyses en temps réel des échanges processeur-mémoire en attaque par le milieu. Ces deux techniques sont maintenant utilisées dans des dispositifs d’attaques plus complexes de systèmes cryptographiques. Nos travaux nous ont mené à sensibiliser les colles polymères aux attaques laser par pigmentation. Ce processus permet des réparations complexes avec une précision laser de l’ordre de 15 micromètres. Cette technique est utilisable en réparations judiciaires avancées des crypto-processeurs et des mémoires. Ainsi, les techniques développées, mises bout à bout et couplées avec des dispositifs physiques (tomographie 3D aux rayons X, MEB, laser, acide fumant) ont permis de réussir des transplantations judiciaires de systèmes chiffrés en conditions dégradées et appliquées pour la première fois avec succès sur les téléphones BlackBerry chiffrés à l’aide de PGP. / When considering the latest generation of encrypted mobile devices (BlackBerry’s PGP, Apple’s iPhone), data extraction by experts is an increasingly complex task. Forensic analyses even become a real challenge following an air crash or a terrorist attack. 
In this thesis, we have developed physical attacks on encrypted systems for the purpose of forensic analysis. A new low-temperature re-soldering technique for damaged electronic components, using a 42Sn/58Bi eutectic mixture, has been developed. We have then exploited the physico-chemical properties of polymer adhesives and used them for the extraction of encrypted data. A new technique has been developed to facilitate high-frequency data injection and modification. Through a man-in-the-middle attack, the prototype allows real-time analysis of the data exchanges between the processor and the memory. Both techniques are now used in more complex attacks on cryptographic systems. Our research has also led us to successfully sensitise polymer adhesives to laser attacks by pigmentation. This process allows complex repairs with a laser precision of about 15 micrometres and has been used in advanced forensic repair of crypto-processors and memory chips. Finally, the techniques developed in this thesis, put end to end and coupled with physical devices (X-ray 3D tomography, laser, SEM, fuming acids), have made it possible to perform successful forensic transplants of encrypted systems in degraded conditions. We have successfully applied them, for the first time, on PGP-encrypted BlackBerry mobile phones.
654

Detekce síťových útoků pomocí nástroje Tshark / Detection of Network Attacks Using Tshark

Dudek, Jindřich January 2018
This diploma thesis deals with the design and implementation of a tool for detecting network attacks in captured network communication. It utilises the tshark packet analyser, whose purpose here is to convert the input file with the captured communication to the PDML format; the objective of this conversion is to increase the flexibility of input data processing. When designing the tool, emphasis has been placed on the ability to extend it to detect new network attacks and to integrate such additions with ease. For this reason, the thesis also includes the design of complex declarative descriptions of network attacks in the YAML serialisation format. These descriptions specify the key properties of the network attacks and the conditions for their detection. The resulting tool acts as an interpreter of the proposed declarative descriptions, allowing it to be extended with new types of attacks.
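As a rough illustration of this detection idea, the sketch below matches packets from tshark's PDML output against a rule expressed as key field/value properties, in the spirit of the declarative descriptions the abstract mentions. The rule schema and the `tcp-syn-probe` rule are hypothetical inventions, not the thesis's actual YAML format; only the PDML structure follows what `tshark -r capture.pcap -T pdml` emits.

```python
import xml.etree.ElementTree as ET

# Hypothetical rule mirroring a declarative attack description: every listed
# field must appear in the packet with the given "show" value.
SYN_SCAN_RULE = {
    "name": "tcp-syn-probe",
    "fields": {"tcp.flags.syn": "1", "tcp.flags.ack": "0"},
}

def packet_fields(packet):
    """Flatten a PDML <packet> element into a {field name: show value} map."""
    return {f.get("name"): f.get("show")
            for f in packet.iter("field") if f.get("name")}

def matches(rule, packet):
    """True when the packet satisfies every field condition of the rule."""
    fields = packet_fields(packet)
    return all(fields.get(name) == value
               for name, value in rule["fields"].items())

# A tiny PDML fragment of the shape produced by `tshark -T pdml`:
# the first packet is a bare SYN probe, the second a normal SYN-ACK.
PDML = """<pdml>
  <packet>
    <proto name="tcp">
      <field name="tcp.flags.syn" show="1"/>
      <field name="tcp.flags.ack" show="0"/>
    </proto>
  </packet>
  <packet>
    <proto name="tcp">
      <field name="tcp.flags.syn" show="1"/>
      <field name="tcp.flags.ack" show="1"/>
    </proto>
  </packet>
</pdml>"""

hits = [matches(SYN_SCAN_RULE, p) for p in ET.fromstring(PDML).iter("packet")]
```

Keeping the rule as plain data rather than code is what makes such a tool extensible: adding a new attack means adding a new description, not new parsing logic.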
655

Leakage Conversion For Training Machine Learning Side Channel Attack Models Faster

Rohan Kumar Manna (8788244) 01 May 2020 (has links)
Recent improvements in the area of the Internet of Things (IoT) have led to extensive utilization of embedded devices and sensors. Hence, the need for the safety and security of these devices increases proportionately with their use. In the last two decades, the side-channel attack (SCA) has become a massive threat to interconnected embedded devices, and extensive research has led to the development of many different forms of SCA that extract the secret key by utilizing various kinds of leakage information. Lately, machine learning (ML) based models have been more effective at breaking complex encryption systems than other types of SCA models. However, these ML or deep learning (DL) models require a lot of training data, which cannot be collected while attacking a device in a real-world situation. In this thesis, we address this issue by proposing a new technique of leakage conversion, in which we convert high signal-to-noise ratio (SNR) power traces to low-SNR averaged electromagnetic traces. In addition, we show how artificial neural networks (ANNs) can learn non-linear dependencies among features in leakage information, which adaptive digital signal processing (DSP) algorithms cannot. Initially, we successfully convert traces in the time interval of 80 to 200, where the cryptographic operations occur. Next, we show the successful conversion of traces lying in any time frame and having random key and plaintext values. Finally, to validate the leakage conversion technique and the generated traces, we successfully implement correlation electromagnetic analysis (CEMA) with an approximate minimum traces to disclosure (MTD) of 480.
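A minimal sketch of the conversion idea: train a small neural network to map one leakage representation to another. Everything below is synthetic and illustrative; the traces, network size, and the non-linear target function are invented stand-ins for the thesis's power and EM traces, not its actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: "power" traces, and "EM" traces that are a fixed
# non-linear, attenuated function of them plus noise (a toy assumption).
n_traces, n_samples = 200, 16
power = rng.normal(size=(n_traces, n_samples))
em = np.tanh(0.5 * power) + 0.05 * rng.normal(size=power.shape)

# MSE of the trivial all-zeros predictor, used as a reference.
baseline = float((em ** 2).mean())

# One-hidden-layer network trained by full-batch gradient descent
# to map a power trace to the corresponding EM trace.
hidden, lr = 32, 0.05
W1 = rng.normal(scale=0.1, size=(n_samples, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, n_samples))
b2 = np.zeros(n_samples)

for _ in range(500):
    h = np.tanh(power @ W1 + b1)        # forward pass
    pred = h @ W2 + b2
    err = pred - em                     # gradient of 0.5 * MSE
    gW2 = h.T @ err / n_traces
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # backprop through tanh
    gW1 = power.T @ dh / n_traces
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1
    b1 -= lr * gb1
    W2 -= lr * gW2
    b2 -= lr * gb2

mse = float(((np.tanh(power @ W1 + b1) @ W2 + b2 - em) ** 2).mean())
```

The trained network's reconstruction error falls well below the zero-predictor baseline, illustrating how an ANN can pick up a non-linear relation between two leakage channels that a purely linear adaptive filter could not represent exactly.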
656

PROGRAM ANOMALY DETECTION FOR INTERNET OF THINGS

Akash Agarwal (13114362) 01 September 2022 (has links)
<p>Program anomaly detection — modeling normal program executions to detect deviations at runtime as cues for possible exploits — has become a popular approach for software security. To leverage high-performance modeling and complete tracing, existing techniques however focus on subsets of applications, e.g., on system calls or calls to predefined libraries. Due to this limited scope, they are insufficient for detecting subtle control-oriented and data-oriented attacks that introduce new illegal call relationships at the application level. Such techniques are also hard to apply on devices that lack a clear separation between the OS and the application layer. This dissertation advances the design and implementation of program anomaly detection techniques by providing application context for library and system calls, making it powerful for detecting advanced attacks aimed at manipulating intra- and inter-procedural control flow and decision variables. </p> <p><br></p> <p>This dissertation has two main parts. The first part describes LANCET, a statically initialized, generic calling-context program anomaly detection technique based on Hidden Markov Modeling that provides security against control-oriented attacks at program runtime. It also establishes an efficient execution tracing mechanism facilitated through source code instrumentation of applications. The second part describes EDISON, a program anomaly detection framework that provides security against data-oriented attacks using graph representation learning and language models for intra- and inter-procedural behavioral modeling, respectively.</p> <p><br> This dissertation makes three high-level contributions. First, it demonstrates the design, implementation and extensive evaluation of an aggregation-based anomaly detection technique using fine-grained, generic calling-context-sensitive modeling that allows the detection to scale over entire applications. 
Second, it shows the design, implementation, and extensive evaluation of a detection technique that maps runtime traces to the program’s control-flow graph and leverages graphical feature representation to learn dynamic program behavior. Finally, this dissertation provides details and experience for designing program anomaly detection frameworks, from high-level concepts and design to low-level implementation techniques.</p>
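To make the call-modeling idea concrete, the sketch below scores call traces with a first-order Markov model over observed call transitions, a deliberately simpler stand-in for the Hidden Markov Modeling used in LANCET; the call names and probability floor are invented for illustration.

```python
from collections import defaultdict
import math

def train(traces):
    """Estimate first-order transition probabilities from normal call traces."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    model = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        model[a] = {b: c / total for b, c in nxt.items()}
    return model

def score(model, trace, floor=1e-6):
    """Average log-probability of the trace's transitions; unseen
    transitions get a tiny floor probability, so they drag the score down."""
    logp = 0.0
    for a, b in zip(trace, trace[1:]):
        logp += math.log(model.get(a, {}).get(b, floor))
    return logp / max(len(trace) - 1, 1)

# Toy "normal" behavior: the program opens, reads one or more times, closes.
normal = [["open", "read", "close"]] * 10 + [["open", "read", "read", "close"]] * 5
model = train(normal)

ok = score(model, ["open", "read", "close"])    # seen call relationships
bad = score(model, ["open", "exec", "close"])   # illegal new call relationship
```

A runtime trace whose score falls far below the scores of normal traces is flagged as anomalous; adding calling context, as the dissertation does, amounts to making the states finer-grained than bare call names.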
657

Benutting van Gestaltspelterapie met die fokus op selfondersteuning by die kind in die middelkinderjare / The utilization of Gestalt play therapy and self-support with the child in middle childhood years

Stone, Maria Magdalena 30 November 2007
Text in Afrikaans / In this study the researcher explored and described the use of Gestalt play therapy with a specific focus on self-support with the child in the middle childhood years. A literature study was undertaken to examine the concepts of the child, Gestalt play therapy, self-support and the play therapy process. This literature study forms the theoretical frame within which the study was done. After completion of the literature study, the empirical study was conducted. The researcher made use of unstructured interviews within an intrinsic single case study in order to compile research data. During the empirical study ten therapy sessions were conducted with the participant and explored within the framework of qualitative research methodology. The researcher was able to use a wide range of Gestalt play therapy concepts and principles in the description of the case study in order to explore self-support within the child during middle childhood. These concepts and principles are discussed in depth in this study. / Social Work / M.Diac. (Spelterapie-rigting)
658

Information security, privacy, and compliance models for cloud computing services

Alruwaili, Fahad F. 13 April 2016
The recent emergence and rapid advancement of Cloud Computing (CC) infrastructure and services have made outsourcing Information Technology (IT) and digital services to Cloud Providers (CPs) attractive. Cloud offerings enable reduction in IT resources (hardware, software, services, support, and staffing), and provide flexibility and agility in resource allocation, data and resource delivery, fault-tolerance, and scalability. However, the current standards and guidelines adopted by many CPs are tailored to address functionality (such as availability, speed, and utilization) and design requirements (such as integration), rather than protection against cyber-attacks and associated security issues. In order to achieve sustainable trust for cloud services with minimal risks and impact on cloud customers, appropriate cloud information security models are required. The research described in this dissertation details the processes adopted for the development and implementation of an integrated information security cloud-based approach to cloud service models. This involves detailed investigation into the inherent information security deficiencies identified in the existing cloud service models, service agreements, and compliance issues. The research conducted was multidisciplinary in nature, with detailed investigations of factors such as people, technology, security, privacy, and compliance involved in cloud risk assessment to ensure all aspects are addressed in holistic and well-structured models. The primary research objectives for this dissertation are investigated through a series of scientific papers centered on these key research disciplines. The assessment of information security, privacy, and compliance implementations in a cloud environment is described in Chapters two, three, four, and five. 
Paper 1 (CCIPS: A Cooperative Intrusion Detection and Prevention Framework for Cloud Services) outlines a framework for detecting and preventing known and zero-day threats targeting cloud computing networks. This framework forms the basis for implementing enhanced threat detection and prevention via behavioral and anomaly data analysis. Paper 2 (A Trusted CCIPS Framework) extends the work of cooperative intrusion detection and prevention to enable trusted delivery of cloud services. The trusted CCIPS model details and justifies the multi-layer approach to enhance the performance and efficiency of detecting and preventing cloud threats. Paper 3 (SOCaaS: Security Operations Center as a Service for Cloud Computing Environments) describes the need for a trusted third party to perform real-time monitoring of cloud services to ensure compliance with security requirements by suggesting a security operations center system architecture. Paper 4 (SecSLA: A Proactive and Secure Service Level Agreement Framework for Cloud Services) identifies the necessary cloud security and privacy controls that need to be addressed in the contractual agreements, i.e. service level agreements (SLAs), between CPs and their customers. Papers five, six, seven, and eight (Chapters 6 – 9) focus on addressing and reducing the risk issues resulting from poor assessment of the adoption of cloud services and the factors that influence such a migration. The investigation of cloud-specific information security risk management and migration readiness frameworks, detailed in Paper 5 (An Effective Risk Management Framework for Cloud Computing Services) and Paper 6 (Information Security, Privacy, and Compliance Readiness Model), was achieved through extensive consideration of all possible factors obtained from different studies. An analysis of the results indicates that several key factors, including risk tolerance, can significantly influence the decision to migrate to cloud technology. 
An additional issue found during this research in assessing the readiness of an organization to move to the cloud is the necessity to ensure that the cloud service provider is actually compliant with information security, privacy, and compliance (ISPC) requirements. This investigation is extended in Paper 7 (A Practical Life Cycle Approach for Cloud based Information Security) to include the six phases of creating proactive cloud information security systems, beginning with the initial design and continuing through development, implementation, operations and maintenance. The inherent difficulty in identifying ISPC-compliant cloud technology is resolved by employing a tracking method, namely the eligibility and verification system presented in Paper 8 (Cloud Services Information Security and Privacy Eligibility and Verification System). Finally, Paper 9 (A Case Study of Migration to a Compliant Cloud Technology) describes the actual implementation of the proposed frameworks and models to support the decision-making process faced by the Saudi financial agency in migrating its IT services to the cloud. Together these models and frameworks suggest that the threats and risks associated with cloud services are continuously changing and, more importantly, increasing in complexity and sophistication. They contribute to making stronger cloud-based information security, privacy, and compliance technological frameworks. The outcomes obtained significantly contribute to best practices in ensuring information security controls are addressed, monitored, enforced, and compliant with relevant regulations. / Graduate / 0984 / 0790 / fahd333@gmail.com
659

Rhetorical strategies of legitimation : the 9/11 Commission's public inquiry process

Parks, Ryan William January 2011
This research project seeks to explore aspects of the post-reporting phase of the public inquiry process. Central to the public inquiry process is the concept of legitimacy and the idea that a public inquiry provides an opportunity to re-legitimate the credibility of failed public institutions. The current literature asserts that public inquiries re-legitimise through the production of authoritative narratives. As such, most of this scholarship has focused on the production of inquiry reports and, more recently, the reports themselves. However, in an era of accountability, and in the aftermath of such a poignant attack upon society, the production of a report may represent an apogee, but by no means an end, of the re-legitimation process. Appropriately, this thesis examines the post-reporting phase of the 9/11 Commission’s public inquiry process. The 9/11 Commission provides a useful research vehicle due to the bounded, and relatively linear, implementation process of the Commission’s recommendations. In little more than four months a majority of the Commission’s recommendations were passed into law. Within this implementation phase the dominant discursive process took place in the United States Congress. It is the legislative reform debates in the House of Representatives and the Senate that are the focus of this research project. The central research question is: what rhetorical legitimation strategies were employed in the legislative reform debates of the post-reporting phase of the 9/11 Commission’s public inquiry process? This study uses a grounded theory approach to the analysis of the legislative transcripts of the Congressional reform debates. This analysis revealed that proponents employed rhetorical strategies to legitimise a legislative ‘Call to Action’ narrative. They also employed rhetorical legitimation strategies that emphasised themes of bipartisanship, hard work and expertise in order to strengthen the standing of the legislation. 
Opponents of the legislation focused rhetorical de-legitimation strategies on the theme of ‘flawed process’. Finally, nearly all legislators, regardless of their view of the legislation, sought to appropriate the authoritative legitimacy of the Commission, by employing rhetorical strategies that presented their interests and motives as in line with the actions and wishes of the Commission.
660

Security Analytics: Using Deep Learning to Detect Cyber Attacks

Lambert, Glenn M, II 01 January 2017
Security attacks are becoming more prevalent as cyber attackers exploit system vulnerabilities for financial gain. The resulting loss of revenue and reputation can have deleterious effects on governments and businesses alike. Signature recognition and anomaly detection are the most common security detection techniques in use today. These techniques provide a strong defense; however, they fall short of detecting complicated or sophisticated attacks. Recent literature suggests using security analytics to differentiate between normal and malicious user activities. The goal of this research is to develop a repeatable process to detect cyber attacks that is fast, accurate, comprehensive, and scalable. A model was developed and evaluated using several production log files provided by the University of North Florida Information Technology Security department. This model uses security analytics to complement existing security controls, detecting suspicious user activity in real time by applying machine learning algorithms to multiple heterogeneous server-side log files. The process is linearly scalable and comprehensive; as such it can be applied to any enterprise environment. The process is composed of three steps. The first step is data collection and transformation, which involves identifying the source log files and selecting a feature set from those files. The resulting feature set is then transformed into a time series dataset using a sliding time window representation. Each instance of the dataset is labeled as green, yellow, or red using three different unsupervised learning methods, one of which is Partitioning Around Medoids (PAM). The final step uses Deep Learning to train and evaluate the model that will be used for detecting abnormal or suspicious activities. Experiments using datasets of varying sizes and time granularities resulted in very high accuracy and performance. 
The time required to train and test the model was surprisingly short, even for large datasets. This is the first research paper that develops a model to detect cyber attacks using security analytics; hence this research builds a foundation on which to expand for future research in this subject area.
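The sliding-window and labeling steps described above can be sketched as follows. This is an illustrative toy: the event series is synthetic, and a simple tertile rule on window activity stands in for the unsupervised labeling methods (such as PAM) that the thesis actually uses.

```python
import numpy as np

def sliding_windows(events, window, step):
    """Aggregate a per-second event-count series into overlapping windows,
    producing one (sum, mean, max) feature vector per window."""
    feats = []
    for start in range(0, len(events) - window + 1, step):
        w = events[start:start + window]
        feats.append((w.sum(), w.mean(), w.max()))
    return np.array(feats)

def label_by_tertile(feats):
    """Toy stand-in for the unsupervised green/yellow/red labeling step:
    rank windows by total activity and split them into tertiles."""
    totals = feats[:, 0]
    lo, hi = np.quantile(totals, [1 / 3, 2 / 3])
    return np.where(totals <= lo, "green",
                    np.where(totals <= hi, "yellow", "red"))

rng = np.random.default_rng(1)
events = rng.poisson(3, size=120).astype(float)  # normal background activity
events[60:75] += 40                              # injected suspicious burst

feats = sliding_windows(events, window=15, step=5)
labels = label_by_tertile(feats)
```

The windows overlapping the injected burst receive the highest activity totals and end up labeled red; in the thesis these labeled windows then become the training instances for the deep learning classifier.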
