111

Možnosti zabezpečení mezi-procesorové komunikace pro dedikovaná zařízení / Possibilities of securing inter-processor communication for dedicated devices

Rácek, Luboš January 2016 (has links)
This diploma thesis introduces the problem of inter-core communication, covers the implementation of a Linux distribution built with the Yocto Project on a chosen platform from the NXP i.MX microcontroller family, and presents a solution for secured inter-core communication.
112

A MACHINE LEARNING BASED WEB SERVICE FOR MALICIOUS URL DETECTION IN A BROWSER

Hafiz Muhammad Junaid Khan (8119418) 12 December 2019 (has links)
Malicious URLs pose serious cyber-security threats to Internet users. It is critical to detect malicious URLs so that they can be blocked from user access. In the past few years, several techniques have been proposed to differentiate malicious URLs from benign ones with the help of machine learning. Machine learning algorithms learn trends and patterns in a data-set and use them to identify any anomalies. In this work, we attempt to find generic features for detecting malicious URLs by analyzing two publicly available malicious URL data-sets. In order to achieve this task, we identify a list of substantial features that can be used to classify all types of malicious URLs. Then, we select the most significant lexical features by using Chi-Square and ANOVA based statistical tests. The effectiveness of these feature sets is then tested by using a combination of single and ensemble machine learning algorithms. We build a machine learning based real-time malicious URL detection system as a web service to detect malicious URLs in a browser. We implement a Chrome extension that intercepts the browser’s URL requests and sends them to the web service for analysis, and a web service that classifies each URL as benign or malicious using the saved ML model. We also evaluate the performance of our web service to test whether the service is scalable.
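As a rough illustration of the kind of pipeline described above, the following sketch extracts simple lexical features from a URL, keeps the most significant ones with a chi-square test, and trains an ensemble classifier with scikit-learn. The feature list, thresholds and model parameters are illustrative assumptions, not the thesis's actual feature set or service implementation.

```python
# Minimal sketch: lexical URL features + chi-square selection + ensemble classifier.
# Feature choices and parameters are illustrative, not the thesis's exact pipeline.
import re
from urllib.parse import urlparse

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

def lexical_features(url: str) -> list:
    """Extract simple non-negative lexical features (chi2 requires non-negative inputs)."""
    parsed = urlparse(url if "://" in url else "http://" + url)
    host, path = parsed.netloc, parsed.path
    return [
        len(url),                                   # total URL length
        len(host),                                  # hostname length
        host.count("."),                            # number of subdomain separators
        sum(c.isdigit() for c in url),              # digit count
        url.count("-") + url.count("_"),            # hyphen/underscore count
        url.count("%"),                             # percent-encoding count
        int(bool(re.fullmatch(r"[\d.]+", host))),   # host is a raw IP address
        int("@" in url),                            # '@' present (credential trick)
        len(path.split("/")) - 1,                   # path depth
    ]

def train(urls, labels):
    X = [lexical_features(u) for u in urls]
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)
    model = Pipeline([
        ("select", SelectKBest(chi2, k=6)),         # keep the most significant features
        ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ])
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    return model
```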
113

On Reliability Methods Quantifying Risks to Transfer Capability in Electric Power Transmission Systems

Setréus, Johan January 2009 (has links)
In the operation, planning and design of the transmission system it is of greatest concern to quantify the reliability security margin against unwanted conditions. The deterministic N-1 criterion has traditionally provided this security margin to reduce the consequences of severe conditions such as widespread blackouts. However, a deterministic criterion does not include the likelihood of different outage events. Moreover, experience from blackouts, e.g. Sweden-Denmark in September 2003, shows that the outages were not captured by the N-1 criterion. The question addressed in this thesis is how this system security margin can be quantified with probabilistic methods. A quantitative measure provides one valuable input to the decision-making process of selecting e.g. system expansion alternatives and maintenance actions in the planning and design phases. It is also beneficial for the operators in the control room to assess the associated security margin of existing and future network conditions. This thesis presents a method that assesses each component's risk of contributing to insufficient transfer capability in the transmission system, which shows each component's importance to the system security margin. It provides a systematic analysis and ranking of outage events' risk of overloading critical transfer sections (CTS) in the system. The severity of each critical event is quantified in a risk index based on the likelihood of the event and its consequence for the section's transmission capacity. This enables the risk of a frequent outage event with small CTS consequences to be compared with that of a rare event with large consequences. The developed approach has been applied to the generally known Roy Billinton Test System (RBTS). The result shows that the ranking of the components is highly dependent on the substation modelling and the studied system load level. With the restriction of only evaluating the risks to the transfer capability in a few CTSs, the method provides a quantitative ranking of the potential risks to the system security margin at different load levels. Consequently, the developed reliability-based approach provides information which could improve the deterministic criterion for transmission system planning.
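The following sketch illustrates the kind of likelihood-times-consequence risk index used to rank outage events against a critical transfer section. The outage events, frequencies and capacity figures are invented for illustration and do not come from the RBTS study.

```python
# Sketch of a likelihood x consequence risk index for ranking outage events
# against a critical transfer section (CTS). All figures are hypothetical.
from dataclasses import dataclass

@dataclass
class OutageEvent:
    component: str
    frequency_per_year: float    # expected outages per year
    overload_probability: float  # probability the outage overloads the CTS
    capacity_loss_mw: float      # transfer capacity lost in the CTS when it does

    @property
    def risk_index(self) -> float:
        # likelihood of a CTS-overloading outage times its consequence
        return self.frequency_per_year * self.overload_probability * self.capacity_loss_mw

events = [
    OutageEvent("Line L7",        frequency_per_year=2.00, overload_probability=0.05, capacity_loss_mw=150),
    OutageEvent("Transformer T3", frequency_per_year=0.10, overload_probability=0.90, capacity_loss_mw=400),
    OutageEvent("Busbar B2",      frequency_per_year=0.02, overload_probability=1.00, capacity_loss_mw=900),
]

# A frequent event with a small consequence can rank alongside a rare, severe one.
for e in sorted(events, key=lambda e: e.risk_index, reverse=True):
    print(f"{e.component:15s} risk index = {e.risk_index:6.1f} MW/year")
```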
114

EMULATION FOR MULTIPLE INSTRUCTION SET ARCHITECTURES

Christopher M Wright (10645670) 07 May 2021 (has links)
System emulation and firmware re-hosting are popular techniques to answer various security and performance related questions, such as whether a firmware image contains security vulnerabilities or meets timing requirements when run on a specific hardware platform. While this motivation for emulation and binary analysis has previously been explored and reported, starting to work or do research in the field is difficult. Further, doing the actual firmware re-hosting for various Instruction Set Architectures (ISAs) is usually time consuming and difficult, and at times may seem impossible. To this end, I provide a comprehensive guide for the practitioner or system emulation researcher, along with various tools that work for a large number of ISAs, reducing the challenges of getting re-hosting working or porting previous work to new architectures. I lay out the common challenges faced during firmware re-hosting, explain successive steps, and survey common tools to overcome these challenges. I provide emulation classification techniques on five different axes, including emulator methods, system type, fidelity, emulator purpose, and control. These classifications and comparison criteria enable the practitioner to determine the appropriate tool for emulation. I use these classifications to categorize popular works in the field and present 28 common challenges faced when creating, emulating and analyzing a system, from obtaining firmware to post-emulation analysis. I then introduce a HALucinator [1]/QEMU [2] tracer tool named HQTracer, a binary function matching tool PMatch, and GHALdra, an emulator that works for more than 30 different ISAs and enables High Level Emulation.
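As a rough sketch of how the five classification axes named above could be carried as a data structure, the record type below uses one field per axis; the example entries and the filtering helper are illustrative assumptions, not the survey's actual taxonomy or data.

```python
# Sketch of a record type for classifying emulators along the five axes named above.
# The example entries and comparison helper are illustrative, not the survey's data.
from dataclasses import dataclass

@dataclass(frozen=True)
class EmulatorClassification:
    name: str
    method: str        # emulator method (how execution is modelled)
    system_type: str   # kind of system it targets
    fidelity: str      # how faithfully hardware behaviour is reproduced
    purpose: str       # what the emulation is used for
    control: str       # how much control the analyst has over execution

def matches(entry: EmulatorClassification, **criteria: str) -> bool:
    """Return True if an entry satisfies every requested axis value."""
    return all(getattr(entry, axis) == value for axis, value in criteria.items())

catalogue = [
    EmulatorClassification("ToolA", "full-system", "embedded firmware", "high", "security analysis", "scriptable"),
    EmulatorClassification("ToolB", "user-mode", "Linux binaries", "medium", "performance testing", "interactive"),
]

suitable = [e.name for e in catalogue if matches(e, purpose="security analysis")]
print(suitable)
```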
115

Efficient and Secure Equality-based Two-party Computation

Javad Darivandpour (11190051) 27 July 2021 (has links)
Multiparty computation refers to a scenario in which multiple distinct yet connected parties aim to jointly compute a functionality. Over recent decades, with the rapid spread of the internet and digital technologies, multiparty computation has become an increasingly important topic. In addition to the integrity of computation in such scenarios, it is essential to ensure that the privacy of sensitive information is not violated. Thus, secure multiparty computation aims to provide sound approaches for the joint computation of desired functionalities in a secure manner: Not only must the integrity of computation be guaranteed, but also each party must not learn anything about the other parties' private data. In other words, each party learns no more than what can be inferred from its own input and its prescribed output.

This thesis considers secure two-party computation over arithmetic circuits based on additive secret sharing. In particular, we focus on efficient and secure solutions for fundamental functionalities that depend on the equality of private comparands. The first direction we take is providing efficient protocols for two major problems of interest. Specifically, we give novel and efficient solutions for private equality testing and multiple variants of secure wildcard pattern matching over any arbitrary finite alphabet. These problems are of vital importance: Private equality testing is a basic building block in many secure multiparty protocols; and, secure pattern matching is frequently used in various data-sensitive domains, including (but not limited to) private information retrieval and healthcare-related data analysis. The second direction we take towards a performance improvement in equality-based secure two-party computation is via introducing a generic functionality-independent secure preprocessing that results in an overall computation and communication cost reduction for any subsequent protocol. We achieve this by providing the first precise functionality formulation and secure protocols for replacing original inputs with much smaller inputs such that this replacement neither changes the outcome of subsequent computations nor violates the privacy of sensitive inputs. Moreover, our input-size reduction opens the door to a new approach for efficiently solving Private Set Intersection. The protocols we give in this thesis are typically secure in the semi-honest adversarial threat model.
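For context on the additive secret sharing these protocols build on, the sketch below splits a value into two additive shares over a prime field and shows that addition can be carried out locally on shares. The modulus and values are chosen for illustration; the thesis's equality-testing and pattern-matching protocols are layered on top of such shares and are considerably more involved.

```python
# Minimal sketch of two-party additive secret sharing over a prime field.
# The modulus and values are illustrative; real protocols layer equality tests,
# pattern matching, etc. on top of shares like these.
import secrets

P = 2**61 - 1  # a Mersenne prime used here as the field modulus

def share(x: int) -> tuple[int, int]:
    """Split x into two additive shares: x = (s0 + s1) mod P."""
    s0 = secrets.randbelow(P)
    s1 = (x - s0) % P
    return s0, s1

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % P

def add_shares(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """Each party adds its shares locally; addition needs no communication."""
    return ((a[0] + b[0]) % P, (a[1] + b[1]) % P)

x_shares = share(42)
y_shares = share(100)
z_shares = add_shares(x_shares, y_shares)
assert reconstruct(*z_shares) == 142
```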
116

On Statistical Properties of Arbiter Physical Unclonable Functions

Gajland, Phillip January 2018 (has links)
The growing interest in the Internet of Things (IoT) has led to predictions claiming that by 2020 we can expect to be surrounded by 50 billion Internet connected devices. With more entry points to a network, adversaries can potentially use IoT devices as a stepping stone for attacking other devices connected to the network or the network itself. Information security relies on cryptographic primitives that, in turn, depend on secret keys. Furthermore, the issue of Intellectual property (IP) theft in the field of Integrated circuit (IC) design can be tackled with the help of unique device identifiers. Physical unclonable functions (PUFs) provide a tamper-resilient solution for secure key storage and fingerprinting hardware. PUFs use intrinsic manufacturing differences of ICs to assign unique identities to hardware. Arbiter PUFs utilise the differences in delays of identically designed paths, giving rise to an unpredictable response unique to a given IC. This thesis explores the statistical properties of Boolean functions induced by arbiter PUFs. In particular, this empirical study looks into the distribution of induced functions. The data gathered shows that only 3% of all possible 4-variable functions can be induced by a single 4 stage arbiter PUF. Furthermore, some individual functions are more than 5 times more likely than others. Hence, the distribution is non-uniform. We also evaluate alternate PUF designs, improving the coverage vastly, resulting in one particular implementation inducing all 65,536 4-variable functions. We hypothesise the need for n XORed PUFs to induce all 2^(2^n) possible n-variable Boolean functions.
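The sketch below simulates a 4-stage arbiter PUF with the standard linear additive-delay model and enumerates all 16 challenges to obtain the induced 4-variable Boolean function. The Gaussian delay parameters are a common modelling assumption, not the thesis's measured data or exact experimental setup.

```python
# Sketch of the standard linear additive-delay model of an n-stage arbiter PUF,
# enumerating all challenges of a 4-stage instance to obtain the induced
# 4-variable Boolean function. Delay parameters are drawn from a Gaussian,
# which is a common modelling assumption, not measured silicon data.
from itertools import product
import random

def random_arbiter_puf(n_stages: int, rng: random.Random):
    """One PUF instance = one weight vector of stage delay differences."""
    return [rng.gauss(0.0, 1.0) for _ in range(n_stages + 1)]

def response(weights, challenge):
    """Response of the linear delay model: sign of the accumulated delay difference."""
    n = len(challenge)
    phi = []
    for i in range(n):
        prod_term = 1
        for c in challenge[i:]:
            prod_term *= 1 - 2 * c   # +1 or -1 depending on whether later stages swap the paths
        phi.append(prod_term)
    phi.append(1)                    # bias term for the final arbiter
    delta = sum(w * p for w, p in zip(weights, phi))
    return 1 if delta > 0 else 0

rng = random.Random(1)
puf = random_arbiter_puf(4, rng)
# Truth table of the induced 4-variable Boolean function, one bit per challenge.
truth_table = [response(puf, c) for c in product((0, 1), repeat=4)]
print("".join(map(str, truth_table)))
```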
117

Management de la sécurité des systèmes d'information : les collectivités territoriales face aux risques numériques / IT risk management : local authorities facing the digital risks

Février, Rémy 10 April 2012 (has links)
Cette thèse a pour objectif de répondre à la question suivante : Quel est le niveau de prise en compte de la Sécurité des Systèmes d’Information (SSI) par les collectivités territoriales françaises face aux risques numériques ? Ces dernières étant aujourd’hui confrontées à de nouveaux défis qui nécessitent un recours toujours plus important aux nouvelles technologies (administration électronique, e-démocratie, dématérialisation des appels d’offre…), le management de la sécurité des Systèmes d’Information (SI) territoriaux devient un enjeu majeur -bien qu’encore peu étudié- en matière de service public et de protection des données à caractère personnel. Etablie au travers de postures professionnelles successives et dans le cadre d’une approche naturaliste de la décision, notre modélisation théorique tend à mesurer le niveau réel de prise en compte du risque numérique en partant d’hypothèses fondées sur l’influence respective d’un ensemble de caractéristiques propres aux collectivités territoriales. Il se traduit par une enquête de terrain menée directement auprès de responsables territoriaux. Alors que cet enjeu nécessite une prise de conscience, par les décideurs locaux, de la nécessité de protéger les données qui leur sont confiés, il s’avère que ceux-ci n’ont, au mieux, qu’une connaissance très imparfaite des enjeux et des risques inhérents à la sécurisation d’un SI ainsi que de l’ensemble des menaces, directes ou indirectes, susceptibles de compromettre leur bonne utilisation. Une solution potentielle pourrait résider, simultanément à de la mise en place de procédures adaptées à l’échelon de chaque collectivité, par la définition d’une politique publique spécifique. / This doctoral thesis aims at answering a key question: what is the level of consideration given to Information Systems Security (ISS) by the French local authorities (LAs)? The latter are now facing new challenges that require an ever-increasing use of new technologies (e-government, e-democracy, dematerialization of calls for tenders...). The under-researched territorial IT risk becomes a major issue in the sphere of public services and the protection of personal data. Theoretically based and constructed through successive professional positions, our theoretical model helps measure the actual level of inclusion of digital risk, taking into account the respective influence of a set of characteristics of local authorities. A field survey was conducted with the close collaboration of representatives of LAs. While digital risk requires a high level of awareness among LA decision makers, it appears that they have a very imperfect knowledge of IT security related risks as well as of direct or indirect threats that may jeopardize their management systems. A potential solution lies with the definition of a specific public policy and with the implementation of appropriate procedures at the level of each community.
118

Data Protection in Transit and at Rest with Leakage Detection

Denis A Ulybyshev (6620474) 15 May 2019 (has links)
In service-oriented architecture, services can communicate and share data among themselves. This thesis presents a solution that allows detecting several types of data leakages made by authorized insiders to unauthorized services. My solution provides role-based and attribute-based access control for data so that each service can access only those data subsets for which the service is authorized, considering a context and the service’s attributes, such as the security level of the web browser and the trust level of the service. My approach provides data protection in transit and at rest for both centralized and peer-to-peer service architectures. The methodology ensures confidentiality and integrity of data, including data stored in an untrusted cloud. In addition to protecting data against malicious or curious cloud or database administrators, the capability of running a search through encrypted data, using SQL queries, and building analytics over encrypted data is supported. My solution is implemented in the “WAXEDPRUNE” (Web-based Access to Encrypted Data Processing in Untrusted Environments) project, funded by the Northrop Grumman Cybersecurity Research Consortium. The WAXEDPRUNE methodology is illustrated in this thesis for two use cases: a Hospital Information System with secure storage and exchange of Electronic Health Records, and a Vehicle-to-Everything communication system with secure exchange of vehicles’ and drivers’ data, as well as data on road events and road hazards.

To help with investigating data leakage incidents in service-oriented architecture, the integrity of provenance data needs to be guaranteed. For that purpose, I integrate WAXEDPRUNE with an IBM Hyperledger Fabric blockchain network, so that every data access, transfer or update is recorded in a public blockchain ledger, is non-repudiable and can be verified at any time in the future. The work on this project, called “Blockhub,” is in progress.
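A minimal sketch of the kind of combined role-based and attribute-based access decision described above is given below; the roles, attribute names and thresholds are hypothetical and do not reflect WAXEDPRUNE's actual policy model.

```python
# Sketch of a combined role- and attribute-based access decision for a data subset.
# Roles, attribute names, and thresholds are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class ServiceContext:
    role: str                 # role of the requesting service
    browser_security: int     # e.g. 0 (unknown) .. 3 (hardened)
    trust_level: int          # trust score assigned to the service

@dataclass
class DataSubsetPolicy:
    allowed_roles: set = field(default_factory=set)
    min_browser_security: int = 0
    min_trust_level: int = 0

def may_access(ctx: ServiceContext, policy: DataSubsetPolicy) -> bool:
    """Grant access only if the role is allowed AND the context attributes meet the policy."""
    return (
        ctx.role in policy.allowed_roles
        and ctx.browser_security >= policy.min_browser_security
        and ctx.trust_level >= policy.min_trust_level
    )

ehr_vitals_policy = DataSubsetPolicy(allowed_roles={"physician", "nurse"},
                                     min_browser_security=2, min_trust_level=3)
print(may_access(ServiceContext("nurse", browser_security=2, trust_level=3), ehr_vitals_policy))    # True
print(may_access(ServiceContext("billing", browser_security=3, trust_level=5), ehr_vitals_policy))  # False
```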
119

Analysis of Computer System Incidents and Security Level Evaluation / Incidentų kompiuterių sistemose tyrimas ir saugumo lygio įvertinimas

Paulauskas, Nerijus 10 June 2009 (has links)
The problems of incidents arising in computer networks and the computer system security level evaluation are considered in the thesis. The main research objects are incidents arising in computer networks, intrusion detection systems and network scanning types. The aim of the thesis is the investigation of the incidents in the computer networks and computer system security level evaluation. The following main tasks are solved in the work: classification of attacks and numerical evaluation of the attack severity level; quantitative evaluation of the computer system security level; investigation of the dependence of the computer system performance and availability on the attacks affecting the system and defense mechanisms used in it; development of the model simulating the computer network horizontal and vertical scanning. The thesis consists of general characteristic of the research, five chapters and general conclusions. General characteristic of the thesis is dedicated to an introduction of the problem and its topicality. The aims and tasks of the work are also formulated; the used methods and novelty of solutions are described; the author's publications and structure of the thesis are presented. Chapter 1 covers the analysis of existing publications related to the problems of the thesis. The survey of the intrusion detection systems is presented and methods of the intrusion detection are analyzed. The currently existing techniques of the attack classification are... [to full text] / Disertacijoje nagrinėjamos incidentų kompiuterių tinkluose ir kompiuterių sistemų saugumo lygio įvertinimo problemos. Pagrindiniai tyrimo objektai yra incidentai kompiuterių tinkluose, atakų atpažinimo sistemos ir kompiuterių tinklo žvalgos būdai. Disertacijos tikslas – incidentų kompiuterių tinkluose tyrimas ir kompiuterių sistemų saugumo lygio įvertinimas. Darbe sprendžiami šie pagrindiniai uždaviniai: atakų klasifikavimas ir jų sunkumo lygio skaitinis įvertinimas; kompiuterių sistemos saugumo lygio kiekybinis įvertinimas; kompiuterių sistemos našumo ir pasiekiamumo priklausomybės nuo sistemą veikiančių atakų ir joje naudojamų apsaugos mechanizmų tyrimas; modelio, imituojančio kompiuterių tinklo horizontalią ir vertikalią žvalgą kūrimas. Disertaciją sudaro įvadas, penki skyriai ir bendrosios išvados. Įvadiniame skyriuje nagrinėjamas problemos aktualumas, formuluojamas darbo tikslas bei uždaviniai, aprašomas mokslinis darbo naujumas, pristatomi autoriaus pranešimai ir publikacijos, disertacijos struktūra. Pirmasis skyrius skirtas literatūros apžvalgai. Jame apžvelgiamos atakų atpažinimo sistemos, analizuojami atakų atpažinimo metodai. Nagrinėjami atakų klasifikavimo būdai. Didelis dėmesys skiriamas kompiuterių sistemos saugumo lygio įvertinimo metodams, kompiuterių prievadų žvalgos būdams ir žvalgos atpažinimo metodams. Skyriaus pabaigoje formuluojamos išvados ir konkretizuojami disertacijos uždaviniai. Antrajame skyriuje pateikta sudaryta atakų nukreiptų į kompiuterių... [toliau žr. visą tekstą]
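The sketch below illustrates the distinction between horizontal scanning (one port probed across many hosts) and vertical scanning (many ports probed on one host) over a list of connection attempts; the thresholds and record format are illustrative assumptions, not the thesis's simulation model.

```python
# Sketch distinguishing horizontal scans (one port across many hosts) from
# vertical scans (many ports on one host) in a list of connection attempts.
# Thresholds and the record format are illustrative assumptions.
from collections import defaultdict

def classify_scans(attempts, host_threshold=10, port_threshold=10):
    """attempts: iterable of (source_ip, target_ip, target_port) tuples."""
    hosts_per_port = defaultdict(set)   # (source, port) -> targeted hosts
    ports_per_host = defaultdict(set)   # (source, host) -> targeted ports
    for src, dst, port in attempts:
        hosts_per_port[(src, port)].add(dst)
        ports_per_host[(src, dst)].add(port)

    horizontal = [(src, port) for (src, port), hosts in hosts_per_port.items()
                  if len(hosts) >= host_threshold]
    vertical = [(src, dst) for (src, dst), ports in ports_per_host.items()
                if len(ports) >= port_threshold]
    return horizontal, vertical

# Example: one source probes port 22 on 12 hosts (a horizontal scan).
log = [("10.0.0.5", f"192.168.1.{i}", 22) for i in range(12)]
print(classify_scans(log))
```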
120

Enhancing security in distributed systems with trusted computing hardware

Reid, Jason Frederick January 2007 (has links)
The need to increase the hostile attack resilience of distributed and internet-worked computer systems is critical and pressing. This thesis contributes to concrete improvements in distributed systems trustworthiness through an enhanced understanding of a technical approach known as trusted computing hardware. Because of its physical and logical protection features, trusted computing hardware can reliably enforce a security policy in a threat model where the authorised user is untrusted or when the device is placed in a hostile environment. We present a critical analysis of vulnerabilities in current systems, and argue that current industry-driven trusted computing initiatives will fail in efforts to retrofit security into inherently flawed operating system designs, since there is no substitute for a sound protection architecture grounded in hardware-enforced domain isolation. In doing so we identify the limitations of hardware-based approaches. We argue that the current emphasis of these programs does not give sufficient weight to the role that operating system security plays in overall system security. New processor features that provide hardware support for virtualisation will contribute more to practical security improvement because they will allow multiple operating systems to concurrently share the same processor. New operating systems that implement a sound protection architecture will thus be able to be introduced to support applications with stringent security requirements. These can coexist alongside inherently less secure mainstream operating systems, allowing a gradual migration to less vulnerable alternatives. We examine the effectiveness of the ITSEC and Common Criteria evaluation and certification schemes as a basis for establishing assurance in trusted computing hardware. Based on a survey of smart card certifications, we contend that the practice of artificially limiting the scope of an evaluation in order to gain a higher assurance rating is quite common. Due to a general lack of understanding in the marketplace as to how the schemes work, high evaluation assurance levels are confused with a general notion of 'high security strength'. Vendors invest little effort in correcting the misconception since they benefit from it and this has arguably undermined the value of the whole certification process. We contribute practical techniques for securing personal trusted hardware devices against a type of attack known as a relay attack. Our method is based on a novel application of a phenomenon known as side channel leakage, heretofore considered exclusively as a security vulnerability. We exploit the low latency of side channel information transfer to deliver a communication channel with timing resolution that is fine enough to detect sophisticated relay attacks. We avoid the cost and complexity associated with alternative communication techniques suggested in previous proposals. We also propose the first terrorist attack resistant distance bounding protocol that is efficient enough to be implemented on resource constrained devices. We propose a design for a privacy sensitive electronic cash scheme that leverages the confidentiality and integrity protection features of trusted computing hardware. We specify the command set and message structures and implement these in a prototype that uses Dallas Semiconductor iButtons. We consider the access control requirements for a national scale electronic health records system of the type that Australia is currently developing. 
We argue that an access control model capable of supporting explicit denial of privileges is required to ensure that consumers maintain their right to grant or withhold consent to disclosure of their sensitive health information in an electronic system. Finding this feature absent in standard role-based access control models, we propose a modification to role-based access control that supports policy constructs of this type. Explicit denial is difficult to enforce in a large scale system without an active central authority but centralisation impacts negatively on system scalability. We show how the unique properties of trusted computing hardware can address this problem. We outline a conceptual architecture for an electronic health records access control system that leverages hardware level CPU virtualisation, trusted platform modules, personal cryptographic tokens and secure coprocessors to implement role based cryptographic access control. We argue that the design delivers important scalability benefits because it enables access control decisions to be made and enforced locally on a user's computing platform in a reliable way.
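The sketch below illustrates role-based permission checking extended with explicit denial, where a denial attached to any of the subject's roles overrides every grant. The roles and permissions are hypothetical, and the cryptographic enforcement the thesis builds on trusted computing hardware is omitted.

```python
# Sketch of role-based access control extended with explicit denial:
# a deny attached to any of the subject's roles overrides every grant.
# Roles and permissions are hypothetical; the thesis additionally enforces
# such policies with trusted hardware, which is omitted here.
GRANTS = {
    "treating_clinician": {"read:ehr_summary", "read:ehr_medications", "read:ehr_mental_health"},
    "researcher":         {"read:ehr_summary"},
}
DENIES = {
    # The consumer has withheld consent for mental-health notes for this subject.
    "consent_withheld_mental_health": {"read:ehr_mental_health"},
}

def is_permitted(roles: set[str], permission: str) -> bool:
    """Deny-overrides: any explicit denial wins over any number of grants."""
    if any(permission in DENIES.get(r, set()) for r in roles):
        return False
    return any(permission in GRANTS.get(r, set()) for r in roles)

roles = {"treating_clinician", "consent_withheld_mental_health"}
print(is_permitted(roles, "read:ehr_medications"))    # True: granted and not denied
print(is_permitted(roles, "read:ehr_mental_health"))  # False: the explicit denial overrides the grant
```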
