  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Une micro-ethnographie indigène au sein d'un dispositif de validation d'acquis dans l'académie de Montpellier à travers les méandres, franchissant les écluses, ainsi coule la reconnaissance de l'expérience... /

Yahiel-Eriksson, Véronique; Taylor, Paul V. January 2007 (has links)
Doctoral thesis: Education Sciences: Rennes 2: 2007. / Bibliography ff. 324-346. Appendices.
22

Determining the Integrity of Applications and Operating Systems using Remote and Local Attesters

January 2011 (has links)
This research describes software-based remote attestation schemes for obtaining the integrity of an executing user application and the Operating System (OS) text section of an untrusted client platform. A trusted external entity issues a challenge to the client platform. The challenge is executable code which the client must execute, and the code generates results which are sent to the external entity. These results provide the external entity with assurance as to whether the client application and the OS are in pristine condition. This work also presents a technique to verify that the application which was attested did not get replaced by a different application after completion of the attestation. The implementation of these three techniques was achieved entirely in software and is backward compatible with legacy machines on the Intel x86 architecture. This research also presents two approaches to incorporating a software-based "root of trust" using Virtual Machine Monitors (VMMs). The first approach determines the integrity of an executing Guest OS from the Host OS using the Linux Kernel-based Virtual Machine (KVM) and QEMU emulation software. The second approach implements a small VMM called MIvmm that can be utilized as a trusted codebase to build security applications such as those implemented in this research. MIvmm was conceptualized and implemented without using any existing codebase; its minimal size allows it to be trustworthy. Both VMM approaches leverage processor support for virtualization in the Intel x86 architecture. / Dissertation/Thesis / Ph.D. Computer Science 2011
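The challenge–response pattern this abstract describes can be sketched minimally as follows. This is an illustrative reduction under stated assumptions, not the dissertation's actual scheme: the real challenge is executable code with anti-evasion properties, whereas here a random nonce is simply hashed together with the code region, and all function names are hypothetical.

```python
import hashlib
import os

def issue_challenge() -> bytes:
    # The verifier picks a fresh random nonce so responses cannot be replayed.
    return os.urandom(16)

def attest(nonce: bytes, code_region: bytes) -> bytes:
    # Client side: compute a checksum binding the nonce to the (supposedly
    # pristine) bytes of the application or OS text section.
    return hashlib.sha256(nonce + code_region).digest()

def verify(nonce: bytes, response: bytes, known_good_code: bytes) -> bool:
    # Verifier side: recompute the expected response from a reference copy.
    return response == hashlib.sha256(nonce + known_good_code).digest()

# A pristine client passes; a one-byte modification fails.
pristine = b"\x90" * 64           # stand-in for the real text section
tampered = b"\x90" * 63 + b"\xcc"
nonce = issue_challenge()
assert verify(nonce, attest(nonce, pristine), pristine)
assert not verify(nonce, attest(nonce, tampered), pristine)
```

The fresh nonce is what gives the exchange its value: without it, a compromised client could simply replay a response recorded while the platform was still pristine.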
23

Trusted terminal-based systems / Garantera tilltro i terminalbaserade system

Faxö, Elias January 2011 (has links)
Trust is a concept of increasing importance in today's information systems, where information storage and generation are to a greater extent distributed among several entities throughout local or global networks. This trend in information science requires new ways to sustain information security in such systems. This document defines trust in the context of a terminal-based system and analyzes the architecture of a distributed terminal-based system using threat modeling tools to elicit the prerequisites for trust in such a system. The result of the analysis is then converted into measures and activities that can be performed to fulfill these prerequisites. The proposed measures include hardware identification and both hardware and software attestation, supported by the Trusted Computing Group standards and Trusted Platform Modules, that are included in a connection handshake protocol. The proposed handshake protocol is evaluated against a practical case of a terminal-based casino system, where the weaknesses of the protocol, mainly the requirement to build a system-wide Trusted Computing Base, are made evident. Proposed solutions to this problem, such as minimization of the Trusted Computing Base, are discussed along with the fundamental cause of the problem and future solutions using the next generation of CPUs and Operating System kernels.
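The handshake idea above, a terminal proving both its hardware identity and its software state before the server accepts the connection, can be sketched as follows. This is a hypothetical simplification: a real system would use a TPM quote signed with an attestation key, while here an HMAC with a per-device key stands in for it, and all names and values are illustrative.

```python
import hashlib
import hmac

# Provisioned per terminal at manufacture; stands in for a TPM-held key.
DEVICE_KEY = b"per-terminal secret (illustrative)"
# Measurements of software images the operator has approved.
TRUSTED_MEASUREMENTS = {hashlib.sha256(b"approved casino client v1.2").hexdigest()}

def terminal_hello(software_image: bytes, server_nonce: bytes) -> dict:
    # The terminal measures its own software and "quotes" the measurement,
    # binding it to the server's nonce to prevent replay.
    measurement = hashlib.sha256(software_image).hexdigest()
    quote = hmac.new(DEVICE_KEY, server_nonce + measurement.encode(),
                     hashlib.sha256).hexdigest()
    return {"measurement": measurement, "quote": quote}

def server_accept(msg: dict, server_nonce: bytes) -> bool:
    # The server checks both the quote (identity + freshness) and whether
    # the reported software state is on the approved list.
    expected = hmac.new(DEVICE_KEY, server_nonce + msg["measurement"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, msg["quote"])
            and msg["measurement"] in TRUSTED_MEASUREMENTS)

nonce = b"fresh-server-nonce"
assert server_accept(terminal_hello(b"approved casino client v1.2", nonce), nonce)
assert not server_accept(terminal_hello(b"modified client", nonce), nonce)
```

The sketch also makes the thesis's main criticism concrete: the check is only as good as the measurement, so every component that can influence the measured state ends up inside the Trusted Computing Base.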
24

User Perceptions of CSR Disclosure Credibility with Reasonable, Limited and Hybrid Assurances

Sheldon, Mark Donald 18 April 2016 (has links)
Firms seek independent assurance from accountants on their Corporate Social Responsibility (CSR) disclosures for various reasons, including to enhance the credibility of such disclosures or to enhance the reliability of management's CSR report. However, there are multiple levels of assurance available for CSR disclosures. The forthcoming clarified U.S. attestation standards re-frame the two levels of assurance on non-financial information as reasonable (higher) and limited (lower). While not currently addressed by U.S. standards, accountants also issue hybrid reports with both reasonable and limited assurance on CSR disclosures. I conduct an experiment to identify differences in nonprofessional investors' perceptions of CSR disclosures when reasonable, limited, or hybrid assurances are provided and manipulate firm CSR performance as a possible moderator for the influence of assurance. Findings indicate that nonprofessional investors find CSR disclosures on greenhouse gas emissions to be credible, and the degree of credibility does not vary significantly based on the firm's performance in controlling emissions or on the level of assurance provided by an accountant. However, nonprofessional investors do differ in their perceptions of the overall reliability of representations made in management's CSR report. While management's CSR report supported by hybrid assurance is generally perceived to be as reliable as when only limited or only reasonable assurance is provided, the perceived reliability differs between limited and reasonable assurance. Supplemental analyses reveal an interaction such that management's CSR report is perceived as more reliable with limited assurance rather than with reasonable or no assurance for firms with better performance at controlling greenhouse gas emissions; this association reverses for firms with worse performance. 
This interaction may be due, in part, to language in limited assurance reports that makes it clear higher assurance was available but not pursued by management. Results address a gap in the literature for hybrid assurance and show that nonprofessional investors find management's CSR report with hybrid assurance to generally be as credible and reliable as when either limited or reasonable assurance is provided. Further, results offer insight into the interactive effects of firm performance and level of assurance on nonprofessional investors' perceptions of the reliability of management's CSR report. / Ph. D.
25

Transmitter Authentication in Dynamic Spectrum Sharing

Kumar, Vireshwar 02 February 2017 (has links)
Recent advances in spectrum access technologies, such as software-defined radios, have made dynamic spectrum sharing (DSS) a viable option for addressing the spectrum shortage problem. However, these advances have also contributed to the increased possibility of "rogue" transmitter radios which may cause significant interference to other radios in DSS. One approach for countering such threats is to employ a transmitter authentication scheme at the physical (PHY) layer. In PHY-layer authentication, an authentication signal is generated by the transmitter and embedded into the message signal. This enables a regulatory enforcement entity to extract the authentication signal from the received signal, uniquely identify a transmitter, and collect verifiable evidence of a rogue transmission that can be used later during an adjudication process. There are two primary technical challenges in devising a transmitter authentication scheme for DSS: (1) how to generate and verify the authentication signal such that the required security and privacy criteria are met; and (2) how to embed and extract the authentication signal without negatively impacting the performance of the transmitters and the receivers in DSS. Regarding the first challenge, the privacy-preserving authentication schemes in the prior art have limited practical value in large networks due to the high computational complexity of their revocation check procedures. This dissertation proposes novel approaches that significantly improve the scalability of transmitter authentication with respect to revocation. Regarding the second challenge, existing PHY-layer authentication techniques embed the authentication signal into the message signal in such a way that the authentication signal appears as noise to the message signal and vice versa. Hence, existing schemes are constrained by a fundamental tradeoff between the message signal's signal-to-interference-and-noise ratio (SINR) and the authentication signal's SINR. This dissertation also proposes novel approaches that are not constrained by this tradeoff. / Ph. D.
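The tradeoff described above can be illustrated numerically. The model below is an assumed toy (simple power superposition, not any scheme from the dissertation): whatever fraction of transmit power is given to the authentication signal shows up as interference in the message signal's SINR, and vice versa.

```python
import math

def sinr_db(signal_power: float, interference_power: float,
            noise_power: float) -> float:
    # Signal-to-interference-and-noise ratio in decibels.
    return 10 * math.log10(signal_power / (interference_power + noise_power))

total_power, noise = 1.0, 0.01
for auth_fraction in (0.01, 0.05, 0.20):
    msg_p = total_power * (1 - auth_fraction)
    auth_p = total_power * auth_fraction
    # Each signal sees the other as interference under superposition.
    msg_sinr = sinr_db(msg_p, auth_p, noise)
    auth_sinr = sinr_db(auth_p, msg_p, noise)
    print(f"auth {auth_fraction:.0%}: message {msg_sinr:5.1f} dB, "
          f"auth {auth_sinr:6.1f} dB")
```

Running the loop shows the seesaw directly: as the authentication signal's share of the power budget grows, its SINR improves only at the expense of the message signal's, which is precisely the constraint the dissertation's proposed approaches aim to escape.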
26

Towards attack-tolerant trusted execution environments : Secure remote attestation in the presence of side channels

Crone, Max January 2021 (has links)
In recent years, trusted execution environments (TEEs) have seen increasing deployment in computing devices to protect security-critical software from run-time attacks and provide isolation from an untrustworthy operating system (OS). A trusted party verifies the software that runs in a TEE using remote attestation procedures. However, the publication of transient execution attacks such as Spectre and Meltdown revealed fundamental weaknesses in many TEE architectures, including Intel Software Guard Extensions (SGX) and Arm TrustZone. These attacks can extract cryptographic secrets, thereby compromising the integrity of the remote attestation procedure. In this work, we design and develop a TEE architecture that provides remote attestation integrity protection even when confidentiality of the TEE is compromised. We use the formally verified seL4 microkernel to build the TEE, which ensures strong isolation and integrity. We offload cryptographic operations to a secure co-processor that does not share any vulnerable microarchitectural hardware units with the main processor, to protect against transient execution attacks. Our design guarantees integrity of the remote attestation procedure. It can be extended to leverage co-processors from Google and Apple, for wide-scale deployment on mobile devices.
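The key idea of the design above, offloading attestation signing to a co-processor so that a transient execution leak on the main processor cannot expose the signing key, can be sketched as follows. This is a hypothetical illustration (class and function names are invented, and HMAC stands in for the co-processor's actual signing primitive), not the thesis's implementation.

```python
import hashlib
import hmac
import os

class SecureCoprocessor:
    """Stands in for a discrete co-processor with its own key storage,
    sharing no microarchitectural state with the main processor."""

    def __init__(self):
        self._key = os.urandom(32)  # never leaves the co-processor

    def sign(self, report: bytes) -> bytes:
        return hmac.new(self._key, report, hashlib.sha256).digest()

    def verify(self, report: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(report), sig)

def attest(coproc: SecureCoprocessor, enclave_code: bytes,
           nonce: bytes) -> tuple[bytes, bytes]:
    # The (seL4-isolated) TEE measures the enclave; only the digest crosses
    # to the co-processor, which returns the signed attestation report.
    report = hashlib.sha256(nonce + enclave_code).digest()
    return report, coproc.sign(report)

coproc = SecureCoprocessor()
nonce = os.urandom(16)
report, sig = attest(coproc, b"trusted enclave binary", nonce)
assert coproc.verify(report, sig)
assert not coproc.verify(hashlib.sha256(nonce + b"tampered").digest(), sig)
```

Even if an attacker reads everything on the main processor, they learn the measurement but not the key, so they still cannot forge a report for tampered code, which is the "integrity despite lost confidentiality" property the work targets.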
27

Conscience and Attestation : The Methodological Role of the “Call of Conscience” (Gewissensruf) in Heidegger’s Being and Time

Kasowski, Gregor Bartolomeus 10 1900 (has links)
Thesis completed under joint supervision (cotutelle) with the Université de Paris IV-La Sorbonne. / This study aims to exhibit the methodological role that Martin Heidegger assigns to conscience (Gewissen) in Being and Time and to reveal the implications of his interpretation of the "call of conscience" as the means of producing the attestation (Bezeugung) of authentic existence as a possibility of Being-in-the-world (or Dasein). We begin by seeking to understand how Heidegger's notion of conscience evolved prior to the 1927 publication of Being and Time and to identify the sources which contributed to his interpretation of conscience as the "call of care." Our historical analysis notably reveals that Heidegger never once describes conscience as a "call" before reading Das Gewissen (1925) by Hendrik G. Stoker, a young South African philosopher who studied under Max Scheler's direction at the University of Cologne. 
We specifically examine how Stoker's phenomenological study—which describes conscience as the "call-of-duty" issued to each human being by the divine "spark" (synteresis) placed in his or her soul by God—contributed to shaping Heidegger's account of the "existential call." Focusing on the methodological role of conscience in Being and Time, we analyze Heidegger's major work in light of his early lectures on phenomenology at Freiburg and Marburg. This approach confirms the relation between conscience in Being and Time and the concept of "formal indication" that Heidegger placed at the heart of his evolving "method" of phenomenological investigation. While many commentators have argued that Heidegger's "call of conscience" is solipsistic and impossible to experience, we propose a way of reconsidering this apparent impasse by examining what Being and Time itself "formally indicates" with regard to conscience. We show that Heidegger's conscience points to a phenomenon of existential "testimony" which is radically different from the traditional notion of conscientia. Guided by Heidegger's "formal indication" of conscience, we "destructively" review the history of the German word Gewissen and reveal its original meaning to be "testimonium" not "conscientia." In recognizing that Gewissen originally meant "attestation," we show how Heidegger's existential phenomenon of conscience can be understood as Dasein's experience of hearing the "silent testimony" of the martyr.
28

Radium: Secure Policy Engine in Hypervisor

Shah, Tawfiq M. 08 1900 (has links)
The basis of today's security systems is the trust and confidence that the system will behave as expected and is in a known good trusted state. That trust is built from hardware and software elements that generate a chain of trust originating from a trusted, known entity. Leveraging hardware, software, and mandatory access control policy technology is needed to create a trusted measurement environment. Employing a control layer (hypervisor or microkernel) able to enforce a fine-grained access control policy with hypercall granularity across multiple guest virtual domains can ensure that any malicious environment is contained. In my research, I propose the use of Radium's Asynchronous Root of Trust Measurement (ARTM) capability, incorporated with a secure mandatory access control policy engine, to mitigate the limitations of current hardware TPM solutions. By employing ARTM we can leverage asynchronous boot, launch, and use, with the hypervisor proving its state and the integrity of the secure policy. My solution uses the Radium (Race-free on-demand integrity architecture) architecture, which allows a more detailed measurement of applications at run time with greater semantic knowledge of the measured environments. Radium's incorporation of a secure access control policy engine gives it the ability to limit or empower a virtual domain system. It can also enable the creation of a service-oriented model of guest virtual domains that can perform certain operations, such as introspecting other virtual domain systems to determine their integrity or system state and report it to a remote entity.
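The fine-grained, hypercall-granularity access control described above can be sketched as a mandatory policy lookup made by the control layer before letting one guest domain act on another. This is an assumed illustration with invented names, not Radium's actual interface: the policy grants a dedicated inspector domain the right to introspect guests, while guests cannot act on each other.

```python
# Mandatory access control policy, keyed at hypercall granularity:
# (source domain, hypercall, target domain) -> allowed?
POLICY = {
    ("inspector", "introspect", "guest-a"): True,
    ("inspector", "introspect", "guest-b"): True,
    ("guest-a", "introspect", "guest-b"): False,
}

def check_hypercall(src: str, call: str, dst: str) -> bool:
    # Mandatory policy semantics: anything not explicitly allowed is denied,
    # so a compromised guest is contained by default.
    return POLICY.get((src, call, dst), False)

assert check_hypercall("inspector", "introspect", "guest-a")
assert not check_hypercall("guest-a", "introspect", "guest-b")  # contained
assert not check_hypercall("guest-b", "shutdown", "guest-a")    # default-deny
```

The default-deny lookup is what makes the policy "mandatory": a guest cannot widen its own rights, and the service-oriented model falls out of granting specific hypercalls to specific service domains.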
29

Architectural Introspection and Applications

Litty, Lionel 30 August 2010 (has links)
Widespread adoption of virtualization has resulted in an increased interest in Virtual Machine (VM) introspection. To perform useful analysis of the introspected VMs, hypervisors must deal with the semantic gap between the low-level information available to them and the high-level OS abstractions they need. To bridge this gap, systems have proposed making assumptions derived from the operating system source code or symbol information. As a consequence, the resulting systems create a tight coupling between the hypervisor and the operating systems run by the introspected VMs. This coupling is undesirable because any change to the internals of the operating system can render the output of the introspection system meaningless. In particular, malicious software can evade detection by making modifications to the introspected OS that break these assumptions. Instead, in this thesis, we introduce Architectural Introspection, a new introspection approach that does not require information about the internals of the introspected VMs. Our approach restricts itself to leveraging constraints placed on the VM by the hardware and the external environment. To interact with both of these, the VM must use externally specified interfaces that are both stable and not linked with a specific version of an operating system. Therefore, systems that rely on architectural introspection are more versatile and more robust than previous approaches to VM introspection. To illustrate the increased versatility and robustness of architectural introspection, we describe two systems, Patagonix and P2, that can be used to detect rootkits and unpatched software, respectively. We also detail Attestation Contracts, a new approach to attestation that relies on architectural introspection to improve on existing attestation approaches. 
We show that because these systems do not make assumptions about the operating systems used by the introspected VMs, they can be used to monitor both Windows and Linux based VMs. We emphasize that this ability to decouple the hypervisor from the introspected VMs is particularly useful in the emerging cloud computing paradigm, where the virtualization infrastructure and the VMs are managed by different entities. Finally, we show that these approaches can be implemented with low overhead, making them practical for real world deployment.
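The Patagonix-style identification described above can be sketched as follows. This is a hypothetical reduction, not the thesis's implementation: the hypervisor identifies code purely from a hardware-visible event (a page is about to execute) by hashing the page and looking it up in a database of pages from known binaries, consulting no OS-internal symbols or structures.

```python
import hashlib

# Database of hashes of executable pages from known, trusted binaries
# (contents here are illustrative stand-ins for real page bytes).
KNOWN_PAGES = {
    hashlib.sha256(b"page of /bin/login v2").hexdigest(): "login",
    hashlib.sha256(b"page of sshd 8.9").hexdigest(): "sshd",
}

def on_execute_fault(page_bytes: bytes) -> str:
    # Conceptually triggered by the MMU: the hypervisor marks pages
    # non-executable, so the first execution of each page traps here.
    # The page is identified by content alone, OS-agnostically.
    digest = hashlib.sha256(page_bytes).hexdigest()
    return KNOWN_PAGES.get(digest, "UNKNOWN-CODE")

assert on_execute_fault(b"page of sshd 8.9") == "sshd"
assert on_execute_fault(b"rootkit payload") == "UNKNOWN-CODE"
```

Because the lookup depends only on page contents and an architectural trap, the same mechanism works unchanged for Windows and Linux guests, and a rootkit cannot evade it by altering OS data structures, since those are never consulted.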
