41

Context-Aware Malicious Code Detection

Gu, Boxuan 19 December 2012 (has links)
No description available.
42

A Comprehensive and Comparative Examination of Healthcare Data Breaches: Assessing Security, Privacy, and Performance

Al Kinoon, Mohammed 01 January 2024 (has links) (PDF)
The healthcare sector is pivotal, offering life-saving services and enhancing well-being and community quality of life, especially with the transition from paper-based records to digital electronic health records (EHRs). While improving efficiency and patient safety, this digital shift has also made healthcare a prime target for cybercriminals. The sector's sensitive data, including personal identification information, treatment records, and Social Security numbers, is valuable for illegal financial gain. The resulting data breaches, exacerbated by interconnected systems, cyber threats, and insider vulnerabilities, present ongoing and complex challenges. In this dissertation, we undertake a multi-faceted examination of these challenges. We conduct a detailed analysis of healthcare data breaches using the VERIS (Vocabulary for Event Recording and Incident Sharing) dataset, tracing breach trends, investigating attack vectors, and identifying patterns to inform effective mitigation strategies. We perform a spatiotemporal analysis of the VERIS and Office for Civil Rights (OCR) datasets, exploring the geographical and temporal distribution of breaches and focusing on the types of targeted assets to decipher attackers' motives. Additionally, we analyze hospitals' online presence, focusing on their security and performance features: by comparing government, non-profit, and private hospitals in the U.S., we examine their security practices, content, and domain attributes to highlight differences and similarities in the digital profiles of these hospital types. Finally, we broaden our scope with a comparative, sector-based study of data breaches across several critical sectors, providing a contextual understanding of the healthcare sector's unique vulnerabilities relative to other sectors.
Overall, this dissertation contributes fundamental insights into healthcare data breaches and hospitals' digital presence, and underscores the urgent need for a better understanding and implementation of robust security measures in this vitally important sector, striving for a balance between technological advancement and data security.
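The breach-trend analysis described above can be pictured with a minimal sketch. This runs on synthetic records shaped only loosely like VERIS incident entries; the field names (`year`, `action`, `asset`) are illustrative simplifications, not the actual VERIS schema.

```python
from collections import Counter

# Synthetic incidents, loosely modeled on VERIS-style records.
# Field names are illustrative, not the real VERIS schema.
incidents = [
    {"year": 2021, "action": "hacking", "asset": "database"},
    {"year": 2021, "action": "error", "asset": "documents"},
    {"year": 2022, "action": "hacking", "asset": "web application"},
    {"year": 2022, "action": "misuse", "asset": "database"},
    {"year": 2022, "action": "hacking", "asset": "database"},
]

def breach_trends(records):
    """Count incidents per year and per attack vector."""
    by_year = Counter(r["year"] for r in records)
    by_action = Counter(r["action"] for r in records)
    return by_year, by_action

by_year, by_action = breach_trends(incidents)
print(by_year)    # incidents per year
print(by_action)  # most common attack vectors
```

On real data the same two counters, applied to parsed VERIS JSON, already surface the temporal trends and dominant attack vectors the dissertation examines at scale.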
43

Using Trust Ontologies and Agents to Construct Web Services with Semantic Communication / 利用可信度本體論與代理者程式以建構具有語意溝通的資訊網服務

楊銘煇, Yang, Min-huei Unknown Date (has links)
We use ontology technology from the Semantic Web to solve the agent communication problem on the open Internet. The idea is to build a trust ontology, among others, for a multi-agent system so that agents can communicate at the semantic level and establish mutual trust. In this research, we use an expressive ontology language, DAML+OIL, to explicitly describe a variety of digital certificates, security-related vocabulary, and the relationships among agents for agent trust verification. Finally, we realize a resource and trust control mechanism for Web Services through agent authentication and authorization communication protocols.
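DAML+OIL itself has long been superseded (by OWL), so the following is only an illustrative sketch of the underlying idea: agents assert trust facts as subject-predicate-object triples and verify each other by chaining them. All names here are invented for the example.

```python
# A toy knowledge base of trust assertions as (subject, predicate, object)
# triples. Entity and predicate names are illustrative, not from any
# real ontology.
triples = {
    ("AgentA", "holdsCertificate", "CertCA1"),
    ("CertCA1", "issuedBy", "TrustedCA"),
    ("TrustedCA", "type", "TrustedAuthority"),
}

def is_trusted(agent: str, kb: set) -> bool:
    """Trusted iff the agent holds a certificate issued by a trusted authority."""
    for s, p, o in kb:
        if s == agent and p == "holdsCertificate":
            cert = o
            for s2, p2, o2 in kb:
                if (s2 == cert and p2 == "issuedBy"
                        and (o2, "type", "TrustedAuthority") in kb):
                    return True
    return False

print(is_trusted("AgentA", triples))  # True: certificate chains to a trusted CA
print(is_trusted("AgentB", triples))  # False: no certificate assertion at all
```

An ontology language adds what this sketch lacks: shared, machine-checkable definitions of the vocabulary (what a certificate *is*, which predicates imply which), so that independently written agents can interpret each other's assertions consistently.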
44

Characterizing the Third-Party Authentication Landscape: A Longitudinal Study of how Identity Providers are Used in Modern Websites / Longitudinella mätningar av användandet av tredjepartsautentisering på moderna hemsidor

Josefsson Ågren, Fredrik, Järpehult, Oscar January 2021 (has links)
Third-party authentication services are becoming more common since they ease the login procedure: users need not create a new account for every website that requires authentication. Even so, users still have to be conscious of what data is shared between the identity provider (IDP) and the relying party (RP). This thesis presents a tool for collecting data about third-party authentication that outperforms previous tools in accuracy, precision, and recall. The tool was used to collect information about third-party authentication on a set of websites. The collected data revealed that the third-party login services offered by Facebook and Google are the most common, while Twitter's login service is significantly less common. Twitter's login service shares the most user data with the RPs and often grants the RPs permission to perform write actions on the user's Twitter account. In addition to our large-scale automatic data collection, three manual data collections were performed and compared with earlier manual collections spanning a nine-year period. The longitudinal comparison showed that the login services offered by Facebook and Google have been dominant throughout that period. It is clear that less information about users is shared today than in earlier years for Apple, Facebook, and Google; Twitter is the only IDP that has not changed its permission policies, which could be the reason why use of the Twitter login service on websites has decreased. The results presented in this thesis help provide a better understanding of what personal information is exchanged by IDPs, which can guide users to make well-informed decisions on the web.
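The core detection idea behind such a measurement tool can be sketched simply: flag a page as using a third-party IDP when its markup references a known OAuth/OpenID Connect authorization endpoint. This is a hedged simplification of what any real crawler (including the one in the thesis) does; the endpoint patterns and function names below are our own assumptions.

```python
import re

# Known IDP authorization-endpoint patterns. Illustrative and incomplete;
# real measurement tools maintain far richer fingerprints.
KNOWN_IDP_ENDPOINTS = {
    "Google": r"accounts\.google\.com/o/oauth2",
    "Facebook": r"facebook\.com/(v[\d.]+/)?dialog/oauth",
    "Twitter": r"(api\.)?twitter\.com/oauth",
    "Apple": r"appleid\.apple\.com/auth/authorize",
}

def detect_idps(page_html: str) -> list[str]:
    """Return the identity providers referenced by a page's markup."""
    return [name for name, pattern in KNOWN_IDP_ENDPOINTS.items()
            if re.search(pattern, page_html)]

html = '<a href="https://accounts.google.com/o/oauth2/v2/auth?client_id=x">Log in</a>'
print(detect_idps(html))  # ['Google']
```

Static markup scanning alone misses logins injected dynamically by script, which is one reason instrumented-browser crawlers achieve the better accuracy, precision, and recall reported above.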
45

On the security of authentication protocols on the web / La sécurité des protocoles d’authentification sur le Web

Delignat-Lavaud, Antoine 14 March 2016 (has links)
As ever more private user data gets stored on the Web, ensuring proper protection of this data (in particular when it transits through untrusted networks, or when it is accessed by the user from her browser) becomes increasingly critical.
However, in order to formally prove that, for instance, email from GMail can only be accessed by knowing the user’s password, assuming some reasonable set of assumptions about what an attacker cannot do (e.g., he cannot break AES encryption), one must precisely understand the security properties of many complex protocols and standards (including DNS, TLS, X.509, HTTP, HTML, JavaScript), and more importantly, the composite security goals of the complete Web stack. In addition to this compositional security challenge, one must account for the powerful additional attacker capabilities that are specific to the Web, beyond the usual tampering of network messages. For instance, a user may browse a malicious page while keeping an active GMail session in a tab; this page is allowed to trigger arbitrary, implicitly authenticated requests to GMail using JavaScript (even though the isolation policy of the browser may prevent it from reading the response). An attacker may also inject himself into an honest page (for instance, as a malicious advertising script, or by exploiting a data sanitization flaw), get the user to click bad links, or try to impersonate other pages. Besides the attacker, the protocols and applications are themselves far more complex than typical examples from the protocol analysis literature. Logging into GMail already requires multiple TLS sessions and HTTP requests between (at least) three principals, representing dozens of atomic messages. Hence, ad hoc models and hand-written proofs do not scale to the complexity of Web protocols, mandating the use of advanced verification automation and modeling tools. Lastly, even assuming that the design of GMail is indeed secure against such an attacker, any single programming bug may completely undermine the security of the whole system.
Therefore, in addition to modeling protocols based on their specifications, it is necessary to evaluate implementations in order to achieve practical security. The goal of this thesis is to develop new tools and methods that can serve as the foundation of an extensive compositional Web security analysis framework, one that could be used to implement and formally verify applications against a reasonably extensive model of attacker capabilities on the Web. To this end, we investigate the design of Web protocols at various levels (TLS, HTTP, HTML, JavaScript) and evaluate their composition using a broad range of formal methods, including symbolic protocol models, type systems, model extraction, and type-based program verification. We also analyze current implementations and develop some new verified versions to run tests against. We uncover a broad range of vulnerabilities in protocols and their implementations, and propose countermeasures that we formally verify, some of which have been implemented in browsers and by various websites. For instance, the Triple Handshake attack we discovered required a protocol fix (RFC 7627), and influenced the design of the new version 1.3 of the TLS protocol.
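The Web-attacker capability described above (implicitly authenticated cross-site requests, with reads blocked by the same-origin policy) can be captured in a toy model. Class, method, and domain names below are invented for the illustration; this is a sketch of the attacker model, not of any verification tool from the thesis.

```python
# Toy browser model: cookies are attached to every request to their domain
# (implicit authentication), but the same-origin policy hides the response
# body from cross-origin initiators. All names are illustrative.
class Browser:
    def __init__(self):
        self.cookies = {}  # domain -> session cookie

    def login(self, domain, cookie):
        self.cookies[domain] = cookie

    def fetch(self, initiator_origin, target_domain):
        # The cookie is sent whether or not the initiator is trusted.
        authenticated = target_domain in self.cookies
        body = f"inbox of {target_domain}" if authenticated else "login page"
        # Same-origin policy: only same-origin initiators may read the body.
        readable = initiator_origin == target_domain
        return {"authenticated": authenticated,
                "body": body if readable else None}

b = Browser()
b.login("gmail.example", "session=abc")
# A malicious tab triggers an authenticated request it cannot read:
r = b.fetch("evil.example", "gmail.example")
print(r["authenticated"], r["body"])  # True None
```

Even this crude model shows why "the attacker only tampers with the network" is too weak for the Web: the authenticated-but-unreadable request is exactly the side effect that cross-site request forgery exploits, and a faithful analysis framework has to account for it.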
