  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

Longitudinal analysis of the certificate chains of big tech company domains / Longitudinell analys av certifikatkedjor till domäner tillhörande stora teknikföretag

Klasson, Sebastian, Lindström, Nina January 2021 (has links)
The internet is one of the most widely used mediums for communication in modern society and it has become an everyday necessity for many. It is therefore of utmost importance that it remains as secure as possible. SSL and TLS are the backbones of internet security and an integral part of these technologies are the certificates used. Certificate authorities (CAs) can issue certificates that validate that domains are who they claim to be. If a user trusts a CA they can in turn also trust domains that have been validated by them. CAs can in turn trust other CAs and this, in turn, creates a chain of trust called a certificate chain. In this thesis, the structure of these certificate chains is analysed and a longitudinal dataset is created. The analysis looks at how the certificate chains have changed over time and puts extra focus on the domains of big tech companies. The dataset created can also be used for further analysis in the future and will be a useful tool in the examination of historical certificate chains. Our findings show that the certificate chains of the domains studied do change over time; both their structure and their lengths vary noticeably. Most of the observed domains show a decrease in average chain length between the years 2013 and 2020, and the structure of the chains varies significantly over the years.
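The longitudinal aggregation the thesis describes, an average chain length per year, can be sketched in a few lines. The records below are illustrative placeholders, not the actual measurement data:

```python
# Hypothetical sketch of per-year chain-length aggregation; the
# (year, chain_length) observations are made up for illustration.
from collections import defaultdict

def average_chain_length_by_year(records):
    """records: iterable of (year, chain_length) observations."""
    totals = defaultdict(lambda: [0, 0])  # year -> [sum, count]
    for year, length in records:
        totals[year][0] += length
        totals[year][1] += 1
    return {year: s / n for year, (s, n) in totals.items()}

observations = [(2013, 3), (2013, 4), (2020, 2), (2020, 3)]
print(average_chain_length_by_year(observations))  # {2013: 3.5, 2020: 2.5}
```

A real pipeline would derive the observations from parsed certificate data rather than literals, but the aggregation step is the same.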
362

Quantifying Trust and Reputation for Defense against Adversaries in Multi-Channel Dynamic Spectrum Access Networks

Bhattacharjee, Shameek 01 January 2015 (has links)
Dynamic spectrum access, enabled by cognitive radio networks, is envisioned to drive the next generation of wireless networks that can increase spectrum utility by opportunistically accessing unused spectrum. Due to the policy constraint that there could be no interference to the primary (licensed) users, secondary cognitive radios have to continuously sense for primary transmissions. Typically, sensing reports from multiple cognitive radios are fused, as stand-alone observations are prone to errors due to wireless channel characteristics. Such dependence on cooperative spectrum sensing is vulnerable to attacks such as Secondary Spectrum Data Falsification (SSDF) attacks, where multiple malicious or selfish radios falsify the spectrum reports. Hence, there is a need to quantify the trustworthiness of radios that share spectrum sensing reports and devise malicious node identification and robust fusion schemes that would lead to correct inference about spectrum usage. In this work, we propose an anomaly monitoring technique that can effectively capture anomalies in the spectrum sensing reports shared by individual cognitive radios during cooperative spectrum sensing in a multi-channel distributed network. Such anomalies are used as evidence to compute the trustworthiness of a radio by its neighbours. The proposed anomaly monitoring technique works for any density of malicious nodes and for any physical environment. We propose an optimistic trust heuristic for a system with a normal risk attitude and show that it can be approximated as a beta distribution. For a more conservative system, we propose a multinomial Dirichlet distribution based conservative trust framework, where Jøsang's Belief model is used to resolve any uncertainty in information that might arise during anomaly monitoring. Using a machine learning approach, we identify malicious nodes with a high degree of certainty regardless of their aggressiveness and variations introduced by the pathloss environment.
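The abstract states that the optimistic trust heuristic can be approximated as a beta distribution. A minimal sketch of such a beta-based trust score, taking counts of consistent versus anomalous sensing reports and returning the posterior mean, could look like this (the counts and the exact parameterization are illustrative assumptions, not values from the dissertation):

```python
# Hedged sketch: trust as the posterior mean of a Beta distribution
# with a uniform Beta(1, 1) prior over report counts.
def beta_trust(consistent, anomalous):
    """Posterior mean of Beta(consistent + 1, anomalous + 1)."""
    alpha = consistent + 1
    beta = anomalous + 1
    return alpha / (alpha + beta)

print(beta_trust(18, 2))   # mostly consistent reports -> high trust
print(beta_trust(3, 17))   # mostly anomalous reports  -> low trust
```

The prior keeps trust away from the extremes when evidence is scarce, which matches the "normal risk attitude" framing in the abstract.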
We also propose extensions to the anomaly monitoring technique that facilitate learning about strategies employed by malicious nodes and also utilize the misleading information they provide. We also devise strategies to defend against a collaborative SSDF attack that is launched by a coalition of selfish nodes. Since defense against such collaborative attacks is difficult with popularly used voting-based inference models or node-centric isolation techniques, we propose a channel-centric Bayesian inference approach that indicates how much the collective decision on a channel's occupancy inference can be trusted. Based on the measured observations over time, we estimate the parameters of the hypotheses of anomalous and non-anomalous events using multinomial Bayesian inference. We quantitatively define the trustworthiness of a channel inference as the difference between the posterior beliefs associated with anomalous and non-anomalous events. The posterior beliefs are updated based on a weighted average of the prior information on the belief itself and the recently observed data. Subsequently, we propose robust fusion models which utilize the trusts of the nodes to improve the accuracy of the cooperative spectrum sensing decisions. In particular, we propose three fusion models: (i) optimistic trust based fusion, (ii) conservative trust based fusion, and (iii) inversion based fusion. The former two approaches exclude untrustworthy sensing reports from fusion, while the last approach utilizes misleading information. All schemes are analyzed under various attack strategies. We propose an asymmetric weighted moving average based trust management scheme that quickly identifies on-off SSDF attacks and prevents quick trust redemption when such nodes revert back to temporary honest behavior. We also provide insights on what attack strategies are more effective from the adversaries' perspective.
Through extensive simulation experiments we show that the trust models are effective in identifying malicious nodes with a high degree of certainty under a variety of network and radio conditions. We show high true negative detection rates even when multiple malicious nodes launch collaborative attacks, which is an improvement over existing voting-based exclusion and entropy divergence techniques. We also show that we are able to improve the accuracy of fusion decisions compared to other popular fusion techniques. Trust based fusion schemes show worst-case decision error rates of 5%, and inversion based fusion shows 4%, as opposed to majority voting schemes that have an 18% error rate. We also show that the proposed channel-centric Bayesian inference based trust model is able to distinguish between attacked and non-attacked channels for both static and dynamic collaborative attacks. We are also able to show that attacked channels have significantly lower trust values than channels that are not, a metric that can be used by nodes to rank the quality of inference on channels.
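The asymmetric weighted moving average idea described above, penalizing trust quickly on bad evidence while redeeming it slowly, can be sketched as a one-line update rule. The weights below are assumptions chosen for illustration, not values from the dissertation:

```python
# Illustrative asymmetric trust update: trust drops fast on anomalous
# evidence and recovers slowly, countering on-off SSDF attackers.
def update_trust(trust, evidence, w_down=0.6, w_up=0.1):
    """evidence in [0, 1]: 1 = consistent report, 0 = anomalous report."""
    w = w_up if evidence >= trust else w_down  # penalize faster than redeem
    return (1 - w) * trust + w * evidence

t = 0.9
for e in [0, 0, 1, 1, 1, 1]:  # brief attack, then "honest" behavior again
    t = update_trust(t, e)
print(round(t, 3))  # trust recovers only slowly after the attack
```

After two anomalous reports the trust collapses, and four consistent reports are not enough to restore it, which is exactly the slow-redemption property the scheme targets.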
363

"(Un-)making" data to "make" security: A discursive and visual inquiry into the production, circulation and use of data across the pan-European information infrastructure

Ugolini, Vanessa 01 March 2023 (has links)
To counter hybrid threats – for example, international terrorism, transnational organised crime and (cyber-)attacks – security and intelligence communities increasingly gather, process and exchange vast amounts of data on presumably suspect individuals. This trend has been enabled by recent developments in surveillance capacities related to Information and Communications Technologies (ICTs). As a result, cross-border data transfers have become not only an element of international trade but also an important component of law enforcement strategies. Nevertheless, the exchange of data for policing purposes is not always smooth. Rather, frictions emerge therein, along with technical and legal issues relating to the combination of data from different information systems and under different formats. This study advances the concept of the data lifecycle in relation to the practices, such as collection, entry, processing, storing, and analysis, that direct data in specific ways to create multiple "cycles" of uses. Through the analytical lens of the lifecycle, I aim to examine specifically how data are repurposed, not only by digital technologies, but also by provisions regulating access, storage and use of information for criminal matters. The core task consists in identifying the socio-political, legal and technical conditions of possibility that allow for the exchange of data at the pan-European level. By bringing together multiple conceptual and methodological subfields, I shed light on the politicality of EU data infrastructures that appear physically remote or barely visible, yet have become so mundane that people no longer notice them. Investigating the data lifecycle as a network of practices generates findings that are useful for understanding how security is enacted through the collection and use of different forms of data and hence for interpreting the evolving landscape of data-driven security governance in the EU.
364

The Shifting Web of Trust : Exploring the Transformative Journey of Certificate Chains in Prominent Domains / Förtroendets Föränderliga Väv : Att Utforska den Transformativa Resan av Certifikatkedjor av Populära Domäner

Döberl, Marcus, Freiherr von Wangenheim, York January 2023 (has links)
The security and integrity of TLS certificates are essential for ensuring secure transmission over the internet and protecting millions of people from man-in-the-middle attacks. Certificate Authorities (CAs) play a crucial role in issuing and managing these certificates. This bachelor thesis presents a longitudinal analysis of certificate chains for popular domains, examining their evolution over time and across different categories. Using publicly available certificate data from sources such as crt.sh and censys.io, we created a longitudinal dataset of certificate chains for domains from the Top 1-M list of Tranco. We categorized the certificates based on their type and their particular service categories. We analyzed a selected set of domains over time and identified the patterns and trends that emerged in their certificate chains. Our analysis revealed several noteworthy trends, including an increase in the use of new CAs and a shift in which types of certificates are used. We also found a trend toward shorter certificate chains and fewer paths from domain to root certificate. This implies a certificate process that has become more streamlined and simplified over time. Our findings have implications for the broader cybersecurity community and demonstrate the importance of ongoing monitoring and analysis of certificate chains for popular domains.
365

Using machine learning to visualize and analyze attack graphs

Cottineau, Antoine January 2021 (has links)
In recent years, the security of many corporate networks has been compromised by hackers who managed to obtain important information by leveraging the vulnerabilities of those networks. Such attacks can have a strong economic impact and affect the image of the entity whose network has been attacked. Various tools are used by network security analysts to study and improve the security of networks. Attack graphs are among these tools. Attack graphs are graphs that show all the possible chains of exploits an attacker could follow to access an important host on a network. While attack graphs are useful for network security, they may become hard to read because of their size when networks become larger. Previous work tried to deal with this issue by applying simplification algorithms on graphs. While these algorithms can help improve the visualization of attack graphs, we believe that further improvements can be made, especially by relying on Machine Learning (ML) algorithms. Thus, the goal of this thesis is to investigate how ML can help improve the visualization of attack graphs and the security analysis of networks based on their attack graph. To reach this goal, we focus on two main areas. First, we use graph clustering, which is the process of creating a partition of the nodes based on their position in the graph. This improves visualization by allowing network analysts to focus on a set of related nodes instead of visualizing the whole graph. We also design several metrics for security analysis based on attack graphs. We show that the ML algorithms are as effective as non-ML algorithms in both areas. The ML clustering algorithms even produce better clusters than non-ML algorithms with respect to the coverage metric, at the cost of computation time. Moreover, the ML security evaluation algorithms show faster computation times on dense attack graphs than the non-ML baseline, while producing similar results.
Finally, a user interface that permits the application of the methods presented in the thesis is also developed, with the goal of making the use of such methods easier for network analysts.
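One plausible reading of the coverage metric mentioned in this abstract is the standard graph-clustering definition: the fraction of edges that fall inside clusters rather than between them. A hedged sketch of that definition on a toy attack graph (the graph, clusters, and metric choice are illustrative assumptions):

```python
# Coverage of a clustering: fraction of edges whose endpoints share a
# cluster. Higher coverage means the clustering cuts fewer attack paths.
def edge_coverage(edges, cluster_of):
    """edges: (u, v) pairs; cluster_of: node -> cluster id."""
    intra = sum(1 for u, v in edges if cluster_of[u] == cluster_of[v])
    return intra / len(edges)

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]
clusters = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1}
print(edge_coverage(edges, clusters))  # 0.75: 3 of 4 edges stay in-cluster
```

A network analyst could compare candidate clusterings of the same attack graph by this score before choosing one to visualize.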
366

Improved performance high speed network intrusion detection systems (NIDS). A high speed NIDS architectures to address limitations of Packet Loss and Low Detection Rate by adoption of Dynamic Cluster Architecture and Traffic Anomaly Filtration (IADF).

Akhlaq, Monis January 2011 (has links)
Intrusion Detection Systems (IDS) are considered a vital component in network security architecture. The system allows the administrator to detect unauthorized use of, or attack upon, a computer, network or telecommunication infrastructure. There is no second thought on the necessity of these systems; however, their performance remains a critical question. This research has focussed on designing a high performance Network Intrusion Detection System (NIDS) model. The work begins with the evaluation of Snort, an open source NIDS considered a de-facto IDS standard. The motive behind the evaluation strategy is to analyze the performance of Snort and ascertain the causes of its limited performance. Design and implementation of high performance techniques are considered the final objective of this research. Snort has been evaluated on a highly sophisticated test bench by employing evasive and avoidance strategies to simulate real-life normal and attack-like traffic. The test methodology is based on the concept of stressing the system and degrading its performance in terms of its packet handling capacity. This has been achieved by normal traffic generation; fuzzing; traffic saturation; parallel dissimilar attacks; and manipulation of background traffic, e.g. fragmentation, packet sequence disturbance and illegal packet insertion. The evaluation phase has led us to two high performance designs: first, a distributed hardware architecture using cluster-based adoption, and second, a cascaded phenomenon of anomaly-based filtration and signature-based detection. The first high performance mechanism is based on Dynamic Cluster adoption using refined policy routing and Comparator Logic. The design is a two-tier mechanism where the front end of the cluster is the load-balancer, which distributes traffic on pre-defined policy routing, ensuring maximum utilization of cluster resources.
The traffic load sharing mechanism reduces packet drop by exchanging state information between the load-balancer and cluster nodes and implementing switchovers between nodes in case the traffic exceeds a pre-defined threshold limit. Finally, the recovery evaluation concept using Comparator Logic also enhances overall efficiency by recovering data lost in switchovers; the retrieved data is then analyzed by the recovery NIDS to identify any leftover threats. Intelligent Anomaly Detection Filtration (IADF), using a cascaded architecture of anomaly-based filtration and a signature-based detection process, is the second high performance design. The IADF design is used to preserve the resources of the NIDS by eliminating a large portion of the traffic on well-defined logics. In addition, the filtration concept augments the detection process by eliminating the part of malicious traffic which could otherwise go undetected by most signature-based mechanisms. We have evaluated the mechanism's ability to detect Denial of Service (DoS) and Probe attempts by analyzing its performance on the Defence Advanced Research Projects Agency (DARPA) dataset. The concept has also been supported by time-based normalized sampling mechanisms to incorporate normal traffic variations and reduce false alarms. Finally, we have observed that the IADF augments the overall detection process by reducing false alarms, increasing the detection rate and incurring less data loss. / National University of Sciences & Technology (NUST), Pakistan
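The cascaded IADF idea, an anomaly-based filter that discards clearly out-of-profile traffic before the more expensive signature stage, can be illustrated with a simple rate-based pre-filter. All thresholds, field names, and the specific filtering logic below are assumptions for illustration, not the thesis design:

```python
# Hedged sketch of anomaly-based pre-filtration: flag sources whose packet
# rate in a time window exceeds a threshold, and pass only the remainder
# on to the (hypothetical) signature-based stage.
def anomaly_filter(packets, max_rate, window):
    """Flag source IPs whose packets-per-window rate exceeds max_rate."""
    counts = {}
    for pkt in packets:
        counts[pkt["src"]] = counts.get(pkt["src"], 0) + 1
    flagged = {src for src, n in counts.items() if n / window > max_rate}
    passed = [p for p in packets if p["src"] not in flagged]
    return passed, flagged

pkts = [{"src": "10.0.0.1"}] * 50 + [{"src": "10.0.0.2"}] * 2
passed, flagged = anomaly_filter(pkts, max_rate=10, window=1)
print(len(passed), flagged)  # 2 {'10.0.0.1'}
```

Only the two low-rate packets reach the signature stage, which is the resource-preservation effect the abstract describes.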
367

TRACE DATA-DRIVEN DEFENSE AGAINST CYBER AND CYBER-PHYSICAL ATTACKS

Abdulellah Abdulaziz M Alsaheel (17040543) 11 October 2023 (has links)
In the contemporary digital era, Advanced Persistent Threat (APT) attacks are evolving, becoming increasingly sophisticated, and now perilously targeting critical cyber-physical systems, notably Industrial Control Systems (ICS). The intersection of digital and physical realms in these systems enables APT attacks on ICSs to potentially inflict physical damage, disrupt critical infrastructure, and jeopardize human safety, thereby posing severe consequences for our interconnected world. Provenance tracing techniques are essential for investigating these attacks, yet existing APT attack forensics approaches grapple with scalability and maintainability issues. These approaches often hinge on system- or application-level logging, incurring high space and run-time overheads and potentially encountering difficulties in accessing source code. Their dependency on heuristics and manual rules necessitates perpetual updates by domain-knowledge experts to counteract newly developed attacks. Additionally, while there have been efforts to verify the safety of Programming Logic Controller (PLC) code as adversaries increasingly target industrial environments, these works either exclusively consider PLC program code without connecting to the underlying physical process or only address time-related physical safety issues, neglecting other vital physical features.

This dissertation introduces two novel frameworks, ATLAS and ARCHPLC, to address the aforementioned challenges, offering a synergistic approach to fortifying cybersecurity in the face of evolving APT and ICS threats. ATLAS, an effective and efficient multi-host attack investigation framework, constructs end-to-end APT attack stories from audit logs by combining causality analysis, Natural Language Processing (NLP), and machine learning. Identifying key attack patterns, ATLAS proficiently analyzes and pinpoints attack events, minimizing alert fatigue for cyber analysts. During evaluations involving ten real-world APT attacks executed in a realistic virtual environment, ATLAS demonstrated an ability to recover attack steps and construct attack stories with an average precision of 91.06%, a recall of 97.29%, and an F1-score of 93.76%, providing a robust framework for understanding and mitigating cyber threats.

Concurrently, ARCHPLC, an advanced approach for enhancing ICS security, combines static analysis of PLC code and data mining from ICS data traces to derive accurate invariants, providing a comprehensive understanding of ICS behavior. ARCHPLC employs physical causality graph analysis techniques to identify cause-effect relationships among plant components (e.g., sensors and actuators), enabling efficient and quantitative discovery of physical causality invariants. Supporting patching and run-time monitoring modes, ARCHPLC inserts derived invariants into PLC code using program synthesis in patching mode and inserts invariants into a dedicated monitoring program for continuous safety checks in run-time monitoring mode. ARCHPLC adeptly detects and mitigates run-time anomalies, providing exceptional protection against cyber-physical attacks with minimal overhead. In evaluations against 11 cyber-physical attacks on a Fischertechnik manufacturing plant and a chemical plant simulator, ARCHPLC protected the plants without any false positives or negatives, with an average run-time overhead of 14.31% in patching mode and 0.4% in run-time monitoring mode.

In summary, this dissertation provides invaluable solutions that equip cybersecurity professionals to enhance APT attack investigation, enabling them to identify and comprehend complex attacks with heightened accuracy. Moreover, these solutions significantly bolster the safety and security of ICS infrastructure, effectively protecting critical systems and strengthening defenses against cyber-physical attacks, thereby contributing substantially to the field of cybersecurity.
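The causality analysis that ATLAS builds on can be sketched as a backward trace over audit-log events: starting from an alert entity, collect every entity with a causal path to it. The log below is a made-up example, not data from the dissertation:

```python
# Hedged sketch of backward provenance tracing: audit events are
# (src, dst) edges meaning "src influenced dst"; from an alert node we
# walk edges backwards to recover its causal ancestors.
from collections import deque

def backward_trace(events, alert):
    """Return the set of entities with a causal path to the alert."""
    parents = {}
    for src, dst in events:
        parents.setdefault(dst, set()).add(src)
    seen, queue = set(), deque([alert])
    while queue:
        node = queue.popleft()
        for p in parents.get(node, ()):
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen

log = [("phish.doc", "winword.exe"), ("winword.exe", "payload.exe"),
       ("payload.exe", "cmd.exe"), ("browser.exe", "cache.tmp")]
print(backward_trace(log, "cmd.exe"))  # the phishing chain, not browser.exe
```

Unrelated activity (the browser writing its cache) is excluded because it has no path to the alert, which is how causality analysis narrows an attack story out of a large log.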
368

Securing resource constrained platforms with low-cost solutions.

Arslan Khan (17592498) 11 December 2023 (has links)
This thesis focuses on securing different attack surfaces of embedded systems while meeting the stringent requirements imposed by these systems. Due to the specialized architecture of embedded systems, the security measures should be customized to match the unique requirements of each specific domain. To this end, this thesis identified novel security architectures using techniques such as anomaly detection, program analysis, compartmentalization, etc. This thesis synergizes work at the intersection of programming languages, compilers, computer architecture, operating systems, and embedded systems.
369

Autonomous Cyber Defense for Resilient Cyber-Physical Systems

Zhang, Qisheng 09 January 2024 (has links)
In this dissertation research, we design and analyze resilient cyber-physical systems (CPSs) under high network dynamics, adversarial attacks, and various uncertainties. We focus on three key system attributes to build resilient CPSs by developing a suite of autonomous cyber defense mechanisms. First, we consider network adaptability to achieve the resilience of a CPS. Network adaptability represents the network's ability to maintain its security and connectivity level when faced with incoming attacks. We address this by network topology adaptation, which can contribute to quickly identifying and updating the network topology to confuse attacks by changing attack paths. We leverage deep reinforcement learning (DRL) to develop CPSs using network topology adaptation. Second, we consider the fault-tolerance of a CPS as another attribute to ensure system resilience. We aim to build a resilient CPS under severe resource constraints, adversarial attacks, and various uncertainties. We choose a solar sensor-based smart farm as one example of CPS applications and develop a resource-aware monitoring system for smart farms. We leverage DRL and uncertainty quantification using a belief theory, called Subjective Logic, to optimize critical tradeoffs between system performance and security in contested CPS environments. Lastly, we study system resilience in terms of system recoverability, the system's ability to recover from performance degradation or failure. For this task, we focus on developing an automated intrusion response system (IRS) for CPSs. We aim to design the IRS with effective and efficient responses by reducing the false alarm rate and the defense cost, respectively. Specifically, we build a lightweight IRS for an in-vehicle controller area network (CAN) bus system operating with DRL-based autonomous driving.
/ Doctor of Philosophy
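The Subjective Logic belief theory mentioned in this abstract maps positive and negative observations to a (belief, disbelief, uncertainty) opinion. The standard binomial-opinion formulation, sketched here for illustration with a conventional prior weight of W = 2, may differ in detail from the dissertation's usage:

```python
# Binomial Subjective Logic opinion from r positive and s negative
# observations; belief + disbelief + uncertainty always sums to 1,
# and uncertainty shrinks as evidence accumulates.
def opinion(r, s, W=2):
    total = r + s + W
    belief = r / total
    disbelief = s / total
    uncertainty = W / total
    return belief, disbelief, uncertainty

b, d, u = opinion(8, 2)
print(round(b, 3), round(d, 3), round(u, 3))  # 0.667 0.167 0.167
```

The explicit uncertainty mass is what makes this formulation useful for the performance-versus-security tradeoffs the abstract describes: a defense mechanism can treat "little evidence" differently from "conflicting evidence".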
370

Preserving Security and Privacy: a WiFi Analyzer Application based on Authentication and Tor

Kolonia, Alexandra, Forsberg, Rebecka January 2020 (has links)
Numerous mobile applications have the potential to collect and share user-specific information on top of the essential data handling. This is made possible through poor application design and improper implementation. The lack of security and privacy in an application is of main concern, since the spread of sensitive and personal information can cause both physical and emotional harm if it is shared with unauthorized people. This thesis investigates how to confidentially transfer user information in such a way that the user remains anonymous and untraceable in a mobile application. In order to achieve that, the user first authenticates itself to a third party, which provides the user with certificates or randomly generated tokens. The user can then use these as its communication credentials towards the server, with communication made through the Tor network. Further, when the connection is established, the WiFi details are sent periodically to the server without the user initiating the action. The results show that it is possible to establish a connection both with random tokens and with certificates. The random tokens took less time to generate than the certificates; however, the certificates took less time to verify, which balances out the overall performance of the system. Moreover, the results show that the implementation of Tor is working, since the system is able to hide the real IP address and provide a random IP address instead. However, the communication is slower when Tor is used, which is the cost of achieving anonymity and improving the privacy of the user. Conclusively, this thesis proves that combining proper implementation and good application design improves the security of the application, thereby protecting the users' privacy.
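The random-token flow the thesis compares against certificates can be sketched with Python's standard library: the third party issues a random token, stores only its hash, and the server later verifies presented tokens in constant time. The library calls are real stdlib APIs, but the flow itself is an illustrative reconstruction, not the thesis code:

```python
# Hedged sketch of token issuance and verification. Storing only the
# token's hash means a leaked server database does not leak credentials.
import hashlib
import hmac
import secrets

def issue_token():
    token = secrets.token_urlsafe(32)  # credential handed to the user
    record = hashlib.sha256(token.encode()).hexdigest()  # stored server-side
    return token, record

def verify_token(presented, record):
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(digest, record)  # constant-time comparison

token, record = issue_token()
print(verify_token(token, record))     # True
print(verify_token("forged", record))  # False
```

This also mirrors the performance observation in the abstract: generating a token is a single random draw, while verification is a hash plus comparison, both cheap relative to certificate operations.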
